An update on the data ethics landscape
Last year saw the implementation of the General Data Protection Regulation (GDPR) and an uptick in interest in the data practices of large companies following the Cambridge Analytica scandal. Questions were raised as to whether personal data was being held and used responsibly, and whether users of online services really had the level of transparency required to make meaningful decisions about how their information was used. Whilst Cambridge Analytica was important for raising awareness, the need for ethical data practices had been on the radar of data regulators and other large organisations since before the scandal.
What is data ethics?
What is objectively “ethical” cannot be definitively pinned down. In practice, however, ethical data frameworks treat legal compliance as the minimum standard rather than the optimum. Ethical data practices aim to provide principles by which organisations can hold and share data in ways that reflect broader ethical considerations beyond what the law strictly requires.
Why is data ethics important?
At the most basic level, people want to know that their personal data is being kept in a safe and secure environment, and to have control over what that data is used for. For example, controversy arose when Google decided to bring its acquired subsidiary, DeepMind Health, closer to the rest of the organisation. The concern was that large data sets containing information about 1.6 million NHS patients would be brought under Google’s control, essentially leaving those patients without a say in what that data was used for.
Data ethics is also important where personal data is not concerned. Smart, well-meaning applications sometimes have unintended consequences that are discriminatory or fail to benefit society equally or fairly. A (now classic) example of this is Boston’s Street Bump application, which unintentionally underserved the needs of poorer communities. In 2012, Boston’s officials sought the help of an application to identify areas of road that needed repair. The user would leave the app open on their smartphone while they drove, and it would register bumps in the road. However, more people in affluent areas had access to the technology than in poorer areas of the city, so repairs were skewed towards affluent neighbourhoods, widening the disparity in road quality.
What is the current state of play?
There is a plethora of organisations focussing on the development of artificial intelligence, machine learning and the identification of algorithmic trends in big data sets. There are also many official bodies (and some private ones) that have started developing ethical frameworks for managing these technologies.
Most notably, the Information Commissioner has considered data ethics practices with regard to big data, machine learning and AI. The EU Commission has released a paper on ethics guidelines for trustworthy AI. In addition, standards are being written to help outline what ethical autonomous systems look like. There has also been uptake from large private companies such as IBM and Vodafone, which have put ethical privacy practices in place. Ultimately, this area is still developing rapidly.
The future of data ethics
As well as large institutions such as regulators and government bodies (for example, the EU Commission), it is likely that we will see a number of private companies publish ethical data practices. The trend suggests that organisations are starting to recognise the substantial reputational risk of poor data practices and are becoming proactive in their approach.
At BPE, we have worked with a number of organisations (private and public) that have asked to be supplied with data sharing agreements which are not just legally compliant, but also ethical. In addition, we have been involved in pioneering new legal work on the viability of setting up a legal data trust, and have considered how such a trust could facilitate ethical data practices.