Following the Facebook–Cambridge Analytica data scandal, everyone is a little more conscious of their digital activity and of keeping their data secure. But with the rise of Artificial Intelligence (AI), what other factors may we need to consider?
Big Brother is watching you
Imagine a scenario where you are in the supermarket. Your eyes fix on a box of cereal for a few seconds; moments later, your mobile phone beeps with a special offer for that very product.
This futuristic-sounding scenario may not be too far from becoming a reality, and some important questions need to be asked. Will we have control over this type of monitoring, with the ability to ‘opt out’ as we might unsubscribe from an email list, or to ‘update our privacy settings’ as we might in a Facebook account?
It is easy to see how the unregulated development of AI technologies could lead to breaches of basic human rights. These developments also have the potential to fundamentally change society, with large-scale decisions made by an elite group of technical experts rather than by individuals.
In the article “Safeguarding human rights in the era of artificial intelligence”, Dunja Mijatović discusses some of the key considerations as to how AI may affect our human rights.
One key point discussed is that machines function on the basis of what humans tell them. If a system is fed human biases, its results will inevitably be biased, which could reinforce discrimination and prejudice.
Criminal justice systems around the world are increasingly looking into the opportunities that AI provides, from policing to crime prediction and reducing reoffending. Making important decisions about people’s lives based on algorithms, without questioning the results, could have serious human rights implications.
Facial recognition technology may prove useful in locating suspected terrorists and criminals, but at what cost? Could this be a move towards becoming a police state where our basic rights to privacy and freedom of speech are denied?
A recent report commissioned by the Council of Europe states that “AI is unavoidably based on data processing. Therefore, AI algorithms necessarily have an impact on personal data use and pose questions about the adequacy of the existing data protection regulations in addressing the issues that these new paradigms raise.”
How are AI and Data Processing currently regulated?
The existing regulatory framework applicable to AI and data processing is mainly grounded on the Council of Europe Convention 108 – Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.
The Council of Europe report recommends that data-centric AI development be grounded in the principles of Convention 108, as the foundation for the development of a digital society. The key elements of this approach are:
Proportionality – the key focus should be on individuals’ rights and freedoms, as opposed to technological or market factors. Individuals must have the right to ‘opt out’ of automated AI systems, and legislation should be able to limit AI applications to safeguard society.
Responsibility – developers and decision-makers need to act in a socially responsible manner, with the creation of specific bodies to oversee their actions.
Risk management – the potential negative consequences of AI applications need to be assessed, and measures adopted to mitigate these risks.
Participation – participatory models in risk assessment are essential to give voice to citizens. At the same time, citizens’ participation must not diminish the accountability of decision-makers.
Transparency – transparency enables effective participation and the assessment of the consequences of AI applications.
The rise of Artificial Intelligence (AI) carries serious human rights implications. While regulations covering AI and data processing do exist, such as Convention 108 and the General Data Protection Regulation (GDPR), is this really enough to protect our basic human rights?