The development of Internet of Things (IoT) and artificial intelligence (AI) technologies raises the question of whether such systems should also act ethically.
On 26 October 2016, I attended the IoT Solutions World Congress, one of the largest events in the world on the Internet of Things, and I had the pleasure of being part of a panel on “Ethical Uses of Data”, together with Edy Liongosari from Accenture, Prith Banerjee from Schneider Electric, Derek O’Halloran from the World Economic Forum, Sven Schrecker from Intel and David Blaszkowsky from the Financial Semantics Collaborative.
The discussion was very interesting, on a topic that is quite uncommon at this type of event, and below are my top 3 takeaways from the debate:
1. Individuals will care about their privacy
In a few years, we will own almost nothing. Our car, our house and whatever we use during the course of the day will become “as a service”. In this context, the sole asset that will still belong to individuals is their “digital identity”.
As a consequence, it is reasonable to expect that people will care increasingly about their privacy rights. To prevent privacy compliance from becoming an unbearable cost for businesses, however, companies themselves shall “educate” their customers. This is to ensure that compliance becomes a competitive advantage rather than a disadvantage. And it is interesting that this is happening just after the approval of the EU Privacy Regulation, which will lead to a major change in the approach to privacy compliance, also because of the applicable sanctions.
2. Compliance will no longer be enough for IoT and AI
In a previous blog post, I called for the standardisation of security measures to be implemented in IoT and AI technologies, as well as in any other technology, in order to create the higher level of certainty that is necessary to foster investment.
I am still a strong supporter of security standards for new technologies such as those of the Internet of Things. However, as raised during the debate at the DLA Piper European Tech Summit, the growth of IoT technologies requires the establishment of a relationship of trust between suppliers and their customers.
No software can be 100% secure, but the ability to prove the company’s diligence in doing whatever was necessary to comply with applicable obligations, together with the ability to react promptly to a potential data breach, is absolutely crucial to acquiring customers and avoiding losing them quickly when issues arise.
The certification of compliance will become a “must-have”, especially once the new fines provided by the EU General Data Protection Regulation come into force. However, this is just a solution aimed at avoiding potential regulatory sanctions; it will not be sufficient to ensure that customers really trust a company.
3. Machines need to act ethically, not just reasonably
During the debate, there was a long discussion concerning artificial intelligence systems such as those of self-driving cars, and whether they should be trained to behave ethically as well. The most commonly used example is that of a self-driving car that hits a bus full of children rather than swerving off the road, because it concludes that statistically the best course of action is to remain on the road.
In the scenario above, can the driver of the vehicle potentially bring a claim against the manufacturer of the car? This is especially likely if the latter cannot adequately prove why that specific behaviour was considered the most appropriate. And artificial intelligence systems might be so complex as to prevent full tracking of the reasoning followed by the machine.
This would put manufacturers of AI systems in a quite weak position, as they might end up with no defence in potential litigation.
The alternative would be to:
- design AI systems so that they are able to track and justify their conduct. This is very important in systems processing personal data, especially after the EU Privacy Regulation comes into force;
- disclose to customers in advance that the IoT or AI system has been set up to ensure ethical behaviour as well. This would require defining what is ethical not only in the Ts&Cs, but also in the customers’ settings, without ending up with overly complicated definitions; and
- “evangelize” customers on the need to ensure ethical conduct, also by establishing an internal ethics committee.
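The first of these points, the ability of an AI system to track and justify its conduct, can be illustrated with a minimal sketch. All names here (such as `DecisionLogger`) are hypothetical and not drawn from any real product; the idea is simply that every decision is recorded together with the options considered and the rationale, so the manufacturer can later document why a specific behaviour was chosen:

```python
import json
import time


class DecisionLogger:
    """Hypothetical audit trail for an AI system's decisions.

    Records each decision together with its inputs, the options
    considered and the rationale, producing evidence that could
    later be disclosed to a customer, regulator or court.
    """

    def __init__(self):
        self.records = []

    def log(self, inputs, options, chosen, rationale):
        # Store one auditable record per decision taken.
        record = {
            "timestamp": time.time(),
            "inputs": inputs,
            "options_considered": options,
            "decision": chosen,
            "rationale": rationale,
        }
        self.records.append(record)
        return record

    def export(self):
        # Serialise the full trail, e.g. for a compliance review.
        return json.dumps(self.records, indent=2)


# Example: logging a (highly simplified) driving decision.
logger = DecisionLogger()
logger.log(
    inputs={"obstacle": "vehicle ahead", "speed_kmh": 50},
    options=["brake", "swerve"],
    chosen="brake",
    rationale="braking minimises estimated risk to all parties",
)
print(logger.export())
```

This is of course a sketch, not a real autonomous-driving architecture; the point is that the justification is captured at decision time, rather than reconstructed after a dispute has arisen.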
There is no doubt that we will hear more about this topic. Ethics and its different interpretations might lead to endless litigation. My expectation is also that companies will establish internal ethics committees quite soon to address the issue.