From personal assistants to customer services, AI that can talk to us is already big business. But what happens when AI gets a bit too good at pretending to be a human?
OpenAI announced recently that they had created an algorithm which generates incredibly humanlike content. They soon realised that a tool capable of such humanlike output might fall into the wrong hands, and announced that they would not be releasing the full version to the public. Instead, they released only a smaller, more restricted model.
Confirming OpenAI’s worst nightmares, some companies and individuals have already begun adapting this smaller, pre-trained model, and creations such as Todorov’s Facebook API are beginning to surface. Whilst Todorov’s API simply creates a goofy, mostly nonsensical chatbot, the fact that even the restricted release is being modified and put to use in new ways may herald a new wave of realistic AI.
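To give a sense of how low the barrier to entry is, the sketch below shows how a publicly released pre-trained language model can be loaded and prompted in a handful of lines. It assumes the model in question is OpenAI’s smaller GPT-2 checkpoint and uses the Hugging Face transformers library; the model name, prompt and sampling parameters are illustrative rather than drawn from Todorov’s or anyone else’s actual project.

```python
# A minimal sketch of repurposing a publicly released language model.
# Assumes the Hugging Face "transformers" library and the small public
# "gpt2" checkpoint; all names and parameters here are illustrative.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Breaking news:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; top_k and temperature control how varied
# (and how sensational) the generated text becomes.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Nothing about this requires deep expertise, which is precisely why even the small released model, prompted with a headline, feeds directly into the concerns that follow.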
One of OpenAI’s main concerns was that their bot could be used to generate fake news, a problem that has plagued the internet since its inception but which has only come to the fore in recent years. With this recent advancement in realistic AI and chatbots, it may only be a matter of time until AI-created fake news becomes indistinguishable from real news. Aside from being wholly unethical, this raises a further question: once AI learns more about the world around it and how to create sensational news, what happens when it starts making wild, untrue allegations against individuals and companies?
Liability for AI
The world of sensational journalism continues to have its fair share of defamation cases, and fake news is also being targeted and taken down, with the perpetrators being fined and/or taken to court. Some post fake news to suit a political agenda, while others do it simply for fame or notoriety, but governments and individuals are gradually gaining ground in deterring fake content creators.
What then happens if a content-creating AI posts a defamatory story? Does the creator of the algorithm get taken to court? What about the person who trained the AI to create a certain type of content, or the person who set the AI running? These questions are not straightforward to answer, especially when dealing with open-source AI code that can be adapted and fed data by anyone and everyone. Due to the nature and complexity of the algorithms behind AI, it may be difficult to determine who is ultimately at fault if the AI produces something unexpected which turns out to be defamatory.
While defamation may be less of an issue with a chatbot, where the interaction is normally limited to a one-on-one chat rather than an article that could potentially reach millions of readers, AI-enabled fraud is becoming a growing concern.
AI Fraud
AI developers are focusing on making interactions more realistic, moving from basic mimicry to fluid interactions and dynamic conversations. With these advancements, AI could potentially be more convincing than a human con artist, mimicking with ease the style and mannerisms of a particular individual. Combine this with the volume of data we post publicly online and our reliance on email and instant messaging, and it’s not hard to see how an AI could impersonate the relative of an elderly person and convince them to part with funds, or set up a fake online transaction for the purpose of harvesting data.
It’s easy to see, then, why OpenAI refused to release their code into the public domain for fear of inadvertently creating a monstrosity, though the decision has not been without its critics. The language model that OpenAI has created is clearly advanced, with capabilities above and beyond most other available algorithms. In the right hands and with the right intentions, this AI could be used to develop and evolve our current understanding of the technology. But perhaps, for now, it is best kept under lock and key, until a time when people better understand the technology they’re dealing with.