Robots are responsible for one in eight tweets about the General Election.
The killer robot looms large in the popular imagination. From the relentless, humanoid Terminator to the innumerable squid-ish sentinels of The Matrix, we have imagined our computerized demise in countless ways. A recent survey found that 36% of the public believe that the development of artificial intelligence poses a threat to the long-term survival of humanity.
While we are still in charge (or at least, while we’re allowed to think so), our robo-anxiety may be misdirected. A different robot has been hitting the headlines: the ‘Twitterbot’. These bots are pieces of software that run through a Twitter account and can be programmed to perform tasks automatically. They can be useful: @DearAssistant will reply with an answer to almost any question you could think to ask. They can also be bizarre: @everyailment has spent the last 18 months tweeting every affliction listed in the International Classification of Diseases. One significant study found that up to 15% of all Twitter accounts are in fact automated bots.
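For the curious, the mechanics behind a cataloguing bot like @everyailment can be sketched in a few lines. This is an illustrative Python sketch, not the account’s actual code: `post_tweet` here is a hypothetical stand-in for a real Twitter API call, so the automation logic can be shown without credentials.

```python
# Minimal sketch of an automated cataloguing bot: a pre-loaded list
# of items is turned into tweets one at a time. In a live bot,
# post_tweet() would call the Twitter API via a client library;
# here it simply returns the text.

TWEET_LIMIT = 140  # Twitter's character limit at the time of writing

def compose_tweet(item: str) -> str:
    """Truncate an entry so it fits within the tweet character limit."""
    if len(item) <= TWEET_LIMIT:
        return item
    return item[:TWEET_LIMIT - 1] + "…"

def post_tweet(text: str) -> str:
    # Hypothetical stand-in for a real API call.
    return text

def run_bot(items):
    """Post every item in sequence, as a cataloguing bot would."""
    return [post_tweet(compose_tweet(item)) for item in items]

tweets = run_bot(["Cholera", "Typhoid fever", "A" * 200])
```

The point is how little is needed: given a list and a loop, a single account can tweet indefinitely with no human at the keyboard.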
Exciting the most passions, however, is the political Twitterbot. Social media has been praised for democratising debate, giving everyone a platform for their opinions. The problem is that, unlike humans, bots can voice their pre-programmed opinions more rapidly, more frequently and with (even) fewer consequences. The day before the 2016 US presidential election, a study found that there were 400,000 Twitterbots in operation, generating around 20% of all election-related messages. Another study found that bots which favoured Donald Trump outnumbered those supporting Hillary Clinton 5:1.
The research shows that a growing number of political movements, from the Venezuelan radical opposition to the campaign for Brexit, are making deliberate and strategic use of these bots to attempt to shape political debate and influence election results. Bots can quickly spread potentially damaging information, such as #MacronLeaks on the eve of the French election. They can also propagate so-called ‘fake news’, as with the fabricated story of the FBI agent murder-suicide connected to Mrs Clinton’s email disclosures. An enormous network comprising 350,000 dormant Twitterbots was recently unearthed. If so directed, these could be used to spread misinformation or to create a convincing, but entirely artificial, impression of public opinion.
Indeed, Twitterbots are becoming increasingly sophisticated at imitating their human counterparts. Bots are retweeted at substantially the same rate as humans, suggesting that they are effective at hiding their algorithmic origins. Mr Trump has reportedly quoted from bot accounts 150 times, including one occasion on which he was baited into citing @ilduce2016 – a bot which has been tirelessly forwarding him quotes from the fascist dictator Benito Mussolini. That bot, and certain others, may be easy to identify as technological rather than biological, particularly those which tweet thousands of times per hour or produce gibberish. Newer bots, however, can be more convincing. They can emulate human sleep by going offline for several hours, or pull pictures and information from online sources to pass as real people. So, in addition to manufacturing plausible trends en masse, we may soon be personally talking to, and absorbing opinions from, Twitterbots without ever realising it.
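The sleep-emulation trick is simpler than it sounds. The sketch below is illustrative only – it is not code from any cited study, and the quiet hours and timing range are assumptions – but it shows how a bot can fake a human diurnal rhythm: stay silent during a nightly window and vary the gaps between posts so the timing looks less machine-like.

```python
import random

# Assumed "asleep" window: 11pm to 7am. A bot that posts only outside
# these hours, at irregular intervals, avoids the two giveaways the
# article mentions: round-the-clock activity and a fixed cadence.
SLEEP_START, SLEEP_END = 23, 7

def is_awake(hour: int) -> bool:
    """Return False during the simulated sleep window (handles windows
    that wrap past midnight)."""
    if SLEEP_START <= SLEEP_END:
        return not (SLEEP_START <= hour < SLEEP_END)
    return SLEEP_END <= hour < SLEEP_START

def next_post_delay(rng: random.Random) -> float:
    """Minutes until the next post, jittered so there is no fixed
    tweet-every-N-minutes signature."""
    return rng.uniform(10, 90)
```

A detector looking only at posting rhythm would see an account that “sleeps” at night and tweets at irregular daytime intervals – exactly what a human timeline looks like.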
So what does this mean for the 2017 UK General Election? The answer is an unsatisfying, but resounding, ‘not sure’. It isn't that UK campaigning is immune to robotic electioneering; on the contrary, one study found that a third of all Twitter traffic concerning Brexit was most likely generated by bots. The best estimate for June's vote is that 12.3% of all election-related content is being produced by bots. The same study found that the parties' presence in the bot-scape was roughly equal, though Labour party accounts were more effective in spreading content.
What hasn't been shown, however, is whether Twitterbot activity actually makes any difference. Instinctively we might think it could. We expect people to vote based on the information they receive and the opinions of those around them. If Twitterbots are spreading information relevant to the election, and pretending to be people with opinions, the implication is that voting will be affected. But there is no evidence for that conclusion: none of the studies cited above has demonstrated a causal connection between automated tweeting and election outcomes. For that reason, and for general good sense, starring roles in action films should remain the preserve of the old-fashioned killer robot. At least for now.