Having already set the scene for the debate and examined the framework for managing the risks posed by AI in our series of articles on this topic, this article will focus on the implications of AI for the workplace, with a particular focus on employment law and related HR issues.
Artificial intelligence can be deployed by businesses in myriad ways. On the one hand, it enhances and augments human capability, making our daily jobs easier, removing the tedium of repetitive tasks and speeding up our efforts; on the other, it can replace us altogether, leaving the human worker superfluous to requirements.
AI can be used to crunch through massive amounts of data and reach conclusions based on more data points than humans could manage. One recent study pitted an AI system against qualified lawyers, challenging them to predict the outcome of PPI cases submitted to the legal ombudsman. The computer was right 86% of the time, versus the lawyers’ 62.3%. Does this mean lawyers are obsolete? Arguably not; PPI claims are formulaic and require methodical application of a series of questions to the facts at hand. Not all legal problems can be reduced in this way. More complex questions, involving the weighing up of various factors and the careful balancing of evidence and argument will remain the domain of the human lawyer (for now).
Instead, what this study shows us is that AI can be thoughtfully deployed to aspects of a job which are capable of being reduced to a mechanised or formulaic process. Legal due diligence, for example, can now be greatly sped up by using AI to scan through potentially hundreds of thousands of emails, flagging only those relevant to the case at hand for review by a more senior lawyer. Technology of this sort frees up the human professional to undertake the more complex, varied, and generally more interesting work, leaving the computer to churn through the bulky, repetitive tasks. This is not just true in a legal context; the UK is plagued by low productivity rates relative to hours worked across many sectors, and AI is one way of enabling employees to achieve greater productivity.
Not all jobs require this level of creative human insight. What of jobs that are rendered obsolete by AI? A recent article in the Financial Times described the plight of a fictional trader whose role had evaporated in the face of regulatory change and the rise of AI systems able to place orders in his stead. It is not difficult to imagine this issue in a wider context – it is analogous to a robot replacing twenty labourers on an assembly line. Does this lead to a ‘post-work’ economy, with rising levels of unemployment and spiralling social issues?
One way to address this on a small scale is for employers to look carefully at the impact of deploying AI solutions in their businesses. Rather than moving to outright replace and dismiss swathes of the workforce, consider: with higher productivity and more free time, what other ‘value add’ roles could individuals move to? The lawyer might have more time to spend on perfecting a carefully crafted brief; the assembly line labourer might have the time to work with a colleague on another part of the line which lends itself less well to automation.
Radically: with workers ‘assisted’ by AI and rising productivity levels, could this herald the age of the six-hour working day, or a four-day working week? One idea gaining popularity is a ‘universal basic income’, paid to all individuals whether they are working or not, decoupling income from employment entirely. At first glance a shorter working week sounds brilliant – more time for the pursuit of hobbies, family life, rest and recreation and so on – but being engaged in useful economic activity gives people a huge amount of self-respect and motivation, and is often a key part of an individual’s identity.
Perhaps the focus then needs to be on ensuring individuals have the opportunities to retrain and acquire skills that will be needed, and which cannot simply be replaced by AI. It seems that learning to work with AI has the greatest likelihood of leading to a sustainable working environment in the future; humans have skills that, at the moment, AI cannot replicate. Humans are creative, lateral thinkers. We might have seen the first examples of AI/journalist co-written newspaper articles, but we haven’t yet seen an AI War and Peace. Indeed, when software developers programmed AI to write a chapter of a new Harry Potter book, the result was a far cry from J.K. Rowling’s considered prose.
In the context of an evolving and changing workplace, with at least some displacement of labour, the main consideration from an employment law point of view is likely to be dismissals. In the UK, employees with two or more years’ service can only be dismissed if there is a fair reason for the dismissal, and a fair process is followed. One such potentially fair reason is redundancy: a reduced requirement for work of a particular kind. If AI can accomplish the work instead, this could give rise to a redundancy situation. A fair process in this context would involve consultation (on an individual and/or collective level, depending on the number of employees involved). Employers are obliged to consider whether there are any suitable alternative positions available – which echoes the point made above about considering what the individual can add that AI can’t, and whether their role could be altered or amended to focus on these areas. New skills or training might be required, particularly for less skilled workers. But the cost of recruitment is high, redundancy can expose an employer to claims and, at a minimum, to the cost of the statutory redundancy payment – and the value of an employee who is loyal to the company and feels valued is considerable.
Discrimination is another issue to keep in mind, particularly in the context of looking at new and alternative roles. Failing to offer training opportunities to an older employee, perhaps because they are perceived to be less up-to-date, less inclined to understand and work well in conjunction with AI, or less computer literate, may well be discriminatory depending on the facts. Compensation for discrimination is technically uncapped.
The other point to make about discrimination is a word of caution on the use of AI itself. Some applications of AI can themselves result in discrimination, without the original programmer having intended any such outcome. The unconscious biases of the programmer can be reinforced, resulting in, say, sexist or racist recruitment decisions. Businesses seeking to import AI into their interview processes to analyse candidates need to remember that the complex algorithms lying behind AI are opaque, and their workings not always self-evident. It’s not that different to a manager recruiting someone who is a good ‘fit’ for an organisation – after all, that might be what the software is setting out to do – but that good ‘fit’ is someone who looks like them, sounds like them, comes from a similar social background and went to the same university. This type of unconscious bias can of course result in homogenous businesses and can stifle diversity.
It has been said that in the short term we tend to overestimate the impact of new technology, and in the long run, underestimate it. It is clear that businesses and workplaces will be hugely affected by AI, and will need to adapt. Perhaps in the short term, we will see productivity gains brought about by AI being drafted in to do the repetitive ‘grunt work’, freeing us up for more interesting, creative, and varied tasks. In the long term though, it’s difficult to predict what impact AI will have, and what jobs might be replaced entirely. Perhaps in 30 years’ time, there will be a new series of Harry Potter written using AI – and we won’t be the ones writing this blog.