Is removing subjective human choice from HR decisions going to create more problems than it solves?
We are all very aware of human failings when it comes to people management in the workplace, from unconscious bias through to wholly intentional discrimination. To that extent, handing over some management decisions to algorithms and AI (a term with no common definition, but which can cover a scenario where many algorithms work together with the ability to improve their own function) may seem like a no-brainer. The technology is certainly out there and being aggressively marketed.
The rise of the gig economy is tied to the increased use of algorithms and AI, as the software began to be used on platforms such as Uber in an attempt to optimise the deployment of workers. It has also been adopted in many other sectors and workplaces, including by global brands such as Amazon and Unilever. Common uses include recruitment, workforce management (eg task or shift allocation) and performance review. The benefits to business include faster decision making, more efficient workforce planning, improved speed of recruitment and the obvious reduction in opportunity for human bias.
However, the very nature of "algorithmic management" means an increase in the monitoring and collection of data upon which the automated, or semi-automated, decisions are made. This is particularly so for performance monitoring and brings with it the risk of data being collected and processed without appropriate consent. Removing humans from the decision-making process entirely also creates the potential for a lack of accountability. Additionally, if bias is embedded into an algorithm, this will increase rather than decrease the risk of discrimination.
In May 2021, the TUC and the AI Consultancy published a report - Technology Managing People - the legal implications - highlighting exactly these sorts of issues and calling for legal reform. One focus of the report is the lack of transparency in decision making that comes with the use of AI - the basis of a decision is often unknown to those about whom the decision is being made. The report points out that where it is difficult to identify when, how and by whom discrimination is introduced, it becomes more difficult for workers and employees to enforce their rights to protection from discrimination.
Other issues identified by the report include a lack of guidance for employers explaining when workers' privacy rights under the ECHR may be infringed by AI, and the risks posed by the lack of clarity in the application of the UK GDPR to the use of AI within the employment relationship. Although unfair dismissal rights provide some protection from dismissals that are factually inaccurate or opaque, and this could be applied to an AI-based decision-making process, the need for qualifying service means this protection is not universal. The UK GDPR also provides protection for employees via the requirement, amongst other things, for all personal data processed by AI to be accurate, but a complaint arising from such a breach cannot, in itself, be brought within the employment tribunal system.
The TUC report makes a number of recommendations on how these issues can be overcome. Amongst them are the provision of statutory guidance on how to avoid discrimination in the use of AI and on the interplay between AI and workers' rights to privacy; the introduction of a statutory right not to be subjected to detrimental treatment (including dismissal) due to the processing of inaccurate data; the right to "explainability" in relation to high-risk AI systems; and a change to the UK's data protection regime to state that discriminatory data processing is always unlawful. However, even if any of these proposals are acted upon by the UK Government, they will take time to implement.
For employers looking for ideas on good practice in this area, the policy paper published by ACAS - My boss the algorithm: an ethical look at algorithms in the workplace - is a good starting point, although it should be noted this is not ACAS guidance. The recommendations look at what can be done at a human level within a business. Key to those recommendations is the need for human input - algorithms being used alongside human management rather than replacing it. This is something the TUC report also picks up on, albeit more formally, suggesting that there should be a comprehensive and universal right to human review of "high risk" AI decisions made in the workplace. Both reports also highlight the need for good communication between employers and employees (or their representatives) to ensure technology is effectively used to improve workplace outcomes.
Given the growth in this area, further regulation to manage the use of algorithms and AI in the workplace seems inevitable. In the meantime, businesses making use of this technology need to fully understand exactly what it does, where the risks lie and why transparency about its use matters.