Is employment law ready for AI?

Ius Laboris

European Union October 25 2023

This report concerns the use of artificial intelligence throughout the life cycle of employment: from recruitment, through the management of work processes, to dismissal decisions.

The report is in two parts. The first is a research paper that sets out some of the legal challenges arising, and likely to arise in future, from the use of AI in an employment context; it identifies a genuine tension between the pursuit of benefits for businesses, on the one hand, and the protection of employees' rights to privacy, freedom from discrimination and access to good-quality employment, on the other. The second part presents the results of a survey of 28 jurisdictions on the state of AI-specific regulation around the world, considering both existing legal barriers to the use of this technology and the extent to which its use is already regulated under general principles of civil, employment or data privacy law. While we are able to report on a number of proposals for comprehensive regulation, including in the EU, few countries have yet reached the point of enacting AI-specific measures that are legally binding.

AI and employment law

1. Truth and illusion

Scientifically and technologically, artificial intelligence (AI) has made spectacular progress in recent years. This has been driven by increases in computational capacity ('Moore's Law' suggests that the number of transistors in a microprocessor should double about every two years), and by the fact that we have more data than ever before upon which to train machine learning algorithms. These build models based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so. Technology is moving fast in this field.

1.1 LARGE LANGUAGE MODELS

AI's ability to process natural language (natural language processing, or NLP) has undergone a revolution in recent times. The last four years have seen spectacular advances in large language models (LLMs) or, as they are sometimes called, 'foundation' or 'base' models. Large language models are algorithms that learn statistical associations between billions of words and phrases so as to be able to perform tasks such as generating summaries, performing translations, answering questions and classifying texts. A base model is a large artificial intelligence model trained at scale on a huge amount of unlabelled data (usually by self-supervised learning), resulting in a model that can be adapted to a wide range of downstream tasks. These are gigantic neural networks with billions of parameters that create language representations of extraordinary sophistication.

Humans have many languages: natural languages like English or Spanish, the language of chemical science or the language of mathematics, among many others. We can try to teach machines to represent a given language (for example, a dictionary would be a limited representation of a natural language). A base model is created from one or more languages, and from a base model, additional specific models can be created. Before the developments of the last four years, a specific model had to be created for each action an AI was to perform, and the model had to be trained laboriously. This meant, for example, labelling a great number of images 'by hand', one by one, if one wished to teach the model the difference between things that are stop signs and things that are not.

Self-supervised models are now capable of creating base models. Billions of sentences found on the internet are aggregated and, from each sentence, a word is removed. Thus, from the sentence 'a bird in the hand is worth a hundred in flight' we could remove the word 'bird', but keep it in the machine's memory as the correct answer. The neural network then has to provide a guess without, initially, having access to the correct answer stored in memory; the guess is checked afterwards to find out whether it was correct. This is called self-supervision. The system learns by trial and error and, in the end, it will have created an accurate representation of a language (e.g. general, non-specialised Spanish). That general representation can then be applied to a particular language (e.g. legal Spanish) at 100 to 1,000 times less cost, and with greater accuracy, than would have been possible before these breakthroughs in language modelling. Different languages can also be combined and mixed.
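The masked-word training signal described above can be made concrete with a small sketch. The example below is purely illustrative and is not taken from the report: the three-sentence corpus and the crude frequency-based 'model' are invented here, and a real large language model would instead adjust billions of neural-network parameters so that its guesses improve over many such examples.

# A toy illustration of the self-supervision idea described above: hide a word,
# keep the hidden word aside as the correct answer, let the model guess, check.
# The "model" here is a deliberately naive word-frequency counter, not a neural network.
from collections import Counter
import random

corpus = [
    "a bird in the hand is worth a hundred in flight",
    "the employee signed the employment contract",
    "the court dismissed the claim against the employer",
]

# "Training": count how often each word appears in the corpus.
word_counts = Counter(word for sentence in corpus for word in sentence.split())

def guess_masked_word(masked_sentence):
    # Ignore the context entirely and guess the most common training word.
    return word_counts.most_common(1)[0][0]

correct = 0
trials = 0
for sentence in corpus:
    words = sentence.split()
    position = random.randrange(len(words))
    answer = words[position]              # kept aside as the 'correct answer'
    words[position] = "[MASK]"            # the word removed from the sentence
    prediction = guess_masked_word(" ".join(words))
    correct += (prediction == answer)     # learning by trial and error would adjust the model here
    trials += 1

print(f"Guessed {correct} of {trials} masked words correctly")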
With this we can carry out tasks such as translations, summaries, questions and answers, or dialogues with a chatbot. One of the best-known examples is ChatGPT, a chatbot (i.e. a computer program) capable of 'conversing' with a user. This is impressive, of course, but it is still just the recreation, arrangement, modification or processing of what is already there: ChatGPT has no real creative capability. This is already affecting computer programming. In the future, we will give instructions to AI in natural language, and the AI will simply write the code and perform the required operations. Programming will thereby be democratised. AI will be able to translate between languages (and not only between natural languages). It will also correct much of what we do: we will have a dialogue with the AI as we think or write, and perhaps lawyers will ask it to suggest arguments to use in court, in a negotiation with opposing counsel, or as part of a collective bargaining process. This would be similar to what many language processors do now, but with much more sophistication. More and more processes will be automated, and more will be managed by algorithms.

1.2 WHAT AI CANNOT (YET) DO

However, there are many very important human tasks that AI cannot yet do, and this is unlikely to change in the short (or even medium) term. What we know today as artificial intelligence is not capable of representing the world (i.e. space and time), nor of acting with what we call common sense (operating in a complex environment in an efficient way), nor can it carry out thought experiments (something so important for science and mathematics). AI is capable of neither prudence (the ability to make the right decision in specific, unique circumstances) nor wisdom (the ability to see, to contemplate the whole). In general, it is incapable of sophisticated reasoning related to abstract ideas.

AI, in its current state, is not capable of accurately imagining what will happen in circumstances that are uncommon or unusual. To get things right, it needs to have seen enough precise examples of something happening according to certain rules. Its usefulness will therefore be limited when it comes to anticipating what will happen in a specific judicial procedure, or with respect to political decisions in moments of crisis, for example. Unsurprisingly, perhaps, algorithms lack non-algorithmic skills.

This is because the progress being made still relates to what is known as weak or 'narrow' AI. This kind of AI does not claim to have general cognitive capabilities; rather, weak AI is any program that is designed to solve exactly one problem. (Note that some academic sources reserve the term 'weak AI' for programs that do not experience consciousness or 'have a mind' in the same way people do.) Strong AI, full AI or general intelligence is the ability of an intelligent agent to understand or learn any intellectual task that a human being can perform. (Again, note that some academic sources reserve the term 'strong AI' for computer programs that experience sentience or consciousness.)

1.3 LEGAL REASONING

The law, and labour law is no exception, rarely generates a clear and unique interpretation that could be made subject to an algorithm, except perhaps in the simplest cases (or cases on identical facts).
As lawyers know, legal reasoning often considers things on a case-by-case basis, subject to ever-changing variables and specific circumstances, even if this is not always squarely acknowledged. It takes into account historical and social circumstances, ideology, modes of thought and politics, as well as the facts of each specific case. Consider the following legal example. Judges explain their decisions, and lawyers devise their arguments, according to the norms and practices of legal reasoning. But these explanations and arguments can be imprecise and open to interpretation. Lawyers do not explain why they followed one strategy rather than another. Judges do not always give the reasons that truly motivated their decisions. It is therefore difficult to use the explanations provided to construct an algorithm in the sense described above, that is, a set of instructions designed to solve a problem. However, if a large number of court decisions based on identical or very similar facts are available, a learning algorithm can be trained to propose a solution based on previous decisions. Algorithms could then recommend solutions based on these previous, near-identical cases, but this does not mean that the use of machine learning is the same as legal reasoning, which is a more complex task.

In the past, the development of expert systems, capable of reproducing logical reasoning based on a knowledge base and an inference engine, suggested that these could be used to reproduce legal reasoning. Expert systems rewrite legal rules into computer language in order to establish a decision tree with various branches related by conditional logic. However, they have generally been considered disappointing in the legal field, even when used to address highly technical issues where it only seems necessary to reproduce relatively simple syllogistic reasoning to reach the correct solution. This relative failure can be explained by the rather reductive reasoning of expert systems. They are unable to consider presumptions or analogies and cannot engage in the constant back-and-forth between fact and law that characterises legal reasoning. Significantly, they cannot deal with (apparently) contradictory rules. This is problematic, as legal rules often lack the precision of mathematical rules and include many contradictions.
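To see why this approach is so brittle, consider the following minimal sketch of legal rules rewritten as conditional logic. The two 'rules' and the facts are invented here purely for illustration; they are not drawn from the report or from any real jurisdiction.

# A minimal sketch of the expert-system approach described above: hypothetical
# dismissal rules encoded as conditional branches of a decision tree.

def dismissal_outcome(case):
    # Rule 1 (hypothetical): repeated unjustified absence justifies dismissal.
    if case.get("unjustified_absences", 0) >= 3:
        return "dismissal justified"
    # Rule 2 (hypothetical): dismissal during protected sick leave is void.
    if case.get("on_protected_sick_leave"):
        return "dismissal void"
    return "no ground established"

# Straightforward cases are handled mechanically...
print(dismissal_outcome({"unjustified_absences": 4}))          # dismissal justified
print(dismissal_outcome({"on_protected_sick_leave": True}))    # dismissal void

# ...but when both rules fire for the same facts, the outcome depends only on
# the order in which the branches happen to be written: there is no weighing of
# presumptions, analogies or the purpose of each rule, which is exactly the
# limitation noted above.
print(dismissal_outcome({"unjustified_absences": 4,
                         "on_protected_sick_leave": True}))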
1.4 A COMPOSITE MYTH

We shouldn't forget that many have good reasons to exaggerate what artificial intelligence is, and what it is likely to be, today or in the foreseeable future. There are business reasons for those who offer products based on artificial intelligence, and exaggerating what a product can do when offering it to a potential customer is nothing new in business. The media and entertainment industries have similar incentives: exaggerating and representing our worst fears has always captured our attention. But the truth is that today we have serious reasons to continue to believe that human and machine intelligence are radically different. A myth circulates that the differences between artificial and human intelligence are only temporary and that increasingly powerful computer systems will erase them. As Erik Larson explains, there are two important aspects to this myth, the scientific and the cultural. [1]

The scientific aspect of the myth assumes that we only need to chip away at the challenge of general artificial intelligence by making progress in the field of narrow artificial intelligence (e.g. in tasks such as gaming or image recognition). This is a wrong inference. Improvements in performing concrete tasks (performing them faster, or with more data, say) will not bring us any closer to general artificial intelligence. They will not allow us to leap to common sense, to have a real conversation, or to see a machine read a newspaper in a human way. There is no algorithm for general artificial intelligence, and scientific breakthroughs that are not yet foreseeable would be required for it to become a reality. We should not delude ourselves into assuming that we know what we do not know, not least because this belief could well stand in the way of real scientific progress.

For now, machines are capable of two types of inference. The first is deduction, where a conclusion follows logically from given premises, and the second is induction, where the truth of the premises supports, but does not establish, the truth of the conclusion. Machines are not capable of reasoning by abduction, as humans are. Abduction is a type of reasoning where, from the description of a fact or phenomenon (but without factual certainty from which to deduce or induce), a hypothesis can be arrived at, a conjecture or best and most probable explanation, which accounts for the possible reasons or motives for some circumstance or matter. Abductive reasoning is an essential and unique feature of human intelligence.

Culturally, the myth of artificial intelligence is also detrimental. It would discourage real innovation in this field if we were to assume that the current path is sufficient to achieve general artificial intelligence; if we were to claim, in this way, to know what we do not really know. It is also likely to cause unnecessary fear and concern: apocalyptic anxieties which, when not due to ignorance, often express other fears or interests, and could encourage a twenty-first-century Luddism which, like the old version, would not be conducive to the economic and cultural betterment of humans.

2. Technological change and employment

2.1 A FOURTH INDUSTRIAL REVOLUTION

There is no doubt that the narrow artificial intelligence that already exists is developing at great speed and is going to affect employment. As James Manyika, Google's vice-president of technology and society, has recently explained, three things will happen at once as a result of AI development: some jobs will be created, some jobs will be lost, and some jobs will change. [2]

The Massachusetts Institute of Technology (MIT) has proposed, in a recent publication on AI and the future of employment, what seems to me a useful strategy for approaching this question. We should start by considering the tasks that make up each particular job and think about which of them can be done better by computers and which can be done better by people. Taking this approach would mean thinking less about people or computers and more about people and computers. [3]

In January 2016, Klaus Schwab, the founder and executive chairman of the World Economic Forum, declared that the world was entering the Fourth Industrial Revolution: 'We are on the brink of a technological revolution that will fundamentally alter the way we live, work and interact. In its scale, scope and complexity, the transformation will be unlike anything humanity has ever experienced before'. [4] Schwab spoke of the impact of the accelerating rise of computing power and sought to alert the world to its ability to analyse and use data to take and execute decisions about us and for us. His concern was the effect this could have on all aspects of our lives if left unchecked.
His aim was to stress the absolute necessity for human beings to take charge of this process and not be mere victims, arguing that 'the response must be integrated and comprehensive, involving all stakeholders in world politics, from the public and private sectors to academia and civil society'. He concluded with a warning:

'In the end, it all comes down to people and values. We have to forge a future that works for everyone by putting people first and empowering them. At its most pessimistic and dehumanised, the Fourth Industrial Revolution may have the potential to "robotise" humanity and thus deprive us of our heart and soul. But as a complement to the best parts of human nature (creativity, empathy, stewardship) it can also elevate humanity to a new collective and moral consciousness based on a shared sense of destiny. It is incumbent on all of us to ensure that the latter prevails.'

His warning was right then and is even more right now. Little more than five years after those words were uttered, the Fourth Industrial Revolution is already here. By September 2020, in the EU-27, Norway, Iceland and the UK, more than 40% of companies had adopted at least one AI-based technology and a quarter at least two, with a further 18% planning to adopt AI by 2022. [5]

2.2 THE USE OF AI AT WORK

In the context of workforce management, AI systems are being used to collect and analyse data about workers and to make decisions about them that affect their working lives. Professor Jeremias Adams-Prassl calls this process 'algorithmic management', and has studied it in the context of work in the gig economy, via digital platforms. [6] AI-driven management practices are well established for work involving digital platforms. Algorithms operate to match service providers to tasks, monitor their activities, assign ratings (often derived from end-user feedback), offer rewards or take disciplinary measures. These practices are, however, by no means limited to work done through digital platforms.

Work that involves AI is becoming more and more common, and this has been driven, in part, by certain consequences of the Covid-19 pandemic. In June 2020, the Financial Times reported on a survey conducted by the Institute of Student Employers in the UK, which showed that in 2019 only 30% of companies conducted face-to-face interviews at the first stage of the graduate recruitment process. [7] The survey led the Institute to conclude that 'online recruitment may become the new normal'. AI-powered tools are being used at all stages of the recruitment process, from the sourcing of candidates to screening, interviewing and making job offers based on predictive modelling. The UK Trades Union Congress (TUC) recently investigated the AI-based technologies used in the recruitment of workers. The most commonly used systems automatically scan CVs for keywords to decide whether to take the candidate to the next stage of the process (17%), carry out automated background checks (16%) or involve 'gaming' (i.e. video simulations and game-based assessment) (14%). [8]
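A minimal sketch can make the first of these, keyword-based CV screening, concrete. Everything in the example below is hypothetical: the keyword list, the pass threshold and the sample CV text are invented for illustration and are not taken from the TUC survey or from any real screening product.

# A toy, hypothetical sketch of keyword-based CV screening of the kind described
# above: count how many required keywords appear and compare against a threshold.

REQUIRED_KEYWORDS = {"employment law", "gdpr", "litigation", "negotiation"}
PASS_THRESHOLD = 2  # minimum number of keyword hits to reach the next stage

def screen_cv(cv_text):
    text = cv_text.lower()
    hits = sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)
    return hits, hits >= PASS_THRESHOLD

cv = "Five years advising on employment law and GDPR compliance."
hits, advance = screen_cv(cv)
print(f"{hits} keyword(s) matched -> {'advance' if advance else 'reject'}")

# The risk discussed in this report is visible even in this toy version: a strong
# candidate who phrases their experience differently ('data protection' rather
# than 'GDPR') scores fewer hits and is filtered out without human review.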
People are not only hired by AI; they are also evaluated, monitored and managed by it. AI-powered tools that enable analytical goal-setting and performance evaluation are becoming more common. For example, the company BetterWorks offers an AI-based performance evaluation tool that aims to replace traditional, human-driven performance management review processes. The AI is based on a 'working graph', which maps the connections between an organisation's functions, objectives and goals. The process is continuous, with performance analysis being done in real time. Many of the AI platforms mentioned above also offer AI-based tools for team dynamics analysis, personality analysis, and team coaching and restructuring. Gamified training (the use of game-like tasks and experiences, often through simulation) seems to be on the rise.

On work platforms it is common for low ratings to trigger a series of 'standard performance tests', where workers are confined to low-value tasks, or are simply dismissed. There have been numerous reports in the media of automated processes being used to track and then dismiss people on productivity grounds, as well as reports of human error leading AI systems to make unfair termination decisions.

There is a trend towards monitoring worker behaviour in order to collect data that can then be analysed by AI. This has transformed monitoring tools into data sources. Once this data is accumulated, it is often processed by AI to reach conclusions and make decisions about workers. According to the CIPD, it is common for HR professionals to use people data to address important challenges facing their organisations. A CIPD survey found that 75% of HR professionals worldwide are addressing workforce performance and productivity issues using people data, illustrating the importance of this information for strategic workforce issues.

Monitoring by employer

