Introduction

In today’s world, artificial intelligence (“AI”) and Big Data are transforming the way workers are recruited, hired, evaluated and dismissed. Human resources, after all, is often called upon to handle some of the most delicate and emotionally charged aspects of an organization’s operations.

Catalysed by Covid-19’s disruption of traditional employment processes, the use of technologies including Blockchain, Big Data and AI has become mainstream among major corporations worldwide.

In the age of social media, and with many places imposing social-distancing requirements due to Covid-19, job applicants will now find that employment technologies play a role at every stage of recruitment. Microsoft Corporation’s LinkedIn offers employers algorithmic rankings of candidates based on their fit for job postings on its site. HireVue, a startup near Salt Lake City, analyzes candidates’ speech and facial expressions in video interviews to reduce reliance on resumes. Goldman Sachs has created its own resume analysis tool that matches candidates with the division where they would be the “best fit”. Tesla and JPMorgan Chase use the pymetrics platform, which replaces resumes with online games and quizzes, to evaluate candidates for specific roles within their companies.

Limitations of using AI in the employment process

While bringing benefits such as remote connecting, time-saving and improving HR decision-making, the adoption of AI technologies in recruitment does not come without limitations.

In 2015, Amazon halted its experiment with a recruiting engine that reviewed job applicants’ resumes after discovering that the system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way[1]. Amazon’s computer models had been trained to observe patterns in resumes submitted to the company over the previous decade, most of which came from men, reflecting male dominance across the tech industry. In effect, Amazon’s system taught itself that male candidates were preferable and penalized resumes that included the word “women’s,” as in “women’s chess club captain.”

A 2018 study[2] found that Face++ and Microsoft Face API, two software products offering facial-recognition services, both interpret black applicants as having more negative emotions than their white counterparts. While Face++ consistently interprets black applicants as angrier than white applicants, Microsoft more readily interprets black applicants as contemptuous when their facial expressions are ambiguous.

Concerns have also been raised regarding workplace accommodations for pregnant, disabled and religious employees. In most cases, the employee takes the initiative to notify the employer of the need for a reasonable accommodation, and such conversations can be sensitive, personal and even difficult. If an employee’s primary interface with their employer is an app or an algorithm, initiating that process may be daunting, and employees may be reluctant to disclose some of their most personal and protected issues to a chatbot.

Legal landscape

It follows that use of AI solutions in recruitment decisions may lead to run-ins with the law, especially in the anti-discrimination arena. This section briefly explores the relevant laws in several jurisdictions that may have a bearing on the adoption of technologies in employment.

US laws

In the United States, there is no new federal law governing the use of AI in hiring; instead, AI technologies will be evaluated under foundational equal employment opportunity laws, including Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, the Americans with Disabilities Act and equivalent state laws. Under the disparate impact framework, enforcement agencies examine the various tools employers use and their effects. If a tool is found to result in adverse impact, the burden is on the employer to validate it, i.e. to demonstrate that the tool is job-related and consistent with business necessity. There is also an obligation to evaluate whether alternatives exist that would lessen the adverse impact.
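As an illustration of how adverse impact is commonly screened for in practice, the EEOC’s Uniform Guidelines describe a “four-fifths rule” of thumb: a selection rate for a protected group below 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The sketch below applies that heuristic to hypothetical screening numbers from an automated resume-ranking tool; the figures and function names are illustrative assumptions, not data from any real audit, and the rule is only a first screen, not a legal conclusion.

```python
# Simplified sketch of the EEOC "four-fifths rule" heuristic for adverse
# (disparate) impact. All numbers below are hypothetical illustration data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who passed the screening tool."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest (reference) group's
    rate; values below 0.8 flag potential adverse impact under the
    four-fifths rule of thumb."""
    return group_rate / reference_rate

# Hypothetical outcomes: 100 applicants per group, ranked by an AI tool
men_rate = selection_rate(60, 100)    # 0.60
women_rate = selection_rate(40, 100)  # 0.40

ratio = adverse_impact_ratio(women_rate, men_rate)
flagged = ratio < 0.8  # four-fifths threshold

print(f"impact ratio = {ratio:.2f}, flagged = {flagged}")
```

In this hypothetical, the ratio is roughly 0.67, below the 0.8 threshold, so the tool’s output would warrant further validation by the employer.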

Illinois is currently the only state with a law that covers the new area of HR technology. The Artificial Intelligence Video Interview Act, effective from January 2020, requires employers to (i) notify applicants before the interview that AI may be used to analyze their video interview and consider their fitness for the position and (ii) obtain the applicants’ consent to use the AI system to review their interviews. The statute provides that an employer may not share applicant videos and gives applicants the right to request that their video be deleted.

Meanwhile, the New York City Council passed a bill in November 2021 that prohibits the use of “automated employment decision tools” to screen a candidate or employee for an employment decision, unless an annual “bias audit” is conducted on the processes and tools. Fines of up to $1,500 per violation may be imposed on employers and vendors for undisclosed or biased AI use. The new law will take effect on 1 January 2023.

Singapore laws

Although it currently has no laws targeting AI specifically, Singapore is one of the early movers in establishing guidelines to ensure that ethics are applied to the design, development and use of AI, so that outcomes are explainable and not subject to unintended bias. In January 2019, the Personal Data Protection Commission (“PDPC”) released the first edition of the Model AI Governance Framework (“Model Framework”), which provides detailed and readily implementable guidance to private sector organisations on addressing key ethical and governance issues when deploying AI solutions. In January 2020, the PDPC released the second edition of the Model Framework.

The two guiding principles of the framework are that (i) decisions made by AI should be “explainable, transparent and fair”, and (ii) AI systems should be human-centric, in other words the design and deployment of AI should protect people’s interests, including their safety and wellbeing. To this end, the Model Framework provides guidance in four areas: (a) internal governance structures and measures, (b) determining the level of human involvement in AI-augmented decision-making, (c) operations management issues to be considered when developing, selecting and maintaining AI models, and (d) stakeholder interaction and communication, i.e. how an organisation communicates with its stakeholders and manages those relationships.

Although not legally binding, the Model Framework translates ethical principles into pragmatic measures that businesses can adopt voluntarily, which would help them deploy AI in a responsible and ethical manner.

Hong Kong laws

Hong Kong currently has no AI-specific laws or regulations. The potential workplace discrimination scenarios caused by AI discussed in the earlier section would fall within the purview of existing anti-discrimination laws. There are two kinds of discrimination in Hong Kong: (i) direct discrimination, which occurs when a person is treated less favourably than another person because of a protected characteristic, including gender, pregnancy, breastfeeding, marital status, disability, family status or race, and (ii) indirect discrimination, which arises when a condition or requirement that is not justifiable is applied to everyone but in practice adversely affects persons who possess one of the aforementioned protected characteristics.

The four major anti-discrimination ordinances in Hong Kong are the Sex Discrimination Ordinance (Cap. 480), the Disability Discrimination Ordinance (Cap. 487), the Family Status Discrimination Ordinance (Cap. 527) and the Race Discrimination Ordinance (Cap. 602). It remains to be seen how the Ordinances would be applied to a discrimination claim arising from the use of AI in the employment process when such a case reaches the courts.

Key takeaways

In view of the legal pitfalls explored above, employers who intend to rely on algorithms, big data or AI in their hiring practices should ensure that their AI systems account for reasonable accommodations and do not produce biased outcomes relating to gender, disability, pregnancy, religious observance and other protected characteristics. The Human Resources team should conduct thorough due diligence to understand how an AI tool works, and consult legal professionals when in doubt about any potential conflict between the AI tools and local anti-discrimination or privacy laws.