With the age of artificial intelligence (AI) unfolding, products aimed at automating the recruiting and hiring process are hitting the market with increasing frequency.
Companies have been utilizing AI for tasks such as screening resumes, and even for interviewing candidates and assessing whether they will be successful employees. These automated tools range from algorithms that “weed through” resumes to personality assessments and biometric analyses that employ AI to analyze a candidate’s facial expressions, body language, voice and inflection in video interviews.
The increased reliance on AI in employment-related decisions has been met with concerns about the potential legal and ethical risks, such as implicit bias and disparate impact, stemming from the use of AI. In response, various state and city governments are scrambling to regulate this new use of technology.
Last week, on February 27, 2020, New York City introduced a bill (Intro No. 1894-2020) to regulate the use of AI in hiring, compensation and other human resources-related decisions. If enacted, the bill would prohibit the sale of AI technology unless it had been audited for bias and passed certain defined anti-bias testing in the year prior to sale. The bill also includes continued auditing obligations. New York City’s proposed law would require any company that uses AI tools for hiring and other employment purposes to disclose to candidates, within 30 days of use, that AI technology was used to assess their candidacy for employment, as well as to disclose to the candidate the specific job qualifications or characteristics for which the company used AI to screen.
The bill proposes a civil penalty of $500 for the first individual violation, with up to a $1,500 penalty for each subsequent violation. The New York City Commission on Human Rights, or any other agency designated by the mayor, would be responsible for enforcement. If signed into law, the bill would take effect in January 2022.
New York City is not the first to attempt to regulate the use of AI by employers. Previously, on January 1, 2020, the Illinois Artificial Intelligence Video Interview Act (which Hunton previously reported about here) went into effect, regulating employer use of AI to analyze video interviews. New Jersey and Washington have introduced similar legislation aimed at auditing AI for potential bias, inaccuracy and disparate impact. Additionally, cities and states across the country are creating task forces to understand the impact of AI prior to proposing new legislation.
While proponents of AI argue that stepping away from subjective assessments will eliminate implicit bias in employment decision-making, AI critics argue that algorithms and other AI tools are embedded with bias that employers may not understand, creating a “black box” that leaves employers unable to explain the criteria that have been used to make a given hiring decision.
As the use of AI in employment decisions continues to grow – and likely prompts additional criticism – companies should carefully consider what they can do to validate the effectiveness of the decisions being made by AI and, if challenged, whether they can “unpack” the decisions reached using AI in order to defend their legality.