The Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) have each issued substantively similar technical assistance documents explaining how employers’ use of algorithmic decision-making software, including artificial intelligence (AI) tools, to support employment functions can potentially cause unlawful discrimination against people with disabilities in violation of the Americans with Disabilities Act (ADA). In response to this guidance, employers should review their use of algorithmic decision-making software, ensure they understand the tools being used, and offer reasonable accommodations to applicants as necessary to comply with the ADA.
Employers are increasingly using algorithmic decision-making software to support employment functions, such as making hiring decisions, monitoring performance, offering pay increases or promotions, and establishing terms and conditions of employment. In response to concerns that automated employment tools could become a “high-tech pathway to discrimination,” the EEOC launched its Artificial Intelligence and Algorithmic Fairness Initiative in 2021 to “ensure that the use of software, including artificial intelligence (AI), machine learning, and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces.”
As part of this initiative, the EEOC recently released a technical assistance document and a summary document explaining how using AI and machine learning in the employment context may violate existing requirements under Title I of the ADA, providing tips to employers on how to use these tools in compliance with ADA requirements, and informing applicants and employees of their rights. The DOJ, which enforces disability discrimination laws with respect to state and local government employers, concurrently issued similar guidance.
Below we provide an overview of the ADA’s mandates and the EEOC guidance for employers that are subject to the ADA.
ADA mandates in brief
The ADA prohibits employers with 15 or more employees from discriminating against employees and applicants on the basis of disability, defined as any physical or mental impairment that substantially limits one or more major life activities. Among other things, the ADA requires employers to provide reasonable accommodations to employees and applicants with disabilities to allow them to apply for or do the job, so long as the accommodation does not impose an undue hardship on the employer.
The EEOC’s technical assistance document contains 15 questions and answers relating to the use of algorithmic decision-making software in the employment context. The EEOC identifies three primary situations where an employer’s use of algorithmic decision-making software may run afoul of the ADA. Specifically:
- The employer fails to provide a reasonable accommodation that is necessary for an individual with a disability to be rated fairly and accurately by the algorithm.
- The tool intentionally or unintentionally prevents an individual with a disability from meeting a selection criterion, and the individual therefore loses a job opportunity despite being able to do the job with a reasonable accommodation.
- The tool operates in a way that violates the ADA’s restrictions on disability-related inquiries and medical examinations.
The EEOC guidance provides a number of recommendations to help employers avoid ADA missteps when using algorithmic decision-making software.
Understand the tools.
The EEOC makes clear that in many cases employers will be responsible for the algorithmic decision-making tools they use, even if the tools are designed and administered by a third party. Employers should therefore consult with their vendors to confirm that a tool complies with the ADA and was developed with individuals with disabilities in mind. For example, an employer should ask whether the software’s interface is accessible to as many individuals with disabilities as possible, whether materials can be presented in alternative formats, whether there are any disabilities the tool cannot accommodate, and whether the vendor has assessed whether the software’s algorithm disadvantages individuals with disabilities.
Inform employees and applicants that reasonable accommodations are available, and grant them when requested.
The EEOC encourages employers to tell applicants and employees what an evaluation process entails and to ask whether they will need a reasonable accommodation to complete it. By making individuals aware of this option, employers are less likely to fall into the first situation described above.
Employees and applicants do not have to use any magic words to trigger an employer’s obligation to accommodate. For instance, if an individual informs an employer that completing a certain step of the process will be difficult due to a medical condition, the employer should treat this as an accommodation request. Employers should promptly respond to these requests, and may request supporting medical documentation if it is not obvious that the individual has an ADA disability.
When an accommodation is requested and the documentation demonstrates that the algorithmic decision-making tool may not accurately measure the requester’s ability, or may make it more difficult for the individual to complete the assessment, it is the employer’s responsibility to provide an alternative testing format for that applicant, unless doing so would impose an undue hardship on the employer. An “undue hardship” exists when, given the resources and circumstances of the particular employer, providing the accommodation would involve significant difficulty or expense.
Be selective in the use of algorithmic decision-making software.
To help avoid ADA discrimination, the EEOC suggests that employers develop and use algorithmic decision-making tools only to measure abilities or qualifications that are truly necessary for the job, even with a reasonable accommodation in place. Further, the EEOC encourages employers to avoid algorithmic decision-making tools that do not measure these qualifications directly but instead draw inferences from other characteristics, as this approach carries a higher risk of screening out individuals with disabilities who could otherwise perform the necessary job duties with a reasonable accommodation. The EEOC offers the example of voice analytics software used to evaluate applicants for certain traits, which could screen out individuals with speech impediments even though they possess the sought-after traits.
Review algorithmic decision-making tools to confirm that they do not include “disability-related inquiries” or “medical examinations.”
Any question that is likely to elicit information about a disability, or that directly asks whether an employee or applicant has a disability, qualifies as a “disability-related inquiry.” Any procedure or test that seeks information about an individual’s physical or mental impairments or health qualifies as a “medical examination.” Using an automated decision-making tool to pose either type of inquiry before a conditional offer of employment is made would violate the ADA. By excluding these lines of inquiry from assessments, employers will reduce the risk of violating the ADA, particularly when AI is used for hiring purposes.
Algorithmic decision-making and discrimination
Algorithmic decision-making tools, including tools that rely on artificial intelligence, can implicate not just the ADA but other employment discrimination statutes as well, including Title VII of the Civil Rights Act and the Age Discrimination in Employment Act, which together prohibit discrimination against individuals because of race, color, religion, sex (including sexual orientation), national origin, and age, among other characteristics. In its guidance document, the EEOC notes that the steps taken to avoid disability bias are “typically distinct” from those that might be taken to avoid other forms of discrimination. Neither the EEOC nor the DOJ has (yet) issued guidance on non-disability discrimination, but employers should be mindful of the potential for AI to have a disparate impact on individuals with certain protected characteristics.
* * *
Beginning January 1, 2023, New York City will require employers who use algorithmic decision-making tools in the context of employment and have a presence in the city to conduct bias audits of their tools. You can read our client alert on this requirement for more information.