The EEOC has identified the following common instances where an employer's use of AI could violate the Americans with Disabilities Act (ADA):
The employer does not provide a reasonable accommodation necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. The employer should ensure the AI tool affirmatively advises applicants that reasonable accommodations may be requested and provides clear instructions for requesting one. Staff must be trained to recognize these requests and respond as quickly as possible. Examples of accommodations include:
Specialized equipment,
Alternative tests or testing formats,
Permission to test in a quiet setting or take a longer amount of time to test, or
Materials provided in alternative formats to ensure accessibility.
The employer relies on an algorithmic decision-making tool that intentionally or unintentionally "screens out" an individual with a disability who could otherwise do the job with a reasonable accommodation. A "screen out" occurs when a disability prevents the job applicant or employee from meeting a selection criterion, or lowers their performance on it, causing the applicant or employee to lose the job opportunity. For example:
A chatbot screens out a candidate who has gaps in employment due to a disability or medical treatment, or
A chatbot screens out an applicant based on speech patterns or impediments, facial expressions, or lack of eye contact.
The employer's use of AI violates the ADA's restrictions on medical-related inquiries if the AI tool asks applicants or employees questions likely to elicit information about a disability or a physical or mental impairment. This includes making "disability-related inquiries" or seeking information that could qualify as a "medical examination" before the candidate has received a conditional offer of employment.
As such, employers should take the following steps to make sure their AI tool does not violate the ADA:
The AI tool should measure only the abilities or qualifications that are truly necessary for the job, and it should measure them directly rather than inferring them from characteristics that are merely correlated with those abilities or qualifications.
Employers should ask the software vendor who developed their AI tool:
Was the tool developed with individuals with disabilities in mind? If so, which groups did the vendor assess when testing the tool?
Did the vendor attempt to determine whether the tool disadvantages individuals with disabilities?
Can the vendor confirm the tool does not ask questions that might elicit information about an individual's physical or mental impairments?
An employer should test the AI tool for discriminatory effects before putting it to use, as both the EEOC and the DOJ recommend; an illustrative sketch of one such check follows this list.
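As one illustration of what such pre-deployment testing might look like, the sketch below applies the four-fifths impact-ratio heuristic from the EEOC's Uniform Guidelines on Employee Selection Procedures to hypothetical screening outcomes. This is only a starting point, not a prescribed audit method: the EEOC's ADA guidance does not mandate a specific statistical test, and the group labels, function names, and data below are assumptions for illustration only.

```python
# Illustrative only: a simple four-fifths-rule audit of selection rates.
# The four-fifths rule comes from the EEOC's Uniform Guidelines on Employee
# Selection Procedures; the ADA guidance itself does not prescribe a test,
# so treat this as a hypothetical screening check, not a legal safe harbor.

from collections import Counter

def selection_rates(outcomes):
    """Compute each group's selection rate from (group, selected) pairs."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    return {group: selected[group] / applied[group] for group in applied}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` (4/5)
    of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate, rate / best < threshold) for group, rate in rates.items()}

# Hypothetical outcomes from a screening tool: (group label, was selected).
outcomes = [
    ("requested_accommodation", True), ("requested_accommodation", False),
    ("requested_accommodation", False), ("requested_accommodation", False),
    ("no_accommodation", True), ("no_accommodation", True),
    ("no_accommodation", True), ("no_accommodation", False),
]

for group, (rate, flagged) in four_fifths_flags(outcomes).items():
    print(f"{group}: selection rate {rate:.0%} -> {'REVIEW' if flagged else 'ok'}")
```

On the hypothetical data above, applicants who requested an accommodation are selected at 25% versus 75% for other applicants, an impact ratio well below four-fifths, so the tool would be flagged for closer review. Any real-world audit should be designed with legal counsel and, where appropriate, the vendor.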
Any EEOC investigator examining an allegedly discriminatory hiring practice involving AI will expect to see proof that these steps were taken. Employers who use a third-party vendor or AI designed by another company will not be shielded from liability for the discrimination, as is generally the case with outsourced employment compliance.
It is important for employers to review their AI hiring tools to ensure they comply with the latest DOJ and EEOC guidance. While the DOJ has issued its own technical guidance, it provides less detail than the EEOC's; an employer that complies with the EEOC guidance will most likely satisfy the DOJ guidance as well. Employers should consult their legal counsel whenever in doubt about their compliance.
If you have questions about this topic or other employment law matters, please contact Chris or a member of the HSB Employment Law practice team.