The EEOC has just posted additional guidance on the use of AI in hiring practices. The guidance can be found here and pretty much follows what I recorded on the video and wrote below. Here is a summary of the guidance in bullet points:
- AI is not, by itself, going to solve discrimination issues
- You still have to monitor your selections
- Review choices against parameters you set so they don't have a disparate impact
- Your organization can't hide behind third-party software
- The buck stops with the employer
- As suspected, there is a safe harbor provision
- The EEOC wants you to review your selection criteria and correct adverse impacts
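The monitoring points above come down to checking your selection outcomes for disparate impact. One longstanding screen is the "four-fifths" rule of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate is less than 80% of the highest group's rate, the result warrants review. Here is a minimal sketch of that arithmetic; the group names and counts are hypothetical, and the ratio is a screening heuristic, not a legal conclusion:

```python
# Illustrative check using the EEOC's "four-fifths" rule of thumb
# for disparate impact (Uniform Guidelines on Employee Selection
# Procedures). Group names and counts are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(rates):
    """Compare each group's selection rate to the highest rate.
    A ratio below 0.8 flags potential adverse impact for review."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical outcomes from an AI-screened applicant pool
rates = {
    "group_a": selection_rate(48, 80),  # 0.60
    "group_b": selection_rate(12, 30),  # 0.40
}

for group, ratio in four_fifths_check(rates).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```

Here group_b's rate (0.40) is two-thirds of group_a's (0.60), which falls below the 0.8 threshold, so the selection criteria behind that outcome would need to be reviewed and corrected, whether the screening was done by a person or by software.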
Artificial intelligence is becoming ubiquitous, and it has the potential to revolutionize many aspects of society, including human resources. However, there are also risks associated with the use of AI, and the Equal Employment Opportunity Commission (EEOC) is paying close attention to this issue.
Testimony at a recent EEOC hearing highlighted many potential benefits of using AI in the employment process, such as streamlining hiring and saving time and money. However, there are also many risks, and the EEOC is interested in all aspects of the employment relationship.
One of the biggest concerns about using AI in the hiring process is the issue of "garbage in, garbage out": if the data used to train the AI is biased, the results will be biased as well. Additionally, AI systems that continue to learn can change how they make decisions over time, sometimes with unpredictable results. This means that even if the AI is unbiased when it is first implemented, it may become biased over time.
The EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative in 2021 to study the use of AI in the employment process. In 2022, the agency released its first technical assistance document, which focuses on the Americans with Disabilities Act. While this document is not a regulation, it provides guidance for employers who are considering using AI in the hiring process.
The EEOC is most concerned with the four stages of the hiring process that are most easily affected by AI: announcing the job, reviewing applications, screening candidates, and final selection. Large companies may create their own AI software, but most businesses will buy software from third-party vendors. This means they may not be able to control the biases in the data used to train the AI.
The use of AI in the employment process has potential benefits, but there are many risks that still need to be worked out. As we await further guidance from the EEOC, employers should be aware of the risks associated with AI and take steps to ensure that their systems comply with employment discrimination laws.