On May 18, 2023, the EEOC released a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” The document offers guidance on the algorithmic decision-making tools that employers are increasingly using for recruitment, promotion, and termination. It also distinguishes between different types of technology used in selection procedures, defining three “central terms” relating to automated systems and artificial intelligence:
- Software: information technology programs or procedures that instruct a computer on how to perform a given task. Examples include automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video-interviewing software, employee-monitoring software, and analytics software.
- Algorithm: a set of instructions utilized by a software program that can be followed by a computer to accomplish some end.
- Artificial Intelligence: a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. The guidance is concerned with situations in which the computer itself analyzes data to determine which criteria to use when making decisions. Examples of AI include machine learning, computer vision, natural language processing, and autonomous systems.
The guidance applies to the following types of systems that employers are using with growing frequency:
- resume scanners that prioritize applications using certain keywords;
- employee monitoring software that rates employees on the basis of their keystrokes or other factors;
- “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
- video interviewing software that evaluates candidates based on their facial expressions and speech patterns;
- testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.
The EEOC explicitly states that if an algorithmic decision-making tool has an adverse impact on individuals of a particular race, color, religion, sex, or national origin, then the use of the tool will violate Title VII unless the employer can demonstrate that the tool is job-related and consistent with business necessity.
The guidance also provides that an employer can be liable under Title VII for its use of an algorithmic decision-making tool even if the tool is designed or administered by a third party, such as an outside software vendor. Employers therefore should ask any third-party software vendor whether its tool causes a substantially lower selection rate for individuals protected by Title VII. Even if the vendor assures the employer that its tool does not cause a lower selection rate, the employer should still monitor the outcomes of its selection procedures in case that assurance proves incorrect.
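The EEOC document discusses the long-standing “four-fifths rule” from the Uniform Guidelines on Employee Selection Procedures as a general rule of thumb for judging whether one group’s selection rate is “substantially” lower than another’s (while cautioning that the rule is not dispositive). As an illustration only, with hypothetical function names and applicant figures, the monitoring an employer might perform can be sketched in Python:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    if applicants <= 0:
        raise ValueError("applicants must be positive")
    return selected / applicants

def four_fifths_check(group_rate: float, comparison_rate: float) -> tuple[float, bool]:
    """Compare a group's selection rate to the most-selected group's rate.

    Returns the impact ratio and whether it meets the four-fifths
    (80%) rule of thumb. A ratio below 0.8 suggests possible adverse
    impact warranting closer review; it is not itself a legal finding.
    """
    ratio = group_rate / comparison_rate
    return ratio, ratio >= 0.8

# Hypothetical outcomes from an automated screening tool:
# 48 of 80 applicants in Group A advanced, versus 12 of 40 in Group B.
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30
ratio, passes = four_fifths_check(rate_b, rate_a)
print(f"impact ratio = {ratio:.2f}, meets four-fifths rule of thumb: {passes}")
# 0.30 / 0.60 = 0.50, below 0.80, so this result would be flagged for review
```

Running such a check on actual selection outcomes, rather than relying solely on a vendor’s assurances, is one way an employer could carry out the ongoing monitoring the guidance contemplates.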
As artificial intelligence advances and its use in employment practices expands, employers will need to take extra care to ensure Title VII compliance as fewer and fewer employment decisions are made by human beings.