Using Artificial Intelligence to Reimagine Enforcement of Workplace Discrimination Laws

May 10, 2021 by Liam Barrett

Artificial Intelligence (AI) remains a hotly debated topic in the third decade of the Twenty-First Century. From its implications for the future of disinformation[1] to its expanding use in warfare,[2] artificial intelligence and machine learning present a host of moral, ethical, and legal dilemmas.[3] The implications of artificial intelligence also loom large in the field of employment law.

In recent years, high-profile instances of bias in AI-driven hiring tools have illustrated concerns over their use in the hiring process.[4] Amazon, for example, ended its use of an algorithmic resume-reviewing program that preferred male over female applicants after it discovered this bias.[5] HireVue, a recruiting company used by many large U.S. corporations,[6] came under fire after various groups complained that its facial recognition and analysis software could “perpetuate discriminatory hiring practices.”[7] Concern over these uses of AI in hiring has gone so far as to prompt U.S. Equal Employment Opportunity Commission (EEOC) investigations and proposed legislation to combat potential bias.[8] Radical shifts in the labor market caused by the COVID-19 pandemic, including an increase in individuals working from home[9] and an accelerating shift to online recruiting and hiring,[10] set the stage for increased use of AI in recruiting, and for discrimination caused by that increase, if these shifts remain permanent.[11]

While the dangers of AI remain hot topics, AI also represents an opportunity to combat employment discrimination through proactive enforcement. In a new working paper, economists analyze the ability of machine learning to “identify phrases in job ads that are linguistically related to standard ageist stereotypes from the industrial psychology literature.”[12] Using phrases corresponding to ageist stereotypes,[13] the authors find “that machine learning methods are sensitive enough to detect the presence of stereotyped language, even when only one sentence in the job ad is highly related to the ageist stereotype.”[14]
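To make the general idea concrete, the sketch below flags job-ad sentences that resemble entries in a list of stereotype-associated phrases using simple bag-of-words cosine similarity. This is a hypothetical illustration only, not the working paper's actual method (which relies on trained machine-learning models); the phrase list and similarity threshold here are invented for demonstration.

```python
# Hypothetical sketch: flag job-ad sentences similar to a list of
# stereotype-associated phrases. The phrase list and threshold are
# illustrative, not drawn from the study discussed above.
import math
import re
from collections import Counter

STEREOTYPE_PHRASES = [  # illustrative examples only
    "digital native",
    "recent graduate",
    "high energy fast paced",
]

def tokenize(text):
    """Lowercase and split text into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two token lists (bag-of-words)."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_sentences(ad_text, threshold=0.5):
    """Return (sentence, score) pairs whose best similarity to any
    stereotype phrase meets the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", ad_text)
    flagged = []
    for s in sentences:
        score = max(cosine(tokenize(s), tokenize(p))
                    for p in STEREOTYPE_PHRASES)
        if score >= threshold:
            flagged.append((s, round(score, 2)))
    return flagged
```

A regulator-side tool built on this pattern could scan posted job ads at scale and surface only the individual sentences that resemble known stereotyped language, which mirrors the paper's finding that detection works even when a single sentence carries the stereotype.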

This is not without precedent. In 2019, researchers announced an AI program capable of discerning workplace discrimination based on race and sex.[15] These technologies could enable novel methods for the enforcement of antidiscrimination laws and[16] proactive policing of hiring practices.

Currently, the vast majority of EEOC enforcement actions are initiated by individual complaints. In Fiscal Year 2019, the EEOC received a total of 77,675 charges from individuals alleging discrimination against them.[17] In contrast, the EEOC initiated only 66 Commissioner Charges and Directed Investigations, separate investigatory tools that originate from the Commission without an individual complaint, in the same year.[18]

Despite its prevalence, filing an individual charge can be a long and arduous process. Individuals must first assess whether they are eligible to file a charge.[19] They also face procedural hurdles and a lengthy adjudication process, with the average EEOC investigation taking approximately 10 months.[20] For a Title VII claim, alleging employment discrimination based on race, color, religion, sex, or national origin under the Civil Rights Act of 1964,[21] complainants must wait at least 180 days after filing before obtaining the right to file a complaint in federal court.[22] Additionally, in order to take any of these steps, potential complainants must be aware that such processes even exist.

This system is not ideal. Procedural intricacies and timelines create obvious obstacles for individuals looking to report discrimination, especially among vulnerable communities. The length and difficulty of the process, a lack of procedural knowledge, an overwhelmed EEOC investigatory staff, and an (at times well-founded) perception that the EEOC charge system is unfair and inefficient dissuade the most vulnerable workers from reporting and worsen outcomes for complainants.[23] Finally, except in the most obvious instances of overt discrimination, it may be difficult for those rejected from positions to know that illicit factors played a role in the hiring decision.

Utilizing new technologies, including AI, to shift the enforcement model toward proactive enforcement could alleviate the hurdles that prevent or discourage vulnerable individuals from reporting discrimination. Such advances could also minimize the number of people harmed by discriminatory hiring practices by rooting out illicit practices as soon as they can be identified, rather than waiting for an affected individual to learn of the true nature of the practice.

[1] See, e.g., Daniel Victor, Your Loved Ones, and Eerie Tom Cruise Videos, Reanimate Unease With Deepfakes, The New York Times (Mar. 10, 2021).

[2] Zachary Fryer-Briggs, Can Computer Algorithms Learn to Fight Wars Ethically?, Washington Post Magazine (Feb. 17, 2021).

[3] See also Lawrence Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992).

[4] Courtney Hinkle, Employment Discrimination in the Digital Age, 21 Geo. J. on Gender & The L. (2019) (providing a comprehensive summary of the concerns regarding the use of AI in hiring processes, replete with additional examples).

[5] Jeffrey Dastin, Amazon Scraps Secret Recruiting Tool That Showed Bias Against Women, Reuters (Oct. 10, 2018).

[6] Drew Harwell, A Face-Scanning Algorithm Increasingly Decides Whether You Deserve the Job, The Washington Post (Nov. 6, 2019).

[7] Drew Harwell, Rights Group Files Federal Complaint Against AI-Hiring Firm HireVue, Citing “Unfair and Deceptive Practices,” The Washington Post (Nov. 6, 2019).

[8] Chris Opfer, Ben Penn, and Jacklyn Diaz, Punching In: Workplace Bias Police Look at Hiring Algorithms, Bloomberg Law (Oct. 29, 2019).

[9] Gary Friedman and Thomas McCarthy, Employment Law Red Flags in the Use of Artificial Intelligence in Hiring, American Bar Association (Oct. 1, 2020).

[10] Id.

[11] Id.

[12] Ian Burn, Daniel Firoozi, Daniel Ladd, and David Neumark, Machine Learning and Perceived Age Stereotypes in Job Ads: Evidence From an Experiment 22 (National Bureau of Economic Research Working Paper 28328, 2021).

[13] Id. at 6-7, 22.

[14] Id.

[15] Using Artificial Intelligence to Detect Discrimination, Penn State News (July 11, 2019).

[16] Patrick Kline and Christopher Walters, Audits as Evidence: Experiments, Ensembles, and Enforcement, IRLE Working Paper No. 108-19, Institute for Research on Labor and Employment, University of California, Berkeley, 1, 25, 27 (2019) (finding, in another recent experiment, that sending out as few as ten applications coded with racial markers can “detect a non-trivial share of discriminatory employers” (seven to ten percent of discriminating employers) using data on interview callbacks, with a 0.2% rate of false accusations).

[17] All Charge Data, U.S. Equal Employment Opportunity Commission (EEOC) (last accessed Mar. 15, 2021).

[18] Commissioner Charges and Directed Investigations, U.S. Equal Employment Opportunity Commission (last accessed Mar. 15, 2021).

[19] See, e.g., Thompson v. North American Stainless, LP, 562 U.S. 170, 174-79 (2011) (holding that a third party terminated in retaliation for his fiancée’s protected Title VII activity at their shared place of employment could sue under Title VII, as he fell within the statute’s “zone of interests”).

[20] What You Can Expect After You File A Charge, U.S. Equal Employment Opportunity Commission (last accessed Mar. 15, 2021).

[21] 42 U.S.C. § 2000e-2.

[22] Id.

[23] Maryam Jameel and Joe Yerardi, Workplace Discrimination Is Illegal. But Our Data Shows It’s Still a Huge Problem, Vox (Feb. 28, 2019); see also Paul Igusaki, Is Complaining Worth the Risk? Retaliation for Discrimination Complaints, IMDiversity (2013).