By Anthony May
Although the onset of artificial intelligence (AI) in the workplace may offer benefits, for many current and prospective workers it can have negative, even discriminatory, impacts. In this article, I outline how prospective and current employees can identify when their rights are being adversely impacted by AI, as well as some of the legal protections afforded to workers when an employer uses AI in a discriminatory way.
Three federal laws primarily address discrimination in employment: Title VII of the Civil Rights Act of 1964 (Title VII), the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). Generally, these laws prohibit employers from using—or contracting with companies to use—technology that screens out or denies applicants or employees the benefits of employment, including consideration for employment, due to race, color, religion, sex (including sexual orientation), age, national origin, veteran status, disability status, or genetic information. Disparate impact claims can be brought against a company whose reliance on AI, whether intentional or not, results in certain applicants, such as women, being disproportionately excluded from consideration. But how is an employee or candidate to know when they are being discriminated against?
First, AI can discriminate before you even know about a position. In December 2022, Real Women in Trucking, a non-profit formed by female commercial-motor-vehicle drivers, filed a charge of discrimination with the Equal Employment Opportunity Commission (EEOC) against Meta Platforms, Inc.—formerly Facebook. The charge asserts that Meta violated both Title VII and the ADEA by targeting job ads based on the gender and age of users, disproportionately steering ads for certain truck-driving positions toward men while failing to show those same advertisements to female applicants. According to Real Women in Trucking’s counsel, while women make up 54%, and people over 55 make up 28%, of people on Facebook searching for jobs, these demographics routinely see only a fraction of open positions due to algorithmic bias: “Facebook [Meta] is one of the go-to resources for these life opportunities [and] the consequences of this kind of discrimination are far reaching.”
Second, AI testing can disparately screen out applicants, particularly those with disabilities. For example, if an employer requires an applicant to take a timed math test on a computer, an individual with a disability that impacts their dexterity might perform poorly and be removed from consideration. Similarly, if an employer tests a blind applicant on visual memory, the applicant’s memory may be sufficient for the job, but she may nevertheless be excluded because her disability prevents her from meaningfully participating in the test. Moreover, if the AI screening tool is inaccessible to a screen reader, the applicant may not be able to complete the screening task at all and could lose out on an opportunity by reason of their disability.
Third, AI can be used to conduct video interviews of prospective applicants, and it may misinterpret disability-related mannerisms, such as “micro-expressions,” eliminating an applicant for facial expressions that have nothing to do with the job. In those instances, employers, such as those in Maryland, may be required by law to provide you with notice that you are being recorded and assessed. These are only a few examples of how AI can deny qualified applicants fair and equal employment opportunities in the hiring process.
AI can similarly discriminate when it comes to current employees’ wages and promotional opportunities. In 2023, CBS News reported on a study conducted by UC Law San Francisco describing how gig workers, such as Uber drivers, are subjected to algorithmic wage discrimination: “Algorithmic wage discrimination allows firms to personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps [paying] as little as the system determines that they may be willing to accept[.]”
When it comes to promotions, AI tools can use predictive analysis to evaluate current employees, identify factors that establish patterns the company views as indicative of success in a given role, and identify candidates for promotions or raises based on those factors. But when the data the system uses to discern who is “successful” is biased—e.g., drawn from a group of formerly “successful” employees who are all middle-aged white males—the system should undergo bias audits to prevent the company from promoting only those favored by its biased inputs.
The above examples only scratch the surface of the various ways employers can use AI to discriminate, but they can be useful for spotting when discrimination is happening. Importantly, recognizing them can signal when certain actions can be taken to ameliorate the discrimination.
Here are some tips for ways you can exercise your rights to prevent or remediate discrimination when AI is involved:
- Know Your Rights: Familiarize yourself with the ways AI can discriminate and how to protect yourself. In addition to the federal laws, states are increasingly adopting their own laws to combat the discriminatory use of AI. Workers should familiarize themselves with the unique laws of their state and question employers’ use of AI that runs afoul of those requirements.
- Request a Reasonable Accommodation: If you are a person with a disability, an employer or prospective employer is required to provide you with a reasonable accommodation, such as an alternative testing method, unless it would pose an undue hardship. If you receive notice that AI is being used, contact HR to determine how it is being used and whether a reasonable accommodation is available. If you are not hired, contact the employer’s HR department to determine whether the decision was based on an assessment for which you were entitled to an accommodation and request to be reevaluated. If an employer asks you for additional information or proof of disability, you should familiarize yourself with your privacy rights under the ADA. If an employer denies your request, you should consider filing a charge of discrimination with the EEOC.
- Consult with an Attorney: If there’s any question that you were denied an employment opportunity due to AI bias, contact our office to explore options such as filing a charge of discrimination with the EEOC or pursuing litigation against the employer.
- Strength in Numbers: If you are part of an organized labor movement or trade organization (and even if you’re not!), talk with others about potential barriers to employment. You may recognize patterns that can lead to potential class action suits. For example, in 2023, the EEOC settled a claim against an online tutoring company following allegations that its use of AI “automatically reject[ed] female applicants aged 55 or older and male applicants aged 60 or older,” and rejected “more than 200 qualified applicants based in the United States because of their age.” Among other things, the settlement required the employer utilizing the AI to pay $365,000 in damages and adopt new policies and training to remediate the discrimination that occurred and prevent it from happening again in the future.
We know that AI in the workforce is here for the long run, but that does not mean that an employer can use it to discriminate. I have written and presented extensively on the intersection of AI and employment law, most recently presenting on this topic at the Society for Human Resource Management (SHRM) Talent Conference & Expo and at the 2024 National Employment Lawyers Association (NELA) Annual Convention. You can read my blog series, Algorithmically Excluded, here. If you have questions about this matter or feel like you’ve been subject to discrimination in the hiring process, please call us at 410-962-1030 for a consultation and learn more about my practice here.