A Wake-Up Call on AI-Based Decision-Making
Last week a U.S. District Court green-lit a collective action against Workday, Inc., accusing the company of using an artificial-intelligence résumé filter that allegedly discriminated against older applicants.
Workday operates a platform that allows businesses to post job openings and accept applications. In its marketing materials, Workday claimed that its technology "utilizes artificial intelligence to parse an employer’s job posting and an applicant’s application and/or resume; extract skills in the employer’s job posting, on the one hand, and skills from the application and/or resume on the other hand; and determine the extent to which the applicant’s skills match the role to which they applied."
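Workday's actual models are not public, so that marketing description is the only window into the pipeline. Purely as illustration, the matching step it describes reduces to scoring the overlap between two extracted skill sets. The toy sketch below is an assumption-laden stand-in, not Workday's implementation: the keyword vocabulary, function names, and substring-based "extraction" are all invented for illustration.

```python
# Illustrative only: a toy version of the "extract skills, then score the
# overlap" pipeline described in the marketing copy. A real screening system
# would use trained NLP models, not keyword matching against a fixed list.
SKILL_VOCAB = {"python", "sql", "project management", "budgeting", "recruiting"}

def extract_skills(text: str) -> set[str]:
    """Return the vocabulary skills mentioned in free text."""
    lowered = text.lower()
    return {skill for skill in SKILL_VOCAB if skill in lowered}

def match_score(job_posting: str, resume: str) -> float:
    """Fraction of the posting's required skills found on the résumé."""
    required = extract_skills(job_posting)
    offered = extract_skills(resume)
    return len(required & offered) / len(required) if required else 0.0

posting = "Seeking analyst with Python, SQL, and budgeting experience."
resume = "Ten years of budgeting and SQL reporting; some Python scripting."
print(f"match: {match_score(posting, resume):.0%}")  # match: 100%
```

The legal point survives the simplification: the final arithmetic can look perfectly neutral while bias enters upstream, through the skill vocabulary, the extraction model, or the data it was trained on.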
The plaintiffs, all over the age of 40, sued under the federal Age Discrimination in Employment Act ("ADEA"), alleging that they received automated rejection notices for more than 100 positions for which they met the stated qualifications. The decision marks one of the first times a court has allowed plaintiffs to challenge an AI-based screening tool at scale. Notably, the suit was brought not against the prospective employers but against the vendor of the HR platform those employers used.
Why Liability Is Growing
- Accountability of AI Providers – Courts are increasingly willing to hold the providers of AI tools liable alongside the employers who deploy them, based in part on the representations those providers make about their products' functionality.
- Delegated Discrimination – Traditional anti-bias statutes (Title VII, ADEA, ADA, etc.) apply even if the discrimination is carried out by code rather than people. Automating a biased process does not immunize an employer or vendor.
- Negligent Deployment – Beyond intentional discrimination claims, plaintiffs are pleading negligence: failing to validate an AI tool, ignoring disparate-impact testing, or relying on vendors without contractual assurances.
Practical Takeaways for Companies
- Audit Early, Audit Often – Conduct pre-deployment bias and validity assessments, then re-test periodically. Document everything. (A minimal disparate-impact check is sketched after this list.)
- Demand Vendor Transparency – Push for algorithmic explainability clauses, indemnification, and shared liability provisions in AI procurement contracts.
- Human-in-the-Loop Safeguards – Consider maintaining manual override and appeal mechanisms so applicants or customers can challenge adverse decisions.
- Update Policies & Training – Align HR, procurement, and compliance teams around AI governance frameworks such as NIST’s AI Risk Management Framework or ISO/IEC 42001.
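On the auditing point, one widely used screen is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate falls below 80% of the most-favored group's rate, adverse impact is generally inferred. Below is a minimal sketch of that check; the counts are invented for illustration, and in practice the numbers would come from logged screening outcomes grouped by the ADEA's 40-and-over line.

```python
# Minimal pre-deployment disparate-impact check using the EEOC's
# "four-fifths rule": a group's selection rate below 80% of the most
# selected group's rate is generally treated as evidence of adverse
# impact. All counts are invented for illustration.
screened = {            # applicants who went through the AI filter
    "under_40":    {"advanced": 180, "rejected": 120},
    "40_and_over": {"advanced": 60,  "rejected": 140},
}

rates = {
    group: counts["advanced"] / (counts["advanced"] + counts["rejected"])
    for group, counts in screened.items()
}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 is not by itself proof of unlawful discrimination, but it is precisely the kind of red flag that pre-deployment testing, and the documentation around it, is meant to surface before opposing counsel does.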
Looking Ahead
The court’s willingness to let the case proceed collectively signals that AI-related disputes will no longer be confined to isolated individual claims. As regulators and plaintiffs’ lawyers gain fluency in machine-learning concepts, companies that treat AI as a black-box shortcut will face mounting legal exposure. The safest path forward is to treat algorithmic decision-making with the same rigor as any other high-risk business process, and perhaps with even more transparency.