AI Hiring Under Scrutiny: The Eightfold Lawsuit and What It Means for Your Vendor Selection

The January 2026 class-action lawsuit against Eightfold AI alleges the company violated FCRA and state privacy laws by secretly scraping candidate data and scoring job seekers 0-5 without their knowledge. For HR leaders, this signals that "black box" AI hiring tools carry serious legal, reputational, and compliance risks, and that vendor selection now requires asking specific questions about transparency, data sources, and human oversight.

Paul Jones

February 11, 2026

Industry Insights

Two California job seekers just filed a lawsuit that every HR leader should read.

The class-action complaint against Eightfold AI, filed January 20, 2026, alleges that one of the most widely used AI hiring platforms has been building "hidden dossiers" on candidates, scraping data from social media, scoring applicants 0-5 on "likelihood of success," and selling those rankings to Fortune 500 employers. All without telling the candidates.

If the allegations hold, 2026 will be remembered as the year AI hiring came under the legal microscope.

This isn't an isolated incident. It's a warning shot for anyone using, or considering, AI tools in their hiring stack. The questions at the heart of this case apply to every vendor in the space: Where does the data come from? Does the system score or rank candidates? And do candidates know they're being evaluated by algorithms they can't see or dispute?

What Happened: The Eightfold Lawsuit Explained

Eightfold AI is a well-funded recruiting technology company whose AI-powered "Talent Intelligence Platform" is used by Microsoft, PayPal, and dozens of other major employers.

The lawsuit makes several serious allegations:

The Core Claims

1. Secret data scraping

The complaint alleges that Eightfold compiled extensive candidate profiles by scraping data from unverified third-party sources like social media profiles, location signals, device activity, cookies, and more, without candidates' knowledge or consent.

2. Hidden scoring system

According to the lawsuit, Eightfold's AI assigned candidates scores from 0 to 5, representing their "likelihood of success" for specific roles. These scores were allegedly shared with employers but not disclosed to candidates.

3. FCRA violations

The Fair Credit Reporting Act (FCRA) requires consumer reporting agencies to follow strict procedures when compiling reports used in employment decisions. The plaintiffs argue that Eightfold operated as a de facto consumer reporting agency without complying with any of these requirements.

4. State privacy law violations

The complaint cites violations of California's Investigative Consumer Reporting Agencies Act (ICRAA), which has similar transparency and consent requirements.

Why This Matters for the Industry

The Eightfold case isn't just about one company. It raises fundamental questions about how AI can, and cannot, be used in hiring.

The CFPB clarified in October 2024 that AI tools used to evaluate employees and candidates may trigger FCRA obligations if they meet the definition of a consumer reporting agency.

"I think I deserve to know what's being collected about me and shared with employers. And they're not giving me any feedback, so I can't address the issue." — Erin Kistler, plaintiff

If the lawsuit succeeds, it could establish a precedent that AI candidate-scoring platforms must comply with FCRA, meaning mandatory disclosures, accuracy requirements, and dispute rights for every candidate they evaluate.

Why This Matters to Your Hiring

FCRA violations aren't just a vendor problem. Under the law, employers who use non-compliant consumer reports in hiring decisions can face their own liability.

FCRA provides for statutory damages of $100 to $1,000 per willful violation, plus punitive damages at the court's discretion, and those figures multiply across a class. With thousands or millions of candidates processed through AI tools, exposure adds up fast.
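To get a feel for the scale, the statutory range above can be turned into a back-of-the-envelope exposure estimate. This is a minimal sketch: the class size is an illustrative assumption, not a figure from the complaint.

```python
# Back-of-the-envelope FCRA statutory damages exposure.
# The $100-$1,000 band is the FCRA statutory range for willful
# violations; the candidate count below is purely illustrative.
def fcra_exposure(candidates: int, low: int = 100, high: int = 1_000) -> tuple[int, int]:
    """Return (minimum, maximum) statutory damages assuming one violation per candidate."""
    return candidates * low, candidates * high

lo, hi = fcra_exposure(50_000)  # hypothetical class of 50,000 candidates
print(f"${lo:,} to ${hi:,}")    # $5,000,000 to $50,000,000
```

Even a modest class pushes potential exposure into eight figures before punitive damages enter the picture.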

The Compliance Burden Is Growing

NYC Local Law 144, the first regulation requiring mandatory bias audits for automated employment decision tools (AEDTs), has been in effect since July 2023. Illinois, Maryland, and Colorado have all enacted or proposed AI hiring regulations.

The trend is clear: transparency requirements are expanding.

Candidate Trust Is at Stake

Beyond legal risk, there's reputational cost. When candidates learn they were secretly scored by algorithms they couldn't see, especially if those scores led to rejections, trust erodes.

Kistler, one of the plaintiffs, has a computer science degree from Ohio State, nearly six years at Microsoft, and 19 years of product management experience. Her qualifications are strong. She suspects Eightfold's hidden scoring may have filtered her out of roles she was qualified for.

How to Evaluate AI Vendors: A Checklist

The Eightfold lawsuit provides a roadmap for evaluating any AI hiring tool. Ask these questions before you buy or renew:

Question 1: Does the tool score or rank candidates?

Some AI hiring tools generate candidate scores behind the scenes. Hiring managers see rankings without knowing what produced them.

What to look for:

  • Tools that provide summaries, not scores

  • Systems where humans make final decisions based on transparent information

  • No opaque rankings or hidden scoring systems

Question 2: Where does the tool get its data?

The Eightfold allegations center on data sourcing. If your AI vendor is scraping social media, purchasing third-party data, or aggregating information candidates didn't provide directly, you may have an FCRA problem.

What to look for:

  • Tools that only use data candidates provided directly

  • No external scraping or social media data collection

  • No "enrichment" from data brokers

Question 3: Who makes the final hiring decision?

The EEOC and legal experts consistently say that humans must retain meaningful control over employment decisions.

What to look for:

  • Clear human-in-the-loop architecture

  • AI can summarize and surface information, but hiring managers make decisions

  • Audit trails showing human involvement
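To make the audit-trail criterion concrete, here is a minimal sketch of what one decision record might capture. The field names and structure are illustrative assumptions, not any vendor's actual schema: the point is that each decision logs a human decision-maker and the AI materials they reviewed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative audit-trail record: every hiring decision names the
# human who made it, what AI-generated material they reviewed, and when.
@dataclass
class DecisionAuditEntry:
    candidate_id: str
    decision: str                    # e.g. "advance", "reject", "hold"
    decided_by: str                  # a human reviewer, never an automated actor
    ai_materials_reviewed: list[str] = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = DecisionAuditEntry(
    candidate_id="cand-0042",
    decision="advance",
    decided_by="jane.doe@example.com",
    ai_materials_reviewed=["interview-summary-v1"],
)
```

A trail like this lets you answer, months later, exactly who advanced a candidate and on what information, which is the evidence regulators and plaintiffs will ask for.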

Question 4: Are candidates aware they're being evaluated by AI?

Transparency to candidates isn't just ethical, it's increasingly required by law. NYC LL 144 mandates notice to candidates at least 10 business days before using AEDTs.

What to look for:

  • Built-in candidate notification workflows

  • Opt-out options for candidates who prefer human-only screening

  • Clear disclosure about what AI does and doesn't do

Question 5: Can candidates opt out or dispute findings?

FCRA gives consumers the right to dispute inaccurate information in reports used for employment decisions.

What to look for:

  • Mechanisms for candidates to request information about their evaluation

  • Dispute processes for inaccuracies

  • Alternative screening paths for candidates who opt out of AI

Question 6: Does the vendor have third-party bias audits?

NYC LL 144 requires independent bias audits for AEDTs. Even outside NYC, audits demonstrate that the vendor takes fairness seriously.

What to look for:

  • Published bias audit results

  • Third-party auditors with recognized credentials

  • Commitment to annual or more frequent audits

The Classet Approach: Safe by Design

At Classet, we built our AI interviewing platform on a simple belief: hiring technology should make the process fairer, not more opaque. We saw legal and ethical scrutiny coming, but that's not why we made these choices. We made them because we believe candidates deserve transparency and employers deserve tools that actually help them find great people. The design choices that protect Classet customers aren't retrofitted patches. They're architectural foundations built on principles that matter whether or not there's a law requiring them.

Transparent Voice Interviews

Classet's AI interviewer, Joy, conducts voice conversations with candidates. The only data Joy uses is what candidates provide in those conversations.

  • ✓ No social media scraping

  • ✓ No third-party data enrichment

  • ✓ No hidden background profiles

Every Candidate Gets a Fair Shot

Here's what makes Classet different: every single candidate who applies gets the same opportunity to complete the screening interview with Joy.

No hidden algorithm filters people out before they get a chance to speak. No secret scoring that rejects qualified candidates before a human ever sees them.

When candidates complete their conversation with Joy, they move into your review pipeline. You see high-quality candidates who've had the same fair opportunity to share their experience, availability, and interest in the role.

Human-Centered Decision Making

Joy doesn't rank candidates. Joy produces structured summaries of candidate conversations for hiring managers to review and make their own decisions.

Example summary:

"The candidate has 4 years of HVAC experience, including residential and commercial systems. They expressed interest in the lead technician track. They can start in two weeks and have reliable transportation. They mentioned preferring morning shifts, but are flexible. They asked about certification support during the interview."

No numeric score. No hidden ranking. Just a clear summary of what the candidate said.

Third-Party Bias Audit

Classet has completed a third-party bias audit examining whether Joy's interview process produces disparate outcomes across demographic groups. We publish results because transparency is the point.

Key Takeaways

  • The same legal frameworks (FCRA, state privacy laws) that govern credit reports may apply to AI hiring tools that score candidates using third-party data

  • "Safe AI hiring" means transparent, human-driven, and audited

  • Vendor selection matters more than ever—use the six-question checklist before buying or renewing

  • The best AI hiring tools don't hide decision-making behind algorithms—they surface better information so humans can make better decisions

  • Classet gives every candidate a fair opportunity to complete screening, moving quality candidates to your review pipeline without hidden filtering

FAQ

Is my company liable if our AI vendor violates FCRA?

Potentially yes. Under the FCRA, employers who use non-compliant consumer reports in hiring decisions can face liability, even if the vendor created the report.

Does NYC Local Law 144 apply to companies outside New York?

The law applies to employers making hiring decisions about roles performed in New York City, regardless of where the company is headquartered. Classet is fully compliant with NYC Local Law 144.

How do I know if my AI vendor has a bias audit?

Ask for published audit results. Under NYC LL 144, bias audit summaries must be publicly available on the employer's website before the AEDT is used. If a vendor is reluctant to provide audit documentation, that's a significant red flag. Classet publishes its bias audit results here: https://trust.warden-ai.com/classet/ai-phone-interviewer/nyc

Next Steps

Wondering whether your current AI hiring tools meet compliance standards? Request a vendor evaluation consultation, or book a 10-minute demo to see how Classet's transparent approach works.