Quick answer: Before buying any AI hiring tool, ask where it gets candidate data, whether it scores or ranks applicants, who makes the final hiring decision, and whether the vendor has published third-party bias audit results. If a vendor can't answer these questions directly, that tells you something.
Every AI hiring vendor publishes an evaluation checklist. And every one of those checklists is designed to make that vendor look good.
Here's the problem: the questions that actually protect your company from legal exposure, candidate backlash, and compliance failures are the ones vendors don't put on their own checklists. Two major lawsuits filed in the past year (the Eightfold AI class action alleging secret candidate scoring, and the Workday discrimination case covering 1.1 billion rejected applications) show exactly what happens when employers skip due diligence on their AI tools.
This isn't a feature comparison guide. It's a liability audit disguised as a vendor checklist.
The Legal Ground Has Shifted
The regulatory environment around AI hiring changed faster in 2025 than most HR teams expected.
The CFPB clarified in October 2024 that AI-generated candidate profiles and algorithmic scores qualify as consumer reports under the Fair Credit Reporting Act. That means FCRA obligations (mandatory disclosures, accuracy requirements, dispute rights) can apply to AI hiring tools, not just traditional background checks.
Illinois HB 3773 took effect January 1, 2026, making it unlawful for employers to use AI in ways that discriminate, regardless of intent. California's FEHA regulations now require meaningful human oversight of automated decision systems in employment. Colorado's AI Act (delayed to June 2026) will require impact assessments for high-risk AI systems, including hiring tools.
NYC Local Law 144 has been active since July 2023, but a December 2025 audit by the NY State Comptroller found enforcement was catching just a fraction of violations. The law is real; enforcement has simply been uneven.
The federal picture is murkier. The current administration pulled EEOC AI guidance from its website in early 2025. But existing statutes still apply, and the state-level trend is clearly toward more regulation, not less.
If you're buying an AI hiring tool in 2026, compliance isn't a bonus feature. It's the starting line.
Eight Questions Your Vendor Doesn't Want You to Ask
1. Where does the tool get its candidate data?
This is the question at the heart of the Eightfold lawsuit. The complaint alleges Eightfold scraped social media profiles, location signals, device activity, and cookies to build candidate profiles, all without telling the people being profiled.
If your vendor is pulling data from sources candidates didn't provide directly, you may have an FCRA problem. And FCRA liability doesn't just land on the vendor. It lands on you, the employer.
What a good answer sounds like: "We only use data candidates provide during the application or interview process. No third-party enrichment, no social media scraping, no data broker purchases."
Red flag: Any mention of "enriched profiles," "talent intelligence," or "passive candidate insights" built from sources candidates didn't opt into.
2. Does the tool score or rank candidates?
Some AI hiring tools assign numeric scores behind the scenes. Hiring managers see rankings without knowing what generated them or how to interpret them. The Eightfold lawsuit alleges candidates received 0-5 scores they were never told about and couldn't dispute.
Scores feel efficient. They're also opaque, hard to audit, and nearly impossible for candidates to contest.
What a good answer sounds like: "We produce structured summaries of candidate information. Hiring managers review those summaries and make their own decisions."
Red flag: Any system where candidates are automatically ranked, sorted, or filtered before a human reviews their information.
3. Who actually makes hiring decisions?
The EEOC has consistently said that humans must retain meaningful control over employment decisions, even when AI is involved. "Meaningful" is the operative word. A rubber stamp on an AI recommendation doesn't count.
What a good answer sounds like: "AI surfaces information and handles logistics. Humans make every decision about who advances, who gets an offer, and who doesn't."
Red flag: Workflows where candidates are rejected or advanced by the system without a human explicitly choosing to do so.
4. Are candidates told they're being evaluated by AI?
NYC Local Law 144 requires notice at least 10 business days before using automated employment decision tools. Illinois now requires detailed disclosure including the AI product name and what data it collects. Even where disclosure isn't legally required yet, the trajectory is obvious.
What a good answer sounds like: "Candidates are clearly informed that AI is part of the screening process before it begins, with opt-out options available."
Red flag: Any framing where AI evaluation happens silently. If the vendor describes their tool as "invisible" or "behind the scenes" to candidates, ask why.
5. Can candidates dispute their evaluation or opt out?
FCRA gives consumers the right to dispute inaccurate information in reports used for employment decisions. Even outside strict FCRA contexts, candidates increasingly expect the ability to ask "why was I rejected?" and get a real answer.
What a good answer sounds like: "Candidates can opt out of AI screening at any time and be routed to a human alternative. They can request information about their evaluation."
Red flag: No dispute mechanism at all, or a process that exists on paper but has never actually been used.
6. Show me your latest bias audit results.
Not "do you have bias audits?" The answer to that question is always yes. Ask to see the actual results. Published, with methodology, from a recognized third-party auditor.
NYC LL 144 requires annual independent bias audits for automated employment decision tools. But even outside New York, published audits tell you whether a vendor is serious about fairness or just checking a box.
What a good answer looks like: A link to published results from a named auditor, with demographic breakdowns and methodology disclosed. You can see Classet's results here and read about how Classet is audited here.
Red flag: "We conduct internal audits" or "our audit results are confidential." If you can't see the results, they don't help you.
7. What happens when regulations change?
AI hiring laws are evolving on different timelines across different states. New laws are going live, existing ones are being enforced more aggressively, and more states are drafting bills right now.
A vendor that built its product around a single compliance framework is going to struggle as requirements multiply. What you want to see is either a dedicated internal compliance team or, more commonly, a partnership with a third-party compliance organization like Warden AI that monitors regulatory changes across jurisdictions and keeps the product current.
What a good answer sounds like: "We partner with a third-party compliance organization that monitors regulatory changes across all jurisdictions where our customers hire. When requirements change, we update our product and processes accordingly."
Red flag: A vendor with no internal compliance team and no external compliance partner. If nobody is watching the regulatory landscape on their behalf, nobody is watching it on yours either.
8. What's different about AI interviewing vs. AI resume screening?
This is the question almost nobody is asking, and it matters.
The Eightfold lawsuit concerns AI that screens resumes and scores candidates using third-party data. AI interviewing tools, where candidates actually participate in a conversation, are a different category with different compliance profiles.
An AI tool that uses only what a candidate says during a structured interview has fundamentally less FCRA exposure than one that builds profiles from scraped data. But you won't hear that distinction from a vendor who does both.
What a good answer sounds like: A clear explanation of what data the tool uses, where that data comes from, and why the tool's architecture does or doesn't trigger specific regulatory obligations.
Red flag: Conflating resume screening, background checks, and interviews under one "AI hiring" umbrella without distinguishing the compliance implications of each.
How Classet Approaches These Questions
We built Classet's AI phone screening around answers to these exact questions, not because we anticipated lawsuits, but because we believe candidates deserve to know how they're being evaluated.
Joy, Classet's AI phone screener, conducts voice conversations with candidates. The only data Joy uses is what candidates share during those conversations. No social media scraping. No third-party data enrichment. No hidden profiles.
Every candidate who applies gets the same opportunity to complete a screening interview. Nobody gets filtered out by an algorithm before they have a chance to speak. Joy produces structured summaries for hiring managers to review, not scores, not rankings. Humans make every decision.
Classet undergoes continuous third-party bias auditing and publishes the results (we partner with Warden AI to stay current with applicable regulations). We do this because transparency isn't a compliance checkbox for us. It's the product.
What to Do Next
Before you buy or renew any AI hiring tool, run it through these eight questions. Write down the answers. If your vendor hedges, deflects, or points you to a marketing page instead of actual documentation, factor that into your decision.
The cost of choosing the wrong AI vendor isn't just wasted software spend. It's potential FCRA liability (statutory damages of $100 to $1,000 per willful violation, which multiply quickly in class actions, plus possible punitive damages), regulatory fines, candidate trust erosion, and the operational headache of ripping out a tool mid-contract.
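To see how quickly those statutory damages multiply, here's a back-of-the-envelope sketch. The per-violation range comes from FCRA; the applicant volume is a hypothetical assumption, not a figure from any case:

```python
# Back-of-the-envelope FCRA exposure estimate. The $100-$1,000
# statutory range per willful violation comes from FCRA; the
# applicant volume below is a hypothetical assumption.

applicants = 50_000              # hypothetical: candidates screened per year
stat_min, stat_max = 100, 1_000  # FCRA statutory damages per willful violation

low = applicants * stat_min
high = applicants * stat_max
print(f"Statutory exposure: ${low:,} to ${high:,}")
# Statutory exposure: $5,000,000 to $50,000,000
```

Even at a fraction of that applicant volume, the exposure dwarfs whatever the software costs.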
Want to see how a compliance-first AI hiring tool actually works? Book a 20-minute demo and ask us any of these questions yourself.
Frequently Asked Questions
Does using AI in hiring violate FCRA?
Not automatically. AI hiring tools trigger FCRA obligations when they compile candidate reports using third-party data sources, such as social media scraping, data broker purchases, or aggregated profiles from sources candidates didn't provide. AI tools that use only candidate-provided data (such as interview responses) carry a different, and usually smaller, FCRA exposure. The distinction matters because FCRA liability falls on both the vendor and the employer.
Which states regulate AI hiring tools?
As of early 2026, NYC Local Law 144 requires bias audits and candidate notice for automated employment decision tools. Illinois HB 3773 (effective January 2026) prohibits the use of discriminatory AI in hiring, regardless of intent. California's FEHA regulations (effective October 2025) require meaningful human oversight. Colorado's AI Act (effective June 2026) requires impact assessments for high-risk AI systems. More states are actively drafting legislation.
What's the difference between AI resume screening and AI interviewing?
AI resume screening tools evaluate candidates from application data and often third-party sources like social media or data brokers, which can trigger FCRA and privacy obligations. AI interviewing tools conduct structured conversations where the candidate actively participates and provides the data being evaluated. The compliance profile differs because the data source and candidate awareness are distinct.
How do I know if my AI vendor has real bias audits?
Ask to see published results from a named third-party auditor, not just a claim that audits have been conducted. Look for demographic breakdowns, methodology disclosure, and results that are publicly accessible. NYC LL 144 specifically requires independent audits using recognized statistical methods. If a vendor says their audit results are confidential, that's a red flag.

Gino Rooney