Aiconomy

AI Bias

Systematic errors in AI system outputs that reflect and amplify societal prejudices, leading to unfair treatment of certain groups in areas like hiring, lending, healthcare, and criminal justice.

AI bias has been documented across high-stakes applications: facial recognition systems show up to 34x higher error rates for dark-skinned women, healthcare algorithms have underserved Black patients, and language models reproduce gender stereotypes. The root causes include biased training data, underrepresentation in development teams, and feedback loops that amplify existing disparities. Major lawsuits and regulatory actions have followed: New York City's Local Law 144 mandates independent bias audits for automated hiring tools, and the EU AI Act classifies AI systems used in employment and credit as high-risk, subjecting them to mandatory bias testing.
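To make the idea of a bias audit concrete, the sketch below computes the kind of selection-rate comparison such audits center on: the impact ratio, i.e., each group's selection rate divided by the highest group's selection rate. The outcome data and the 0.8 cutoff (the familiar "four-fifths rule") are illustrative assumptions, not figures from any specific audit or statute.

```python
from collections import Counter

def impact_ratios(outcomes):
    """For each group, return (selection rate, impact ratio), where the
    impact ratio is the group's selection rate divided by the highest
    group's selection rate."""
    selected = Counter()
    total = Counter()
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: (rates[g], rates[g] / best) for g in rates}

# Hypothetical screening outcomes: (group label, whether the candidate advanced)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

for group, (rate, ratio) in impact_ratios(outcomes).items():
    # Flag groups falling below the illustrative 0.8 ("four-fifths") threshold.
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

On the hypothetical data above, group_b advances 25% of the time versus 75% for group_a, giving an impact ratio of roughly 0.33 and triggering the flag; a real audit would add statistical testing and intersectional breakdowns on top of this basic comparison.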
