Aiconomy

Algorithmic Bias

Systematic errors in AI outputs that create unfair outcomes for certain groups, typically arising from biased training data, flawed model design, or unrepresentative development teams.

Studies have documented significant bias in deployed AI systems: the 2018 Gender Shades study found facial recognition error rates as high as 34.7% for darker-skinned women versus under 1% for lighter-skinned men, hiring algorithms have discriminated against women, and healthcare algorithms have underserved Black patients. Amazon scrapped an AI hiring tool in 2018 after it penalized resumes containing the word 'women's.' The EU AI Act classifies AI hiring and credit scoring tools as high-risk, requiring bias testing before and after deployment. Addressing algorithmic bias requires diverse training data, regular auditing, and inclusive development teams.
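One common form of bias auditing compares a model's selection rates across demographic groups. The sketch below illustrates the idea with the "four-fifths rule" used in US employment law, where a ratio below roughly 0.8 can flag potential adverse impact. All data and function names here are hypothetical, for illustration only; real audits use richer metrics and statistical tests.

```python
# Minimal sketch of a selection-rate bias audit (hypothetical data).

def selection_rate(decisions):
    """Fraction of positive (e.g., 'hire') decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of group selection rates. Values below ~0.8 often
    flag potential adverse impact (the 'four-fifths rule')."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical model outputs: 1 = selected, 0 = rejected
women = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # 3 of 10 selected
men   = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]   # 6 of 10 selected

ratio = disparate_impact_ratio(women, men)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
if ratio < 0.8:
    print("Potential adverse impact: audit further.")
```

A ratio of 0.50, as in this toy example, would warrant deeper investigation of the training data and model before deployment.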
