Aiconomy

Data Poisoning

A form of adversarial attack in which malicious actors deliberately corrupt AI training data to manipulate model behavior, causing models to produce incorrect or harmful outputs.

Data poisoning attacks can compromise AI systems by injecting as little as 0.1% corrupted data into training sets. Researchers have demonstrated attacks in which poisoned data causes image classifiers to misidentify objects and language models to produce biased outputs. As AI training increasingly relies on internet-scraped data, the attack surface expands. Nightshade, a tool released by University of Chicago researchers, lets artists subtly alter their images so that models trained on them learn corrupted associations. Defense measures include data validation, anomaly detection, and training data provenance tracking, but no comprehensive solution exists.
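The core mechanism can be illustrated with a toy example. The sketch below (a hypothetical nearest-centroid classifier on synthetic 1-D data, not any specific published attack) shows how a handful of mislabeled injected points can shift a model's decision boundary so that previously correct predictions flip:

```python
import random

random.seed(0)

# Toy 1-D training set: class 0 clusters near 0.0, class 1 near 1.0.
clean = [(random.gauss(0.0, 0.1), 0) for _ in range(100)] + \
        [(random.gauss(1.0, 0.1), 1) for _ in range(100)]

def centroid_classifier(data):
    """Train a nearest-centroid classifier: predict the class whose mean is closest."""
    means = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        means[label] = sum(xs) / len(xs)
    return lambda x: min(means, key=lambda c: abs(x - means[c]))

# Attack: inject a few outliers mislabeled as class 0 (~2% of the data),
# dragging the class-0 centroid toward (and past) the class-1 cluster.
poison = [(10.0, 0)] * 4

clf_clean = centroid_classifier(clean)
clf_poisoned = centroid_classifier(clean + poison)

# A point between the clusters that the clean model assigns to class 1
# now falls on the class-0 side of the shifted decision boundary.
print("clean:", clf_clean(0.6), "poisoned:", clf_poisoned(0.6))
```

Real attacks are far subtler (poison points are crafted to look statistically normal, as with Nightshade's imperceptible image perturbations), but the principle is the same: a small, targeted fraction of training data moves the learned decision rule in a direction the attacker chooses.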

