AI Safety & Risks in 2026
As AI capabilities accelerate, so do the risks. Track documented incidents, deepfake proliferation, AI fraud costs, researcher concerns, and the state of AI governance.
Documented AI Incidents
Growing 56% year-over-year · Source: AIAAIC Repository
Key Safety Statistics
Documented AI incidents
The AIAAIC Repository has cataloged over 4,200 AI-related incidents and controversies through 2024.
Cost of AI fraud (2024)
AI-generated fraud cost businesses an estimated $25.5 billion in 2024, up 70% from the prior year.
Deepfake videos per month
An estimated 500,000+ deepfake videos are generated monthly, up from 95,000 in 2023.
Researchers concerned about existential risk
52% of AI researchers believe there is a 10%+ chance of an "extremely bad" outcome from AI.
Deepfakes that are NCII
96% of deepfake videos online are non-consensual intimate imagery targeting women.
AI text humans can't detect
Human evaluators failed to identify AI-generated text 73% of the time in studies.
Companies with AI governance
Only 32% of organizations using AI have formal governance frameworks.
Annual AI safety research spending
Global AI safety research spending exceeds $300M annually — less than 1% of total AI R&D.
Median prediction for AGI
AI researchers' median estimate for when AI surpasses humans at all tasks has moved from 2060 to 2030.
Safety Trends
The AI Safety Landscape
AI safety concerns span a wide spectrum — from today's tangible harms to long-term existential risks. The AIAAIC Repository has documented over 4,200 AI-related incidents, growing 56% year-over-year. These range from algorithmic bias in hiring and healthcare to deepfake fraud and autonomous weapon concerns.
The economic impact is already significant. AI-generated fraud cost businesses an estimated $25.5 billion in 2024, up 70% from the prior year. Over 500,000 deepfake videos are created monthly, with 96% being non-consensual intimate imagery. Human evaluators fail to identify AI-generated text 73% of the time.
Among AI researchers themselves, concern is growing: 52% believe there is at least a 10% probability of an "extremely bad" outcome from AI. The median prediction for when AI surpasses humans at all tasks has shifted from 2060 to 2030. Yet global spending on AI safety research remains under $300 million annually — less than 1% of total AI R&D.
The governance gap is perhaps the most actionable concern: only 32% of organizations using AI have formal governance frameworks. Six national AI Safety Institutes have been established, and the Bletchley/Seoul summit process is building international consensus, but the pace of regulation still lags well behind the pace of capability development.