Aiconomy
Last Updated: March 22, 2026

AI Safety & Risks in 2026

As AI capabilities accelerate, so do the risks. Track documented incidents, deepfake proliferation, AI fraud costs, researcher concerns, and the state of AI governance.

4,200+

Documented AI Incidents

Growing 56% year-over-year · Source: AIAAIC Repository

Key Safety Statistics

4,200+ · Documented AI incidents
The AIAAIC Repository has cataloged over 4,200 AI-related incidents and controversies through 2024, growing 56% year-over-year. Source: AIAAIC

$25.5B · Cost of AI fraud (2024)
AI-generated fraud cost businesses an estimated $25.5 billion in 2024, up 70% from the prior year. Source: Deloitte

500K+ · Deepfake videos per month
An estimated 500,000+ deepfake videos are generated monthly, up from 95,000 in 2023. Source: Sumsub

52% · Researchers concerned about existential risk
52% of AI researchers believe there is at least a 10% chance of an "extremely bad" outcome from AI. Source: AI Impacts

96% · Deepfakes that are NCII
96% of deepfake videos online are non-consensual intimate imagery targeting women. Source: Sensity AI

73% · AI text humans can't detect
Human evaluators failed to identify AI-generated text 73% of the time in studies. Source: Nature

32% · Companies with AI governance
Only 32% of organizations using AI have formal governance frameworks. Source: McKinsey

$300M+ · Annual AI safety research spending
Global AI safety research spending exceeds $300 million annually, less than 1% of total AI R&D.

2030 · Median prediction for AGI
AI researchers' median estimate for when AI surpasses humans at all tasks has moved from 2060 to 2030. Source: AI Impacts

Safety Trends

The AI Safety Landscape

AI safety concerns span a wide spectrum — from today's tangible harms to long-term existential risks. The AIAAIC Repository has documented over 4,200 AI-related incidents, growing 56% year-over-year. These range from algorithmic bias in hiring and healthcare to deepfake fraud and autonomous weapon concerns.

The economic impact is already significant. AI-generated fraud cost businesses an estimated $25.5 billion in 2024, up 70% from the prior year. Over 500,000 deepfake videos are created monthly, with 96% being non-consensual intimate imagery. Human evaluators fail to identify AI-generated text 73% of the time.

Among AI researchers themselves, concern is growing: 52% believe there is at least a 10% probability of an "extremely bad" outcome from AI, and the median prediction for when AI surpasses humans at all tasks has shifted from 2060 to 2030. Yet global spending on AI safety research remains roughly $300 million annually, less than 1% of total AI R&D.

The governance gap is perhaps the most actionable concern: only 32% of organizations using AI have formal governance frameworks. Six national AI Safety Institutes have been established, and the Bletchley/Seoul summit process is building international consensus, but the pace of regulation still lags well behind the pace of capability development.
