Catastrophic AI Risk

The potential for AI systems to cause widespread, severe, and potentially irreversible harm to society through misuse, accidents, or loss of human control over increasingly powerful systems.

Catastrophic risk scenarios include AI-enabled bioweapons, large-scale cyberattacks, economic disruption, and loss of human agency. The Frontier Model Forum, founded by OpenAI, Anthropic, Google, and Microsoft, was created in part to address these risks. Anthropic's Responsible Scaling Policy commits the company to pause development if its models demonstrate dangerous capabilities, and six national AI Safety Institutes have been established to evaluate catastrophic risk potential. Yet AI safety spending remains under 1% of total AI R&D, raising concerns about whether sufficient resources are devoted to preventing worst-case scenarios.
