
AI Existential Risk (X-Risk)

The subset of AI risk focused on scenarios in which advanced AI could cause human extinction or permanently curtail humanity's potential, a concern taken seriously by many leading AI researchers.

In 2023, hundreds of AI researchers and executives, including the CEOs of OpenAI, Google DeepMind, and Anthropic, signed a one-sentence statement declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Organizations focused on x-risk research include the Center for AI Safety, the Machine Intelligence Research Institute (MIRI), and, until its closure in 2024, the Future of Humanity Institute. Global spending on AI x-risk research is estimated at under $100 million per year. Key x-risk scenarios include misaligned superintelligence, AI-enabled bioweapons, and cascading failures in AI-dependent infrastructure.
