Aiconomy

NVIDIA A100

NVIDIA's previous-generation data center GPU for AI training and inference, which became the workhorse of AI infrastructure from 2020-2023 before being succeeded by the H100.

The A100, launched in May 2020, was built on the Ampere architecture and shipped with 40GB of HBM2 memory; an 80GB HBM2e variant followed later that year. It trained many of the models that launched the generative AI boom, including early GPT models and Stable Diffusion, with unit prices typically in the $10,000-$15,000 range. In October 2022, US export controls restricted A100 sales to China, making it a geopolitically significant chip. Although superseded by the H100 and B200 for frontier training, large fleets of A100s remain in active service for inference and smaller-scale training workloads.
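To give a sense of scale for the training workloads this entry describes, the widely used heuristic "training FLOPs ≈ 6 × parameters × tokens" can be combined with the A100's peak dense BF16 throughput of roughly 312 TFLOPS to sketch GPU-hour requirements. The model size, token count, and 40% utilization below are illustrative assumptions, not figures from this entry:

```python
# Rough sketch: estimating A100 GPU-hours for a training run using the
# common heuristic that training compute ≈ 6 * params * tokens FLOPs.

A100_PEAK_BF16_FLOPS = 312e12  # A100 peak dense BF16 throughput, FLOP/s


def a100_gpu_hours(params: float, tokens: float, utilization: float = 0.4) -> float:
    """Estimate total A100 GPU-hours for a training run.

    `utilization` is the assumed fraction of peak throughput actually
    sustained in practice (model FLOPs utilization) -- an assumption here.
    """
    total_flops = 6 * params * tokens                 # training-FLOPs heuristic
    effective_flops_per_sec = A100_PEAK_BF16_FLOPS * utilization
    return total_flops / effective_flops_per_sec / 3600  # seconds -> hours


# Hypothetical run: a 175B-parameter model trained on 300B tokens
hours = a100_gpu_hours(175e9, 300e9)
print(f"{hours:,.0f} total GPU-hours")
print(f"{hours / 1024 / 24:.0f} days of wall-clock time on 1,024 A100s")
```

Under these assumptions the run comes out to roughly 700,000 GPU-hours, about a month on a 1,024-GPU cluster, which is why fleets of A100s rather than single cards defined this era of training.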
