Aiconomy

HBM (High Bandwidth Memory)

A specialized memory technology that stacks memory chips vertically and connects them with wide data buses, providing the massive bandwidth needed by AI accelerator chips.
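The bandwidth advantage follows directly from the wide bus: each HBM stack exposes a 1024-bit data interface (in HBM2/HBM3), so peak bandwidth is pin count times per-pin data rate. A minimal sketch of that arithmetic, with the HBM3 spec rate of 6.4 Gb/s per pin; the 5.2 Gb/s figure for a 5-stack package is illustrative, not a datasheet value:

```python
# Back-of-the-envelope HBM bandwidth: bandwidth scales with
# bus width (pins) x per-pin data rate. Figures are illustrative.

def stack_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8  # Gb/s -> GB/s

# One HBM3 stack at its 6.4 Gb/s per-pin spec rate:
print(f"{stack_bandwidth_gb_s(1024, 6.4):.1f} GB/s per stack")  # 819.2 GB/s

# Five stacks run slightly below spec rate lands near the H100's 3.35 TB/s:
print(f"{5 * stack_bandwidth_gb_s(1024, 5.2):.0f} GB/s total")
```

Stacking more dies per package and raising the pin rate (as HBM3e does) are the two levers that push total bandwidth higher.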

HBM is essential for AI performance: the NVIDIA H100 uses 80GB of HBM3 memory delivering 3.35 TB/s of bandwidth, while the B200 uses HBM3e at roughly 8 TB/s. SK Hynix and Samsung dominate HBM production, with a combined market share exceeding 90%. The HBM market is projected to exceed $20 billion by 2025, growing at over 80% annually, and HBM supply constraints have repeatedly bottlenecked AI chip production. Because memory bandwidth, not compute power, is increasingly the limiting factor for AI model performance, HBM has become a critical strategic technology.
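To see why bandwidth rather than compute bounds many AI workloads: generating one token with a large language model requires streaming essentially every weight from memory, so tokens per second can never exceed bandwidth divided by model size. A rough sketch under assumed, illustrative parameters (70B weights, 16-bit precision):

```python
# Memory-bound upper limit on LLM decode speed: each generated token
# reads all model weights once, so tokens/s <= bandwidth / model bytes.
# Model size and precision below are assumptions for illustration.

def decode_tokens_per_sec(bandwidth_tb_s: float,
                          params_billions: float,
                          bytes_per_param: int = 2) -> float:
    """Bandwidth-limited ceiling on single-batch decode throughput."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

# A 70B-parameter model in 16-bit weights on 3.35 TB/s of HBM3:
print(f"{decode_tokens_per_sec(3.35, 70):.1f} tokens/s ceiling")
```

Under these assumptions the ceiling is about 24 tokens/s regardless of how many FLOPs the accelerator can sustain, which is why each HBM generation translates directly into faster inference.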
