NVIDIA B200

NVIDIA's next-generation AI GPU, built on the Blackwell architecture and claimed by NVIDIA to deliver up to 5x the training and 30x the inference performance of the H100.

The B200, announced in 2024 and shipping in 2025, represents NVIDIA's biggest generational leap in AI performance. It packages two dies together on a single substrate with up to 192GB of HBM3e memory. Support for FP4 precision enables dramatically faster inference for large language models. Pricing is expected to exceed $30,000 per chip. The B200 and the GB200 superchip (one Grace CPU paired with two B200 GPUs) are the primary targets of Big Tech's $650+ billion AI infrastructure investment in 2026. The chip's thermal output essentially mandates liquid cooling for B200 deployments.
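To make the FP4 point concrete, here is a minimal sketch of why 4-bit precision speeds up inference: each weight is stored as one of a handful of representable values plus a shared scale, cutting memory traffic 4x versus FP16. The value set below is the commonly described E2M1 FP4 format; the per-tensor scaling scheme is a simplified illustration, not NVIDIA's actual implementation.

```python
# Representable magnitudes in the E2M1 (FP4) format: 2-bit exponent, 1-bit mantissa.
FP4_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x, scale):
    """Round x/scale to the nearest representable FP4 magnitude, keeping the sign."""
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x) / scale
    nearest = min(FP4_VALUES, key=lambda v: abs(v - mag))
    return sign * nearest * scale

# Hypothetical weight values for illustration.
weights = [0.12, -0.85, 0.33, 1.9, -0.05]
scale = max(abs(w) for w in weights) / 6.0  # map the largest weight to FP4's max (6.0)
quantized = [quantize_fp4(w, scale) for w in weights]

# Each value now fits in 4 bits instead of 16 (FP16) — a 4x memory reduction,
# which is where much of the inference speedup for memory-bound LLM workloads comes from.
for w, q in zip(weights, quantized):
    print(f"{w:+.3f} -> {q:+.4f}")
```

The trade-off is visible in the output: small weights land on coarse grid points, so models typically need careful scaling (per-block rather than per-tensor, in practice) to keep accuracy acceptable.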
