NVIDIA
The dominant manufacturer of GPUs used for AI training and inference, commanding approximately 64% of the $66.2 billion AI chip market as of 2024.
NVIDIA's H100 GPU ($25,000–$40,000 each) is the workhorse of AI infrastructure. The company's data center revenue grew 300% year-over-year in 2024, and NVIDIA benefits directly from the Big Tech capex boom, with companies collectively spending $650B+ on AI infrastructure in 2026. US export controls on advanced NVIDIA chips bound for China have placed the company at the center of AI geopolitics, and its market capitalization has briefly exceeded $3 trillion.
Related Terms
AI Compute
The computational resources — primarily GPU and TPU processing power — required to train and run AI models, typically measured in FLOP (floating-point operations) or GPU-hours.
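The two units mentioned above are related: GPU-hours can be converted into an estimated total FLOP count if you assume a per-chip throughput and a utilization rate. A minimal sketch of that conversion, assuming (hypothetically, not from this glossary) an H100-class chip with roughly 1e15 FLOP/s of peak throughput and 40% sustained utilization:

```python
# Hedged sketch: converting GPU-hours into an estimated total FLOP count.
# Both constants below are illustrative assumptions, not figures from the
# glossary: ~1e15 FLOP/s peak for an H100-class GPU, ~40% of peak sustained.

PEAK_FLOPS = 1e15         # assumed peak throughput per GPU, in FLOP/s
UTILIZATION = 0.40        # assumed fraction of peak actually achieved
SECONDS_PER_HOUR = 3600

def gpu_hours_to_flop(gpu_hours: float) -> float:
    """Estimate total FLOP delivered by a given number of GPU-hours."""
    return gpu_hours * SECONDS_PER_HOUR * PEAK_FLOPS * UTILIZATION

# Example: a 1,000,000 GPU-hour training run under these assumptions
total = gpu_hours_to_flop(1_000_000)
print(f"{total:.2e} FLOP")  # → 1.44e+24 FLOP
```

Real utilization varies widely with model architecture, interconnect, and precision, so estimates like this are order-of-magnitude at best.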
Capex (Capital Expenditure)
Long-term investment spending by companies on physical assets like data centers, GPU clusters, and networking infrastructure — the backbone of AI deployment at scale.
ChatGPT
OpenAI's conversational AI assistant, launched in November 2022, which catalyzed the current generative AI boom by demonstrating the capabilities of large language models to a mainstream audience.
Data Center
A facility housing computer systems and infrastructure used to process, store, and distribute data — increasingly built specifically for AI training and inference workloads.
Enterprise AI Adoption
The rate at which businesses integrate AI technologies into their operations, measured across functions like customer service, software development, marketing, and supply chain management.
Fine-Tuning
The process of further training a pre-trained AI model on a specific, smaller dataset to specialize it for a particular task or domain, requiring far less compute than training from scratch.