Capex (Capital Expenditure)
Long-term investment spending by companies on physical assets like data centers, GPU clusters, and networking infrastructure — the backbone of AI deployment at scale.
Big Tech companies plan over $650 billion in AI-related capex for 2026, a 60% increase from 2025. Amazon leads with $200B, followed by Google ($175B), Microsoft ($145B), and Meta ($125B). Approximately 75% of this spending goes directly to AI infrastructure — GPUs, data centers, and custom chips. This capex boom is driving record data center construction and surging electricity demand globally.
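A quick back-of-the-envelope sketch of the figures above, to make the arithmetic explicit. The company totals, the ~75% AI-infrastructure share, and the 60% growth rate come from this entry; the implied 2025 baseline is derived by simple division and is an estimate, not a reported figure.

```python
# Back-of-the-envelope check of the 2026 capex figures cited above.
# All dollar amounts are in billions (USD).

planned_2026 = {
    "Amazon": 200,
    "Google": 175,
    "Microsoft": 145,
    "Meta": 125,
}

top_four_total = sum(planned_2026.values())  # 645
big_tech_total = 650                         # "over $650B" across all Big Tech
ai_infra_share = 0.75                        # ~75% goes to AI infrastructure

# Implied 2025 baseline from the stated 60% year-over-year increase:
implied_2025 = big_tech_total / 1.60         # ~406

print(f"Top four combined:        ${top_four_total}B")
print(f"AI infrastructure spend:  ~${big_tech_total * ai_infra_share:.0f}B")
print(f"Implied 2025 baseline:    ~${implied_2025:.0f}B")
```

The four named companies account for $645B of the total, so the remainder of Big Tech makes up the gap to "over $650 billion."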
Related Terms
AI Compute
The computational resources — primarily GPU and TPU processing power — required to train and run AI models, typically measured in FLOP (floating-point operations) or GPU-hours; a sketch converting between the two follows this list.
Data Center
A facility housing computer systems and infrastructure used to process, store, and distribute data — increasingly built specifically for AI training and inference workloads.
Fine-Tuning
The process of further training a pre-trained AI model on a specific, smaller dataset to specialize it for a particular task or domain, requiring far less compute than training from scratch.
Foundation Model
A large AI model trained on broad data that can be adapted to a wide range of downstream tasks — examples include GPT-4, Claude, Gemini, and Llama.
Frontier Model
The most capable and advanced AI models at any given time, typically trained with the largest compute budgets and achieving state-of-the-art performance on benchmarks.
GPU (Graphics Processing Unit)
A specialized processor originally designed for rendering graphics, now the primary hardware used for training and running AI models due to its parallel processing capabilities.
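As a companion to the AI Compute entry above, here is a minimal sketch of how a training budget in FLOP maps to GPU-hours. The per-GPU throughput (~1e15 FLOP/s, roughly a modern datacenter GPU at dense BF16) and the 40% sustained-utilization figure are illustrative assumptions, not values from this glossary.

```python
# Minimal sketch: converting a training compute budget in FLOP to GPU-hours.
# The hardware numbers below are illustrative assumptions:
#   - peak_flops: rough dense BF16 throughput of a modern datacenter GPU (~1e15 FLOP/s)
#   - utilization: fraction of peak realistically sustained during training (~40%)

def flop_to_gpu_hours(total_flop: float,
                      peak_flops: float = 1e15,
                      utilization: float = 0.4) -> float:
    """Return GPU-hours needed to deliver `total_flop` at the assumed throughput."""
    effective_flops = peak_flops * utilization  # sustained FLOP/s per GPU
    seconds = total_flop / effective_flops
    return seconds / 3600

# Example: a 1e25-FLOP training run (roughly frontier-model scale)
print(f"{flop_to_gpu_hours(1e25):,.0f} GPU-hours")  # ~6.9 million GPU-hours
```

Under these assumed numbers, a frontier-scale 1e25-FLOP run works out to roughly seven million GPU-hours, which is one way to see why capex discussions center on GPU fleets and the data centers that house them.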