Aiconomy

Interconnect

The high-speed networking technology that connects GPUs within and across servers in AI training clusters, where communication bandwidth critically determines training efficiency.

InfiniBand, produced by NVIDIA's networking division (formerly Mellanox), dominates AI cluster networking at 400 Gb/s per link (NDR), with 800 Gb/s (XDR) beginning to deploy. Within a single server, NVIDIA's NVLink provides up to 900 GB/s of GPU-to-GPU bandwidth. Training large models across thousands of GPUs requires constant gradient synchronization, making interconnect bandwidth a critical bottleneck. Google uses custom-designed optical interconnects in its TPU pods. Ethernet-based alternatives are gaining ground for AI clusters, with the Ultra Ethernet Consortium developing AI-optimized standards. Interconnect hardware can account for 15-25% of total cluster investment.
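To see why link bandwidth matters, a back-of-the-envelope sketch of the idealized ring all-reduce (the collective commonly used for gradient synchronization) shows how the time per synchronization step scales inversely with per-link bandwidth. The model size, GPU count, and bandwidth figures below are illustrative assumptions, not measurements, and real systems overlap communication with compute.

```python
def ring_allreduce_seconds(param_count: int, bytes_per_param: int,
                           num_gpus: int, link_bandwidth_gbps: float) -> float:
    """Ideal ring all-reduce time: each GPU sends and receives
    2 * (N - 1) / N of the gradient volume over its link."""
    grad_bytes = param_count * bytes_per_param
    volume = 2 * (num_gpus - 1) / num_gpus * grad_bytes
    bandwidth_bytes_per_s = link_bandwidth_gbps * 1e9 / 8  # Gb/s -> bytes/s
    return volume / bandwidth_bytes_per_s

# Hypothetical scenario: 70B parameters, fp16 gradients, 1,024 GPUs.
t_400 = ring_allreduce_seconds(70_000_000_000, 2, 1024, 400)  # NDR-class links
t_800 = ring_allreduce_seconds(70_000_000_000, 2, 1024, 800)  # XDR-class links
print(f"400 Gb/s links: {t_400:.2f} s per all-reduce")
print(f"800 Gb/s links: {t_800:.2f} s per all-reduce")
```

Doubling link bandwidth halves the ideal communication time, which is why clusters are upgraded to faster interconnect generations even at significant cost.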
