Groq

An AI chip company that designs custom Language Processing Units (LPUs) for low-latency AI inference, delivering some of the fastest commercially available token-generation speeds for large language models.

Groq's LPUs generate output at more than 500 tokens per second on some models, roughly ten times faster than typical GPU-based inference; at that rate, a 1,000-token response completes in about two seconds. The chip's deterministic, compiler-scheduled architecture keeps data in fast on-chip SRAM, sidestepping the external-memory bandwidth bottleneck that limits GPU inference speeds. Founded by a former Google TPU architect, Groq has raised over $640 million. The company offers cloud-based API access to open models such as Llama and Mixtral running on its LPU hardware, as sketched below. Groq represents a growing class of companies designing specialized AI chips optimized for inference rather than training workloads.
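Groq's cloud API is OpenAI-compatible, so the standard `openai` Python client can be pointed at Groq's endpoint. The sketch below is a minimal illustration, not official sample code: the base URL reflects Groq's public documentation at the time of writing, and the model identifier is an example that may have been retired, so both should be checked against Groq's current docs.

```python
# Minimal sketch: querying a model served on Groq's LPU hardware
# via the OpenAI-compatible API. Base URL and model id are assumptions
# drawn from Groq's public docs and may change.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint (verify in docs)
    api_key=os.environ["GROQ_API_KEY"],         # API key from the Groq console
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # example model id; check Groq's current model list
    messages=[{"role": "user", "content": "Explain LPU inference in one sentence."}],
)
print(response.choices[0].message.content)
```

Because the interface mirrors OpenAI's, existing chat-completion code can often be retargeted to Groq by changing only the base URL, API key, and model name.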
