Aiconomy

Phi Models

Microsoft Research's family of small language models that demonstrate competitive performance at a fraction of the size of larger models, emphasizing data quality over raw scale.

Phi-2, a 2.7-billion-parameter model, matched or outperformed models up to 25x its size on reasoning benchmarks, demonstrating that carefully curated, high-quality training data can compensate for smaller model size. Phi-3-mini continued this trend at 3.8 billion parameters. Microsoft's Phi research challenged the assumption that bigger models are always better and influenced the industry's growing focus on data quality over raw quantity. The Phi models are designed for edge deployment on devices with limited compute, supporting Microsoft's strategy for on-device AI.
