Aiconomy

In-Context Learning

The ability of large language models to learn new tasks from examples provided within the input prompt, without any parameter updates or traditional training.

In-context learning was first demonstrated at scale with GPT-3 in 2020 and has become a defining capability of large language models. The model adapts its behavior based on the examples and instructions in the prompt, effectively learning on the fly without any change to its weights. This eliminates the need for task-specific fine-tuning in many cases, dramatically reducing the cost and time to deploy AI for new applications. Research suggests in-context learning emerges only at sufficient model scale, typically above roughly 10 billion parameters.
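The mechanism is easiest to see in a few-shot prompt: labeled examples are placed directly in the input, and the model is expected to continue the pattern for a new query. A minimal sketch of how such a prompt is assembled (the sentiment-classification task and the `Review:`/`Sentiment:` format here are illustrative assumptions, not a fixed standard):

```python
def build_few_shot_prompt(examples, query):
    """Format labeled examples plus a new query into a few-shot prompt.

    In-context learning means the model infers the task (here, an assumed
    sentiment-classification format) from these examples alone; no model
    parameters are updated.
    """
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The final block leaves the label blank for the model to complete.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("A delightful, moving film.", "positive"),
    ("Two hours I will never get back.", "negative"),
]
prompt = build_few_shot_prompt(examples, "An instant classic.")
print(prompt)
```

The same prompt string can be sent to any sufficiently large language model; with zero examples this becomes zero-shot prompting, and adding more examples typically improves task accuracy up to the model's context-window limit.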
