Chain-of-Thought Prompting

A prompting technique that improves AI reasoning by instructing the model to break complex problems into intermediate steps before arriving at a final answer.

Chain-of-thought (CoT) prompting, introduced by Google researchers in 2022, dramatically improved LLM performance on math, logic, and multi-step reasoning tasks. On the GSM8K math benchmark, CoT prompting improved accuracy from around 18% to over 57% with PaLM 540B. The technique has since evolved into variants like tree-of-thought and self-consistency sampling. CoT is now standard in reasoning-focused models like OpenAI's o1 and o3 series, which are trained to generate extended intermediate reasoning before producing a final answer.
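To make the technique concrete, here is a minimal sketch of few-shot CoT prompt construction. The demonstration problem, function names, and the "The answer is" marker are illustrative choices, not from the original paper or this article; the idea is simply that pairing a question with worked reasoning steps encourages the model to produce the same step-by-step format on a new question.

```python
# Illustrative sketch of few-shot chain-of-thought prompting.
# The demonstration problem and helper names are hypothetical.

# One worked example: the question is followed by intermediate
# reasoning steps, then a clearly marked final answer.
COT_DEMONSTRATION = (
    "Q: A store has 23 apples. It sells 9 and receives a delivery of 15. "
    "How many apples does it have now?\n"
    "A: The store starts with 23 apples. After selling 9, it has "
    "23 - 9 = 14. After the delivery of 15, it has 14 + 15 = 29. "
    "The answer is 29.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked demonstration so the model imitates
    step-by-step reasoning on the new question."""
    return f"{COT_DEMONSTRATION}\nQ: {question}\nA:"

def extract_final_answer(completion: str) -> str:
    """Pull the value after the 'The answer is' marker that the
    demonstration establishes as the answer format."""
    marker = "The answer is"
    tail = completion.rsplit(marker, 1)[-1]
    return tail.strip().rstrip(".")

prompt = build_cot_prompt(
    "If I have 4 boxes of 6 pens and give away 7 pens, how many remain?"
)
# A completion following the demonstrated format might end like this:
sample_completion = (
    "4 boxes of 6 pens is 4 * 6 = 24 pens. 24 - 7 = 17. The answer is 17."
)
print(extract_final_answer(sample_completion))  # prints 17
```

In practice the prompt string would be sent to an LLM, and the answer-extraction step is what variants like self-consistency sampling build on: generate several reasoning chains, extract each final answer, and take the majority vote.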
