Encoder-Decoder Architecture

A neural network design in which an encoder maps input data to an intermediate representation (a fixed-length vector in early sequence-to-sequence models, a sequence of hidden states in attention-based ones) and a decoder generates output from that representation; widely used in translation and summarization.
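In code, the split looks like this. Below is a minimal sketch built on PyTorch's nn.Transformer; the Seq2Seq class name, vocabulary size, and layer sizes are illustrative assumptions, not a reference implementation. The encoder reads the whole source sequence, and the decoder generates the target while attending to the encoder's output.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Toy encoder-decoder: encoder reads the source, decoder generates
    the target one position at a time while attending to encoder states."""

    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        src = self.embed(src_ids)  # encoder input
        tgt = self.embed(tgt_ids)  # decoder input (shifted target)
        # Causal mask: each decoder position sees only earlier positions.
        mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(hidden)  # next-token logits

model = Seq2Seq()
src = torch.randint(0, 1000, (1, 12))  # toy source sequence
tgt = torch.randint(0, 1000, (1, 8))   # toy target prefix
print(model(src, tgt).shape)           # torch.Size([1, 8, 1000])
```

During training the decoder receives the shifted target sequence (teacher forcing); at inference it feeds its own predictions back in, one token at a time.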

The encoder-decoder architecture was pivotal in the development of sequence-to-sequence models for machine translation. The original transformer (2017) used an encoder-decoder structure, though modern LLMs like GPT use decoder-only architectures. Encoder-decoder models like T5 and BART excel at tasks that require understanding input before generating output, such as summarization and translation. Google's T5 model demonstrated that framing all NLP tasks as text-to-text problems with an encoder-decoder could achieve state-of-the-art results.
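To make T5's text-to-text framing concrete, here is a short usage sketch with the Hugging Face transformers library and the publicly released t5-small checkpoint; the input string is invented for illustration:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 frames every task as text-to-text: a task prefix in the input
# string tells the model what to do.
text = "summarize: The encoder-decoder architecture was pivotal in ..."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# The encoder reads the full input; the decoder generates the output
# token by token, attending to the encoder's representations.
output_ids = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Changing the prefix ("translate English to German:", "summarize:", and so on) switches the task without changing the model, which is the core of the text-to-text idea.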
