Aiconomy

Batch Normalization

A technique that normalizes the inputs to each layer in a neural network, stabilizing and accelerating the training process by reducing internal covariate shift.

Introduced by Ioffe and Szegedy in 2015, batch normalization normalizes layer inputs to have zero mean and unit variance across a mini-batch. This allows higher learning rates and reduces sensitivity to weight initialization, often cutting training time in half. The technique has become standard in convolutional neural networks and many deep learning architectures. It also provides a mild regularization effect, reducing the need for dropout in some cases.
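The per-feature normalization described above can be sketched in a few lines of NumPy. This is a minimal illustration of the training-time forward pass only (the function name `batch_norm` and the shapes are illustrative, not from any particular library); a full implementation would also track running statistics for inference and learn `gamma` and `beta` via backpropagation.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch_size, num_features) mini-batch of layer inputs
    mean = x.mean(axis=0)                     # per-feature mean over the mini-batch
    var = x.var(axis=0)                       # per-feature variance over the mini-batch
    x_hat = (x - mean) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta               # learned scale and shift restore expressivity

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(32, 4))   # mini-batch of 32 examples, 4 features
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With `gamma=1` and `beta=0`, each feature of `out` has approximately zero mean and unit variance across the batch; the small `eps` term guards against division by zero for low-variance features.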
