Optimization Lab
Six lessons on the mathematics of optimization for deep learning, published May 1 through May 7, 2026. Each lesson pairs theory with a live in-browser visualization; a minimal sketch of a few of the update rules covered appears after the list.
Labs

1 - Gradient Descent and the Geometry of Learning (May 1, 2026)
2 - SGD, Mini-batches, and Gradient Noise (May 2, 2026)
3 - Momentum and Nesterov Acceleration (May 3, 2026)
4 - Adagrad, RMSProp, and Adam (May 4, 2026)
5 - Learning Rate Schedules (May 5, 2026)
6 - Newton's Method and Curvature (May 7, 2026)
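As a taste of what the lessons build toward, here is a minimal NumPy sketch (illustrative only, not the lab's actual demo code) of four of the update rules named above, applied to a toy ill-conditioned quadratic: plain gradient descent (lesson 1), classical momentum (lesson 3), Adam (lesson 4), and a Newton step (lesson 6). All hyperparameter values are assumptions chosen for the example.

```python
import numpy as np

# Toy objective: f(x) = 0.5 * x^T A x, minimized at the origin.
# Its gradient is A @ x; the diagonal A has condition number 25.
A = np.diag([1.0, 25.0])

def grad(x):
    return A @ x

def gradient_descent(x, lr=0.03, steps=100):
    # Lesson 1: x <- x - lr * grad(x)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def momentum(x, lr=0.03, beta=0.9, steps=100):
    # Lesson 3: a velocity term accumulates past gradients,
    # damping oscillation along the steep axis.
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v - lr * grad(x)
        x = x + v
    return x

def adam(x, lr=0.3, b1=0.9, b2=0.999, eps=1e-8, steps=100):
    # Lesson 4: per-coordinate step sizes from first- and
    # second-moment estimates, with bias correction.
    m = np.zeros_like(x)
    s = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        s = b2 * s + (1 - b2) * g**2
        m_hat = m / (1 - b1**t)
        s_hat = s / (1 - b2**t)
        x = x - lr * m_hat / (np.sqrt(s_hat) + eps)
    return x

def newton(x):
    # Lesson 6: preconditioning by the inverse Hessian (here A itself)
    # solves a quadratic exactly in a single step.
    return x - np.linalg.solve(A, grad(x))

x0 = np.array([10.0, 1.0])
for name, opt in [("GD", gradient_descent), ("Momentum", momentum),
                  ("Adam", adam), ("Newton", newton)]:
    print(f"{name:8s} -> {opt(x0.copy())}")
```

Running the script prints each optimizer's final iterate; all four approach the origin, with the Newton step getting there in one iteration on this quadratic.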
Live Demos