step 0
⚠ Diverging — η · max-eigenvalue exceeds 2. Lower the learning rate.
f(x,y) = ½(x² + κ·y²)   ∇f = (x, κ·y)   update: θ ← θ − η ∇f(θ)
Stable iff η < 2/L, where L = max eigenvalue of the Hessian = κ (for κ ≥ 1; the min eigenvalue is μ = 1). Optimal η* = 2/(μ + L) = 2/(1 + κ). Convergence rate ≈ (κ − 1)/(κ + 1) per step.
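The update above can be sketched in a few lines of NumPy. This is a minimal illustration, not part of the demo itself; the function name `grad_descent` and the sample condition number κ = 10 are assumptions chosen for the example:

```python
import numpy as np

def grad_descent(theta0, eta, kappa, steps=200):
    """Iterate θ ← θ − η ∇f(θ) on f(x, y) = ½(x² + κ·y²)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        grad = np.array([theta[0], kappa * theta[1]])  # ∇f = (x, κ·y)
        theta = theta - eta * grad
    return theta

kappa = 10.0
eta_star = 2.0 / (1.0 + kappa)  # optimal step for eigenvalues {1, κ}
final = grad_descent([1.0, 1.0], eta_star, kappa)
# With η = η* both coordinates contract by |κ − 1|/(κ + 1) per step,
# so `final` is very close to the minimizer (0, 0).
# Any η > 2/κ makes the y-coordinate's factor |1 − η·κ| exceed 1,
# which is exactly the divergence warning shown above.
```

With κ = 10, η* = 2/11 ≈ 0.18 and the per-step contraction is 9/11 ≈ 0.82 on both coordinates, matching the stated rate.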