
May 9, 2026

# PyTorch implementation of PSGD

## An overview

PSGD (Preconditioned SGD) is a general-purpose (mathematical and stochastic, convex and nonconvex) 2nd order optimizer. It reformulates a wide range of preconditioner estimation and Hessian fitting problems as a family of strongly convex optimization problems on Lie groups, a class of Riemannian manifolds.

Notations: $E_z[\ell(\theta, z)]$ or $\ell(\theta)$ is the loss; $g$ the (stochastic) gradient wrt $\theta$; $H$ the Hessian; $h=Hv$ the Hessian-vector product (Hvp) with $v\sim\mathcal{N}(0,I)$; $P=Q^TQ$ the preconditioner applied to $g$; ${\rm tri}$ takes the upper or lower triangular part of a matrix; $\lVert \cdot \rVert$ the spectral norm; superscripts $^T$, $^*$ and $^H$ denote transpose, conjugate and Hermitian transpose, respectively.

The new PSGD implementation is a superset of the old one (deprecated), and further supports four more matmul-only/inverse-free geometries for updating $Q$. The choices $dQ=Q^{0.5}\mathcal{E}Q^{1.5}$ (default) and $dP=P^{0.5}\mathcal{E}P$ recover the Newton-Schulz (NS) iterations with a stability guarantee. A few torch.optim.Optimizer wrapping examples are provided:

- Simplest standalone DDP wrapping. This is the simplest PSGD 0/1/2D momentum whitening optimizer using the Kron preconditioner fitted with $dQ=Q^{0.5}\mathcal{E}Q^{1.5}$ and update rule $Q\leftarrow Q - \mu(Pgg^TP - I)Q$, which recovers the NS iterations for the inverse 4th root of $E[gg^T]$. It's light, fast and the recommended implementation.
- DDP wrapping and FSDP wrapping. These are wrapping examples of the functions inside psgd.py for gradient/momentum whitening of tensors of any order (with noticeable overhead due to einsum, etc.). For most problems, whitening either the gradient or the momentum works equally well. For problems with sparse gradients, momentum whitening is typically preferred.
- My customized non-standard wrapping inside psgd.py. It supports a wide range of choices: gradient/momentum whitening, Hessian fitting with Hvp, different geometries and Lie groups for $Q$, and more.
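As a quick sanity check of the whitening update rule above, here is a self-contained NumPy toy (illustrative only, not the repo's code) that replaces the stochastic $gg^T$ by its exact covariance $C=E[gg^T]$ and verifies that $P=Q^TQ$ converges to $C^{-1/2}$, i.e., that $Q$ is an inverse 4th root of $C$:

```python
import numpy as np

# Toy check (illustrative, not the repo's code): replace the stochastic gg^T
# by its exact covariance C = E[gg^T] and iterate the whitening update
# Q <- Q - mu * (P C P - I) Q with P = Q^T Q; then P should approach C^{-1/2}.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
C = A @ A.T + 0.5 * np.eye(n)
C /= np.linalg.norm(C, 2)            # scale the spectral norm to 1 for stability

Q = np.eye(n)
mu = 0.1
for _ in range(2000):
    P = Q.T @ Q
    Q = Q - mu * (P @ C @ P - np.eye(n)) @ Q

P = Q.T @ Q
w, V = np.linalg.eigh(C)
C_inv_half = (V * w**-0.5) @ V.T     # closed-form C^{-1/2}
err = np.linalg.norm(P - C_inv_half) / np.linalg.norm(C_inv_half)
assert err < 1e-6
```

With the stochastic $gg^T$ instead of $C$, the iteration still converges, but with a steady-state error proportional to $\mu$.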

The PSGD theory has two orthogonal parts: criteria for preconditioner fitting and preconditioner fitting in Lie groups.

## Criteria for preconditioner fitting

PSGD was originally designed for preconditioning the gradient such that the metrics of the spaces of preconditioned gradients and parameters are matched, i.e., $E_{\delta\theta, z}[(P\delta g)(P\delta g)^T] = E_{\delta\theta, z}[\delta\theta\,\delta\theta^T]$, where $\delta$ denotes the perturbation operation and $P$ is symmetric positive definite (SPD). This leads to the original preconditioner fitting criterion $E_{\delta\theta, z}[\delta g^T P \delta g + \delta\theta^T P^{-1} \delta\theta]$ ref. The finite-difference notation may not be common in machine learning (ML), but note that PSGD was invented before popular automatic differentiation (AD) tools like TensorFlow; manually calculating the Hvp was cumbersome then. With AD, we can simply replace the pair $(\delta\theta, \delta g)$ with $(v, h)$ to obtain the Newton-style preconditioner fitting criterion $E_{v, z}[h^T P h + v^T P^{-1} v]$. For the gradient/momentum whitening preconditioner, we just replace the pair $(\delta\theta, \delta g)$ with $(v, g)$ to have the criterion $E_{v, z}[g^T P g + v^T P^{-1} v]$ ref, where $v$ is an auxiliary variable that can be optionally integrated out, as it is independent of $g$.
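To make the Newton-style criterion concrete: for a fixed SPD Hessian $H$, $h=Hv$ and $v\sim\mathcal{N}(0,I)$, the criterion equals ${\rm tr}(HPH)+{\rm tr}(P^{-1})$, which is minimized exactly at $P=H^{-1}$. A small NumPy check (toy code, not from the repo):

```python
import numpy as np

# Numerical check (toy): for h = Hv, v ~ N(0, I), the Newton-style criterion
# E_v[h^T P h + v^T P^{-1} v] = tr(H P H) + tr(P^{-1}) is minimized at P = H^{-1}.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
H = A @ A.T + np.eye(n)              # a fixed SPD Hessian

def criterion(P):
    return np.trace(H @ P @ H) + np.trace(np.linalg.inv(P))

P_star = np.linalg.inv(H)
f_star = criterion(P_star)           # equals 2 * tr(H) at the minimizer
assert np.isclose(f_star, 2 * np.trace(H))

# Any SPD perturbation of P_star strictly increases the criterion.
for _ in range(100):
    B = rng.standard_normal((n, n))
    P = P_star + 0.01 * (B + B.T)    # random symmetric perturbation
    if np.all(np.linalg.eigvalsh(P) > 0):  # keep only SPD candidates
        assert criterion(P) > f_star
```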

## Preconditioner fitting in Lie groups

The above preconditioner fitting criteria are always convex in the Euclidean space, on the manifold of SPD matrices, and in the Lie groups, but they are strongly convex only in the Lie groups ref. The $Q$ here defines the coordinate transform $\vartheta=Q^{-T}\theta$ such that PSGD reduces to an SGD for $\vartheta$. A Lie group is a natural tool for this purpose since it preserves invariances like the coordinate orientations, so that $Q$ is always invertible. Also, the multiplicative updates in a Lie group avoid explicit matrix inversion. There are virtually endless choices for the group form of $Q$, say the Kronecker product preconditioner ref, the affine Lie group ref, and the low-rank approximation (LRA) group ref.
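The coordinate-transform view can be checked numerically: one preconditioned step on $\theta$ with $P=Q^TQ$ coincides with one plain gradient-descent step in the coordinates $\vartheta=Q^{-T}\theta$. A minimal NumPy sketch on a toy quadratic (not from the repo):

```python
import numpy as np

# Sanity check (toy quadratic): one preconditioned step on theta with
# P = Q^T Q equals one plain gradient step in coordinates vartheta = Q^{-T} theta.
rng = np.random.default_rng(2)
n = 4
Q = np.triu(rng.standard_normal((n, n))) + 3 * np.eye(n)  # invertible upper-triangular
P = Q.T @ Q
H = np.diag([1.0, 2.0, 3.0, 4.0])    # loss f(theta) = 0.5 * theta^T H theta
theta = rng.standard_normal(n)
alpha = 0.01

# Path 1: preconditioned step on theta
theta1 = theta - alpha * P @ (H @ theta)

# Path 2: plain step on vartheta, mapped back through theta = Q^T vartheta
vartheta = np.linalg.solve(Q.T, theta)   # vartheta = Q^{-T} theta
grad_v = Q @ (H @ theta)                 # chain rule: df/dvartheta = Q * df/dtheta
theta2 = Q.T @ (vartheta - alpha * grad_v)

assert np.allclose(theta1, theta2)
```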

Table I: Variations of preconditioner fitting criterion

| Criterion | Solution | Notes |
|---|---|---|
| $h^TPh + v^TP^{-1}v$ | $Phh^TP = vv^T$ | Reduces to the secant equation $Ph=v$ when $v^Th>0$ (see quasi-Newton methods, e.g., BFGS). |
| $E_v[h^TPh + v^TP^{-1}v]$ | $P^{-2}=H^2$ | Reduces to Newton's method when $H\succ 0$. |
| $E_{v,z}[g_z^TPg_z + v^TP^{-1}v]$ | $P^{-2}=E_z[g_zg_z^T]$ | $P^{-2}$ reduces to the Fisher information matrix $F$ with per-sample gradient $g_z$ (see Gauss-Newton and natural gradient methods, e.g., KFAC). |
| $\sum_t E_{v_t}[g_t^TPg_t + v_t^TP^{-1}v_t]$ | $P^{-2}=\sum_t g_tg_t^T$ | Relates to the AdaGrad family, e.g., Adam(W), RMSProp, Shampoo, $\ldots$ |

Note 1: $v$ can be a nuisance or an auxiliary variable in the last two criteria since it is independent of $g$ and can be integrated out as $E_{v\sim\mathcal{N}(0,I)}[v^TP^{-1}v]={\rm tr}(P^{-1})$, i.e., Hutchinson's estimator.
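A quick Monte Carlo check of the identity in Note 1 (toy code, not from the repo):

```python
import numpy as np

# Monte Carlo check (toy) of E_{v~N(0,I)}[v^T P^{-1} v] = tr(P^{-1}).
rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n))
M = np.linalg.inv(A @ A.T + np.eye(n))       # M plays the role of P^{-1}

V = rng.standard_normal((200000, n))         # 200k samples of v
est = np.einsum("ij,jk,ik->i", V, M, V).mean()
rel_err = abs(est - np.trace(M)) / np.trace(M)
assert rel_err < 0.05
```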

Table II: Lie group ($dQ=\mathcal{E}Q$) preconditioners with storage and computation numbers for $\theta={\rm vec}(\Theta)$ with $\Theta\in\mathbb{R}^{m\times m}$

| Lie Group | Update of $Q$ ($0<\mu\le 2$) | Storages | Computations | Class |
|---|---|---|---|---|
| ${\rm GL}(n, \mathbb{R})$ | $Q\leftarrow \left( I - \mu \frac{Qhh^TQ^T - Q^{-T}vv^TQ^{-1}}{ \lVert Qh\rVert^2 + \lVert Q^{-T}v\rVert^2 } \right) Q$ | $\mathcal{O}(m^4)$ | $\mathcal{O}(m^4)$ | DenseNewton |
| Tri matrices | $Q\leftarrow {\rm tri}\left( I - \mu \frac{Qhh^TQ^T - Q^{-T}vv^TQ^{-1}}{ \lVert Qh\rVert^2 + \lVert Q^{-T}v\rVert^2 } \right) Q$ | $\mathcal{O}(m^4)$ | $\mathcal{O}(m^6)$ | DenseNewton |
| $Q={\rm diag}(q)$ | $q\leftarrow \left( 1 - \mu \frac{(q\cdot h)^2 - (v\cdot/q)^2}{ \max\left((q\cdot h)^2 + (v\cdot/q)^2\right)} \right) \cdot q$ | $\mathcal{O}(m^2)$ | $\mathcal{O}(m^2)$ | LRAWhiten/Newton |
| ${\rm kron}(Q_2,Q_1)$ | $A=Q_1 \, {\rm uvec}(h)\, Q_2^H$, $B=Q_2^{-H} [{\rm uvec}(v)]^H Q_1^{-1}$, $Q_1\leftarrow {\rm tri}\left( I - \mu \frac{AA^H-B^HB}{\lVert AA^H+B^HB \rVert} \right) Q_1$, $Q_2\leftarrow {\rm tri}\left( I - \mu \frac{A^HA-BB^H}{\lVert A^HA+BB^H \rVert} \right) Q_2$ | $\mathcal{O}(m^2)$ | $\mathcal{O}(m^3)$ | KronWhiten/Newton |
| ${\rm kron}(Q_1,Q_2,\ldots)$ | $A_{ab\ldots}=(Q_1)_{a\alpha}(Q_2)_{b\beta}\ldots ({\rm uvec}(h))_{\alpha\beta\ldots}$, $B^*_{ab\ldots}=({\rm uvec}(v^*))_{\alpha\beta\ldots} (Q_1^{-1})_{\alpha a} (Q_2^{-1})_{\beta b}\ldots$, $(Q_i)_{ac}\leftarrow {\rm tri}\left( I_{ab} - \mu \frac{A_{\ldots a\ldots}A^*_{\ldots b\ldots}-B_{\ldots a\ldots}B^*_{\ldots b\ldots}}{\lVert A_{\ldots a\ldots}A^*_{\ldots b\ldots}+B_{\ldots a\ldots}B^*_{\ldots b\ldots} \rVert} \right) (Q_i)_{bc}$ | $\mathcal{O}(m^2)$ | $\mathcal{O}(m^3)$ | KronWhiten/Newton |
| $Q=(I+UV^T){\rm diag}(d)$, $U,V\in\mathbb{R}^{n\times r}$, $0\le r\ll n$ | $a=Qh$, $b=Q^{-T}v$, $d\leftarrow \left( 1-\mu\frac{h\cdot (Ph)-v\cdot (P^{-1}v)}{\max\lvert h\cdot (Ph)\rvert +\max\lvert v\cdot (P^{-1}v)\rvert}\right) \cdot d$, $U\leftarrow U - \mu\frac{(aa^T-bb^T)V(I+V^TU)}{\lVert a\rVert \, \lVert VV^Ta \rVert + \lVert b\rVert \, \lVert VV^Tb\rVert }$, $V\leftarrow V - \mu\frac{ (I+VU^T)(aa^T-bb^T)U }{\lVert a\rVert \, \lVert UU^Ta\rVert + \lVert b\rVert \, \lVert UU^Tb\rVert}$ | $\mathcal{O}(rm^2)$ | $\mathcal{O}(rm^2)$ | LRAWhiten/Newton |
| ${\rm diag}(q_1)\otimes{\rm diag}(q_2)\otimes\ldots$ | same as kron | $\mathcal{O}(m)$ | $\mathcal{O}(m^2)$ | KronWhiten/Newton |

Note 1: The other four inverse-free preconditioner update methods have similar forms and complexities. Please check ref for further details.

Note 2: For the gradient/momentum whitening preconditioner, we simply replace the pair $(v, h)$ with $(v, g)$, where $v$ is a dummy variable that can be optionally integrated out.
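To illustrate the diagonal-group row of Table II, the following toy NumPy run (not the repo's code) fits $q$ with exact Hvp pairs $(v, h=Hv)$ for a diagonal $H$ and checks that $P={\rm diag}(q)^2$ approaches $H^{-1}$:

```python
import numpy as np

# Toy run (not the repo's code) of the diagonal-group row: with exact Hvp
# pairs (v, h = H v) for a diagonal H, P = diag(q)^2 should approach H^{-1}.
rng = np.random.default_rng(4)
hess_diag = np.array([0.5, 1.0, 2.0, 8.0])   # diagonal Hessian entries
q = np.ones(4)
mu = 0.5
for _ in range(3000):
    v = rng.standard_normal(4)
    h = hess_diag * v                         # exact Hvp for a diagonal Hessian
    a, b = (q * h) ** 2, (v / q) ** 2
    q = (1 - mu * (a - b) / np.max(a + b)) * q

rel_err = np.max(np.abs(q**2 - 1 / hess_diag) * hess_diag)
assert rel_err < 0.05
```

Note that the normalization by $\max(a+b)$ keeps the multiplicative factor within $(0, 2)$ for $0<\mu\le 1$, so $q$ never changes sign.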

## Hessian fitting accuracy and connection to BFGS

This script generates the following plot showing the typical behaviors of different Hessian fitting methods.

- With a static and noise-free Hessian-vector product model, both BFGS and PSGD converge linearly to the optimal preconditioner, while the closed-form solution $P=\left(E[hh^T]\right)^{-0.5}$ only converges sublinearly with rate $\mathcal{O}(1/t)$.
- With a static but additively noisy Hessian-vector model $h=Hv+\epsilon$, BFGS diverges easily. With a constant step size $\mu$, the steady-state fitting error of PSGD is proportional to $\mu$.
- With a time-varying Hessian $H_{t+1}=H_t + uu^T$, $u\sim\mathcal{U}(0,1)$, PSGD locks onto good preconditioner estimates quicker than BFGS, without a divergence stage. The closed-form solution $P=\left(E[hh^T]\right)^{-0.5}$ is poor at tracking due to its sublinear rate of convergence.

## Implementation details for psgd.py

One can find the functional APIs for all the preconditioners in Table II in psgd.py. The update_precond... and precond_grad... functions update $Q$ and apply $P=Q^TQ$ to $g$, respectively. See kron and lra for their usages. These low-level functional APIs provide all the flexibility for customizing your own PSGD implementations.

These functional APIs are lightly wrapped into classes KronWhiten/Newton, LRAWhiten/Newton and DenseNewton for easy use. Three main differences from torch.optim.SGD:

  1. The loss to be minimized is passed to the optimizer as a closure to support more dynamic behaviors, notably Hessian-vector product approximation with the finite-difference method when the 2nd order derivative is unavailable. The closure should return a loss, or a list/tuple whose first element is the loss.
  2. Momentum here is the moving average of the gradient, so that its setting is decoupled from the learning rate, which is always normalized in PSGD.
  3. As with any other regularization, (coupled) weight decay should be realized explicitly by adding an $L2$ regularization term to the loss. Similarly, decoupled weight decay is not included inside these PSGD class implementations.
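To illustrate the closure interface in point 1, here is a hypothetical minimal optimizer (not psgd.py's actual API) whose step takes a closure, so the optimizer itself controls when and how often the loss is evaluated:

```python
import numpy as np

# Hypothetical minimal optimizer (not psgd.py's actual API) showing the
# closure pattern from point 1: step() calls the closure, so the optimizer
# could re-evaluate the loss, e.g., for a finite-difference Hvp.
class ClosureSGD:
    def __init__(self, theta, lr=0.1):
        self.theta, self.lr = theta, lr

    def step(self, closure):
        loss, grad = closure(self.theta)      # toy closure returns (loss, grad)
        self.theta = self.theta - self.lr * grad
        return loss

def closure(theta):
    return 0.5 * np.sum(theta**2), theta      # gradient of 0.5*||theta||^2 is theta

opt = ClosureSGD(np.array([1.0, -2.0]), lr=0.5)
for _ in range(20):
    loss = opt.step(closure)
assert loss < 1e-4
```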

A few more details. The Hessian-vector products are calculated as vector-Jacobian products (vjp), i.e., ${\rm autograd.grad}(g, \theta, v)$ in torch, which may not always be the most efficient way for a specific problem. Except for the Kronecker product preconditioners, there is no native support for complex parameter optimization (you can define complex parameters as views of real ones in order to use the other preconditioners).
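When 2nd order derivatives are unavailable, the finite-difference approximation $h\approx(\nabla\ell(\theta+\varepsilon v)-\nabla\ell(\theta))/\varepsilon$ can stand in for the vjp. A toy check on a quadratic, where the approximation is exact up to floating-point error (illustrative only):

```python
import numpy as np

# Toy check of the finite-difference Hvp (an alternative to the vjp when
# 2nd order derivatives are unavailable): h ~ (grad(theta + eps*v) - grad(theta)) / eps.
rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n))
H = A @ A.T + np.eye(n)               # Hessian of the quadratic loss below

grad = lambda t: H @ t                # gradient of 0.5 * t^T H t
theta = rng.standard_normal(n)
v = rng.standard_normal(n)
eps = 1e-6
h_fd = (grad(theta + eps * v) - grad(theta)) / eps
assert np.allclose(h_fd, H @ v, atol=1e-4)
```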

## Demos

There are plenty of demos: Rosenbrock function minimization, vision transformer, generative pre-trained transformer, logistic regression, tensor rank decomposition, gradient by the straight-through estimator (STE), etc. For this tiny vision transformer demo, the PSGD-Kron-gradient-whitening preconditioner can outperform Adam(W) with the same hyperparameter settings.

For very sparse gradients, PSGD prefers whitening the momentum, and this GPT2 example shows that PSGD can outperform Adam(W) again with virtually the same hyperparameter settings (lr_params needs to be reduced by $\sqrt{(1 + \beta)/(1 - \beta)}$ times to match Adam(W)'s lr).

## Resources

  1. Preconditioned stochastic gradient descent, arXiv:1512.04202, 2015. (General ideas of PSGD, preconditioner fitting criteria and Kronecker product preconditioners.)
  2. Preconditioner on matrix Lie group for SGD, arXiv:1809.10232, 2018. (Focus on affine Lie group preconditioners, including feature normalization or whitening (per batch or layer) as special affine preconditioners. Use PSGD for gradient whitening.)
  3. Black box Lie group preconditioners for SGD, arXiv:2211.04422, 2022. (Mainly about the LRA preconditioner. I also have prepared these supplementary materials for detailed math derivations.)
  4. Stochastic Hessian fittings with Lie groups, arXiv:2402.11858, 2024. (Properties of PSGD, also a good summary of PSGD. The Hessian fitting problem is shown to be strongly convex in ${\rm GL}(n, \mathbb{R})$ under certain mild assumptions.)
  5. Curvature-informed SGD via general purpose Lie-group preconditioners, arXiv:2402.04553, 2024. (Plenty of benchmark results and analyses for PSGD vs. other optimizers.)
  6. There are a few more efficient and specialized PSGD implementations, e.g., Evan's, Lucas', and more. My TensorFlow implementations (TF 1.x and TF 2.x) are too old and not maintained. The implementation here provides a testbed for different choices and focuses on the clarity of the math.