Global convergence of SGD on two layer neural nets
In this note, we consider an appropriately regularized $\ell_2$-empirical risk of depth-$2$ nets with any number of gates and show bounds on how the empirical loss evolves for Stochastic Gradient Descent (SGD) iterates on it, for arbitrary data, provided the activation is adequately smooth and bounded, like sigmoid and tanh. This, in turn, leads to a proof of global convergence of SGD for a special class of initializations. We also prove an exponentially fast convergence rate for continuous-time SGD, which also applies to smooth unbounded activations like SoftPlus. Our key idea is to show the existence of Frobenius-norm-regularized loss functions on constant-sized neural nets that are ‘Villani functions’, and thereby to build on recent progress in analyzing SGD on such objectives. Most critically, the amount of regularization required for our analysis is independent of the size of the net.
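As a concrete illustration of the setting described above, here is a minimal sketch of mini-batch SGD on a Frobenius-norm-regularized $\ell_2$-empirical risk of a depth-$2$ tanh net. This is not the paper's code: the data, width, regularization constant `lam`, step size `eta`, and batch size are illustrative assumptions chosen only to make the objective and the update rule explicit.

```python
# Sketch (assumed setup, not from the paper): SGD on the regularized squared
# loss of a depth-2 net  f(x) = sum_k a_k * tanh(w_k . x)  with a Frobenius /
# l2 penalty on both layers.
import numpy as np

rng = np.random.default_rng(0)

n, d, p = 200, 10, 50                 # samples, input dimension, number of gates
X = rng.standard_normal((n, d))       # arbitrary inputs (illustrative)
y = rng.standard_normal(n)            # arbitrary labels (illustrative)

W = 0.1 * rng.standard_normal((p, d)) # inner-layer weights
a = 0.1 * rng.standard_normal(p)      # outer-layer weights
lam, eta, T, batch = 0.1, 1e-2, 5000, 16  # assumed hyperparameters

def reg_loss(W, a):
    """Empirical squared loss plus Frobenius/l2 regularization of both layers."""
    pred = np.tanh(X @ W.T) @ a
    return np.mean((pred - y) ** 2) + lam * (np.sum(W ** 2) + np.sum(a ** 2))

for t in range(T):
    idx = rng.choice(n, size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    H = np.tanh(Xb @ W.T)             # (batch, p) hidden activations
    r = H @ a - yb                    # residuals on the mini-batch
    # Gradients of the mini-batch squared loss plus the regularizer.
    grad_a = 2 * H.T @ r / batch + 2 * lam * a
    grad_W = 2 * ((r[:, None] * (1 - H ** 2)) * a).T @ Xb / batch + 2 * lam * W
    a -= eta * grad_a
    W -= eta * grad_W

print(f"final regularized empirical loss: {reg_loss(W, a):.4f}")
```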