Advances in Computational Mathematics, volume 45, issue 5-6, pages 2745-2770
Fast and strong convergence of online learning algorithms
Zheng-Chu Guo, Lei Shi
Publication type: Journal Article
Publication date: 2019-06-06
scimago Q1
SJR: 0.995
CiteScore: 3.0
Impact factor: 1.7
ISSN: 1019-7168, 1572-9044
Computational Mathematics
Applied Mathematics
Abstract
In this paper, we study an online learning algorithm without explicit regularization terms. The algorithm is essentially a stochastic gradient descent scheme in a reproducing kernel Hilbert space (RKHS). A polynomially decaying step size in each iteration plays the role of regularization and ensures the generalization ability of the algorithm. We develop a novel capacity-dependent analysis of the performance of the last iterate of the online learning algorithm, which answers an open problem in learning theory. The contribution of this paper is twofold. First, our capacity-dependent analysis leads to sharp convergence rates in the standard mean square distance, improving the results in the literature. Second, we establish, for the first time, the strong convergence of the last iterate in the RKHS norm under polynomially decaying step sizes. We demonstrate that the theoretical analysis established in this paper fully exploits the fine structure of the underlying RKHS, and thus leads to sharp error estimates for the online learning algorithm.
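The abstract's scheme can be illustrated with a short sketch. This is not the authors' code: it is a minimal, assumed implementation of unregularized online least-squares SGD in an RKHS, where the iterate f_t = sum_i c_i K(x_i, ·) is stored via its expansion coefficients and the step size decays polynomially, eta_t = eta1 * t^(-theta). The Gaussian kernel, the bandwidth sigma, and the parameters eta1 and theta are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # Gaussian RBF kernel on R^d; sigma is a hypothetical bandwidth choice
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def online_kernel_sgd(X, y, eta1=0.5, theta=0.5, sigma=1.0):
    """Unregularized online SGD in an RKHS with polynomially decaying
    step size eta_t = eta1 * t**(-theta).

    The iterate f_t = sum_i coeffs[i] * K(X[i], .) is represented by its
    coefficients on the samples seen so far; each update appends one term:
        f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .)
    """
    coeffs = []
    for t, (x_t, y_t) in enumerate(zip(X, y), start=1):
        # evaluate the current iterate at the new sample
        f_xt = sum(c * gaussian_kernel(xi, x_t, sigma)
                   for c, xi in zip(coeffs, X))
        eta_t = eta1 * t ** (-theta)
        # least-squares gradient step adds one expansion coefficient
        coeffs.append(-eta_t * (f_xt - y_t))
    return np.array(coeffs)

def predict(X_train, coeffs, x, sigma=1.0):
    # evaluate the final iterate f_T at a new point x
    return sum(c * gaussian_kernel(xi, x, sigma)
               for c, xi in zip(coeffs, X_train))
```

The decaying step size is what substitutes for an explicit regularization term: early iterations take large steps, while later ones change the iterate less and less, implicitly controlling the RKHS norm of the output.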