Journal of Complexity, volume 87, pages 101920

A revisit on Nesterov acceleration for linear ill-posed problems

Duo Liu
Qin Huang
Qinian Jin
Publication type: Journal Article
Publication date: 2025-04-01
scimago Q1
wos Q1
SJR: 1.115
CiteScore: 3.1
Impact factor: 1.8
ISSN: 0885-064X, 1090-2708

Jin Q.
Numerische Mathematik scimago Q1 wos Q1
2022-06-22 · citations by CoLab: 9 · Abstract:
In this paper we consider a dual gradient method for solving linear ill-posed problems $Ax = y$, where $A : X \rightarrow Y$ is a bounded linear operator from a Banach space X to a Hilbert space Y. A strongly convex penalty function is used in the method to select a solution with the desired features. Under variational source conditions on the sought solution, convergence rates are derived when the method is terminated by either an a priori stopping rule or the discrepancy principle. We also consider an acceleration of the method as well as its various applications.
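A minimal NumPy sketch may make the dual gradient iteration above concrete. It uses the Hilbert-space special case with the strongly convex penalty R(x) = 0.5*||x||^2 plus a nonnegativity constraint, for which grad R*(xi) = max(xi, 0); the function name, the constant step size and the discrepancy-principle parameter tau are illustrative assumptions, not choices made in the paper.

import numpy as np

def dual_gradient(A, y_delta, delta, tau=1.05, max_iter=5000):
    """Hilbert-space sketch of a dual gradient iteration with the penalty
    R(x) = 0.5*||x||^2 + indicator(x >= 0), so that grad R*(xi) = max(xi, 0)
    selects nonnegativity as the desired solution feature.  Terminated by
    the discrepancy principle ||A x_k - y_delta|| <= tau * delta."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2      # step size <= 1/||A||^2
    xi = np.zeros(A.shape[1])                 # dual variable
    x = np.maximum(xi, 0.0)                   # x_k = grad R*(xi_k)
    for _ in range(max_iter):
        residual = A @ x - y_delta
        if np.linalg.norm(residual) <= tau * delta:
            break                             # discrepancy principle
        xi = xi - mu * (A.T @ residual)       # gradient step on the dual variable
        x = np.maximum(xi, 0.0)
    return x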
Kindermann S.
Inverse Problems scimago Q1 wos Q2
2021-05-04 · citations by CoLab: 16 · Abstract:
We show that Nesterov acceleration is an optimal-order iterative regularization method for linear ill-posed problems provided that a parameter is chosen according to the smoothness of the solution. This result is proven both for an a priori stopping rule and for the discrepancy principle under Hölder source conditions. Furthermore, some converse results and logarithmic rates are verified. The essential tool to obtain these results is a representation of the residual polynomials via Gegenbauer polynomials.
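In the linear Hilbert-space setting of this abstract, Nesterov acceleration of Landweber iteration takes the familiar two-point form sketched below. The momentum weight (k-1)/(k+alpha-1) is the standard choice, with alpha playing the role of the parameter tied to the smoothness of the solution; the fixed step size, the constant tau and the function names are illustrative assumptions rather than values prescribed by the paper.

import numpy as np

def nesterov_landweber(A, y_delta, delta, alpha=3.0, tau=1.05, max_iter=5000):
    """Sketch of Nesterov-accelerated Landweber iteration for A x = y with
    noisy data y_delta, ||y - y_delta|| <= delta, stopped by the discrepancy
    principle.  alpha is the acceleration parameter mentioned in the abstract."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2                       # step size <= 1/||A||^2
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, max_iter + 1):
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            break                                              # discrepancy principle
        z = x + (k - 1.0) / (k + alpha - 1.0) * (x - x_prev)   # Nesterov extrapolation
        x_prev, x = x, z - mu * (A.T @ (A @ z - y_delta))      # Landweber step at z
    return x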
Zhong M., Wang W., Jin Q.
Numerische Mathematik scimago Q1 wos Q1
2019-07-29 · citations by CoLab: 33 · Abstract:
In this paper, we propose and analyze a two-point gradient method for solving inverse problems in Banach spaces which is based on the Landweber iteration and an extrapolation strategy. The method allows the use of non-smooth penalty terms, including $L^1$-like and total variation-like penalty functionals, which are significant in reconstructing special features of solutions such as sparsity and piecewise constancy in practical applications. The design of the method involves the choice of the step sizes and the combination parameters, which is carefully discussed. Numerical simulations are presented to illustrate the effectiveness of the proposed method.
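A much simplified, finite-dimensional sketch of this two-point structure with an L1-like penalty follows. It uses R(x) = 0.5*||x||^2 + beta*||x||_1, whose conjugate gradient is soft-thresholding, and a constant combination parameter lam as a crude placeholder for the carefully chosen sequences analyzed in the paper; all names and defaults are illustrative.

import numpy as np

def soft_threshold(xi, beta):
    """grad R*(xi) for R(x) = 0.5*||x||^2 + beta*||x||_1 (sparsity-promoting)."""
    return np.sign(xi) * np.maximum(np.abs(xi) - beta, 0.0)

def two_point_gradient(A, y_delta, delta, beta=0.01, lam=0.5, tau=1.05, max_iter=5000):
    """Simplified sketch of a two-point gradient iteration with an L1-like
    penalty: extrapolate the dual variable, take a gradient step on the
    residual, and map back through soft-thresholding.  The constant lam and
    the fixed step size are placeholders, not the paper's parameter choices."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    xi_prev = xi = np.zeros(A.shape[1])
    for _ in range(max_iter):
        zeta = xi + lam * (xi - xi_prev)              # two-point extrapolation
        z = soft_threshold(zeta, beta)
        residual = A @ z - y_delta
        if np.linalg.norm(residual) <= tau * delta:
            return z                                  # discrepancy principle
        xi_prev, xi = xi, zeta - mu * (A.T @ residual)
    return soft_threshold(xi, beta)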
Hubmer S., Ramlau R.
Inverse Problems scimago Q1 wos Q2
2018-07-12 · citations by CoLab: 24 · Abstract:
In this paper, we consider Nesterov's accelerated gradient method for solving nonlinear inverse and ill-posed problems. Known to be a fast gradient-based iterative method for solving well-posed convex optimization problems, this method also leads to promising results for ill-posed problems. Here, we provide a convergence analysis of this method for ill-posed problems based on the assumption of a locally convex residual functional. Furthermore, we demonstrate the usefulness of the method on a number of numerical examples based on a nonlinear diagonal operator and on an inverse problem in auto-convolution.
Hubmer S., Ramlau R.
Inverse Problems scimago Q1 wos Q2
2017-08-01 · citations by CoLab: 45 · Abstract:
We perform a convergence analysis of a two-point gradient method which is based on Landweber iteration and on Nesterov's acceleration scheme. Additionally, we show the usefulness of this method via two numerical example problems based on a nonlinear Hammerstein operator and on the nonlinear inverse problem of single photon emission computed tomography.
Neubauer A.
2017-01-01 · citations by CoLab: 37 · Abstract:
In this paper we deal with Nesterov acceleration and show that it speeds up Landweber iteration when applied to linear ill-posed problems. It is proven that, if the exact solution …
Jin Q.
Inverse Problems scimago Q1 wos Q2
2016-08-05 · citations by CoLab: 36 · Abstract:
In recent years the Landweber(-Kaczmarz) method has been proposed for solving nonlinear ill-posed inverse problems in Banach spaces using general convex penalty functions. The implementation of this method involves solving a (non-smooth) convex minimization problem at each iteration step, and the existing theory requires its exact resolution, which is in general impossible in practical applications. In this paper we propose a version of the Landweber-Kaczmarz method in Banach spaces in which the minimization problem involved in each iteration step is solved inexactly. Based on the $\varepsilon$-subdifferential calculus we give a convergence analysis of our method. Furthermore, using Nesterov's strategy, we propose a possible accelerated version of the Landweber-Kaczmarz method. Numerical results on computed tomography and parameter identification in partial differential equations are provided to support our theoretical results and to demonstrate our accelerated method.
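To show the Kaczmarz-type cycling that underlies this family of methods, here is a classical Hilbert-space Landweber-Kaczmarz sweep over a collection of subproblems A_i x = y_i. The Banach-space machinery, general convex penalties and inexact inner solver of the paper are not reproduced, and all names and parameters are illustrative.

import numpy as np

def landweber_kaczmarz(A_blocks, y_blocks, deltas, tau=1.05, n_sweeps=200):
    """Classical Hilbert-space Landweber-Kaczmarz sketch: cycle over the
    subproblems A_i x = y_i, applying one Landweber step per subproblem and
    skipping a subproblem once its residual passes the discrepancy test."""
    x = np.zeros(A_blocks[0].shape[1])
    steps = [1.0 / np.linalg.norm(A_i, 2) ** 2 for A_i in A_blocks]
    for _ in range(n_sweeps):
        updated = False
        for A_i, y_i, d_i, mu_i in zip(A_blocks, y_blocks, deltas, steps):
            residual = A_i @ x - y_i
            if np.linalg.norm(residual) > tau * d_i:
                x = x - mu_i * (A_i.T @ residual)   # Landweber step on block i
                updated = True
        if not updated:                             # every block meets the discrepancy test
            break
    return x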
Hansen P.C., Saxild-Hansen M.
2012-02-01 · citations by CoLab: 237 · Abstract:
We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter and the stopping rule. The relaxation parameter can be fixed, or chosen adaptively in each iteration; in the former case we provide a new “training” algorithm that finds the optimal parameter for a given test problem. The stopping rules provided are the discrepancy principle, the monotone error rule, and the NCP criterion; for the first two methods “training” can be used to find the optimal discrepancy parameter.
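The package itself is a MATLAB toolbox; as a rough NumPy illustration of one SIRT variant it covers, the sketch below runs Cimmino's method with a fixed relaxation parameter and the discrepancy principle as stopping rule. The adaptive relaxation strategies, training procedures and other stopping rules mentioned in the abstract are not reproduced, and the function name and defaults are assumptions.

import numpy as np

def cimmino_sirt(A, b, delta, relax=1.0, tau=1.05, max_iter=2000):
    """NumPy sketch of Cimmino's method (a SIRT-type simultaneous row-action
    iteration) with a fixed relaxation parameter, stopped by the discrepancy
    principle.  Assumes every row of A is nonzero."""
    m, n = A.shape
    row_norms_sq = np.sum(A * A, axis=1)
    D = 1.0 / (m * row_norms_sq)                  # Cimmino row weights
    x = np.zeros(n)
    for _ in range(max_iter):
        residual = b - A @ x
        if np.linalg.norm(residual) <= tau * delta:
            break                                 # discrepancy principle
        x = x + relax * (A.T @ (D * residual))    # simultaneous row update
    return x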
Natterer F., Wang G.
Medical Physics scimago Q1 wos Q1
2002-01-01 · citations by CoLab: 37 · Abstract:
The Mathematics of Computerized Tomography covers the relevant mathematical theory of the Radon transform and related transforms and also studies more practical questions such as stability, sampling, resolution, and accuracy. Quite a bit of attention is given to the derivation, analysis, and practical examination of reconstruction algorithms, for both standard problems and problems with incomplete data.
Hanke M.
Numerische Mathematik scimago Q1 wos Q1
1991-12-01 · citations by CoLab: 123 · Abstract:
In this paper, the potentials of so-called linear semiiterative methods are considered for the approximate solution of linear ill-posed problems and ill-conditioned matrix equations. Several efficient two-step methods are presented, most of which have been introduced earlier in the literature. Stipulating certain conditions concerning the smoothness of the solution, a notion of optimal speed of convergence may be formulated. Various direct and converse results are derived to illustrate the properties of this concept. If the problem's right hand side data are contaminated by noise, semiiterative methods may be used as regularization methods. Assuming optimal rate of convergence of the iteration for the unperturbed problem, the regularized approximations will be of order optimal accuracy. To derive these results, specific properties of polynomials are used in connection with the basic theory of solving ill-posed problems. Rather recent results on fast decreasing polynomials are applied to answer an open question of Brakhage. Numerical examples are given including a comparison to the method of conjugate gradients.
Jin Q.
Inverse Problems scimago Q1 wos Q2
2025-01-23 · citations by CoLab: 2 · Abstract:
Nesterov's acceleration strategy is renowned for speeding up the convergence of gradient-based optimization algorithms and has been crucial in developing fast first-order methods for well-posed convex optimization problems. Although Nesterov's accelerated gradient method has been adapted as an iterative regularization method for solving ill-posed inverse problems, no general convergence theory is available except for some special instances. In this paper, we develop an adaptive Nesterov momentum method for solving ill-posed inverse problems in Banach spaces, where the step sizes and momentum coefficients are chosen through adaptive procedures with explicit formulas. Additionally, uniformly convex regularization functions are incorporated to detect the features of sought solutions. Under standard conditions, we establish the regularization property of our method when terminated by the discrepancy principle. Various numerical experiments demonstrate that our method outperforms the Landweber-type method in terms of the required number of iterations and the computational time.
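The explicit adaptive formulas for the step sizes and momentum coefficients are this paper's contribution and are not reproduced here. Purely as a structural illustration, the sketch below combines a Nesterov-type momentum weight (k-1)/(k+2) with the generic steepest-descent step size mu_k = ||A^T r_k||^2 / ||A A^T r_k||^2 and stops by the discrepancy principle; both choices are stand-ins, not the paper's.

import numpy as np

def adaptive_nesterov(A, y_delta, delta, tau=1.05, max_iter=5000):
    """Hedged sketch of a momentum iteration with an adaptively chosen step
    size (steepest-descent formula) and a standard Nesterov-type momentum
    weight, terminated by the discrepancy principle."""
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, max_iter + 1):
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            break                                      # discrepancy principle
        z = x + (k - 1.0) / (k + 2.0) * (x - x_prev)   # momentum/extrapolation
        g = A.T @ (A @ z - y_delta)                    # gradient at z
        mu = (g @ g) / max((A @ g) @ (A @ g), 1e-30)   # adaptive (steepest-descent) step
        x_prev, x = x, z - mu * g
    return x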
