Convergence Properties of the Regularized Newton Method for the Unconstrained Nonconvex Optimization


Publisher: Springer Journals
Copyright: © 2010 Springer Science+Business Media, LLC
Subject: Mathematics; Numerical and Computational Physics; Mathematical Methods in Physics; Theoretical, Mathematical and Computational Physics; Systems Theory, Control; Calculus of Variations and Optimal Control; Optimization
ISSN: 0095-4616
eISSN: 1432-0606
DOI: 10.1007/s00245-009-9094-9

Abstract

The regularized Newton method (RNM) is one of the efficient solution methods for unconstrained convex optimization. It is well known that the RNM has good convergence properties compared with the steepest descent method and the pure Newton method. For example, Li, Fukushima, Qi and Yamashita showed that the RNM has a quadratic rate of convergence under the local error bound condition. Recently, Polyak showed that the global complexity bound of the RNM, i.e., a bound on the first iteration k such that ‖∇f(xₖ)‖ ≤ ε, is O(ε⁻⁴), where f is the objective function and ε is a given positive constant. In this paper, we consider an RNM extended to unconstrained "nonconvex" optimization. We show that the extended RNM (E-RNM) has the following properties: (a) the E-RNM is globally convergent under appropriate conditions; (b) the global complexity bound of the E-RNM is O(ε⁻²) if ∇²f is Lipschitz continuous on a certain compact set; (c) the E-RNM has a superlinear rate of convergence under the local error bound condition.
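To make the iteration concrete, the following is a minimal Python sketch of a regularized Newton method of the kind the abstract describes: the Hessian is shifted by a parameter μₖ built from its most negative eigenvalue (so the shifted matrix is positive definite even at nonconvex points) and a power of the gradient norm (so the shift vanishes near a stationary point, preserving fast local convergence), with an Armijo line search to globalize the step. The constants c1, c2, delta and the line-search rule here are illustrative assumptions, not necessarily the exact choices analyzed in the paper.

```python
import numpy as np

def regularized_newton(f, grad, hess, x0, eps=1e-6, c1=1.0, c2=1.0,
                       delta=1.0, rho=0.5, sigma=1e-4, max_iter=1000):
    """Sketch of a regularized Newton method for nonconvex minimization."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:                    # stop once ||grad f(x_k)|| <= eps
            return x, k
        H = hess(x)
        lam_min = np.linalg.eigvalsh(H)[0]  # smallest Hessian eigenvalue
        # Shift the Hessian: positive definite even where f is nonconvex,
        # and the shift tends to zero as the gradient vanishes.
        mu = c1 * max(0.0, -lam_min) + c2 * gnorm**delta
        d = np.linalg.solve(H + mu * np.eye(x.size), -g)  # regularized step
        t, fx = 1.0, f(x)                   # Armijo backtracking line search
        while f(x + t * d) > fx + sigma * t * g.dot(d):
            t *= rho
        x = x + t * d
    return x, max_iter

# Example: minimize the (nonconvex) Rosenbrock function from a standard start.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * x[1] + 1200 * x[0]**2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
x_star, iters = regularized_newton(f, grad, hess, np.array([-1.2, 1.0]))
```

With a shift of this form, the method behaves like a damped gradient step where the Hessian is indefinite, while near a minimizer the shift decays with ‖∇f(xₖ)‖, which is what makes superlinear local rates of the sort stated in property (c) plausible.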

Journal: Applied Mathematics and Optimization, Springer Journals

Published: Aug 1, 2010
