A. Auslender, R. Cominetti, J. Crouzeix (1993)
Convex Functions with Unbounded Level Sets and Applications to Duality Theory, SIAM Journal on Optimization, 3
O. Güler (1991)
On the Convergence of the Proximal Point Algorithm for Convex Minimization, SIAM Journal on Control and Optimization, 29
M. Todd (1989)
On Convergence Properties of Algorithms for Unconstrained Minimization, IMA Journal of Numerical Analysis, 9
J. Bonnans, J. Gilbert, C. Lemaréchal, C. Sagastizábal (1995)
A Family of Variable Metric Proximal Methods, Mathematical Programming, 68
K. Kiwiel (1983)
An Aggregate Subgradient Method for Nonsmooth Convex Minimization, Mathematical Programming, 27
A. Auslender (1993)
Convergence of Stationary Sequences for Variational Inequalities with Maximal Monotone Operators, Applied Mathematics and Optimization, 28
R. Rockafellar (1976)
Monotone Operators and the Proximal Point Algorithm, SIAM Journal on Control and Optimization, 14
B. Lemaire (1992)
About the Convergence of the Proximal Method
R. Correa, C. Lemaréchal (1993)
Convergence of Some Algorithms for Convex Minimization, Mathematical Programming, 62
A. Auslender, J. Crouzeix (1989)
Well Behaved Asymptotical Convex Functions, Annales de l'Institut Henri Poincaré - Analyse Non Linéaire, 6
Z. Wei, L. Qi (1996)
Convergence Analysis of a Proximal Newton Method, Numerical Functional Analysis and Optimization, 17
S. Wu (1992)
Convergence Properties of Descent Methods for Unconstrained Minimization, Optimization, 26
Abstract. Based on the notion of the ε-subgradient, we present a unified technique to establish convergence properties of several methods for nonsmooth convex minimization problems. Starting from the technical results, we obtain the global convergence of: (i) the variable metric proximal methods presented by Bonnans, Gilbert, Lemaréchal, and Sagastizábal, (ii) some algorithms proposed by Correa and Lemaréchal, and (iii) the proximal point algorithm given by Rockafellar. In particular, we prove that the Rockafellar-Todd phenomenon does not occur for each of the above mentioned methods. Moreover, we explore the convergence rate of {||x_k||} and {f(x_k)} when {x_k} is unbounded and {f(x_k)} is bounded for the nonsmooth minimization methods (i), (ii), and (iii).
Applied Mathematics and Optimization – Springer Journals
Published: Oct 1, 1998
Keywords: Nonsmooth convex minimization, Global convergence, Convergence rate. AMS Classification: 90C25, 90C30, 90C33.
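The proximal point algorithm of Rockafellar, cited in item (iii) of the abstract, generates iterates x_{k+1} = argmin_x { f(x) + (1/(2c_k)) ||x - x_k||^2 }. Below is a minimal illustrative sketch in Python, not the paper's analysis: it applies the iteration to the nonsmooth convex function f(x) = |x|, whose proximal map is the well-known soft-thresholding operator. The function names and step choices are this sketch's own assumptions.

```python
import numpy as np

def prox_abs(v, c):
    # Proximal operator of f(x) = |x| with step c:
    # argmin_x |x| + (1/(2c)) (x - v)^2, i.e. soft-thresholding.
    return np.sign(v) * max(abs(v) - c, 0.0)

def proximal_point(x0, c=1.0, iters=50):
    # Classical proximal point iteration x_{k+1} = prox_{c f}(x_k)
    # with a constant step c (the general method allows varying c_k).
    x = x0
    for _ in range(iters):
        x = prox_abs(x, c)
    return x

print(proximal_point(5.0))  # iterates shrink toward the minimizer 0 of |x|
```

With a constant step c = 1, each iteration moves the iterate one unit toward the origin and then stays at the minimizer, illustrating the global convergence that the paper establishes in far greater generality.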