Steepest Descent Methods with Generalized Distances for Constrained Optimization


Acta Applicandae Mathematicae , Volume 46 (2) – Oct 15, 2004


References (20)

Publisher
Springer Journals
Copyright
Copyright © 1997 by Kluwer Academic Publishers
Subject
Mathematics; Mathematics, general; Computer Science, general; Theoretical, Mathematical and Computational Physics; Complex Systems; Classical Mechanics
ISSN
0167-8019
eISSN
1572-9036
DOI
10.1023/A:1005721827841

Abstract

We consider the problem $$ \min f(x) $$ s.t. $$ x \in C $$, where C is a closed and convex subset of $$ \mathbb{R}^n $$ with nonempty interior, and introduce a family of interior point methods for this problem, which can be seen as approximate versions of generalized proximal point methods. Each step consists of a one-dimensional search along either a curve or a segment in the interior of C. The information about the boundary of C is contained in a generalized distance, which defines the curve or the segment and whose gradient diverges at the boundary of C. The objective of the search is either f or f plus a regularizing term. When $$ C = \mathbb{R}^n $$, the usual steepest descent method is a particular case of our general scheme, and we extend known convergence results for the steepest descent method to our family: for nonregularized one-dimensional searches, under a level set boundedness assumption on f, the sequence is bounded, the difference between consecutive iterates converges to 0, and every cluster point of the sequence satisfies first-order optimality conditions for the problem, i.e. is a solution if f is convex. For the regularized search and convex f, no boundedness condition on f is needed, and full and global convergence of the sequence to a solution of the problem is established.
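For intuition, here is a minimal sketch of one well-known instance of such a scheme, under assumptions not taken from the abstract itself: take C to be the positive orthant and the generalized distance to be the entropy-like Kullback–Leibler distance, whose gradient diverges at the boundary. The resulting search curve is the multiplicative (exponentiated-gradient) curve x(t) = x · exp(−t ∇f(x)), which stays in the interior of C for all t, and the nonregularized search minimizes f along it. The function names and the toy objective below are illustrative choices, not from the paper.

```python
import numpy as np

def entropy_curve_step(f, grad_f, x, ts):
    """One iteration of a nonregularized interior search (illustrative sketch).

    The curve x(t) = x * exp(-t * grad_f(x)) solves
    grad_1 d(x(t), x) = -t * grad_f(x) for the entropy distance
    d(u, v) = sum(u * log(u/v) - u + v); it never leaves the
    positive orthant. We pick t from a crude grid by minimizing f.
    """
    g = grad_f(x)
    best_t = min(ts, key=lambda t: f(x * np.exp(-t * g)))
    return x * np.exp(-best_t * g)

# Toy problem (illustrative): minimize f(x) = ||x - c||^2 over x > 0,
# where the minimizer c lies in the interior of the positive orthant.
c = np.array([2.0, 0.5])
f = lambda x: np.sum((x - c) ** 2)
grad_f = lambda x: 2.0 * (x - c)

x = np.array([1.0, 1.0])          # interior starting point
ts = np.linspace(1e-4, 1.0, 200)  # grid for the one-dimensional search
for _ in range(100):
    x = entropy_curve_step(f, grad_f, x, ts)
```

Every iterate remains strictly positive by construction, and on this convex toy problem the sequence approaches the minimizer c, matching the behavior the abstract describes for cluster points under the nonregularized search.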

Journal

Acta Applicandae Mathematicae, Springer Journals

Published: Oct 15, 2004