An interior point multiplicative method for optimization under positivity constraints

Acta Applicandae Mathematicae, Volume 38 (2) – Dec 30, 2004

Publisher: Springer Journals
Subject: Mathematics; Computational Mathematics and Numerical Analysis; Applications of Mathematics; Partial Differential Equations; Probability Theory and Stochastic Processes; Calculus of Variations and Optimal Control; Optimization
ISSN: 0167-8019
eISSN: 1572-9036
DOI: 10.1007/BF00992845

Abstract

We analyze an algorithm for the problem $\min f(x)$ s.t. $x \geq 0$, suggested, without convergence proof, by Eggermont. The iterative step is given by $x_j^{k+1} = x_j^k \bigl(1 - \lambda_k \nabla f(x^k)_j\bigr)$, with $\lambda_k > 0$ determined through a line search. This method can be seen as a natural extension of the steepest descent method for unconstrained optimization, and we establish convergence properties similar to those known for steepest descent, namely weak convergence to a KKT point for a general $f$, weak convergence to a solution for convex $f$, and full convergence to the solution for strictly convex $f$. Applying this method to a maximum likelihood estimation problem, we obtain an additively overrelaxed version of the EM algorithm. We extend the full convergence results known for EM to this overrelaxed version by establishing local Fejér monotonicity to the solution set.
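To make the iteration concrete, the following is a minimal Python sketch, not the paper's exact procedure: an Armijo backtracking line search stands in for the step-size rule (the abstract only says $\lambda_k$ comes from a line search), and a cap of $0.99 / \max_j \nabla f(x^k)_j$ on $\lambda_k$ keeps every factor $1 - \lambda_k \nabla f(x^k)_j$ positive, so the iterates stay in the interior of the positive orthant. The names multiplicative_descent, f, and grad_f, as well as the quadratic test problem, are illustrative assumptions.

```python
import numpy as np

def multiplicative_descent(f, grad_f, x0, max_iter=500, tol=1e-8):
    """Multiplicative iteration x_j <- x_j * (1 - lam * grad f(x)_j) for
    min f(x) s.t. x >= 0, started strictly inside the positive orthant."""
    x = np.asarray(x0, dtype=float)
    assert np.all(x > 0), "x0 must be strictly positive"
    for _ in range(max_iter):
        g = grad_f(x)
        d = -x * g                    # equivalent additive direction: d_j = -x_j g_j
        if np.linalg.norm(d) < tol:   # d = 0 exactly at a KKT point
            break
        # Largest step keeping every factor (1 - lam * g_j) positive.
        pos = g > 0
        lam = 0.99 / g[pos].max() if pos.any() else 1.0
        # Armijo backtracking line search (an illustrative choice).
        fx = f(x)
        while f(x + lam * d) > fx + 1e-4 * lam * g.dot(d):
            lam *= 0.5
        x = x * (1.0 - lam * g)       # the multiplicative update
    return x

# Strictly convex quadratic whose unconstrained minimizer (-0.8, 1.4) leaves
# the orthant; the constrained solution is the boundary KKT point (0, 1).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -2.0])
x_star = multiplicative_descent(lambda x: 0.5 * x @ A @ x + b @ x,
                                lambda x: A @ x + b,
                                np.array([1.0, 1.0]))
print(x_star)   # approximately [0., 1.]
```

Written additively, the step is $x^{k+1} - x^k = -\lambda_k\, x^k \odot \nabla f(x^k)$, i.e. a steepest descent step rescaled coordinate-wise by the current iterate, which is why coordinates pinned near zero move slowly while the method behaves like steepest descent in the interior.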

Journal

Acta Applicandae Mathematicae, Springer Journals

Published: Dec 30, 2004
