P. Eggermont (1990)
Multiplicative iterative algorithms for convex programming. Linear Algebra and its Applications, 130
Y. Vardi, L. Shepp, L. Kaufman (1985)
A Statistical Model for Positron Emission Tomography. Journal of the American Statistical Association, 80
A. Iusem (1991)
Convergence analysis for a multiplicatively relaxed EM algorithm. Mathematical Methods in the Applied Sciences, 14
T. Cover (1984)
An algorithm for maximizing expected log investment return. IEEE Transactions on Information Theory, 30
A. Iusem (1993)
On the convergence of iterative methods for symmetric linear complementarity problems. Mathematical Programming, 59
I. Csiszár, G. Tusnády (1984)
Information geometry and alternating minimization procedures. Statistics and Decisions, Suppl. 1
J. Kiusalaas (2015)
Introduction to Optimization. Applied Evolutionary Algorithms for Engineers Using Python
A. Ostrowski (1967)
Solution of equations and systems of equations. Mathematics of Computation, 21
A. P. Dempster, N. M. Laird, D. B. Rubin (1977)
Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 39
A. Iusem, M. Teboulle (1993)
A regularized dual-based iterative method for a class of image reconstruction problems. Inverse Problems, 9
A. N. Iusem (1992)
A short convergence proof of the EM algorithm for a specific Poisson model. Rev. Brasileira Probab. Estatistica, 6
N. Karmarkar (1984)
A new polynomial-time algorithm for linear programming. Combinatorica, 4
J. Pang (1984)
Necessary and sufficient conditions for the convergence of iterative methods for the linear complementarity problem. Journal of Optimization Theory and Applications, 42
J. Dennis, R. Schnabel (1983)
Numerical methods for unconstrained optimization and nonlinear equations
We analyze an algorithm for the problem min f(x) s.t. x ≥ 0, suggested without convergence proof by Eggermont. The iterative step is x_j^{k+1} = x_j^k (1 − λ_k ∇f(x^k)_j), with λ_k > 0 determined through a line search. This method can be seen as a natural extension of the steepest descent method for unconstrained optimization, and we establish convergence properties similar to those known for steepest descent, namely weak convergence to a KKT point for a general f, weak convergence to a solution for a convex f, and full convergence to the solution for a strictly convex f. Applying this method to a maximum likelihood estimation problem, we obtain an additively overrelaxed version of the EM algorithm. We extend the full convergence results known for EM to this overrelaxed version by establishing local Fejér monotonicity to the solution set.
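The multiplicative update described in the abstract can be sketched in a few lines. The following is an illustrative Python implementation under stated assumptions: the step-size cap and the simple backtracking rule used to pick λ_k are placeholders for the paper's line search, not the authors' actual procedure, and the function names (`multiplicative_descent`, `f`, `grad_f`) are hypothetical.

```python
import numpy as np

def multiplicative_descent(f, grad_f, x0, max_iter=200, tol=1e-8):
    """Sketch of the multiplicative iteration for min f(x) s.t. x >= 0.

    Update: x_j <- x_j * (1 - lam * grad_f(x)_j).
    Each factor stays positive as long as lam is small enough, so
    iterates remain nonnegative. The backtracking rule below is an
    assumed stand-in for the line search described in the paper.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        # Cap lam so every factor (1 - lam * g_j) stays >= 1/2 where g_j > 0.
        pos = g > 0
        lam = 0.5 / g[pos].max() if pos.any() else 1.0
        # Halve lam until f strictly decreases (assumed acceptance rule).
        while lam > 1e-16 and f(x * (1 - lam * g)) >= f(x):
            lam *= 0.5
        x_new = x * (1 - lam * g)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Note that a coordinate starting at zero stays at zero under this update, which is why the convergence statements in the abstract are phrased in terms of KKT points rather than unconstrained minimizers.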
Acta Applicandae Mathematicae – Springer Journals
Published: Dec 30, 2004