A new smoothing-regularization approach for a maximum-likelihood estimation problem



Publisher: Springer Journals
Copyright: © 1994 Springer-Verlag New York Inc.
Subject: Mathematics; Calculus of Variations and Optimal Control; Optimization; Systems Theory, Control; Theoretical, Mathematical and Computational Physics; Mathematical Methods in Physics; Numerical and Computational Physics, Simulation
ISSN: 0095-4616
eISSN: 1432-0606
DOI: 10.1007/BF01189476

Abstract

We consider the problem min Σ_{i=1}^{m} (⟨a_i, x⟩ − b_i log⟨a_i, x⟩) subject to x ≥ 0, which occurs as a maximum-likelihood estimation problem in several areas, particularly in positron emission tomography. After noticing that this problem is equivalent to min d(b, Ax) subject to x ≥ 0, where d is the Kullback-Leibler information divergence and A, b are the matrix and vector with rows a_i and entries b_i, respectively, we suggest a regularized problem min d(b, Ax) + μ d(v, Sx), where μ is the regularization parameter, S is a smoothing matrix, and v is a fixed vector. We present a computationally attractive algorithm for the regularized problem, establish its convergence, and show that the regularized solutions, as μ goes to 0, converge to the particular solution of the original problem that minimizes a convex function related to d(v, Sx). We give convergence-rate results both for the regularized solutions and for their functional values.
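The equivalence asserted in the abstract follows if d is taken in the standard generalized (Csiszár) form of the Kullback-Leibler information divergence; the abstract does not spell this definition out, so the convention below is an assumption. Under it, d(b, Ax) differs from the original objective only by terms independent of x:

d(u, w) = \sum_{i=1}^{m} \Bigl( u_i \log \frac{u_i}{w_i} + w_i - u_i \Bigr),
\qquad
d(b, Ax) = \sum_{i=1}^{m} \bigl( \langle a_i, x \rangle - b_i \log \langle a_i, x \rangle \bigr)
         + \sum_{i=1}^{m} \bigl( b_i \log b_i - b_i \bigr),

so the two minimization problems share their minimizers over x ≥ 0.

The abstract does not reproduce the authors' algorithm. As a minimal sketch, the following Python code only evaluates the regularized objective d(b, Ax) + μ d(v, Sx) and minimizes it with a generic bound-constrained solver; the toy data, the identity smoothing matrix S, and the constant reference vector v are illustrative assumptions, not the paper's method.

import numpy as np
from scipy.optimize import minimize

def kl(u, w):
    # Generalized Kullback-Leibler divergence d(u, w) for nonnegative
    # vectors, with the convention 0 * log 0 = 0.
    mask = u > 0
    return np.sum(u[mask] * np.log(u[mask] / w[mask])) + np.sum(w - u)

def regularized_objective(x, A, b, S, v, mu):
    # d(b, Ax) + mu * d(v, Sx): the smoothing-regularized problem.
    return kl(b, A @ x) + mu * kl(v, S @ x)

# Toy problem (all names and values here are illustrative assumptions).
rng = np.random.default_rng(0)
m, n = 30, 10
A = rng.random((m, n))
x_true = rng.random(n) + 0.1
b = A @ x_true                 # consistent data, so d(b, A x_true) = 0
S = np.eye(n)                  # placeholder smoothing matrix
v = np.full(n, x_true.mean())  # fixed reference vector
mu = 1e-2

res = minimize(regularized_objective, x0=np.ones(n), args=(A, b, S, v, mu),
               bounds=[(1e-12, None)] * n,  # x >= 0, kept strictly positive for the logs
               method="L-BFGS-B")
print("regularized solution:", res.x)

Driving μ toward 0 in this sketch moves the minimizer toward a maximum-likelihood solution of the unregularized problem, which is the limit behavior the paper quantifies with its convergence-rate results.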

Journal

Applied Mathematics and Optimization (Springer Journals)

Published: Feb 3, 2005
