D. Luenberger (1973)
Linear and Nonlinear Programming, Addison-Wesley, Reading, Massachusetts
J. Cea, R. Glowinski (1973)
Sur des méthodes d'optimisation par relaxation, 7
R. Rockafellar (1976)
Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming, Math. Oper. Res., 1
H. Howson, N. Sancho (1975)
A new algorithm for the solution of multi-state dynamic programming problems, Mathematical Programming, 8
G. McCormick, E. Polak (1972)
Computational methods in optimization: a unified approach, Mathematics of Computation, 26
O. Mangasarian, R. Leone (1987)
Parallel successive overrelaxation methods for symmetric linear complementarity problems and linear programs, Journal of Optimization Theory and Applications, 54
L. Grippo, M. Sciandrone (2000)
On the convergence of the block nonlinear Gauss-Seidel method under convex constraints, Oper. Res. Lett., 26
(1999)
Sparse Underdetermined ICA: Estimating the Mixing Matrix and the Sources Separately
O. Mangasarian (1969)
Nonlinear Programming
A. Auslender (1976)
Optimisation: méthodes numériques
J. Ortega, W. Rheinboldt (1970)
Iterative Solution of Nonlinear Equations in Several Variables
N. Zadeh (1970)
A Note on the Cyclic Coordinate Ascent Method, Management Science, 16
(1970)
A Dynamic Programming Successive Approximations Technique with Convergence Proofs
R. Sargent, D. Sebastian (1973)
On the convergence of sequential minimization algorithms, Journal of Optimization Theory and Applications, 12
K. Kiwiel (1997)
Free-Steering Relaxation Methods for Problems with Strictly Convex Costs and Linear Constraints, Math. Oper. Res., 22
P. Tseng (1993)
Dual coordinate ascent methods for non-strictly convex minimization, Mathematical Programming, 59
Dykstra's Algorithm with Bregman Projections: A Convergence Proof, Optimization (to appear)
D. Bertsekas (1999)
Nonlinear Programming, Athena Scientific
D. Bertsekas, J. Tsitsiklis (1989)
Parallel and distributed computation
D. Bertsekas, J. Tsitsiklis (1997)
Partial Solutions Manual, Parallel and Distributed Computation: Numerical Methods
S.-P. Han (1989)
A Decomposition Method and Its Application to Convex Programming, Math. Oper. Res., 14
P. Tseng (1990)
Dual ascent methods for problems with strictly convex costs and linear constraints: a unified approach, SIAM Journal on Control and Optimization, 28
A. Stachurski (2000)
Parallel Optimization: Theory, Algorithms and Applications, Scalable Comput. Pract. Exp., 3
J. Warga (1963)
Minimizing Certain Convex Functions, Journal of the Society for Industrial and Applied Mathematics, 11
S.-P. Han (1988)
A Successive Projection Method
B. Martinet (1972)
Détermination Approchée d'un Point Fixe d'une Application Pseudo-Contractante: Cas de l'Application Prox
T. Stern (1977)
A Class of Decentralized Routing Algorithms Using Relaxation, IEEE Trans. Commun., 25
S. Arimoto (1972)
An algorithm for computing the capacity of arbitrary discrete memoryless channels, IEEE Trans. Inf. Theory, 18
L. Bregman (1967)
The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming, USSR Computational Mathematics and Mathematical Physics, 7
Z. Zuo, C. Wu (1989)
Successive approximation technique for a class of large-scale NLP problems and its application to dynamic programming, Journal of Optimization Theory and Applications, 62
C. Hildreth (1957)
A quadratic programming procedure, Naval Research Logistics Quarterly, 4
S. Sardy, A. Bruce, P. Tseng (2000)
Block Coordinate Relaxation Methods for Nonparametric Wavelet Denoising, Journal of Computational and Graphical Statistics, 9
M. Zibulevsky, B. Pearlmutter (2000)
Blind source separation by sparse decomposition, 4056
M. Powell (1973)
On search directions for minimization algorithms, Mathematical Programming, 4
Z. Luo, P. Tseng (1993)
Error bounds and convergence analysis of feasible descent methods: a general approach, Annals of Operations Research, 46-47
R. Blahut (1972)
Computation of channel capacity and rate-distortion functions, IEEE Trans. Inf. Theory, 18
R. Rockafellar (1970)
Convex Analysis
We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function f(x_1, ..., x_N) with certain separability and regularity properties. Assuming that f is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either f is pseudoconvex in every pair of coordinate blocks from among N-1 coordinate blocks, or f has at most one minimum in each of N-2 coordinate blocks. If f is quasiconvex and hemivariate in every coordinate block, then the assumptions of continuity of f and compactness of the level set may be relaxed further. These results are applied to derive new (and old) convergence results for the proximal minimization algorithm, an algorithm of Arimoto and Blahut, and an algorithm of Han. They are applied also to a problem of blind source separation.
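The method the abstract analyzes can be sketched in a few lines of Python. The toy objective, the search bracket, and the iteration counts below are illustrative assumptions, not from the paper; the sketch cyclically minimizes over one coordinate at a time, using golden-section search as the inner solver, which is valid here because the objective is unimodal in each coordinate block, in the spirit of the paper's per-block quasiconvexity assumption.

```python
def argmin_1d(g, lo=-10.0, hi=10.0, iters=100):
    """Golden-section search: approximate minimizer of a unimodal g on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2
    c, d = hi - phi * (hi - lo), lo + phi * (hi - lo)
    for _ in range(iters):
        if g(c) <= g(d):
            # Minimizer lies in [lo, d]: shrink from the right.
            hi, d = d, c
            c = hi - phi * (hi - lo)
        else:
            # Minimizer lies in [c, hi]: shrink from the left.
            lo, c = c, d
            d = lo + phi * (hi - lo)
    return (lo + hi) / 2

def block_coordinate_descent(f, x0, sweeps=50):
    """Cyclically minimize f over one coordinate while holding the others fixed."""
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            # Exact (to solver tolerance) minimization over coordinate i.
            x[i] = argmin_1d(lambda t, i=i: f(x[:i] + [t] + x[i + 1:]))
    return x

# Toy objective (an illustrative choice, not from the paper): a smooth
# coupling term plus a separable nonsmooth term,
#   f(x1, x2) = (x1 - x2)^2 + (x1 - 1)^2 + 0.5*|x2|,
# whose unique minimizer is (0.75, 0.5).
def f_demo(x):
    return (x[0] - x[1]) ** 2 + (x[0] - 1) ** 2 + 0.5 * abs(x[1])

x = block_coordinate_descent(f_demo, [0.0, 0.0])
print(x)  # approximately [0.75, 0.5]
```

Note that the nonsmooth term 0.5*|x2| is separable across blocks; this matters because, as the paper's counterexamples for general nonsmooth f suggest, coordinate descent can stall at non-stationary points when the nondifferentiable part couples the blocks.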
Journal of Optimization Theory and Applications – Springer Journals
Published: Oct 9, 2004