R. Brayton, J. Cullum (1979). An algorithm for minimizing a differentiable function subject to box constraints and errors. Journal of Optimization Theory and Applications, 29.
E. Levitin, B. Polyak (1966). Constrained minimization methods. USSR Computational Mathematics and Mathematical Physics, 6.
Z. Luo, P. Tseng (1992). On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72.
Y. Cheng (1984). On the gradient-projection method for solving the nonsymmetric linear complementarity problem. Journal of Optimization Theory and Applications, 43.
G. McCormick, R. Tapia (1972). The gradient projection method under mild differentiability conditions. SIAM Journal on Control, 10.
J. Dunn (1967). On the classification of singular and nonsingular extremals for the Pontryagin maximum principle. Journal of Mathematical Analysis and Applications, 17.
A. Goldstein (1964). Convex programming in Hilbert space. Bulletin of the American Mathematical Society, 70.
E. Gafni, D. Bertsekas (1984). Two-metric projection methods for constrained optimization. SIAM Journal on Control and Optimization, 22.
D. Bertsekas (1974). On the Goldstein-Levitin-Polyak gradient projection method, 13.
K. Kiwiel, K. Murty (1996). Convergence of the steepest descent method for minimizing quasiconvex functions. Journal of Optimization Theory and Applications, 89.
D. Bertsekas (1981). Projected Newton methods for optimization problems with simple constraints. 20th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes.
J. Burke, J. Moré, G. Toraldo (1990). Convergence properties of trust region methods for linear and convex constraints. Mathematical Programming, 47.
G. Xue (1986). On convergence properties of a least-distance programming procedure for minimization problems under linear constraints. Journal of Optimization Theory and Applications, 50.
D. Touati-Ahmed, C. Storey (1990). Efficient hybrid conjugate gradient techniques. Journal of Optimization Theory and Applications, 64.
R. Phelps (1986). The gradient projection method using Curry's steplength. SIAM Journal on Control and Optimization, 24.
J. Dunn (1981). Global and asymptotic convergence rate estimates for a class of projected gradient processes. SIAM Journal on Control and Optimization, 19.
J. Dunn (1991). A subspace decomposition principle for scaled gradient projection methods: global theory. SIAM Journal on Control and Optimization, 29.
C. Wang, N. Xiu (2000). Convergence of the gradient projection method for generalized convex minimization. Computational Optimization and Applications, 16.
A. Conn, N. Gould, P. Toint (1988). Testing a class of methods for solving minimization problems with simple bounds on the variables. Mathematics of Computation, 50.
J. Dunn (1987). On the convergence of projected gradient processes to singular critical points. Journal of Optimization Theory and Applications, 55.
E. Gafni, D. Bertsekas (1982). Convergence of a gradient projection method.
A. Conn, N. Gould, P. Toint (1988). Global convergence of a class of trust region algorithms for optimization with simple bounds. SIAM Journal on Numerical Analysis, 25.
B. Rustem (1984). A class of superlinearly convergent projection algorithms with relaxed stepsizes. Applied Mathematics and Optimization, 12.
J. Danskin (1966). The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14.
F. Wu, S. Wu (1995). A modified Frank-Wolfe algorithm and its convergence properties. Acta Mathematicae Applicatae Sinica, 11.
Z. Luo, P. Tseng (1992). On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization, 30.
Z. Wei, L. Qi, H. Jiang (1997). Some convergence properties of descent methods. Journal of Optimization Theory and Applications, 95.
M. Solodov, P. Tseng (1996). Modified projection-type methods for monotone variational inequalities. SIAM Journal on Control and Optimization, 34.
P. Calamai, J. Moré (1987). Projected gradient methods for linearly constrained problems. Mathematical Programming, 39.
This paper considers the continuously differentiable optimization problem min{f(x) : x ∈ Ω}, where Ω ⊆ R^n is a nonempty closed convex set. The gradient projection method of Calamai and Moré (Mathematical Programming, Vol. 39, pp. 93-116, 1987) is modified with a memory gradient to improve its convergence rate. Convergence of the new method is analyzed without assuming that the iteration sequence {x_k} is bounded. Moreover, it is shown that when f(x) is a pseudo-convex (quasi-convex) function, the new method has strong convergence results. Numerical results show that the method in this paper is more effective than the gradient projection method.
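The abstract's idea — a gradient projection iteration whose search direction retains "memory" of the previous direction — can be sketched as follows. This is a hypothetical illustration only, not the authors' exact scheme: the direction update d_k = -∇f(x_k) + β·d_{k-1}, the box constraint as the closed convex set Ω, and the Armijo line-search parameters are all assumptions for the sake of a runnable example.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi] (a simple closed convex Omega)."""
    return np.clip(x, lo, hi)

def memory_gradient_projection(f, grad, x0, lo, hi,
                               beta=0.5, sigma=1e-4, shrink=0.5,
                               tol=1e-8, max_iter=1000):
    """Projected gradient method with a memory-gradient direction (illustrative).

    Direction update (assumed form): d_k = -grad f(x_k) + beta * d_{k-1},
    followed by a projected Armijo line search x_{k+1} = P_Omega(x_k + t * d_k).
    """
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    d = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad(x)
        d = -g + beta * d
        # Safeguard: fall back to steepest descent if d is not a descent direction.
        if g @ d >= 0:
            d = -g
        # Projected Armijo backtracking line search.
        t, fx = 1.0, f(x)
        while True:
            x_new = project_box(x + t * d, lo, hi)
            if f(x_new) <= fx + sigma * (g @ (x_new - x)) or t < 1e-12:
                break
            t *= shrink
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Usage: minimize ||x - c||^2 over the box [0, 1]^2 with c = (2, -3) outside it;
# the constrained minimizer is the projection of c onto the box, i.e. (1, 0).
c = np.array([2.0, -3.0])
x_star = memory_gradient_projection(lambda v: float(np.sum((v - c) ** 2)),
                                    lambda v: 2.0 * (v - c),
                                    np.array([0.5, 0.5]), 0.0, 1.0)
```

The safeguard step mirrors a standard trick in memory/conjugate-gradient-type methods: whenever the accumulated direction loses the descent property, the iteration restarts from the plain negative gradient, which keeps the global convergence argument of the projected gradient framework intact.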
Acta Mathematicae Applicatae Sinica – Springer Journals
Published: Jan 1, 2006