P. L. Lions, P. E. Souganidis (1985)
Differential games, optimal control and directional derivatives of viscosity solutions of Bellman's and Isaacs' equations, SIAM J. Control Optim., 23
L. Cesari (1983)
Optimization Theory and Applications
M. G. Crandall, L. C. Evans, P. L. Lions (1984)
Some properties of viscosity solutions of the Hamilton-Jacobi equation, Trans. Amer. Math. Soc., 282
W. H. Fleming, R. W. Rishel (1975)
Deterministic and Stochastic Optimal Control
J.-P. Aubin, I. Ekeland (1984)
Applied Nonlinear Analysis
D. C. Offin (1978)
A Hamilton-Jacobi approach to the differential inclusion problem
R. Bellman, S. Dreyfus (1962)
Applied Dynamic Programming
D. Jacobson, D. Mayne (1970)
Differential Dynamic Programming, Modern Analytic and Computational Methods in Science and Mathematics
J.-P. Aubin, A. Cellina (1984)
Differential Inclusions
G. Leitmann (1982)
Dynamical Systems and Microphysics
H. Frankowska (1987)
The maximum principle for an optimal solution to a differential inclusion with end point constraints, SIAM J. Control Optim., 25
F. H. Clarke, R. B. Vinter (1987)
The relationship between the maximum principle and dynamic programming, SIAM J. Control Optim., 25
R. E. Bellman (1957)
Dynamic Programming
E. Zeidler (1984)
Nonlinear Functional Analysis and its Applications, vol. III
P. L. Lions (1982)
Generalized solutions of Hamilton-Jacobi equations
F. H. Clarke (1983)
Optimization and Nonsmooth Analysis
F. H. Clarke, R. B. Vinter (1983)
Local optimality conditions and Lipschitzian solutions to the Hamilton-Jacobi equation, SIAM J. Control Optim., 21
H. Frankowska (1987)
Local controllability and infinitesimal generators of semigroups of set-valued maps, SIAM J. Control Optim., 25
M. G. Crandall, P. L. Lions (1983)
Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc., 277
S. E. Dreyfus (1965)
Dynamic Programming and the Calculus of Variations
In this paper we study the existence of optimal trajectories associated with a generalized solution to the Hamilton-Jacobi-Bellman equation arising in optimal control. In general, we cannot expect such solutions to be differentiable. But, in a way analogous to the use of distributions in PDE, we replace the usual derivatives with “contingent epiderivatives” and the Hamilton-Jacobi equation by two “contingent Hamilton-Jacobi inequalities.” We show that the value function of an optimal control problem verifies these “contingent inequalities.”
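For readers unfamiliar with the terminology, the following sketch recalls the standard definition of the contingent epiderivative and the schematic shape of a pair of contingent Hamilton-Jacobi inequalities for a control system; it is an illustrative formulation only, and the precise inequalities used in the paper may differ in detail.

```latex
% Contingent epiderivative of V at x in the direction v (standard definition):
D_{\uparrow}V(x)(v) \;=\;
  \liminf_{\substack{h \to 0^+ \\ v' \to v}}
  \frac{V(x + h v') - V(x)}{h}.

% Schematically, for a control system x'(t) = f(x(t), u(t)), u(t) \in U,
% the equation H(x, DV) = 0 is replaced by two one-sided conditions on the
% (possibly nondifferentiable) value function V(t, x), of the form
\inf_{u \in U} \, D_{\uparrow}V(t,x)\bigl(1,\, f(x,u)\bigr) \;\le\; 0,
\qquad
\sup_{u \in U} \, D_{\uparrow}V(t,x)\bigl(-1,\, -f(x,u)\bigr) \;\le\; 0.
```

When V is differentiable, both inequalities collapse to the classical Hamilton-Jacobi-Bellman equation, since the epiderivative then reduces to the ordinary directional derivative.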
Applied Mathematics and Optimization – Springer Journals
Published: Mar 23, 2005