R. Rishel (1975)
Dynamic Programming and Minimum Principles for Systems with Jump Markov Disturbances. SIAM Journal on Control, 13

L. Stone (1973)
Necessary and Sufficient Conditions for Optimal Control of Semi-Markov Jump Processes. SIAM Journal on Control, 11

X. Zhou (1993)
On the necessary conditions of optimal controls for stochastic partial differential equations. SIAM Journal on Control and Optimization, 31

D. Sworder (1969)
Feedback control of a class of linear systems with jump parameters. IEEE Transactions on Automatic Control, 14

R. Rishel (1970)
Necessary and Sufficient Dynamic Programming Conditions for Continuous Time Stochastic Optimal Control. SIAM Journal on Control, 8

K. Bichteler, Jean-Bernard Gravereaux, J. Jacod (1987)
Malliavin Calculus for Processes with Jumps

M. Davis, P. Varaiya (1973)
Dynamic Programming Conditions for Partially Observable Stochastic Systems. SIAM Journal on Control, 11

W. Fleming (1968)
Optimal Control of Partially Observable Diffusions. SIAM Journal on Control, 6

R. Boel, P. Varaiya (1977)
Optimal Control of Jump Processes. SIAM Journal on Control and Optimization, 15

É. Pardoux, S. Peng (1992)
Backward stochastic differential equations and quasilinear parabolic partial differential equations

Shanjian Tang (1998)
The Maximum Principle for Partially Observed Optimal Control of Stochastic Differential Equations. SIAM Journal on Control and Optimization, 36

Shanjian Tang, Xunjing Li (1994)
Necessary Conditions for Optimal Control of Stochastic Systems with Random Jumps. SIAM Journal on Control and Optimization, 32

S. Pliska (1975)
Controlled jump processes. Stochastic Processes and their Applications, 3

Xunjing Li, Shanjian Tang (1995)
General necessary conditions for partially observed optimal stochastic controls. Journal of Applied Probability, 32

J. Baras, R. Elliott, M. Kohlmann (1989)
The partially observed stochastic minimum principle. SIAM Journal on Control and Optimization, 27

Mark Davis, R. Elliott (1977)
Optimal control of a jump process. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 40

R. Rishel (1975)
A Minimum Principle for Controlled Jump Processes

U. Haussmann (1987)
The maximum principle for optimal control of diffusions with partial information. SIAM Journal on Control and Optimization, 25

M. Fragoso (1988)
On a partially observable LQG problem for systems with Markovian jumping parameters. Systems & Control Letters, 10

S. Peng (1990)
A general stochastic maximum principle for optimal control problems. SIAM Journal on Control and Optimization, 28

R. Liptser, A. Shiryayev (1984)
Statistics of Random Processes I: General Theory

J. Norris (1988)
Malliavin Calculus for Processes with Jumps (Stochastic Monographs 2). Bulletin of the London Mathematical Society, 20

Tsukasa Fujiwara, H. Kunita (1985)
Stochastic differential equations of jump type and Lévy processes in diffeomorphisms group. Journal of Mathematics of Kyoto University, 25

A. Bensoussan (1983)
Maximum principle and dynamic programming approaches of the optimal control of partially observed diffusions. Stochastics: An International Journal of Probability and Stochastic Processes, 9
This paper studies the optimal control problem for point processes with Gaussian white-noised observations. A general maximum principle is proved for the partially observed optimal control of point processes, without using the associated filtering equation. Adjoint flows, the adjoint processes of the stochastic flows of the optimal system, are introduced, and their relations are established. Adjoint vector fields, which are observation-predictable, are introduced as the solutions of associated backward stochastic integral-partial differential equations driven by the observation process. Their relations are explained in a heuristic way, and the adjoint processes are expressed in terms of the adjoint vector fields, their gradients, and their Hessians, along the optimal state process. In this way the adjoint processes are naturally connected to the adjoint equation of the associated filtering equation. This shows that the conditional expectation in the maximum condition is computable through filtering the optimal state, as usually expected. Some variants of the partially observed stochastic maximum principle are derived, and the corresponding maximum conditions differ substantially from their counterparts in the diffusion case. Finally, as an example, a quadratic optimal control problem with a free Poisson process and a Gaussian white-noised observation is solved explicitly using the partially observed maximum principle.
Applied Mathematics & Optimization – Springer Journals
Published: Jan 1, 2002
Keywords: Point processes; Partially observed optimal control; Adjoint vector fields; Maximum principle; Backward stochastic integral-partial differential equations; AMS Classification: Primary 93E20, 93E11; Secondary 60G15, 60H25.