The control of piecewise-deterministic processes is studied under the assumption of only local boundedness of the data; moreover, the discount rate may be zero. The value function is shown to be a solution to the Bellman equation in a weak sense; the solution concept is nevertheless strong enough to generate optimal policies. Continuity and compactness conditions are given for the existence of nonrelaxed optimal feedback controls.
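The route from a Bellman equation to a feedback control can be illustrated on a toy model (this is an illustrative sketch, not the paper's construction): a one-dimensional state drifts with a chosen speed, jumps back to zero at a fixed rate, and incurs a quadratic running cost under positive discounting. Discretizing time and state turns the Bellman equation into a fixed-point equation that value iteration solves, after which a feedback policy is read off by minimizing over actions. All names and parameter values below (grid, `ACTIONS`, `LAM`, `BETA`, `H`) are hypothetical choices for the sketch.

```python
import numpy as np

# Toy discretization of a controlled jump process (hypothetical model):
# state x on a grid drifts with speed a chosen from ACTIONS, jumps reset
# x to 0 at rate LAM, running cost x^2 + a^2, discount rate BETA.
GRID = np.linspace(0.0, 1.0, 21)          # state grid on [0, 1]
ACTIONS = np.array([-1.0, 0.0, 1.0])      # admissible drift speeds
LAM, BETA, H = 1.0, 0.5, 0.05             # jump rate, discount rate, time step

def step_index(i, a):
    """Grid index reached from point i after drifting with speed a for time H."""
    x = min(max(GRID[i] + a * H, GRID[0]), GRID[-1])
    return int(np.argmin(np.abs(GRID - x)))

gamma = np.exp(-BETA * H)                 # per-step discount factor
p_jump = 1.0 - np.exp(-LAM * H)           # jump probability within one step

V = np.zeros(len(GRID))
for _ in range(2000):                     # value iteration on the Bellman fixed point
    Q = np.empty((len(GRID), len(ACTIONS)))
    for i in range(len(GRID)):
        for k, a in enumerate(ACTIONS):
            cost = (GRID[i] ** 2 + a ** 2) * H
            nxt = (1 - p_jump) * V[step_index(i, a)] + p_jump * V[0]
            Q[i, k] = cost + gamma * nxt
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Feedback control: the minimizing action at each grid point.
policy = ACTIONS[np.argmin(Q, axis=1)]
```

Since the per-step discount factor is strictly below one, the Bellman operator here is a contraction and the iteration converges; the zero-discount case treated in the article is precisely where this standard argument fails and a weaker solution concept is needed.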
Acta Applicandae Mathematicae – Springer Journals
Published: Oct 18, 2004