We analyze optimal control problems for systems subject to random deterioration and failure. The system is replaced at failure, and our objective is to optimize the utilization of the system between failures. The problems are new in that the payoff depends on the running maximum of a diffusion, which provides an intuitively appealing model for naturally monotone phenomena such as wear. The long-term average control problem is reduced to a family of simpler single-cycle problems, a formula for the invariant measure of the controlled process is derived, and a computational scheme based on this decomposition and formula is given.
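The reduction described above rests on a renewal-reward idea: under replacement at failure, the long-term average cost equals the expected cost of a single cycle divided by the expected cycle length. A minimal Monte Carlo sketch of that structure is below; the dynamics, cost rate, and failure rule (wear fails when the running maximum of a diffusion reaches a fixed level) are illustrative assumptions, not the model or method of the paper.

```python
import random

def simulate_cycle(drift=0.5, sigma=0.3, failure_level=1.0, dt=1e-3, seed=None):
    """Simulate one replacement cycle via Euler-Maruyama.

    X is a diffusion and M its running maximum, modeling accumulated wear;
    the cycle ends (the system "fails") when M reaches failure_level.
    All parameter values are illustrative.
    Returns (cycle_length, cycle_cost).
    """
    rng = random.Random(seed)
    x = 0.0          # state of the diffusion
    m = 0.0          # running maximum of x so far
    t = 0.0          # elapsed time in this cycle
    cost = 0.0       # accumulated running cost
    while m < failure_level:
        # one Euler-Maruyama step of dX = drift dt + sigma dW
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        m = max(m, x)
        t += dt
        cost += m * dt   # illustrative running cost depending on the running max

    return t, cost

def long_run_average(n_cycles=100):
    """Renewal-reward estimate of the long-term average cost:
    total cost over many i.i.d. cycles divided by total elapsed time."""
    total_t = total_c = 0.0
    for i in range(n_cycles):
        t, c = simulate_cycle(seed=i)
        total_t += t
        total_c += c
    return total_c / total_t
```

In this sketch each cycle is independent, so the average-cost problem collapses to optimizing a single cycle, which is the decomposition the abstract refers to; the paper obtains this rigorously through the invariant measure of the controlled process rather than by simulation.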
Applied Mathematics and Optimization – Springer Journals
Published: Feb 2, 2005