Zero-Sum Stochastic Differential Game in Finite Horizon Involving Impulse Controls




Publisher
Springer Journals
Copyright
Copyright © 2018 by Springer Science+Business Media, LLC, part of Springer Nature
Subject
Mathematics; Calculus of Variations and Optimal Control; Optimization; Systems Theory, Control; Theoretical, Mathematical and Computational Physics; Mathematical Methods in Physics; Numerical and Computational Physics, Simulation
ISSN
0095-4616
eISSN
1432-0606
DOI
10.1007/s00245-018-9529-2
Publisher site
See Article on Publisher Site

Abstract

This paper considers the problem of a two-player zero-sum stochastic differential game in which both players adopt impulse controls, in finite horizon and under rather weak assumptions on the cost functions (c and χ not decreasing in time). We use the dynamic programming principle and the viscosity-solutions approach to show existence and uniqueness of a solution to the Hamilton–Jacobi–Bellman–Isaacs (HJBI) partial differential equation (PDE) of the game. We prove that the upper and lower value functions coincide.

Keywords
Stochastic differential game · Impulse control · Quasi-variational inequality · Viscosity solution

Mathematics Subject Classification
93E20 · 49L20 · 49L25 · 49N70

1 Introduction

The theory of differential games with Elliott–Kalton strategies in the viscosity-solution framework was initiated by Evans and Souganidis [21]. Fleming and Souganidis [22] studied two-player zero-sum stochastic differential games in a rigorous manner, and their work carried earlier results on differential games from the purely deterministic setting into the stochastic framework. Subsequently, Buckdahn and Li [10] generalized the framework introduced in [22]. In this paper, we consider the state process of the stochastic differential game, defined as the solution of the following stochastic equation:

Corresponding authors: Brahim El Asri (b.elasri@uiz.ac.ma) and Sehail Mazid (sehail.mazid@edu.uiz.ac.ma), Université Ibn Zohr, Équipe Aide à la décision.
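The abstract breaks off before the state equation itself, which is not reproduced on this page. As an illustration only, a state process driven by two players' impulse controls is commonly written in a form like the following; the coefficients b and σ, the Brownian motion B, and the impulse notation (τ_m, ξ_m), (ρ_l, η_l) are assumptions of this sketch, not taken from the paper:

```latex
\[
  X_s \;=\; x
    \;+\; \int_t^s b(r, X_r)\,dr
    \;+\; \int_t^s \sigma(r, X_r)\,dB_r
    \;+\; \sum_{m \ge 1} \xi_m \,\mathbf{1}_{\{\tau_m \le s\}}
    \;+\; \sum_{l \ge 1} \eta_l \,\mathbf{1}_{\{\rho_l \le s\}},
  \qquad t \le s \le T .
\]
```

Here the pairs (τ_m, ξ_m) and (ρ_l, η_l) denote the intervention times and impulse sizes chosen by the first and second player, respectively; between interventions the state diffuses, and each intervention shifts it by the chosen impulse.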

Journal

Applied Mathematics and Optimization, Springer Journals

Published: Sep 24, 2018
