Asset Liquidation Under Drift Uncertainty and Regime-Switching Volatility

Abstract  Optimal liquidation of an asset with unknown constant drift and stochastic regime-switching volatility is studied. The uncertainty about the drift is represented by an arbitrary probability distribution; the stochastic volatility is modelled by an m-state Markov chain. Using filtering theory, an equivalent reformulation of the original problem as a four-dimensional optimal stopping problem is found and then analysed by constructing approximating sequences of three-dimensional optimal stopping problems. An optimal liquidation strategy and various structural properties of the problem are determined. An analysis of the two-point prior case is presented in detail; building on it, an outline of the extension to the general prior case is given.

Keywords  Optimal liquidation · Drift uncertainty · Regime-switching volatility · Sequential analysis · Optimal stopping · Stochastic filtering

Mathematics Subject Classification  Primary 60G40 · Secondary 91G80 · 60J25

1 Introduction

Selling is a fundamental and ubiquitous economic operation. As the prices of goods fluctuate over time, 'What is the best time to sell an asset to maximise revenue?' qualifies as a basic question in finance. Suppose that an asset needs to be sold before a known deterministic time T > 0 and that the only source of information available to the seller is the price history. A natural mathematical reformulation of the optimal selling question is to find a selling time τ* ∈ 𝒯 such that

    E[S_{τ*}] = sup_{τ ∈ 𝒯} E[S_τ],    (1.1)

where {S_t}_{t≥0} denotes the price process and 𝒯 denotes the set of stopping times with respect to the filtration generated by the price process S.

Juozas Vaicenavicius, juozas.vaicenavicius@it.uu.se, Department of Information Technology, Uppsala University, Box 337, 751 05 Uppsala, Sweden

Applied Mathematics & Optimization
Many popular continuous models for the price process are of the form

    dS_t = α S_t dt + σ(t) S_t dW_t,    (1.2)

where α ∈ R is called the drift and σ ≥ 0 is known as the volatility process. Under the simplifying assumptions that the volatility is independent of W as well as time-homogeneous, an m-state time-homogeneous Markov chain stands out as a basic though still rather flexible stochastic volatility model (proposed in [11]), which we choose to use in this article. The flexibility comes from the fact that we can choose the state space as well as the transition intensities between the states. Though the problem (1.1) in which S follows (1.2) is well-posed mathematically, from a financial point of view, the known-drift assumption is widely accepted to be unreasonable (e.g. see [32, Sect. 4.2 on p. 144]) and needs to be relaxed. Hence, using the Bayesian paradigm, we model the initial uncertainty about the drift by a probability distribution (known as the prior in Bayesian inference), which incorporates all the available information about the parameter and its uncertainty (see [15] for more on the interpretation of the prior). If the quantification of initial uncertainty is subjective, then the prior represents one's beliefs about how likely the drift is to take different values. To be able to incorporate arbitrary prior beliefs, we set out to solve the optimal selling problem (1.1) under an arbitrary prior for the drift.

In the present paper, we analyse and solve the asset liquidation problem (1.1) in the case when S follows (1.2) with m-state time-homogeneous Markov chain volatility and unknown drift, the uncertainty of which is modelled by an arbitrary probability distribution. The first time a particular four-dimensional process hits a specific boundary determining the stopping set is shown to be optimal. This stopping boundary has attractive monotonicity properties and can be found using the approximation procedure developed.
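As a concrete illustration of the dynamics (1.2) with Markov-chain volatility, the following sketch simulates one price path by an Euler scheme. All numerical values (the two-point prior for the drift, the volatility states, and the jump intensities) are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_price(T=1.0, n=2000, S0=1.0,
                   drifts=(-0.3, 0.5), prior=(0.5, 0.5),      # illustrative two-point prior for X
                   sigmas=(0.2, 0.4), Q=((-2.0, 2.0), (3.0, -3.0))):
    """Euler scheme for dS = X S dt + sigma(t) S dW, where X is drawn once from
    the prior and sigma(t) is an m-state Markov chain with generator Q."""
    dt = T / n
    X = rng.choice(drifts, p=prior)             # unknown drift, fixed for the whole path
    Q = np.asarray(Q)
    state = int(rng.integers(len(sigmas)))      # initial volatility state
    S = np.empty(n + 1); S[0] = S0
    vol = np.empty(n + 1); vol[0] = sigmas[state]
    for k in range(n):
        # regime switch with probability ~ (total jump rate) * dt on this step
        rates = Q[state].copy()
        rates[state] = 0.0
        if rng.random() < rates.sum() * dt:
            state = int(rng.choice(len(sigmas), p=rates / rates.sum()))
        dW = rng.normal(scale=np.sqrt(dt))
        S[k + 1] = S[k] * (1.0 + X * dt + sigmas[state] * dW)
        vol[k + 1] = sigmas[state]
    return S, vol, X
```

The seller observes S (and hence, in effect, the volatility path) but neither X nor W, which is exactly the informational set-up studied below.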
Let us elucidate our study of the optimal selling problem in more depth. Using nonlinear filtering theory, the original selling problem with parameter uncertainty is rewritten as an equivalent optimal stopping problem of a standard form (i.e. without unknown parameters). In this new optimal stopping problem, the posterior mean serves as the underlying process and acts as a stochastic creation rate; the payoff function in the problem is constant. The posterior mean is shown to be the solution of an SDE depending on the prior and the whole volatility history. Embedding the optimal stopping problem into a Markovian framework is non-trivial because the whole posterior distribution needs to be included as a variable. Fortunately, we show that, having fixed the prior, the posterior is fully characterised by only two real-valued parameters: the posterior mean and, what we call, the effective learning time. As a result, we are able to define an associated Markovian value function with four underlying variables (time, posterior mean, effective learning time, and volatility) and study the optimal stopping problem as a four-dimensional Markovian optimal stopping problem (the volatility takes values in a finite set, but slightly abusing terminology, we still call it a dimension). Exploiting that the volatility is constant between the regime switches, we construct m sequences of simpler auxiliary three-dimensional Markovian optimal stopping problems whose values converge monotonically to the true value function. The main advantage of this approximating-sequence approach compared with tackling the full variational inequality of the problem directly is that dealing with the analytically complicated coupled system is avoided altogether. Instead, only much simpler standard uncoupled free-boundary problems need to be analysed or solved numerically to arrive at the desired result.
We show that the value function is decreasing in time and in effective learning time, as well as increasing and convex in the posterior mean. The first hitting time of a region specified by a stopping boundary that is a function of time, effective learning time, and volatility is shown to be optimal. The stopping boundary is increasing in time and in effective learning time, and is the limit of a monotonically increasing sequence of boundaries from the auxiliary problems. Moreover, the approximation procedure using the auxiliary problems yields a method to calculate the value function as well as the optimal stopping boundary numerically.

In the two-point prior case, the posterior mean fully characterises the posterior distribution, making the problem more tractable and allowing us to obtain some additional results. In particular, we prove that, under a skip-free volatility assumption, the Markovian value function is decreasing in the volatility and that the stopping boundary is increasing in the volatility.

In a broader mathematical context, the selling problem investigated appears to be the first optimal stopping problem with parameter uncertainty and stochastic volatility to be studied in the literature. Thus it is plausible that the ideas presented herein will find uses in other optimal stopping problems of the same type; for example, in classical problems of Bayesian sequential analysis (e.g. see [29, Chapter VI]) with stochastically evolving noise magnitude.

It is clear to the author that, with additional effort, a number of results of the article could be refined or generalised. However, the objective chosen is to provide an intuitive understanding of the problem and the solution while still maintaining readability and clarity. This also explains why, for the most part, we focus on the two-point prior case and outline an extension to the general prior case only at the end.
1.1 Related Literature

There is a strand of research on asset liquidation problems in models with regime-switching volatility; however, these works either concern only a special class of suboptimal strategies or treat the drift as observable. In [36], a restrictive asset liquidation problem was proposed and studied; the drift as well as the volatility were treated as unobservable, and the possibility to learn about the parameters from the observations was disregarded. The subsequent papers [17,34,35] explored various aspects of the same formulation. An optimal selling problem with the payoff e^{−rτ}(S_τ − K) was studied in [26] for the Black–Scholes model, in [21] for a two-state regime-switching model, and in [35] for an m-state model with finite horizon. In all three cases, the drift and the volatility are assumed to be fully observable.

In another strand of research, the optimal stopping problem (1.1) has been solved and analysed in the Black–Scholes model under arbitrary uncertainty about the drift. The two-point prior case was studied in [12], while the general prior case was solved in [15] using a different approach. This article can be viewed as a generalisation of [15] to include stochastic regime-switching volatility. Related option valuation problems under incomplete information were studied in [18,33], both in the two-point prior case, and in [10] in the n-point prior case.

The approach we take to approximate a Markovian value function by a sequence of value functions of simpler constant-volatility problems was used before in [24] to investigate a finite-horizon American put problem (also, a slight generalisation of it) in a regime-switching model with full information. Regrettably, in the case of 3 or more volatility states, the recursive approximation step in [24, Sect. 5] contains a blunder; we rectify it in Sect. 3.2 of this article.
A possible alternative route to analysing and solving the optimal stopping problem is to tackle the system of variational inequalities directly using weak-solution techniques (e.g., see [6,30]), similarly as in [7] for American options with regime-switching volatility. Structural and regularity properties would need to be established using PDE techniques. If appropriate theoretical results can be obtained, the numerical PDE schemes discussed in [22] should yield a numerical solution. However, this alternative approach requires a different toolkit, appears to be more demanding analytically, and hence is not investigated further in the present article.

Though it is true that the current paper is a generalisation of [15] from constant volatility to the regime-switching stochastic volatility model, the extension is definitely not a straightforward one. Novel statistical learning intuitions were needed, and new proofs were developed to arrive at the results of the paper. One of the main insights of the optimal liquidation problem with constant volatility in [15] was that the current time and price were sufficient statistics for the optimal selling problem. However, changing the volatility from constant to stochastic makes the posterior distribution of the drift truly dependent on the price path. This raises the questions whether an optimal liquidation problem can be treated using the mainstream finite-dimensional Markovian techniques at all, and also whether any of the developments from the constant volatility case can be taken advantage of. In the two-point prior case with regime-switching volatility, the following new insight was key. Despite the posterior being a path-dependent function of the stock price, we can show that the current time, posterior mean, and instantaneous volatility (extracted from the price process) are sufficient statistics for the optimal liquidation problem.
Alas, for any prior with more than two points in the support, the same triplet is no longer a sufficient statistic. Fortunately, if in addition to the time-price-volatility triplet we introduce an additional statistic, which we name the effective learning time, the resulting 4-tuple becomes a sufficient statistic for the selling problem under a general prior. Besides these insights, some new technicalities (in particular, Lemma 2.3) stemming from stochastic volatility had to be resolved to reformulate the optimal selling problem into the standard Markovian form.

In relation to [24], though we employ the same general iterative approximation idea to construct an approximating sequence for the Markovian value function, the particulars, including proofs and results, are notably distinct. Firstly, we work in a more general setting, proving and formulating more abstract as well as, in multiple instances, new types of results. For example, we prove results in the m-state rather than the two-state regime-switching model. This allowed us to catch and correct an erroneous construction of the approximating sequence in [24] for models with more than two volatility states. Moreover, almost all the proofs follow different arguments, either because of the structural differences in the selling problem or because we prefer another way, which seems more transparent and direct, to arrive at the results. Lastly, many of the results in the present paper are problem-specific and do not even depend on the iterative approximation of the value function at all.

The idea to iteratively construct a sequence of auxiliary value functions that converge to the true value function in the limit is generic and has been successfully applied many times to optimal stopping problems with a countable number of discrete events (e.g. jumps, discrete observations).
In the setting with partial observations, an iterative approximation scheme was employed in [5] to study the Poisson disorder detection problem with unknown post-disorder intensity, then later, in [9], to analyse a combined Poisson–Wiener disorder detection problem, and, more recently, in [4], to investigate Wiener disorder detection under discrete observations. In the fully observable setting, such iterative approximations go back to at least as early as [19], which deals with a Markovian optimal stopping problem with a piecewise deterministic underlying. In Financial Mathematics, iteratively constructed approximations were used in [2,3] to study the value functions of finite-horizon and perpetual American put options, respectively, for a jump diffusion. Besides optimal stopping, the iterative approximation technique was utilised for the singular control problem [13] of optimal dividend policy.

2 Problem Set-Up

We model a financial market on a filtered probability space (Ω, F, {F_t}_{t≥0}, P) satisfying the usual conditions. Here the measure P denotes the physical probability measure. The price process is modelled by

    dS_t = X S_t dt + σ(t) S_t dW_t,    (2.1)

where X is a random variable having probability distribution μ, W is a standard Brownian motion, and σ is a time-homogeneous right-continuous m-state Markov chain with generator Λ = (λ_{ij})_{1≤i,j≤m}, taking values σ_m ≥ ··· ≥ σ_1 > 0. Moreover, we assume that X, W, and σ are independent. Since the volatility can be estimated from observations of S over an arbitrarily short period of time (at least in theory), it is reasonable to assume that the volatility process {σ(t)}_{t≥0} is observable. Hence the available information is modelled by the filtration F^{S,σ} = {F_t^{S,σ}}_{t≥0} generated by the processes S and σ and augmented by the null sets of F. Note that the drift X and the random driver W are not directly observable.
The optimal selling problem that we are interested in is

    V = sup_{τ ∈ 𝒯^{S,σ}} E[S_τ],    (2.2)

where 𝒯^{S,σ} denotes the set of F^{S,σ}-stopping times that are smaller than or equal to a prespecified time horizon T > 0.

Remark 2.1  It is straightforward to include a discount factor e^{−rτ} in (2.2). In fact, it simply corresponds to a shift of the prior distribution μ in the negative direction by r.

Let l := inf supp(μ) and h := sup supp(μ). It is easy to see that if l ≥ 0, then it is optimal to stop at the terminal time T. Likewise, if h ≤ 0, then stopping immediately, i.e. at time zero, is optimal. The rest of the article focuses on the remaining and most interesting case.

Assumption 2.2  l < 0 < h.

2.1 Equivalent Reformulation Under a Measure Change

Let us write X̂_t := E[X | F_t^{S,σ}]. Then the process

    Ŵ_t := ∫_0^t (X − X̂_s)/σ(s) ds + W_t,

called the innovation process, is an F^{S,σ}-Brownian motion (see [1, Proposition 2.30 on p. 33]).

Lemma 2.3  The volatility process σ and the innovation process Ŵ are independent.

Proof  Since X, W, and σ are independent, we can think of (Ω, F, P) as a product space (Ω_{X,W} × Ω_σ, F_{X,W} ⊗ F_σ, P_{X,W} × P_σ). Let A, A' ∈ B(R^{[0,T]}). Then

    P(Ŵ ∈ A, σ ∈ A')
      = ∫ 1_{{Ŵ(ω_{X,W}, ω_σ) ∈ A, σ(ω_σ) ∈ A'}} d(P_{X,W} × P_σ)(ω_{X,W}, ω_σ)
      = ∫ ∫ 1_{{σ(ω_σ) ∈ A'}} 1_{{Ŵ(ω_{X,W}, ω_σ) ∈ A}} dP_{X,W}(ω_{X,W}) dP_σ(ω_σ)
      = ∫ 1_{{σ(ω_σ) ∈ A'}} P(Ŵ(·, ω_σ) ∈ A) dP_σ(ω_σ)
      = P(Ŵ ∈ A) P(σ ∈ A'),    (2.3)

where the penultimate equality is justified by the fact that, for any fixed ω_σ, the innovation process Ŵ(·, ω_σ) is a Brownian motion under P_{X,W}. Hence from (2.3), the processes Ŵ and σ are independent. □
Defining a new equivalent measure P̃ ∼ P on (Ω, F_T) via the Radon–Nikodym derivative

    dP̃/dP = exp( ∫_0^T σ(t) dŴ_t − ½ ∫_0^T σ(t)² dt )

and writing

    S_t = S_0 exp( Xt + ∫_0^t σ(s) dW_s − ½ ∫_0^t σ(s)² ds )
        = S_0 exp( ∫_0^t X̂_s ds + ∫_0^t σ(s) dŴ_s − ½ ∫_0^t σ(s)² ds ),

we have that, for any τ ∈ 𝒯^{S,σ},

    E[S_τ] = Ẽ[ S_0 e^{∫_0^τ X̂_s ds} ] = S_0 Ẽ[ e^{∫_0^τ X̂_s ds} ].

Moreover, by Girsanov's theorem, the process B_t := −∫_0^t σ(s) ds + Ŵ_t is a P̃-Brownian motion on [0, T]. In addition, Lemma 2.3 together with [1, Proposition 3.13] tells us that the law of σ is the same under P and P̃, and that B and σ are independent under P̃. Without loss of generality, we set S_0 = 1 throughout the article, so the optimal stopping problem (2.2) can be cast as

    V = sup_{τ ∈ 𝒯^{S,σ}} Ẽ[ e^{∫_0^τ X̂_s ds} ].    (2.4)

Between the volatility jumps, the stock price is a geometric Brownian motion with known constant volatility and unknown drift. Hence, by Corollary 3.4 in [15], we have that F^{S,σ} = F^{X̂,σ} and 𝒯^{S,σ} = 𝒯^{X̂,σ}, where F^{X̂,σ} denotes the usual augmentation of the filtration generated by X̂ and σ; also, 𝒯^{X̂,σ} denotes the set of F^{X̂,σ}-stopping times not exceeding T. As a result, an equivalent reformulation of (2.4) is

    V = sup_{τ ∈ 𝒯^{X̂,σ}} Ẽ[ e^{∫_0^τ X̂_s ds} ],    (2.5)

which we will study in the subsequent parts of the article.

2.2 Markovian Embedding

In all except the last section of this article, we will focus on the special case when X has a two-point distribution μ = π δ_h + (1 − π) δ_l, where h > l and π ∈ (0, 1) are constants, and δ_h, δ_l are Dirac measures at h and l, respectively. In this special case, expressions are simpler and arguments are easier to follow than in the general prior case; still, most underlying ideas of the arguments are the same. Hence, we choose to understand the two-point prior case first, after which generalising the results to the general prior case will become a rather easy task.
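In the two-point prior case, the posterior probability π_t = P(X = h | F_t^{S,σ}) can be tracked numerically by exact Bayes updates on discretised log-returns, since, conditionally on the drift value and the observed volatility path, each log-return is Gaussian. The sketch below is a discrete-time stand-in for the continuous-time filter, not the paper's formulation; all names and parameters are illustrative.

```python
import numpy as np

def posterior_mean_path(log_returns, vols, dt, h, l, pi0):
    """Bayes updates of pi = P(X = h) from discrete log-returns, assuming the
    volatility path `vols` is observed; returns the posterior-mean path."""
    pi = pi0
    means = []
    for r, s in zip(log_returns, vols):
        var = s * s * dt
        # Gaussian likelihoods of the log-return under each drift hypothesis
        lik_h = np.exp(-(r - (h - s * s / 2) * dt) ** 2 / (2 * var))
        lik_l = np.exp(-(r - (l - s * s / 2) * dt) ** 2 / (2 * var))
        pi = pi * lik_h / (pi * lik_h + (1 - pi) * lik_l)
        means.append(h * pi + l * (1 - pi))   # posterior mean of the drift
    return np.array(means)
```

Feeding in returns consistent with the drift h drives the posterior mean towards h, illustrating the learning mechanism that the effective learning time quantifies in the general prior case.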
Since the volatility is a known constant between the jump times, using the dynamics of X̂ in the constant volatility case [equation (3.9) in [15]], the process X̂ is the unique strong solution of

    dX̂_t = σ(t) φ(X̂_t, σ(t)) dt + φ(X̂_t, σ(t)) dB_t,    (2.6)

where φ(x, σ) := (h − x)(x − l)/σ. Now, we can embed the optimal stopping problem (2.4) into a Markovian framework by defining a Markovian value function

    v(t, x, σ) := sup_{τ ∈ 𝒯_{T−t}} Ẽ[ e^{∫_0^τ X̂^{t,x,σ}_{t+s} ds} ],   (t, x, σ) ∈ [0, T] × (l, h) × {σ_1, ..., σ_m}.    (2.7)

Here X̂^{t,x,σ} denotes the process X̂ in (2.6) started at time t with X̂_t = x and σ(t) = σ, and 𝒯_{T−t} stands for the set of stopping times less than or equal to T − t with respect to the usual augmentation of the filtration generated by {X̂^{t,x,σ}_{t+s}}_{s≥0} and {σ(t + s)}_{s≥0}. The formulation (2.7) has the interpretation of an optimal stopping problem with the constant payoff 1 and the discount rate −X̂; from now onwards, we will study this discounted problem. The notation v_i := v(·, ·, σ_i) will often be used.

3 Approximation Procedure

It is not clear how to compute v in (2.7) or analyse it directly. Hence, in this section, we develop a way to approximate the value function v by a sequence of value functions corresponding to simpler constant-volatility optimal stopping problems.

3.1 Operator J

For succinctness of notation, let λ_i := Σ_{j≠i} λ_{ij} denote the total intensity with which the volatility jumps from state σ_i. Also, let us define η_i^t := inf{s > 0 | σ(t + s) ≠ σ(t) = σ_i}, which is an Exp(λ_i)-distributed random variable representing the duration up to the first volatility change when started from the volatility state σ_i at time t.
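A direct Euler-Maruyama discretisation of the SDE (2.6) under P̃, where B is a Brownian motion, can be sketched as follows; the step size and the clipping of the state to (l, h) are ad-hoc numerical choices.

```python
import numpy as np

def simulate_posterior_mean(x0, h, l, sigma_path, dt, rng):
    """Euler-Maruyama sketch of (2.6): dX = sigma(t) phi dt + phi dB, with
    phi(x, sigma) = (h - x)(x - l) / sigma; `sigma_path` is the observed
    volatility value on each time step."""
    x = x0
    out = [x]
    for s in sigma_path:
        phi = (h - x) * (x - l) / s
        x += s * phi * dt + phi * np.sqrt(dt) * rng.normal()
        x = min(max(x, l + 1e-9), h - 1e-9)   # keep the state inside (l, h)
        out.append(x)
    return np.array(out)
```

Note that the diffusion coefficient φ vanishes at l and h, so the exact solution never leaves (l, h); the clipping only guards against discretisation overshoot.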
Furthermore, let us define an operator J acting on a bounded f : [0, T] × (l, h) → R by

    (Jf)(t, x, σ_i) := sup_{τ ∈ 𝒯_{T−t}} Ẽ[ e^{∫_0^τ X̂^{t,x,σ_i}_{t+s} ds} 1_{{τ < η_i^t}} + e^{∫_0^{η_i^t} X̂^{t,x,σ_i}_{t+s} ds} f(t + η_i^t, X̂^{t,x,σ_i}_{t+η_i^t}) 1_{{τ ≥ η_i^t}} ]    (3.1)

      = sup_{τ ∈ 𝒯_{T−t}} Ẽ[ e^{∫_0^τ (X̂^{t,x,σ_i}_{t+s} − λ_i) ds} + ∫_0^τ λ_i e^{∫_0^u (X̂^{t,x,σ_i}_{t+s} − λ_i) ds} f(t + u, X̂^{t,x,σ_i}_{t+u}) du ],    (3.2)

where 𝒯_{T−t} denotes the set of stopping times less than or equal to T − t with respect to the usual augmentation of the filtration generated by {X̂^{t,x,σ_i}_{t+s}}_{s≥0} and {σ(t + s)}_{s≥0}. To simplify notation, we also define an operator J_i by J_i f := (Jf)(·, ·, σ_i). Intuitively, J_i f represents a Markovian value function corresponding to optimal stopping before t + η_i^t, i.e. before the first volatility change after t, when, at time t + η_i^t < T, the payoff f(t + η_i^t, X̂^{t,x,σ_i}_{t+η_i^t}) is received provided stopping has not occurred yet.

Proposition 3.1  Let f : [0, T] × (l, h) → R be bounded. Then

(i) J f is bounded;
(ii) if f is increasing in the second variable x, then J f is increasing in the second variable x;
(iii) if f is decreasing in the first variable t, then J f is decreasing in the first variable t;
(iv) if f is increasing and convex in the second variable x, then J f is increasing and convex in the second variable x;
(v) J preserves order, i.e. f_1 ≤ f_2 implies J f_1 ≤ J f_2;
(vi) J f ≥ 1.

Proof  All except claim (iv) are straightforward consequences of the representation (3.2). To prove (iv), we will approximate the optimal stopping problem (3.2) by Bermudan options. Let i and n be fixed. We will approximate the value function J_i f by the value function w^{(f)}_{i,n} of a corresponding Bermudan problem with stopping allowed only at times {kT/2^n : k ∈ {0, 1, ..., 2^n}}. We define w^{(f)}_{i,n} recursively as follows. First,

    w^{(f)}_{i,n}(T, x) := 1.
Then, starting with k = 2^n and continuing recursively down to k = 1, we define

    w^{(f)}_{i,n}(t, x) = { g(t, x, kT/2^n),                   t ∈ ((k−1)T/2^n, kT/2^n),
                          { g((k−1)T/2^n, x, kT/2^n) ∨ 1,     t = (k−1)T/2^n,    (3.3)

where the function g is given by

    g(t, x, kT/2^n) := Ẽ[ e^{∫_t^{kT/2^n} (X̂^{t,x,σ_i}_s − λ_i) ds} w^{(f)}_{i,n}(kT/2^n, X̂^{t,x,σ_i}_{kT/2^n}) + ∫_t^{kT/2^n} λ_i e^{∫_t^u (X̂^{t,x,σ_i}_s − λ_i) ds} f(u, X̂^{t,x,σ_i}_u) du ].    (3.4)

Next, we show by backward induction on k that w^{(f)}_{i,n} is increasing and convex in the second variable x. Suppose that for some k ∈ {1, 2, ..., 2^n}, the function w^{(f)}_{i,n}(kT/2^n, ·) is increasing and convex (the assumption clearly holds for the base step k = 2^n). Let t ∈ [(k−1)T/2^n, kT/2^n). Then, since f is also increasing and convex in the second variable x, we have that the function g(t, ·, kT/2^n), and so w^{(f)}_{i,n}(t, ·), is convex by [14, Theorem 5.1]. Moreover, from (3.4) and [31, Theorem IX.3.7], it is clear that w^{(f)}_{i,n}(t, ·) is increasing. Consequently, by backward induction, we obtain that the Bermudan value function w^{(f)}_{i,n} is increasing and convex in the second variable.

Letting n ↗ ∞, the Bermudan values w^{(f)}_{i,n} ↗ J_i f pointwise. As a result, J_i f is increasing and convex in the second argument, since convexity and monotonicity are preserved when taking pointwise limits. □

The sets

    C_i^f := {(t, x) ∈ [0, T) × (l, h) : (J_i f)(t, x) > 1},
    D_i^f := {(t, x) ∈ [0, T] × (l, h) : (J_i f)(t, x) = 1} = ([0, T] × (l, h)) \ C_i^f,    (3.5)

correspond to the continuation and stopping sets for the stopping problem J_i f, as the next proposition shows.

Proposition 3.2 (Optimal stopping time)  The stopping time

    τ^f_{σ_i}(t, x) = inf{u ∈ [0, T − t] : (t + u, X̂^{t,x,σ_i}_{t+u}) ∈ D_i^f}    (3.6)

is optimal for the problem (3.2).

Proof  A standard application of Theorem D.12 in [23]. □

Proposition 3.3  If a bounded f : [0, T] × (l, h) → R is decreasing in the first variable as well as increasing and convex in the second, then J f is continuous.
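The Bermudan recursion (3.3)-(3.4) above can be imitated numerically: on an x-grid, the conditional expectation g is estimated by Monte Carlo simulation of (2.6) between consecutive exercise dates, and the recursion applies the exercise decision g ∨ 1 at each date. This is only an illustrative sketch; the grid sizes, sample counts, and coefficient values are arbitrary choices.

```python
import numpy as np

def bermudan_value(h, l, sigma, lam, T, f, n=3, nx=41, M=200, m_sub=20, seed=0):
    """Monte Carlo sketch of the Bermudan recursion (3.3)-(3.4) on an x-grid:
    stopping is allowed at times kT/2^n; expectations are estimated by Euler
    paths of (2.6) with linear interpolation of the next-date value."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(l + 1e-3, h - 1e-3, nx)
    K = 2 ** n
    dT = T / K                       # time between exercise dates
    dt = dT / m_sub                  # Euler sub-step
    w = np.ones(nx)                  # terminal condition w(T, x) = 1
    for k in range(K, 0, -1):
        g = np.empty(nx)
        for ix, x0 in enumerate(xs):
            x = np.full(M, x0)
            disc = np.ones(M)        # e^{int (X - lam) ds}
            run = np.zeros(M)        # int lam e^{int (X - lam) ds} f du
            t = (k - 1) * dT
            for _ in range(m_sub):
                run += lam * disc * f(t, x) * dt
                phi = (h - x) * (x - l) / sigma
                x = x + sigma * phi * dt + phi * np.sqrt(dt) * rng.normal(size=M)
                x = np.clip(x, l + 1e-9, h - 1e-9)
                disc *= np.exp((x - lam) * dt)
                t += dt
            g[ix] = np.mean(disc * np.interp(x, xs, w) + run)
        w = np.maximum(g, 1.0)       # exercise decision at date (k-1)T/2^n
    return xs, w                     # rough approximation of w_{i,n}^{(f)}(0, .)
```

The returned values are monotone in x (up to Monte Carlo noise), mirroring Proposition 3.1 (ii); refining n, nx, M drives the sketch towards J_i f.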
Proof  The argument is a straightforward extension of the proof of the third part of Theorem 3.10 in [15]; still, we include it for completeness. Before we begin, in order to simplify notation, we write u := J_i f.

Firstly, we let r ∈ (l, h) and prove that there exists K > 0 such that, for every t ∈ [0, T], the map x ↦ (J_i f)(t, x) is K-Lipschitz continuous on (l, r]. To obtain a contradiction, assume that there is no such K. Then, by convexity of u in the second variable, there is a sequence {t_n}_{n≥0} ⊂ [0, T] such that the left-derivatives ∂_x^− u(t_n, r) ↗ ∞. Hence, for r' ∈ (r, h), the sequence u(t_n, r') → ∞, which contradicts that u(t_n, r') ≤ u(0, r') < ∞ for all n ∈ N.

Now, it remains to show that u is continuous in time. Assume for a contradiction that the map t ↦ u(t, x_0) is not continuous at t = t_0 for some x_0. Since u is decreasing in time, u(·, x_0) has a negative jump at t_0. Next, we will investigate the cases u(t_0−, x_0) > u(t_0, x_0) and u(t_0, x_0) > u(t_0+, x_0) separately.

Suppose u(t_0−, x_0) > u(t_0, x_0). By Lipschitz continuity in the second variable, there exists δ > 0 such that, writing R = (t_0 − δ, t_0) × (x_0 − δ, x_0 + δ),

    inf_{(t,x) ∈ R} u(t, x) > u(t_0, x_0 + δ).    (3.7)

Thus R ⊆ C_i^f. Let t ∈ (t_0 − δ, t_0) and τ_R := inf{s ≥ 0 : (t + s, X̂^{t,x_0,σ_i}_{t+s}) ∉ R}. Then, by the martingality in the continuation region,

    u(t, x_0) = Ẽ[ e^{∫_0^{τ_R} (X̂^{t,x_0,σ_i}_{t+u} − λ_i) du} u(t + τ_R, X̂^{t,x_0,σ_i}_{t+τ_R}) + ∫_0^{τ_R} λ_i e^{∫_0^u (X̂^{t,x_0,σ_i}_{t+s} − λ_i) ds} f(t + u, X̂^{t,x_0,σ_i}_{t+u}) du ]
    ≤ Ẽ[ e^{(t_0−t)(x_0+δ)^+} u(t, x_0 + δ) 1_{{t+τ_R < t_0}} + e^{(t_0−t)(x_0+δ)^+} u(t_0, x_0 + δ) 1_{{t+τ_R = t_0}} + ∫_0^{t_0−t} λ_i e^{∫_0^u (X̂^{t,x_0,σ_i}_{t+s} − λ_i) ds} |f(t + u, X̂^{t,x_0,σ_i}_{t+u})| du ]
    ≤ e^{(t_0−t)(x_0+δ)^+} u(t, x_0 + δ) P̃(t + τ_R < t_0) + e^{(t_0−t)(x_0+δ)^+} u(t_0, x_0 + δ) + Ẽ[ ∫_0^{t_0−t} λ_i e^{∫_0^u (X̂^{t,x_0,σ_i}_{t+s} − λ_i) ds} |f(t + u, X̂^{t,x_0,σ_i}_{t+u})| du ]
    → u(t_0, x_0 + δ)

as t → t_0, contradicting (3.7).
The other case to consider is u(t_0, x_0) > u(t_0+, x_0); we look into the situation u(t_0, x_0) > u(t_0+, x_0) > 1 first. The local Lipschitz continuity in the second variable and the decay in the first variable imply that there exist ε > 0 and δ > 0 such that, writing R = (t_0, t_0 + ε] × [x_0 − δ, x_0 + δ],

    u(t_0, x_0) > sup_{(t,x) ∈ R} u(t, x) ≥ inf_{(t,x) ∈ R} u(t, x) > 1.    (3.8)

Hence R ⊆ C_i^f and, writing τ_R := inf{s ≥ 0 : (t_0 + s, X̂^{t_0,x_0,σ_i}_{t_0+s}) ∉ R}, we have

    u(t_0, x_0) = Ẽ[ e^{∫_0^{τ_R} (X̂^{t_0,x_0,σ_i}_{t_0+u} − λ_i) du} u(t_0 + τ_R, X̂^{t_0,x_0,σ_i}_{t_0+τ_R}) + ∫_0^{τ_R} λ_i e^{∫_0^u (X̂^{t_0,x_0,σ_i}_{t_0+s} − λ_i) ds} f(t_0 + u, X̂^{t_0,x_0,σ_i}_{t_0+u}) du ]
    ≤ Ẽ[ e^{ε(x_0+δ)^+} u(t_0, x_0 + δ) 1_{{τ_R < ε}} ] + Ẽ[ e^{ε(x_0+δ)^+} u(t_0 + ε, x_0 + δ) 1_{{τ_R = ε}} ] + Ẽ[ ∫_0^ε λ_i e^{∫_0^u (X̂^{t_0,x_0,σ_i}_{t_0+s} − λ_i) ds} |f(t_0 + u, X̂^{t_0,x_0,σ_i}_{t_0+u})| du ]
    ≤ e^{ε(x_0+δ)^+} u(t_0, x_0 + δ) P̃(τ_R < ε) + e^{ε(x_0+δ)^+} u(t_0 + ε, x_0 + δ) + Ẽ[ ∫_0^ε λ_i e^{∫_0^u (X̂^{t_0,x_0,σ_i}_{t_0+s} − λ_i) ds} |f(t_0 + u, X̂^{t_0,x_0,σ_i}_{t_0+u})| du ]
    → u(t_0+, x_0 + δ)

as ε ↓ 0, which contradicts (3.8).

Lastly, suppose that u(t_0, x_0) > u(t_0+, x_0) = 1. By Lipschitz continuity in the second variable, there exists δ > 0 such that

    inf_{x ∈ (x_0−δ, x_0)} u(t_0, x) > u(t_0+, x_0) = 1.    (3.9)

Consequently, (t_0, T] × (x_0 − δ, x_0) ⊆ D_i^f. Hence the process X̂^{t_0, x_0−δ/2, σ_i} hits the stopping region immediately, and so (t_0, x_0 − δ/2) ∈ D_i^f, which contradicts (3.9). □

Proposition 3.4 (Optimal stopping boundary)  Let f : [0, T] × (l, h) → R be bounded, decreasing in the first variable as well as increasing and convex in the second variable. Then the following hold.

(i) There exists a function b^f_{σ_i} : [0, T) → [l, h] that is increasing and right-continuous with left limits, and satisfies

    C_i^f = {(t, x) ∈ [0, T) × (l, h) : x > b^f_{σ_i}(t)}.
    (3.10)

(ii) The pair (J_i f, b^f_{σ_i}) satisfies the free-boundary problem

    ∂_t u(t, x) + σ_i φ(x, σ_i) ∂_x u(t, x) + ½ φ(x, σ_i)² ∂_{xx} u(t, x)
        + (x − λ_i) u(t, x) + λ_i f(t, x) = 0,   if x > b^f_{σ_i}(t),    (3.11)
    u(t, x) = 1,   if x ≤ b^f_{σ_i}(t) or t = T.

Proof  (i) By Proposition 3.1 (iv), there exists a unique function b^f_{σ_i} satisfying (3.10). Moreover, by Proposition 3.1 (iii), this boundary b^f_{σ_i} is increasing. Hence, using Proposition 3.3, we also obtain that b^f_{σ_i} is right-continuous with left limits.

(ii) The proof follows a well-known standard argument (e.g. see [23, Theorem 7.7 in Chapter 2]), thus we omit it. □

3.2 A Sequence of Approximating Problems

Let us define a sequence of stopping times {ξ_n^t}_{n≥0} recursively by

    ξ_0^t := 0,
    ξ_n^t := inf{ s > ξ_{n−1}^t : σ(t + s) ≠ σ(t + ξ_{n−1}^t) },   n > 0.

Here ξ_n^t represents the duration until the n-th volatility jump after time t. Furthermore, let us define a sequence of operators {J^{(n)}}_{n≥0} by

    (J^{(n)} f)(t, x, σ_i) := sup_{τ ∈ 𝒯_{T−t}} Ẽ[ e^{∫_0^τ X̂^{t,x,σ_i}_{t+s} ds} 1_{{τ < ξ_n^t}} + e^{∫_0^{ξ_n^t} X̂^{t,x,σ_i}_{t+s} ds} f(t + ξ_n^t, X̂^{t,x,σ_i}_{t+ξ_n^t}) 1_{{τ ≥ ξ_n^t}} ],    (3.12)

where f : [0, T] × (l, h) → R is bounded. In particular, note that J^{(0)} f = f and J^{(1)} f = J f. Similarly as for the operator J, we define J_i^{(n)} by J_i^{(n)} f := (J^{(n)} f)(·, ·, σ_i).

Proposition 3.5  Let n ≥ 0 and i ∈ {1, ..., m}. Then

    J_i^{(n+1)} = J_i( Σ_{j≠i} (λ_{ij}/λ_i) J_j^{(n)} ).    (3.13)

Proof  The proof is by induction. In order to present the argument of the proof while keeping intricate notation at bay, we will only prove that, for a bounded f : [0, T] × (l, h) → R and x ∈ (l, h), the identity (J_i^{(2)} f)(t, x) = (J_i( Σ_{j≠i} (λ_{ij}/λ_i) J_j f ))(t, x) holds. The induction step J_i^{(n+1)} = J_i( Σ_{j≠i} (λ_{ij}/λ_i) J_j^{(n)} ) follows a similar argument, though with more abstract notation. Note that without loss of generality, we can assume t = 0, which we do.

Firstly, we will show (J_i^{(2)} f)(0, x) ≤ (J_i( Σ_{j≠i} (λ_{ij}/λ_i) J_j f ))(0, x) and then the opposite inequality.
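The free-boundary problem (3.11) also suggests a simple numerical scheme for the operator J_i: step the PDE backwards in time by explicit finite differences and enforce u ≥ 1 by projection at every step, so the boundary b^f_{σ_i} is captured implicitly as the edge of the set {u > 1}. The following is a rough sketch with illustrative grid sizes and sample coefficients; in practice an implicit scheme would be preferable for stability.

```python
import numpy as np

def apply_J(f, h, l, sigma_i, lam_i, T, nx=60, nt=16000):
    """Explicit finite-difference sketch of (3.11): computes u ~ J_i f on a grid
    over [0, T] x (l, h), with phi(x, sigma) = (h - x)(x - l) / sigma and the
    constraint u >= 1 imposed by projection at each backward time step."""
    xs = np.linspace(l, h, nx + 2)[1:-1]          # interior spatial nodes
    dx = xs[1] - xs[0]
    dt = T / nt                                   # chosen small for explicit stability
    phi = (h - xs) * (xs - l) / sigma_i
    u = np.ones((nt + 1, nx))                     # terminal condition u(T, x) = 1
    for k in range(nt - 1, -1, -1):
        ui = u[k + 1]
        ux = np.gradient(ui, dx)
        uxx = np.empty_like(ui)
        uxx[1:-1] = (ui[2:] - 2.0 * ui[1:-1] + ui[:-2]) / dx**2
        uxx[0] = uxx[1]; uxx[-1] = uxx[-2]        # crude one-sided treatment at edges
        gen = (sigma_i * phi * ux + 0.5 * phi**2 * uxx
               + (xs - lam_i) * ui + lam_i * f(k * dt, xs))
        u[k] = np.maximum(ui + dt * gen, 1.0)     # step back in time, project on {u >= 1}
    return xs, u
```

With f ≡ 1 this approximates J_i 1, the first element of the approximating sequence constructed in Sect. 3.2; the computed u stays between 1 and e^{hT}, in line with Proposition 3.7 (i).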
For j ∈ N, we will write ξ_j instead of ξ_j^0, and we will use the notation η_j := ξ_j − ξ_{j−1}. Let τ ∈ 𝒯_T and consider

    A(τ) := Ẽ[ e^{∫_0^τ X̂^{0,x,σ_i}_s ds} 1_{{τ < η_1}} + e^{∫_0^τ X̂^{0,x,σ_i}_s ds} 1_{{η_1 ≤ τ < ξ_2}} + e^{∫_0^{ξ_2} X̂^{0,x,σ_i}_s ds} f(ξ_2, X̂^{0,x,σ_i}_{ξ_2}) 1_{{τ ≥ ξ_2}} ]
         = Ẽ[ e^{∫_0^τ X̂^{0,x,σ_i}_s ds} 1_{{τ < η_1}} + Ẽ[ e^{∫_0^τ X̂^{0,x,σ_i}_s ds} 1_{{η_1 ≤ τ < ξ_2}} + e^{∫_0^{ξ_2} X̂^{0,x,σ_i}_s ds} f(ξ_2, X̂^{0,x,σ_i}_{ξ_2}) 1_{{τ ≥ ξ_2}} | F^{X̂,N}_{η_1} ] ],    (3.14)

where {N_t}_{t≥0} denotes the process counting the volatility jumps. The inner conditional expectation in (3.14) satisfies

    Ẽ[ e^{∫_0^τ X̂^{0,x,σ_i}_s ds} 1_{{η_1 ≤ τ < ξ_2}} + e^{∫_0^{ξ_2} X̂^{0,x,σ_i}_s ds} f(ξ_2, X̂^{0,x,σ_i}_{ξ_2}) 1_{{τ ≥ ξ_2}} | F^{X̂,N}_{η_1} ]
    = e^{∫_0^{η_1} X̂^{0,x,σ_i}_s ds} 1_{{η_1 ≤ τ}} Ẽ[ e^{∫_{η_1}^τ X̂^{0,x,σ_i}_s ds} 1_{{τ < ξ_2}} + e^{∫_{η_1}^{ξ_2} X̂^{0,x,σ_i}_s ds} f(ξ_2, X̂^{0,x,σ_i}_{ξ_2}) 1_{{τ ≥ ξ_2}} | F^{X̂,N}_{η_1} ]
    = e^{∫_0^{η_1} X̂^{0,x,σ_i}_s ds} 1_{{η_1 ≤ τ}} Σ_{j≠i} (λ_{ij}/λ_i) Ẽ[ e^{∫_0^{τ̃} X̂^{η_1,X̂_{η_1},σ_j}_{η_1+s} ds} 1_{{τ̃ < η_2}} + e^{∫_0^{η_2} X̂^{η_1,X̂_{η_1},σ_j}_{η_1+s} ds} f(η_1 + η_2, X̂^{η_1,X̂_{η_1},σ_j}_{η_1+η_2}) 1_{{τ̃ ≥ η_2}} ],    (3.15)

where τ̃ = τ − η_1 in the case η_1 ≤ τ ≤ T. Therefore, substituting (3.15) into (3.14) and then taking a supremum over τ̃, we get

    A(τ) ≤ Ẽ[ e^{∫_0^τ X̂^{0,x,σ_i}_s ds} 1_{{τ < η_1}} + e^{∫_0^{η_1} X̂^{0,x,σ_i}_s ds} 1_{{τ ≥ η_1}} Σ_{j≠i} (λ_{ij}/λ_i) sup_{τ̃ ∈ 𝒯_{T−T∧η_1}} Ẽ[ e^{∫_0^{τ̃} X̂^{η_1,X̂_{η_1},σ_j}_{η_1+s} ds} 1_{{τ̃ < η_2}} + e^{∫_0^{η_2} X̂^{η_1,X̂_{η_1},σ_j}_{η_1+s} ds} f(η_1 + η_2, X̂^{η_1,X̂_{η_1},σ_j}_{η_1+η_2}) 1_{{τ̃ ≥ η_2}} ] ]
         = Ẽ[ e^{∫_0^τ X̂^{0,x,σ_i}_s ds} 1_{{τ < η_1}} + e^{∫_0^{η_1} X̂^{0,x,σ_i}_s ds} 1_{{τ ≥ η_1}} Σ_{j≠i} (λ_{ij}/λ_i) (J_j f)(η_1, X̂^{0,x,σ_i}_{η_1}) ].    (3.16)

Taking a supremum over τ in (3.16), we obtain

    (J_i^{(2)} f)(0, x) = sup_{τ ∈ 𝒯_T} A(τ) ≤ (J_i( Σ_{j≠i} (λ_{ij}/λ_i) J_j f ))(0, x).    (3.17)

It remains to establish the opposite inequality. Let τ ∈ 𝒯_T and define

    τ̌ := τ 1_{{τ ≤ η_1}} + (η_1 ∧ T + τ^f_{σ(η_1)}) 1_{{τ > η_1}},    (3.18)

where τ^f_{σ(η_1)} := τ^f_{σ(η_1)}(η_1 ∧ T, X̂^{0,x,σ_i}_{η_1 ∧ T}). Clearly, τ̌ ∈ 𝒯_T.
Then

$$
\begin{aligned}
(J^{(2)}_i f)(0,x) &\ge A(\check\tau) \\
&= \tilde{\mathbb E}\Big[ e^{\int_0^{\tau} \hat X^{0,x,\sigma_i}_s ds}\, \mathbb 1_{\{\tau<\eta_1\}} + e^{\int_0^{\eta_1} \hat X^{0,x,\sigma_i}_s ds}\, \mathbb 1_{\{\tau\ge\eta_1\}} \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, \tilde{\mathbb E}\Big[ e^{\int_0^{\tau_{\sigma}} \hat X^{\eta_1,\hat X_{\eta_1},\sigma_j}_{\eta_1+s} ds}\, \mathbb 1_{\{\tau_\sigma<\eta_2\}} \\
&\qquad\qquad + e^{\int_0^{\eta_2} \hat X^{\eta_1,\hat X_{\eta_1},\sigma_j}_{\eta_1+s} ds}\, f\big(\eta_1+\eta_2, \hat X^{\eta_1,\hat X_{\eta_1},\sigma_j}_{\eta_1+\eta_2}\big)\, \mathbb 1_{\{\tau_\sigma\ge\eta_2\}} \Big] \Big] \\
&= \tilde{\mathbb E}\Big[ e^{\int_0^{\tau} \hat X^{0,x,\sigma_i}_s ds}\, \mathbb 1_{\{\tau<\eta_1\}} + e^{\int_0^{\eta_1} \hat X^{0,x,\sigma_i}_s ds}\, \mathbb 1_{\{\tau\ge\eta_1\}} \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, (J_j f)\big(\eta_1, \hat X^{0,x,\sigma_i}_{\eta_1}\big) \Big],
\end{aligned}
$$

where Proposition 3.2 was used to obtain the last equality. Hence, by taking a supremum over stopping times $\tau \in \mathcal T_T$, we get

$$(J^{(2)}_i f)(0,x) \ge \Big(J_i\Big(\sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, J_j f\Big)\Big)(0,x). \tag{3.19}$$

Finally, (3.17) and (3.19) taken together imply

$$(J^{(2)}_i f)(0,x) = \Big(J_i\Big(\sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, J_j f\Big)\Big)(0,x). \qquad\square$$

Remark 3.6 In [24], the authors use the same approximation procedure for an optimal stopping problem with regime-switching volatility as in this article. Unfortunately, a mistake is made in equation (18) of [24], which wrecks the subsequent approximation procedure when the number of volatility states is greater than 2. The identity (18) therein should be replaced by (3.13).

3.3 Convergence to the Value Function

Proposition 3.7 (Properties of the approximating sequence)

(i) The sequence of functions $\{J^{(n)}_\sigma 1\}_{n\ge 0}$ is increasing, bounded from below by $1$ and from above by $e^{hT}$.

(ii) Every $J^{(n)}_\sigma 1$ is decreasing in the first variable $t$ as well as increasing and convex in the second variable $x$.

(iii) The sequence of functions $J^{(n)}_\sigma 1 \nearrow v$ pointwise as $n \nearrow \infty$. Moreover, the approximation error satisfies

$$\big\| v - J^{(n)}_\sigma 1 \big\| \le e^{hT}\, \lambda T\, \frac{(\lambda T)^{n-1}}{(n-1)!} \longrightarrow 0 \quad \text{as } n \to \infty, \tag{3.20}$$

where $\lambda := \max\{\lambda_i : 1 \le i \le m\}$.

(iv) For every $n \in \mathbb N \cup \{0\}$,

$$J^n_{\sigma_m} 1 \le J^{(n)}_\sigma 1 \le J^n_{\sigma_1} 1. \tag{3.21}$$

Proof (i) The statement that $\{J^{(n)}_\sigma 1\}_{n\ge 0}$ is increasing, bounded from below by $1$ and from above by $e^{hT}$ is a direct consequence of the definition (3.12).
(ii) The claim that every $J^{(n)}_\sigma 1$ is decreasing in the first variable $t$ as well as increasing and convex in the second variable $x$ follows by a straightforward induction on $n$, using Proposition 3.1 (iii), (iv) and Proposition 3.5 at the induction step.

(iii) First, let $i \in \{1,\dots,m\}$ and note that, for any $n \in \mathbb N$,

$$J^{(n)}_i 1 \le v_i.$$

Here the inequality holds by suboptimality, since $J^{(n)}_i 1$ corresponds to the expected payoff of a particular stopping time in the problem (2.4). Next, define

$$U^{(i)}_n(t,x) := \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau \hat X^{t,x,\sigma_i}_{t+s} ds}\, \mathbb 1_{\{\tau<\xi^t_n\}} \Big].$$

Then

$$U^{(i)}_n(t,x) \le (J^{(n)}_i 1)(t,x) \le v_i(t,x) \le U^{(i)}_n(t,x) + e^{h(T-t)}\, \mathbb P(\xi^t_n \le T-t). \tag{3.22}$$

Since it is a standard fact that the $n$-th jump time, call it $\zeta_n$, of a Poisson process with jump intensity $\lambda := \max\{\lambda_i : 1\le i\le m\}$ follows the Erlang distribution, we have

$$\mathbb P(\xi^t_n \le T-t) \le \mathbb P(\zeta_n \le T-t) = \int_0^{\lambda(T-t)} \frac{u^{n-1}}{(n-1)!}\, e^{-u}\, du \le \lambda T\, \frac{(\lambda T)^{n-1}}{(n-1)!}.$$

Therefore, by (3.22),

$$\big\| v - J^{(n)}_\sigma 1 \big\| \le e^{hT}\, \lambda T\, \frac{(\lambda T)^{n-1}}{(n-1)!} \longrightarrow 0 \quad \text{as } n \to \infty.$$

(iv) The string of inequalities (3.21) will be proved by induction. First, the base step is obvious. Now, suppose (3.21) holds for some $n \ge 0$. Hence, for any $i \in \{1,\dots,m\}$,

$$J^n_{\sigma_m} 1 \le \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1 \le J^n_{\sigma_1} 1. \tag{3.23}$$

Let us fix $i \in \{1,\dots,m\}$. By Proposition 3.1 (iv), every function in (3.23) is convex in the spatial variable $x$, thus [14, Theorem 6.1] yields

$$J^{n+1}_{\sigma_m} 1 \le J_i\Big( \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1 \Big) \le J^{n+1}_{\sigma_1} 1.$$

As $i$ was arbitrary, we also have

$$J^{n+1}_{\sigma_m} 1 \le J^{(n+1)}_\sigma 1 \le J^{n+1}_{\sigma_1} 1. \tag{3.24} \qquad\square$$

Remark 3.8 If, instead of $1$, we choose the constant function $e^{hT}$ to apply the operators $J^{(n)}_i$ to, then, following the same strategy as above, $\{J^{(n)}_i e^{hT}\}_{n\ge 0}$ is a decreasing sequence of functions with the limit $J^{(n)}_i e^{hT} \searrow v_i$ pointwise as $n \nearrow \infty$.
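The Erlang tail estimate used in the proof of Proposition 3.7 (iii), namely $\mathbb P(\zeta_n \le T) \le \lambda T (\lambda T)^{n-1}/(n-1)!$, is easy to sanity-check numerically. A minimal sketch (the intensity `lam` and horizon `T` below are illustrative, not taken from the paper):

```python
import math

def erlang_cdf(n, lam, x):
    # P(zeta_n <= x) for the n-th jump time zeta_n of a Poisson process with
    # intensity lam: 1 - sum_{k=0}^{n-1} e^{-lam*x} (lam*x)^k / k!
    return 1.0 - sum(math.exp(-lam * x) * (lam * x) ** k / math.factorial(k)
                     for k in range(n))

def tail_bound(n, lam, T):
    # the crude bound (lam*T)^{n-1}/(n-1)! * lam*T appearing in (3.20)
    return (lam * T) ** (n - 1) / math.factorial(n - 1) * (lam * T)

lam, T = 2.0, 1.0
for n in range(1, 12):
    assert erlang_cdf(n, lam, T) <= tail_bound(n, lam, T) + 1e-12
```

The bound is crude for small $n$ but decays factorially in $n$, which is exactly what makes the approximation error (3.20) vanish.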
Let $B_b([0,T]\times(l,h);\mathbb R)$ denote the set of bounded functions from $[0,T]\times(l,h)$ to $\mathbb R$ and define an operator $\mathbf J : B_b([0,T]\times(l,h);\mathbb R)^m \to B_b([0,T]\times(l,h);\mathbb R)^m$ by

$$\mathbf J \begin{pmatrix} f_1 \\ \vdots \\ f_m \end{pmatrix} := \begin{pmatrix} J_1\big( \sum_{j\ne 1} \frac{\lambda_{1j}}{\lambda_1} f_j \big) \\ \vdots \\ J_m\big( \sum_{j\ne m} \frac{\lambda_{mj}}{\lambda_m} f_j \big) \end{pmatrix}.$$

Proposition 3.9 (i) Let $f \in B_b([0,T]\times(l,h);\mathbb R)^m$. Then

$$\lim_{n\to\infty} \mathbf J^n f = \begin{pmatrix} v_1 \\ \vdots \\ v_m \end{pmatrix}.$$

(ii) The vector $(v_1,\dots,v_m)^{tr}$ of value functions is a fixed point of the operator $\mathbf J$, i.e.

$$\mathbf J \begin{pmatrix} v_1 \\ \vdots \\ v_m \end{pmatrix} = \begin{pmatrix} v_1 \\ \vdots \\ v_m \end{pmatrix}. \tag{3.25}$$

Proof (i) Observe that the argument in the proof of part (iii) of Proposition 3.7 also gives that $J^{(n)}_\sigma g \to v$ as $n \to \infty$ for any bounded $g$. Hence, to finish the proof, it is enough to recall the relation (3.13) in Proposition 3.5.

(ii) Let $i \in \{1,\dots,m\}$. By Proposition 3.5,

$$J^{(n+1)}_i 1 = J_i\Big( \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1 \Big). \tag{3.26}$$

By Proposition 3.7 (iii), for every $j \in \{1,\dots,m\}$, the sequence $J^{(n)}_j 1 \nearrow v_j$ as $n \nearrow \infty$, so, letting $n \nearrow \infty$ in (3.26), the monotone convergence theorem tells us that

$$v_i = J_i\Big( \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, v_j \Big). \tag{3.27} \qquad\square$$

4 The Value Function and the Stopping Strategy

In this section, we show that the value function v has attractive structural properties and identify an optimal strategy for the liquidation problem (2.7). The first passage time below a boundary, which is an increasing function of time and volatility, is proved to be optimal. Moreover, we provide a method to approximate the optimal stopping boundary by demonstrating that it is a limit of a monotone sequence of stopping boundaries coming from the easier auxiliary problems of Sect. 3.

Theorem 4.1 (Properties of the value function)

(i) $v$ is decreasing in the first variable $t$ as well as increasing and convex in the second variable $x$.

(ii) $v_i$ is continuous for every $i \in \{1,\dots,m\}$.

(iii) 

$$\check v_{\sigma_m} \le v_\sigma \le \check v_{\sigma_1}, \tag{4.1}$$

where $\check v_\sigma : [0,T]\times(l,h) \to \mathbb R$ denotes the Markovian value function as in (2.7), but for a price process (2.1) with constant volatility $\sigma$.
Proof (i) Since, by Proposition 3.7 (ii), every $J^{(n)}_\sigma 1$ is decreasing in the first variable $t$ as well as increasing and convex in the second variable $x$, these properties are also preserved in the pointwise limit $\lim_{n\to\infty} J^{(n)}_\sigma 1$, which is $v$ by Proposition 3.7 (iii).

(ii) Using part (i) above, the claim follows from Proposition 3.9 (ii), i.e. from the fact that $(v_1,\dots,v_m)^{tr}$ is a fixed point of a regularising operator $\mathbf J$ in the sense of Proposition 3.3.

(iii) Letting $n\to\infty$ in (3.21), Proposition 3.7 (iii) gives us (4.1). $\square$

For the optimal liquidation problem (2.4) with constant volatility $\sigma$, i.e. in the case $\sigma_1 = \cdots = \sigma_m = \sigma$, it has been shown in [15] that an optimal liquidation strategy is characterised by an increasing continuous stopping boundary $\check b_\sigma : [0,T) \to [l,0]$ with $\check b_\sigma(T-) = 0$ such that the stopping time $\check\tau = \inf\{t \ge 0 : \hat X_t \le \check b_\sigma(t)\} \wedge T$ is optimal. It turns out that the optimal liquidation strategy within our regime-switching volatility model shares some similarities with the constant volatility case, as the next theorem shows.

Theorem 4.2 (Optimal liquidation strategy)

(i) For every $i \in \{1,\dots,m\}$, there exists $b_{\sigma_i} : [0,T) \to [l,0]$ that is increasing, right-continuous with left limits, satisfies the equality $b_{\sigma_i}(T-) = 0$ and the identity

$$C_i = \{(t,x) \in [0,T) \times (l,h) : x > b_{\sigma_i}(t)\}, \tag{4.2}$$

where $C_i$ denotes the continuation region corresponding to $u_i := \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, v_j$. Moreover,

$$\check b_{\sigma_1} \le b_{\sigma_i} \le \check b_{\sigma_m}$$

for any $i \in \{1,\dots,m\}$.

(ii) The stopping strategy

$$\tau^* := \inf\big\{ s \in [0,T-t) : \hat X^{t,x,\sigma}_{t+s} \le b_{\sigma(t+s)}(t+s) \big\} \wedge (T-t)$$

is optimal for the optimal selling problem (2.7).

(iii) For $i \in \{1,\dots,m\}$, the boundaries

$$b^{(n)}_{\sigma_i} \searrow b_{\sigma_i} \quad \text{pointwise as } n \nearrow \infty,$$

where $b^{(n)}_{\sigma_i} := b^{g^{(n)}_i}_{\sigma_i}$ with $g^{(n)}_i := \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1$.

(iv) The pairs $(v_1, b_{\sigma_1}), (v_2, b_{\sigma_2}), \dots,$
$(v_m, b_{\sigma_m})$ satisfy a coupled system of $m$ free-boundary problems, with each being

$$
\begin{cases}
\partial_t v_i(t,x) + \sigma_i \phi(x,\sigma_i)\,\partial_x v_i(t,x) + \tfrac12 \phi(x,\sigma_i)^2\,\partial_{xx} v_i(t,x) & \\
\quad + (x-\lambda_i)\,v_i(t,x) + \sum_{j\ne i} \lambda_{ij}\, v_j(t,x) = 0, & \text{if } x > b_{\sigma_i}(t), \\
v_i(t,x) = 1, & \text{if } x \le b_{\sigma_i}(t) \text{ or } t = T,
\end{cases}
\tag{4.3}
$$

where $i \in \{1,\dots,m\}$.

Proof (i) The existence of $b_{\sigma_i} : [0,T) \to [l,h]$ that is increasing, right-continuous with left limits, and satisfies (4.2) follows from the fixed-point property (3.25) and Theorem 4.1 (i), (ii). Since the range of $\check b_{\sigma_1}$ and $\check b_{\sigma_m}$ is $[l,0]$ and $\check b_{\sigma_1}(T-) = \check b_{\sigma_m}(T-) = 0$, using Theorem 4.1 (iii), we also conclude that $\check b_{\sigma_1} \le b_{\sigma_i} \le \check b_{\sigma_m}$ and that $b_{\sigma_i}(T-) = 0$ for every $i$.

(ii) Let us define $D := \{(t,x,\sigma) \in [0,T]\times(l,h)\times\{\sigma_1,\dots,\sigma_m\} : v(t,x,\sigma) = 1\}$. Then $\tau_D := \inf\{s \ge 0 : (t+s, \hat X^{t,x,\sigma(t)}_{t+s}, \sigma(t+s)) \in D\}$ is optimal for the problem (2.7) by [29, Corollary 2.9]. Lastly, from the fixed-point property (3.25) and Proposition 3.2, we conclude that $\tau_D = \tau^*$, which finishes the proof.

(iii) Since $J^{(n)}_i 1 \nearrow v_i$ as $n \nearrow \infty$ and $J^{(n)}_i 1 \ge 1$ for all $n$, we have that $\lim_{n\nearrow\infty} b^{(n)}_{\sigma_i} \ge b_{\sigma_i}$. Also, if $x < \lim_{n\nearrow\infty} b^{(n)}_{\sigma_i}(t)$, then $J^{(n)}_i 1(t,x) = 1$ for all $n \in \mathbb N$ and so $v_i(t,x) = \lim_{n\nearrow\infty} J^{(n)}_i 1(t,x) = 1$. Hence, $\lim_{n\nearrow\infty} b^{(n)}_{\sigma_i} \le b_{\sigma_i}$. As a result, $\lim_{n\nearrow\infty} b^{(n)}_{\sigma_i} = b_{\sigma_i}$.

(iv) The free-boundary problem is a consequence of Proposition 3.4 (ii) and the fixed-point property (3.25). $\square$

Remark 4.3 Establishing uniqueness of a classical solution to a time-nonhomogeneous free-boundary problem is typically a technical task (see [27] for an example). Not being central to the mission of the paper, the uniqueness of solutions to the free-boundary problems (4.3) and (3.11) has not been pursued.
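To make the structure of the coupled system (4.3) concrete, here is a toy explicit finite-difference iteration for an obstacle-problem version of it with two regimes. This is only an illustrative sketch, not the paper's numerical method: the diffusion coefficient `phi`, which vanishes at the endpoints `l` and `h`, and all parameters are assumptions chosen for readability.

```python
l, h = -0.5, 0.5                 # assumed support endpoints of the prior
sigmas = [0.3, 0.6]              # illustrative regime volatilities
lam = [1.0, 1.0]                 # illustrative switching intensities
T = 0.5

def phi(x, sig):
    # assumed diffusion coefficient of the posterior mean; vanishes at l, h
    return (x - l) * (h - x) / sig

nx = 41
dx = (h - l) / (nx - 1)
xs = [l + k * dx for k in range(nx)]
phimax2 = max(phi(0.0, s) ** 2 for s in sigmas)
nt = int(T * phimax2 / (0.4 * dx * dx)) + 1   # explicit-scheme stability
dt = T / nt

v = [[1.0] * nx, [1.0] * nx]     # terminal condition v_i(T, x) = 1
for _ in range(nt):              # march backwards in time from T to 0
    new = [row[:] for row in v]
    for i in (0, 1):
        j = 1 - i
        for k in range(nx):
            u = v[i][k]
            if 0 < k < nx - 1:
                ux = (v[i][k + 1] - v[i][k - 1]) / (2 * dx)
                uxx = (v[i][k + 1] - 2 * u + v[i][k - 1]) / (dx * dx)
            else:
                ux = uxx = 0.0   # phi vanishes at the endpoints anyway
            p = phi(xs[k], sigmas[i])
            gen = (sigmas[i] * p * ux + 0.5 * p * p * uxx
                   + (xs[k] - lam[i]) * u + lam[i] * v[j][k])
            new[i][k] = max(1.0, u + dt * gen)   # obstacle: payoff 1
    v = new

assert min(min(row) for row in v) >= 1.0   # value dominates the payoff
assert v[0][0] == 1.0                      # stopping region near x = l
assert v[0][-1] > 1.0                      # continuation region near x = h
```

With these toy parameters the scheme clips the value at the payoff $1$ for low posterior means (the stopping region) while it stays strictly above $1$ for high posterior means (the continuation region), mirroring the boundary structure asserted in Theorem 4.2.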
Remark 4.4 (A possible alternative approach) It is worth pointing out that a potential alternative approach to the study of the value function and the optimal strategy is to directly analyse the variational inequality formulation (e.g., see [30, Sect. 5.2]) arising from the optimal stopping problem (2.7). The coupled system of variational inequalities would need to be studied using weak solution techniques from PDE theory (e.g., see [6,30]) to obtain the desired regularity and structural properties of the value function and the stopping region. Though the author is unaware of any work studying exactly this type of free-boundary problem directly in detail, there are available theoretical results [7] that include existence and uniqueness of viscosity solutions, and a comparison principle, for the pricing of American options in regime-switching models. Also, under some conditions, convergence of stable, monotone, and consistent approximation schemes to the value function is shown. Suitable numerical PDE methods and their pros and cons for such a coupled system are discussed in [22]. With this alternative route in mind (provided all the needed technical results can be established), our approach has clear benefits: avoiding many analytical complications that arise in the study of the full system (compare [7]) and yielding a very intuitive monotone approximation scheme for the value function and the stopping boundary.

For further study of the problem in this section, we will make a structural assumption about the Markov chain modelling the volatility.

Assumption 4.5 The Markov chain σ is skip-free, i.e. for all $i \in \{1,\dots,m\}$, $\lambda_{ij} = 0$ if $j \notin \{i-1, i, i+1\}$.

As many popular financial stochastic volatility models have continuous trajectories, and a skip-free Markov chain is a natural discrete state-space approximation of a continuous process, Assumption 4.5 does not appear to be a severe restriction.
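Assumption 4.5 says precisely that the generator of σ is tridiagonal. A small helper illustrating this structure (the state count and rates below are arbitrary examples, not values from the paper):

```python
def skip_free_generator(rates_up, rates_down):
    # Generator matrix Q of a skip-free Markov chain (Assumption 4.5):
    # from state i only the neighbours i-1 and i+1 are reachable, so Q is
    # tridiagonal and each row sums to zero.
    m = len(rates_up)
    Q = [[0.0] * m for _ in range(m)]
    for i in range(m):
        if i + 1 < m:
            Q[i][i + 1] = rates_up[i]      # lambda_{i,i+1}
        if i > 0:
            Q[i][i - 1] = rates_down[i]    # lambda_{i,i-1}
        Q[i][i] = -sum(Q[i])               # minus the total jump intensity
    return Q

Q = skip_free_generator([1.0, 2.0, 0.0], [0.0, 1.5, 0.5])
assert all(abs(sum(row)) < 1e-12 for row in Q)
assert all(Q[i][j] == 0.0 for i in range(3) for j in range(3) if abs(i - j) > 1)
```

The diagonal entry $-\lambda_i$ is the negative of the total jump intensity out of state $i$, which is exactly the quantity appearing in the operators $J_i$ above.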
Lemma 4.6 Let $\delta > 0$ and let $g : (l,h) \times [0,\infty) \to [0,\infty)$ be increasing and convex in the first variable as well as decreasing in the second. Then $u : (l,h) \times \{\sigma_1,\dots,\sigma_m\} \to \mathbb R$ defined by

$$u(x,\sigma) := \tilde{\mathbb E}\Big[ e^{\int_0^\delta \hat X^{x,\sigma}_u\, du}\, g\big(\hat X^{x,\sigma}_\delta, \sigma(\delta)\big) \Big] \tag{4.4}$$

is increasing and convex in the first variable as well as decreasing in the second.

Proof We will prove the claim using a coupling argument. Let $(\Omega', \mathcal F', \mathbb P')$ be a probability triplet supporting a Brownian motion $B$ and two volatility processes $\sigma^1, \sigma^2$ with the state space and transition densities as in (2.1). In addition, we assume that $B$ is independent of $(\sigma^1,\sigma^2)$, that the starting values satisfy $\sigma^1(0) = \sigma_i \le \sigma_j = \sigma^2(0)$, and that $\sigma^1(t) \le \sigma^2(t)$ for all $t \ge 0$. Also, let $\hat X^1$ and $\hat X^2$ denote the solutions to (2.6) when $\sigma$ is replaced by $\sigma^1$ and $\sigma^2$, respectively.

Let us fix an arbitrary $\omega' \in \Omega'$. Since $B$ is independent of $\sigma^1$,

$$\tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^1)^x_u du}\, g\big((\hat X^1)^x_\delta, \sigma^1(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1}_\delta \Big](\omega') = \tilde{\mathbb E}\Big[ e^{\int_0^\delta (\tilde X^1)^x_u du}\, g\big((\tilde X^1)^x_\delta, \sigma^1(\delta,\omega')\big) \Big], \tag{4.5}$$

where $\tilde X^1$ denotes the process $\hat X^1$ with the volatility process $\sigma^1$ replaced by the deterministic function $\sigma^1(\cdot,\omega')$. Furthermore, the right-hand side (and so the left-hand side) of (4.5) as a function of $x$ is increasing by [31, Theorem IX.3.7] as well as convex by [14, Theorem 5.1]. Hence

$$u(\cdot,\sigma_i) : x \mapsto \tilde{\mathbb E}\Big[ \tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^1)^x_u du}\, g\big((\hat X^1)^x_\delta, \sigma^1(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1}_\delta \Big] \Big]$$

is increasing and convex. Next, we observe that

$$
\begin{aligned}
\tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^1)^x_u du}\, g\big((\hat X^1)^x_\delta, \sigma^1(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1,\sigma^2}_\delta \Big](\omega')
&\ge \tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^2)^x_u du}\, g\big((\hat X^2)^x_\delta, \sigma^1(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1,\sigma^2}_\delta \Big](\omega') \\
&\ge \tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^2)^x_u du}\, g\big((\hat X^2)^x_\delta, \sigma^2(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1,\sigma^2}_\delta \Big](\omega').
\end{aligned}
\tag{4.6}
$$

In the above, having in mind that the conditional expectations can be rewritten as ordinary expectations similarly as in (4.5), the first inequality follows by [14, Theorem 6.1], and the second by the decay of $g$ in the second variable.
Integrating both sides of (4.6) over all possible $\omega' \in \Omega'$ with respect to $d\mathbb P'$, we get that $u(x,\sigma_i) \ge u(x,\sigma_j)$. Thus we can conclude that $u$ is increasing and convex in the first variable as well as decreasing in the second. $\square$

Theorem 4.7 (Ordering in volatility)

(i) $v$ is decreasing in the volatility variable, i.e. $v_{\sigma_1} \ge v_{\sigma_2} \ge \cdots \ge v_{\sigma_m}$.

(ii) The boundaries are ordered in volatility as $b_{\sigma_1} \le b_{\sigma_2} \le \cdots \le b_{\sigma_m}$.

Proof (i) We will prove the claim by approximating the value function $v$ by a sequence of value functions $\{v_n\}_{n\ge 0}$ of corresponding Bermudan optimal stopping problems. Let $v_n$ denote the value function as in (2.7), but when stopping is allowed only at times $\big\{\frac{kT}{2^n} : k \in \{0,1,\dots,2^n\}\big\}$.

Let us fix $n \in \mathbb N$. We will show that, for any given $k \in \{0,\dots,2^n\}$ and any $t \in [\frac{kT}{2^n}, T]$, the value function $v_n(t,x,\sigma)$ is increasing and convex in $x$ as well as decreasing in $\sigma$ (note that here $\sigma$ denotes the initial value of the process $t \mapsto \sigma(t)$). The proof is by backwards induction from $k = 2^n$ down to $k = 0$. Since $v_n(T,\cdot,\cdot) = 1$, the base step $k = 2^n$ holds trivially. Now, suppose that, for some given $k \in \{0,\dots,2^n\}$, the value $v_n(t,x,\sigma)$ is increasing and convex in $x$ as well as decreasing in $\sigma$ for any $t \in [\frac{kT}{2^n}, T]$. Then, Lemma 4.6 tells us that, for any fixed $t \in [\frac{(k-1)T}{2^n}, \frac{kT}{2^n})$,

$$f(t,x,\sigma) := \tilde{\mathbb E}\Big[ e^{\int_t^{kT/2^n} \hat X^{t,x,\sigma}_u du}\, v_n\big(\tfrac{kT}{2^n}, \hat X^{t,x,\sigma}_{kT/2^n}, \sigma(\tfrac{kT}{2^n})\big) \Big]$$

is increasing and convex in $x$ as well as decreasing in $\sigma$. Consequently, since

$$v_n(t,x,\sigma) = \begin{cases} f(t,x,\sigma), & t \in \big(\frac{(k-1)T}{2^n}, \frac{kT}{2^n}\big), \\ f(t,x,\sigma) \vee 1, & t = \frac{(k-1)T}{2^n}, \end{cases} \tag{4.7}$$

the value $v_n(t,x,\sigma)$ is increasing and convex in $x$ as well as decreasing in $\sigma$ for any fixed $t \in [\frac{(k-1)T}{2^n}, T]$. Hence, by backwards induction, $v_n$ is increasing and convex in the second argument $x$ as well as decreasing in the third argument $\sigma$. Finally, since $v_n \to v$ pointwise as $n \to \infty$, we can conclude that the value function $v$ is decreasing in $\sigma$.
(ii) From the proof of Theorem 4.2 (ii), the claim is a direct consequence of part (i) above. $\square$

Remark 4.8 1. The value function is decreasing in the initial volatility (Theorem 4.7 (i)) also when the volatility is any continuous time-homogeneous positive Markov process independent of the driving Brownian motion $W$. The assertion is justified by inspection of the proof of Lemma 4.6, in which only the no-crossing property of the volatility trajectories was important, not the Markov chain structure.

2. Though there are no grounds to believe that any of the boundaries $b_{\sigma_1},\dots,b_{\sigma_m}$ is discontinuous, proving their continuity, except for the lowest one, is beyond the power of customary techniques. Continuity of the lowest boundary can be proved similarly as in the proof of part 4 of [15, Theorem 3.10], exploiting the ordering of the boundaries. The stumbling block for proving continuity of the upper boundaries is that, at a downward volatility jump time, the value function has a positive jump whose magnitude is difficult to quantify.

5 Generalisation to an Arbitrary Prior

In this section, we generalise most results of the earlier parts to the general prior case. In what follows, the prior μ of the drift is no longer a two-point but an arbitrary probability distribution.

5.1 Two-Dimensional Characterisation of the Posterior Distribution

Let us first think a bit more abstractly to develop intuition for the arbitrary prior case. According to the Kushner–Stratonovich stochastic partial differential equation (SPDE) for the posterior distribution (see [8, Sect. 3.2]), if we take the innovation process driving the SPDE and the volatility as the available information sources, then the posterior distribution is a measure-valued Markov process. Unfortunately, there do not exist any applicable general methods to solve optimal stopping problems for measure-valued stochastic processes.
If only we were able to characterise the posterior distribution process by an $\mathbb R^d$-valued Markovian process (with respect to the filtration generated by the innovation and the volatility processes), then we should manage to reduce our optimal stopping problem with a stochastic measure-valued underlying to an optimal stopping problem with an $\mathbb R^d$-valued Markovian underlying. Mercifully, this wishful thinking turns out to be possible in reality, as we shall soon see.

Unlike in the problem with constant volatility studied in [15], when the volatility is varying, the pair consisting of the elapsed time $t$ and the posterior mean $\hat X_t$ is not sufficient (with an exception of the two-point prior case studied before) to characterise the posterior distribution $\mu_t$ of $X$ given $\mathcal F^{S,\sigma}_t$. Hence we need some additional information to describe the posterior distribution. Quite surprisingly, all this needed additional information can be captured in a single additional observable statistic, which we will name the 'effective learning time'.

We start the development by first introducing some useful notation. Define $Y^{(i)}_t := Xt + \sigma_i W_t$ and let $\mu^{(i)}_{t,y}$ denote the posterior distribution of $X$ at time $t$ given $Y^{(i)}_t = y$. It needs to be mentioned that, for any given prior μ, the distribution of $X$ given $\mathcal F^{Y^{(i)}}_t$ and that of $X$ given $Y^{(i)}_t$ are equal (see Proposition 3.1 in [15]), which justifies our conditioning only on the last value $Y^{(i)}_t$. Also, recall that $l = \inf \operatorname{supp}(\mu)$, $h = \sup \operatorname{supp}(\mu)$.

The next lemma provides the key insight allowing us to characterise the posterior distribution by only two parameters.

Lemma 5.1 Let $\sigma_2 \ge \sigma_1 > 0$. Then

$$\{\mu^{(1)}_{t,y} : t > 0,\ y \in \mathbb R\} = \{\mu^{(2)}_{t,y} : t > 0,\ y \in \mathbb R\},$$

i.e. the sets of possible conditional distributions of $X$ in both cases are the same.

Proof Let $t > 0$, $y \in \mathbb R$. By standard filtering theory (a generalised Bayes' rule),

$$\mu^{(i)}_{t,y}(du) := \frac{e^{\frac{2uy - u^2 t}{2\sigma_i^2}}\, \mu(du)}{\int e^{\frac{2uy - u^2 t}{2\sigma_i^2}}\, \mu(du)}. \tag{5.1}$$
Then, taking $r = \frac{\sigma_1^2}{\sigma_2^2}\, t$ and $y' = \frac{\sigma_1^2}{\sigma_2^2}\, y$, we have that

$$\mu^{(2)}_{t,y}(du) = \mu^{(1)}_{r,y'}(du). \qquad\square$$

From Lemma 5.1 and [15, Lemma 3.3] we obtain the following important corollary, telling us that, having fixed a prior, any possible posterior distribution can be fully characterised by only two parameters.

Corollary 5.2 Let $t > 0$. Then, for any posterior distribution $\mu_t(\cdot) = \mathbb P(X \in \cdot \mid \mathcal F^{S,\sigma}_t)(\omega)$, there exists $(r,x) \in (0,T] \times (l,h)$ such that $\mu_t = \mu^{(1)}_{r,\, y_1(r,x)}$, where $y_1(r,x)$ is defined as the unique value satisfying $\mathbb E[X \mid Y^{(1)}_r = y_1(r,x)] = x$. In particular, we can take

$$r = \int_0^t \frac{\sigma_1^2}{\sigma(u)^2(\omega)}\, du \quad \text{and} \quad y_1(r,x) = \int_0^t \frac{\sigma_1^2}{\sigma(u)^2(\omega)}\, dY_u(\omega),$$

where $Y_u = \log(S_u) + \frac12 \int_0^u \sigma(b)^2\, db$.

When the volatility varies, so does the speed of learning about the drift. The corollary tells us that we can interpret $r$ as the effective learning time measured under the constant volatility $\sigma_1$. The intuition for the name is that, even though the volatility is varying over time, the same posterior distribution $\mu_t$ can also be obtained in a constant volatility model with the constant volatility $\sigma_1$, just at a different time $r$ and at a different value of the price $S$.

Remark 5.3 It is worth remarking that Corollary 5.2 also holds for any reasonable positive volatility process. Indeed, using the Kallianpur–Striebel formula with time-dependent volatility (see Theorem 2.9 on page 39 of [8]), the proof of Lemma 5.1 equally applies to an arbitrary positive time-dependent volatility and immediately yields the result of the corollary.

Next, we make a convenient technical assumption about the prior distribution μ.

Assumption 5.4 The prior distribution μ is such that

1. $\int e^{au}\, \mu(du) < \infty$ for some $a > 0$;
2. $\psi(\cdot,\cdot) : [0,T] \times (l,h) \to \mathbb R$ defined by

$$\psi(t,x) := \frac{1}{\sigma_1}\Big( \mathbb E\big[X^2 \mid Y^{(1)}_t = y_1(t,x)\big] - x^2 \Big) = \frac{1}{\sigma_1} \operatorname{Var}\big(X \mid Y^{(1)}_t = y_1(t,x)\big)$$

is a bounded function that is Lipschitz continuous in the second variable.
In particular, all compactly supported distributions as well as the normal distribution are known to satisfy Assumption 5.4 (see [15]), so it is an inconsequential restriction for practical applications.

5.2 Markovian Embedding

Similarly as in the two-point prior case, we will study the optimal stopping problem (2.5) by embedding it into a Markovian framework. With Corollary 5.2 telling us that the effective learning time $r$ and the posterior mean $x$ fully characterise the posterior distribution, we can now embed the optimal stopping problem (2.5) into the standard Markovian framework by defining the Markovian value function

$$v(t,x,r,\sigma) := \sup_{\tau \in \mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau \hat X^{t,x,r,\sigma}_{t+s}\, ds} \Big], \quad (t,x,r,\sigma) \in [0,T] \times (l,h) \times [0,T] \times \{\sigma_1,\dots,\sigma_m\}. \tag{5.2}$$

Here the process $\hat X = \hat X^{t,x,r,\sigma}$ evolves according to

$$
\begin{cases}
d\hat X_{t+s} = \sigma_1\, \psi(r_{t+s}, \hat X_{t+s})\, ds + \dfrac{\sigma_1}{\sigma(t+s)}\, \psi(r_{t+s}, \hat X_{t+s})\, d\hat B_{t+s}, & s \ge 0, \\[4pt]
dr_{t+s} = \dfrac{\sigma_1^2}{\sigma(t+s)^2}\, ds, & s \ge 0, \\[4pt]
\hat X_t = x, \quad r_t = r, \quad \sigma(t) = \sigma;
\end{cases}
\tag{5.3}
$$

the given dynamics of $\hat X$ is a consequence of Corollary 5.2 and the evolution equation of $\hat X$ in the constant volatility case (see equation (3.9) in [15]). Also, in (5.3), the process $\hat B_t = \int_0^t \sigma(u)\, du + W_t$ is a $\tilde{\mathbb P}$-Brownian motion. Lastly, in (5.2), $\mathcal T_{T-t}$ denotes the set of stopping times less than or equal to $T-t$ with respect to the usual augmentation of the filtration generated by $\{\hat X^{t,x,r,\sigma}_{t+s}\}_{s\ge 0}$ and $\{\sigma(t+s)\}_{s\ge 0}$.

Remark 5.5 Let us note that, in light of the observations of Sect. 5.1, if the regime-switching volatility were replaced by a different stochastic volatility process, the same Markovian embedding (5.2) could still be useful for the study of the altered problem.

5.3 Outline of the Approximation Procedure and Main Results

Under an arbitrary prior, the approximation procedure of Sect. 3 can also be applied; however, the operators $J$ and $J^{(n)}$ need to be redefined in a suitable way.
We redefine the operator $J$ to act on a function $f : [0,T] \times (l,h) \times [0,T] \to \mathbb R$ as

$$
\begin{aligned}
(Jf)(t,x,r,\sigma_i) &:= \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau \hat X^{t,x,r,\sigma_i}_{t+s} ds}\, \mathbb 1_{\{\tau<\eta^t_i\}} + e^{\int_0^{\eta^t_i} \hat X^{t,x,r,\sigma_i}_{t+s} ds}\, f\big(t+\eta^t_i, \hat X^{t,x,r,\sigma_i}_{t+\eta^t_i}, r^{t,r}_{t+\eta^t_i}\big)\, \mathbb 1_{\{\tau\ge\eta^t_i\}} \Big] \\
&= \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau (\hat X^{t,x,r,\sigma_i}_{t+s} - \lambda_i)\, ds} + \int_0^\tau \lambda_i\, e^{\int_0^u (\hat X^{t,x,r,\sigma_i}_{t+s} - \lambda_i)\, ds}\, f\big(t+u, \hat X^{t,x,r,\sigma_i}_{t+u}, r^{t,r}_{t+u}\big)\, du \Big]
\end{aligned}
\tag{5.4}
$$

and then the operator $J_i$ as $J_i f := (Jf)(\cdot,\cdot,\cdot,\sigma_i)$. Intuitively, $(J_i f)$ represents a Markovian value function corresponding to optimal stopping before $t+\eta^t_i$, i.e. before the first volatility change after $t$, when, at time $t+\eta^t_i < T$, the payoff $f\big(t+\eta^t_i, \hat X^{t,x,r,\sigma_i}_{t+\eta^t_i}, r^{t,r}_{t+\eta^t_i}\big)$ is received, provided stopping has not occurred yet. The underlying process in the optimal stopping problem $J_i f$ is the diffusion $(t, \hat X_t, r_t)$.

The majority of the results in Sects. 3 and 4 generalise nicely to the arbitrary prior case. Proposition 3.1 extends word for word; the proofs are analogous, just the second property of ψ from [15, Proposition 3.6] needs to be used for Proposition 3.1 (iv). In addition, we have that $f$ decreasing in $r$ implies that $J_i f$ is decreasing in $r$, which is proved by a Bermudan approximation argument as in Proposition 3.1 (iv) using the time decay of ψ from [15, Proposition 3.6]. As a result, for $f : [0,T]\times(l,h)\times[0,T] \to \mathbb R$ that is decreasing in the first and third variables as well as increasing (though not too fast as $x \nearrow \infty$) and convex in the second, there exists a function (a stopping boundary) $b^f_{\sigma_i} : [0,T) \times [0,T) \to [l,0]$ that is increasing in both variables and such that the continuation region $C_f := \{(t,x,r) \in [0,T)\times(l,h)\times[0,T) : (J_i f)(t,x,r) > 1\}$ (optimality shown as in Proposition 3.2) satisfies

$$C_f = \{(t,x,r) \in [0,T)\times(l,h)\times[0,T) : x > b^f_{\sigma_i}(t,r)\}.$$
In addition, each pair $(J_i f, b^f_{\sigma_i})$ solves the free-boundary problem

$$
\begin{cases}
\partial_t u(t,x,r) + \dfrac{\sigma_1^2}{\sigma_i^2}\, \partial_r u(t,x,r) + \sigma_1 \psi(r,x)\, \partial_x u(t,x,r) + \dfrac12\, \dfrac{\sigma_1^2}{\sigma_i^2}\, \psi(r,x)^2\, \partial_{xx} u(t,x,r) & \\
\quad + (x-\lambda_i)\, u(t,x,r) + \lambda_i f(t,x,r) = 0, & \text{if } x > b^f_{\sigma_i}(t,r), \\
u(t,x,r) = 1, & \text{if } x \le b^f_{\sigma_i}(t,r) \text{ or } t = T.
\end{cases}
$$

With the operator $J^{(n)}$ redefined as

$$(J^{(n)} f)(t,x,r,\sigma_i) := \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau \hat X^{t,x,r,\sigma_i}_{t+s} ds}\, \mathbb 1_{\{\tau<\xi^t_n\}} + e^{\int_0^{\xi^t_n} \hat X^{t,x,r,\sigma_i}_{t+s} ds}\, f\big(t+\xi^t_n, \hat X^{t,x,r,\sigma_i}_{t+\xi^t_n}, r^{t,r}_{t+\xi^t_n}\big)\, \mathbb 1_{\{\tau\ge\xi^t_n\}} \Big],$$

the crucial Proposition 3.5 holds word for word. Furthermore, the sequence of functions $\{J^{(n)}_\sigma 1\}_{n\ge 0}$ is increasing and bounded from below by $1$, with each $J^{(n)}_\sigma 1$ being decreasing in the first and third variables as well as increasing and convex in the second variable $x$. As desired,

$$J^{(n)}_\sigma 1 \nearrow v \quad \text{pointwise as } n \nearrow \infty,$$

so the value function $v$ is decreasing in the first and third variables as well as increasing and convex in the second variable; again, $v$ is a fixed point of $\mathbf J$. Moreover, the uniform approximation error result (3.20) also holds for compactly supported priors (with the obvious reinterpretation $h = \sup(\operatorname{supp}\mu)$).

We can also show (by a similar argument as in Theorem 4.2 (iii)) that

$$b^{(n)}_{\sigma_i} \searrow b_{\sigma_i} \quad \text{pointwise as } n \nearrow \infty,$$

where $b^{(n)}_{\sigma_i} := b^{g^{(n)}_i}_{\sigma_i}$ with $g^{(n)}_i := \sum_{j\ne i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1$, and the limit $b_{\sigma_i}$ is a function increasing in both variables. Lastly, by similar arguments as before, the stopping time

$$\tau^* = \inf\big\{ s \in [0,T-t) : \hat X^{t,x,r,\sigma}_{t+s} \le b_{\sigma(t+s)}(t+s, r_{t+s}) \big\} \wedge (T-t)$$

is optimal for the liquidation problem (2.5).

Remark 5.6 The higher the volatility, the slower the learning about the drift, so under Assumption 4.5 it is tempting to expect that the value function $v$ is decreasing in the volatility variable and so the stopping boundaries satisfy $b_{\sigma_1} \le b_{\sigma_2} \le \cdots \le b_{\sigma_m}$ also in the case of an arbitrary prior distribution μ. Regrettably, proving (or disproving) such monotonicity in volatility has not been achieved by the author.
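The scaling behind Lemma 5.1 and the effective learning time can be verified directly from the Bayes formula (5.1) in the case of a discrete prior. A sketch (the three-point prior, volatilities, and observation point are illustrative):

```python
import math

def posterior(prior, t, y, sigma):
    # Posterior weights of a discrete drift prior mu given Y_t = y, where
    # Y = X*t + sigma*W, via the generalised Bayes rule (5.1).
    w = [p * math.exp((2 * u * y - u * u * t) / (2 * sigma ** 2))
         for u, p in prior]
    z = sum(w)
    return [x / z for x in w]

# Illustrative three-point prior on the drift and an observation point (t, y).
prior = [(-0.5, 0.2), (0.0, 0.3), (0.7, 0.5)]
sigma1, sigma2 = 0.3, 0.6
t, y = 1.3, 0.4

# Lemma 5.1: the sigma2-posterior at (t, y) coincides with the sigma1-posterior
# at the effective time r = (sigma1/sigma2)^2 * t and y' = (sigma1/sigma2)^2 * y.
c = (sigma1 / sigma2) ** 2
p2 = posterior(prior, t, y, sigma2)
p1 = posterior(prior, c * t, c * y, sigma1)
assert max(abs(a - b) for a, b in zip(p1, p2)) < 1e-12
```

The identity holds exactly because the exponent in (5.1) is invariant under the simultaneous rescaling of $t$ and $y$ by $\sigma_1^2/\sigma_2^2$; observing under higher volatility is equivalent to observing under the reference volatility for a shorter effective time.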
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Bain, A., Crisan, D.: Fundamentals of Stochastic Filtering. Stochastic Modelling and Applied Probability, vol. 60. Springer, New York (2009)
2. Bayraktar, E.: A proof of the smoothness of the finite time horizon American put option for jump diffusions. SIAM J. Control Optim. 48(2), 551–572 (2009)
3. Bayraktar, E.: On the perpetual American put options for level dependent volatility models with jumps. Quant. Financ. 11(3), 335–341 (2011)
4. Bayraktar, E., Kravitz, R.: Quickest detection with discretely controlled observations. Seq. Anal. 34(1), 77–133 (2015)
5. Bayraktar, E., Dayanik, S., Karatzas, I.: Adaptive Poisson disorder problem. Ann. Appl. Probab. 16(3), 1190–1261 (2006)
6. Bensoussan, A.: Applications of Variational Inequalities in Stochastic Control. Studies in Mathematics and Its Applications, vol. 12. North-Holland, Amsterdam (1982)
7. Crépey, S.: About the pricing equations in finance. In: Paris-Princeton Lectures on Mathematical Finance 2010, pp. 63–203. Springer, Berlin (2011)
8. Crisan, D., Rozovskii, B.: The Oxford Handbook of Nonlinear Filtering. Oxford University Press, Oxford (2011)
9. Dayanik, S., Poor, H.V., Sezer, S.O.: Multisource Bayesian sequential change detection. Ann. Appl. Probab. 18(2), 552–590 (2008)
10. Décamps, J.-P., Mariotti, T., Villeneuve, S.: Investment timing under incomplete information. Math. Oper. Res. 30(2), 472–500 (2005)
11. Di Masi, G.B., Kabanov, Y.M., Runggaldier, W.J.: Mean-variance hedging of options on stocks with Markov volatilities. Theory Probab. Appl.
39(1), 172–182 (1995)
12. Ekström, E., Lu, B.: Optimal selling of an asset under incomplete information. Int. J. Stoch. Anal. 2011, ID 543590 (2011)
13. Ekström, E., Lu, B.: The optimal dividend problem in the dual model. Adv. Appl. Probab. 46(3), 746–765 (2014)
14. Ekström, E., Tysk, J.: Convexity theory for the term structure equation. Financ. Stoch. 12(1), 117–147 (2008)
15. Ekström, E., Vaicenavicius, J.: Optimal liquidation of an asset under drift uncertainty. SIAM J. Financ. Math. 7(1), 357–381 (2016)
16. Elie, R., Kharroubi, I.: Probabilistic representation and approximation for coupled systems of variational inequalities. Stat. Probab. Lett. 80(17–18), 1388–1396 (2010)
17. Eloe, P., Liu, R.H., Yatsuki, M., Yin, G., Zhang, Q.: Optimal selling rules in a regime-switching exponential Gaussian diffusion model. SIAM J. Appl. Math. 69(3), 810–829 (2008)
18. Gapeev, P.: Pricing of perpetual American options in a model with partial information. Int. J. Theor. Appl. Financ. 15(1), ID 1250010 (2012)
19. Gugerli, U.S.: Optimal stopping of a piecewise-deterministic Markov process. Stochastics 19(4), 221–236 (1986)
20. Guo, X., Zhang, Q.: Closed-form solutions for perpetual American put options with regime switching. SIAM J. Appl. Math. 64(6), 2034–2049 (2004)
21. Guo, X., Zhang, Q.: Optimal selling rules in a regime switching model. IEEE Trans. Autom. Control 50, 1450–1455 (2005)
22. Huang, Y., Forsyth, P.A., Labahn, G.: Methods for pricing American options under regime switching. SIAM J. Sci. Comput. 33(5), 2144–2168 (2011)
23. Karatzas, I., Shreve, S.: Methods of Mathematical Finance. Applications of Mathematics, vol. 39. Springer, New York (1998)
24. Le, H., Wang, C.: A finite time horizon optimal stopping problem with regime switching. SIAM J. Control Optim. 48(8), 5193–5213 (2010)
25. Lu, B.: Optimal selling of an asset with jumps under incomplete information. Appl. Math. Financ. 20(6), 599–610 (2013)
26.
Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 6th edn. Springer, New York (2007)
27. Pascucci, A.: Free boundary and optimal stopping problems for American Asian options. Financ. Stoch. 12(1), 21–41 (2008)
28. Pemy, M., Zhang, Q.: Optimal stock liquidation in a regime switching model with finite time horizon. J. Math. Anal. Appl. 321(2), 537–552 (2006)
29. Peskir, G., Shiryaev, A.: Optimal Stopping and Free-Boundary Problems. Lectures in Mathematics, ETH Zürich. Birkhäuser Verlag, Basel (2006)
30. Pham, H.: Continuous-Time Stochastic Control and Optimization with Financial Applications, vol. 61. Springer, Berlin (2009)
31. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Grundlehren der Mathematischen Wissenschaften, 3rd edn., vol. 293. Springer, Berlin (1999)
32. Rogers, L.C.G.: Optimal Investment. Springer Briefs in Quantitative Finance. Springer, New York (2013)
33. Vannestål, M.: Exercising American options under uncertainty. Working paper (2017)
34. Yin, G., Liu, R.H., Zhang, Q.: Recursive algorithms for stock liquidation: a stochastic optimization approach. SIAM J. Optim. 13(1), 240–263 (2002)
35. Yin, G., Zhang, G., Liu, F., Liu, R.H., Cheng, Y.: Stock liquidation via stochastic approximation using Nasdaq daily and intra-day data. Math. Financ. 16(1), 217–236 (2006)
36. Zhang, Q.: Stock trading: an optimal selling rule. SIAM J. Control Optim. 40(1), 64–87 (2001)
37. Zhang, Q., Yin, G., Liu, R.H.: A near-optimal selling rule for a two-time-scale market model. Multiscale Model. Simul. 4(1), 172–193 (2005)

Asset Liquidation Under Drift Uncertainty and Regime-Switching Volatility

Applied Mathematics and Optimization, Volume OnlineFirst – Aug 30, 2018



Publisher: Springer Journals
Copyright: © 2018 The Author(s)
Subject: Mathematics; Calculus of Variations and Optimal Control; Optimization; Systems Theory, Control; Theoretical, Mathematical and Computational Physics; Mathematical Methods in Physics; Numerical and Computational Physics, Simulation
ISSN: 0095-4616
eISSN: 1432-0606
DOI: 10.1007/s00245-018-9518-5

Abstract

Optimal liquidation of an asset with unknown constant drift and stochastic regime-switching volatility is studied. The uncertainty about the drift is represented by an arbitrary probability distribution; the stochastic volatility is modelled by an m-state Markov chain. Using filtering theory, an equivalent reformulation of the original problem as a four-dimensional optimal stopping problem is found and then analysed by constructing approximating sequences of three-dimensional optimal stopping problems. An optimal liquidation strategy and various structural properties of the problem are determined. Analysis of the two-point prior case is presented in detail; building on it, an outline of the extension to the general prior case is given.

Keywords: Optimal liquidation · Drift uncertainty · Regime-switching volatility · Sequential analysis · Optimal stopping · Stochastic filtering

Mathematics Subject Classification: Primary 60G40 · Secondary 91G80 · 60J25

1 Introduction

Selling is a fundamental and ubiquitous economic operation. As the prices of goods fluctuate over time, 'What is the best time to sell an asset to maximise revenue?' qualifies as a basic question in Finance. Suppose that an asset needs to be sold before a known deterministic time $T > 0$ and that the only source of information available to the seller is the price history. A natural mathematical reformulation of the aforementioned optimal selling question is to find a selling time $\tau^* \in \mathcal{T}$ such that

$$\mathbb{E}[S_{\tau^*}] = \sup_{\tau \in \mathcal{T}} \mathbb{E}[S_\tau], \qquad (1.1)$$

where $\{S_t\}_{t \ge 0}$ denotes the price process and $\mathcal{T}$ denotes the set of stopping times with respect to the price process S.

(Juozas Vaicenavicius, juozas.vaicenavicius@it.uu.se, Department of Information Technology, Uppsala University, Box 337, 751 05 Uppsala, Sweden)
Many popular continuous models for the price process are of the form

$$dS_t = \alpha S_t\,dt + \sigma(t) S_t\,dW_t, \qquad (1.2)$$

where $\alpha \in \mathbb{R}$ is called the drift and $\sigma \ge 0$ is known as the volatility process. Imposing the simplifying assumptions that the volatility is independent of W as well as time-homogeneous, an m-state time-homogeneous Markov chain stands out as a basic though still rather flexible stochastic volatility model (proposed in [11]), which we choose to use in this article. The flexibility comes from the fact that we can choose the state space as well as the transition intensities between the states.

Though the problem (1.1) in which S follows (1.2) is well-posed mathematically, from a financial point of view, the known-drift assumption is widely accepted to be unreasonable (e.g. see [32, Sect. 4.2 on p. 144]) and needs to be relaxed. Hence, using the Bayesian paradigm, we model the initial uncertainty about the drift by a probability distribution (known as the prior in Bayesian inference), which incorporates all the available information about the parameter and its uncertainty (see [15] for more on the interpretation of the prior). If the quantification of initial uncertainty is subjective, then the prior represents one's beliefs about how likely the drift is to take different values. To be able to incorporate arbitrary prior beliefs, we set out to solve the optimal selling problem (1.1) under an arbitrary prior for the drift.

In the present paper, we analyse and solve the asset liquidation problem (1.1) in the case when S follows (1.2) with m-state time-homogeneous Markov chain volatility and unknown drift, the uncertainty of which is modelled by an arbitrary probability distribution. The first time a particular four-dimensional process hits a specific boundary determining the stopping set is shown to be optimal. This stopping boundary has attractive monotonicity properties and can be found using the approximation procedure developed.
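For intuition about the model (1.2), a price path with Markov-chain volatility is easy to simulate. The sketch below uses a simple Euler discretisation and a two-state chain; all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_price(T=1.0, n=1000, s0=1.0, alpha=0.05,
                   vols=(0.2, 0.4), Q=((-2.0, 2.0), (3.0, -3.0))):
    """Euler scheme for dS_t = alpha*S_t dt + sigma(t)*S_t dW_t,
    where sigma(t) is a two-state Markov chain with generator Q."""
    dt = T / n
    Q = np.asarray(Q)
    state = 0
    s = np.empty(n + 1)
    s[0] = s0
    sigma_path = np.empty(n + 1)
    sigma_path[0] = vols[state]
    for k in range(n):
        # switch regime with probability ~ (total jump intensity) * dt
        rates = Q[state].copy()
        rates[state] = 0.0
        if rng.random() < rates.sum() * dt:
            state = rng.choice(len(vols), p=rates / rates.sum())
        dW = rng.normal(0.0, np.sqrt(dt))
        s[k + 1] = s[k] * (1.0 + alpha * dt + vols[state] * dW)
        sigma_path[k + 1] = vols[state]
    return s, sigma_path

prices, sigmas = simulate_price()
```

Between switches the path is an ordinary geometric Brownian motion, which is exactly the structure exploited by the approximation procedure developed in the paper.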
Let us elucidate our study of the optimal selling problem in more depth. Using nonlinear filtering theory, the original selling problem with parameter uncertainty is rewritten as an equivalent optimal stopping problem of a standard form (i.e. without unknown parameters). In this new optimal stopping problem, the posterior mean serves as the underlying process and acts as a stochastic creation rate; the payoff function in the problem is constant. The posterior mean is shown to be the solution of an SDE depending on the prior and the whole volatility history. Embedding the optimal stopping problem into a Markovian framework is non-trivial because the whole posterior distribution needs to be included as a variable. Fortunately, we show that, having fixed the prior, the posterior is fully characterised by only two real-valued parameters: the posterior mean and what we call the effective learning time. As a result, we are able to define an associated Markovian value function with four underlying variables (time, posterior mean, effective learning time, and volatility) and study the optimal stopping problem as a four-dimensional Markovian optimal stopping problem (the volatility takes values in a finite set, but, slightly abusing terminology, we still call it a dimension). Exploiting that the volatility is constant between the regime switches, we construct m sequences of simpler auxiliary three-dimensional Markovian optimal stopping problems whose values converge monotonically to the true value function in the limit. The main advantage of this approximating sequence approach compared with tackling the full variational inequality of the problem directly is that dealing with the analytically complicated coupled system is avoided altogether. Instead, only much simpler standard uncoupled free-boundary problems need to be analysed or solved numerically to arrive at a desired result.
We show that the value function is decreasing in time and effective learning time as well as increasing and convex in the posterior mean. The first hitting time of a region specified by a stopping boundary that is a function of time, effective learning time, and volatility is shown to be optimal. The stopping boundary is increasing in time and effective learning time, and is the limit of a monotonically increasing sequence of boundaries from the auxiliary problems. Moreover, the approximation procedure using the auxiliary problems yields a method to calculate the value function as well as the optimal stopping boundary numerically.

In the two-point prior case, the posterior mean fully characterises the posterior distribution, making the problem more tractable and allowing us to obtain some additional results. In particular, we prove that, under a skip-free volatility assumption, the Markovian value function is decreasing in the volatility and that the stopping boundary is increasing in the volatility.

In a broader mathematical context, the selling problem investigated appears to be the first optimal stopping problem with parameter uncertainty and stochastic volatility to be studied in the literature. Thus it is plausible that the ideas presented herein will find uses in other optimal stopping problems of the same type; for example, in classical problems of Bayesian sequential analysis (e.g. see [29, Chapter VI]) with stochastically evolving noise magnitude. It is clear to the author that, with additional effort, a number of results of the article can be refined or generalised. However, the objective chosen is to provide an intuitive understanding of the problem and the solution while still maintaining readability and clarity. This also explains why, for the most part, we focus on the two-point prior case and outline an extension to the general prior case only at the end.
1.1 Related Literature

There is a strand of research on asset liquidation problems in models with regime-switching volatility; alas, these works either concern only a special class of suboptimal strategies or treat the drift as observable. In [36], a restrictive asset liquidation problem was proposed and studied; the drift as well as the volatility were treated as unobservable, and the possibility to learn about the parameters from the observations was disregarded. The subsequent papers [17,34,35] explored various aspects of the same formulation. An optimal selling problem with the payoff $e^{-r\tau}(S_\tau - K)$ was studied in [26] for the Black–Scholes model, in [21] for a two-state regime-switching model, and in [35] for an m-state model with finite horizon. In all three cases, the drift and the volatility are assumed to be fully observable.

In another strand of research, the optimal stopping problem (1.1) has been solved and analysed in the Black–Scholes model under arbitrary uncertainty about the drift. The two-point prior case was studied in [12], while the general prior case was solved in [15] using a different approach. This article can be viewed as a generalisation of [15] to include stochastic regime-switching volatility. Related option valuation problems under incomplete information were studied in [18,33], both in the two-point prior case, and in [10] in the n-point prior case.

The approach we take to approximate a Markovian value function by a sequence of value functions of simpler constant-volatility problems was used before in [24] to investigate a finite-horizon American put problem (and a slight generalisation of it) in a regime-switching model with full information. Regrettably, in the case of 3 or more volatility states, the recursive approximation step in [24, Sect. 5] contains a blunder; we rectify it in Sect. 3.2 of this article.
A possible alternative route to analysing and solving the optimal stopping problem is to tackle the system of variational inequalities directly using weak-solution techniques (e.g., see [6,30]), similarly as in [7] for American options with regime-switching volatility. Structural and regularity properties would need to be established using PDE techniques. If appropriate theoretical results can be obtained, the numerical PDE schemes discussed in [22] should yield a numerical solution. However, this alternative approach requires a different toolkit, appears to be more demanding analytically, and hence is not investigated further in the present article.

Though it is true that the current paper is a generalisation of [15] from constant volatility to the regime-switching stochastic volatility model, the extension is definitely not a straightforward one. Novel statistical learning intuitions were needed, and new proofs were developed to arrive at the results of the paper. One of the main insights of the optimal liquidation problem with constant volatility in [15] was that the current time and price were sufficient statistics for the optimal selling problem. However, changing the volatility from constant to stochastic makes the posterior distribution of the drift truly dependent on the price path. This raises the questions whether an optimal liquidation problem can be treated using the mainstream finite-dimensional Markovian techniques at all, and also whether any of the developments from the constant volatility case can be taken advantage of. In the two-point prior case with regime-switching volatility, the following new insight was key. Despite the posterior being a path-dependent function of the stock price, we can show that the current time, posterior mean, and instantaneous volatility (extracted from the price process) are sufficient statistics for the optimal liquidation problem.
Alas, for any prior with more than two points in the support, the same triplet is no longer a sufficient statistic. Fortunately, if in addition to the time-price-volatility triplet we introduce an additional statistic, which we name the effective learning time, the resulting 4-tuple becomes a sufficient statistic for the selling problem under a general prior. Besides these insights, some new technicalities (in particular, Lemma 2.3) stemming from stochastic volatility had to be resolved to reformulate the optimal selling problem into the standard Markovian form.

In relation to [24], though we employ the same general iterative approximation idea to construct an approximating sequence for the Markovian value function, the particulars, including proofs and results, are notably distinct. Firstly, we work in a more general setting, proving and formulating more abstract as well as, in multiple instances, new types of results. For example, we prove things in the m-state rather than the two-state regime-switching model. This allowed us to catch and correct an erroneous construction of the approximating sequence in [24] for models with more than two volatility states. Moreover, almost all the proofs follow different arguments, either because of the structural differences in the selling problem or because we prefer another way, which seems to be more transparent and direct, to arrive at the results. Lastly, many of the results in the present paper are problem-specific and do not even depend on the iterative approximation of the value function after all.

The idea to iteratively construct a sequence of auxiliary value functions that converge to the true value function in the limit is generic and has been successfully applied many times to optimal stopping problems with a countable number of discrete events (e.g. jumps, discrete observations).
In the setting with partial observations, an iterative approximation scheme was employed in [5] to study the Poisson disorder detection problem with unknown post-disorder intensity, then later, in [9], to analyse a combined Poisson–Wiener disorder detection problem, and, more recently, in [4], to investigate Wiener disorder detection under discrete observations. In the fully observable setting, such iterative approximations go back to at least as early as [19], which deals with a Markovian optimal stopping problem with a piecewise-deterministic underlying. In Financial Mathematics, iteratively constructed approximations were used in [2,3] to study the value functions of finite and perpetual American put options, respectively, for a jump diffusion. Besides optimal stopping, the iterative approximation technique was utilised for the singular control problem [13] of optimal dividend policy.

2 Problem Set-Up

We model a financial market on a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge0}, \mathbb{P})$ satisfying the usual conditions. Here the measure $\mathbb{P}$ denotes the physical probability measure. The price process is modelled by

$$dS_t = X S_t\,dt + \sigma(t) S_t\,dW_t, \qquad (2.1)$$

where X is a random variable having probability distribution μ, W is a standard Brownian motion, and σ is a time-homogeneous right-continuous m-state Markov chain with a generator $\Lambda = (\lambda_{ij})_{1\le i,j\le m}$ and taking values $\sigma_m \ge \cdots \ge \sigma_1 > 0$. Moreover, we assume that X, W, and σ are independent. Since the volatility can be estimated from the observations of S in an arbitrarily short period of time (at least in theory), it is reasonable to assume that the volatility process $\{\sigma(t)\}_{t\ge0}$ is observable. Hence the available information is modelled by the filtration $\mathbb{F}^{S,\sigma} = \{\mathcal{F}^{S,\sigma}_t\}_{t\ge0}$ generated by the processes S and σ and augmented by the null sets of $\mathcal{F}$. Note that the drift X and the random driver W are not directly observable.
The optimal selling problem that we are interested in is

$$V = \sup_{\tau \in \mathcal{T}^{S,\sigma}} \mathbb{E}[S_\tau], \qquad (2.2)$$

where $\mathcal{T}^{S,\sigma}$ denotes the set of $\mathbb{F}^{S,\sigma}$-stopping times smaller than or equal to a prespecified time horizon $T > 0$.

Remark 2.1 It is straightforward to include a discount factor $e^{-r\tau}$ in (2.2). In fact, it simply corresponds to a shift of the prior distribution μ in the negative direction by r.

Let $l := \inf \operatorname{supp}(\mu)$ and $h := \sup \operatorname{supp}(\mu)$. It is easy to see that if $l \ge 0$, then it is optimal to stop at the terminal time T. Likewise, if $h \le 0$, then stopping immediately, i.e. at time zero, is optimal. The rest of the article focuses on the remaining and most interesting case.

Assumption 2.2 $l < 0 < h$.

2.1 Equivalent Reformulation Under a Measure Change

Let us write $\hat X_t := \mathbb{E}[X \mid \mathcal{F}^{S,\sigma}_t]$. Then the process

$$\hat W_t := \int_0^t \frac{X - \hat X_s}{\sigma(s)}\,ds + W_t,$$

called the innovation process, is an $\mathbb{F}^{S,\sigma}$-Brownian motion (see [1, Proposition 2.30 on p. 33]).

Lemma 2.3 The volatility process σ and the innovation process $\hat W$ are independent.

Proof Since X, W, and σ are independent, we can think of $(\Omega, \mathcal{F}, \mathbb{P})$ as a product space $\big(\Omega_{X,W} \times \Omega_\sigma,\ \mathcal{F}_{X,W} \otimes \mathcal{F}_\sigma,\ \mathbb{P}_{X,W} \times \mathbb{P}_\sigma\big)$. Let $A, A' \in \mathcal{B}\big(\mathbb{R}^{[0,T]}\big)$. Then

$$\begin{aligned} \mathbb{P}\big(\hat W \in A,\ \sigma \in A'\big) &= \int 1_{\{\hat W(\omega_{X,W},\,\omega_\sigma) \in A,\ \sigma(\omega_\sigma) \in A'\}}\ d\big(\mathbb{P}_{X,W} \times \mathbb{P}_\sigma\big)(\omega_{X,W}, \omega_\sigma) \\ &= \int\!\!\int 1_{\{\sigma(\omega_\sigma) \in A'\}}\, 1_{\{\hat W(\omega_{X,W},\,\omega_\sigma) \in A\}}\ d\mathbb{P}_{X,W}(\omega_{X,W})\ d\mathbb{P}_\sigma(\omega_\sigma) \\ &= \int 1_{\{\sigma(\omega_\sigma) \in A'\}}\ \mathbb{P}_{X,W}\big(\hat W(\cdot, \omega_\sigma) \in A\big)\ d\mathbb{P}_\sigma(\omega_\sigma) \\ &= \mathbb{P}\big(\hat W \in A\big)\,\mathbb{P}\big(\sigma \in A'\big), \qquad (2.3) \end{aligned}$$

where the penultimate equality is justified by the fact that, for any fixed $\omega_\sigma$, the innovation process $\hat W(\cdot, \omega_\sigma)$ is a Brownian motion under $\mathbb{P}_{X,W}$. Hence, from (2.3), the processes $\hat W$ and σ are independent. □
Defining a new equivalent measure $\tilde{\mathbb{P}} \sim \mathbb{P}$ on $(\Omega, \mathcal{F}_T)$ via the Radon–Nikodym derivative

$$\frac{d\tilde{\mathbb{P}}}{d\mathbb{P}} = e^{\int_0^T \sigma(t)\,d\hat W_t - \frac12 \int_0^T \sigma(t)^2\,dt}$$

and writing

$$S_t = S_0\, e^{Xt + \int_0^t \sigma(s)\,dW_s - \frac12 \int_0^t \sigma(s)^2\,ds} = S_0\, e^{\int_0^t \hat X_s\,ds + \int_0^t \sigma(s)\,d\hat W_s - \frac12 \int_0^t \sigma(s)^2\,ds},$$

we have that, for any $\tau \in \mathcal{T}^{S,\sigma}$,

$$\mathbb{E}[S_\tau] = \tilde{\mathbb{E}}\big[S_0\, e^{\int_0^\tau \hat X_s\,ds}\big] = S_0\, \tilde{\mathbb{E}}\big[e^{\int_0^\tau \hat X_s\,ds}\big].$$

Moreover, by Girsanov's theorem, the process $B_t := -\int_0^t \sigma(s)\,ds + \hat W_t$ is a $\tilde{\mathbb{P}}$-Brownian motion on $[0,T]$. In addition, Lemma 2.3 together with [1, Proposition 3.13] tells us that the law of σ is the same under $\mathbb{P}$ and $\tilde{\mathbb{P}}$, as well as that B and σ are independent under $\tilde{\mathbb{P}}$. Without loss of generality, we set $S_0 = 1$ throughout the article, so the optimal stopping problem (2.2) can be cast as

$$V = \sup_{\tau \in \mathcal{T}^{S,\sigma}} \tilde{\mathbb{E}}\big[e^{\int_0^\tau \hat X_s\,ds}\big]. \qquad (2.4)$$

Between the volatility jumps, the stock price is a geometric Brownian motion with known constant volatility and unknown drift. Hence, by Corollary 3.4 in [15], we have that $\mathbb{F}^{S,\sigma} = \mathbb{F}^{\hat X,\sigma}$ and $\mathcal{T}^{S,\sigma} = \mathcal{T}^{\hat X,\sigma}$, where $\mathbb{F}^{\hat X,\sigma}$ denotes the usual augmentation of the filtration generated by $\hat X$ and σ; also, $\mathcal{T}^{\hat X,\sigma}$ denotes the set of $\mathbb{F}^{\hat X,\sigma}$-stopping times not exceeding T. As a result, an equivalent reformulation of (2.4) is

$$V = \sup_{\tau \in \mathcal{T}^{\hat X,\sigma}} \tilde{\mathbb{E}}\big[e^{\int_0^\tau \hat X_s\,ds}\big], \qquad (2.5)$$

which we will study in the subsequent parts of the article.

2.2 Markovian Embedding

In all except the last section of this article, we will focus on the special case when X has a two-point distribution $\mu = \pi\delta_h + (1-\pi)\delta_l$, where $h > l$ and $\pi \in (0,1)$ are constants, and $\delta_h, \delta_l$ are Dirac measures at h and l, respectively. In this special case, expressions are simpler and arguments are easier to follow than in the general prior case; still, most underlying ideas of the arguments are the same. Hence, we choose to understand the two-point prior case first, after which generalising the results to the general prior case will become a rather easy task.
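To make the filtering step concrete in the two-point prior case, the posterior probability of the high drift, and hence the posterior mean $\hat X$, can be computed from the observations by Bayes' rule. The sketch below assumes, for simplicity, a constant volatility over the observation window and works with de-trended log-price increments $dY_t = X\,dt + \sigma\,dW_t$; all numerical values and the discretisation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-point prior: X = h with probability pi0, X = l with probability 1 - pi0.
h, l, pi0 = 0.3, -0.3, 0.5
T, n = 1.0, 2000
dt = T / n
sigma = 0.25  # volatility, held constant in this sketch

X = h if rng.random() < pi0 else l          # the (hidden) true drift
dW = rng.normal(0.0, np.sqrt(dt), n)
dY = X * dt + sigma * dW                    # observed increments

# Girsanov log-likelihoods of the two drift hypotheses
t_grid = dt * np.arange(1, n + 1)
loglik_h = (h / sigma**2) * np.cumsum(dY) - 0.5 * (h / sigma)**2 * t_grid
loglik_l = (l / sigma**2) * np.cumsum(dY) - 0.5 * (l / sigma)**2 * t_grid

# Posterior probability of {X = h} and the posterior mean X-hat
pi = 1.0 / (1.0 + (1 - pi0) / pi0 * np.exp(loglik_l - loglik_h))
x_hat = h * pi + l * (1 - pi)
```

The resulting path of $\hat X$ stays in $(l, h)$ and is driven by the observations alone, in line with the filtering reformulation above.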
Since the volatility is a known constant between the jump times, using the dynamics of $\hat X$ in the constant-volatility case [equation (3.9) in [15]], the process $\hat X$ is the unique strong solution of

$$d\hat X_t = \sigma(t)\,\varphi\big(\hat X_t, \sigma(t)\big)\,dt + \varphi\big(\hat X_t, \sigma(t)\big)\,dB_t, \qquad (2.6)$$

where $\varphi(x,\sigma) := \frac{(h-x)(x-l)}{\sigma}$. Now, we can embed the optimal stopping problem (2.4) into a Markovian framework by defining a Markovian value function

$$v(t,x,\sigma) := \sup_{\tau \in \mathcal{T}_{T-t}} \tilde{\mathbb{E}}\big[e^{\int_0^\tau \hat X^{t,x,\sigma}_{t+s}\,ds}\big], \quad (t,x,\sigma) \in [0,T] \times (l,h) \times \{\sigma_1, \ldots, \sigma_m\}. \qquad (2.7)$$

Here $\hat X^{t,x,\sigma}$ denotes the process $\hat X$ in (2.6) started at time t with $\hat X_t = x$ and $\sigma(t) = \sigma$, and $\mathcal{T}_{T-t}$ stands for the set of stopping times less than or equal to $T-t$ with respect to the usual augmentation of the filtration generated by $\{\hat X^{t,x,\sigma}_{t+s}\}_{s\ge0}$ and $\{\sigma(t+s)\}_{s\ge0}$. The formulation (2.7) has an interpretation of an optimal stopping problem with the constant payoff 1 and the discount rate $-\hat X$; from now onwards, we will study this discounted problem. The notation $v_i := v(\cdot, \cdot, \sigma_i)$ will often be used.

3 Approximation Procedure

It is not clear how to compute v in (2.7) or analyse it directly. Hence, in this section, we develop a way to approximate the value function v by a sequence of value functions corresponding to simpler constant-volatility optimal stopping problems.

3.1 Operator J

For succinctness of notation, let $\lambda_i := \sum_{j \ne i} \lambda_{ij}$ denote the total intensity with which the volatility jumps from the state $\sigma_i$. Also, let us define $\eta^t_i := \inf\{s > 0 \mid \sigma(t+s) \ne \sigma(t) = \sigma_i\}$, which is an $\operatorname{Exp}(\lambda_i)$-distributed random variable representing the duration up to the first volatility change when started from the volatility state $\sigma_i$ at time t.
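Since $\eta^t_i$ is exponentially distributed and independent of the driving noise, expectations over the first jump time can be rewritten as deterministic time integrals against the density $\lambda e^{-\lambda u}$; this is what converts a payoff received at the jump into a running term with killing rate λ. A Monte Carlo sanity check of this reduction in the degenerate case of a frozen drift x, reward 1 received at the jump, and a deterministic time τ (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

x, lam, tau = 0.1, 2.0, 0.7

# Left-hand side: discounted reward at tau if it beats the jump,
# otherwise the reward collected at the jump time eta ~ Exp(lam)
eta = rng.exponential(1.0 / lam, size=2_000_000)
lhs = np.mean(np.where(tau < eta, np.exp(x * tau), np.exp(x * eta)))

# Right-hand side: the jump integrated out, discounting at rate x - lam,
#   e^{(x-lam) tau} + int_0^tau lam e^{(x-lam) u} du   (closed form)
rhs = np.exp((x - lam) * tau) + lam / (x - lam) * (np.exp((x - lam) * tau) - 1.0)

assert abs(lhs - rhs) < 1e-2
```

The same mechanism, with the frozen drift replaced by the diffusion $\hat X$ and the reward by a function f, underlies the operator introduced next.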
Furthermore, let us define an operator J acting on a bounded $f: [0,T] \times (l,h) \to \mathbb{R}$ by

$$(Jf)(t,x,\sigma_i) := \sup_{\tau \in \mathcal{T}_{T-t}} \tilde{\mathbb{E}}\Big[ e^{\int_0^\tau \hat X^{t,x,\sigma_i}_{t+s}\,ds}\, 1_{\{\tau < \eta^t_i\}} + e^{\int_0^{\eta^t_i} \hat X^{t,x,\sigma_i}_{t+s}\,ds}\, f\big(t + \eta^t_i,\, \hat X^{t,x,\sigma_i}_{t+\eta^t_i}\big)\, 1_{\{\tau \ge \eta^t_i\}} \Big] \qquad (3.1)$$

$$= \sup_{\tau \in \mathcal{T}_{T-t}} \tilde{\mathbb{E}}\Big[ e^{\int_0^\tau (\hat X^{t,x,\sigma_i}_{t+s} - \lambda_i)\,ds} + \int_0^\tau \lambda_i\, e^{\int_0^u (\hat X^{t,x,\sigma_i}_{t+s} - \lambda_i)\,ds}\, f\big(t+u,\, \hat X^{t,x,\sigma_i}_{t+u}\big)\,du \Big], \qquad (3.2)$$

where $\mathcal{T}_{T-t}$ denotes the set of stopping times less than or equal to $T-t$ with respect to the usual augmentation of the filtration generated by $\{\hat X^{t,x,\sigma_i}_{t+s}\}_{s\ge0}$ and $\{\sigma(t+s)\}_{s\ge0}$. To simplify notation, we also define an operator $J_i$ by $J_i f := (Jf)(\cdot,\cdot,\sigma_i)$. Intuitively, $J_i f$ represents a Markovian value function corresponding to optimal stopping before $t + \eta^t_i$, i.e. before the first volatility change after t, when, at time $t + \eta^t_i < T$, the payoff $f\big(t+\eta^t_i, \hat X^{t,x,\sigma_i}_{t+\eta^t_i}\big)$ is received provided stopping has not occurred yet.

Proposition 3.1 Let $f: [0,T] \times (l,h) \to \mathbb{R}$ be bounded. Then:

(i) Jf is bounded;
(ii) f increasing in the second variable x implies that Jf is increasing in the second variable x;
(iii) f decreasing in the first variable t implies that Jf is decreasing in the first variable t;
(iv) f increasing and convex in the second variable x implies that Jf is increasing and convex in the second variable x;
(v) J preserves order, i.e. $f_1 \le f_2$ implies $Jf_1 \le Jf_2$;
(vi) $Jf \ge 1$.

Proof All except claim (iv) are straightforward consequences of the representation (3.2). To prove (iv), we approximate the optimal stopping problem (3.2) by Bermudan options. Let i and n be fixed. We approximate the value function $J_i f$ by the value function $w^{(f)}_{i,n}$ of a corresponding Bermudan problem with stopping allowed only at the times $\{kT/2^n : k \in \{0, 1, \ldots, 2^n\}\}$. We define $w^{(f)}_{i,n}$ recursively as follows. First, $w^{(f)}_{i,n}(T,x) := 1$.
Then, starting with $k = 2^n$ and continuing recursively down to $k = 1$, we define

$$w^{(f)}_{i,n}(t,x) = \begin{cases} g\big(t, x, \frac{kT}{2^n}\big), & t \in \big(\frac{(k-1)T}{2^n}, \frac{kT}{2^n}\big), \\[4pt] g\big(\frac{(k-1)T}{2^n}, x, \frac{kT}{2^n}\big) \vee 1, & t = \frac{(k-1)T}{2^n}, \end{cases} \qquad (3.3)$$

where the function g is given by

$$g\Big(t, x, \frac{kT}{2^n}\Big) := \tilde{\mathbb{E}}\Big[ e^{\int_t^{kT/2^n} (\hat X^{t,x,\sigma_i}_s - \lambda_i)\,ds}\, w^{(f)}_{i,n}\Big(\frac{kT}{2^n},\, \hat X^{t,x,\sigma_i}_{kT/2^n}\Big) + \int_t^{kT/2^n} \lambda_i\, e^{\int_t^u (\hat X^{t,x,\sigma_i}_s - \lambda_i)\,ds}\, f\big(u, \hat X^{t,x,\sigma_i}_u\big)\,du \Big]. \qquad (3.4)$$

Next, we show by backward induction on k that $w^{(f)}_{i,n}$ is increasing and convex in the second variable x. Suppose that for some $k \in \{1, 2, \ldots, 2^n\}$ the function $w^{(f)}_{i,n}\big(\frac{kT}{2^n}, \cdot\big)$ is increasing and convex (the assumption clearly holds for the base step $k = 2^n$). Let $t \in \big[\frac{(k-1)T}{2^n}, \frac{kT}{2^n}\big)$. Then, since f is also increasing and convex in the second variable x, we have that the function $g\big(t, \cdot, \frac{kT}{2^n}\big)$, and so $w^{(f)}_{i,n}(t, \cdot)$, is convex by [14, Theorem 5.1]. Moreover, from (3.4) and [31, Theorem IX.3.7], it is clear that $w^{(f)}_{i,n}(t, \cdot)$ is increasing. Consequently, by backward induction, we obtain that the Bermudan value function $w^{(f)}_{i,n}$ is increasing and convex in the second variable.

Letting $n \uparrow \infty$, the Bermudan values $w^{(f)}_{i,n} \uparrow J_i f$ pointwise. As a result, $J_i f$ is increasing and convex in the second argument, since convexity and monotonicity are preserved when taking pointwise limits. □

The sets

$$C^f_i := \{(t,x) \in [0,T) \times (l,h) : (J_i f)(t,x) > 1\},$$
$$D^f_i := \{(t,x) \in [0,T] \times (l,h) : (J_i f)(t,x) = 1\} = \big([0,T] \times (l,h)\big) \setminus C^f_i, \qquad (3.5)$$

correspond to continuation and stopping sets for the stopping problem $J_i f$, as the next proposition shows.

Proposition 3.2 (Optimal stopping time) The stopping time

$$\tau^f_{\sigma_i}(t,x) = \inf\big\{u \in [0, T-t] : \big(t+u,\, \hat X^{t,x,\sigma_i}_{t+u}\big) \in D^f_i\big\} \qquad (3.6)$$

is optimal for the problem (3.2).

Proof A standard application of Theorem D.12 in [23]. □

Proposition 3.3 If a bounded $f: [0,T] \times (l,h) \to \mathbb{R}$ is decreasing in the first variable as well as increasing and convex in the second, then $J_i f$ is continuous.
Proof The argument is a straightforward extension of the proof of the third part of Theorem 3.10 in [15]; still, we include it for completeness. To simplify notation, write $u := J_i f$.

Firstly, let $r \in (l,h)$; we prove that there exists $K > 0$ such that, for every $t \in [0,T]$, the map $x \mapsto u(t,x)$ is K-Lipschitz continuous on $(l, r]$. To obtain a contradiction, assume that there is no such K. Then, by convexity of u in the second variable, there is a sequence $\{t_n\}_{n\ge0} \subset [0,T]$ such that the left-derivatives $\partial^-_x u(t_n, r) \uparrow \infty$. Hence, for $r' \in (r,h)$, the sequence $u(t_n, r') \to \infty$, which contradicts $u(t_n, r') \le u(0, r') < \infty$ for all $n \in \mathbb{N}$.

Now, it remains to show that u is continuous in time. Assume for a contradiction that the map $t \mapsto u(t, x_0)$ is not continuous at $t = t_0$ for some $x_0$. Since u is decreasing in time, $u(\cdot, x_0)$ has a negative jump at $t_0$. Next, we investigate the cases $u(t_0-, x_0) > u(t_0, x_0)$ and $u(t_0, x_0) > u(t_0+, x_0)$ separately.

Suppose $u(t_0-, x_0) > u(t_0, x_0)$. By Lipschitz continuity in the second variable, there exists $\delta > 0$ such that, writing $R = (t_0 - \delta, t_0) \times (x_0 - \delta, x_0 + \delta)$,

$$\inf_{(t,x) \in R} u(t,x) > u(t_0, x_0 + \delta). \qquad (3.7)$$

Thus $R \subseteq C^f_i$. Let $t \in (t_0 - \delta, t_0)$ and $\tau_R := \inf\{s \ge 0 : (t+s, \hat X_{t+s}) \notin R\}$, where we write $\hat X$ for $\hat X^{t,x_0,\sigma_i}$. Then, by the martingality in the continuation region,

$$\begin{aligned} u(t, x_0) &= \tilde{\mathbb{E}}\Big[ e^{\int_0^{\tau_R} (\hat X_{t+u} - \lambda_i)\,du}\, u\big(t+\tau_R,\, \hat X_{t+\tau_R}\big) + \int_0^{\tau_R} \lambda_i\, e^{\int_0^s (\hat X_{t+r} - \lambda_i)\,dr}\, f\big(t+s, \hat X_{t+s}\big)\,ds \Big] \\ &\le \tilde{\mathbb{E}}\Big[ e^{(t_0-t)(x_0+\delta)^+} u(t, x_0+\delta)\, 1_{\{t+\tau_R < t_0\}} + e^{(t_0-t)(x_0+\delta)^+} u(t_0, x_0+\delta)\, 1_{\{t+\tau_R = t_0\}} + \int_0^{t_0-t} \lambda_i\, e^{s(x_0+\delta)^+} \big|f\big(t+s, \hat X_{t+s}\big)\big|\,ds \Big] \\ &\le e^{(t_0-t)(x_0+\delta)^+} u(t, x_0+\delta)\, \tilde{\mathbb{P}}(t+\tau_R < t_0) + e^{(t_0-t)(x_0+\delta)^+} u(t_0, x_0+\delta) + \tilde{\mathbb{E}}\Big[\int_0^{t_0-t} \lambda_i\, e^{s(x_0+\delta)^+} \big|f\big(t+s, \hat X_{t+s}\big)\big|\,ds\Big] \\ &\to u(t_0, x_0+\delta) \end{aligned}$$

as $t \uparrow t_0$ (note that $\tilde{\mathbb{P}}(t+\tau_R < t_0) \to 0$ as $t \uparrow t_0$, since the time available to exit R through its sides shrinks to zero), contradicting (3.7).
The other case to consider is $u(t_0, x_0) > u(t_0+, x_0)$; we look into the situation $u(t_0, x_0) > u(t_0+, x_0) > 1$ first. The local Lipschitz continuity in the second variable and the decay in the first variable imply that there exist $\varepsilon > 0$ and $\delta > 0$ such that, writing $R = (t_0, t_0 + \varepsilon] \times [x_0 - \delta, x_0 + \delta]$,

$$u(t_0, x_0) > \sup_{(t,x) \in R} u(t,x) \ge \inf_{(t,x) \in R} u(t,x) > 1. \qquad (3.8)$$

Hence $R \subseteq C^f_i$ and, writing $\hat X$ for $\hat X^{t_0,x_0,\sigma_i}$ and $\tau_R := \inf\{s \ge 0 : (t_0+s, \hat X_{t_0+s}) \notin R\}$, we have

$$\begin{aligned} u(t_0, x_0) &= \tilde{\mathbb{E}}\Big[ e^{\int_0^{\tau_R} (\hat X_{t_0+u} - \lambda_i)\,du}\, u\big(t_0+\tau_R,\, \hat X_{t_0+\tau_R}\big) + \int_0^{\tau_R} \lambda_i\, e^{\int_0^s (\hat X_{t_0+r} - \lambda_i)\,dr}\, f\big(t_0+s, \hat X_{t_0+s}\big)\,ds \Big] \\ &\le \tilde{\mathbb{E}}\big[ e^{\varepsilon(x_0+\delta)^+} u(t_0, x_0+\delta)\, 1_{\{\tau_R < \varepsilon\}} \big] + \tilde{\mathbb{E}}\big[ e^{\varepsilon(x_0+\delta)^+} u(t_0+\varepsilon, x_0+\delta)\, 1_{\{\tau_R = \varepsilon\}} \big] + \tilde{\mathbb{E}}\Big[ \int_0^{\varepsilon} \lambda_i\, e^{s(x_0+\delta)^+} \big|f\big(t_0+s, \hat X_{t_0+s}\big)\big|\,ds \Big] \\ &\le e^{\varepsilon(x_0+\delta)^+} u(t_0, x_0+\delta)\, \tilde{\mathbb{P}}(\tau_R < \varepsilon) + e^{\varepsilon(x_0+\delta)^+} u(t_0+\varepsilon, x_0+\delta) + \lambda_i\, \varepsilon\, e^{\varepsilon(x_0+\delta)^+} \sup|f| \\ &\to u(t_0+, x_0+\delta) \end{aligned}$$

as $\varepsilon \downarrow 0$, which contradicts (3.8).

Lastly, suppose that $u(t_0, x_0) > u(t_0+, x_0) = 1$. By Lipschitz continuity in the second variable, there exists $\delta > 0$ such that

$$\inf_{x \in (x_0-\delta,\, x_0)} u(t_0, x) > u(t_0+, x_0) = 1. \qquad (3.9)$$

Consequently, $(t_0, T] \times (x_0 - \delta, x_0) \subseteq D^f_i$. Hence the process $\hat X^{t_0,\, x_0 - \delta/2,\, \sigma_i}$ hits the stopping region immediately, and so $(t_0, x_0 - \delta/2) \in D^f_i$, which contradicts (3.9). □

Proposition 3.4 (Optimal stopping boundary) Let $f: [0,T] \times (l,h) \to \mathbb{R}$ be bounded, decreasing in the first variable as well as increasing and convex in the second variable. Then the following hold.

(i) There exists a function $b^f_{\sigma_i}: [0,T) \to [l,h]$ that is increasing, right-continuous with left limits, and satisfies

$$C^f_i = \{(t,x) \in [0,T) \times (l,h) : x > b^f_{\sigma_i}(t)\}. \qquad (3.10)$$
(ii) The pair $(J_i f,\, b^f_{\sigma_i})$ satisfies the free-boundary problem

$$\begin{cases} \partial_t u(t,x) + \sigma_i\, \varphi(x,\sigma_i)\, \partial_x u(t,x) + \frac12\, \varphi(x,\sigma_i)^2\, \partial_{xx} u(t,x) & \\ \quad + (x - \lambda_i)\, u(t,x) + \lambda_i f(t,x) = 0, & \text{if } x > b^f_{\sigma_i}(t), \\[4pt] u(t,x) = 1, & \text{if } x \le b^f_{\sigma_i}(t) \text{ or } t = T. \end{cases} \qquad (3.11)$$

Proof (i) By Proposition 3.1 (iv), there exists a unique function $b^f_{\sigma_i}$ satisfying (3.10). Moreover, by Proposition 3.1 (iii), this boundary $b^f_{\sigma_i}$ is increasing. Hence, using Proposition 3.3, we also obtain that $b^f_{\sigma_i}$ is right-continuous with left limits.

(ii) The proof follows a well-known standard argument (e.g. see [23, Theorem 7.7 in Chapter 2]), thus we omit it. □

3.2 A Sequence of Approximating Problems

Let us define a sequence of stopping times $\{\xi^t_n\}_{n \ge 0}$ recursively by $\xi^t_0 := 0$,
For j ∈ N, we will write ξ instead of ξ as well as will use the notation η := ξ − ξ .Let τ ∈ T and consider j j j −1 T A(τ ) τ 0,x ,σ τ 0,x ,σ ξ 0,x ,σ i i 2 i ˆ ˆ ˆ 0,x ,σ X ds X ds X ds i s s s ˜ 0 0 0 ˆ := E e 1 + e 1 + e f (ξ , X )1 {τ<η } {η ≤τ<ξ } 2 {τ ≥ξ } 1 1 2 ξ 2 0,x ,σ 0,x ,σ τ τ i i ˆ ˆ X ds X ds s s ˜ ˜ 0 0 = E e 1 + E e 1 {τ<η } {η ≤τ<ξ } 1 1 2 0,x ,σ 2 i 0,x ,σ ˆ ˆ 0,x ,σ i X ds i X ,N s ˆ + e f (ξ , X )1 | F , (3.14) 2 {τ ≥ξ } ξ 2 η 2 1 123 Applied Mathematics & Optimization where {N } denotes the process counting the volatility jumps. The inner conditional t t ≥0 expectation in (3.14) satisfies 0,x ,σ 0,x ,σ τ ξ i 2 i 0,x ,σ ˆ ˆ 0,x ,σ ˆ i X ds X ds X ,N s s i ˜ ˆ 0 0 E e 1 + e f (ξ , X )1 | F {η ≤τ<ξ } 2 {τ ≥ξ } 1 2 ξ 2 η 0,x ,σ η 0,x ,σ τ i 1 i ˆ ˆ X ds X ds s s η 0 ˜ = e 1 E e 1 {η ≤τ } {τ<ξ } 1 2 0,x ,σ 2 ˆ 0,x ,σ X ds ˆ i s 0,x ,σ X ,N η ˆ + e f (ξ , X )1 | F 2 {τ ≥ξ } 2 η ξ 1 η 0,x ,σ 0,x ,σ i λ i τ˜ ˆ ij ˆ ˆ X ds η ,X ,σ X ds 1 j η +s s ˜ η 0 1 0 1 = e 1 E e 1 {η ≤τ } {˜ τ<η } 1 2 j =i X ds η +s 0 ˆ + e f (η + η , X )1 , (3.15) 1 2 η +η 1 2 {˜ τ ≥η } where τ˜ = τ − η in the case η ≤ τ ≤ T . Therefore, substituting (3.15)into(3.14) 1 1 and then taking a supremum over τ˜, we get τ 0,x ,σ X ds ˜ s A(τ ) ≤ E e 1 {τ<η } η 0,x ,σ 0,x ,σ i λ i τ˜ ˆ ij ˆ ˆ X ds η ,X ,σ X ds η +s s ˜ 1 η j 0 1 0 1 + e 1 sup E e 1 {τ ≥η } {˜ τ<η } 1 2 τ˜∈T T −T ∧η j =i 1 X ds η +s 0 ˆ + e f (η + η , X )1 1 2 η +η {˜ τ ≥η } 1 2 2 0,x ,σ η 0,x ,σ i 1 i λ ˆ ˆ ij X ds X ds 0,x ,σ ˜ s s i 0 0 = E e 1 + e 1 (J f )(η , X ) {τ<η } {τ ≥η } j 1 1 1 η j =i (3.16) Taking a supremum over τ in (3.16), we obtain (2) ij (J f )(0, x ) = sup A(τ ) ≤ J (J f ) (0, x ). (3.17) i j τ ∈T j =i It remains to establish the opposite inequality. Let τ ∈ T and define τˇ := τ1 + (η ∧ T + τ )1 , (3.18) {τ ≤η } 1 σ(η ) {τ>η } 1 1 1 0,x ,σ where τ := τ (η ∧ T , X ). Clearly, τˇ ∈ T . 
Then
$$
\begin{aligned}
(J^{(2)}_i f)(0,x) &\ge A(\check\tau) \\
&= \tilde{\mathbb E}\bigg[ e^{\int_0^\tau \hat X_s\,ds}\,\mathbf 1_{\{\tau<\eta_1\}} + e^{\int_0^{\eta_1} \hat X_s\,ds}\,\mathbf 1_{\{\tau\ge\eta_1\}} \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, \tilde{\mathbb E}^{\eta_1,\hat X_{\eta_1},\sigma_j}\Big[ e^{\int_0^{\tau_{\sigma_j}} \hat X_{\eta_1+s}\,ds}\,\mathbf 1_{\{\tau_{\sigma_j}<\eta_2\}} + e^{\int_0^{\eta_2} \hat X_{\eta_1+s}\,ds}\, f(\eta_1+\eta_2, \hat X_{\eta_1+\eta_2})\,\mathbf 1_{\{\tau_{\sigma_j}\ge\eta_2\}} \Big] \bigg] \\
&= \tilde{\mathbb E}\bigg[ e^{\int_0^\tau \hat X_s\,ds}\,\mathbf 1_{\{\tau<\eta_1\}} + e^{\int_0^{\eta_1} \hat X_s\,ds}\,\mathbf 1_{\{\tau\ge\eta_1\}}\, \Big( \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, J_j f \Big)(\eta_1, \hat X_{\eta_1}) \bigg],
\end{aligned}
$$
where Proposition 3.2 was used to obtain the last equality. Hence, by taking supremum over stopping times $\tau \in \mathcal T_T$, we get
$$
(J^{(2)}_i f)(0,x) \ge \Big(J_i\Big( \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, J_j f \Big)\Big)(0,x).
\tag{3.19}
$$
Finally, (3.17) and (3.19) taken together imply
$$
(J^{(2)}_i f)(0,x) = \Big(J_i\Big( \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, J_j f \Big)\Big)(0,x). \qquad\square
$$

Remark 3.6 In [24], the authors use the same approximation procedure for an optimal stopping problem with regime-switching volatility as in this article. Unfortunately, a mistake is made in equation (18) of [24], which wrecks the subsequent approximation procedure when the number of volatility states is greater than 2. The identity (18) therein should be replaced by (3.13).

3.3 Convergence to the Value Function

Proposition 3.7 (Properties of the approximating sequence)

(i) The sequence of functions $\{J^{(n)}_\sigma 1\}_{n\ge 0}$ is increasing, bounded from below by 1 and from above by $e^{hT}$.

(ii) Every $J^{(n)}_\sigma 1$ is decreasing in the first variable $t$ as well as increasing and convex in the second variable $x$.

(iii) The sequence of functions $J^{(n)}_\sigma 1 \nearrow v$ pointwise as $n \nearrow \infty$. Moreover, the approximation error
$$
\big\| v - J^{(n)} 1 \big\|_\infty \le e^{hT}\,\lambda T\,\frac{(\lambda T)^{n-1}}{(n-1)!} \longrightarrow 0 \quad \text{as } n\to\infty,
\tag{3.20}
$$
where $\lambda := \max\{\lambda_i : 1 \le i \le m\}$.

(iv) For every $n \in \mathbb N \cup \{0\}$,
$$
J^n_{\sigma_m} 1 \le J^{(n)}_\sigma 1 \le J^n_{\sigma_1} 1.
\tag{3.21}
$$

Proof (i) The statement that $\{J^{(n)}_\sigma 1\}_{n\ge 0}$ is increasing, bounded from below by 1 and from above by $e^{hT}$ is a direct consequence of the definition (3.12).
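The factorial decay in the error bound (3.20) rests on the Erlang tail estimate $\mathbb P(\zeta_n \le T) \le \lambda T (\lambda T)^{n-1}/(n-1)!$. As a quick numerical sanity check of that estimate — the rate and horizon below are illustrative values, not taken from the paper — the Erlang CDF can be computed from its standard series form:

```python
import math

def erlang_cdf(n, x):
    """P(Gamma(n, 1) <= x): probability that the n-th jump of a unit-rate
    Poisson process occurs before time x (series form of the Erlang CDF)."""
    return 1.0 - math.exp(-x) * sum(x**k / math.factorial(k) for k in range(n))

# Illustrative parameters (assumptions, not from the paper).
lam, T = 1.0, 2.0  # lam = max jump intensity, T = horizon

for n in range(1, 12):
    tail = erlang_cdf(n, lam * T)                       # P(zeta_n <= T)
    bound = lam * T * (lam * T) ** (n - 1) / math.factorial(n - 1)
    assert tail <= bound + 1e-12                        # the estimate in (3.20)
```

The bound is crude for small $n$ but its factorial decay eventually dominates, which is all the convergence argument needs.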
(ii) The claim that every $J^{(n)}_\sigma 1$ is decreasing in the first variable $t$ as well as increasing and convex in the second variable $x$ follows by a straightforward induction on $n$, using Proposition 3.1 (iii),(iv) and Proposition 3.5 at the induction step.

(iii) First, let $i \in \{1,\dots,m\}$ and note that, for any $n \in \mathbb N$,
$$ J^{(n)}_i 1 \le v_i. $$
Here the inequality holds by suboptimality, since $J^{(n)}_i 1$ corresponds to an expected payoff of a particular stopping time in the problem (2.4). Next, define
$$
U^{(i)}_n(t,x) := \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau \hat X^{t,x,\sigma_i}_{t+s}\,ds}\,\mathbf 1_{\{\tau<\xi^t_n\}} \Big].
$$
Then
$$
U^{(i)}_n(t,x) \le (J^{(n)}_i 1)(t,x) \le v_i(t,x) \le U^{(i)}_n(t,x) + e^{h(T-t)}\,\mathbb P(\xi^t_n \le T-t).
\tag{3.22}
$$
Since it is a standard fact that the $n$-th jump time, call it $\zeta_n$, of a Poisson process with jump intensity $\lambda := \max\{\lambda_i : 1\le i\le m\}$ follows the Erlang distribution, we have
$$
\mathbb P(\xi^t_n \le T-t) \le \mathbb P(\zeta_n \le T-t) = \int_0^{\lambda(T-t)} \frac{u^{n-1}}{(n-1)!}\, e^{-u}\,du \le \lambda T\,\frac{(\lambda T)^{n-1}}{(n-1)!}.
$$
Therefore, by (3.22),
$$
\big\| v - J^{(n)} 1 \big\|_\infty \le e^{hT}\,\lambda T\,\frac{(\lambda T)^{n-1}}{(n-1)!} \longrightarrow 0 \quad\text{as } n\to\infty.
$$

(iv) The string of inequalities (3.21) will be proved by induction. First, the base step is obvious. Now, suppose (3.21) holds for some $n \ge 0$. Hence, for any $i \in \{1,\dots,m\}$,
$$
J^n_{\sigma_m} 1 \le \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1 \le J^n_{\sigma_1} 1.
\tag{3.23}
$$
Let us fix $i \in \{1,\dots,m\}$. By Proposition 3.1 (iv), every function in (3.23) is convex in the spatial variable $x$, thus [14, Theorem 6.1] yields
$$
J^{n+1}_{\sigma_m} 1 \le J_i\Big( \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1 \Big) \le J^{n+1}_{\sigma_1} 1.
$$
As $i$ was arbitrary, we also have
$$
J^{n+1}_{\sigma_m} 1 \le J^{(n+1)}_\sigma 1 \le J^{n+1}_{\sigma_1} 1. \qquad\square
\tag{3.24}
$$

Remark 3.8 If instead of 1 we choose the constant function $e^{hT}$ to apply the operators $J^{(n)}_i$ to, then, following the same strategy as above, $\{J^{(n)}_i e^{hT}\}_{n\ge 0}$ is a decreasing sequence of functions with the limit $J^{(n)}_\sigma e^{hT} \searrow v$ pointwise as $n \nearrow \infty$.

Let $B_b([0,T]\times(l,h); \mathbb R)$ denote the set of bounded functions from $[0,T]\times(l,h)$ to $\mathbb R$ and define an operator $\mathbf J : B_b([0,T]\times(l,h);\mathbb R)^m \to B_b([0,T]\times(l,h);\mathbb R)^m$ by
$$
\mathbf J \begin{pmatrix} f_1 \\ \vdots \\ f_m \end{pmatrix} := \begin{pmatrix} J_1\big( \sum_{j\neq 1} \frac{\lambda_{1j}}{\lambda_1}\, f_j \big) \\ \vdots \\ J_m\big( \sum_{j\neq m} \frac{\lambda_{mj}}{\lambda_m}\, f_j \big) \end{pmatrix}.
$$

Proposition 3.9

(i) Let $f \in B_b([0,T]\times(l,h);\mathbb R)^m$. Then
$$
\lim_{n\to\infty} \mathbf J^n f = \begin{pmatrix} v_1 \\ \vdots \\ v_m \end{pmatrix}.
$$

(ii) The vector $(v_1,\dots,v_m)^{\mathrm{tr}}$ of value functions is a fixed point of the operator $\mathbf J$, i.e.
$$
\mathbf J \begin{pmatrix} v_1 \\ \vdots \\ v_m \end{pmatrix} = \begin{pmatrix} v_1 \\ \vdots \\ v_m \end{pmatrix}.
\tag{3.25}
$$

Proof (i) Observe that the argument in the proof of part (iii) of Proposition 3.7 also gives that $J^{(n)}_i g \to v_i$ as $n\to\infty$ for any bounded $g$. Hence to finish the proof it is enough to recall the relation (3.13) in Proposition 3.5.

(ii) Let $i \in \{1,\dots,m\}$. By Proposition 3.5,
$$
J^{(n+1)}_i 1 = J_i\Big( \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1 \Big).
\tag{3.26}
$$
By Proposition 3.7 (iii), for every $j \in \{1,\dots,m\}$, the sequence $J^{(n)}_j 1 \nearrow v_j$ as $n \nearrow \infty$, so, letting $n \nearrow \infty$ in (3.26), the monotone convergence theorem tells us that
$$
v_i = J_i\Big( \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, v_j \Big). \qquad\square
\tag{3.27}
$$

4 The Value Function and the Stopping Strategy

In this section, we show that the value function $v$ has attractive structural properties and identify an optimal strategy for the liquidation problem (2.7). The first passage time below a boundary, which is an increasing function of time and volatility, is proved to be optimal. Moreover, we provide a method to approximate the optimal stopping boundary by demonstrating that it is a monotone limit of stopping boundaries coming from the easier auxiliary problems of Sect. 3.

Theorem 4.1 (Properties of the value function)

(i) $v$ is decreasing in the first variable $t$ as well as increasing and convex in the second variable $x$.

(ii) $v_i$ is continuous for every $i \in \{1,\dots,m\}$.

(iii)
$$
\check v_{\sigma_m} \le v \le \check v_{\sigma_1},
\tag{4.1}
$$
where $\check v_\sigma : [0,T]\times(l,h) \to \mathbb R$ denotes the Markovian value function as in (2.7), but for a price process (2.1) with constant volatility $\sigma$.
Proof (i) Since, by Proposition 3.7 (ii), every $J^{(n)}_\sigma 1$ is decreasing in the first variable $t$, increasing and convex in the second variable $x$, these properties are also preserved in the pointwise limit $\lim_{n\to\infty} J^{(n)}_\sigma 1$, which is $v$ by Proposition 3.7 (iii).

(ii) Using part (i) above, the claim follows from Proposition 3.9 (ii), i.e. from the fact that $(v_1,\dots,v_m)^{\mathrm{tr}}$ is a fixed point of a regularising operator $\mathbf J$ in the sense of Proposition 3.3.

(iii) Letting $n\to\infty$ in (3.21), Proposition 3.7 (iii) gives us (4.1). □

For the optimal liquidation problem (2.4) with constant volatility $\sigma$, i.e. in the case $\sigma_1 = \dots = \sigma_m = \sigma$, it has been shown in [15] that an optimal liquidation strategy is characterised by an increasing continuous stopping boundary $\check b_\sigma : [0,T) \to [l,0]$ with $\check b_\sigma(T-) = 0$ such that the stopping time $\check\tau_\sigma = \inf\{t \ge 0 : \hat X_t \le \check b_\sigma(t)\}\wedge T$ is optimal. It turns out that the optimal liquidation strategy within our regime-switching volatility model shares some similarities with the constant volatility case, as the next theorem shows.

Theorem 4.2 (Optimal liquidation strategy)

(i) For every $i \in \{1,\dots,m\}$, there exists $b_{\sigma_i} : [0,T) \to [l,0]$ that is increasing, right-continuous with left limits, satisfies the equality $b_{\sigma_i}(T-) = 0$ and the identity
$$
C_i = \{(t,x) \in [0,T)\times(l,h) : x > b_{\sigma_i}(t)\},
\tag{4.2}
$$
where $C_i$ denotes the continuation region of $J_i u_i$ with $u_i := \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, v_j$. Moreover,
$$
\check b_{\sigma_1} \le b_{\sigma_i} \le \check b_{\sigma_m}
$$
for any $i \in \{1,\dots,m\}$.

(ii) The stopping strategy
$$
\tau^* := \inf\{ s \in [0, T-t) : \hat X^{t,x,\sigma}_{t+s} \le b_{\sigma(t+s)}(t+s) \}\wedge(T-t)
$$
is optimal for the optimal selling problem (2.7).

(iii) For $i \in \{1,\dots,m\}$, the boundaries
$$
b^{(n)}_{\sigma_i} \longrightarrow b_{\sigma_i} \quad\text{pointwise as } n \nearrow \infty,
$$
where $b^{(n)}_{\sigma_i}$ denotes the stopping boundary of the problem $J_i g^{(n)}_i$ with $g^{(n)}_i := \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1$.

(iv) The pairs $(v_1, b_{\sigma_1}), (v_2, b_{\sigma_2}), \dots, (v_m, b_{\sigma_m})$ satisfy a coupled system of $m$ free-boundary problems with each being
$$
\begin{cases}
\partial_t v_i(t,x) + \sigma_i\,\varphi(x,\sigma_i)\,\partial_x v_i(t,x) + \tfrac{1}{2}\varphi(x,\sigma_i)^2\,\partial_{xx} v_i(t,x) \\
\quad\; +\,(x-\lambda_i)\,v_i(t,x) + \sum_{j\neq i} \lambda_{ij}\, v_j(t,x) = 0, & \text{if } x > b_{\sigma_i}(t), \\
v_i(t,x) = 1, & \text{if } x \le b_{\sigma_i}(t) \text{ or } t = T,
\end{cases}
\tag{4.3}
$$
where $i \in \{1,\dots,m\}$.

Proof (i) The existence of $b_{\sigma_i} : [0,T) \to [l,h]$ that is increasing, right-continuous with left limits, and satisfies (4.2) follows from the fixed-point property (3.25) and Theorem 4.1 (i),(ii). Since the range of $\check b_{\sigma_1}, \check b_{\sigma_m}$ is $[l,0]$ and $\check b_{\sigma_1}(T-) = \check b_{\sigma_m}(T-) = 0$, using Theorem 4.1 (iii), we also conclude that $\check b_{\sigma_1} \le b_{\sigma_i} \le \check b_{\sigma_m}$ and that $b_{\sigma_i}(T-) = 0$ for every $i$.

(ii) Let us define $D := \{(t,x,\sigma) \in [0,T]\times(l,h)\times\{\sigma_1,\dots,\sigma_m\} : v(t,x,\sigma) = 1\}$. Then $\tau_D := \inf\{s \ge 0 : (t+s, \hat X^{t,x,\sigma}_{t+s}, \sigma(t+s)) \in D\}$ is optimal for the problem (2.7) by [29, Corollary 2.9]. Lastly, from the fixed-point property (3.25) and Proposition 3.2, we conclude that $\tau_D = \tau^*$, which finishes the proof.

(iii) Since $J^{(n)}_i 1 \nearrow v_i$ as $n \nearrow \infty$ and $J^{(n)}_i 1 \ge 1$ for all $n$, we have that $\lim_{n\nearrow\infty} b^{(n)}_{\sigma_i} \ge b_{\sigma_i}$. Also, if $x < \lim_{n\nearrow\infty} b^{(n)}_{\sigma_i}(t)$, then $J^{(n)}_i 1(t,x) = 1$ for all $n \in \mathbb N$ and so $v_i(t,x) = \lim_{n\nearrow\infty} J^{(n)}_i 1(t,x) = 1$. Hence, $\lim_{n\nearrow\infty} b^{(n)}_{\sigma_i} \le b_{\sigma_i}$. As a result, $\lim_{n\nearrow\infty} b^{(n)}_{\sigma_i} = b_{\sigma_i}$.

(iv) The free-boundary problem is a consequence of Proposition 3.4 (ii) and the fixed-point property (3.25). □

Remark 4.3 Establishing uniqueness of a classical solution to a time non-homogeneous free-boundary problem is typically a technical task (see [27] for an example). Not being central to the mission of the paper, the uniqueness of solution to the free-boundary problems (4.3) and (3.11) has not been pursued.
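The approximation scheme behind Propositions 3.7 and 3.9 — start from the constant function 1 and apply a Bellman-type operator repeatedly, obtaining a monotone sequence whose limit is a fixed point — can be illustrated on a toy discrete-time, finite-state stopping problem. The transition matrix, stopping payoffs, and discount factor below are invented for illustration; this toy operator is not the operator $\mathbf J$ of the paper, only a minimal analogue of the same fixed-point mechanism.

```python
def bellman(v, payoff, P, beta):
    """One step of a stopping-problem Bellman operator:
    (Jv)(x) = max(stop payoff at x, beta * expected continuation value)."""
    m = len(v)
    return [max(payoff[x], beta * sum(P[x][y] * v[y] for y in range(m)))
            for x in range(m)]

P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.3, 0.6]]     # invented transition matrix
payoff = [1.0, 1.2, 1.5]  # invented stopping payoffs (all >= 1)
beta = 0.99               # invented discount factor

v = [1.0] * 3             # start the iteration from the constant function 1
for _ in range(3000):
    v_new = bellman(v, payoff, P, beta)
    assert all(a <= b + 1e-12 for a, b in zip(v, v_new))  # monotone increase
    v = v_new

# v is now (numerically) a fixed point of the operator, mirroring (3.25)
residual = max(abs(a - b) for a, b in zip(v, bellman(v, payoff, P, beta)))
assert residual < 1e-8
```

As in the paper's scheme, monotonicity of the operator plus a monotone starting point gives an increasing sequence, and the limit solves the fixed-point equation.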
Remark 4.4 (A possible alternative approach) It is worth pointing out that a potential alternative approach for the study of the value function and the optimal strategy is to directly analyse the variational inequality formulation (e.g., see [30, Sect. 5.2]) arising from the optimal stopping problem (2.7). The coupled system of variational inequalities would need to be studied using weak solution techniques from PDE theory (e.g., see [6,30]) to obtain the desired regularity and structural properties of the value function and the stopping region. Though the author is unaware of any work studying exactly this type of free-boundary problem directly in detail, there are available theoretical results [7] that include existence and uniqueness of viscosity solutions, and a comparison principle, for the pricing of American options in regime-switching models. Also, under some conditions, convergence of stable, monotone, and consistent approximation schemes to the value function is shown. Suitable numerical PDE methods and their pros and cons for such a coupled system are discussed in [22]. With this alternative route in mind (provided all the needed technical results can be established), our approach has clear benefits: avoiding many analytical complications that arise in the study of the full system (compare [7]) and yielding a very intuitive monotone approximation scheme for the value function and the stopping boundary.

For further study of the problem in this section, we will make a structural assumption about the Markov chain modelling the volatility.

Assumption 4.5 The Markov chain $\sigma$ is skip-free, i.e. for all $i \in \{1,\dots,m\}$, $\lambda_{ij} = 0$ if $j \notin \{i-1, i, i+1\}$.

As many popular financial stochastic volatility models have continuous trajectories, and a skip-free Markov chain is a natural discrete state-space approximation of a continuous process, Assumption 4.5 does not appear to be a severe restriction.
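Under Assumption 4.5 the intensity matrix of $\sigma$ is tridiagonal (a birth-death chain on the volatility levels). A minimal sketch of building such a skip-free generator and checking the defining property — the rates below are invented for illustration:

```python
def skip_free_generator(up_rates, down_rates):
    """Build the intensity matrix Q of a skip-free (birth-death) Markov chain:
    lambda_{i,i+1} = up_rates[i], lambda_{i+1,i} = down_rates[i], all other
    off-diagonal intensities zero, and each row summing to zero."""
    m = len(up_rates) + 1
    Q = [[0.0] * m for _ in range(m)]
    for i in range(m - 1):
        Q[i][i + 1] = up_rates[i]
        Q[i + 1][i] = down_rates[i]
    for i in range(m):
        Q[i][i] = -sum(Q[i][j] for j in range(m) if j != i)
    return Q

Q = skip_free_generator(up_rates=[0.5, 0.8], down_rates=[0.3, 0.6])  # m = 3
# Assumption 4.5: lambda_ij = 0 whenever j is not in {i-1, i, i+1}
assert all(Q[i][j] == 0.0 for i in range(3) for j in range(3) if abs(i - j) > 1)
assert all(abs(sum(row)) < 1e-12 for row in Q)  # valid generator rows
```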
Lemma 4.6 Let $\delta > 0$ and let $g : (l,h)\times[0,\infty) \to [0,\infty)$ be increasing and convex in the first variable as well as decreasing in the second. Then $u : (l,h)\times\{\sigma_1,\dots,\sigma_m\} \to \mathbb R$ defined by
$$
u(x,\sigma) := \tilde{\mathbb E}\Big[ e^{\int_0^\delta \hat X^{x,\sigma}_u\,du}\, g\big(\hat X^{x,\sigma}_\delta,\, \sigma(\delta)\big) \Big]
\tag{4.4}
$$
is increasing and convex in the first variable as well as decreasing in the second.

Proof We will prove the claim using a coupling argument. Let $(\tilde\Omega, \tilde{\mathcal F}, \tilde{\mathbb P})$ be a probability triplet supporting a Brownian motion $B$ and two volatility processes $\sigma^1, \sigma^2$ with the state space and transition intensities as in (2.1). In addition, we assume that $B$ is independent of $(\sigma^1,\sigma^2)$, that the starting values satisfy $\sigma^1(0) = \sigma_i \le \sigma_j = \sigma^2(0)$, and that $\sigma^1(t) \le \sigma^2(t)$ for all $t \ge 0$. Also, let $\hat X^1$ and $\hat X^2$ denote the solutions to (2.6) when $\sigma$ is replaced by $\sigma^1$ and $\sigma^2$, respectively.

Let us fix an arbitrary $\omega_2 \in \tilde\Omega$. Since $B$ is independent of $\sigma^1$,
$$
\tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^{1,x})_u\,du}\, g\big((\hat X^{1,x})_\delta,\, \sigma^1(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1} \Big](\omega_2) = \tilde{\mathbb E}\Big[ e^{\int_0^\delta (\tilde X^{1,x})_u\,du}\, g\big((\tilde X^{1,x})_\delta,\, \sigma^1(\delta,\omega_2)\big) \Big],
\tag{4.5}
$$
where $\tilde X^1$ denotes the process $\hat X^1$ with the volatility process $\sigma^1$ replaced by the deterministic function $\sigma^1(\cdot,\omega_2)$. Furthermore, the right-hand (and so the left-hand) side in (4.5) as a function of $x$ is increasing by [31, Theorem IX.3.7] as well as convex by [14, Theorem 5.1]. Hence
$$
u(\cdot,\sigma_i) : x \mapsto \tilde{\mathbb E}\Big[ \tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^{1,x})_u\,du}\, g\big((\hat X^{1,x})_\delta,\, \sigma^1(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1} \Big] \Big]
$$
is increasing and convex. Next, we observe that
$$
\begin{aligned}
\tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^{1,x})_u\,du}\, g\big((\hat X^{1,x})_\delta,\, \sigma^1(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1,\sigma^2} \Big](\omega_2)
&\ge \tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^{2,x})_u\,du}\, g\big((\hat X^{2,x})_\delta,\, \sigma^1(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1,\sigma^2} \Big](\omega_2) \\
&\ge \tilde{\mathbb E}\Big[ e^{\int_0^\delta (\hat X^{2,x})_u\,du}\, g\big((\hat X^{2,x})_\delta,\, \sigma^2(\delta)\big) \,\Big|\, \mathcal F^{\sigma^1,\sigma^2} \Big](\omega_2).
\end{aligned}
\tag{4.6}
$$
In the above, having in mind that the conditional expectations can be rewritten as ordinary expectations similarly as in (4.5), the first inequality follows by [14, Theorem 6.1], the second by the decay of $g$ in the second variable.
Integrating both sides of (4.6) over all possible $\omega_2 \in \tilde\Omega$ with respect to $d\tilde{\mathbb P}$, we get that $u(x,\sigma_i) \ge u(x,\sigma_j)$. Thus we can conclude that $u$ is increasing and convex in the first variable as well as decreasing in the second. □

Theorem 4.7 (Ordering in volatility)

(i) $v$ is decreasing in the volatility variable, i.e. $v_{\sigma_1} \ge v_{\sigma_2} \ge \dots \ge v_{\sigma_m}$.

(ii) The boundaries are ordered in volatility as $b_{\sigma_1} \le b_{\sigma_2} \le \dots \le b_{\sigma_m}$.

Proof (i) We will prove the claim by approximating the value function $v$ by a sequence of value functions $\{v_n\}_{n\ge 0}$ of corresponding Bermudan optimal stopping problems. Let $v_n$ denote the value function as in (2.7), but when stopping is allowed only at the times $\big\{ \tfrac{kT}{2^n} : k \in \{0,1,\dots,2^n\} \big\}$.

Let us fix $n \in \mathbb N$. We will show that, for any given $k \in \{0,\dots,2^n\}$ and any $t \in [\tfrac{kT}{2^n}, T]$, the value function $v_n(t,x,\sigma)$ is increasing and convex in $x$ as well as decreasing in $\sigma$ (note that here $\sigma$ denotes the initial value of the process $t \mapsto \sigma(t)$). The proof is by backwards induction from $k = 2^n$ down to $k = 0$. Since $v_n(T,\cdot,\cdot) = 1$, the base step $k = 2^n$ holds trivially. Now, suppose that, for some given $k \in \{0,\dots,2^n\}$, the value $v_n(t,x,\sigma)$ is increasing and convex in $x$ as well as decreasing in $\sigma$ for any $t \in [\tfrac{kT}{2^n}, T]$. Then, Lemma 4.6 tells us that, for any fixed $t \in [\tfrac{(k-1)T}{2^n}, \tfrac{kT}{2^n})$,
$$
f(t,x,\sigma) := \tilde{\mathbb E}\Big[ e^{\int_t^{kT/2^n} \hat X^{t,x,\sigma}_u\,du}\, v_n\Big( \tfrac{kT}{2^n},\, \hat X^{t,x,\sigma}_{kT/2^n},\, \sigma\big(\tfrac{kT}{2^n}\big) \Big) \Big]
$$
is increasing and convex in $x$ as well as decreasing in $\sigma$. Consequently, since
$$
v_n(t,x,\sigma) = \begin{cases} f(t,x,\sigma), & t \in \big( \tfrac{(k-1)T}{2^n}, \tfrac{kT}{2^n} \big), \\[2pt] f(t,x,\sigma) \vee 1, & t = \tfrac{(k-1)T}{2^n}, \end{cases}
\tag{4.7}
$$
the value $v_n(t,x,\sigma)$ is increasing and convex in $x$ as well as decreasing in $\sigma$ for any fixed $t \in [\tfrac{(k-1)T}{2^n}, T]$. Hence, by backwards induction, $v_n$ is increasing and convex in the second argument $x$ as well as decreasing in the third argument $\sigma$. Finally, since $v_n \to v$ pointwise as $n \to \infty$, we can conclude that the value function $v$ is decreasing in $\sigma$.
(ii) From the proof of Theorem 4.2 (ii), the claim is a direct consequence of part (i) above. □

Remark 4.8 1. The value function is decreasing in the initial volatility (Theorem 4.7 (i)) also when the volatility is any continuous time-homogeneous positive Markov process independent of the driving Brownian motion $W$. The assertion is justified by inspection of the proof of Lemma 4.6, in which only the absence of crossing between the volatility trajectories was important, not the Markov chain structure.

2. Though there are no grounds to believe that any of the boundaries $b_{\sigma_1},\dots,b_{\sigma_m}$ is discontinuous, proving their continuity, except for the lowest one, is beyond the power of customary techniques. Continuity of the lowest boundary can be proved similarly as in the proof of part 4 of [15, Theorem 3.10], exploiting the ordering of the boundaries. The stumbling block for proving continuity of the upper boundaries is that, at a downward volatility jump time, the value function has a positive jump whose magnitude is difficult to quantify.

5 Generalisation to an Arbitrary Prior

In this section, we generalise most results of the earlier parts to the general prior case. In what follows, the prior $\mu$ of the drift is no longer a two-point but an arbitrary probability distribution.

5.1 Two-Dimensional Characterisation of the Posterior Distribution

Let us first think a bit more abstractly to develop intuition for the arbitrary prior case. According to the Kushner–Stratonovich stochastic partial differential equation (SPDE) for the posterior distribution (see [8, Sect. 3.2]), if we take the innovation process driving the SPDE and the volatility as the available information sources, then the posterior distribution is a measure-valued Markov process. Unfortunately, no applicable general methods exist to solve optimal stopping problems for measure-valued stochastic processes.
If only we were able to characterise the posterior distribution process by a finite-dimensional Markov process (with respect to the filtration generated by the innovation and the volatility processes), then we should manage to reduce our optimal stopping problem with a stochastic measure-valued underlying to an optimal stopping problem with a finite-dimensional Markovian underlying. Mercifully, this wishful thinking turns out to be possible in reality, as we shall soon see.

Unlike in the problem with constant volatility studied in [15], when the volatility is varying, the pair consisting of the elapsed time $t$ and the posterior mean $\hat X$ is not sufficient (with the exception of the two-point prior case studied before) to characterise the posterior distribution $\mu_t$ of $X$ given $\mathcal F^{S,\sigma}_t$. Hence we need some additional information to describe the posterior distribution. Quite surprisingly, all this needed additional information can be captured in a single additional observable statistic, which we will name the 'effective learning time'.

We start the development by first introducing some useful notation. Define $Y^{(i)}_t := Xt + \sigma_i W_t$ and let $\mu^{(i)}_{t,y}$ denote the posterior distribution of $X$ at time $t$ given $Y^{(i)}_t = y$. It needs to be mentioned that, for any given prior $\mu$, the distributions of $X$ given $\mathcal F^{Y^{(i)}}_t$ and $X$ given $Y^{(i)}_t$ are equal (see Proposition 3.1 in [15]), which justifies our conditioning only on the last value $Y^{(i)}_t$. Also, recall that $l = \inf \operatorname{supp}(\mu)$, $h = \sup \operatorname{supp}(\mu)$.

The next lemma provides the key insight allowing us to characterise the posterior distribution by only two parameters.

Lemma 5.1 Let $\sigma_2 \ge \sigma_1 > 0$. Then
$$
\{ \mu^{(1)}_{t,y} : t > 0,\ y \in \mathbb R \} = \{ \mu^{(2)}_{t,y} : t > 0,\ y \in \mathbb R \},
$$
i.e. the sets of possible conditional distributions of $X$ in both cases are the same.

Proof Let $t > 0$, $y \in \mathbb R$. By the standard filtering theory (a generalised Bayes' rule),
$$
\mu^{(i)}_{t,y}(du) := \frac{ e^{\frac{2uy - u^2 t}{2\sigma_i^2}}\, \mu(du) }{ \int e^{\frac{2uy - u^2 t}{2\sigma_i^2}}\, \mu(du) }.
\tag{5.1}
$$
Then taking $r = \frac{\sigma_1^2}{\sigma_2^2}\, t$ and $y' = \frac{\sigma_1^2}{\sigma_2^2}\, y$, we have that
$$
\mu^{(2)}_{t,y}(du) = \mu^{(1)}_{r,y'}(du). \qquad\square
$$

From Lemma 5.1 and [15, Lemma 3.3] we obtain the following important corollary, telling us that, having fixed a prior, any possible posterior distribution can be fully characterised by only two parameters.

Corollary 5.2 Let $t > 0$. Then, for any posterior distribution $\mu_t(\cdot) = \mathbb P(X \in \cdot \mid \mathcal F^{S,\sigma}_t)(\omega)$, there exists $(r,x) \in (0,T]\times(l,h)$ such that $\mu_t = \mu^{(1)}_{r,\,y_1(r,x)}$, where $y_1(r,x)$ is defined as the unique value satisfying $\mathbb E[X \mid Y^{(1)}_r = y_1(r,x)] = x$. In particular, we can take
$$
r = \int_0^t \frac{\sigma_1^2}{\sigma(u)^2(\omega)}\,du \quad\text{and}\quad y_1(r,x) = \int_0^t \frac{\sigma_1^2}{\sigma(u)^2(\omega)}\,dY_u(\omega),
$$
where $Y_u = \log(S_u) + \frac{1}{2}\int_0^u \sigma(b)^2\,db$.

When the volatility varies, so does the speed of learning about the drift. The corollary tells us that we can interpret $r$ as the effective learning time measured under the constant volatility $\sigma_1$. The intuition for the name is that, even though the volatility is varying over time, the same posterior distribution $\mu_t$ can also be obtained in a constant volatility model with the constant volatility $\sigma_1$, just at a different time $r$ and at a different value of the price $S$.

Remark 5.3 It is worth remarking that Corollary 5.2 also holds for any reasonable positive volatility process. Indeed, using the Kallianpur–Striebel formula with time-dependent volatility (see Theorem 2.9 on page 39 of [8]), the proof of Lemma 5.1 equally applies for an arbitrary positive time-dependent volatility and immediately yields the result of the corollary.

Next, we make a convenient technical assumption about the prior distribution $\mu$.

Assumption 5.4 The prior distribution $\mu$ is such that

1. $\int e^{au}\,\mu(du) < \infty$ for some $a > 0$,
2. $\psi(\cdot,\cdot) : [0,T]\times(l,h) \to \mathbb R$ defined by
$$
\psi(t,x) := \frac{1}{\sigma_1}\Big( \mathbb E\big[ X^2 \mid Y^{(1)}_t = y_1(t,x) \big] - x^2 \Big) = \frac{1}{\sigma_1}\operatorname{Var}\big( X \mid Y^{(1)}_t = y_1(t,x) \big)
$$
is a bounded function that is Lipschitz continuous in the second variable.
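Formula (5.1) and the rescaling in the proof of Lemma 5.1 can be checked numerically on a discrete prior: the $\sigma_2$-posterior at $(t,y)$ coincides with the $\sigma_1$-posterior at the rescaled point $(r, y') = \big(\tfrac{\sigma_1^2}{\sigma_2^2}t, \tfrac{\sigma_1^2}{\sigma_2^2}y\big)$. A sketch with an invented three-point prior and invented parameter values:

```python
import math

def posterior(prior, t, y, sigma):
    """Posterior weights of a discrete prior on the drift, via the Bayes
    formula (5.1): weight(u) proportional to exp((2uy - u^2 t)/(2 sigma^2))."""
    w = {u: math.exp((2*u*y - u*u*t) / (2*sigma**2)) * p for u, p in prior.items()}
    z = sum(w.values())
    return {u: wu / z for u, wu in w.items()}

prior = {-0.1: 0.2, 0.0: 0.5, 0.1: 0.3}   # invented three-point prior on X
s1, s2, t, y = 0.2, 0.4, 1.5, 0.07        # invented volatilities and observation

post2 = posterior(prior, t, y, s2)
c = (s1 / s2) ** 2                        # rescaling factor sigma_1^2 / sigma_2^2
post1 = posterior(prior, c * t, c * y, s1)
assert all(abs(post1[u] - post2[u]) < 1e-10 for u in prior)  # Lemma 5.1
```

The two exponents agree term by term after the rescaling, so the two posteriors coincide, exactly as the lemma asserts.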
In particular, all compactly supported distributions as well as the normal distribution are known to satisfy Assumption 5.4 (see [15]), so it is an inconsequential restriction for practical applications.

5.2 Markovian Embedding

Similarly as in the two-point prior case, we will study the optimal stopping problem (2.5) by embedding it into a Markovian framework. With Corollary 5.2 telling us that the effective learning time $r$ and the posterior mean $x$ fully characterise the posterior distribution, we can now embed the optimal stopping problem (2.5) into the standard Markovian framework by defining the Markovian value function
$$
v(t,x,r,\sigma) := \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau \hat X^{t,x,r,\sigma}_{t+s}\,ds} \Big], \quad (t,x,r,\sigma) \in [0,T]\times(l,h)\times[0,T]\times\{\sigma_1,\dots,\sigma_m\}.
\tag{5.2}
$$
Here the process $\hat X = \hat X^{t,x,r,\sigma}$ evolves according to
$$
\begin{cases}
d\hat X_{t+s} = \sigma_1\,\psi(r_{t+s}, \hat X_{t+s})\,ds + \dfrac{\sigma_1}{\sigma(t+s)}\,\psi(r_{t+s}, \hat X_{t+s})\,d\hat B_{t+s}, & s \ge 0, \\[6pt]
dr_{t+s} = \dfrac{\sigma_1^2}{\sigma(t+s)^2}\,ds, & s \ge 0, \\[6pt]
\hat X_t = x, \quad r_t = r, \quad \sigma(t) = \sigma;
\end{cases}
\tag{5.3}
$$
the given dynamics of $\hat X$ is a consequence of Corollary 5.2 and the evolution equation of $\hat X$ in the constant volatility case (see the equation (3.9) in [15]). Also, in (5.3), the process $\hat B_t = \int_0^t \sigma(u)\,du + W_t$ is a $\tilde{\mathbb P}$-Brownian motion. Lastly, in (5.2), $\mathcal T_{T-t}$ denotes the set of stopping times less than or equal to $T-t$ with respect to the usual augmentation of the filtration generated by $\{\hat X^{t,x,r,\sigma}_{t+s}\}_{s\ge 0}$ and $\{\sigma(t+s)\}_{s\ge 0}$.

Remark 5.5 Let us note that, in light of the observations of Sect. 5.1, if the regime-switching volatility were replaced by a different stochastic volatility process, the same Markovian embedding (5.2) could still be useful for the study of the altered problem.

5.3 Outline of the Approximation Procedure and Main Results

Under an arbitrary prior, the approximation procedure of Sect. 3 can also be applied; however, the operators $J$ and $J^{(n)}$ need to be redefined in a suitable way.
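Before redefining the operators, note that the state $r$ in (5.3) is just the pathwise integral $r_t = \int_0^t \sigma_1^2/\sigma(u)^2\,du$ from Corollary 5.2. A small sketch accumulating the effective learning time along a piecewise-constant volatility path (the path and the rates are invented for illustration):

```python
def effective_learning_time(t, jump_times, sigmas, sigma1):
    """r_t = integral over [0, t] of sigma1^2 / sigma(u)^2 du for a
    piecewise-constant path: sigma(u) = sigmas[k] on [jump_times[k], jump_times[k+1])."""
    r = 0.0
    ends = jump_times[1:] + [t]
    for start, end, s in zip(jump_times, ends, sigmas):
        if start >= t:
            break
        r += (min(end, t) - start) * (sigma1 / s) ** 2
    return r

sigma1 = 0.2
# Invented path: sigma = 0.2 on [0, 1), 0.4 on [1, 2.5), 0.2 on [2.5, 3.0).
r = effective_learning_time(3.0, [0.0, 1.0, 2.5], [0.2, 0.4, 0.2], sigma1)
# Learning runs at unit speed when sigma = sigma1 and at speed (0.2/0.4)^2 = 1/4
# on the high-volatility stretch: r = 1.0 + 1.5 * 0.25 + 0.5 = 1.875.
assert abs(r - 1.875) < 1e-12
assert r <= 3.0  # sigma(u) >= sigma1, so learning never outpaces calendar time
```

This matches the intuition behind the name: high volatility slows the accumulation of information about the drift.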
We redefine the operator $J$ to act on a function $f : [0,T]\times(l,h)\times[0,T] \to \mathbb R$ as
$$
\begin{aligned}
(Jf)(t,x,r,\sigma_i) &:= \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau \hat X^{t,x,r,\sigma_i}_{t+s}\,ds}\,\mathbf 1_{\{\tau<\eta^t_i\}} + e^{\int_0^{\eta^t_i} \hat X^{t,x,r,\sigma_i}_{t+s}\,ds}\, f\big(t+\eta^t_i,\, \hat X^{t,x,r,\sigma_i}_{t+\eta^t_i},\, r^{t,r}_{t+\eta^t_i}\big)\,\mathbf 1_{\{\tau\ge\eta^t_i\}} \Big] \\
&= \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau (\hat X^{t,x,r,\sigma_i}_{t+s} - \lambda_i)\,ds} + \int_0^\tau \lambda_i\, e^{\int_0^u (\hat X^{t,x,r,\sigma_i}_{t+s} - \lambda_i)\,ds}\, f\big(t+u,\, \hat X^{t,x,r,\sigma_i}_{t+u},\, r^{t,r}_{t+u}\big)\,du \Big]
\end{aligned}
\tag{5.4}
$$
and then the operator $J_i$ as $J_i f := (Jf)(\cdot,\cdot,\cdot,\sigma_i)$. Intuitively, $J_i f$ represents a Markovian value function corresponding to optimal stopping before $t+\eta^t_i$, i.e. before the first volatility change after $t$, when, at time $t+\eta^t_i < T$, the payoff $f\big(t+\eta^t_i,\, \hat X^{t,x,r,\sigma_i}_{t+\eta^t_i},\, r^{t,r}_{t+\eta^t_i}\big)$ is received, provided stopping has not occurred yet. The underlying process in the optimal stopping problem $J_i f$ is the diffusion $(t, \hat X_t, r_t)$.

The majority of the results in Sects. 3 and 4 generalise nicely to the arbitrary prior case. Proposition 3.1 extends word by word; the proofs are analogous, just the second property of $\psi$ from [15, Proposition 3.6] needs to be used for Proposition 3.1 (iv). In addition, we have that $f$ decreasing in $r$ implies that $J_i f$ is decreasing in $r$, which is proved by a Bermudan approximation argument as in Proposition 3.1 (iv) using the time decay of $\psi$ from [15, Proposition 3.6]. As a result, for $f : [0,T]\times(l,h)\times[0,T] \to \mathbb R$ that is decreasing in the first and third variables as well as increasing (though not too fast as $x \nearrow \infty$) and convex in the second, there exists a function (a stopping boundary) $b^f_{\sigma_i} : [0,T)\times[0,T) \to [l,0]$ that is increasing in both variables and such that the continuation region $C := \{(t,x,r) \in [0,T)\times(l,h)\times[0,T) : (J_i f)(t,x,r) > 1\}$ (optimality shown as in Proposition 3.2) satisfies
$$
C = \{(t,x,r) \in [0,T)\times(l,h)\times[0,T) : x > b^f_{\sigma_i}(t,r)\}.
$$
In addition, each pair $(J_i f, b^f_{\sigma_i})$ solves the free-boundary problem
$$
\begin{cases}
\partial_t u(t,x,r) + \dfrac{\sigma_1^2}{\sigma_i^2}\,\partial_r u(t,x,r) + \sigma_1\,\psi(r,x)\,\partial_x u(t,x,r) + \dfrac{\sigma_1^2}{2\sigma_i^2}\,\psi(r,x)^2\,\partial_{xx} u(t,x,r) \\
\quad\; +\,(x-\lambda_i)\,u(t,x,r) + \lambda_i f(t,x,r) = 0, & \text{if } x > b^f_{\sigma_i}(t,r), \\
u(t,x,r) = 1, & \text{if } x \le b^f_{\sigma_i}(t,r) \text{ or } t = T.
\end{cases}
$$
With the operator $J^{(n)}$ redefined as
$$
(J^{(n)} f)(t,x,r,\sigma) := \sup_{\tau\in\mathcal T_{T-t}} \tilde{\mathbb E}\Big[ e^{\int_0^\tau \hat X^{t,x,r,\sigma}_{t+s}\,ds}\,\mathbf 1_{\{\tau<\xi^t_n\}} + e^{\int_0^{\xi^t_n} \hat X^{t,x,r,\sigma}_{t+s}\,ds}\, f\big(t+\xi^t_n,\, \hat X^{t,x,r,\sigma}_{t+\xi^t_n},\, r^{t,r}_{t+\xi^t_n}\big)\,\mathbf 1_{\{\tau\ge\xi^t_n\}} \Big],
$$
the crucial Proposition 3.5 holds word by word. Furthermore, the sequence of functions $\{J^{(n)}_\sigma 1\}_{n\ge 0}$ is increasing, bounded from below by 1, with each $J^{(n)}_\sigma 1$ being decreasing in the first and third variables as well as increasing and convex in the second variable $x$. As desired,
$$
J^{(n)}_\sigma 1 \nearrow v \quad\text{pointwise as } n \nearrow \infty,
$$
so the value function $v$ is decreasing in the first and third variables as well as increasing and convex in the second variable; again, $v$ is a fixed point of $\mathbf J$. Moreover, the uniform approximation error result (3.20) also holds for compactly supported priors (with the obvious reinterpretation $h = \sup(\operatorname{supp}\mu)$).

We can also show (by a similar argument as in Theorem 4.2 (iii)) that
$$
b^{(n)}_{\sigma_i} \longrightarrow b_{\sigma_i} \quad\text{pointwise as } n \nearrow \infty,
$$
where $b^{(n)}_{\sigma_i}$ denotes the stopping boundary of the problem $J_i g^{(n)}_i$ with $g^{(n)}_i := \sum_{j\neq i} \frac{\lambda_{ij}}{\lambda_i}\, J^{(n)}_j 1$, and the limit $b_{\sigma_i}$ is a function increasing in both variables. Lastly, by similar arguments as before, the stopping time
$$
\tau^* = \inf\{ s \in [0,T-t) : \hat X^{t,x,r,\sigma}_{t+s} \le b_{\sigma(t+s)}(t+s,\, r_{t+s}) \}\wedge(T-t)
$$
is optimal for the liquidation problem (2.5).

Remark 5.6 The higher the volatility, the slower the learning about the drift, so under Assumption 4.5 it is tempting to expect that the value function $v$ is decreasing in the volatility variable, and hence that the stopping boundaries satisfy $b_{\sigma_1} \le b_{\sigma_2} \le \dots \le b_{\sigma_m}$, also in the case of an arbitrary prior distribution $\mu$. Regrettably, proving (or disproving) such monotonicity in volatility has not been achieved by the author.
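Once a boundary (or its approximation $b^{(n)}_{\sigma_i}$) is available, the rule $\tau^*$ is applied path by path as a simple first-passage check. A sketch on a simulated discretised path; the Euler dynamics and the boundary below are invented stand-ins for (5.3) and $b_{\sigma_i}$, for illustration only:

```python
import random

def first_passage_stop(path, times, boundary, T):
    """tau*: first time the posterior-mean path falls to or below the current
    boundary value, capped at the horizon T (cf. the rule in Sect. 5.3)."""
    for s, x in zip(times, path):
        if x <= boundary(s):
            return s
    return T

random.seed(0)
T, n = 1.0, 1000
dt = T / n
times = [k * dt for k in range(n)]

# Invented stand-in dynamics: a drifted random walk for the posterior mean.
x, path = 0.0, []
for _ in range(n):
    path.append(x)
    x += 0.02 * dt + 0.3 * dt ** 0.5 * random.gauss(0.0, 1.0)

boundary = lambda s: -0.25 + 0.25 * s / T  # increasing, reaching 0 at t = T
tau = first_passage_stop(path, times, boundary, T)
assert 0.0 <= tau <= T  # a well-defined stopping time, never past the horizon
```

The increasing boundary reflects the structural results above: as the deadline nears, the seller accepts progressively less favourable posterior means.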
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Bain, A., Crisan, D.: Fundamentals of stochastic filtering. In: Stochastic Modelling and Applied Probability, vol. 60. Springer, New York (2009)
2. Bayraktar, E.: A proof of the smoothness of the finite time horizon American put option for jump diffusions. SIAM J. Control Optim. 48(2), 551–572 (2009)
3. Bayraktar, E.: On the perpetual American put options for level dependent volatility models with jumps. Quant. Financ. 11(3), 335–341 (2011)
4. Bayraktar, E., Kravitz, R.: Quickest detection with discretely controlled observations. Seq. Anal. 34(1), 77–133 (2015)
5. Bayraktar, E., Dayanik, S., Karatzas, I.: Adaptive Poisson disorder problem. Ann. Appl. Probab. 16(3), 1190–1261 (2006)
6. Bensoussan, A.: Applications of Variational Inequalities in Stochastic Control. Studies in Mathematics and Its Applications, vol. 12. North-Holland, Amsterdam (1982)
7. Crépey, S.: About the pricing equations in finance. In: Paris-Princeton Lectures on Mathematical Finance 2010, pp. 63–203. Springer, Berlin (2011)
8. Crisan, D., Rozovskii, B.: The Oxford Handbook of Nonlinear Filtering. Oxford University Press, Oxford (2011)
9. Dayanik, S., Poor, H.V., Sezer, S.O.: Multisource Bayesian sequential change detection. Ann. Appl. Probab. 18(2), 552–590 (2008)
10. Décamps, J.-P., Mariotti, T., Villeneuve, S.: Investment timing under incomplete information. Math. Oper. Res. 30(2), 472–500 (2005)
11. Di Masi, G.B., Kabanov, Y.M., Runggaldier, W.J.: Mean-variance hedging of options on stocks with Markov volatilities. Theory Probab. Appl.
39(1), 172–182 (1995)
12. Ekström, E., Lu, B.: Optimal selling of an asset under incomplete information. Int. J. Stoch. Anal. 2011, ID 543590 (2011)
13. Ekström, E., Lu, B.: The optimal dividend problem in the dual model. Adv. Appl. Probab. 46(3), 746–765 (2014)
14. Ekström, E., Tysk, J.: Convexity theory for the term structure equation. Financ. Stoch. 12(1), 117–147 (2008)
15. Ekström, E., Vaicenavicius, J.: Optimal liquidation of an asset under drift uncertainty. SIAM J. Financ. Math. 7(1), 357–381 (2016)
16. Elie, R., Kharroubi, I.: Probabilistic representation and approximation for coupled systems of variational inequalities. Stat. Probab. Lett. 80(17–18), 1388–1396 (2010)
17. Eloe, P., Liu, R.H., Yatsuki, M., Yin, G., Zhang, Q.: Optimal selling rules in a regime-switching exponential Gaussian diffusion model. SIAM J. Appl. Math. 69(3), 810–829 (2008)
18. Gapeev, P.: Pricing of perpetual American options in a model with partial information. Int. J. Theor. Appl. Financ. 15(1), ID 1250010 (2012)
19. Gugerli, U.S.: Optimal stopping of a piecewise-deterministic Markov process. Stochastics 19(4), 221–236 (1986)
20. Guo, X., Zhang, Q.: Closed-form solutions for perpetual American put options with regime switching. SIAM J. Appl. Math. 64(6), 2034–2049 (2004)
21. Guo, X., Zhang, Q.: Optimal selling rules in a regime switching model. IEEE Trans. Autom. Control 50, 1450–1455 (2005)
22. Huang, Y., Forsyth, P.A., Labahn, G.: Methods for pricing American options under regime switching. SIAM J. Sci. Comput. 33(5), 2144–2168 (2011)
23. Karatzas, I., Shreve, S.: Methods of Mathematical Finance. Applications of Mathematics, vol. 39. Springer, New York (1998)
24. Le, H., Wang, C.: A finite time horizon optimal stopping problem with regime switching. SIAM J. Control Optim. 48(8), 5193–5213 (2010)
25. Lu, B.: Optimal selling of an asset with jumps under incomplete information. Appl. Math. Financ. 20(6), 599–610 (2013)
26.
Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 6th edn. Springer, New York (2007) 27. Pascucci, A.: Free boundary and optimal stopping problems for American Asian options. Financ. Stoch. 12(1), 21–41 (2008) 28. Pemy, M., Zhang, Q.: Optimal stock liquidation in a regime switching model with finite time horizon. J. Math. Anal. Appl. 321(2), 537–552 (2006) 29. Peskir, G., Shiryaev, A.: Optimal Stopping and Free-Boundary Problems. Lectures in Mathematics, ETH Zürich. Birkhäuser Verlag, Basel (2006) 30. Pham, H.: Continuous-Time Stochastic Control and Optimization with Financial Applications, vol. 61. Springer, Berlin (2009) 31. Revuz, D., Yor, M.: Continuous martingales and Brownian motion. Grundlehren der Mathematischen Wissenschaften, 3rd edn., vol. 293. Springer, Berlin (1999) 32. Rogers, L.C.G.: Optimal Investment. Springer Briefs in Quantitative Finance. Springer, New York (2013) 33. Vannestål, M.: Exercising American options under uncertainty. Working paper (2017) 34. Yin, G., Liu, R.H., Zhang, Q.: Recursive algorithms for stock liquidation: a stochastic optimization approach. SIAM J. Optim. 13(1), 240–263 (2002) 35. Yin, G., Zhang, G., Liu, F., Liu, R.H., Cheng, Y.: Stock liquidation via stochastic approximation using Nasdaq daily and intra-day data. Math. Financ. 16(1), 217–236 (2006) 36. Zhang, Q.: Stock trading: an optimal selling rule. SIAM J. Control Optim. 40(1), 64–87 (2001) 37. Zhang, Q., Yin, G., Liu, R.H.: A near-optimal selling rule for a two-time-scale market model. Multiscale Model. Simul. 4(1), 172–193 (2005)

Journal: Applied Mathematics and Optimization (Springer Journals). Published: Aug 30, 2018.
