Modeling the Future Value Distribution of a Life Insurance Portfolio

Massimo Costabile 1,*,† and Fabio Viviano 2,3,†

1 Department of Economics, Statistics and Finance, University of Calabria, Ponte Bucci Cubo 0 C, 87036 Rende, CS, Italy
2 Department of Economics and Statistics, University of Udine, Via Tomadini 30/A, 33100 Udine, Italy; viviano.fabio@spes.uniud.it
3 Department of Economics, Business, Mathematics and Statistics “B. de Finetti”, University of Trieste, Piazzale Europa 1, 34127 Trieste, Italy
* Correspondence: massimo.costabile@unical.it
† These authors contributed equally to this work.

Abstract: This paper addresses the problem of approximating the future value distribution of a large and heterogeneous life insurance portfolio, a problem that plays a relevant role, for instance, in solvency capital requirement valuations. Based on a metamodel, we first select a subset of representative policies in the portfolio. Then, using Monte Carlo simulations, we obtain a rough estimate of the policies’ values at the chosen future date, and finally we approximate the distribution of a single policy and of the entire portfolio by means of two different approaches: the ordinary least-squares method and a regression method based on the class of generalized beta distributions of the second kind. Extensive numerical experiments are provided to assess the performance of the proposed models.

Keywords: GB2; LSMC; metamodel; regression models; Solvency II

JEL Classification: G22

Citation: Costabile, Massimo, and Fabio Viviano. 2021. Modelling the Future Value Distribution of a Life Insurance Portfolio. Risks 9: 177. https://doi.org/10.3390/risks9100177

Received: 19 August 2021; Accepted: 27 September 2021; Published: 2 October 2021. Academic Editor: Mercedes Ayuso. © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In many relevant situations, life insurers face the necessity to determine the distribution of the value of their portfolio of policies at a certain future date. This happens, for example, when insurers must maintain solvency capital requirements in order to continue to conduct business, as stated in the Solvency II directive or in the Swiss Solvency Test. In particular, Article 101(3) of the European directive requires that the Solvency Capital Requirement “shall correspond to the Value-at-Risk of the basic own funds of an insurance or reinsurance undertaking subject to a confidence level of 99.5% over a one-year period” (see European Parliament and European Council 2009). As a consequence, insurers are obliged to assess the value of assets and liabilities at a future date, the so-called risk horizon, in order to derive their full loss distributions. To achieve this, the relevant risk factors must be projected to the risk horizon and then, conditional on the realized values, a market-consistent valuation of the insurer’s assets and liabilities is required. This has led insurance and reinsurance companies to face a computationally intensive problem. Indeed, due to the complex structure of the insurer’s liabilities, closed-form formulas are in general not available, and a straightforward approach, common among insurers, is to obtain an estimate through nested Monte Carlo simulations.
Unfortunately, this approach is extremely time consuming and readily becomes unmanageable from a computational point of view. In this regard, one possible alternative proposed in the literature to reduce the computational effort while preserving the accuracy of the desired estimates is the Least-Squares Monte Carlo (LSMC) method, first introduced by Carrière (1996), Tilley (1993), and Longstaff and Schwartz (2001) in the context of American-type option pricing. The application of the LSMC method to valuing solvency capital requirements in the insurance business was proposed in Cathcart and Morrison (2009) and Bauer et al. (2010). Moreover, Floryszczak et al. (2016) and Krah et al. (2018) illustrate practical implementations of the LSMC method in this particular context.

The above-mentioned papers share the common feature of evaluating capital requirements for a single policy. In the case of an entire portfolio of policies, the nested simulation approach is even more difficult to implement due to the huge computational effort needed. For instance, assuming 10,000 outer trajectories simulated from the current time to the risk horizon for each of the $v$ risk factors, then 2500 inner paths for each outer one, with a monthly discretization over 20 years, and considering an insurance portfolio composed of 10,000 contracts, the total number of cash-flow projections needed would be

$$10{,}000 \cdot v \cdot 2500 \cdot (12 \cdot 20) \cdot 10{,}000 = v \cdot 6 \times 10^{13},$$

which is very hard to manage.

In order to keep the computational complexity of the evaluation problem at a reasonable level, we propose a metamodeling approach. Metamodeling, introduced in systems engineering (see Barton 2015), can be defined as “the practice of using a model to describe another model as an instance” (see Allemang and Hendler 2011). This approach has also been widely used in the actuarial literature to estimate the price and Greeks of large portfolios of life insurance policies. For instance, Gan (2013) developed a metamodel based on data clustering and machine learning to price large portfolios of variable annuities, while Gan and Lin (2015) tackled a similar problem by developing a functional data approach. In addition, Gan (2015) compares the data clustering approach and Latin hypercube sampling for selecting representative variable annuities. Finally, Gan and Valdez (2018) propose a metamodel to estimate partial Greeks of variable annuities with dependence.

In the present paper, the metamodel we propose to approximate the future value distribution of a life insurance portfolio is constructed in several steps:

1. Select a subset of representative policies by means of conditional Latin hypercube sampling;
2. Project the risk factors from the evaluation date to the risk horizon by means of outer simulations;
3. Compute a rough estimate of the value of each representative policy by means of a very limited number (say, two) of inner simulations;
4. Create a regression model to approximate the distribution of the value of the representative policies;
5. Use the regression model to estimate the future value distribution of the entire portfolio.

We propose two different approaches to develop the regression model in steps 4 and 5.
The first approach relies upon the well-established ordinary least-squares (OLS) method for approximating the conditional distribution of each representative policy at the risk horizon; a second OLS regression is then applied to estimate the future value distribution of the entire portfolio. Roughly speaking, the LSMC method is applied to estimate the distribution of the value of each representative policy at the risk horizon, and this information is then extended to the entire portfolio by means of a simple OLS regression. We call this approach the LSMC method. The second approach exploits the class of generalized beta of the second kind (GB2) distributions to model the conditional distribution of each representative policy value at the risk horizon and also to estimate the future value distribution of the entire portfolio. We underline that the GB2 regression model has been used in Gan and Valdez (2018) for modeling the fair current market values of guarantees embedded in a large variable annuity portfolio starting from a set of representative policies. Extensive numerical experiments have been conducted in order to assess the performance of the proposed models.

The remainder of the paper is structured as follows. Section 2 provides the evaluation framework and Section 3 introduces the metamodeling approach. Section 4 illustrates some numerical results, and finally, in Section 5, conclusions are drawn.

2. The Evaluation Framework

We consider a life insurance portfolio with $M$ contracts underwritten by different policyholders (males and females) of different ages at the inception date $t = 0$. We take into account different types of life insurance policies, which differ from each other in terms of maturity, policyholders’ ages, and sex. In particular, we consider unit-linked products, term life insurance and immediate life annuities. We assume that the unit-linked product pays at maturity, conditional on the survival of the insured, the maximum between the minimum guaranteed benefit and the value of a specific reference asset. The immediate life annuity is assumed to pay 10% of the level of a given reference asset continuously while the insured is alive; finally, the term insurance contract pays the total value of the asset upon the death of the policyholder before maturity. All the possible policy configurations are reported in Table 1.

Table 1. This table shows the parameters used to generate the life insurance portfolio.

| Feature          | Value                                       |
|------------------|---------------------------------------------|
| Policyholder age | {55, ..., 65}                               |
| Sex              | {Male, Female}                              |
| Maturity         | {10, 15, 20, 25, 30}                        |
| Product type     | {Unit-linked, Term Insurance, Life Annuity} |

Since our task is to approximate the portfolio value distribution at the risk horizon starting from a set of representative policies, we use the Conditional Latin Hypercube Sampling (CLHS) method (see Minasny and McBratney 2006). Indeed, this approach has already been applied to select subsets of representative policies, providing reliable results (see, e.g., Gan and Valdez 2018). Therefore, in order to select a set of $s$ representative contracts, we apply the CLHS method to the design matrix $X$, which contains all the features characterizing each specific policy, i.e., type, maturity, sex and age of the policyholder. Note that the categorical variables are treated as dummy variables. An illustrative sketch of this selection step follows.
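For illustration purposes only, the following R sketch shows how such a selection could be carried out; it is not the code used for the experiments. The synthetic portfolio data frame stands in for the design matrix $X$ of Table 1, and we assume the CRAN package clhs, which implements the conditioned Latin hypercube sampling scheme of Minasny and McBratney (2006).

```r
# Hedged sketch: selecting s representative policies via conditioned Latin
# hypercube sampling. Assumes the CRAN package "clhs"; the portfolio below
# is a synthetic stand-in for the design matrix X of Table 1.
library(clhs)

set.seed(1)
M <- 10000  # number of contracts in the portfolio

portfolio <- data.frame(
  age      = sample(55:65, M, replace = TRUE),
  sex      = factor(sample(c("Male", "Female"), M, replace = TRUE)),
  maturity = sample(c(10, 15, 20, 25, 30), M, replace = TRUE),
  type     = factor(sample(c("Unit-linked", "Term Insurance", "Life Annuity"),
                           M, replace = TRUE))
)

s <- 50                               # about 10 x the input dimension
rep_idx <- clhs(portfolio, size = s)  # returns the sampled row indices
representatives <- portfolio[rep_idx, ]
```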
In order to project the cash flows generated by the contracts over time, we need to simulate the possible evolution of the risk factors. In this regard, we consider a computational framework where mortality, the interest rate and the reference asset are taken into account. Although insurance companies are exposed to both systematic and non-systematic mortality risk, in our setting we consider only the first component, for computational reasons, given the large dimension of the portfolio that will be considered.

Let $(\Omega, \mathcal{F}, P)$ be a filtered probability space large enough to support a process $X$ in $\mathbb{R}^k$, representing the evolution of financial variables, and a process $Y$ in $\mathbb{R}^d$, representing the evolution of mortality. The filtration $\mathbb{F} = (\mathcal{F}_t)_{t \ge 0}$ represents the flow of information available as time passes; this includes knowledge of the evolution of all state variables up to each time $t$ and of whether the policyholder has died by then. Specifically, we define $\mathcal{F}_t$ as the $\sigma$-algebra generated by $\mathcal{G}_t \cup \mathcal{H}_t$, where

$$\mathcal{G}_t = \sigma(Z_s : 0 \le s \le t), \qquad \mathcal{H}_t = \sigma\big(I_{\{V \le s\}} : 0 \le s \le t\big),$$

and where $Z = (X, Y)$ is the joint state-variable process in $\mathbb{R}^{k+d}$. Thus, we have $\mathbb{F} = \mathbb{G} \vee \mathbb{H}$, with $\mathbb{G} = \mathbb{G}^X \vee \mathbb{G}^Y$ and with $\mathbb{H} = (\mathcal{H}_t)_{t \ge 0}$ being the smallest filtration with respect to which $V$, interpreted as the remaining lifetime of an insured, is a stopping time. For more detail on modeling mortality under the intensity-based framework, see Biffis (2005).

Under the physical probability measure $P$, we assume that the dynamics of the financial risk factors (reference asset value $S$ and interest rate $r$) are described by the following stochastic differential equations:

$$dS(t) = S(t)\big(r(t) + \lambda\big)\,dt + S(t)\,\sigma_S\,dW^{1,P}(t), \qquad S(0) = S_0, \tag{1}$$

where $\lambda$ is the risk premium, $\sigma_S$ is a positive constant, $W^{1,P}(t)$ is a standard Wiener process, and $r(t)$ is the risk-free interest rate, which is assumed to follow the dynamics

$$dr(t) = a\big(\theta - r(t)\big)\,dt + \sigma_r\,dW^{2,P}(t), \qquad r(0) = r_0. \tag{2}$$

Here, $W^{2,P}(t)$ is a standard Wiener process, and the coefficients $a$, $\theta$, $\sigma_r$ are positive constants representing the speed of mean reversion, the long-term interest rate, and the interest rate volatility, respectively. Further, we assume that the two Wiener processes $W^{1,P}(t)$ and $W^{2,P}(t)$ are correlated with correlation coefficient $\rho$.

In the absence of arbitrage opportunities, an equivalent martingale measure $Q$ exists, under which all financial security prices are martingales after deflation by the money market account. We refer the reader to Biffis (2005) for more detail. Under the risk-neutral probability measure $Q$, the dynamics in Equations (1) and (2) can be rewritten as

$$dS(t) = S(t)\,r(t)\,dt + S(t)\,\sigma_S\,dW^{1,Q}(t),$$

and

$$dr(t) = a\big(\theta - \gamma - r(t)\big)\,dt + \sigma_r\,dW^{2,Q}(t),$$

where $\gamma$ is the market price of risk. Note that $W^{1,Q}(t)$ and $W^{2,Q}(t)$ are two correlated standard Wiener processes with correlation coefficient $\rho$ under $Q$.

Concerning mortality, following Fung et al. (2014), we assume that the force of mortality $\mu_{x+t}(t)$ under the physical probability measure $P$, for an individual aged $x$ at time $t = 0$, evolves according to the following one-factor, non-mean-reverting and time-homogeneous affine process:

$$d\mu_{x+t}(t) = \big[a + b\,\mu_{x+t}(t)\big]\,dt + \sigma_\mu \sqrt{\mu_{x+t}(t)}\,dW^{3,P}(t), \qquad \mu_x(0) > 0, \tag{3}$$

where $a \ne 0$, $b > 0$, and $\sigma_\mu > 0$ represents the volatility of the mortality intensity; $W^{3,P}(t)$ is a standard Wiener process assumed to be independent of $W^{1,P}(t)$ and $W^{2,P}(t)$.
As pointed out by Fung et al. (2014), the important advantages of the mortality model defined in Equation (3) are its tractability, since analytical expressions are available to evaluate survival probabilities, and its simplicity, since the model dynamics can be easily simulated. Furthermore, this model guarantees that, under specific conditions, the force of mortality is strictly positive (namely, if $a \ge \sigma_\mu^2/2$). The dynamics in Equation (3) under $Q$ can be written as

$$d\mu_{x+t}(t) = \big[a + (b - \delta\sigma_\mu)\,\mu_{x+t}(t)\big]\,dt + \sigma_\mu \sqrt{\mu_{x+t}(t)}\,dW^{3,Q}(t), \qquad \mu_x(0) > 0,$$

where $W^{3,Q}(t)$ is a standard Wiener process under the risk-neutral measure and $\delta$ is the market price of systematic mortality risk.

Note that the parameters of the stochastic mortality model are estimated by calibrating the implied survival curve to the one obtained from the Italian population data of year 2016 (assumed to be $t = 0$) collected from the Human Mortality Database (see Fung et al. 2014). The calibration procedure was conducted for all policyholder ages and genders reported in Table 1. Finally, it is worth noting that, thanks to the flexibility of the methodology that will be proposed, different and/or more complex dynamics for the risk factors could be assumed in place of the ones above.

3. Problem and Methodology

Under the framework defined in Section 2, we need to evaluate the streams of payments embedded in each policy inside the insurance portfolio. Before discussing the methodology, let us recall some results provided by Biffis (2005) related to the time-$t$ fair values of the most common payoffs embedded in typical life insurance products, i.e., survival and death benefits.

Proposition 1 (Survival benefit). Let $C$ be a bounded $\mathbb{G}$-adapted process. Then, the time-$t$ fair value $SB_t(C_T; T)$ of the time-$T$ survival benefit of amount $C_T$, with $0 \le t \le T$, is given by:

$$SB_t(C_T; T) = E\Big[e^{-\int_t^T r_s\,ds}\, I_{\{V>T\}}\, C_T \,\Big|\, \mathcal{F}_t\Big] = I_{\{V>t\}}\, E\Big[e^{-\int_t^T (r_s + \mu_s)\,ds}\, C_T \,\Big|\, \mathcal{G}_t\Big].$$

In particular, if $C$ is $\mathbb{G}^X$-adapted, the following holds:

$$SB_t(C_T; T) = I_{\{V>t\}}\, E\Big[e^{-\int_t^T r_s\,ds}\, C_T \,\Big|\, \mathcal{G}_t^X\Big]\, E\Big[e^{-\int_t^T \mu_s\,ds} \,\Big|\, \mathcal{G}_t^Y\Big].$$

Proposition 2 (Death benefit). Let $C$ be a bounded $\mathbb{G}$-predictable process. Then, the time-$t$ fair value $DB_t(C_V; T)$ of the death benefit of amount $C_V$, payable in case the insured dies before time $T$, with $0 \le t \le T$, is given by

$$DB_t(C_V; T) = E\Big[e^{-\int_t^V r_s\,ds}\, C_V\, I_{\{t < V \le T\}} \,\Big|\, \mathcal{F}_t\Big] = I_{\{V>t\}} \int_t^T E\Big[e^{-\int_t^u (r_s + \mu_s)\,ds}\, \mu_u\, C_u \,\Big|\, \mathcal{G}_t\Big]\,du.$$

In particular, if $C$ is $\mathbb{G}^X$-predictable, the following holds:

$$DB_t(C_V; T) = I_{\{V>t\}} \int_t^T E\Big[e^{-\int_t^u r_s\,ds}\, C_u \,\Big|\, \mathcal{G}_t^X\Big]\, E\Big[e^{-\int_t^u \mu_s\,ds}\, \mu_u \,\Big|\, \mathcal{G}_t^Y\Big]\,du.$$

We refer the reader to Biffis (2005) for the corresponding proofs and further details. As Propositions 1 and 2 show, evaluating life insurance policies at future times requires solving conditional expectations for which analytical formulas often do not exist. For this reason, simulation-based approaches are extensively used (see Boyer and Stentoft 2013), among which we mention the nested simulations method, where a high number of inner simulations branch out from another large set of outer scenarios. However, the simulations-within-simulations approach is computationally challenging, especially when several policies are considered, as in our case. Therefore, in the following, we discuss two methodologies to evaluate the streams of payments embedded in each policy inside the insurance portfolio; both start from simulated projections of the risk factors, as sketched below.
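As a purely illustrative aid, the following R sketch simulates one outer trajectory of $S$, $r$ and $\mu$ under $P$ with a monthly Euler scheme. The financial parameters are those of Table A1; the mortality parameters are of the order of the male age-60 row of Table A2, and the initial intensity mu0 is a hypothetical placeholder, since in the paper it comes from the calibrated model of Equation (3).

```r
# Hedged sketch: one monthly Euler projection of the risk factors (S, r, mu)
# under P up to the risk horizon tau. mu0 is a hypothetical placeholder;
# max(mu, 0) guards the square-root diffusion term against negative excursions.
simulate_outer <- function(tau = 1, dt = 1 / 12,
                           S0 = 100, r0 = 0.04, mu0 = 0.005,
                           lambda = 0.00, sigma_S = 0.20,
                           a = 0.10, theta = 0.02, sigma_r = 0.02, rho = 0.00,
                           a_mu = 0.0006, b_mu = 0.074, sigma_mu = 0.001) {
  n_steps <- round(tau / dt)
  S <- r <- mu <- numeric(n_steps + 1)
  S[1] <- S0; r[1] <- r0; mu[1] <- mu0
  for (i in seq_len(n_steps)) {
    z1 <- rnorm(1)
    z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(1)  # correlated with z1
    z3 <- rnorm(1)                               # independent mortality shock
    S[i + 1]  <- S[i] + S[i] * (r[i] + lambda) * dt +
                 S[i] * sigma_S * sqrt(dt) * z1
    r[i + 1]  <- r[i] + a * (theta - r[i]) * dt + sigma_r * sqrt(dt) * z2
    mu[i + 1] <- mu[i] + (a_mu + b_mu * mu[i]) * dt +
                 sigma_mu * sqrt(max(mu[i], 0)) * sqrt(dt) * z3
  }
  list(S = S, r = r, mu = mu)
}

set.seed(1)
path <- simulate_outer()
c(S_tau = tail(path$S, 1), r_tau = tail(path$r, 1), mu_tau = tail(path$mu, 1))
```

Under $Q$, one would drop $\lambda$, shift $\theta$ by $\gamma$, and tilt the mortality drift by $\delta\sigma_\mu$, in line with the risk-neutral dynamics above.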
For this purpose, we project the relevant risk factors affecting the policies (i.e., $S$, $r$, and $\mu$) under the physical probability measure from time $t = 0$ up to the risk horizon $\tau$, and then, for each outer scenario, another set of inner trajectories is simulated under the risk-neutral measure. In order to avoid the huge computational cost of a pure nested model, as in the LSMC approach, we simulate $n$ possible outer trajectories of the risk factors and then, for each of them, we further simulate $\bar n \ll n$ inner paths. Following this approach, let $Z$ be an $n \times v$ matrix, where the row vector $z^k$ contains the $k$th outer scenario of the $v$ risk factors affecting the value of the $i$th representative policy. For each vector $z^k$ and for times $\tau < t \le T$, we simulate $\bar n$ trajectories under the risk-neutral probability measure. To simplify the notation, we focus on the $i$th representative policy, and we denote by $z^k_{j,t}$ the vector containing the time-$t$ values of the risk factors along the $j$th inner trajectory corresponding to the $k$th outer scenario. Moreover, we label $Y$ an $n \times s$ matrix whose element $y_{ik}$ represents the value of the $i$th policy corresponding to the $k$th outer scenario, obtained by averaging across the few inner simulations. Formally,

$$y_{ik} = \frac{1}{\bar n} \sum_{j=1}^{\bar n} \sum_{\tau < t \le T_i} F_t^i\big(z^k_{j,t}\big), \qquad i = 1, \dots, s, \quad k = 1, \dots, n, \tag{4}$$

where the $F_t^i(\cdot)$'s represent the discounted cash flows at time $t$ of the $i$th policy with maturity $T_i$. In this way, we obtain a first (rough) estimate of the value distribution of each representative policy at the future time $\tau$. The next step is to obtain a more accurate estimate of the distribution of the time-$\tau$ value of each representative policy and then to infer the distribution of the time-$\tau$ value of the entire portfolio. We achieve this by applying two different approaches: an OLS regression as in the least-squares Monte Carlo method, and a GB2 model.

3.1. The LSMC Method

The least-squares Monte Carlo method, applied to the problem of computing the distribution of the insurer’s liabilities at a certain future date, is based on the idea that the bias deriving from the few inner simulations can be reduced by approximating the involved conditional expectations with a linear combination of basis functions depending on some covariates, whose coefficients are estimated through an ordinary least-squares procedure (see Bauer et al. 2010 for further details). A straightforward application of the LSMC approach would be to apply the method to each policy inside the insurance portfolio. However, this strategy would be quite expensive computationally, due to the large dimension of an insurance portfolio. For this reason, we propose applying the LSMC method first to just a set of representative policies, and then extending it to the entire portfolio through an OLS regression.

Hence, according to the LSMC method, we assume that the conditional $i$th representative policy value, $\hat y_{ik}$, can be expressed as a linear combination of basis functions depending on the covariate matrix $Z$ as follows:

$$\hat y_{ik} = \sum_{j=1}^{L} \beta_j^i\, e_j\big(z^k\big), \qquad i = 1, \dots, s, \quad k = 1, \dots, n, \tag{5}$$

where $e_j(\cdot)$ is the $j$th basis function in the regression, $L$ is the number of basis functions, and the $\beta_j^i$'s are the coefficients estimated through

$$\big(\hat\beta_1^i, \dots, \hat\beta_L^i\big) = \underset{\beta_1, \dots, \beta_L}{\operatorname{argmin}} \sum_{k=1}^{n} \Bigg[y_{ik} - \sum_{j=1}^{L} \beta_j\, e_j\big(z^k\big)\Bigg]^2.$$

In this way, we obtain an $n \times s$ matrix $\hat Y$ where each row vector $\hat y_k$ contains the values of each representative policy corresponding to the $k$th outer scenario. A minimal illustration of this regression step is sketched below.
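The smoothing step in Equation (5) is a standard linear regression. The R sketch below (ours, not the paper's code) illustrates it for a single representative policy, with a synthetic data frame Z standing in for the outer scenarios and a synthetic vector y standing in for the rough values of Equation (4); for simplicity it uses plain degree-2 polynomials via poly() in place of the Hermite basis adopted in the paper.

```r
# Hedged sketch of Equation (5) for one representative policy: regress rough
# values (from n_bar = 2 inner paths per outer scenario) on basis functions
# of the time-tau risk factors. All inputs are synthetic placeholders.
set.seed(2)
n <- 10000
Z <- data.frame(S  = rlnorm(n, log(100), 0.2),     # reference asset at tau
                r  = rnorm(n, 0.02, 0.01),         # short rate at tau
                mu = abs(rnorm(n, 0.005, 0.001)))  # force of mortality at tau
y <- 100 + 0.8 * Z$S - 500 * Z$r + rnorm(n, sd = 5)  # placeholder rough values

# Degree-2 polynomial basis (the paper uses Hermite polynomials instead).
fit   <- lm(y ~ poly(S, 2) + poly(r, 2) + poly(mu, 2), data = Z)
y_hat <- fitted(fit)  # smoothed time-tau values, one per outer scenario

quantile(y_hat, c(0.05, 0.5, 0.995))  # per-policy value distribution at tau
```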
Now, in order to approximate the distribution of the value of the entire portfolio, we construct an OLS regression model for each outer scenario. In this regard, we denote by $X$ an $M \times (w+1)$ matrix, where the row vector $x_i$ contains the $w$ covariates (gender, product type, age, and maturity) characterizing the $i$th contract in the portfolio plus an intercept term ($M$ is the total number of contracts inside the insurance portfolio). Moreover, let $\bar X$ be the $s \times (w+1)$ matrix describing the structure of the representative insurance portfolio; hence, $\bar x_i$ contains the $w$ covariates characterizing the $i$th representative contract plus an intercept term. Therefore, we regress each row vector $\hat y_k$ ($k = 1, \dots, n$) on the covariate matrix $\bar X$ and, once the coefficients are estimated, we extend them to the remaining policies by exploiting the matrix $X$. In this way, we obtain the value of the $i$th contract corresponding to the $k$th outer scenario, denoted by $v_{ik}$. Formally,

$$v_{ik} = x_i\, \hat\beta_k, \qquad i = 1, \dots, M, \quad k = 1, \dots, n, \tag{6}$$

where

$$\hat\beta_k = \big(\bar X' \bar X\big)^{-1} \bar X'\, \hat y_k.$$

Finally, the entire portfolio value distribution is obtained by adding up all the policy values in Equation (6) corresponding to each outer scenario; a sketch of this extension step follows.
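A compact way to implement Equation (6) is to fit all $n$ cross-sectional regressions at once with matrix algebra. The R sketch below (ours) reuses the hypothetical portfolio and representatives data frames from the selection sketch above; Yhat is a synthetic stand-in for the $n \times s$ matrix of smoothed representative values.

```r
# Hedged sketch of the portfolio extension in Equation (6). Each column of B
# holds the OLS coefficients beta_hat_k for one outer scenario; V[i, k] is
# the estimated value of policy i in scenario k.
set.seed(3)
n <- 1000
s <- nrow(representatives)
Yhat <- matrix(rlnorm(n * s, log(150), 0.1), n, s)  # placeholder smoothed values

form <- ~ age + sex + maturity + type
Xbar <- model.matrix(form, representatives)  # s x (w+1): dummies + intercept
X    <- model.matrix(form, portfolio)        # M x (w+1)

B <- solve(crossprod(Xbar), t(Xbar) %*% t(Yhat))  # (w+1) x n coefficients
V <- X %*% B                                      # M x n policy values

portfolio_value <- colSums(V)       # portfolio value in each outer scenario
quantile(portfolio_value, 0.995)    # e.g., the 99.5th percentile at tau
```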
3.2. The GB2 Model

The GB2 model provides a flexible family of distributions, as it nests a range of standard distributions as special or limiting cases, such as the log-normal, the generalized gamma, the Burr type III, the Burr type XII and many others (see McDonald 1984). Moreover, it has been used in several actuarial applications (see, e.g., Gan and Valdez 2018) to model the fair market value of a portfolio of life insurance policies. A GB2 random variable can be constructed as a transformed ratio of two gamma random variables. The density function of a GB2 random variable $Y$ is given by

$$f(y) = \frac{|a|\,\big(y/b\big)^{ap-1}}{b\,B(p,q)\,\big[1 + (y/b)^a\big]^{p+q}}, \qquad y > 0, \tag{7}$$

where $a \ne 0$, $p > 0$, $q > 0$ are shape parameters, $b > 0$ is the scale parameter, and $B(\cdot,\cdot)$ is the Beta function. Its expectation equals

$$E[Y] = b\, \frac{B\big(p + \tfrac{1}{a},\, q - \tfrac{1}{a}\big)}{B(p, q)}, \tag{8}$$

which exists if $-p < \tfrac{1}{a} < q$.

In order to approximate the value of the portfolio, we first approximate the time-$\tau$ value of each representative policy, and then we use this information to approximate the distribution of the value of the entire insurance portfolio at the risk horizon. To achieve this, we construct two different GB2 regression models, which exploit, respectively, the information generated at the risk horizon (i.e., $S(\tau)$, $r(\tau)$, and $\mu(\tau)$) and the features uniquely characterizing each policy.

Specifically, since the policy values $y_{ik}$ obtained from Equation (4) are not accurate, due to the few inner trajectories on which they are based, we aim to reduce the bias by estimating the involved conditional expectation through a GB2 regression model. In this regard, we assume that the $i$th policy value at time $\tau$, conditioned on a specific outer scenario, is a GB2 random variable with parameters $(a_i, p_i, q_i, b_i)$. In particular, we make the $b$ parameter depend on some independent covariates (i.e., the values at time $\tau$ of the risk factors which affect the policy of interest). Note that several approaches to incorporate covariates into the GB2 regression model exist, as well as different re-parametrizations (see Beirlant et al. 2004; Frees and Valdez 2008). However, as noticed by Sun et al. (2008) and Frees et al. (2016), incorporating them into the scale parameter $b$ facilitates the interpretability of the model; indeed, as can be seen in Equation (8), the expectation changes proportionally with $b$, which allows one to interpret the regression coefficients as proportional changes.

Hence, $b_i(Z^i) = \exp\big(Z^i \beta_i\big)$, where $\beta_i = (\beta_{i,0}, \beta_{i,1}, \dots, \beta_{i,v})'$ are the coefficients attached to each risk factor. Note that the matrix $Z^i$ now includes an intercept term. We can use the maximum likelihood method to estimate the parameters. Since we incorporate covariates through the scale parameter, the log-likelihood function of the model can be written as

$$\ell(a_i, p_i, q_i, \beta_i) = n \ln\frac{|a_i|}{B(p_i, q_i)} - a_i p_i \sum_{k=1}^{n} z_k^{i\prime} \beta_i + (a_i p_i - 1) \sum_{k=1}^{n} \ln(y_{ik}) - (p_i + q_i) \sum_{k=1}^{n} \ln\Bigg[1 + \Bigg(\frac{y_{ik}}{\exp\big(z_k^{i\prime} \beta_i\big)}\Bigg)^{a_i}\Bigg], \tag{9}$$

where $i = 1, \dots, s$, $n$ is the number of generated outer scenarios, and $y_{ik}$ denotes the value of the $i$th policy corresponding to the $k$th outer scenario. Once the parameters of the GB2 model are estimated, we use the expectation to predict the value of the policy at time $\tau$. Since we incorporate covariates through the scale parameter, we can estimate it as

$$\hat y_{ik} = \exp\big(z_k^{i\prime} \hat\beta_i\big)\, \frac{B\big(\hat p_i + \tfrac{1}{\hat a_i},\, \hat q_i - \tfrac{1}{\hat a_i}\big)}{B(\hat p_i, \hat q_i)}, \qquad i = 1, \dots, s, \quad k = 1, \dots, n, \tag{10}$$

where $z_k^i$ is the vector containing the $k$th outer scenario of the risk factors affecting the $i$th representative policy.

Once we obtain an estimate of the distribution of each representative policy at time $\tau$, we extend this information to the remaining policies. As already done for the OLS model, we exploit both matrices $X$ and $\bar X$, on which we now construct a new GB2 regression model. Therefore, let $\hat Y$ be the $n \times s$ matrix whose elements $\hat y_{ik}$ denote the value of the $i$th representative policy corresponding to the $k$th outer scenario obtained through Equation (10). Now we construct a GB2 regression model in order to infer, starting from the set of representative policies, the distribution of the entire portfolio. Hence, recalling the pdf defined in Equation (7), we define the following log-likelihood function:

$$\ell(a_k, p_k, q_k, \beta_k) = s \ln\frac{|a_k|}{B(p_k, q_k)} - a_k p_k \sum_{i=1}^{s} \bar x_i' \beta_k + (a_k p_k - 1) \sum_{i=1}^{s} \ln(\hat y_{ik}) - (p_k + q_k) \sum_{i=1}^{s} \ln\Bigg[1 + \Bigg(\frac{\hat y_{ik}}{\exp\big(\bar x_i' \beta_k\big)}\Bigg)^{a_k}\Bigg], \tag{11}$$

where $s$ is the number of representative policies and $\bar x_i$ is the row vector containing the information on the $i$th representative contract.

Once again, after estimating the parameters through the maximum likelihood approach, we can derive the distribution at the risk horizon for all the policies inside the insurance portfolio as

$$\hat v_{ik} = \exp\big(x_i' \hat\beta_k\big)\, \frac{B\big(\hat p_k + \tfrac{1}{\hat a_k},\, \hat q_k - \tfrac{1}{\hat a_k}\big)}{B(\hat p_k, \hat q_k)}, \qquad i = 1, 2, \dots, M, \quad k = 1, \dots, n, \tag{12}$$

where $\hat v_{ik}$ is the value of the $i$th contract corresponding to the $k$th outer scenario. Finally, the entire portfolio value distribution is again obtained by adding up all the policy values corresponding to each outer scenario.

Note that the log-likelihood functions in Equations (9) and (11) may have multiple local maxima and, since an analytic solution does not exist, we need to rely on a numerical procedure to estimate the involved parameters. We adopt the same multistage optimization algorithm described in Gan and Valdez (2018); a single-start sketch of the estimation step is given below.
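The following R sketch (ours, a single-start simplification rather than the multistage scheme of Gan and Valdez 2018) shows how the likelihood of Equation (9) can be maximized numerically in base R. It reuses the hypothetical y and Z from the LSMC sketch above, and log-parametrizes $p$ and $q$ to keep them positive.

```r
# Hedged sketch: maximum likelihood for the GB2 regression of Equation (9),
# with scale b = exp(z'beta). Reuses the synthetic y and Z defined earlier.
gb2_negloglik <- function(par, y, Zmat) {
  a <- par[1]; p <- exp(par[2]); q <- exp(par[3])  # p, q > 0 by construction
  beta <- par[-(1:3)]
  b <- exp(as.vector(Zmat %*% beta))               # scale via covariates
  ll <- log(abs(a)) - lbeta(p, q) + (a * p - 1) * log(y) -
        a * p * log(b) - (p + q) * log(1 + (y / b)^a)
  -sum(ll)                                         # optim() minimizes
}

Zmat  <- cbind(1, Z$S, Z$r, Z$mu)            # intercept + risk factors at tau
start <- c(2, 0, 0, log(mean(y)), 0, 0, 0)   # (a, ln p, ln q, beta)
fit_gb2 <- optim(start, gb2_negloglik, y = y, Zmat = Zmat, method = "BFGS")

# Fitted values via Equation (10); the mean exists only if -p < 1/a < q.
a_h <- fit_gb2$par[1]
p_h <- exp(fit_gb2$par[2]); q_h <- exp(fit_gb2$par[3])
b_h <- exp(as.vector(Zmat %*% fit_gb2$par[-(1:3)]))
y_hat_gb2 <- b_h * beta(p_h + 1 / a_h, q_h - 1 / a_h) / beta(p_h, q_h)
```

The same code, with Zmat replaced by the policy-feature matrix $\bar X$ and y by the fitted representative values, implements the second regression of Equation (11).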
4. Numerical Results

In this section, we present some numerical results obtained with the previously defined models. In particular, we consider a life insurance portfolio with $M = 10{,}000$ contracts, and we focus on approximating its value distribution at the future time $\tau = 1$ year. These policies can be of three different types: a unit-linked pure endowment contract with a minimum maturity guarantee $G = 100$, payable upon the survival of the policyholder at the maturity date $T$; a term life insurance policy, which pays the value of a reference asset in case of death before maturity $T$; and an immediate life annuity contract with continuous survival benefits equal to 10% of a reference asset value for the entire life of the insured person. We consider different policyholders, both males and females, with different ages $x$ at time $t = 0$, which is also assumed to be the inception time of each policy. These characteristics are reported in Table 1. We assume that the insurance benefits depend upon a reference asset with initial value $S_0$. In Tables A1 and A2 in Appendix A, we report the values of the parameters involved in Equations (1)–(3). In particular, concerning mortality, we calibrated the survival curve implied by Equation (3) to the Italian male and female mortality data for the year 2016, obtained from the Human Mortality Database, for each age $x \in \{55, \dots, 65\}$, and we assumed a longevity risk premium $\delta = 0$.

We conduct this numerical experiment by varying both the number of outer simulations, $n$, and the number of representative policies, $s$. In particular, we adopt a monthly Euler discretization in order to project $n \in \{1000, 5000, 10{,}000\}$ outer trajectories of each risk factor under the $P$-measure, and then, for each outer scenario, we further simulate $\bar n = 2$ inner trajectories under the risk-neutral probability measure. With this simulation set, we obtain a first rough estimate of $Y$, on which we construct the LSMC and GB2 models discussed in Sections 3.1 and 3.2, respectively. Note that, concerning the LSMC method, we exploit as basis functions Hermite polynomials of orders 1 and 2, denoted hereafter as LSMC_1 and LSMC_2, respectively.

To determine the number of representative contracts $s$, we start from the informal rule proposed by Loeppky et al. (2009), who provide reasons and evidence supporting a sample size of about 10 times the input dimension. In our case, the dimension of the covariates in the design matrix $X$ is 5 (including the binary dummy variables converted from the categorical variables), so we choose $s = 50$ as the initial number of representative contracts. However, we also investigate the models’ performance with $s = 75$ and $s = 100$.

Finally, the results are compared with a solid benchmark obtained through a nested simulations approach based on $10{,}000 \times 2500$ simulations. This allows us to assess the reliability of the proposed methodologies and to compare them in terms of computational demand.

Figure 1 shows the Quantile-Quantile (Q-Q) plots of the portfolio value at time $\tau = 1$ obtained by the nested simulations algorithm (assumed to be the theoretical one) and those predicted by the GB2 regression model and the LSMC models, based on $n = 10{,}000$ outer simulations and varying the number of representative contracts $s \in \{50, 75, 100\}$. We can see from Figure 1 that the proposed methodologies provide a good approximation, except for the right tail of the distribution.
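Such a Q-Q comparison can be drawn in a few lines of base R; in the sketch below (ours), the two vectors are synthetic placeholders for the benchmark and metamodel samples of the portfolio value.

```r
# Hedged sketch: Q-Q comparison as in Figure 1, with placeholder samples
# standing in for the portfolio values at the risk horizon.
set.seed(4)
v_bench <- rnorm(10000, mean = 2e6, sd = 1e5)  # nested-simulation benchmark
v_gb2   <- rnorm(10000, mean = 2e6, sd = 1e5)  # metamodel output

qqplot(v_bench, v_gb2,
       xlab = "Nested-simulation quantiles",
       ylab = "Metamodel quantiles",
       main = "Portfolio value at the risk horizon")
abline(0, 1, col = "red")  # 45-degree reference line
```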
In particular, concerning the GB2 regression model, we can see that the higher the number of representative contracts, the better the approximation.

Figure 1. Q-Q plots relative to the future value distribution of the insurance portfolio. The theoretical distribution is assumed to be the one obtained by nested simulations based on 10,000 × 2500 trajectories. The first row refers to the GB2 regression model based on 10,000 outer scenarios and varying the number of representative contracts, s ∈ {50, 75, 100}. The second and third rows refer to the LSMC method with Hermite polynomials of orders 1 and 2, based on 10,000 outer scenarios and varying the number of representative contracts, s ∈ {50, 75, 100}.

For a comprehensive analysis, we perform multiple runs of each proposed method; in particular, the following analysis is based on 50 runs. In Tables 2–4, we report the Mean Absolute Percentage Error (MAPE) of different quantities obtained by performing 50 runs of the proposed methodologies with a fixed number of outer scenarios (n = 10,000) and varying the number of representative contracts (s ∈ {50, 75, 100}).

Table 2. This table reports the MAPE of the estimates obtained by running the GB2 and LSMC methods 50 times with n = 10,000 and s = 50. The benchmark values are based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

| Method | 5th Perc. | 10th Perc. | Median | Mean   | 90th Perc. | 95th Perc. | 99th Perc. | 99.5th Perc. |
|--------|-----------|------------|--------|--------|------------|------------|------------|--------------|
| GB2    | 2.812%    | 2.180%     | 1.798% | 2.594% | 3.832%     | 4.016%     | 6.154%     | 4.375%       |
| LSMC_1 | 3.238%    | 3.000%     | 2.399% | 2.557% | 2.398%     | 2.174%     | 2.436%     | 2.722%       |
| LSMC_2 | 2.762%    | 2.754%     | 2.567% | 2.557% | 2.436%     | 2.114%     | 2.356%     | 2.841%       |

Table 3. This table reports the MAPE of the estimates obtained by running the GB2 and LSMC methods 50 times with n = 10,000 and s = 75. The benchmark values are based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

| Method | 5th Perc. | 10th Perc. | Median | Mean   | 90th Perc. | 95th Perc. | 99th Perc. | 99.5th Perc. |
|--------|-----------|------------|--------|--------|------------|------------|------------|--------------|
| GB2    | 1.971%    | 1.782%     | 0.806% | 0.542% | 3.605%     | 3.949%     | 6.094%     | 3.867%       |
| LSMC_1 | 2.500%    | 1.338%     | 1.530% | 1.392% | 1.251%     | 1.657%     | 0.941%     | 1.678%       |
| LSMC_2 | 1.828%    | 1.047%     | 1.756% | 1.392% | 1.307%     | 1.485%     | 1.842%     | 2.142%       |

Table 4. This table reports the MAPE of the estimates obtained by running the GB2 and LSMC methods 50 times with n = 10,000 and s = 100. The benchmark values are based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

| Method | 5th Perc. | 10th Perc. | Median | Mean   | 90th Perc. | 95th Perc. | 99th Perc. | 99.5th Perc. |
|--------|-----------|------------|--------|--------|------------|------------|------------|--------------|
| GB2    | 1.986%    | 1.745%     | 0.519% | 0.347% | 1.129%     | 1.313%     | 2.856%     | 1.944%       |
| LSMC_1 | 1.629%    | 1.504%     | 0.440% | 0.627% | 0.764%     | 0.824%     | 0.958%     | 2.561%       |
| LSMC_2 | 1.148%    | 1.145%     | 0.578% | 0.627% | 0.762%     | 0.986%     | 2.101%     | 2.334%       |

If we compare Tables 2–4, it is evident that increasing the number of representative contracts s leads to a better approximation of the mean and of the other considered measures of position. Moreover, it seems that the GB2 model, at least for a low number of representative contracts, is not able to adequately model the right tail of the distribution.
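Tables 2–4 report MAPE; Table 5 below also reports the Mean Percentage Error (MPE), which keeps the sign of the deviation and therefore reveals systematic over- or underestimation. A minimal sketch (ours) of both metrics over repeated runs:

```r
# Hedged sketch: MPE and MAPE of repeated estimates against the benchmark.
mpe  <- function(est, benchmark) 100 * mean((est - benchmark) / benchmark)
mape <- function(est, benchmark) 100 * mean(abs(est - benchmark) / benchmark)

# e.g., 50 runs' estimates of the portfolio mean (placeholder values)
est <- rnorm(50, mean = 1.98e6, sd = 2e4)
benchmark <- 2e6
c(MPE = mpe(est, benchmark), MAPE = mape(est, benchmark))
```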
In Table 5, we report the MPE and MAPE relative to the mean estimates obtained by running the GB2 and LSMC methods 50 times with different numbers of outer simulations, n, and representative contracts, s.

Table 5. This table reports the MPE and MAPE of the mean estimates obtained by running the GB2 and LSMC methods 50 times and varying the number of outer simulations (Outer) and that of representative contracts s. The benchmark value is based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

| Outer  | Method | MPE (s=50) | MAPE (s=50) | MPE (s=75) | MAPE (s=75) | MPE (s=100) | MAPE (s=100) |
|--------|--------|------------|-------------|------------|-------------|-------------|--------------|
| 1000   | GB2    | 3.612%     | 3.612%      | 0.163%     | 0.983%      | 0.240%      | 0.923%       |
| 1000   | LSMC_1 | 3.475%     | 3.475%      | 2.104%     | 2.221%      | 1.017%      | 1.364%       |
| 1000   | LSMC_2 | 3.475%     | 3.475%      | 2.104%     | 2.221%      | 1.017%      | 1.364%       |
| 5000   | GB2    | 2.981%     | 2.981%      | 0.715%     | 0.747%      | 0.301%      | 0.474%       |
| 5000   | LSMC_1 | 2.840%     | 2.840%      | 1.533%     | 1.533%      | 1.029%      | 1.092%       |
| 5000   | LSMC_2 | 2.840%     | 2.840%      | 1.533%     | 1.533%      | 1.029%      | 1.092%       |
| 10,000 | GB2    | 2.594%     | 2.594%      | 0.491%     | 0.542%      | 0.179%      | 0.347%       |
| 10,000 | LSMC_1 | 2.557%     | 2.557%      | 1.392%     | 1.392%      | 0.490%      | 0.627%       |
| 10,000 | LSMC_2 | 2.557%     | 2.557%      | 1.392%     | 1.392%      | 0.490%      | 0.627%       |

Looking at Table 5, we can see that, for a fixed number of outer scenarios and for each applied method, the accuracy of the mean estimates increases with the number of representative contracts s. Moreover, it is evident that, in most of the considered configurations, the GB2 model outperforms the LSMC methods. Furthermore, looking at the last column of Table 5 (s = 100), for instance, we can see that the higher the number of outer scenarios, the better the approximation. Finally, increasing the number of basis functions up to degree two in the LSMC method does not improve the accuracy of the mean estimates. This is probably due to the limited number of simulated outer trajectories (at most 10,000 paths), which is not sufficient to appreciate the improvement that is usually expected. In the left-hand side of Figure A1 in Appendix B, we report the corresponding box-plots, from which it is possible to see that, in each of the considered configurations, the LSMC method systematically underestimates the quantity of interest.

Concerning the estimate of the 99.5th percentile of the distribution, which is of interest for valuing solvency capital requirements, Table 6 reports the MPE and MAPE relative to 50 estimates obtained by varying both the number of simulations and the number of representative contracts.

Table 6. This table reports the MPE and MAPE of the 99.5th percentile estimates obtained by running the GB2 and LSMC methods 50 times and varying the number of outer simulations (Outer) and that of representative contracts s. The benchmark value is based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

| Outer  | Method | MPE (s=50) | MAPE (s=50) | MPE (s=75) | MAPE (s=75) | MPE (s=100) | MAPE (s=100) |
|--------|--------|------------|-------------|------------|-------------|-------------|--------------|
| 1000   | GB2    | 3.936%     | 6.570%      | 1.512%     | 5.453%      | 1.410%      | 4.494%       |
| 1000   | LSMC_1 | 2.664%     | 3.715%      | 6.308%     | 6.478%      | 2.961%      | 4.253%       |
| 1000   | LSMC_2 | 0.252%     | 6.487%      | 4.211%     | 7.150%      | 1.438%      | 5.517%       |
| 5000   | GB2    | 4.110%     | 4.723%      | 3.813%     | 4.018%      | 0.081%      | 2.653%       |
| 5000   | LSMC_1 | 2.908%     | 3.001%      | 4.708%     | 4.722%      | 1.659%      | 2.006%       |
| 5000   | LSMC_2 | 1.787%     | 3.484%      | 3.118%     | 4.017%      | 0.462%      | 3.110%       |
| 10,000 | GB2    | 4.157%     | 4.375%      | 3.737%     | 3.867%      | 0.421%      | 1.944%       |
| 10,000 | LSMC_1 | 2.643%     | 2.722%      | 1.560%     | 1.678%      | 2.522%      | 2.561%       |
| 10,000 | LSMC_2 | 2.259%     | 2.841%      | 0.131%     | 2.142%      | 1.007%      | 2.334%       |

From Table 6, we can detect a behaviour similar to the one previously discussed. Specifically, concerning the GB2 model, an increase in the number of representative contracts (for fixed n) leads to an improvement of the resulting estimates. On the contrary, for the LSMC method there is no clear pattern: increasing the number of representative contracts (for a fixed n) does not lead to a clear improvement in the results.
Moreover, increasing the number of basis functions, as well as the number of outer simulations, does not increase the accuracy of the estimates (see also the right side of Figure A1 in Appendix B). As in the case of the mean estimate, this could be due to the small number of outer simulations, so we may conclude that passing from 1000 to 10,000 trajectories is still not sufficient to exploit more basis functions. Once again, if we look at the case of n = 10,000 and s = 100, the GB2 model outperforms the LSMC approach.

Now, let us examine the speed of the proposed algorithms with respect to the benchmark. Table 7 shows the runtime of the GB2 and LSMC methods expressed as a percentage of the time required by the nested simulation method based on 10,000 outer and 2500 inner paths. Note that we conducted all experiments using R on a computer equipped with an Intel Core(TM) i7-1065G7 CPU 1.50 GHz processor, 12 GB of RAM and the Windows 10 Home operating system.

Table 7. Percentage of the runtime required by the GB2 and LSMC methods with respect to the nested simulations approach. The computational demand to construct the benchmark with a nested simulations approach based on 10,000 × 2500 scenarios applied to the entire portfolio is about 187,200 s.

| Method | n=1000, s=50 | n=1000, s=75 | n=1000, s=100 | n=5000, s=50 | n=5000, s=75 | n=5000, s=100 | n=10,000, s=50 | n=10,000, s=75 | n=10,000, s=100 |
|--------|--------------|--------------|---------------|--------------|--------------|---------------|----------------|----------------|-----------------|
| GB2    | 0.069%       | 0.078%       | 0.098%        | 0.337%       | 0.380%       | 0.501%        | 0.660%         | 0.832%         | 1.021%          |
| LSMC_1 | 0.005%       | 0.006%       | 0.007%        | 0.012%       | 0.018%       | 0.019%        | 0.036%         | 0.045%         | 0.047%          |
| LSMC_2 | 0.005%       | 0.006%       | 0.007%        | 0.013%       | 0.019%       | 0.020%        | 0.037%         | 0.046%         | 0.047%          |

As we can see from Table 7, the proposed methodologies drastically reduce the computational time required by a nested simulations approach. Moreover, as expected, the LSMC method presented in Section 3.1 outperforms the GB2 model in terms of time in each of the proposed configurations. However, this is due to the existence of a closed-form formula for the estimation of the involved parameters: as stated in Section 3.2, the estimation procedure for the GB2 model is based on a multistage optimization algorithm, due to the complexity of the likelihood functions, which may have multiple local maxima. Regardless, compared with the simulations-within-simulations method, the GB2 model proves to be an accurate and efficient alternative.

Full LSMC

To provide an exhaustive analysis, we also consider a straightforward application of the LSMC method; that is, we apply the LSMC method to each contract composing the insurance portfolio, without considering any set of representative policies. The results are then compared with those shown in the previous section, both in terms of accuracy and of computational demand. As an example, we construct the LSMC model by exploiting as basis functions Hermite polynomials of order 1, based on 10,000 × 2 simulations (LSMC_Full). Table 8 reports the MPE and MAPE relative to the 5th percentile, the mean, and the 99.5th percentile estimates obtained by performing 50 runs of the proposed methods. Further, we report the results relative to the GB2 model (GB2) and the LSMC method with Hermite polynomials of order 1 (LSMC_1) and order 2 (LSMC_2), based on 10,000 × 2 simulations and s = 100 representative policies.
Table 8. This table reports the MPE and MAPE relative to the 5th percentile, the mean, and the 99.5th percentile estimates obtained by applying the different methodologies. GB2 stands for the GB2 regression model based on n = 10,000 outer scenarios and s = 100 representative policies; LSMC_1 and LSMC_2 refer to the LSMC method based on n = 10,000 outer scenarios and s = 100 representative policies with Hermite polynomials of order 1 and order 2, respectively; LSMC_Full refers to the LSMC method based on n = 10,000 outer scenarios and constructed on each contract in the insurance portfolio. The results are compared with the corresponding benchmark value based on nested simulations with 10,000 × 2500 trajectories applied to the entire portfolio.

| Method    | 5th Perc. MPE | 5th Perc. MAPE | Mean MPE | Mean MAPE | 99.5th Perc. MPE | 99.5th Perc. MAPE |
|-----------|---------------|----------------|----------|-----------|------------------|-------------------|
| GB2       | 1.986%        | 1.986%         | 0.179%   | 0.347%    | 0.421%           | 1.944%            |
| LSMC_1    | 1.472%        | 1.629%         | 0.490%   | 0.627%    | 2.522%           | 2.561%            |
| LSMC_2    | 0.742%        | 1.148%         | 0.490%   | 0.627%    | 1.007%           | 2.334%            |
| LSMC_Full | 0.501%        | 1.032%         | 0.084%   | 0.461%    | 0.420%           | 1.070%            |

As shown in Table 8, the errors relative to the LSMC_Full approach are lower than those of the other proposed methods, since its estimates are based on the entire insurance portfolio; that is, this approach does not suffer from any uncertainty related to policies missing from its estimation procedure. Figure A2 in Appendix B reports the box-plots on which the quantities in Table 8 are based.

Finally, we compare these methods in terms of time. In Table 9, we report the computational time required by the algorithms. We can see that the naive application of the LSMC approach is computationally more expensive than the GB2 and LSMC models based on a set of representative policies.

Table 9. Runtime, in seconds, of the GB2 model and LSMC methods based on 10,000 × 2 simulations and s = 100 representative contracts (GB2, LSMC_1, LSMC_2). LSMC_Full refers to the LSMC method applied to each contract in the insurance portfolio.

| Method    | Time (s) |
|-----------|----------|
| GB2       | 1911.445 |
| LSMC_1    | 87.824   |
| LSMC_2    | 88.290   |
| LSMC_Full | 7847.960 |

5. Conclusions

In this paper, we addressed the problem of approximating the value of a life insurance portfolio at a future time by proposing two different methodologies able to avoid the time-consuming nested simulations approach. The first approach can be thought of as an extension of the well-known LSMC method, while the second is based on the GB2 distribution, which is widely used to approximate the fair value of portfolios of life insurance policies. To validate the proposal, we considered a solid benchmark obtained by nested simulations, and we compared the two proposed methodologies both in terms of accuracy and of efficiency. The analysis was carried out by considering an ever increasing number of simulations and representative policies, from which it turned out that, in general, both methodologies provide increasingly accurate results. Moreover, the LSMC method proved to be faster in computational terms, but also less accurate than the GB2 model. Furthermore, the proposed methodologies were compared with a straightforward application of the LSMC method (i.e., without considering any subset of representative policies), which turned out to be more accurate but computationally more expensive. Extensive numerical results have shown that the proposed methods represent viable alternatives to the full nested Monte Carlo model.
Therefore, the proposed metamodeling approach may help insurance and reinsurance undertakings to reduce the computational budget needed, for instance, in the context of evaluating solvency capital requirements. In this regard, it can be used to evaluate the future cash flows (inflows and outflows) generated by the entire portfolio by considering at first only a subset of policies, and then extending the results to the remaining ones. Indeed, this represents the main issue in deriving the full loss distribution from which the Value-at-Risk measure should be obtained, as prescribed by the European Solvency II directive.

Author Contributions: Both authors contributed equally to this manuscript. Both authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. Parameter Values

Table A1 shows the parameter values assumed for the dynamics of the reference asset and interest rates defined in Equations (1) and (2).

Table A1. Parameters of the reference asset value process, S, and interest rate stochastic process, r.

| S_0 | σ_S  | λ    | r_0  | a    | θ    | σ_r  | γ    | ρ    |
|-----|------|------|------|------|------|------|------|------|
| 100 | 0.20 | 0.00 | 0.04 | 0.10 | 0.02 | 0.02 | 0.00 | 0.00 |

Table A2 shows the estimated parameters of the mortality model defined in Equation (3), obtained by fitting the corresponding survival curve to the one implied by the Italian male and female mortality data for the year 2016, obtained from the Human Mortality Database, for each age x ∈ {55, ..., 65}.

Table A2. Estimated parameters of the stochastic mortality model for Italian males (left) and females (right) aged x ∈ {55, ..., 65} in 2016.

| Age | â (Male) | b̂ (Male) | σ̂_μ (Male) | â (Female) | b̂ (Female) | σ̂_μ (Female) |
|-----|----------|-----------|-------------|------------|-------------|---------------|
| 55  | 0.00040  | 0.0881    | 0.00157     | 0.00010    | 0.10017     | 0.00100       |
| 56  | 0.00700  | 0.0705    | 0.00262     | 0.00001    | 0.11110     | 0.00100       |
| 57  | 0.00001  | 0.1051    | 0.00100     | 0.00001    | 0.11060     | 0.00100       |
| 58  | 0.00001  | 0.1045    | 0.00390     | 0.00009    | 0.10740     | 0.00850       |
| 59  | 0.00040  | 0.0832    | 0.00100     | 0.00001    | 0.11570     | 0.00100       |
| 60  | 0.00060  | 0.0743    | 0.00100     | 0.00042    | 0.08362     | 0.00669       |
| 61  | 0.00030  | 0.0907    | 0.00100     | 0.00044    | 0.08505     | 0.00100       |
| 62  | 0.00010  | 0.1033    | 0.00710     | 0.00001    | 0.11990     | 0.00100       |
| 63  | 0.00012  | 0.1063    | 0.00750     | 0.00040    | 0.09704     | 0.00182       |
| 64  | 0.00008  | 0.1112    | 0.00810     | 0.00039    | 0.09860     | 0.00376       |
| 65  | 0.00020  | 0.1075    | 0.00123     | 0.00049    | 0.09558     | 0.00720       |

Appendix B. Further Results

Figure A1 reports the boxplots relative to the mean (left) and the 99.5th percentile (right) estimates obtained by running the GB2 and LSMC methods 50 times, varying both the number of outer scenarios, n, and that of the representative policies, s. In this regard, we can see that the variability of the estimates decreases as the number of outer scenarios and the number of representative contracts increase.

Figure A1. Boxplots relative to the mean (left) and the 99.5th percentile (right) estimates obtained by running the GB2 and LSMC methods 50 times and varying the number of outer simulations n and that of representative contracts s. The red line refers to the benchmark value based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

Figure A2 compares the straightforward application of the LSMC approach with the proposed methodologies, providing the boxplots relative to the mean and the 99.5th percentile estimates.
Figure A2. Boxplots relative to the mean and the 99.5th percentile estimates obtained by running the proposed methodologies 50 times. GB2 stands for the GB2 regression model based on 10,000 outer scenarios and s = 100 representative policies; LSMC_1 refers to the LSMC method based on 10,000 outer scenarios and s = 100 representative policies with Hermite polynomials of order 1; LSMC_2 refers to the LSMC method based on 10,000 outer scenarios and s = 100 representative policies with Hermite polynomials of order 2; LSMC_Full refers to the LSMC method based on 10,000 outer scenarios and constructed on each contract in the insurance portfolio. The red line refers to the benchmark value based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

References

Allemang, Dean, and Jim Hendler. 2011. Semantic Web for the Working Ontologist: Effective Modeling in RDFS and OWL, 2nd ed. San Francisco: Morgan Kaufmann Publishers Inc.

Barton, Russell R. 2015. Tutorial: Simulation metamodeling. Paper presented at the 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, December 6–9.

Bauer, Daniel, Daniela Bergmann, and Andreas Reuss. 2010. Solvency II and Nested Simulations – A Least-Squares Monte Carlo Approach. Working Paper. Georgia State University and Ulm University. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.466.1983 (accessed on 17 August 2021).

Beirlant, Jan, Yuri Goegebeur, Johan Segers, and Jozef L. Teugels. 2004. Statistics of Extremes: Theory and Applications. Chichester: Wiley.

Biffis, Enrico. 2005. Affine processes for dynamic mortality and actuarial valuations. Insurance: Mathematics and Economics 37: 443–68.

Boyer, M. Martin, and Lars Stentoft. 2013. If we can simulate it, we can insure it: An application to longevity risk management. Insurance: Mathematics and Economics 52: 35–45.

Carrière, Jacques F. 1996. Valuation of the early-exercise price for options using simulations and nonparametric regression. Insurance: Mathematics and Economics 19: 19–30.

Cathcart, Mark, and Steven Morrison. 2009. Variable annuity economic capital: The least-squares Monte Carlo approach. Life & Pensions 2: 44–48.

European Parliament, and European Council. 2009. Directive 2009/138/EC on the Taking-Up and Pursuit of the Business of Insurance and Reinsurance (Solvency II). Brussels: European Council.

Floryszczak, Anthony, Olivier Le Courtois, and Mohamed Majri. 2016. Inside the Solvency II black box: Net Asset Values and Solvency Capital Requirements with a least-squares Monte-Carlo approach. Insurance: Mathematics and Economics 71: 15–26.

Frees, Edward W., and Emiliano A. Valdez. 2008. Hierarchical Insurance Claims Modeling. Journal of the American Statistical Association 103: 1457–69.

Frees, Edward W., Gee Lee, and Lu Yang. 2016. Multivariate Frequency-Severity Regression Models in Insurance. Risks 4: 4.

Fung, Man Chung, Katja Ignatieva, and Michael Sherris. 2014. Systematic mortality risk: An analysis of guaranteed lifetime withdrawal benefits in variable annuities. Insurance: Mathematics and Economics 58: 103–15.
Gan, Guojun, and Emiliano A. Valdez. 2018. Regression modeling for the valuation of large variable annuity portfolios. North American Actuarial Journal 22: 40–54.

Gan, Guojun, and X. Sheldon Lin. 2015. Valuation of large variable annuity portfolios under nested simulation: A functional data approach. Insurance: Mathematics and Economics 62: 138–50.

Gan, Guojun. 2013. Application of data clustering and machine learning in variable annuity valuation. Insurance: Mathematics and Economics 53: 795–801.

Gan, Guojun. 2015. Application of metamodeling to the valuation of large variable annuity portfolios. Paper presented at the 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, December 6–9.

Krah, Anne-Sophie, Zoran Nikolić, and Ralf Korn. 2018. A least-squares Monte Carlo framework in proxy modeling of life insurance companies. Risks 6: 2–26.

Loeppky, Jason L., Jerome Sacks, and William J. Welch. 2009. Choosing the sample size of a computer experiment: A practical guide. Technometrics 51: 366–76.

Longstaff, Francis A., and Eduardo S. Schwartz. 2001. Valuing American options by simulations: A simple least-squares approach. The Review of Financial Studies 14: 113–47.

McDonald, James B. 1984. Some generalized functions for the size distribution of income. Econometrica 52: 647–63.

Minasny, Budiman, and Alex B. McBratney. 2006. A conditioned Latin hypercube method for sampling in the presence of ancillary information. Computers & Geosciences 32: 1378–88.

Sun, Jiafeng, Edward W. Frees, and Marjorie A. Rosenberg. 2008. Heavy-tailed longitudinal data modeling using copulas. Insurance: Mathematics and Economics 42: 817–30.

Tilley, James A. 1993. Valuing American options in a path simulation model. Transactions of Society of Actuaries 45: 499–520.

Modeling the Future Value Distribution of a Life Insurance Portfolio

Risks , Volume 9 (10) – Oct 2, 2021

Loading next page...
 
/lp/multidisciplinary-digital-publishing-institute/modeling-the-future-value-distribution-of-a-life-insurance-portfolio-RsH6oxeqzw

References (29)

Publisher
Multidisciplinary Digital Publishing Institute
Copyright
© 1996-2021 MDPI (Basel, Switzerland) unless otherwise stated Disclaimer The statements, opinions and data contained in the journals are solely those of the individual authors and contributors and not of the publisher and the editor(s). MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. Terms and Conditions Privacy Policy
ISSN
2227-9091
DOI
10.3390/risks9100177
Publisher site
See Article on Publisher Site

Abstract

risks Article Modeling the Future Value Distribution of a Life Insurance Portfolio 1, ,† 2,3,† Massimo Costabile * and Fabio Viviano Department of Economics, Statistics and Finance, University of Calabria, Ponte Bucci Cubo 0 C, 87036 Rende, CS, Italy Department of Economics and Statistics, University of Udine, Via Tomadini 30/A, 33100 Udine, Italy; viviano.fabio@spes.uniud.it Department of Economics, Business, Mathematics and Statistics “B. de Finetti”, University of Trieste, Piazzale Europa 1, 34127 Trieste, Italy * Correspondence: massimo.costabile@unical.it † These authors contributed equally to this work. Abstract: This paper addresses the problem of approximating the future value distribution of a large and heterogeneous life insurance portfolio which would play a relevant role, for instance, for solvency capital requirement valuations. Based on a metamodel, we first select a subset of representative policies in the portfolio. Then, by using Monte Carlo simulations, we obtain a rough estimate of the policies’ values at the chosen future date and finally we approximate the distribution of a single policy and of the entire portfolio by means of two different approaches, the ordinary least-squares method and a regression method based on the class of generalized beta distribution of the second kind. Extensive numerical experiments are provided to assess the performance of the proposed models. Keywords: GB2; LSMC; metamodel; regression models; Solvency II JEL Classification: G22 Citation: Costabile, Massimo, and Fabio Viviano. 2021. Modelling the Future Value Distribution of a Life Insurance Portfolio. Risks 9: 177. 1. Introduction https://doi.org/10.3390/risks9100177 In many relevant situations, life insurers face the necessity to determine the distri- bution of the value of their portfolio of policies at a certain future date. This happens, Academic Editor: Mercedes Ayuso for example, when regulators need to maintain solvency capital requirements in order to continue to conduct business, as stated in the Solvency II directive or in the Swiss Solvency Received: 19 August 2021 Test. In particular, Article 101(3) of the European directive requires that the Solvency Accepted: 27 September 2021 Capital Requirement “shall correspond to the Value-at-Risk of the basic own funds of an Published: 2 October 2021 insurance or reinsurance undertaking subject to a confidence level of 99.5% over a one-year period” (see European Parliament and European Council 2009). As a consequence, insur- Publisher’s Note: MDPI stays neutral ers are obliged to assess the value of assets and liabilities at a future date, the so-called risk with regard to jurisdictional claims in horizon, in order to derive their full loss distributions. To achieve this, the relevant risk published maps and institutional affil- factors must be projected at the risk horizon and then, conditional on the realized values, iations. a market consistent valuation of the insurer ’s assets and liabilities is required. This has led insurance and reinsurance companies to face a computationally intensive problem. Indeed, due to the complex structure of the insurer ’s liabilities, in general, closed form formulas are not available and a straightforward approach, common among insurers, is to Copyright: © 2021 by the authors. obtain an estimate through nested Monte Carlo simulations. Unfortunately, this approach Licensee MDPI, Basel, Switzerland. 
is extremely time consuming and becomes readily unmanageable from a computational point of view. In this regard, one possible alternative method proposed in the literature to reduce the computational effort while preserving the accuracy of the desired estimates is the Least-Squares Monte Carlo (LSMC) method, first introduced by Carrière (1996), Tilley (1993), and Longstaff and Schwartz (2001) in the context of American-type option pricing. The application of the LSMC method to the valuation of solvency capital requirements in the insurance business was proposed in Cathcart and Morrison (2009) and Bauer et al. (2010). Moreover, Floryszczak et al. (2016) and Krah et al. (2018) illustrate practical implementations of the LSMC method in this particular context.

The above-mentioned papers share the common feature of evaluating capital requirements for a single policy. In the case of an entire portfolio of policies, the nested simulation approach is even more difficult to implement due to the huge computational effort needed. For instance, assuming 10,000 outer trajectories simulated from the current time to the risk horizon for each one of the $v$ risk factors, and then 2500 inner paths for each outer, with a monthly discretization for 20 years, and considering an insurance portfolio composed of 10,000 contracts, the total number of cash-flow projections needed would be $10{,}000 \times v \times 2500 \times 12 \times 20 \times 10{,}000 = v \times 6 \times 10^{13}$, which is very hard to manage.

In order to keep the computational complexity of the evaluation problem at a reasonable level, we propose a metamodeling approach. Metamodeling, introduced in system engineering (see Barton 2015), can be defined as "the practice of using a model to describe another model as an instance" (see Allemang and Hendler 2011). This approach has also been widely used in the actuarial literature to estimate the price and Greeks of large portfolios of life insurance policies. For instance, Gan (2013) developed a metamodel based on data clustering and machine learning to price large portfolios of variable annuities, while Gan and Lin (2015) tackled a similar problem by developing a functional data approach. In addition, Gan (2015) compares the data clustering approach and Latin hypercube sampling to select representative variable annuities. Finally, Gan and Valdez (2018) propose a metamodel to estimate partial Greeks of variable annuities with dependence.

In the present paper, the metamodel we propose to approximate the future value distribution of a life insurance portfolio is constructed in different steps:

1. Select a subset of representative policies by means of conditional Latin hypercube sampling;
2. Project the risk factors from the evaluation date to the risk horizon by means of outer simulations;
3. Compute a rough estimate of each representative policy by means of a very limited (say two) number of inner simulations;
4. Create a regression model to approximate the distribution of the value of representative policies;
5. Use the regression model to estimate the future value distribution of the entire portfolio.

We propose two different approaches to develop the regression model in steps 4 and 5.
The first approach relies upon the well-established ordinary least-squares (OLS) method for approximating the conditional distribution of each representative policy at the risk horizon; a second OLS regression is then applied to estimate the future value distribution of the entire portfolio. Roughly speaking, we may say that the LSMC method is applied to estimate the distribution of the value of each representative policy at the risk horizon, and then this information is extended to the entire portfolio by means of a simple OLS regression. We call this approach the LSMC method. The second approach exploits the class of generalized beta of the second kind (GB2) distributions to model the conditional distribution of each representative policy value at the risk horizon and also to estimate the future value distribution of the entire portfolio. We underline that the GB2 regression model was used in Gan and Valdez (2018) for modeling the fair current market values of guarantees embedded in a large variable annuity portfolio starting from a set of representative policies. Extensive numerical experiments have been conducted in order to assess the performance of the proposed models.

The remainder of the paper is structured as follows. Section 2 provides the evaluation framework and Section 3 introduces the metamodeling approach. Section 4 illustrates some numerical results, and finally, in Section 5, conclusions are drawn.

2. The Evaluation Framework

We consider a life insurance portfolio with M contracts underwritten by different policyholders (males and females) of different ages at the inception date t = 0. We take into account different types of life insurance policies, which differ from each other in terms of maturity, policyholder age, and sex. In particular, we consider unit-linked products, term life insurance and immediate life annuities. We assume that the unit-linked product pays, upon reaching maturity, and assuming the survival of the insured, the maximum between the minimum guaranteed benefit and the value of a specific reference asset. The immediate life annuity is assumed to pay 10% of the level of a given reference asset continuously whilst the insured is alive; finally, the term insurance contract pays the total value of the asset upon the death of the policyholder before maturity. All the possible policy configurations are summarized in Table 1.

Table 1. This table shows the parameters used to generate the life insurance portfolio.

Feature             Value
Policyholder age    {55, ..., 65}
Sex                 {Male, Female}
Maturity            {10, 15, 20, 25, 30}
Product type        {Unit-linked, Term Insurance, Life Annuity}

Since our task is to approximate the portfolio value distribution at the risk horizon starting from a set of representative policies, we use the Conditional Latin Hypercube Sampling (CLHS) method (see Minasny and McBratney 2006). Indeed, this approach has already been applied to select subsets of representative policies, providing reliable results, e.g., see Gan and Valdez (2018). Therefore, in order to select a set of s representative contracts, we apply the CLHS method to the design matrix X, which contains all the features characterizing each specific policy, i.e., type, maturity, sex and age of the policyholder. Note that the categorical variables are treated as dummy variables; a minimal sketch of this selection step is given below.
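The paper relies on the CLHS algorithm of Minasny and McBratney (2006), which is available, for instance, in the R package clhs. The following Python sketch is only a crude stand-in: instead of the annealing-based CLHS objective, it runs a random search for the size-s subset whose standardized feature means best match those of the full portfolio. The toy portfolio, the search budget, and the coverage criterion are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical portfolio drawn from the feature grid of Table 1.
portfolio = pd.DataFrame({
    "age": rng.integers(55, 66, size=10_000),
    "sex": rng.choice(["Male", "Female"], size=10_000),
    "maturity": rng.choice([10, 15, 20, 25, 30], size=10_000),
    "type": rng.choice(["Unit-linked", "Term Insurance", "Life Annuity"],
                       size=10_000),
})
# Dummy-code the categorical variables, as in the design matrix X.
X = pd.get_dummies(portfolio, drop_first=True).to_numpy(dtype=float)
mu, sd = X.mean(axis=0), X.std(axis=0)

def coverage_gap(idx):
    """Crude coverage criterion: standardized distance between the subset's
    column means and those of the full portfolio (not the CLHS objective)."""
    return np.abs((X[idx].mean(axis=0) - mu) / sd).sum()

best_idx, best_gap = None, np.inf
for _ in range(2_000):                      # random search instead of annealing
    idx = rng.choice(len(X), size=50, replace=False)
    gap = coverage_gap(idx)
    if gap < best_gap:
        best_idx, best_gap = idx, gap

representatives = portfolio.iloc[best_idx]  # s = 50 representative contracts
```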
In order to project the cash-flows generated by the contracts over time, we need to simulate the possible evolution of the risk factors. In this regard, we consider a computational framework where mortality, the interest rate and the reference asset are taken into account. Although insurance companies are exposed to both systematic and non-systematic mortality risk, in our setting we consider only the first component, for computational reasons, given the large dimension of the portfolio that will be considered.

Let $(\Omega, \mathcal{F}, P)$ be a filtered probability space large enough to support a process $X$ in $\mathbb{R}^k$, representing the evolution of financial variables, and a process $Y$ in $\mathbb{R}^d$, representing the evolution of mortality. The filtration $\mathbb{F} = (\mathcal{F}_t)_{t \geq 0}$ represents the flow of information available as time passes; this includes knowledge of the evolution of all state variables up to each time $t$ and of whether the policyholder has died by then. Specifically, we define $\mathcal{F}_t$ as the $\sigma$-algebra generated by $\mathcal{G}_t \cup \mathcal{H}_t$, where

$$\mathcal{G}_t = \sigma(Z_s : 0 \leq s \leq t), \qquad \mathcal{H}_t = \sigma\big(I_{\{V \leq s\}} : 0 \leq s \leq t\big),$$

and where $Z = (X, Y)$ is the joint state variables process in $\mathbb{R}^{k+d}$. Thus, we have $\mathbb{F} = \mathbb{G} \vee \mathbb{H}$, with $\mathbb{G} = \mathbb{G}^X \vee \mathbb{G}^Y$ and with $\mathbb{H} = (\mathcal{H}_t)_{t \geq 0}$ being the smallest filtration with respect to which $V$, interpreted as the remaining lifetime of an insured, is a stopping time. For more detail on modeling mortality under the intensity-based framework, see Biffis (2005).

Under the physical probability measure $P$, we assume that the dynamics of the financial risk factors (reference asset value $S$ and interest rate $r$) are described by the following stochastic differential equations:

$$dS(t) = S(t)\big(r(t) + \lambda\big)\,dt + S(t)\,\sigma_S\,dW^{1,P}(t), \qquad S(0) = S_0, \tag{1}$$

where $\lambda$ is the risk premium, $\sigma_S$ is a positive constant, $W^{1,P}(t)$ is a standard Wiener process, and $r(t)$ is the risk-free interest rate, which is assumed to follow the dynamics

$$dr(t) = a\big(\theta - r(t)\big)\,dt + \sigma_r\,dW^{2,P}(t), \qquad r(0) = r_0. \tag{2}$$

Here, $W^{2,P}(t)$ is a standard Wiener process, and the coefficients $a$, $\theta$, $\sigma_r$ are positive constants representing the speed of mean reversion, the long-term interest rate, and the interest rate volatility, respectively. Further, we assume that the two Wiener processes, $W^{1,P}(t)$ and $W^{2,P}(t)$, are correlated with correlation coefficient $\rho$.

In the absence of arbitrage opportunities, an equivalent martingale measure $Q$ exists, under which all financial security prices are martingales after deflation by the money market account. We refer the readers to Biffis (2005) for more detail. Under the risk-neutral probability measure $Q$, the dynamics in Equations (1) and (2) can be re-written as

$$dS(t) = S(t)\,r(t)\,dt + S(t)\,\sigma_S\,dW^{1,Q}(t),$$

and

$$dr(t) = a\big(\theta - \gamma - r(t)\big)\,dt + \sigma_r\,dW^{2,Q}(t),$$

where $\gamma$ is the market price of risk. Note that $W^{1,Q}(t)$ and $W^{2,Q}(t)$ are two correlated standard Wiener processes with correlation coefficient $\rho$ under $Q$.

Concerning mortality, following Fung et al. (2014), we assume that the force of mortality $\mu_{x+t}(t)$ under the physical probability measure $P$, for an individual aged $x$ at time $t = 0$, evolves according to the following one-factor, non-mean-reverting and time-homogeneous affine process:

$$d\mu_{x+t}(t) = \big[a + b\,\mu_{x+t}(t)\big]\,dt + \sigma_\mu \sqrt{\mu_{x+t}(t)}\,dW^{3,P}(t), \qquad \mu_x(0) > 0, \tag{3}$$

where $a \neq 0$, $b > 0$, and $\sigma_\mu > 0$ represents the volatility of the mortality intensity, and $W^{3,P}(t)$ is a standard Wiener process which is assumed to be independent of $W^{1,P}(t)$ and $W^{2,P}(t)$.
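For concreteness, the following sketch simulates Equations (1)-(3) with a monthly Euler scheme under the P-measure, using the parameter values of Table A1 in Appendix A. The initial mortality intensity and the mortality parameters are illustrative stand-ins for the age- and sex-specific calibrated values of Table A2; this is a minimal sketch, not the authors' R code.

```python
import numpy as np

rng = np.random.default_rng(1)

n, dt, n_steps = 10_000, 1.0 / 12, 12           # monthly steps up to tau = 1 year

# Table A1 parameters.
S0, sigma_S, lam = 100.0, 0.20, 0.0
r0, a, theta, sigma_r, rho = 0.04, 0.10, 0.02, 0.02, 0.0
# Illustrative mortality parameters (the paper calibrates a, b and sigma_mu
# per age and sex, Table A2; mu0 is a hypothetical initial intensity).
mu0, a_mu, b_mu, sigma_mu = 0.005, 0.0004, 0.0881, 0.00157

S = np.full(n, S0)
r = np.full(n, r0)
mu = np.full(n, mu0)

for _ in range(n_steps):
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)  # corr(W1, W2) = rho
    z3 = rng.standard_normal(n)                  # W3 independent of W1 and W2
    S = S * (1.0 + (r + lam) * dt + sigma_S * np.sqrt(dt) * z1)     # Equation (1)
    r = r + a * (theta - r) * dt + sigma_r * np.sqrt(dt) * z2       # Equation (2)
    mu = np.maximum(                                                # Equation (3)
        mu + (a_mu + b_mu * mu) * dt
        + sigma_mu * np.sqrt(np.maximum(mu, 0.0) * dt) * z3, 0.0)   # truncate at 0

outer_scenarios = np.column_stack([S, r, mu])    # one row per outer scenario
```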
As pointed out by Fung et al. (2014), the important advantages of the mortality model defined in Equation (3) are its tractability, since analytical expressions are available to evaluate survival probabilities, and its simplicity, since the model dynamics can be easily simulated. Furthermore, this model guarantees that, under specific conditions (namely, if $a \geq \sigma_\mu^2 / 2$), the force of mortality is strictly positive. The dynamics of Equation (3) under $Q$ can be written as

$$d\mu_{x+t}(t) = \big[a + (b - \delta\sigma_\mu)\,\mu_{x+t}(t)\big]\,dt + \sigma_\mu \sqrt{\mu_{x+t}(t)}\,dW^{3,Q}(t), \qquad \mu_x(0) > 0,$$

where $W^{3,Q}(t)$ is a standard Wiener process under the risk-neutral measure and $\delta$ is the market price of the systematic mortality risk.

Note that the parameters of the stochastic mortality model are estimated by calibrating the implied survival curve to the one obtained from the Italian population data of year 2016 (assumed to be $t = 0$) collected from the Human Mortality Database (see Fung et al. 2014). The calibration procedure was conducted for all policyholder ages and genders reported in Table 1. Finally, it is worth noting that, owing to the flexibility of the methodology that will be proposed, different and/or more complex dynamics may be assumed to describe the evolution of the risk factors.

3. Problem and Methodology

Under the framework defined in Section 2, we need to evaluate the streams of payments embedded in each policy inside the insurance portfolio. Before discussing the methodology, let us recall some results provided by Biffis (2005) related to the time-$t$ fair values of the most common payoffs embedded in typical life insurance products, i.e., survival and death benefits.

Proposition 1 (Survival benefit). Let $C$ be a bounded $\mathbb{G}$-adapted process. Then, the time-$t$ fair value $SB_t(C_T; T)$ of the time-$T$ survival benefit of amount $C_T$, with $0 \leq t \leq T$, is given by:

$$SB_t(C_T; T) = E\Big[e^{-\int_t^T r_s\,ds}\, I_{\{V > T\}}\, C_T \,\Big|\, \mathcal{F}_t\Big] = I_{\{V > t\}}\, E\Big[e^{-\int_t^T (r_s + \mu_s)\,ds}\, C_T \,\Big|\, \mathcal{G}_t\Big].$$

In particular, if $C$ is $\mathbb{G}^X$-adapted, the following holds:

$$SB_t(C_T; T) = I_{\{V > t\}}\, E\Big[e^{-\int_t^T r_s\,ds}\, C_T \,\Big|\, \mathcal{G}_t^X\Big]\, E\Big[e^{-\int_t^T \mu_s\,ds} \,\Big|\, \mathcal{G}_t^Y\Big].$$

Proposition 2 (Death benefit). Let $C$ be a bounded $\mathbb{G}$-predictable process. Then, the time-$t$ fair value $DB_t(C_V; T)$ of the death benefit of amount $C_V$, payable in case the insured dies before time $T$, with $0 \leq t \leq T$, is given by

$$DB_t(C_V; T) = E\Big[e^{-\int_t^V r_s\,ds}\, C_V\, I_{\{t < V \leq T\}} \,\Big|\, \mathcal{F}_t\Big] = I_{\{V > t\}} \int_t^T E\Big[e^{-\int_t^u (r_s + \mu_s)\,ds}\, \mu_u\, C_u \,\Big|\, \mathcal{G}_t\Big]\, du.$$

In particular, if $C$ is $\mathbb{G}^X$-predictable, the following holds:

$$DB_t(C_V; T) = I_{\{V > t\}} \int_t^T E\Big[e^{-\int_t^u r_s\,ds}\, C_u \,\Big|\, \mathcal{G}_t^X\Big]\, E\Big[e^{-\int_t^u \mu_s\,ds}\, \mu_u \,\Big|\, \mathcal{G}_t^Y\Big]\, du.$$

We refer the readers to Biffis (2005) for the corresponding proofs and further details. As we can see from Propositions 1 and 2, evaluating life insurance policies at future times implies solving conditional expectations for which analytical formulas often do not exist. Due to this, simulation-based approaches are extensively used (see Boyer and Stentoft 2013), among which we mention the nested simulations method, where a high number of inner simulations branch out from another large set of outer scenarios. However, this simulations-within-simulations approach is computationally challenging, especially when several policies are considered, as in our case. Therefore, in the following, we discuss two methodologies to evaluate the streams of payments embedded in each policy inside the insurance portfolio.
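Before turning to the two metamodels, it may help to see how a conditional expectation like the one in Proposition 1 is typically attacked by plain Monte Carlo. The sketch below estimates a time-0 survival benefit $SB_0(C_T; T) = E^Q\big[e^{-\int_0^T (r_s + \mu_s)\,ds}\, C_T\big]$ for a constant benefit $C_T = 100$. The mortality parameters are again illustrative, and since $\gamma = 0$ (Table A1) and $\delta = 0$ (as assumed in Section 4), the Q-dynamics of $r$ and $\mu$ coincide with their P-dynamics in this toy setting.

```python
import numpy as np

rng = np.random.default_rng(2)

n, T, dt = 100_000, 10.0, 1.0 / 12
n_steps = int(T / dt)

r0, a, theta, sigma_r = 0.04, 0.10, 0.02, 0.02              # Table A1
mu0, a_mu, b_mu, sigma_mu = 0.005, 0.0004, 0.0881, 0.00157   # illustrative
C_T = 100.0                                                  # constant benefit

r = np.full(n, r0)
mu = np.full(n, mu0)
integral = np.zeros(n)            # accumulates int_0^T (r_s + mu_s) ds

for _ in range(n_steps):
    integral += (r + mu) * dt     # left-point quadrature of the exponent
    r = r + a * (theta - r) * dt + sigma_r * np.sqrt(dt) * rng.standard_normal(n)
    mu = np.maximum(mu + (a_mu + b_mu * mu) * dt
                    + sigma_mu * np.sqrt(np.maximum(mu, 0.0) * dt)
                    * rng.standard_normal(n), 0.0)

SB0 = np.mean(np.exp(-integral) * C_T)
print(f"Monte Carlo estimate of SB_0(C_T; T): {SB0:.4f}")
```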
For this purpose, we project the relevant risk factors affecting the policy (i.e., $S$, $r$, and $\mu$) under the physical probability measure from time $t = 0$ up to the risk horizon $\tau$, and then, for each outer scenario, another set of inner trajectories is simulated under the risk-neutral measure. In order to avoid the huge computational cost of a pure nested model, as in the LSMC approach, we simulate $n$ possible outer trajectories of the risk factors and then, for each of them, we further simulate $\bar{n} \ll n$ inner paths.

Following this approach, let $Z$ be an $n \times v$ matrix, where the row vector $z_k$ contains the $k$th outer scenario of the $v$ risk factors affecting the value of the $i$th representative policy. For each vector $z_k$ and for time $\tau < t \leq T$, we simulate $\bar{n}$ trajectories under the risk-neutral probability measure. To simplify the notation, we focus on the $i$th representative policy, and we denote by $z^{ik}_{j,t}$ the vector containing the time-$t$ values of the risk factors along the $j$th inner trajectory corresponding to the $k$th outer scenario. Moreover, we label $Y$ an $n \times s$ matrix where the element $y_{ik}$ represents the value of the $i$th policy corresponding to the $k$th outer scenario, obtained by averaging across the few inner simulations. Formally,

$$y_{ik} = \frac{1}{\bar{n}} \sum_{j=1}^{\bar{n}} \sum_{\tau < t \leq T_i} F_t\big(z^{ik}_{j,t}\big), \qquad i = 1, \ldots, s, \quad k = 1, \ldots, n, \tag{4}$$

where $F_t(\cdot)$ represents the discounted cash-flows at time $t$ of the $i$th policy with maturity $T_i$. In this way, we obtain a first (rough) estimate of each representative policy value distribution at the future time $\tau$. The next step is to obtain a more accurate estimate of the distribution of the time-$\tau$ value of each representative policy and then to infer the distribution of the time-$\tau$ value of the entire portfolio. We achieve this by applying two different approaches: an OLS regression, as in the least-squares Monte Carlo method, and a GB2 model.

3.1. The LSMC Method

The least-squares Monte Carlo method applied to the problem of computing the distribution of the insurer's liabilities at a certain future date is based on the idea that the bias deriving from the few inner simulations can be reduced by approximating the involved conditional expectations with a linear combination of basis functions depending on some covariates, whose coefficients are estimated through an ordinary least-squares procedure (see Bauer et al. 2010 for further details). A straightforward application of the LSMC approach would be to apply the method to each policy inside the insurance portfolio. However, this strategy would be quite computationally expensive due to the big dimension of an insurance portfolio. For this reason, we propose applying the LSMC method first on just a set of representative policies, and then extending it to the entire portfolio through an OLS regression.

Hence, according to the LSMC method, we assume that the conditional $i$th representative policy value, $\hat{y}_{ik}$, can be expressed as a linear combination of basis functions depending on the covariates $z_k$ as follows:

$$\hat{y}_{ik} = \sum_{j=1}^{L} \beta_j^i\, e_j(z_k), \qquad i = 1, \ldots, s, \quad k = 1, \ldots, n, \tag{5}$$

where $e_j(\cdot)$ is the $j$th basis function in the regression, $L$ is the number of basis functions, and the coefficients $\beta_j^i$ are estimated through

$$\big(\hat{\beta}_1^i, \ldots, \hat{\beta}_L^i\big) = \underset{\beta_1^i, \ldots, \beta_L^i}{\operatorname{arg\,min}} \sum_{k=1}^{n} \Bigg( y_{ik} - \sum_{j=1}^{L} \beta_j^i\, e_j(z_k) \Bigg)^2.$$

In this way, we obtain an $n \times s$ matrix $\hat{Y}$ where each row vector $\hat{y}_k$ contains the values of each representative policy corresponding to the $k$th outer scenario; a minimal sketch of this regression step is given below.
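The sketch below fits Equation (5) for a single representative policy with HermiteE polynomials as basis functions, as used in Section 4. The synthetic outer scenarios and rough values are stand-ins for the quantities of Equation (4), and the choice of a per-factor basis without cross-product terms is one simple assumption, not necessarily the authors' exact basis.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)

# Stand-ins for one representative policy: n outer scenarios of v = 3 risk
# factors and the rough two-inner-path values y of Equation (4).
n = 10_000
Z = rng.standard_normal((n, 3))
y = 1.0 + Z @ np.array([0.5, -0.2, 0.1]) + rng.standard_normal(n)

def hermite_design(Z, degree):
    """Design matrix with an intercept and HermiteE polynomials of each
    (standardized) risk factor up to `degree`; no cross-product terms."""
    Zs = (Z - Z.mean(axis=0)) / Z.std(axis=0)
    cols = [np.ones(len(Z))]
    for j in range(Z.shape[1]):
        V = hermevander(Zs[:, j], degree)     # columns [He_0, He_1, ..., He_deg]
        cols.extend(V[:, 1:].T)               # drop He_0 (already the intercept)
    return np.column_stack(cols)

E = hermite_design(Z, degree=2)               # order-2 basis, as in LSMC_2
beta, *_ = np.linalg.lstsq(E, y, rcond=None)  # OLS coefficients of Equation (5)
y_hat = E @ beta                              # de-biased conditional values
```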
Now, in order to approximate the distribution of the value of the entire portfolio, we construct an OLS regression model for each outer scenario. In this regard, we denote by $X$ an $M \times (w + 1)$ matrix, where the row vector $x_i$ contains the $w$ covariates (gender, product type, age, and maturity) characterizing the $i$th contract in the portfolio plus an intercept term ($M$ is the total number of contracts inside the insurance portfolio). Moreover, let $\bar{X}$ be the $s \times (w + 1)$ matrix describing the structure of the representative insurance portfolio; hence, $\bar{x}_i$ contains the $w$ covariates characterizing the $i$th representative contract plus an intercept term. Therefore, we regress each row vector $\hat{y}_k$ ($k = 1, \ldots, n$) on the covariate matrix $\bar{X}$, and once the coefficients are estimated, we extend them to the remaining policies by exploiting the matrix $X$. In this way, we obtain the value of the $i$th contract corresponding to the $k$th outer scenario, which is denoted by $v_{ik}$. Formally,

$$v_{ik} = x_i\, \hat{\beta}_k, \qquad i = 1, \ldots, M, \quad k = 1, \ldots, n, \tag{6}$$

where

$$\hat{\beta}_k = \big(\bar{X}'\bar{X}\big)^{-1} \bar{X}'\, \hat{y}_k.$$

Finally, the entire portfolio value distribution is obtained by adding up all the policy values in Equation (6) corresponding to each outer scenario.

3.2. The GB2 Model

The GB2 model provides a flexible family of distributions, as it nests a range of standard distributions as special or limiting cases, such as the log-normal, the generalized gamma, the Burr type III, the Burr type XII and many others (see McDonald 1984). Moreover, it has been used in several actuarial applications (e.g., see Gan and Valdez 2018) to model the fair market value of a portfolio made up of life insurance policies. A GB2 random variable can be constructed from a transformed ratio of two gamma random variables. The density function of a GB2 random variable $Y$ is given by

$$f(y) = \frac{|a|\, (y/b)^{ap - 1}}{b\, B(p, q)\, \big[1 + (y/b)^a\big]^{p + q}}, \qquad y > 0, \tag{7}$$

where $a \neq 0$, $p > 0$, $q > 0$ are shape parameters, $b > 0$ is the scale parameter, and $B(\cdot, \cdot)$ is the Beta function. Its expectation equals

$$E[Y] = b\, \frac{B\big(p + \tfrac{1}{a},\, q - \tfrac{1}{a}\big)}{B(p, q)}, \tag{8}$$

which exists if $-p < \tfrac{1}{a} < q$.

In order to approximate the value of the portfolio, we first approximate the time-$\tau$ value of each representative policy, and then we use this information to approximate the distribution of the value of the entire insurance portfolio at the risk horizon. To achieve this, we construct two different GB2 regression models which exploit, respectively, the information generated at the risk horizon (i.e., $S(\tau)$, $r(\tau)$, and $\mu(\tau)$) and the features uniquely characterizing each policy. Specifically, since the policy values $y_{ik}$ obtained from Equation (4) are not accurate, due to the few inner trajectories on which they are based, we aim at reducing the bias by estimating the involved conditional expectation through a GB2 regression model. In this regard, we assume that the $i$th policy value at time $\tau$, conditioned on a specific outer scenario, is a GB2 random variable with parameters $(a_i, p_i, q_i, b_i)$. In particular, we make the $b_i$ parameter depend on some independent covariates (i.e., the values at time $\tau$ of the risk factors which affect the policy of interest). Note that several approaches to incorporating covariates in the GB2 regression model exist, as well as different re-parametrizations (see Beirlant et al. 2004; Frees and Valdez 2008).
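The density of Equation (7) and the mean of Equation (8) translate directly into code. The sketch below works on the log scale for numerical stability and uses an arbitrary illustrative parameter set; the quadrature check that the density integrates to one is only a sanity test.

```python
import numpy as np
from scipy.special import betaln
from scipy.integrate import quad

def gb2_pdf(y, a, b, p, q):
    """GB2 density of Equation (7), computed on the log scale for stability."""
    log_f = (np.log(abs(a)) + (a * p - 1.0) * np.log(y) - a * p * np.log(b)
             - betaln(p, q) - (p + q) * np.log1p((y / b) ** a))
    return np.exp(log_f)

def gb2_mean(a, b, p, q):
    """E[Y] of Equation (8); finite when -p < 1/a < q."""
    return b * np.exp(betaln(p + 1.0 / a, q - 1.0 / a) - betaln(p, q))

a, b, p, q = 2.0, 100.0, 1.5, 2.5            # arbitrary illustrative parameters
total, _ = quad(gb2_pdf, 0.0, np.inf, args=(a, b, p, q))
print(total)                                 # ~1.0: the density integrates to one
print(gb2_mean(a, b, p, q))                  # closed-form expectation
```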
However, as noticed by Sun et al. (2008) and Frees et al. (2016), incorporating the covariates into the scale parameter $b$ facilitates the interpretability of the model; indeed, as can be seen in Equation (8), the expectation changes proportionally with $b$, allowing one to interpret the regression coefficients as proportional changes. Hence, $b_i(Z) = \exp(Z \beta_i)$, where $\beta_i = (\beta_{i,0}, \beta_{i,1}, \ldots, \beta_{i,v})'$ are the corresponding coefficients attached to each risk factor. Note that the matrix $Z$ now includes an intercept term. We can use the maximum likelihood method to estimate the parameters. Since we incorporate covariates through the scale parameter, we can write the log-likelihood function of the model as

$$\ell(a_i, p_i, q_i, \beta_i) = n \ln\frac{|a_i|}{B(p_i, q_i)} - a_i p_i \sum_{k=1}^{n} z_k \beta_i + (a_i p_i - 1) \sum_{k=1}^{n} \ln(y_{ik}) - (p_i + q_i) \sum_{k=1}^{n} \ln\Bigg[1 + \bigg(\frac{y_{ik}}{\exp(z_k \beta_i)}\bigg)^{a_i}\Bigg], \tag{9}$$

where $i = 1, \ldots, s$, $n$ is the number of generated outer scenarios and $y_{ik}$ denotes the value of the $i$th policy corresponding to the $k$th outer scenario. Once we estimate the parameters of the GB2 model, we use the expectation to predict the value of the policy at time $\tau$. Since we incorporate covariates through the scale parameter, we can estimate it as

$$\hat{y}_{ik} = \exp\big(z_k \hat{\beta}_i\big)\, \frac{B\big(\hat{p}_i + \tfrac{1}{\hat{a}_i},\, \hat{q}_i - \tfrac{1}{\hat{a}_i}\big)}{B(\hat{p}_i, \hat{q}_i)}, \qquad i = 1, \ldots, s, \quad k = 1, \ldots, n, \tag{10}$$

where $z_k$ is the vector containing the $k$th outer scenario of the risk factors affecting the $i$th representative policy.

Once we obtain an estimate of the distribution of each representative policy at time $\tau$, we extend this information to the remaining policies. As already carried out for the OLS model, we exploit both matrices $X$ and $\bar{X}$, on which we now construct a new GB2 regression model. Therefore, let $\hat{Y}$ be the $n \times s$ matrix whose elements $\hat{y}_{ik}$ denote the value of the $i$th representative policy corresponding to the $k$th outer scenario, obtained through Equation (10). Now, we construct a GB2 regression model in order to infer, starting from the set of representative policies, the distribution of the entire portfolio. Hence, recalling the pdf defined in Equation (7), we define the following log-likelihood function:

$$\ell(a_k, p_k, q_k, \beta_k) = s \ln\frac{|a_k|}{B(p_k, q_k)} - a_k p_k \sum_{i=1}^{s} \bar{x}_i \beta_k + (a_k p_k - 1) \sum_{i=1}^{s} \ln(\hat{y}_{ik}) - (p_k + q_k) \sum_{i=1}^{s} \ln\Bigg[1 + \bigg(\frac{\hat{y}_{ik}}{\exp(\bar{x}_i \beta_k)}\bigg)^{a_k}\Bigg], \tag{11}$$

where $s$ is the number of representative policies and $\bar{x}_i$ is the row vector containing the information of the $i$th representative contract. Once again, after estimating the parameters through the maximum likelihood approach, we can derive the distribution at the risk horizon for all the policies inside the insurance portfolio as

$$\hat{v}_{ik} = \exp\big(x_i \hat{\beta}_k\big)\, \frac{B\big(\hat{p}_k + \tfrac{1}{\hat{a}_k},\, \hat{q}_k - \tfrac{1}{\hat{a}_k}\big)}{B(\hat{p}_k, \hat{q}_k)}, \qquad i = 1, \ldots, M, \quad k = 1, \ldots, n, \tag{12}$$

where $\hat{v}_{ik}$ is the value of the $i$th contract corresponding to the $k$th outer scenario. Finally, the entire portfolio value distribution is again obtained by adding up all the policy values corresponding to each outer scenario.

Note that the log-likelihood functions in Equations (9) and (11) may have multiple local maxima and, since an analytic solution does not exist, we need to rely on a numerical procedure to estimate the involved parameters. We adopt the same multistage optimization algorithm described in Gan and Valdez (2018); a single-start sketch of this estimation step follows below.
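The following sketch maximizes the log-likelihood of Equation (9) on synthetic data, with the scale $b_k = \exp(z_k \beta)$ and the prediction of Equation (10). A single general-purpose optimizer start replaces the multistage algorithm of Gan and Valdez (2018), so convergence to the global maximum is not guaranteed; the data-generating parameters are illustrative.

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Synthetic data standing in for one representative policy: n outer scenarios
# with v = 3 risk-factor covariates (plus an intercept) entering the scale.
n, v = 5_000, 3
Z = np.column_stack([np.ones(n), rng.standard_normal((n, v))])
beta_true = np.array([4.0, 0.3, -0.2, 0.1])
a_true, p_true, q_true = 2.0, 1.5, 2.5
# GB2 variates via the gamma-ratio construction Y = b * (G1 / G2)^(1/a).
b = np.exp(Z @ beta_true)
y = b * (rng.gamma(p_true, size=n) / rng.gamma(q_true, size=n)) ** (1.0 / a_true)

def negloglik(theta):
    """Negative of the log-likelihood in Equation (9); p, q kept positive
    through a log re-parametrization."""
    a, p, q, beta = theta[0], np.exp(theta[1]), np.exp(theta[2]), theta[3:]
    eta = Z @ beta                          # log of the scale b_k
    ll = (np.log(abs(a)) - betaln(p, q) + (a * p - 1.0) * np.log(y)
          - a * p * eta - (p + q) * np.log1p(np.exp(a * (np.log(y) - eta))))
    return -ll.sum()

x0 = np.concatenate([[1.0, 0.0, 0.0], [np.log(y.mean())], np.zeros(v)])
fit = minimize(negloglik, x0, method="Nelder-Mead",
               options={"maxiter": 50_000, "maxfev": 50_000})

a_hat, p_hat, q_hat, beta_hat = (fit.x[0], np.exp(fit.x[1]),
                                 np.exp(fit.x[2]), fit.x[3:])
# Prediction through the conditional expectation, as in Equation (10).
y_hat = np.exp(Z @ beta_hat) * np.exp(
    betaln(p_hat + 1.0 / a_hat, q_hat - 1.0 / a_hat) - betaln(p_hat, q_hat))
```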
4. Numerical Results

In this section, we present some numerical results obtained by exploiting the previously defined models. In particular, we consider a life insurance portfolio with M = 10,000 contracts, and we focus on approximating its value distribution at the future time $\tau$ = 1 year. These policies can be of three different types: a unit-linked pure endowment contract with a minimum maturity guarantee G = 100 payable upon the survival of the policyholder at the maturity date T; a term life insurance policy, which pays the value of a reference asset in case of death before maturity T; and an immediate life annuity contract with continuous survival benefits equal to 10% of the reference asset value for the entire life of the insured person. We consider different policyholders, both males and females, with different ages $x$ at time $t = 0$, which is also assumed to be the inception time of each policy. These characteristics are reported in Table 1. We assume that the insurance benefits depend upon a reference asset with initial value $S_0$. In Tables A1 and A2, given in Appendix A, we report the values of the parameters involved in Equations (1)-(3). In particular, concerning mortality, we have calibrated the survival curve implied by Equation (3) on the Italian male and female mortality data of year 2016, obtained from the Human Mortality Database, for each age $x \in \{55, \ldots, 65\}$, and we assumed a longevity risk premium $\delta = 0$.

We conduct this numerical experiment by varying both the number of outer simulations, $n$, and the number of representative policies, $s$. In particular, we adopt a monthly Euler discretization in order to project $n \in \{1000, 5000, 10{,}000\}$ outer trajectories of each risk factor under the P-measure, and then for each outer scenario we further simulate $\bar{n} = 2$ inner trajectories under the risk-neutral probability measure. With this simulation set, we obtain a first rough estimate of $Y$, on which we construct the LSMC and GB2 models discussed in Sections 3.1 and 3.2, respectively. Note that, concerning the LSMC method, we exploit as basis functions Hermite polynomials of orders 1 and 2, denoted hereafter as LSMC_1 and LSMC_2, respectively.

To determine the number of representative contracts $s$, we start from the informal rule proposed by Loeppky et al. (2009), who provide reasons and evidence supporting a sample size of about 10 times the input dimension. In our case, the dimension of the covariates in the design matrix $X$ is 5 (including the binary dummy variables converted from the categorical variables), and so we choose $s = 50$ as the initial number of representative contracts. However, we also investigate the models' performances by setting $s = 75$ and $s = 100$. Finally, the results are compared with a solid benchmark obtained through a nested simulations approach based on 10,000 × 2500 simulations. This allows us to assess the reliability of the proposed methodologies and to compare them in terms of computational demand.

Figure 1 shows the Quantile-Quantile (Q-Q) plots of the portfolio value at time $\tau = 1$ obtained by the nested simulations algorithm (assumed to be the theoretical one) and those predicted by the GB2 regression model and the LSMC models, based on $n$ = 10,000 outer simulations and a varying number of representative contracts $s \in \{50, 75, 100\}$. In this regard, we can see from Figure 1 that the proposed methodologies provide a good approximation, except for the right tail of the distribution.
In particular, concerning the GB2 regression model, we can see that the higher the number of representative contracts, the better the approximation. For a comprehensive analysis, we perform multiple runs of each proposed method; in particular, the following analysis is based on 50 runs. In Tables 2-4, we report the Mean Absolute Percentage Error (MAPE) of several quantities obtained by performing 50 runs of the proposed methodologies with a fixed number of outer scenarios ($n$ = 10,000) and a varying number of representative contracts ($s \in \{50, 75, 100\}$); a short sketch defining these error metrics is given after Tables 2-4.

Figure 1. Q-Q plots relative to the future value distribution of the insurance portfolio. The theoretical distribution is assumed to be the one obtained by nested simulations based on 10,000 × 2500 trajectories. The first row refers to the GB2 regression model based on 10,000 outer scenarios and a varying number of representative contracts, $s \in \{50, 75, 100\}$. The second and third rows refer to the LSMC method with Hermite polynomials of orders 1 and 2, based on 10,000 outer scenarios and a varying number of representative contracts, $s \in \{50, 75, 100\}$.

Table 2. This table reports the MAPE of the estimates obtained by running 50 times the GB2 and LSMC methods with n = 10,000 and s = 50. The benchmark values are based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

          5th Perc.  10th Perc.  Median   Mean     90th Perc.  95th Perc.  99th Perc.  99.5th Perc.
GB2       2.812%     2.180%      1.798%   2.594%   3.832%      4.016%      6.154%      4.375%
LSMC_1    3.238%     3.000%      2.399%   2.557%   2.398%      2.174%      2.436%      2.722%
LSMC_2    2.762%     2.754%      2.567%   2.557%   2.436%      2.114%      2.356%      2.841%

Table 3. This table reports the MAPE of the estimates obtained by running 50 times the GB2 and LSMC methods with n = 10,000 and s = 75. The benchmark values are based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

          5th Perc.  10th Perc.  Median   Mean     90th Perc.  95th Perc.  99th Perc.  99.5th Perc.
GB2       1.971%     1.782%      0.806%   0.542%   3.605%      3.949%      6.094%      3.867%
LSMC_1    2.500%     1.338%      1.530%   1.392%   1.251%      1.657%      0.941%      1.678%
LSMC_2    1.828%     1.047%      1.756%   1.392%   1.307%      1.485%      1.842%      2.142%

Table 4. This table reports the MAPE of the estimates obtained by running 50 times the GB2 and LSMC methods with n = 10,000 and s = 100. The benchmark values are based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

          5th Perc.  10th Perc.  Median   Mean     90th Perc.  95th Perc.  99th Perc.  99.5th Perc.
GB2       1.986%     1.745%      0.519%   0.347%   1.129%      1.313%      2.856%      1.944%
LSMC_1    1.629%     1.504%      0.440%   0.627%   0.764%      0.824%      0.958%      2.561%
LSMC_2    1.148%     1.145%      0.578%   0.627%   0.762%      0.986%      2.101%      2.334%

If we compare Tables 2-4, it is evident that increasing the number of representative contracts $s$ leads to a better approximation of the mean and of the other considered measures of position. Moreover, it seems that the GB2 model, at least for a low number of representative contracts, is not able to adequately model the right tail of the distribution. In Table 5, we report the Mean Percentage Error (MPE) and MAPE relative to the mean estimates obtained by running the GB2 and LSMC methods 50 times with different numbers of outer simulations, $n$, and representative contracts, $s$.
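The MPE and MAPE figures reported in Tables 2-6 can be computed, for any given quantity (the mean or a percentile), along the following lines. We assume here that each metric is averaged over the repeated runs against the single nested-simulation benchmark value; the three "runs" in the example are made-up numbers.

```python
import numpy as np

def mpe(estimates, benchmark):
    """Mean percentage error of repeated estimates against the benchmark."""
    e = np.asarray(estimates, dtype=float)
    return np.mean((e - benchmark) / benchmark)

def mape(estimates, benchmark):
    """Mean absolute percentage error of repeated estimates."""
    e = np.asarray(estimates, dtype=float)
    return np.mean(np.abs(e - benchmark) / benchmark)

# Hypothetical example: three runs of a mean estimate vs. a benchmark mean.
runs = np.array([1.02e6, 0.99e6, 1.01e6])
print(f"MPE:  {mpe(runs, 1.00e6):+.3%}")
print(f"MAPE: {mape(runs, 1.00e6):.3%}")
```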
Table 5. This table reports the MPE and MAPE of the mean estimates obtained by running 50 times the GB2 and LSMC methods and varying the number of outer simulations (Outer) and that of representative contracts s. The benchmark value is based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

                      s = 50              s = 75              s = 100
Outer    Method       MPE      MAPE       MPE      MAPE       MPE      MAPE
1000     GB2          3.612%   3.612%     0.163%   0.983%     0.240%   0.923%
         LSMC_1       3.475%   3.475%     2.104%   2.221%     1.017%   1.364%
         LSMC_2       3.475%   3.475%     2.104%   2.221%     1.017%   1.364%
5000     GB2          2.981%   2.981%     0.715%   0.747%     0.301%   0.474%
         LSMC_1       2.840%   2.840%     1.533%   1.533%     1.029%   1.092%
         LSMC_2       2.840%   2.840%     1.533%   1.533%     1.029%   1.092%
10,000   GB2          2.594%   2.594%     0.491%   0.542%     0.179%   0.347%
         LSMC_1       2.557%   2.557%     1.392%   1.392%     0.490%   0.627%
         LSMC_2       2.557%   2.557%     1.392%   1.392%     0.490%   0.627%

Looking at Table 5, we can see that, for a fixed number of outer scenarios and for each applied method, the accuracy of the mean estimates increases with the number of representative contracts $s$. Moreover, it is evident that in most of the considered configurations the GB2 model outperforms the LSMC methods. Furthermore, if we look at the last column of Table 5 ($s = 100$), for instance, we can see that the higher the number of outer scenarios, the better the approximation. Finally, we can see that increasing the number of basis functions up to degree two in the LSMC method does not improve the accuracy of the mean estimates. This is probably due to the limited number of outer simulated trajectories (at most 10,000 paths), which is not sufficient to appreciate the improvement that is usually expected. In the left-hand side of Figure A1, given in Appendix B, we report the corresponding box-plots, from which it is possible to see that, in each of the considered configurations, the LSMC method systematically underestimates the quantity of interest.

Concerning the estimate of the 99.5th percentile of the distribution, which is of interest for valuing solvency capital requirements, Table 6 reports the MPE and MAPE relative to 50 estimates obtained by varying both the number of simulations and the number of representative contracts.

Table 6. This table reports the MPE and MAPE of the 99.5th percentile estimates obtained by running the GB2 and LSMC methods 50 times and varying the number of outer simulations (Outer) and that of representative contracts s. The benchmark value is based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

                      s = 50              s = 75              s = 100
Outer    Method       MPE      MAPE       MPE      MAPE       MPE      MAPE
1000     GB2          3.936%   6.570%     1.512%   5.453%     1.410%   4.494%
         LSMC_1       2.664%   3.715%     6.308%   6.478%     2.961%   4.253%
         LSMC_2       0.252%   6.487%     4.211%   7.150%     1.438%   5.517%
5000     GB2          4.110%   4.723%     3.813%   4.018%     0.081%   2.653%
         LSMC_1       2.908%   3.001%     4.708%   4.722%     1.659%   2.006%
         LSMC_2       1.787%   3.484%     3.118%   4.017%     0.462%   3.110%
10,000   GB2          4.157%   4.375%     3.737%   3.867%     0.421%   1.944%
         LSMC_1       2.643%   2.722%     1.560%   1.678%     2.522%   2.561%
         LSMC_2       2.259%   2.841%     0.131%   2.142%     1.007%   2.334%

From Table 6, we can detect a behaviour similar to the one previously discussed. Specifically, we can see that, concerning the GB2 model, an increase in the number of representative contracts (for fixed $n$) leads to an improvement of the resulting estimates. On the contrary, for the LSMC method there is no clear pattern: increasing the number of representative contracts (for a fixed $n$) does not lead to a clear improvement in the results.
Moreover, increasing the number of basis functions as well as the number of outer simulations does not increase the accuracy of the estimates (see also the right side of Figure A1 in Appendix B). As in the case of the mean estimate, this could be due to the small number of outer simulations, and so we may conclude that passing from 1000 to 10,000 trajectories is still not sufficient to exploit more basis functions. Once again, if we look at the case of $n$ = 10,000 and $s$ = 100, the GB2 model outperforms the LSMC approach.

Now, let us examine the speed of the proposed algorithms with respect to the benchmark. Table 7 shows the runtime of the GB2 and LSMC methods expressed as a percentage of the time required by the nested simulation method based on 10,000 outer and 2500 inner simulations. Note that we conducted all experiments using R on a computer equipped with an Intel Core(TM) i7-1065G7 CPU at 1.50 GHz, with 12 GB of RAM and the Windows 10 Home operating system.

Table 7. Percentage of the runtime required by the GB2 and LSMC methods with respect to the nested simulations approach. Note that the computational demand to construct the benchmark with a nested simulations approach based on 10,000 × 2500 scenarios applied to the entire portfolio is about 187,200 s.

                n = 1000                      n = 5000                      n = 10,000
Method     s = 50   s = 75   s = 100     s = 50   s = 75   s = 100     s = 50   s = 75   s = 100
GB2        0.069%   0.078%   0.098%      0.337%   0.380%   0.501%      0.660%   0.832%   1.021%
LSMC_1     0.005%   0.006%   0.007%      0.012%   0.018%   0.019%      0.036%   0.045%   0.047%
LSMC_2     0.005%   0.006%   0.007%      0.013%   0.019%   0.020%      0.037%   0.046%   0.047%

As we can see from Table 7, the proposed methodologies drastically reduce the computational time otherwise required by a nested simulations approach. Moreover, as expected, the LSMC method presented in Section 3.1 outperforms the GB2 model in terms of time in each of the proposed configurations. This is due to the existence of a closed-form formula for the estimation of the involved parameters; indeed, as stated in Section 3.2, the estimation procedure for the GB2 model is based on a multistage optimization algorithm, owing to the complexity of the likelihood functions, which may have multiple local maxima. Regardless, if compared with the simulations-within-simulations method, the GB2 model proved to be an accurate and efficient alternative.

Full LSMC

To provide an exhaustive analysis, we also consider a straightforward application of the LSMC method: we apply the LSMC method to each contract composing the insurance portfolio, without considering any set of representative policies. The results are then compared with those already shown in the previous section, both in terms of accuracy and computational demand. As an example, we construct the LSMC model by exploiting as basis functions Hermite polynomials of order 1, based on 10,000 × 2 simulations (LSMC_Full). Table 8 reports the MPE and MAPE relative to the 5th percentile, the mean, and the 99.5th percentile estimates obtained by performing 50 runs of the proposed methods. Further, we report the results relative to the GB2 model (GB2) and the LSMC method with Hermite polynomials of order 1 (LSMC_1) and order 2 (LSMC_2), based on 10,000 × 2 simulations and s = 100 representative policies.
Table 8. This table reports the MPE and MAPE relative to the 5th percentile, the mean, and the 99.5th percentile estimates obtained by applying different methodologies. GB2 stands for the GB2 regression model based on n = 10,000 outer scenarios and s = 100 representative policies; LSMC_1 refers to the LSMC method based on n = 10,000 outer scenarios and s = 100 representative policies with Hermite polynomials of order 1; LSMC_2 refers to the LSMC method based on n = 10,000 outer scenarios and s = 100 representative policies with Hermite polynomials of order 2; LSMC_Full refers to the LSMC method based on n = 10,000 outer scenarios and constructed on each contract in the insurance portfolio. The results are compared with the corresponding benchmark value based on nested simulations with 10,000 × 2500 trajectories applied to the entire portfolio.

               5th Perc.           Mean                99.5th Perc.
Method         MPE      MAPE       MPE      MAPE       MPE      MAPE
GB2            1.986%   1.986%     0.179%   0.347%     0.421%   1.944%
LSMC_1         1.472%   1.629%     0.490%   0.627%     2.522%   2.561%
LSMC_2         0.742%   1.148%     0.490%   0.627%     1.007%   2.334%
LSMC_Full      0.501%   1.032%     0.084%   0.461%     0.420%   1.070%

As shown in Table 8, the errors relative to the LSMC_Full approach are lower than those of the other proposed methods, since its estimates are based on the entire insurance portfolio, i.e., this approach does not suffer from any uncertainty related to the missingness of policies in its estimation procedure. Figure A2, given in Appendix B, reports the box-plots on which the quantities in Table 8 are based.

Finally, we compare these methods in terms of time. In Table 9, we report the computational time required by the algorithms. We can see that the naive application of the LSMC approach is more computationally expensive than the GB2 and LSMC models based on a set of representative policies.

Table 9. Runtime, in seconds, of the GB2 model and the LSMC methods based on 10,000 × 2 simulations and s = 100 representative contracts (GB2, LSMC_1, LSMC_2). LSMC_Full refers to the LSMC method applied to each contract in the insurance portfolio.

Method        Time
GB2           1911.445
LSMC_1        87.824
LSMC_2        88.290
LSMC_Full     7847.960

5. Conclusions

In this paper, we addressed the problem of approximating the value of a life insurance portfolio at a future time by proposing two methodologies able to avoid the time-consuming nested simulations approach. The first approach can be thought of as an extension of the well-known LSMC method, while the second is based on the GB2 distribution, which is widely used to approximate the fair value of portfolios of life insurance policies. To validate the proposal, we considered a solid benchmark obtained by nested simulations, and we compared the two proposed methodologies both in terms of accuracy and efficiency. The analysis was carried out by considering an increasing number of simulations and representative policies, from which it turned out that, generally, both methodologies provide increasingly accurate results. Moreover, the LSMC method proved to be faster in computational terms, but also less accurate than the GB2 model. Furthermore, the proposed methodologies were compared with a straightforward application of the LSMC method (i.e., without considering any subset of representative policies), which turned out to be more accurate but computationally more expensive. Extensive numerical results have shown that the proposed methods represent viable alternatives to the full nested Monte Carlo model.
Therefore, the proposed metamodeling approach may help insurance and reinsurance undertakings to reduce the computational budget needed, for instance, in the context of evaluating solvency capital requirements. In this regard, it can be used to evaluate the future cash-flows (inflows and outflows) generated by the entire portfolio by considering at first only a subset of policies, and then extending the results to the remaining ones. Indeed, this represents the main issue in deriving the full loss distribution on which the Value-at-Risk measure should be computed, as prescribed by the European Solvency II directive.

Author Contributions: Both authors contributed equally to this manuscript. Both authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. Parameter Values

Table A1 shows the parameter values assumed for the dynamics of the reference asset and interest rate defined in Equations (1) and (2).

Table A1. Parameters of the reference asset value process, S, and of the interest rate stochastic process, r.

$S_0$    $\sigma_S$    $\lambda$    $r_0$    $a$     $\theta$    $\sigma_r$    $\gamma$    $\rho$
100      0.20          0.00         0.04     0.10    0.02        0.02          0.00        0.00

Table A2 shows the estimated parameters of the mortality model defined in Equation (3), obtained by fitting the corresponding survival curve to the one implied by the Italian male and female mortality data of year 2016, obtained from the Human Mortality Database, for each age $x \in \{55, \ldots, 65\}$.

Table A2. Estimated parameters of the stochastic mortality model for Italian males (left) and females (right) aged $x \in \{55, \ldots, 65\}$ in 2016.

               Male                                    Female
Age    $\hat{a}$    $\hat{b}$    $\hat{\sigma}_\mu$    $\hat{a}$    $\hat{b}$    $\hat{\sigma}_\mu$
55     0.00040      0.0881       0.00157               0.00010      0.10017      0.00100
56     0.00700      0.0705       0.00262               0.00001      0.11110      0.00100
57     0.00001      0.1051       0.00100               0.00001      0.11060      0.00100
58     0.00001      0.1045       0.00390               0.00009      0.10740      0.00850
59     0.00040      0.0832       0.00100               0.00001      0.11570      0.00100
60     0.00060      0.0743       0.00100               0.00042      0.08362      0.00669
61     0.00030      0.0907       0.00100               0.00044      0.08505      0.00100
62     0.00010      0.1033       0.00710               0.00001      0.11990      0.00100
63     0.00012      0.1063       0.00750               0.00040      0.09704      0.00182
64     0.00008      0.1112       0.00810               0.00039      0.09860      0.00376
65     0.00020      0.1075       0.00123               0.00049      0.09558      0.00720

Appendix B. Further Results

Figure A1 reports the boxplots relative to the mean (left) and the 99.5th percentile (right) estimates obtained by running 50 times the GB2 and LSMC methods, varying both the number of outer scenarios, $n$, and that of the representative policies, $s$. In this regard, we can see that the variability of the estimates decreases as the number of outer scenarios and the number of representative contracts increases.

Figure A1. Boxplots relative to the mean (left) and the 99.5th percentile (right) estimates obtained by running the GB2 and LSMC methods 50 times and varying the number of outer simulations n and that of representative contracts s. The red line refers to the benchmark value based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

Figure A2 compares the straightforward application of the LSMC approach with the proposed methodologies, providing the boxplots relative to the mean and the 99.5th percentile estimates.
Figure A2. Boxplots relative to the mean and the 99.5th percentile estimates obtained by running the proposed methodologies 50 times. GB2 stands for the GB2 regression model based on 10,000 outer scenarios and s = 100 representative policies; LSMC_1 refers to the LSMC method based on 10,000 outer scenarios and s = 100 representative policies with Hermite polynomials of order 1; LSMC_2 refers to the LSMC method based on 10,000 outer scenarios and s = 100 representative policies with Hermite polynomials of order 2; LSMC_Full refers to the LSMC method based on 10,000 outer scenarios and constructed on each contract in the insurance portfolio. The red line refers to the benchmark value based on a nested simulations algorithm with 10,000 × 2500 trajectories applied to the entire portfolio.

References

Allemang, Dean, and Jim Hendler. 2011. Semantic Web for the Working Ontologist: Effective Modeling in RDFS and OWL, 2nd ed. San Francisco: Morgan Kaufmann Publishers Inc.

Barton, Russell R. 2015. Tutorial: Simulation metamodeling. Paper presented at the 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, December 6-9.

Bauer, Daniel, Daniela Bergmann, and Andreas Reuss. 2010. Solvency II and Nested Simulations: A Least-Squares Monte Carlo Approach. Working Paper. Georgia State University and Ulm University. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.466.1983 (accessed on 17 August 2021).

Beirlant, Jan, Yuri Goegebeur, Johan Segers, and Jozef L. Teugels. 2004. Statistics of Extremes: Theory and Applications. Chichester: Wiley.

Biffis, Enrico. 2005. Affine processes for dynamic mortality and actuarial valuations. Insurance: Mathematics and Economics 37: 443-68.

Boyer, M. Martin, and Lars Stentoft. 2013. If we can simulate it, we can insure it: An application to longevity risk management. Insurance: Mathematics and Economics 52: 35-45.

Carrière, Jacques F. 1996. Valuation of the early-exercise price for options using simulations and nonparametric regression. Insurance: Mathematics and Economics 19: 19-30.

Cathcart, Mark, and Steven Morrison. 2009. Variable annuity economic capital: The least-squares Monte Carlo approach. Life & Pensions 2: 44-48.

European Parliament, and European Council. 2009. Directive 2009/138/EC on the Taking-Up and Pursuit of the Business of Insurance and Reinsurance (Solvency II). Brussels: European Council.

Floryszczak, Anthony, Olivier Le Courtois, and Mohamed Majri. 2016. Inside the Solvency II black box: Net Asset Values and Solvency Capital Requirements with a least-squares Monte-Carlo approach. Insurance: Mathematics and Economics 71: 15-26.

Frees, Edward W., and Emiliano A. Valdez. 2008. Hierarchical Insurance Claims Modeling. Journal of the American Statistical Association 103: 1457-69.

Frees, Edward W., Gee Lee, and Lu Yang. 2016. Multivariate Frequency-Severity Regression Models in Insurance. Risks 4: 4.
Fung, Man Chung, Katja Ignatieva, and Michael Sherris. 2014. Systematic mortality risk: An analysis of guaranteed lifetime withdrawal benefits in variable annuities. Insurance: Mathematics and Economics 58: 103-15.

Gan, Guojun, and Emiliano A. Valdez. 2018. Regression modeling for the valuation of large variable annuity portfolios. North American Actuarial Journal 22: 40-54.

Gan, Guojun, and X. Sheldon Lin. 2015. Valuation of large variable annuity portfolios under nested simulation: A functional data approach. Insurance: Mathematics and Economics 62: 138-50.

Gan, Guojun. 2013. Application of data clustering and machine learning in variable annuity valuation. Insurance: Mathematics and Economics 53: 795-801.

Gan, Guojun. 2015. Application of metamodeling to the valuation of large variable annuity portfolios. Paper presented at the 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, December 6-9.

Krah, Anne-Sophie, Zoran Nikolić, and Ralf Korn. 2018. A least-squares Monte Carlo framework in proxy modeling of life insurance companies. Risks 6: 2-26.

Loeppky, Jason L., Jerome Sacks, and William J. Welch. 2009. Choosing the sample size of a computer experiment: A practical guide. Technometrics 51: 366-76.

Longstaff, Francis A., and Eduardo S. Schwartz. 2001. Valuing American options by simulation: A simple least-squares approach. The Review of Financial Studies 14: 113-47.

McDonald, James B. 1984. Some generalized functions for the size distribution of income. Econometrica 52: 647-63.

Minasny, Budiman, and Alex B. McBratney. 2006. A conditioned Latin hypercube method for sampling in the presence of ancillary information. Computers & Geosciences 32: 1378-88.

Sun, Jiafeng, Edward W. Frees, and Marjorie A. Rosenberg. 2008. Heavy-tailed longitudinal data modeling using copulas. Insurance: Mathematics and Economics 42: 817-30.

Tilley, James A. 1993. Valuing American options in a path simulation model. Transactions of the Society of Actuaries 45: 499-520.
