Time series with infinite-order partial copula dependence

1 Introduction

The principal aim of this article is to show that the s-vine (or stationary d-vine) decomposition of a joint density provides a very natural vehicle for generalizing the class of stationary Gaussian time series to permit both non-Gaussian marginal behaviour and non-linear and non-Gaussian serial dependence behaviour. In particular, this approach provides a route to defining a rich class of tractable non-Gaussian ARMA and ARFIMA processes; the resulting models have the potential to offer improved statistical fits in any application where classical ARMA models or their long-memory ARFIMA extensions are used.

Vine models of dependence have been developed in a series of publications [1,6,7,8,24,25,28,42]. There are a number of different configurations for vines, but the most suitable one for longitudinal data applications is the d-vine, which is able to describe the strict stationarity of a random vector under some additional translation-invariance restrictions on the vine structure. A recent paper by Nagler et al. [36] investigated the vine structures that can be used to construct stationary multivariate time series. The results of Nagler et al. imply that, for univariate applications, the d-vine is in fact the only structure for which translation-invariance restrictions are sufficient to guarantee stationarity; we follow them in referring to these restricted d-vines as stationary vines, or s-vines.

Vine models are best understood as copula models of dependence, and there is now a large literature on copula models for time series. While the main focus of much of this literature has been on cross-sectional dependence between multiple time series, there is also a growing literature on modelling serial dependence within single series and lagged dependence across series. First-order Markov copula models [3,12,15,17] are simple examples of s-vine processes. A number of authors have written on higher-order Markov extensions for univariate or multivariate series [4,10,22,30,36,43]. There is also literature showing how these models may be adapted to the particular requirements of time series showing stochastic volatility, including the mixture-copula approach of Loaiza-Maya et al. [30] and the v-transform approach of McNeil and Bladt [9,33].

This article makes the following novel contributions to the development of time series models based on vine copulas. First, we suggest how s-vine models may be generalized to infinite order, and we propose accompanying generalizations of the classical concepts of causality and invertibility for linear processes that may be applied to s-vine processes. Second, we provide additional insight into the issues of stability and ergodicity for s-vine processes, and we show how finite or infinite copula sequences may be used to develop non-linear filters of independent noise that generalize linear filters. Finally, we propose a practical and parsimonious approach to building s-vine processes in which copula sequences are parameterized by a function that we call the Kendall partial autocorrelation function; the latter may be borrowed from other well-known processes, such as Gaussian ARMA or ARFIMA processes, thus yielding natural non-Gaussian analogues of these models.

We believe that our approach may serve as a useful framework to facilitate further study in the field.
Several interesting theoretical questions remain, particularly relating to necessary and sufficient conditions for the stability of models based on infinite copula sequences, as well as the interplay of copula sequences and long memory. However, on the practical side, the models are already eminently usable; methods exist for estimation and random number generation, and we suggest some new ideas for model validation using residuals. An example shows the benefits that may arise from using these models.

This article is structured as follows. Section 2 sets out notation and basic concepts and makes the connection between s-vine copulas and s-vine processes; key objects in the development of processes are sequences of functions that we refer to as Rosenblatt functions. In Section 3, we show that finite-order s-vine processes are Markov chains belonging to the particular sub-category of non-linear state-space models. Section 4 explains why Gaussian processes form a sub-class of s-vine processes and shows how the classical theory for linear processes may be reinterpreted as a theory of the behaviour of Rosenblatt functions. Section 5 uses the Gaussian analogy to suggest requirements for stable, infinite-order, non-Gaussian s-vine processes; a practical approach to model building is developed and illustrated with an application to macroeconomic data. Section 6 concludes. Proofs can be found in Appendix A, while additional material on the Markov chain analysis of finite-order processes is collected in Appendix B.

2 S-vine processes

2.1 S-vine copulas

If a random vector $(X_1,\ldots,X_n)$ admits a joint density $f(x_1,\ldots,x_n)$, then the latter may be decomposed as a d-vine. Writing $f_{X_i}$ for the marginal density of $X_i$, the decomposition is

(1)
\[
f(x_1,\ldots,x_n) = \left(\prod_{i=1}^{n} f_{X_i}(x_i)\right) \prod_{k=1}^{n-1}\prod_{j=k+1}^{n} c_{j-k,j\mid S_{j-k,j}}\big(F_{j-k\mid S_{j-k,j}}(x_{j-k}),\, F_{j\mid S_{j-k,j}}(x_j)\big),
\]

where $S_{j-k,j} = \{j-k+1,\ldots,j-1\}$ is the set of indices of the variables which lie between $X_{j-k}$ and $X_j$, $c_{j-k,j\mid S_{j-k,j}}$ is the density of the bivariate copula $C_{j-k,j\mid S_{j-k,j}}$ of the joint distribution function (df) of $X_{j-k}$ and $X_j$ conditional on the intermediate variables $X_{j-k+1},\ldots,X_{j-1}$, and

(2)
\[
F_{i\mid S_{j-k,j}}(x) = \mathbb{P}(X_i \leqslant x \mid X_{j-k+1}=x_{j-k+1},\ldots,X_{j-1}=x_{j-1}), \quad i \in \{j-k, j\},
\]

denotes the conditional df of variable $i$ conditional on these variables; note that $S_{j-1,j} = \varnothing$, and so the conditioning set is dropped in this case.
The decomposition in Eq. (1) implies a decomposition of the density $c(u_1,\ldots,u_n)$ of the unique copula of $(X_1,\ldots,X_n)$, which is given implicitly by

(3)
\[
c(F_1(x_1),\ldots,F_n(x_n)) = \prod_{k=1}^{n-1}\prod_{j=k+1}^{n} c_{j-k,j\mid S_{j-k,j}}\big(F_{j-k\mid S_{j-k,j}}(x_{j-k}),\, F_{j\mid S_{j-k,j}}(x_j)\big).
\]

In practical applications, interest centres on models that admit the simplified d-vine decomposition, in which the copula densities $c_{j-k,j\mid S_{j-k,j}}$ do not depend on the values of variables in the conditioning set $S_{j-k,j}$ and we can simply write $c_{j-k,j}$. Any set of copula densities $\{c_{j-k,j} : 1 \leqslant k \leqslant n-1,\, k+1 \leqslant j \leqslant n\}$ and any set of marginal densities $f_{X_i}$ may be used in the simplified version of (1) to create a valid $n$-dimensional joint density. A number of papers have examined the limitations imposed by working with simplified vine copula models [20,35,44,45]. In Mroz et al. [35], it is shown that the class of simplified vines is not dense in the space of copulas for a number of metrics, including the one induced by total variation distance. These results may be interpreted as showing that there exist multivariate distributions that are difficult to approximate with simplified d-vines. However, the simplified d-vine construction still greatly enlarges the class of tractable densities for time series applications.

We are interested in strictly stationary stochastic processes whose higher-dimensional marginal distributions are simplified d-vines. As well as forcing $f_{X_1} = \cdots = f_{X_n}$, this requirement imposes translation-invariance conditions on the copula densities $c_{j-k,j}$ and conditional dfs $F_{\cdot\mid S_{j-k,j}}$ appearing in the simplified form of Eq. (1). It must be the case that $c_{j-k,j}$ is the same for all $j \in \{k+1,\ldots,n\}$, and so each pair copula density in the model can be associated with a lag $k$ and we can write $c_k := c_{j-k,j}$, where $c_k$ is the density of some bivariate copula $C_k$.
The conditional dfs can be represented by two sets of functions $R_k^{(1)} : (0,1)^k \times (0,1) \to (0,1)$ and $R_k^{(2)} : (0,1)^k \times (0,1) \to (0,1)$, which are defined in a recursive, interlacing fashion by $R_1^{(1)}(u,x) = h_1^{(1)}(u,x)$, $R_1^{(2)}(u,x) = h_1^{(2)}(x,u)$ and, for $k \geqslant 2$,

(4)
\[
\begin{aligned}
R_k^{(1)}(\mathbf{u},x) &= h_k^{(1)}\big(R_{k-1}^{(2)}(\mathbf{u}_{-1},u_1),\, R_{k-1}^{(1)}(\mathbf{u}_{-1},x)\big),\\
R_k^{(2)}(\mathbf{u},x) &= h_k^{(2)}\big(R_{k-1}^{(2)}(\mathbf{u}_{-k},x),\, R_{k-1}^{(1)}(\mathbf{u}_{-k},u_k)\big),
\end{aligned}
\]

where $h_k^{(i)}(u_1,u_2) = \frac{\partial}{\partial u_i} C_k(u_1,u_2)$ and $\mathbf{u}_{-i}$ indicates the vector $\mathbf{u}$ with the $i$th component removed.

By using this new notation, we obtain a simplified form of Eq. (1) in which the density of the copula $c$ in Eq. (3) takes the form

(5)
\[
c_{(n)}(u_1,\ldots,u_n) = \prod_{k=1}^{n-1}\prod_{j=k+1}^{n} c_k\big(R_{k-1}^{(2)}(\mathbf{u}_{[j-k+1,j-1]}, u_{j-k}),\, R_{k-1}^{(1)}(\mathbf{u}_{[j-k+1,j-1]}, u_j)\big),
\]

where $\mathbf{u}_{[j-k+1,j-1]} = (u_{j-k+1},\ldots,u_{j-1})^\top$. Note that, for simplicity of formulas, we abuse notation by including terms involving $R_0^{(1)}$ and $R_0^{(2)}$; these terms should be interpreted as $R_0^{(1)}(\cdot,u) = R_0^{(2)}(\cdot,u) = u$ for all $u$. Following Nagler et al. [36], we refer to a model with copula density of the form Eq. (5) as a stationary d-vine or s-vine.

If a random vector $(U_1,\ldots,U_n)$ follows the copula $C_{(n)}$ with density $c_{(n)}$ in Eq. (5), then for any $k \in \{1,\ldots,n-1\}$ and $j \in \{k+1,\ldots,n\}$, we have

(6)
\[
\begin{aligned}
R_k^{(1)}(\mathbf{u},x) &= \mathbb{P}(U_j \leqslant x \mid U_{j-k}=u_1,\ldots,U_{j-1}=u_k),\\
R_k^{(2)}(\mathbf{u},x) &= \mathbb{P}(U_{j-k} \leqslant x \mid U_{j-k+1}=u_1,\ldots,U_j=u_k),
\end{aligned}
\]

and we refer to the conditional distribution functions $R_k^{(1)}$ and $R_k^{(2)}$ as forward and backward Rosenblatt functions. Henceforth, we will often drop the superscript from the forward function and simply write $R_k = R_k^{(1)}$ to obtain less notationally cumbersome expressions.
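The recursion in Eq. (4) can be transcribed directly into code once the h-functions of the pair copulas are available. The following Python sketch (our illustration, not code from the paper; all names are ours) does this for a Gaussian pair-copula sequence $C_k = C^{\text{Ga}}_{\alpha_k}$, whose h-function has a well-known closed form; exchangeability of the Gaussian copula is used to express $h^{(2)}$ in terms of $h^{(1)}$.

```python
# Sketch: forward/backward Rosenblatt functions of Eq. (4) for a Gaussian
# pair-copula sequence C_k = C^Ga_{alpha_k}.  A direct transcription of the
# recursion (exponential in k, illustration only); names are ours.
import numpy as np
from scipy.stats import norm

def h_gauss(u1, u2, alpha):
    """h^{(1)} of a Gaussian copula: P(U2 <= u2 | U1 = u1)."""
    x1, x2 = norm.ppf(u1), norm.ppf(u2)
    return norm.cdf((x2 - alpha * x1) / np.sqrt(1.0 - alpha ** 2))

def rosenblatt(u, x, alphas):
    """Return (R_k^{(1)}(u, x), R_k^{(2)}(u, x)) for k = len(u), Eq. (4)."""
    u = list(u)
    k = len(u)
    if k == 0:                       # convention: R_0^{(1)} = R_0^{(2)} = id
        return x, x
    a = alphas[k - 1]
    if k == 1:                       # base case; h^{(2)}(x, u) = h^{(1)}(u, x)
        r = h_gauss(u[0], x, a)      # for the exchangeable Gaussian copula
        return r, r
    fwd_b = rosenblatt(u[1:], u[0], alphas)[1]    # R_{k-1}^{(2)}(u_{-1}, u_1)
    fwd_f = rosenblatt(u[1:], x, alphas)[0]       # R_{k-1}^{(1)}(u_{-1}, x)
    bwd_b = rosenblatt(u[:-1], x, alphas)[1]      # R_{k-1}^{(2)}(u_{-k}, x)
    bwd_f = rosenblatt(u[:-1], u[-1], alphas)[0]  # R_{k-1}^{(1)}(u_{-k}, u_k)
    return h_gauss(fwd_b, fwd_f, a), h_gauss(bwd_f, bwd_b, a)
```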
The conditional densities corresponding to the Rosenblatt functions may be derived from Eq. (5). Writing $f_k$ for the density of the forward Rosenblatt functions, we obtain $f_1(u,x) = c_{(2)}(u,x) = c_1(u,x)$ and, for $k > 1$,

(7)
\[
f_k(\mathbf{u},x) = \frac{c_{(k+1)}(u_1,\ldots,u_k,x)}{c_{(k)}(u_1,\ldots,u_k)} = \prod_{j=1}^{k} c_j\big(R_{j-1}^{(2)}(\mathbf{u}_{[k-j+2,k]}, u_{k-j+1}),\, R_{j-1}(\mathbf{u}_{[k-j+2,k]}, x)\big).
\]

The following assumption will be in force throughout the remainder of the paper.

Assumption 1. All copulas $C_k$ used in the construction of s-vine models belong to the class $\mathcal{C}^\infty$ of smooth functions with continuous partial derivatives of all orders. Moreover, their densities $c_k$ are strictly positive on $(0,1)^2$.

This assumption applies to all the standard pair copulas that are used in vine copula models (e.g., Gauss, Clayton, Gumbel, Frank, Joe, and t), as well as non-exchangeable extensions [29] or mixtures of copulas [30]. It ensures, among other things, that for fixed $\mathbf{u}$, the Rosenblatt functions are bijections on $(0,1)$ with well-defined inverses. Let us write $R_k^{-1}(\mathbf{u},z)$ for the inverses of the Rosenblatt forward functions, satisfying $R_k^{-1}(\mathbf{u},z) = x$ if and only if $R_k(\mathbf{u},x) = z$. Inverses can also be defined for the Rosenblatt backward functions but will not be explicitly needed.

In the sequel, we refer to the copulas $C_k$ as partial copulas. They should be distinguished from the bivariate marginal copulas given by $C^{(k)}(u,v) = \mathbb{P}(U_{j-k} \leqslant u,\, U_j \leqslant v)$ for any $j \in \{k+1,\ldots,n\}$. The two copulas are related by the formula

(8)
\[
\begin{aligned}
C^{(k)}(v_1,v_2) &= \mathbb{E}\big(\mathbb{P}(U_{j-k} \leqslant v_1,\, U_j \leqslant v_2 \mid U_{j-k+1},\ldots,U_{j-1})\big)\\
&= \mathbb{E}\big(C_k\big(R_{k-1}^{(2)}((U_{j-k+1},\ldots,U_{j-1})^\top, v_1),\, R_{k-1}((U_{j-k+1},\ldots,U_{j-1})^\top, v_2)\big)\big)\\
&= \int_0^1 \cdots \int_0^1 C_k\big(R_{k-1}^{(2)}(\mathbf{u},v_1),\, R_{k-1}(\mathbf{u},v_2)\big)\, c_{(k-1)}(\mathbf{u})\, \mathrm{d}u_1 \cdots \mathrm{d}u_{k-1}.
\end{aligned}
\]
2.2 S-vine processes

We use the following general definition for an s-vine process.

Definition 1 (S-vine process). A strictly stationary time series $(X_t)_{t\in\mathbb{Z}}$ is an s-vine process if for every $t\in\mathbb{Z}$ and $n \geqslant 2$ the $n$-dimensional marginal distribution of the vector $(X_t,\ldots,X_{t+n-1})$ is absolutely continuous and admits a unique copula $C_{(n)}$ with a joint density $c_{(n)}$ of the form in Eq. (5). An s-vine process $(U_t)_{t\in\mathbb{Z}}$ is an s-vine copula process if its univariate marginal distribution is standard uniform.

Our aim is to construct processes that conform to this definition and investigate their properties and practical application. Since s-vine processes can be endowed with any continuous univariate marginal distribution $f_X$, we will mostly investigate the properties of s-vine copula processes.

2.3 A note on reversibility

It is particularly common in applications of vine copulas to confine interest to standard exchangeable copulas $C_k$. In this case, the resulting s-vine processes have the property of reversibility. For any $\mathbf{u} = (u_1,\ldots,u_n)^\top \in (0,1)^n$, let us write $\overline{\mathbf{u}} = (u_n,\ldots,u_1)^\top$ for the reversed vector.

Definition 2. An s-vine copula process is reversible if for any $n \geqslant 2$ the higher-dimensional marginal copulas satisfy $C_{(n)}(\mathbf{u}) = C_{(n)}(\overline{\mathbf{u}})$.

This is equivalent to saying that, for any $t,s\in\mathbb{Z}$ and any $n > 2$, the vector of consecutive variables $(U_{t+1},\ldots,U_{t+n})$ from the process has the same distribution as the reversed vector $(U_{s+n},\ldots,U_{s+1})$. The process evolves forwards and backwards in a similar fashion, which may not be ideal for phenomena in which there is a clear temporal notion of causality; however, as soon as non-exchangeable copulas are included, the reversibility is broken. In summary, we have the following simple result.

Proposition 1. If a copula sequence $(C_k)_{k\in\mathbb{N}}$ consists of exchangeable copulas, then (i) the Rosenblatt forward and backward functions satisfy $R_k^{(2)}(\overline{\mathbf{u}},x) = R_k(\mathbf{u},x)$ for all $(\mathbf{u},x) \in (0,1)^k \times (0,1)$, and (ii) the resulting s-vine copula process is reversible.

3 S-vine processes of finite order

3.1 Markov construction

The first class of processes we consider are s-vine copula processes of finite order $p$, which are constructed from a set of copulas $\{C_1,\ldots,C_p\}$ using the Markov approach described by Joe ([27], p. 145). Starting from a series of iid uniform innovation variables $(Z_k)_{k\in\mathbb{N}}$, we can set $U_1 = Z_1$ and

(9)
\[
U_k = R_{k-1}^{-1}\big((U_1,\ldots,U_{k-1})^\top, Z_k\big), \quad k \geqslant 2.
\]

By using the inverses of the Rosenblatt forward functions, we obtain, for any $n$, a random vector $(U_1,\ldots,U_n)$ which forms a finite realization from an s-vine process $(U_t)_{t\in\mathbb{Z}}$. The copula $C_{(n)}$ of $(U_1,\ldots,U_n)$ has density $c_{(n)}$ in Eq. (5), but the copula densities $c_k$ appearing in this expression satisfy $c_k(u,v) = 1$ for $k > p$, and the s-vine is said to be truncated at order $p$. Moreover, since $h_k^{(1)}(u,v) = v$ for $k > p$, it follows from Eq. (4) that $R_k(\mathbf{u},x) = R_{k-1}(\mathbf{u}_{-1},x) = \cdots = R_p(\mathbf{u}_{[k-p+1,k]},x)$, and the updating Eq. (9) satisfies

(10)
\[
U_k = R_p^{-1}\big((U_{k-p},\ldots,U_{k-1})^\top, Z_k\big), \quad k > p,
\]

showing the Markovian character of the finite-order process.
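Eqs. (9) and (10) translate directly into a simulation algorithm: each new value is obtained by inverting the forward Rosenblatt function in its last argument. A minimal sketch follows (ours, not the paper's code; it reuses the rosenblatt helper above and inverts numerically with a root finder, so it is practical only for small orders).

```python
# Sketch: simulate a finite-order s-vine copula process via Eqs. (9)-(10),
# inverting the forward Rosenblatt function numerically with a root finder.
# Reuses the rosenblatt() helper above; keep p small, since that helper is
# a direct (exponential) transcription of Eq. (4).  Names are ours.
import numpy as np
from scipy.optimize import brentq

def simulate_svine(alphas, n, seed=None):
    """Simulate U_1,...,U_n from the order-p s-vine with Gaussian pair
    copulas C_k = C^Ga_{alpha_k}, where p = len(alphas)."""
    rng = np.random.default_rng(seed)
    p, z = len(alphas), rng.uniform(size=n)
    u = [z[0]]                                    # U_1 = Z_1
    for k in range(1, n):
        past = u[max(0, k - p):k]                 # last min(k, p) values
        # solve R(past, x) = z_k for x in (0, 1), cf. Eqs. (9)-(10)
        u.append(brentq(lambda x: rosenblatt(past, x, alphas)[0] - z[k],
                        1e-12, 1.0 - 1e-12))
    return np.array(u)

# e.g. u = simulate_svine(alphas=[0.5, 0.3], n=1000)
```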
The recursive nature of the construction in Eq. (9) means that there is an implied set of functions, which we will label $S_k : (0,1)^k \times (0,1) \to (0,1)$ for $k\in\mathbb{N}$, such that

(11)
\[
U_k = S_{k-1}\big((Z_1,\ldots,Z_{k-1})^\top, Z_k\big), \quad k \geqslant 2.
\]

The functions $(S_k)_{k\in\mathbb{N}}$ satisfy $S_1(z_1,x) = R_1^{-1}(z_1,x)$ and

(12)
\[
S_k(\mathbf{z},x) = R_k^{-1}\big((z_1,\, S_1(z_1,z_2),\ldots,S_{k-1}(\mathbf{z}_{[1,k-1]},z_k)),\, x\big), \quad k \geqslant 2.
\]

The identity in Eq. (11) can be thought of as a causal representation of the process, while the complementary identity $Z_k = R_{k-1}((U_1,\ldots,U_{k-1})^\top, U_k)$ implied by Eq. (9) can be thought of as an invertible representation. We refer to the functions $(S_k)_{k\in\mathbb{N}}$ as Rosenblatt inverse functions; they should be distinguished from the inverses of the Rosenblatt forward functions.

3.2 Non-linear state space model

The s-vine process of order $p$ can be viewed as a $p$-dimensional Markov chain with state space $\mathcal{X} = (0,1)^p$. It is standard to treat Markov chains as being indexed by the natural numbers. To that end, for $t\in\mathbb{N}$, we introduce the vector-valued process $\mathbf{U}_t = (U_t,\ldots,U_{t+p-1})^\top$, starting at $\mathbf{U}_1 = (U_1,\ldots,U_p)^\top$, defined by the updating equation $\mathbf{U}_t = F(\mathbf{U}_{t-1}, Z_t)$, where

(13)
\[
F : (0,1)^p \times (0,1) \to (0,1)^p, \quad F(\mathbf{u},z) = \big(u_2,\ldots,u_p,\, R_p^{-1}(\mathbf{u},z)\big).
\]

The Markov chain described by Eq. (13) defines a non-linear state space (NSS) model conforming exactly to the assumptions imposed in Meyn and Tweedie ([34], Section 2.2.2): under Assumption 1, the updating function $F$ is a smooth ($\mathcal{C}^\infty$) function; the state space $\mathcal{X} = (0,1)^p$ is an open subset of $\mathbb{R}^p$; and the uniform distribution of the innovations $(Z_t)$ is supported on the open set $(0,1)$.

Using standard arguments, the NSS model associated with Eq. (13) can be shown to be a $\phi$-irreducible, aperiodic Harris recurrent Markov chain and to admit an invariant probability measure $\pi$, which is the measure implied by the density $c_{(p)}$ given by Eq. (5); we summarise the arguments in Appendix B. This in turn allows the ergodic theorem for Harris chains to be applied ([34], Theorem 13.3.3) to conclude that, for any initial measure $\lambda$, the Markov transition kernel $\mathsf{P}(\mathbf{x},\cdot)$ satisfies

\[
\left\| \int \lambda(\mathrm{d}\mathbf{x})\, \mathsf{P}^n(\mathbf{x},\cdot) - \pi(\cdot) \right\| \to 0, \quad n \to \infty,
\]

where $\|\cdot\|$ denotes the total variation norm. This is also sufficient for the strong law of large numbers (SLLN) to hold ([34], Theorem 17.0.1): for a function $g : \mathbb{R}^p \to \mathbb{R}$, if we define $S_n(g) = \sum_{k=1}^{n} g(\mathbf{U}_k)$ and $\pi(g) = \int g(\mathbf{u})\, c_{(p)}(\mathbf{u})\, \mathrm{d}\mathbf{u}$, then $\lim_{n\to\infty} n^{-1} S_n(g) = \pi(g)$, almost surely, provided $\pi(|g|) < \infty$.
Although the Markov models are ergodic, we caution that they can exhibit some very extreme behaviour, albeit for copula choices that we are unlikely to encounter in practice. Figure 1 shows a realisation of 10,000 simulated values from a process of order $p = 3$, in which $C_1$ is a 180-degree rotated Clayton copula with parameter $\theta = 2$, $C_2$ is a Clayton copula with $\theta = 2$, and $C_3$ is a rotated Clayton copula with $\theta = 4$. Since the Clayton copula is well known to have lower tail dependence [25,27], this means that $C_1$ and $C_3$ have upper tail dependence and $C_3$ is more strongly dependent than $C_1$ and $C_2$. This increasing pattern of partial dependence, coupled with the strong upper tail dependence of $C_3$, leads to a period of over 1,500 successive values which are all greater than 0.6. An observer of this process who plots a histogram of the values in this period would have difficulty believing that the marginal distribution is uniform.

Figure 1: Realisation of 10,000 simulated values from a process of order $p = 3$ in which $C_1$ is a $180^\circ$ rotated Clayton copula with parameter $\theta = 2$, $C_2$ is a Clayton copula with $\theta = 2$, and $C_3$ is a rotated Clayton copula with $\theta = 4$.

This phenomenon is connected to rates of mixing behaviour and ergodic convergence for Markov processes. There is some literature for the case $p = 1$ in which these rates are shown to vary with the choice of copula and, in particular, its behaviour in joint tail regions [3,5,12,13,31]. For some results relevant to the case where $p > 1$, see Rémillard et al. [39].

4 Gaussian processes

Gaussian processes are processes whose finite-dimensional marginal distributions are multivariate Gaussian. We will identify the term Gaussian processes with non-singular Gaussian processes throughout; i.e., we assume that the finite-dimensional marginal distributions of Gaussian processes have invertible covariance matrices and admit joint densities. Such processes represent a subclass of the s-vine processes.

Proposition 2. (1) Every stationary Gaussian process is an s-vine process. (2) Every s-vine process in which the pair copulas of the sequence $(C_k)_{k\in\mathbb{N}}$ are Gaussian and the marginal distribution $F_X$ is Gaussian is a Gaussian process.

4.1 S-vine representations of Gaussian processes

The first implication of Proposition 2 is that every Gaussian process has a unique s-vine-copula representation. This insight offers methods for constructing or simulating such processes as generic s-vine processes using Eq. (9) and estimating them using a likelihood based on Eq. (5).

Let $(X_t)_{t\in\mathbb{N}}$ be a stationary Gaussian process with mean $\mu_X$, variance $\sigma_X^2$, and autocorrelation function (acf) $(\rho_k)_{k\in\mathbb{N}}$; these three quantities uniquely determine a Gaussian process.
We assume the following:

Assumption 2. The acf $(\rho_k)_{k\in\mathbb{N}}$ satisfies $\rho_k \to 0$ as $k \to \infty$.

It is well known that this is a necessary and sufficient condition for a Gaussian process $(X_t)$ to be a mixing process and therefore ergodic [14,32].

The acf uniquely determines the partial autocorrelation function (pacf) $(\alpha_k)_{k\in\mathbb{N}}$ through a one-to-one transformation [2,38]. Since the partial autocorrelation of a Gaussian process is the correlation of the conditional distribution of $(X_{t-k}, X_t)$ given the intervening variables, the pair copulas in the s-vine copula representation are given by $C_k = C^{\text{Ga}}_{\alpha_k}$.

For $k\in\mathbb{N}$, let $\boldsymbol{\rho}_k = (\rho_1,\ldots,\rho_k)^\top$ and let $P_k$ denote the correlation matrix of $(X_1,\ldots,X_k)$. Clearly, $P_1 = 1$ and, for $k > 1$, $P_k$ is a symmetric Toeplitz matrix whose diagonals are filled by the first $k-1$ elements of $\boldsymbol{\rho}_k$; moreover, $P_k$ is non-singular for all $k$ under Assumption 2 ([11], Proposition 4). The one-to-one series of recursive transformations relating $(\alpha_k)_{k\in\mathbb{N}}$ to $(\rho_k)_{k\in\mathbb{N}}$ is $\alpha_1 = \rho_1$ and, for $k > 1$,

(14)
\[
\alpha_k = \frac{\rho_k - \boldsymbol{\rho}_{k-1}^\top P_{k-1}^{-1} \overline{\boldsymbol{\rho}}_{k-1}}{1 - \boldsymbol{\rho}_{k-1}^\top P_{k-1}^{-1} \boldsymbol{\rho}_{k-1}}, \qquad \rho_k = \alpha_k\big(1 - \boldsymbol{\rho}_{k-1}^\top P_{k-1}^{-1} \boldsymbol{\rho}_{k-1}\big) + \boldsymbol{\rho}_{k-1}^\top P_{k-1}^{-1} \overline{\boldsymbol{\rho}}_{k-1};
\]

see, for example, Joe [26] or the Durbin–Levinson algorithm ([11], Proposition 5.2.1).

Remark 1. Note that the restriction to non-singular Gaussian processes ensures that $|\rho_k| < 1$ and $|\alpha_k| < 1$ for all $k\in\mathbb{N}$, and this is henceforth always assumed.

We review three examples of well-known Gaussian processes from the point of view of s-vine processes.

Example 1 (Gaussian ARMA models). Any causal Gaussian ARMA($p$,$q$) model may be represented as an s-vine process, and full maximum likelihood estimation can be carried out using a joint density based on Eq. (5). If $\boldsymbol{\phi} = (\phi_1,\ldots,\phi_p)^\top$ and $\boldsymbol{\psi} = (\psi_1,\ldots,\psi_q)^\top$ denote the AR and MA parameters and $\rho_k(\boldsymbol{\phi},\boldsymbol{\psi})$ the acf, then we can use the transformation in Eq. (14) to parameterize Eq. (5) in terms of $\boldsymbol{\phi}$ and $\boldsymbol{\psi}$ using Gaussian pair copulas $C_k = C^{\text{Ga}}_{\alpha_k(\boldsymbol{\phi},\boldsymbol{\psi})}$. In practice, this approach is more of theoretical interest, since standard estimation methods are generally much faster.

Example 2 (Fractional Gaussian noise (FGN)). This process has acf given by

\[
\rho_k(H) = \tfrac{1}{2}\big((k+1)^{2H} + (k-1)^{2H} - 2k^{2H}\big), \quad 0 < H < 1,
\]

where $H$ is the Hurst exponent [41]. Thus, the transformation in Eq. (14) may be used to parameterize Eq. (5) in terms of $H$ using Gaussian pair copulas $C_k = C^{\text{Ga}}_{\alpha_k(H)}$, and the FGN model may be fitted to data as an s-vine process and $H$ estimated.
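Both directions of Eq. (14) can be implemented literally with Toeplitz linear algebra. The sketch below is our own illustration (a Durbin–Levinson recursion would avoid the explicit solves), demonstrated on the FGN acf just given.

```python
# Sketch: the acf <-> pacf transformations of Eq. (14), implemented
# literally via the Toeplitz matrices P_k.  Names are ours.
import numpy as np
from scipy.linalg import solve, toeplitz

def acf_to_pacf(rho):
    """Map (rho_1, ..., rho_m) to (alpha_1, ..., alpha_m) via Eq. (14)."""
    rho = np.asarray(rho, dtype=float)
    alpha = [rho[0]]
    for k in range(2, len(rho) + 1):
        r = rho[:k - 1]
        P = toeplitz(np.r_[1.0, rho[:k - 2]])     # P_{k-1}
        num = rho[k - 1] - r @ solve(P, r[::-1])  # rho_k - rho' P^{-1} rho-bar
        alpha.append(num / (1.0 - r @ solve(P, r)))
    return np.array(alpha)

def pacf_to_acf(alpha):
    """Invert the map: recover (rho_1, ..., rho_m) from the pacf."""
    rho = [alpha[0]]
    for k in range(2, len(alpha) + 1):
        r = np.array(rho)
        P = toeplitz(np.r_[1.0, rho[:k - 2]])
        w = solve(P, r[::-1])
        rho.append(alpha[k - 1] * (1.0 - r @ solve(P, r)) + r @ w)
    return np.array(rho)

# e.g. the pacf of fractional Gaussian noise with Hurst exponent H = 0.7:
H, k = 0.7, np.arange(1, 51)
rho_fgn = 0.5 * ((k + 1.0) ** (2 * H) + np.abs(k - 1.0) ** (2 * H)
                 - 2.0 * k ** (2 * H))
alpha_fgn = acf_to_pacf(rho_fgn)
```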
Example 3 (Gaussian ARFIMA models). The ARFIMA($p,d,q$) model with $-1/2 < d < 1/2$ can be handled in a similar way to the ARMA($p,q$) model, of which it is a generalization. In the case where $p = q = 0$, it has been shown [21] that

(15)
\[
\alpha_k = \frac{d}{k-d}, \quad k\in\mathbb{N};
\]

see also Brockwell and Davis ([11], Theorem 13.2.1). The simple closed-form expression for the pacf means that the ARFIMA($0,d,0$) model is even more convenient to treat as an s-vine than FGN; the two models are in fact very similar in behaviour, although not identical. It is interesting to note that the pacf is not summable, and similar behaviour holds for some other ARFIMA processes. For example, for $p,q \in \mathbb{N}\cup\{0\}$ and $0 < d < 1/2$, the pacf satisfies $|\alpha_k| \sim d/k$ as $k\to\infty$ [23].

4.2 New Gaussian processes from s-vines

A further implication of Proposition 2 is that it shows how we can create and estimate some new stationary and ergodic Gaussian processes without setting them up in the classical way using recurrence equations, lag operators, and Gaussian innovations. Instead, we choose sequences of Gaussian pair copulas $(C_k)$ parameterized by sequences of partial correlations $(\alpha_k)$.

As in the previous section, we can begin with a parametric form for the acf $\rho_k(\boldsymbol{\theta})$ such that $\rho_k(\boldsymbol{\theta}) \to 0$ as $k\to\infty$ and build the model using pair copulas parameterized by the parameters $\boldsymbol{\theta}$ of the implied pacf $\alpha_k(\boldsymbol{\theta})$. Alternatively, we can choose a parametric form for the pacf $\alpha_k(\boldsymbol{\theta})$ directly.

Any finite set of values $\{\alpha_1,\ldots,\alpha_p\}$ yields an AR($p$) model, which is a special case of the finite-order s-vine models of Section 3. However, infinite-order processes that satisfy Assumption 2 are more delicate to specify. A necessary condition is that the sequence $(\alpha_k)$ satisfies $\alpha_k \to 0$ as $k\to\infty$, but this is not sufficient. To see this, note that if $\alpha_k = (k+1)^{-1}$, the relationship (14) implies that $\rho_k = 0.5$ for all $k$, which violates Assumption 2. A sufficient condition follows from a result of Debowski [16], although, in view of Example 3, it is not a necessary condition:

Assumption 3. The pacf $(\alpha_k)_{k\in\mathbb{N}}$ satisfies $\sum_{k=1}^{\infty} |\alpha_k| < \infty$.

Debowski [16] showed that, if Assumption 3 holds, then the equality

(16)
\[
1 + 2\sum_{k=1}^{\infty} \rho_k = \prod_{k=1}^{\infty} \frac{1+\alpha_k}{1-\alpha_k}
\]

also holds. The rhs of Eq. (16) is a convergent product, since absolute summability ensures that the sums $\sum_{k=1}^{\infty} \ln(1\pm\alpha_k)$ converge. This implies the convergence of $\sum_{k=1}^{\infty} \rho_k$, which implies $\rho_k \to 0$, which in turn implies that Assumption 2 also holds, as we require.
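Eq. (16) is easy to check numerically for an absolutely summable pacf using the pacf_to_acf sketch above; for instance, with the illustrative geometric choice $\alpha_k = 0.5^k$ (our own check, not one from the paper):

```python
# Sketch: numerical check of Debowski's identity, Eq. (16), for the
# absolutely summable pacf alpha_k = 0.5^k (an illustrative choice).
import numpy as np

m = 200                                    # truncation point for the check
alpha = 0.5 ** np.arange(1, m + 1)
rho = pacf_to_acf(alpha)                   # Eq. (14), pacf -> acf direction
lhs = 1.0 + 2.0 * rho.sum()                # truncated lhs of Eq. (16)
rhs = np.prod((1.0 + alpha) / (1.0 - alpha))
print(lhs, rhs)                            # the two sides agree closely
```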
Assumption 3 still allows some quite pathological processes, as noted by Debowski [16]. For example, even for a finite-order AR($p$) process with $\alpha_k \geqslant a > 0$ for $k \in \{1,\ldots,p\}$ and $\alpha_k = 0$ for $k > p$, it follows that $\sum_{k=1}^{\infty} \rho_k \geqslant 0.5\big(((1+a)/(1-a))^p - 1\big)$, and this grows exponentially with $p$, leading to an exceptionally slow decay of the acf.

4.3 Rosenblatt functions for Gaussian processes

For Gaussian processes, the Rosenblatt functions and inverse Rosenblatt functions take relatively tractable forms.

Proposition 3. Let $(C_k)_{k\in\mathbb{N}}$ be a sequence of Gaussian pair copulas with parameters $(\alpha_k)_{k\in\mathbb{N}}$ and assume that Assumption 2 holds. The forward Rosenblatt functions are given by

(17)
\[
R_k(\mathbf{u},x) = \Phi\left(\frac{\Phi^{-1}(x) - \sum_{j=1}^{k} \phi_j^{(k)} \Phi^{-1}(u_{k+1-j})}{\sigma_k}\right),
\]

where $\sigma_k^2 = \prod_{j=1}^{k}(1-\alpha_j^2)$ and the coefficients $\phi_j^{(k)}$ are given recursively by

(18)
\[
\phi_j^{(k)} = \begin{cases} \phi_j^{(k-1)} - \alpha_k \phi_{k-j}^{(k-1)}, & j \in \{1,\ldots,k-1\},\\ \alpha_k, & j = k. \end{cases}
\]

The inverse Rosenblatt functions are given by

(19)
\[
S_k(\mathbf{z},x) = \Phi\left(\sigma_k \Phi^{-1}(x) + \sum_{j=1}^{k} \psi_j^{(k)} \Phi^{-1}(z_{k+1-j})\right),
\]

where the coefficients $\psi_j^{(k)}$ are given recursively by

(20)
\[
\psi_j^{(k)} = \sum_{i=1}^{j} \phi_i^{(k)} \psi_{j-i}^{(k-i)}, \quad j \in \{1,\ldots,k\},
\]

where $\psi_0^{(k)} = \sigma_k$ for $k \geqslant 1$ and $\psi_0^{(0)} = 1$.

We can analyse the behaviour of the Rosenblatt and inverse Rosenblatt functions as $k\to\infty$ in a number of different cases.
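The recursions (18) and (20) are straightforward to code. The following sketch (ours; the helper name is not from the paper) computes $\sigma_k$ and the triangular arrays $\phi_j^{(k)}$ and $\psi_j^{(k)}$ from a given pacf sequence and is reused in later illustrations.

```python
# Sketch: the coefficient recursions (18) and (20) of Proposition 3.
# Given a pacf (alpha_1, ..., alpha_m), compute sigma_k and the
# triangular coefficient arrays phi_j^{(k)} and psi_j^{(k)}.
import numpy as np

def proposition3_coefficients(alpha):
    alpha = np.asarray(alpha, dtype=float)
    m = len(alpha)
    sigma = np.sqrt(np.cumprod(1.0 - alpha ** 2))   # sigma_k, k = 1..m
    phi = {0: np.array([])}
    for k in range(1, m + 1):
        prev = phi[k - 1]
        # Eq. (18): phi_j^{(k)} = phi_j^{(k-1)} - alpha_k phi_{k-j}^{(k-1)},
        # with the new last coefficient phi_k^{(k)} = alpha_k
        phi[k] = np.r_[prev - alpha[k - 1] * prev[::-1], alpha[k - 1]]
    psi = {0: np.array([1.0])}                      # psi_0^{(0)} = 1
    for k in range(1, m + 1):
        row = np.empty(k + 1)
        row[0] = sigma[k - 1]                       # psi_0^{(k)} = sigma_k
        for j in range(1, k + 1):
            # Eq. (20): psi_j^{(k)} = sum_{i=1}^{j} phi_i^{(k)} psi_{j-i}^{(k-i)}
            row[j] = sum(phi[k][i - 1] * psi[k - i][j - i]
                         for i in range(1, j + 1))
        psi[k] = row
    return sigma, phi, psi
```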
4.3.1 Gaussian processes of finite order

In the case of a Gaussian s-vine process of finite order $p$, we have, for $k > p$, that $\alpha_k = 0$, $\sigma_k = \sigma_p$, and $\phi_j^{(k)} = \phi_j^{(p)}$. If $(U_k)_{k\in\mathbb{N}}$ is constructed from $(Z_k)_{k\in\mathbb{N}}$ using the algorithm described by Eq. (9), and if we make the substitutions $X_k = \Phi^{-1}(U_k)$ and $\varepsilon_k = \Phi^{-1}(Z_k)$ as in the proof of Proposition 3, then it follows from Eq. (17) that $X_k = \sum_{j=1}^{p} \phi_j^{(p)} X_{k-j} + \sigma_p \varepsilon_k$ for $k > p$, which is the classical recurrence equation that defines a Gaussian AR($p$) process; from Eqs. (11) and (19), we also have that $X_k = \sum_{j=1}^{k-1} \psi_j^{(k-1)} \varepsilon_{k-j} + \sigma_p \varepsilon_k$ for $k > p$. These two representations can be written in invertible and causal forms as follows:

(21)
\[
\varepsilon_k = \sum_{j=0}^{p} \tilde{\phi}_j^{(p)} X_{k-j} \quad\text{and}\quad X_k = \sum_{j=0}^{k-1} \psi_j^{(k-1)} \varepsilon_{k-j}, \quad k > p,
\]

where $\tilde{\phi}_0^{(p)} = 1/\sigma_p$, $\tilde{\phi}_j^{(p)} = -\phi_j^{(p)}/\sigma_p$ for $j \geqslant 1$, and $\psi_0^{(k-1)} = \sigma_p$.

The first series in Eq. (21) is clearly a finite series, while the classical theory is concerned with conditions on the AR coefficients $\tilde{\phi}_j^{(p)}$ that allow us to pass to an infinite-order moving-average representation as $k\to\infty$ in the second series. In fact, by setting up our Gaussian models using partial autocorrelations, causality in the classical sense is guaranteed; this follows as a special case of Theorem 1.

4.3.2 Gaussian processes with absolutely summable partial autocorrelations

We next consider a more general case, where the process may be of infinite order but Assumption 3 holds. To accommodate infinite-order models, we now work with a process $(U_t)_{t\in\mathbb{Z}}$ defined on the integers. The result that follows is effectively a restatement of a result by Debowski [16] in the particular context of Gaussian s-vine copula processes.

Theorem 1. Let $(U_t)_{t\in\mathbb{Z}}$ be a Gaussian s-vine copula process for which the parameters $(\alpha_k)_{k\in\mathbb{N}}$ of the Gaussian pair copula sequence $(C_k)_{k\in\mathbb{N}}$ satisfy Assumption 3. Then, for all $t$, we have the almost sure limiting representations

(22)
\[
U_t = \lim_{k\to\infty} S_k\big((Z_{t-k},\ldots,Z_{t-1})^\top, Z_t\big),
\]

(23)
\[
Z_t = \lim_{k\to\infty} R_k\big((U_{t-k},\ldots,U_{t-1})^\top, U_t\big),
\]

for an iid uniform innovation process $(Z_t)_{t\in\mathbb{Z}}$.

4.3.3 Long-memory ARFIMA processes

As noted earlier, the pacf of an ARFIMA($p,d,q$) model with $0 < d < 0.5$ is not absolutely summable [23], and so Theorem 1 does not apply in this case. Nevertheless, Brockwell and Davis ([11], Section 13.2) show that the Gaussian process has a causal representation of the form $X_t = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}$, where convergence is now in mean square and the coefficients are square summable, i.e., $\sum_{j=0}^{\infty} \psi_j^2 < \infty$. Since convergence in mean square implies convergence in probability, the continuous mapping theorem implies that a representation of the form $U_t = \lim_{k\to\infty} S_k((Z_{t-k},\ldots,Z_{t-1})^\top, Z_t)$ at least holds under convergence in probability.
4.3.4 A non-causal and non-invertible case

If $\alpha_k = 1/(k+1)$ for all $k$, then $\rho_k = 0.5$, and both Assumptions 2 and 3 are violated. It can be verified (for example, by induction) that the recursive formulas (18) and (20) imply that $\phi_j^{(k)} = 1/(k+1)$ and $\psi_j^{(k)} = \sigma_{k-j}/(k+2-j)$ for $j \geqslant 1$ (recall that $\psi_0^{(k)} = \sigma_k$). These coefficient sequences are unusual; the coefficients $\phi_j^{(k)}$ of the Rosenblatt function in Eq. (17) place equal weight on all past values $X_{k+1-j} = \Phi^{-1}(U_{k+1-j})$, while the coefficients $\psi_j^{(k)}$ that the inverse Rosenblatt function in Eq. (19) places on the innovations give weight $\psi_k^{(k)} = 1/2$ to the first value $\varepsilon_1 = \Phi^{-1}(Z_1)$ and decreasing weights to more recent values $\varepsilon_j$, $j > 1$.

As $k\to\infty$, we do have $\sigma_k^2 = \prod_{j=1}^{k}\big(1 - 1/(j+1)^2\big) \to 1/2$, but, for fixed $j \geqslant 1$, the terms $\phi_j^{(k)}$ and $\psi_j^{(k)}$ both converge to the trivial limiting value 0. In particular, we do not obtain a convergent limiting representation of the form in Eq. (22).
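These closed forms can be confirmed numerically with the proposition3_coefficients sketch above (an illustrative check of ours):

```python
# Sketch: numerical check of the closed forms of Section 4.3.4 for
# alpha_k = 1/(k+1), using the proposition3_coefficients helper above.
import numpy as np

alpha = 1.0 / (np.arange(1, 21) + 1.0)
sigma, phi, psi = proposition3_coefficients(alpha)
k = 10
print(phi[k])           # every entry equals 1/(k+1) = 1/11
print(psi[k][1:])       # psi_j^{(k)} = sigma_{k-j}/(k+2-j) for j >= 1
print(psi[k][k])        # weight on the oldest innovation: 1/2
```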
5 General s-vine processes

We now consider infinite-order s-vine copula processes constructed from general sequences $(C_k)_{k\in\mathbb{N}}$ of pair copulas.

5.1 Causality and invertibility

The key consideration for the stability of an infinite-order process is whether it admits a convergent causal representation. A process $(U_t)_{t\in\mathbb{Z}}$ with such a representation is a convergent non-linear filter of independent noise. It will have the property that $U_t$ and $U_{t-k}$ are independent in the limit as $k\to\infty$, implying mixing behaviour and ergodicity. We suggest the following definition of the causality and invertibility properties for a general s-vine process.

Definition 3. Let $(C_k)_{k\in\mathbb{N}}$ be a sequence of pair copulas and let $(R_k)_{k\in\mathbb{N}}$ and $(S_k)_{k\in\mathbb{N}}$ be the corresponding Rosenblatt forward functions and Rosenblatt inverse functions defined by Eqs. (4) and (12). An s-vine copula process $(U_t)_{t\in\mathbb{Z}}$ associated with the sequence $(C_k)_{k\in\mathbb{N}}$ is strongly causal if there exists a process of iid uniform random variables $(Z_t)_{t\in\mathbb{Z}}$ such that Eq. (22) holds almost surely for all $t$, and it is strongly invertible if the representation in Eq. (23) holds almost surely for all $t$. If convergence in Eqs. (22) and (23) only holds in probability, the process is weakly causal or weakly invertible.

We know that Gaussian ARMA processes defined as s-vine processes are always strongly causal (and invertible) and that the long-memory ARFIMA($p,d,q$) process with $0 < d < 0.5$ is weakly causal. When we consider sequences of Rosenblatt functions for sequences of non-Gaussian pair copulas, proving causality appears to be more challenging mathematically, since it is no longer a question of analysing the convergence of series. In the next section, we use simulations to conjecture that causality holds for a class of processes defined via the Kendall correlations of the copula sequence.

In a finite-order process, the copula sequence for any lag $k$ greater than the order $p$ consists of independence copulas; it seems intuitively clear that, to obtain an infinite-order process with a convergent causal representation, the partial copula sequence $(C_k)_{k\in\mathbb{N}}$ should converge to the independence copula $C^\perp$ as $k\to\infty$. However, in view of Example 4.3.4, this is not a sufficient condition, and the speed of convergence of the copula sequence is also important. Ideally, we require conditions on the speed of convergence $C_k \to C^\perp$ so that the marginal copula $C^{(k)}$ in Eq. (8) also tends to $C^\perp$; in that case, the variables $U_t$ and $U_{t-k}$ are asymptotically independent as $k\to\infty$, and mixing behaviour follows.

5.2 A practical approach to non-Gaussian s-vines

Suppose we take a sequence of pair copulas $(C_k)_{k\in\mathbb{N}}$ from some parametric family and parameterize them in such a way that (i) the copulas converge uniformly to the independence copula as $k\to\infty$ and (ii) the level of dependence of each copula $C_k$ is identical to that of a Gaussian pair copula sequence that gives rise to an ergodic Gaussian process. The intuition here is that, by sticking close to the pattern of decay of dependence in a well-behaved Gaussian process, we might hope to construct a stable causal process that is both mixing and ergodic.

A natural way of making "level of dependence" concrete is to consider the Kendall rank correlation function of the copula sequence, defined in the following way.

Definition 4. The Kendall partial autocorrelation function (kpacf) $(\tau_k)_{k\in\mathbb{N}}$ associated with a copula sequence $(C_k)_{k\in\mathbb{N}}$ is given by $\tau_k = \tau(C_k)$ for $k\in\mathbb{N}$, where $\tau(C)$ denotes the Kendall's tau coefficient for a copula $C$.

For a Gaussian copula sequence with $C_k = C^{\text{Ga}}_{\alpha_k}$, we have

(24)
\[
\tau_k = \frac{2}{\pi} \arcsin(\alpha_k).
\]

As in Section 4.2, suppose that $(\alpha_k(\boldsymbol{\theta}))_{k\in\mathbb{N}}$ is the pacf of a stationary and ergodic Gaussian process parametrized by the parameters $\boldsymbol{\theta}$, such as an ARMA or ARFIMA model; this implies a parametric form for the kpacf $(\tau_k(\boldsymbol{\theta}))_{k\in\mathbb{N}}$. The idea is to choose a sequence of non-Gaussian pair copulas that shares this kpacf.
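For single-parameter families, matching a target kpacf reduces to inverting the family's standard Kendall's tau formula (for example, $\tau = 1 - 1/\theta$ for Gumbel and $\tau = \theta/(\theta+2)$ for Clayton). A sketch of this step follows (ours; the function name is not from the paper); the treatment of negative $\tau_k$ is discussed next.

```python
# Sketch: match a target kpacf (tau_k) within single-parameter pair-copula
# families by inverting their standard Kendall's tau formulas:
# Gauss: tau = (2/pi) arcsin(alpha); Gumbel: tau = 1 - 1/theta (tau >= 0);
# Clayton: tau = theta/(theta + 2) (tau >= 0).  Names are ours.
import numpy as np

def kpacf_to_parameters(tau, family):
    tau = np.asarray(tau, dtype=float)
    if family == "gauss":
        return np.sin(0.5 * np.pi * tau)          # invert Eq. (24)
    if np.any(tau < 0):
        raise ValueError("negative tau: rotate the copula or substitute "
                         "a comprehensive family at these lags")
    if family == "gumbel":
        return 1.0 / (1.0 - tau)
    if family == "clayton":
        return 2.0 * tau / (1.0 - tau)
    raise ValueError("unknown family: " + family)

# e.g. give a Gumbel sequence the kpacf of an ARFIMA(0, d, 0) model:
d, k = 0.3, np.arange(1, 31)
alpha = d / (k - d)                               # Eq. (15)
tau = (2.0 / np.pi) * np.arcsin(alpha)            # Eq. (24)
theta_gumbel = kpacf_to_parameters(tau, "gumbel")
```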
A practical problem that may arise is that $\tau_k = \tau_k(\boldsymbol{\theta})$ can take any value in $(-1,1)$, while only certain copula families, such as Gauss and Frank, are said to be comprehensive, meaning that they can yield any value of $\tau_k$. If we wish to use, for example, a sequence of Gumbel copulas to build our model, then we need to find a solution for negative values of Kendall's tau. One possibility is to allow 90- or 270-degree rotations of the copula at negative values of $\tau_k$; another is to substitute a comprehensive copula at any position $k$ in the sequence at which $\tau_k$ is negative.

Remark 2. Note that the assumption that the pair copulas $C_k$ converge to the independence copula has implications for using $t$ copulas $C^t_{\nu,\alpha}$ in this approach. The terms of the copula sequence $C_k = C^t_{\nu_k,\alpha_k}$ would have to satisfy $\nu_k \to \infty$ and $\alpha_k \to 0$ as $k\to\infty$; the sequence given by $C_k = C^t_{\nu,\alpha_k}$ for fixed $\nu$ does not converge to the independence copula as $\alpha_k \to 0$. While the sequence $(\alpha_k)_{k\in\mathbb{N}}$ can be connected to the kpacf by the same formula (24), the sequence $(\nu_k)_{k\in\mathbb{N}}$ is not fixed by the kpacf. It is simpler in this approach to work with copula families with a single parameter, so that there is a one-to-one relationship between Kendall's tau and the copula parameter.

To compare the speed of convergence of the copula filter for different copula sequences sharing the same kpacf, we conduct some simulation experiments. For fixed $n$ and for a fixed realization $z_1,\ldots,z_n$ of independent uniform noise, we plot the points $(k, S_k(\mathbf{z}_{[n-k,n-1]}, z_n))$ for $k \in \{1,\ldots,n-1\}$. We expect the points to converge to a fixed value as $k \to n-1$, provided we take a sufficiently large value of $n$. When the copula sequence consists of Clayton copulas, we will refer to the model as a Clayton copula filter; similarly, Gumbel copulas yield a Gumbel copula filter; and so on. The following examples suggest that there are some differences in the convergence rates of the copula filters. This appears to relate to the tail dependence characteristics of the copulas [25,27]. We recall that the Gumbel and Joe copulas are upper tail dependent, while the Clayton copula is lower tail dependent; the Gauss and Frank copulas are tail independent. The filters based on sequences of tail-dependent copulas generally show slower convergence.

Example 4 (Non-Gaussian ARMA(1,1) models). In this example, we consider s-vine copula processes sharing the kpacf of the ARMA(1,1) model with autoregressive parameter 0.95 and moving-average parameter $-0.85$. Fixing $n = 201$, we obtain Figure 2. Convergence appears to be fastest for the Gaussian and Frank copula filters and slowest for the Clayton filter, followed by the Joe filter; the Gumbel filter is an intermediate case. We can also discern a tendency for jumps in the value of $S_k(\mathbf{z}_{[n-k,n-1]}, z_n)$ to be upward for the upper tail-dependent Gumbel and Joe copulas and downward for the lower tail-dependent Clayton copula.

Figure 2: Plots of $(k, S_k(\mathbf{z}_{[n-k,n-1]}, z_n))$ for $k \in \{1,\ldots,n-1\}$ for the copula filters of ARMA(1,1) models; see Example 4. Horizontal lines show ultimate values $S_{n-1}(\mathbf{z}_{[1,n-1]}, z_n)$.
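The experiment can be reproduced in outline for the Gaussian copula filter, where $S_k$ has the closed form of Eq. (19). The sketch below (ours) uses the proposition3_coefficients helper from Section 4.3 and an illustrative pacf rather than the exact ARMA(1,1) kpacf of the figure.

```python
# Sketch: the convergence-of-the-filter experiment behind Figure 2, for the
# Gaussian copula filter, where S_k has the closed form of Eq. (19).
# Uses proposition3_coefficients from above; an illustrative pacf is used
# in place of the exact ARMA(1,1) kpacf.  Names are ours.
import numpy as np
from scipy.stats import norm

def gaussian_filter_path(alpha, z):
    """Return S_k(z_{[n-k,n-1]}, z_n) for k = 1, ..., n-1 via Eq. (19)."""
    n = len(z)
    sigma, _, psi = proposition3_coefficients(alpha[:n - 1])
    e = norm.ppf(z)
    out = []
    for k in range(1, n):
        # psi_j^{(k)} multiplies Phi^{-1}(z_{k+1-j}): the most recent
        # innovation z_{n-1} gets psi_1^{(k)}, the oldest gets psi_k^{(k)}
        val = sigma[k - 1] * e[-1] + np.dot(psi[k][1:], e[-2::-1][:k])
        out.append(norm.cdf(val))
    return np.array(out)

rng = np.random.default_rng(1)
z = rng.uniform(size=201)                     # n = 201 as in Example 4
ks = np.arange(1, 201)
alpha = 0.3 / (ks - 0.3)                      # illustrative pacf, Eq. (15)
path = gaussian_filter_path(alpha, z)         # plot (k, path) as in Figure 2
```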
Example 5 (Non-Gaussian ARFIMA(1, $d$, 1) models). In this example, we consider s-vine copula processes sharing the kpacf of the ARFIMA(1, $d$, 1) model with autoregressive parameter 0.95, moving-average parameter $-0.85$, and fractional differencing parameter $d = 0.02$. The latter implies that the pacf of the Gaussian process satisfies $|\alpha_k| \sim 0.02/k$ as $k\to\infty$ [23]. The lack of absolute summability means that the Gaussian copula process does not satisfy the conditions of Theorem 1. It is an unresolved question whether any of these processes is causal. Fixing $n = 701$, we obtain Figure 3. For the realized series of innovations used in the picture, convergence appears to take place, but it is extremely slow. The tail-dependent Clayton and Joe copulas appear to take longest to settle down.

Figure 3: Plots of $(k, S_k(\mathbf{z}_{[n-k,n-1]}, z_n))$ for $k \in \{1,\ldots,n-1\}$ for the copula filters of ARFIMA(1, $d$, 1) models; see Example 5. Horizontal lines show ultimate values $S_{n-1}(\mathbf{z}_{[1,n-1]}, z_n)$.

An obvious practical solution that circumvents the issue of whether the infinite-order process has a convergent causal representation is to truncate the copula sequence $(C_k)_{k\in\mathbb{N}}$ so that $C_k = C^\perp$ for $k > p$ for some relatively large but fixed value $p$. This places us back in the setting of ergodic Markov chains but, by parameterizing models through the kpacf, we preserve the advantages of parsimony.

5.3 An example with real data

For this example, we have used data on the US CPI (consumer price index) taken from the OECD webpage. We analyse the log-differenced time series of quarterly CPI values from the first quarter of 1960 to the fourth quarter of 2020, which can be interpreted as measuring the rate of inflation ([46], Sections 14.2–14.4). The inflation data are shown in the upper-left panel of Figure 4; there are $n = 244$ observations.

Figure 4: Top row: log-differenced CPI data and estimated kpacf of the s-vine copula process using a Gumbel copula sequence. Middle row: QQ-plots for residuals from models based on Gaussian (left) and Gumbel (right) copula sequences. Bottom row: QQ-plots of the data against fitted normal (left) and skewed Student (right) marginal distributions.

To establish a baseline model, we use an automatic ARMA selection algorithm, and this selects an ARMA(5,1) model. We first address the issue of whether the implied Gaussian copula sequence in an ARMA(5,1) model can be replaced by Gumbel, Clayton, Frank, or Joe copula sequences (or 180-degree rotations thereof); for any lag $k$ at which the estimated kpacf $\tau_k$ is negative, we retain a Gaussian copula, and so the non-Gaussian copula sequences are actually hybrid sequences with some Gaussian terms.
The data $(x_1,\ldots,x_n)$ are transformed to pseudo-observations $(u_1,\ldots,u_n)$ on the copula scale using the empirical distribution function, and the s-vine copula process is estimated by maximum likelihood; this is the commonly used pseudo-maximum-likelihood method [12,19].

The best model results from replacing Gaussian copulas with Gumbel copulas, and the improvements in AIC and BIC are shown in the upper panel of Table 1; the improvement in fit is strikingly large. While the presented results relate to infinite-order processes, we note that very similar results (not tabulated) are obtained by fitting s-vine copula processes of finite order, where the kpacf is truncated at lag 30. Parameter estimates for the infinite-order models are presented in Table 2.

Table 1: Comparison of models by AIC and BIC. The top two lines relate to models for the pseudo-copula data $(u_1,\ldots,u_n)$, while the bottom three lines relate to full models of the original data $(x_1,\ldots,x_n)$.

                                                   No. pars      AIC       BIC
  Gaussian copula process                                 6  -184.62   -163.64
  Gumbel copula process                                   6  -209.28   -188.30
  Gaussian process                                        8   372.73    400.71
  Gaussian copula process + skewed Student margin        10   352.50    387.47
  Gumbel copula process + skewed Student margin          10   319.17    354.14

Table 2: Parameter estimates and standard errors for s-vine copula processes with Gaussian and Gumbel copula sequences fitted to the pseudo-copula data $(u_1,\ldots,u_n)$.

            theta(Ga)     s.e.    theta(Gu)     s.e.
  phi_1        -0.381    0.104       -0.232    0.130
  phi_2         0.144    0.081        0.136    0.094
  phi_3         0.197    0.063        0.180    0.061
  phi_4         0.462    0.075        0.410    0.077
  phi_5         0.324    0.063        0.266    0.061
  psi_1         0.870    0.098        0.771    0.118

The residual QQ-plots in the middle row of Figure 4 give further insight into the improved fit of the process with Gumbel copulas. In the usual manner, residuals are reconstructions of the unobserved innovation variables. If $(\widehat{R}_k)_{k\in\mathbb{N}}$ denotes the sequence of estimated Rosenblatt forward functions implied by the sequence $(\widehat{C}_k)_{k\in\mathbb{N}}$ of estimated copulas, then residuals $(z_1,\ldots,z_n)$ are constructed by setting $z_1 = u_1$ and $z_t = \widehat{R}_{t-1}(\mathbf{u}_{[1,t-1]}, u_t)$ for $t > 1$. To facilitate graphical analysis, these are transformed onto the standard normal scale, so that the QQ-plots in the middle row of Figure 4 relate to the values $(\Phi^{-1}(z_1),\ldots,\Phi^{-1}(z_n))$ and are against a standard normal reference distribution. The residuals from the baseline Gaussian copula appear to deviate from normality, whereas the residuals from the Gumbel copula model are much better behaved; the latter pass a Shapiro–Wilk test of normality ($p$-value = 0.97), whereas the former do not ($p$-value = 0.01).
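For a fitted Gaussian pair-copula sequence, this residual construction is a one-line loop thanks to the closed form of Eq. (17); a sketch follows (ours; for non-Gaussian sequences the generic Rosenblatt recursion of Eq. (4) would be used instead).

```python
# Sketch: residuals z_t = R-hat_{t-1}(u_{[1,t-1]}, u_t) on the standard
# normal scale, as in Section 5.3, for a fitted Gaussian pair-copula
# sequence, using the closed form of Eq. (17).  Names are ours.
import numpy as np
from scipy.stats import norm, shapiro

def svine_residuals_gauss(u, alpha_hat):
    sigma, phi, _ = proposition3_coefficients(alpha_hat)
    x = norm.ppf(u)
    z = [x[0]]                                # z_1 = u_1 on the normal scale
    for t in range(1, len(u)):
        k = min(t, len(alpha_hat))            # alpha_k = 0 beyond the fit
        # Eq. (17): phi_j^{(k)} weights the j-th most recent past value
        mean = np.dot(phi[k], x[t - 1::-1][:k])
        z.append((x[t] - mean) / sigma[k - 1])
    return np.array(z)

# residuals = svine_residuals_gauss(u, alpha_hat)
# print(shapiro(residuals))                   # normality check, cf. Sec. 5.3
```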
The picture of the kpacf in the top right panel of Figure 4 requires further comment. This plot attempts to show how well the kpacf of the fitted copula sequence matches the empirical Kendall partial autocorrelations of the data. The continuous line is the kpacf of the Gumbel/Gaussian copula sequence used in the best-fitting vine copula model of $(u_1,\ldots,u_n)$. The vertical bars show the empirical Kendall partial autocorrelations of the data at each lag $k$. However, the method should really be considered as "semi-empirical", as it uses the fitted parametric copulas at lags $1,\ldots,k-1$ in order to construct the necessary data for lag $k$. The data used to estimate an empirical lag-$k$ rank correlation are the points

\[
\big\{\big(\widehat{R}_{k-1}^{(2)}(\mathbf{u}_{[j-k+1,j-1]}, u_{j-k}),\, \widehat{R}_{k-1}(\mathbf{u}_{[j-k+1,j-1]}, u_j)\big),\ j = k+1,\ldots,n\big\},
\]

where $\widehat{R}_k$ and $\widehat{R}_k^{(2)}$ denote the estimates of the forward and backward Rosenblatt functions; it may be noted that these data are precisely the points at which the copula density $c_k$ is evaluated when the model likelihood based on $c_{(n)}$ in Eq. (5) is maximized.

The kpacf shows positive dependence between inflation rates at the first 5 lags; moreover, the choice of Gumbel copula suggests asymmetry and upper tail dependence in the bivariate distribution of inflation rates at time points that are close together; in other words, large values of inflation are particularly strongly associated with large values of inflation in previous quarters, while low values are more weakly associated.

We next consider composite models for the original data $(x_1,\ldots,x_n)$ consisting of a marginal distribution and an s-vine copula process. The baseline model is simply a Gaussian process with a Gaussian copula sequence and a Gaussian marginal distribution. We experimented with a number of alternatives to the normal margin and obtained good results with the skewed Student distribution from the family of skewed distributions proposed by Fernandez and Steel [18]. Table 1 contains results for models which combine the Gaussian and Gumbel copula sequences with the skewed Student margin; the improvement obtained by using a Gumbel sequence with a skewed Student margin is clear from the AIC and BIC values. The QQ-plots of the data against the fitted marginal distributions in the bottom row of Figure 4 also show the superiority of the skewed Student to the Gaussian distribution for this dataset.

The fitting method used for the composite model results in Table 1 is the two-stage IFM (inference functions for margins) method [25], in which the margin is estimated first, the data are transformed to approximately uniform using the marginal model, and the copula process is estimated by ML in a second step.

The estimated values of the degrees-of-freedom and skewness parameters in the skewed Student marginal distribution are $\nu = 3.19$ and $\gamma = 1.47$, respectively. These suggest that inflation rates (changes in log CPI) follow a heavy-tailed, infinite-kurtosis distribution (tail index = 3.19) that is skewed to the right.

6 Conclusion

The s-vine processes provide a class of tractable stationary models that can capture non-linear and non-Gaussian serial dependence behaviour as well as any continuous marginal behaviour. By defining models of infinite order and using the approach based on the Kendall partial autocorrelation function (kpacf), we obtain a very natural generalization of classical Gaussian processes, such as Gaussian ARMA or ARFIMA.

The models are straightforward to apply. The parsimonious parametrization based on the kpacf makes maximum likelihood inference feasible.
Analogues of many of the standard tools for time series analysis in the time domain are available, including estimation methods for the kpacf and residual plots that shed light on the quality of the fit of the copula model. By separating the issues of serial dependence and marginal modelling, we can obtain bespoke descriptions of both aspects that avoid the compromises of the more “off-the-shelf” classical approach. The example of Section 5.3 indicates the kind of gains that can be obtained; it seems likely that many empirical applications of classical ARMA could be substantially improved by the use of models in the general s-vine class. In combination with v-transforms [33], s-vine models could also be used to model data showing stochastic volatility following the approach developed by Bladt and McNeil [9].To increase the practical options for model building it would be of interest to consider how copulas with more than one parameter, such as the t copula or the symmetrized Joe-Clayton copula [37] could be incorporated into the methodology. The parameters would have to be allowed to change in a smooth parsimonious manner such that the partial copula sequence (Ck)k∈N{\left({C}_{k})}_{k\in {\mathbb{N}}}converged to the independence copula while the Kendall correlations (τk)k∈N{\left({\tau }_{k})}_{k\in {\mathbb{N}}}followed the chosen form of kpacf for every kk. This is a topic for further research.The approach we have adopted should also be of interest to theoreticians as there are a number of challenging open questions to be addressed. While we have proposed definitions of causality and invertibility for general s-vine processes, we currently lack a mathematical methodology for checking convergence of causal and invertible representations for sequences of non-Gaussian pair copulas.There are some very interesting questions to address about the relationship between the partial copula sequence (Ck)k∈N{\left({C}_{k})}_{k\in {\mathbb{N}}}, the rate of convergence of causal representations and the rate of ergodic mixing of the resulting processes. The example of Figure 1 indicates that, even for a finite-order process, some very extreme models can be constructed that mix extremely slowly. Moreover, Example 5 suggests that non-Gaussian copula sequences serve to further elongate memory in long-memory processes, and this raises questions about the effect of the tail dependence properties of the copula sequence on rates of convergence and length of memory.It would also be of interest to confirm our conjecture that the pragmatic approach adopted in Section 5.2, in which the kpacf of the (infinite) partial copula sequence (Ck)k∈N{\left({C}_{k})}_{k\in {\mathbb{N}}}is matched to that of a stationary and ergodic Gaussian process, always yields a stationary and ergodic s-vine model, regardless of the choice of copula sequence. However, for practical applications, the problem can be obviated by truncating the copula sequence at some large finite lag kk, so that we are dealing with an ergodic Markov chain as shown in Section 3. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Dependence Modeling de Gruyter

Time series with infinite-order partial copula dependence

Dependence Modeling , Volume 10 (1): 21 – Jan 1, 2022

Loading next page...
 
/lp/de-gruyter/time-series-with-infinite-order-partial-copula-dependence-90sWjO78kV

References (50)

Publisher
de Gruyter
Copyright
© 2022 Martin Bladt and Alexander J. McNeil, published by De Gruyter
ISSN
2300-2298
eISSN
2300-2298
DOI
10.1515/demo-2022-0105
Publisher site
See Article on Publisher Site

Abstract

1IntroductionThe principal aim of this article is to show that the s-vine (or stationary d-vine) decomposition of a joint density provides a very natural vehicle for generalizing the class of stationary Gaussian time series to permit both non-Gaussian marginal behaviour and non-linear and non-Gaussian serial dependence behaviour. In particular, this approach provides a route to defining a rich class of tractable non-Gaussian ARMA and ARFIMA processes; the resulting models have the potential to offer improved statistical fits in any application where classical ARMA models or their long-memory ARFIMA extensions are used.Vine models of dependence have been developed in a series of publications [1,6,7, 8,24,25,28,42]. There are a number of different configurations for vines, but the most suitable one for longitudinal data applications is the d-vine, which is able to describe the strict stationarity of a random vector under some additional translation-invariance restrictions on the vine structure. A recent paper by Nagler et al. [36] investigated the vine structures that can be used to construct stationary multivariate time series. The results of Nagler et al. imply that, for univariate applications, the d-vine is in fact the only structure for which translation-invariance restrictions are sufficient to guarantee stationarity; we follow them in referring to these restricted d-vines as stationary vines, or s-vines.Vine models are best understood as copula models of dependence, and there is now a large literature on copula models for time series. While the main focus of much of this literature has been on cross-sectional dependence between multiple time series, there is also a growing literature on modelling serial dependence within single series and lagged dependence across series. First-order Markov copula models [3,12,15,17] are simple examples of s-vine processes. A number of authors have written on higher-order Markov extensions for univariate series or multivariate series [4,10,22,30,36,43]. There is also literature showing how these models may be adapted to the particular requirements of time series showing stochastic volatility, including the mixture-copula approach of Loaiza-Maya et al. [30] and the v-transform approach of McNeil and Bladt [9,33].This article makes the following novel contributions to the development of time series models based on vine copulas. First, we suggest how s-vine models may be generalized to infinite order, and we propose accompanying generalizations of the classical concepts of causality and invertibility for linear processes that may be applied to s-vine processes. Second, we provide additional insight into the issues of stability and ergodicity for s-vine processes, and we show how finite or infinite copula sequences may be used to develop non-linear filters of independent noise that generalize linear filters. Finally, we propose a practical and parsimonious approach to building s-vine processes in which copula sequences are parameterized by a function that we call the Kendall partial autocorrelation function; the latter may be borrowed from other well-known processes, such as Gaussian ARMA or ARFIMA processes, thus yielding natural non-Gaussian analogues of these models.We believe that our approach may serve as a useful framework to faciliate further study in the field. 
Several interesting theoretical questions remain, particularly relating to necessary and sufficient conditions for the stability of models based on infinite copula sequences, as well as the interplay of copula sequences and long memory. However, on the practical side, the models are already eminently usable; methods exist for estimation and random number generation, and we suggest some new ideas for model validation using residuals. An example shows the benefits that may arise from using these models.

This article is structured as follows. Section 2 sets out notation and basic concepts and makes the connection between s-vine copulas and s-vine processes; key objects in the development of processes are sequences of functions that we refer to as Rosenblatt functions. In Section 3, we show that finite-order s-vine processes are Markov chains belonging to the particular sub-category of non-linear state-space models. Section 4 explains why Gaussian processes form a sub-class of s-vine processes and shows how the classical theory for linear processes may be reinterpreted as a theory of the behaviour of Rosenblatt functions. Section 5 uses the Gaussian analogy to suggest requirements for stable, infinite-order, non-Gaussian s-vine processes; a practical approach to model building is developed and illustrated with an application to macroeconomic data. Section 6 concludes. Proofs can be found in Appendix A, while additional material on the Markov chain analysis of finite-order processes is collected in Appendix B.

2 S-vine processes

2.1 S-vine copulas

If a random vector $(X_1,\ldots,X_n)$ admits a joint density $f(x_1,\ldots,x_n)$, then the latter may be decomposed as a d-vine. Writing $f_{X_i}$ for the marginal density of $X_i$, the decomposition is

(1)  $f(x_1,\ldots,x_n) = \left(\prod_{i=1}^{n} f_{X_i}(x_i)\right) \prod_{k=1}^{n-1}\prod_{j=k+1}^{n} c_{j-k,j\mid S_{j-k,j}}\bigl(F_{j-k\mid S_{j-k,j}}(x_{j-k}),\, F_{j\mid S_{j-k,j}}(x_j)\bigr),$

where $S_{j-k,j} = \{j-k+1,\ldots,j-1\}$ is the set of indices of the variables which lie between $X_{j-k}$ and $X_j$, $c_{j-k,j\mid S_{j-k,j}}$ is the density of the bivariate copula $C_{j-k,j\mid S_{j-k,j}}$ of the joint distribution function (df) of $X_{j-k}$ and $X_j$ conditional on the intermediate variables $X_{j-k+1},\ldots,X_{j-1}$, and

(2)  $F_{i\mid S_{j-k,j}}(x) = \mathbb{P}(X_i \leqslant x \mid X_{j-k+1}=x_{j-k+1},\ldots,X_{j-1}=x_{j-1}), \quad i \in \{j-k, j\},$

denotes the conditional df of variable $i$ conditional on these variables; note that $S_{j-1,j} = \varnothing$, and so the conditioning set is dropped in this case.
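To make the decomposition concrete, the following minimal numerical check (our illustration, not taken from the paper; parameter values are arbitrary) verifies Eq. (1) for a stationary trivariate Gaussian vector, where both lag-1 pair copulas are Gaussian with parameter $\rho_1$ and the lag-2 copula is a Gaussian copula parameterized by the partial correlation $\alpha_2 = (\rho_2 - \rho_1^2)/(1-\rho_1^2)$.

```python
# Check that the trivariate Gaussian density factorizes as the d-vine (1):
# f = phi(x1)phi(x2)phi(x3) * c1(F(x1),F(x2)) * c1(F(x2),F(x3))
#       * c2(F_{1|2}(x1), F_{3|2}(x3)).
import numpy as np
from scipy.stats import norm, multivariate_normal

def gauss_copula_density(u, v, a):
    """Density of the Gaussian pair copula with parameter a."""
    x, y = norm.ppf(u), norm.ppf(v)
    return np.exp((2*a*x*y - a**2*(x**2 + y**2)) / (2*(1 - a**2))) / np.sqrt(1 - a**2)

rho1, rho2 = 0.6, 0.3                      # lag-1 and lag-2 autocorrelations
alpha2 = (rho2 - rho1**2) / (1 - rho1**2)  # lag-2 partial correlation
x1, x2, x3 = 0.4, -1.1, 0.7

P = np.array([[1, rho1, rho2], [rho1, 1, rho1], [rho2, rho1, 1]])
lhs = multivariate_normal(cov=P).pdf([x1, x2, x3])

s = np.sqrt(1 - rho1**2)
F12 = norm.cdf((x1 - rho1*x2) / s)         # conditional df of X1 given X2 = x2
F32 = norm.cdf((x3 - rho1*x2) / s)         # conditional df of X3 given X2 = x2
rhs = (norm.pdf(x1) * norm.pdf(x2) * norm.pdf(x3)
       * gauss_copula_density(norm.cdf(x1), norm.cdf(x2), rho1)
       * gauss_copula_density(norm.cdf(x2), norm.cdf(x3), rho1)
       * gauss_copula_density(F12, F32, alpha2))

print(lhs, rhs)                            # agree to numerical precision
```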
The decomposition in Eq. (1) implies a decomposition of the density $c(u_1,\ldots,u_n)$ of the unique copula of $(X_1,\ldots,X_n)$, which is given implicitly by

(3)  $c(F_1(x_1),\ldots,F_n(x_n)) = \prod_{k=1}^{n-1}\prod_{j=k+1}^{n} c_{j-k,j\mid S_{j-k,j}}\bigl(F_{j-k\mid S_{j-k,j}}(x_{j-k}),\, F_{j\mid S_{j-k,j}}(x_j)\bigr).$

In practical applications, interest centres on models that admit the simplified d-vine decomposition, in which the copula densities $c_{j-k,j\mid S_{j-k,j}}$ do not depend on the values of variables in the conditioning set $S_{j-k,j}$ and we can simply write $c_{j-k,j}$. Any set of copula densities $\{c_{j-k,j} : 1 \leqslant k \leqslant n-1,\ k+1 \leqslant j \leqslant n\}$ and any set of marginal densities $f_{X_i}$ may be used in the simplified version of (1) to create a valid $n$-dimensional joint density. A number of papers have examined the limitations imposed by working with simplified vine copula models [20,35,44,45]. In Mroz et al. [35], it is shown that the class of simplified vines is not dense in the space of copulas for a number of metrics, including the one induced by total variation distance. These results may be interpreted as showing that there exist multivariate distributions that are difficult to approximate with simplified d-vines. However, the simplified d-vine construction still greatly enlarges the class of tractable densities for time series applications.

We are interested in strictly stationary stochastic processes whose higher-dimensional marginal distributions are simplified d-vines. As well as forcing $f_{X_1} = \cdots = f_{X_n}$, this requirement imposes translation-invariance conditions on the copula densities $c_{j-k,j}$ and conditional dfs $F_{\cdot\mid S_{j-k,j}}$ appearing in the simplified form of Eq. (1). It must be the case that $c_{j-k,j}$ is the same for all $j \in \{k+1,\ldots,n\}$, and so each pair copula density in the model can be associated with a lag $k$ and we can write $c_k := c_{j-k,j}$, where $c_k$ is the density of some bivariate copula $C_k$.
The conditional dfs can be represented by two sets of functions $R_k^{(1)} : (0,1)^k \times (0,1) \to (0,1)$ and $R_k^{(2)} : (0,1)^k \times (0,1) \to (0,1)$, which are defined in a recursive, interlacing fashion by $R_1^{(1)}(u,x) = h_1^{(1)}(u,x)$, $R_1^{(2)}(u,x) = h_1^{(2)}(x,u)$ and, for $k \geqslant 2$,

(4)  $R_k^{(1)}(\boldsymbol{u},x) = h_k^{(1)}\bigl(R_{k-1}^{(2)}(\boldsymbol{u}_{-1},u_1),\, R_{k-1}^{(1)}(\boldsymbol{u}_{-1},x)\bigr), \qquad R_k^{(2)}(\boldsymbol{u},x) = h_k^{(2)}\bigl(R_{k-1}^{(2)}(\boldsymbol{u}_{-k},x),\, R_{k-1}^{(1)}(\boldsymbol{u}_{-k},u_k)\bigr),$

where $h_k^{(i)}(u_1,u_2) = \frac{\partial}{\partial u_i} C_k(u_1,u_2)$ and $\boldsymbol{u}_{-i}$ indicates the vector $\boldsymbol{u}$ with the $i$th component removed.

By using this new notation, we obtain a simplified form of Eq. (1) in which the density of the copula $c$ in Eq. (3) takes the form

(5)  $c_{(n)}(u_1,\ldots,u_n) = \prod_{k=1}^{n-1}\prod_{j=k+1}^{n} c_k\bigl(R_{k-1}^{(2)}(\boldsymbol{u}_{[j-k+1,j-1]},u_{j-k}),\, R_{k-1}^{(1)}(\boldsymbol{u}_{[j-k+1,j-1]},u_j)\bigr),$

where $\boldsymbol{u}_{[j-k+1,j-1]} = (u_{j-k+1},\ldots,u_{j-1})^\top$. Note that, for simplicity of formulas, we abuse notation by including terms involving $R_0^{(1)}$ and $R_0^{(2)}$; these terms should be interpreted as $R_0^{(1)}(\cdot,u) = R_0^{(2)}(\cdot,u) = u$ for all $u$. Following Nagler et al. [36], we refer to a model with copula density of the form Eq. (5) as a stationary d-vine or s-vine.

If a random vector $(U_1,\ldots,U_n)$ follows the copula $C_{(n)}$ with density $c_{(n)}$ in Eq. (5), then for any $k \in \{1,\ldots,n-1\}$ and $j \in \{k+1,\ldots,n\}$, we have

(6)  $R_k^{(1)}(\boldsymbol{u},x) = \mathbb{P}(U_j \leqslant x \mid U_{j-k}=u_1,\ldots,U_{j-1}=u_k), \qquad R_k^{(2)}(\boldsymbol{u},x) = \mathbb{P}(U_{j-k} \leqslant x \mid U_{j-k+1}=u_1,\ldots,U_j=u_k),$

and we refer to the conditional distribution functions $R_k^{(1)}$ and $R_k^{(2)}$ as forward and backward Rosenblatt functions. Henceforth, we will often drop the superscript from the forward function and simply write $R_k = R_k^{(1)}$ to obtain less notationally cumbersome expressions. The conditional densities corresponding to the Rosenblatt functions may be derived from Eq. (5).
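The interlacing recursion (4) translates directly into code. Below is a minimal sketch (our illustration) in which each pair copula $C_k$ enters only through its h-functions; Gaussian pair copulas with arbitrary parameters are assumed. The naive recursion is exponential in $k$ and is only for small $k$; an iterative pass over the vine array gives $O(k^2)$ cost.

```python
# Forward and backward Rosenblatt functions via recursion (4).
import numpy as np
from scipy.stats import norm

def gaussian_h(alpha):
    """h-functions of a Gaussian pair copula with parameter alpha."""
    s = np.sqrt(1.0 - alpha**2)
    def h(i, u1, u2):
        x1, x2 = norm.ppf(u1), norm.ppf(u2)
        if i == 1:                                  # h^{(1)} = dC/du1
            return norm.cdf((x2 - alpha*x1) / s)
        return norm.cdf((x1 - alpha*x2) / s)        # h^{(2)} = dC/du2
    return h

def rosenblatt(h_seq, u, x):
    """(R_k^{(1)}(u, x), R_k^{(2)}(u, x)) for k = len(u); h_seq[k-1] is C_k."""
    k = len(u)
    if k == 0:
        return x, x                                 # convention: R_0(., x) = x
    h_k = h_seq[k - 1]
    fwd = h_k(1, rosenblatt(h_seq, u[1:], u[0])[1],   # R_{k-1}^{(2)}(u_{-1}, u_1)
                 rosenblatt(h_seq, u[1:], x)[0])      # R_{k-1}^{(1)}(u_{-1}, x)
    bwd = h_k(2, rosenblatt(h_seq, u[:-1], x)[1],     # R_{k-1}^{(2)}(u_{-k}, x)
                 rosenblatt(h_seq, u[:-1], u[-1])[0]) # R_{k-1}^{(1)}(u_{-k}, u_k)
    return fwd, bwd

h_seq = [gaussian_h(a) for a in (0.5, 0.3, 0.1)]      # C_1, C_2, C_3
print(rosenblatt(h_seq, [0.2, 0.7, 0.4], 0.6))
```

For Gaussian pair copulas, the output can be cross-checked against the closed-form conditional distribution functions given later in Proposition 3.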
Writing $f_k$ for the density of the forward Rosenblatt functions, we obtain $f_1(u,x) = c_{(2)}(u,x) = c_1(u,x)$ and, for $k > 1$,

(7)  $f_k(\boldsymbol{u},x) = \frac{c_{(k+1)}(u_1,\ldots,u_k,x)}{c_{(k)}(u_1,\ldots,u_k)} = \prod_{j=1}^{k} c_j\bigl(R_{j-1}^{(2)}(\boldsymbol{u}_{[k-j+2,k]},u_{k-j+1}),\, R_{j-1}(\boldsymbol{u}_{[k-j+2,k]},x)\bigr).$

The following assumption will be in force throughout the remainder of the paper.

Assumption 1. All copulas $C_k$ used in the construction of s-vine models belong to the class $\mathcal{C}^\infty$ of smooth functions with continuous partial derivatives of all orders. Moreover, their densities $c_k$ are strictly positive on $(0,1)^2$.

This assumption applies to all the standard pair copulas that are used in vine copula models (e.g., Gauss, Clayton, Gumbel, Frank, Joe, and t), as well as non-exchangeable extensions [29] or mixtures of copulas [30]. It ensures, among other things, that for fixed $\boldsymbol{u}$, the Rosenblatt functions are bijections on $(0,1)$ with well-defined inverses. Let us write $R_k^{-1}(\boldsymbol{u},z)$ for the inverses of the Rosenblatt forward functions, satisfying $R_k^{-1}(\boldsymbol{u},z) = x$ if and only if $R_k(\boldsymbol{u},x) = z$. Inverses can also be defined for the Rosenblatt backward functions but will not be explicitly needed.

In the sequel, we refer to the copulas $C_k$ as partial copulas. They should be distinguished from the bivariate marginal copulas given by $C^{(k)}(u,v) = \mathbb{P}(U_{j-k} \leqslant u, U_j \leqslant v)$ for any $j \in \{k+1,\ldots,n\}$. The two copulas are related by the formula

(8)  $C^{(k)}(v_1,v_2) = \mathbb{E}\bigl(\mathbb{P}(U_{j-k} \leqslant v_1, U_j \leqslant v_2 \mid U_{j-k+1},\ldots,U_{j-1})\bigr) = \mathbb{E}\Bigl(C_k\bigl(R_{k-1}^{(2)}((U_{j-k+1},\ldots,U_{j-1})^\top, v_1),\, R_{k-1}((U_{j-k+1},\ldots,U_{j-1})^\top, v_2)\bigr)\Bigr) = \int_0^1 \cdots \int_0^1 C_k\bigl(R_{k-1}^{(2)}(\boldsymbol{u},v_1),\, R_{k-1}(\boldsymbol{u},v_2)\bigr)\, c_{(k-1)}(\boldsymbol{u})\, \mathrm{d}u_1 \cdots \mathrm{d}u_{k-1}.$
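A Monte Carlo illustration (ours, with arbitrary parameters) of relation (8) for $k = 2$: averaging $C_2\bigl(R_1^{(2)}(u,v_1), R_1(u,v_2)\bigr)$ over the single intermediate uniform variable recovers the lag-2 bivariate marginal copula $C^{(2)}$. With Gaussian pair copulas the answer is available in closed form, since the implied lag-2 correlation is $\rho_2 = \alpha_2(1-\alpha_1^2) + \alpha_1^2$ (a special case of Eq. (14) below).

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

a1, a2 = 0.6, 0.3                      # copula parameters of C_1 and C_2
s1 = np.sqrt(1 - a1**2)

def h1(u, v):
    """h-function of C_1; equals R_1(u, v) and, by exchangeability, R_1^{(2)}(u, v)."""
    return norm.cdf((norm.ppf(v) - a1 * norm.ppf(u)) / s1)

def C2(u, v):
    """Gaussian copula C_2 with parameter a2, evaluated at pairs (u, v)."""
    pts = np.column_stack([norm.ppf(u), norm.ppf(v)])
    return multivariate_normal(cov=[[1, a2], [a2, 1]]).cdf(pts)

rng = np.random.default_rng(0)
u = rng.uniform(size=20_000)           # draws of the intermediate variable
v1, v2 = 0.3, 0.7
mc = C2(h1(u, v1), h1(u, v2)).mean()   # rhs of Eq. (8); c_(1) = 1 when k = 2

rho2 = a2 * (1 - a1**2) + a1**2        # implied lag-2 correlation
exact = multivariate_normal(cov=[[1, rho2], [rho2, 1]]).cdf(
    [norm.ppf(v1), norm.ppf(v2)])
print(mc, exact)                       # agree up to Monte Carlo error
```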
2.2 S-vine processes

We use the following general definition for an s-vine process.

Definition 1 (S-vine process). A strictly stationary time series $(X_t)_{t\in\mathbb{Z}}$ is an s-vine process if for every $t\in\mathbb{Z}$ and $n \geqslant 2$ the $n$-dimensional marginal distribution of the vector $(X_t,\ldots,X_{t+n-1})$ is absolutely continuous and admits a unique copula $C_{(n)}$ with a joint density $c_{(n)}$ of the form in Eq. (5). An s-vine process $(U_t)_{t\in\mathbb{Z}}$ is an s-vine copula process if its univariate marginal distribution is standard uniform.

Our aim is to construct processes that conform to this definition and investigate their properties and practical application. Since s-vine processes can be endowed with any continuous univariate marginal distribution $f_X$, we will mostly investigate the properties of s-vine copula processes.

2.3 A note on reversibility

It is particularly common in applications of vine copulas to confine interest to standard exchangeable copulas $C_k$. In this case, the resulting s-vine processes have the property of reversibility. For any $\boldsymbol{u} = (u_1,\ldots,u_n)^\top \in (0,1)^n$, let us write $\overline{\boldsymbol{u}} = (u_n,\ldots,u_1)^\top$ for the reversed vector.

Definition 2. An s-vine copula process is reversible if for any $n \geqslant 2$ the higher-dimensional marginal copulas satisfy $C_{(n)}(\boldsymbol{u}) = C_{(n)}(\overline{\boldsymbol{u}})$.

This is equivalent to saying that, for any $t,s\in\mathbb{Z}$ and any $n > 2$, the set of consecutive variables $(U_{t+1},\ldots,U_{t+n})$ from the process has the same distribution as the reversed vector $(U_{s+n},\ldots,U_{s+1})$. The process evolves forwards and backwards in a similar fashion, which may not be ideal for phenomena in which there is a clear temporal notion of causality; however, as soon as non-exchangeable copulas are included, the reversibility is broken. In summary, we have the following simple result.

Proposition 1. If a copula sequence $(C_k)_{k\in\mathbb{N}}$ consists of exchangeable copulas, then (i) the Rosenblatt forward and backward functions satisfy $R_k^{(2)}(\overline{\boldsymbol{u}},x) = R_k(\boldsymbol{u},x)$ for all $(\boldsymbol{u},x) \in (0,1)^k \times (0,1)$, and (ii) the resulting s-vine copula process is reversible.

3 S-vine processes of finite order

3.1 Markov construction

The first class of processes we consider are s-vine copula processes of finite order $p$, which are constructed from a set of copulas $\{C_1,\ldots,C_p\}$ using the Markov approach described by Joe ([27], p. 145). Starting from a series of iid uniform innovation variables $(Z_k)_{k\in\mathbb{N}}$, we can set $U_1 = Z_1$ and

(9)  $U_k = R_{k-1}^{-1}\bigl((U_1,\ldots,U_{k-1})^\top, Z_k\bigr), \quad k \geqslant 2.$

By using the inverses of the Rosenblatt forward functions we obtain, for any $n$, a random vector $(U_1,\ldots,U_n)$ which forms a finite realization from an s-vine process $(U_t)_{t\in\mathbb{Z}}$. The copula $C_{(n)}$ of $(U_1,\ldots,U_n)$ has density $c_{(n)}$ in Eq. (5), but the copula densities $c_k$ appearing in this expression satisfy $c_k(u,v) = 1$ for $k > p$, and the s-vine is said to be truncated at order $p$. Moreover, since $h_k^{(1)}(u,v) = v$ for $k > p$, it follows from Eq. (4) that $R_k(\boldsymbol{u},x) = R_{k-1}(\boldsymbol{u}_{-1},x) = \cdots = R_p(\boldsymbol{u}_{[k-p+1,k]},x)$,
and the updating equation (9) satisfies

(10)  $U_k = R_p^{-1}\bigl((U_{k-p},\ldots,U_{k-1})^\top, Z_k\bigr), \quad k > p,$

showing the Markovian character of the finite-order process.

The recursive nature of the construction (Eq. (9)) means that there is an implied set of functions that we will label $S_k : (0,1)^k \times (0,1) \to (0,1)$ for $k\in\mathbb{N}$ such that

(11)  $U_k = S_{k-1}\bigl((Z_1,\ldots,Z_{k-1})^\top, Z_k\bigr), \quad k \geqslant 2.$

The functions $(S_k)_{k\in\mathbb{N}}$ satisfy $S_1(z_1,x) = R_1^{-1}(z_1,x)$ and

(12)  $S_k(\boldsymbol{z},x) = R_k^{-1}\bigl((z_1, S_1(z_1,z_2),\ldots,S_{k-1}(\boldsymbol{z}_{[1,k-1]},z_k)),\, x\bigr), \quad k \geqslant 2.$

The identity in Eq. (11) can be thought of as a causal representation of the process, while the complementary identity $Z_k = R_{k-1}((U_1,\ldots,U_{k-1})^\top, U_k)$ implied by Eq. (9) can be thought of as an invertible representation. We refer to the functions $(S_k)_{k\in\mathbb{N}}$ as Rosenblatt inverse functions; they should be distinguished from the inverses of the Rosenblatt forward functions.

3.2 Non-linear state space model

The s-vine process of order $p$ can be viewed as a $p$-dimensional Markov chain with state space $\mathcal{X} = (0,1)^p$. It is standard to treat Markov chains as being indexed by the natural numbers. To that end, for $t\in\mathbb{N}$, we introduce the vector-valued process $\boldsymbol{U}_t = (U_t,\ldots,U_{t+p-1})^\top$, starting at $\boldsymbol{U}_1 = (U_1,\ldots,U_p)^\top$, defined by the updating equation $\boldsymbol{U}_t = F(\boldsymbol{U}_{t-1}, Z_t)$, where

(13)  $F : (0,1)^p \times (0,1) \to (0,1)^p, \quad F(\boldsymbol{u},z) = \bigl(u_2,\ldots,u_p,\, R_p^{-1}(\boldsymbol{u},z)\bigr).$

The Markov chain described by Eq. (13) defines a non-linear state space (NSS) model conforming exactly to the assumptions imposed in Meyn and Tweedie ([34], Section 2.2.2): under Assumption 1, the updating function $F$ is a smooth ($\mathcal{C}^\infty$) function; the state space $\mathcal{X} = (0,1)^p$ is an open subset of $\mathbb{R}^p$; the uniform distribution of innovations $(Z_t)$ will be taken to be supported on the open set $(0,1)$.

Using standard arguments, the NSS model associated with Eq. (13) can be shown to be a $\phi$-irreducible, aperiodic Harris recurrent Markov chain and to admit an invariant probability measure $\pi$, which is the measure implied by the density $c_{(p)}$ given by Eq. (5); we summarise the arguments in Appendix B. This in turn allows the ergodic theorem for Harris chains to be applied ([34], Theorem 13.3.3) to conclude that for any initial measure $\lambda$, the Markov transition kernel $\mathsf{P}(\boldsymbol{x},\cdot)$ satisfies

$\left\| \int \lambda(\mathrm{d}\boldsymbol{x})\, \mathsf{P}^n(\boldsymbol{x},\cdot) - \pi(\cdot) \right\| \to 0, \quad n \to \infty,$

where $\|\cdot\|$ denotes the total variation norm.
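The construction (9)-(10) can be sketched in code as follows (our illustration, not the authors' implementation; copula parameters are arbitrary). Since $R_k(\boldsymbol{u},\cdot)$ is a continuous, strictly increasing df under Assumption 1, the inverse $R_k^{-1}$ can be obtained by numerical bracketing; the naive recursive evaluation of $R_k$ restricts the sketch to small orders $p$.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def gaussian_h(alpha):
    s = np.sqrt(1 - alpha**2)
    def h(i, u1, u2):
        x1, x2 = norm.ppf(u1), norm.ppf(u2)
        return norm.cdf(((x2 - alpha*x1) if i == 1 else (x1 - alpha*x2)) / s)
    return h

def rosenblatt(h_seq, u, x):
    """(forward, backward) Rosenblatt values via recursion (4); k = len(u)."""
    k = len(u)
    if k == 0:
        return x, x
    h_k = h_seq[k - 1]
    fwd = h_k(1, rosenblatt(h_seq, u[1:], u[0])[1], rosenblatt(h_seq, u[1:], x)[0])
    bwd = h_k(2, rosenblatt(h_seq, u[:-1], x)[1], rosenblatt(h_seq, u[:-1], u[-1])[0])
    return fwd, bwd

def forward_inverse(h_seq, u, z, eps=1e-9):
    """R_k^{-1}(u, z): solve R_k(u, x) = z for x by bracketing."""
    return brentq(lambda x: rosenblatt(h_seq, u, x)[0] - z, eps, 1 - eps)

def simulate_svine(h_seq, n, rng):
    """U_1,...,U_n from the order-p s-vine via Eq. (9); p = len(h_seq)."""
    p = len(h_seq)
    z = rng.uniform(size=n)
    u = [z[0]]                               # U_1 = Z_1
    for t in range(1, n):
        past = u[max(0, t - p):t]            # at most p lagged values, Eq. (10)
        u.append(forward_inverse(h_seq[:len(past)], past, z[t]))
    return np.array(u)

rng = np.random.default_rng(1)
u = simulate_svine([gaussian_h(0.5), gaussian_h(0.3)], 1000, rng)  # order p = 2
print(u[:5])
```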
This is also sufficient for the strong law of large numbers (SLLN) to hold ([34], Theorem 17.0.1): for a function $g : \mathbb{R}^p \to \mathbb{R}$, if we define $S_n(g) = \sum_{k=1}^n g(\boldsymbol{U}_k)$ and $\pi(g) = \int g(\boldsymbol{u})\, c_{(p)}(\boldsymbol{u})\, \mathrm{d}\boldsymbol{u}$, then $\lim_{n\to\infty} n^{-1} S_n(g) = \pi(g)$, almost surely, provided $\pi(|g|) < \infty$.

Although the Markov models are ergodic, we caution that they can exhibit some very extreme behaviour, albeit for copula choices that we are unlikely to encounter in practice. Figure 1 shows a realisation of 10,000 simulated values from a process of order $p = 3$, in which $C_1$ is a 180-degree rotated Clayton copula with parameter $\theta = 2$, $C_2$ is a Clayton copula with $\theta = 2$, and $C_3$ is a rotated Clayton copula with $\theta = 4$. Since the Clayton copula is well known to have lower tail dependence [25,27], this means that $C_1$ and $C_3$ have upper tail dependence and $C_3$ is more strongly dependent than $C_1$ and $C_2$. This increasing pattern of partial dependence, coupled with the strong upper tail dependence of $C_3$, leads to a period of over 1,500 successive values which are all greater than 0.6. An observer of this process who plots a histogram of the values in this period would have difficulty believing that the marginal distribution is uniform.

Figure 1: Realisation of 10,000 simulated values from a process of order $p = 3$ in which $C_1$ is a $180^\circ$ rotated Clayton copula with parameter $\theta = 2$, $C_2$ is a Clayton copula with $\theta = 2$, and $C_3$ is a rotated Clayton copula with $\theta = 4$.

This phenomenon is connected to rates of mixing behaviour and ergodic convergence for Markov processes. There is some literature for the case $p = 1$ in which these rates are shown to vary with the choice of copula and, in particular, its behaviour in joint tail regions [3,5,12,13,31]. For some results relevant to the case where $p > 1$, see Rémillard et al. [39].
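An experiment in the spirit of Figure 1 can be sketched as follows (our code; the seed, and hence the excursion lengths, will differ from the figure). Because the Clayton h-function has a closed-form inverse, simulation can use the standard sequential D-vine algorithm in $O(np)$ operations: $u_t$ is obtained by applying inverse h-functions to the innovation, and backward Rosenblatt values are propagated with recursion (4).

```python
import numpy as np

def clayton(theta, survival=False):
    """(h1, h1inv, h2) for a (possibly 180-degree rotated) Clayton copula."""
    def h1(u, v):        # dC/du, the conditional df of V given U = u
        return u**(-theta - 1) * (u**(-theta) + v**(-theta) - 1)**(-(1 + theta)/theta)
    def h1inv(u, z):     # closed-form solution of h1(u, v) = z for v
        return (1 + u**(-theta) * (z**(-theta/(1 + theta)) - 1))**(-1/theta)
    def h2(u, v):        # dC/dv; the Clayton copula is exchangeable
        return h1(v, u)
    if not survival:
        return h1, h1inv, h2
    return (lambda u, v: 1 - h1(1 - u, 1 - v),
            lambda u, z: 1 - h1inv(1 - u, 1 - z),
            lambda u, v: 1 - h2(1 - u, 1 - v))

cops = [clayton(2, survival=True), clayton(2), clayton(4, survival=True)]
p, n = len(cops), 10_000
rng = np.random.default_rng(2022)
z, u = rng.uniform(size=n), np.empty(n)

Bprev = []                       # Bprev[j] = backward Rosenblatt value at lag j
for t in range(n):
    K = min(t, p)
    x = z[t]
    for k in range(K, 0, -1):    # invert R_K in its last argument, Eq. (9)
        x = cops[k - 1][1](Bprev[k - 1], x)
    u[t] = x
    F, Bnew = [x], [x]
    for k in range(1, min(t + 1, p)):     # propagate recursion (4)
        F.append(cops[k - 1][0](Bprev[k - 1], F[k - 1]))
        Bnew.append(cops[k - 1][2](Bprev[k - 1], F[k - 1]))
    Bprev = Bnew

runs = ''.join('1' if v > 0.6 else '0' for v in u).split('0')
print(max(len(r) for r in runs))  # length of the longest excursion above 0.6
```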
4 Gaussian processes

Gaussian processes are processes whose finite-dimensional marginal distributions are multivariate Gaussian. We will identify the term Gaussian processes with non-singular Gaussian processes throughout; i.e., we assume that the finite-dimensional marginal distributions of Gaussian processes have invertible covariance matrices and admit joint densities. Such processes represent a subclass of the s-vine processes.

Proposition 2. (1) Every stationary Gaussian process is an s-vine process. (2) Every s-vine process in which the pair copulas of the sequence $(C_k)_{k\in\mathbb{N}}$ are Gaussian and the marginal distribution $F_X$ is Gaussian is a Gaussian process.

4.1 S-vine representations of Gaussian processes

The first implication of Proposition 2 is that every Gaussian process has a unique s-vine-copula representation. This insight offers methods for constructing or simulating such processes as generic s-vine processes using Eq. (9) and estimating them using a likelihood based on Eq. (5).

Let $(X_t)_{t\in\mathbb{N}}$ be a stationary Gaussian process with mean $\mu_X$, variance $\sigma_X^2$, and autocorrelation function (acf) $(\rho_k)_{k\in\mathbb{N}}$; these three quantities uniquely determine a Gaussian process. We assume the following:

Assumption 2. The acf $(\rho_k)_{k\in\mathbb{N}}$ satisfies $\rho_k \to 0$ as $k \to \infty$.

It is well known that this is a necessary and sufficient condition for a Gaussian process $(X_t)$ to be a mixing process and therefore ergodic [14,32].

The acf uniquely determines the partial autocorrelation function (pacf) $(\alpha_k)_{k\in\mathbb{N}}$ through a one-to-one transformation [2,38]. Since the partial autocorrelation of a Gaussian process is the correlation of the conditional distribution of $(X_{t-k}, X_t)$ given the intervening variables, the pair copulas in the s-vine copula representation are given by $C_k = C^{\text{Ga}}_{\alpha_k}$.

For $k\in\mathbb{N}$, let $\boldsymbol{\rho}_k = (\rho_1,\ldots,\rho_k)^\top$ and let $P_k$ denote the correlation matrix of $(X_1,\ldots,X_k)$. Clearly, $P_1 = 1$ and, for $k > 1$, $P_k$ is a symmetric Toeplitz matrix whose diagonals are filled by the first $k-1$ elements of $\boldsymbol{\rho}_k$; moreover, $P_k$ is non-singular for all $k$ under Assumption 2 ([11], Proposition 4). The one-to-one series of recursive transformations relating $(\alpha_k)_{k\in\mathbb{N}}$ to $(\rho_k)_{k\in\mathbb{N}}$ is $\alpha_1 = \rho_1$ and, for $k > 1$,

(14)  $\alpha_k = \frac{\rho_k - \boldsymbol{\rho}_{k-1}^\top P_{k-1}^{-1} \overline{\boldsymbol{\rho}}_{k-1}}{1 - \boldsymbol{\rho}_{k-1}^\top P_{k-1}^{-1} \boldsymbol{\rho}_{k-1}}, \qquad \rho_k = \alpha_k\bigl(1 - \boldsymbol{\rho}_{k-1}^\top P_{k-1}^{-1} \boldsymbol{\rho}_{k-1}\bigr) + \boldsymbol{\rho}_{k-1}^\top P_{k-1}^{-1} \overline{\boldsymbol{\rho}}_{k-1};$

see, for example, Joe [26] or the Durbin–Levinson algorithm ([11], Proposition 5.2.1).

Remark 1. Note that the restriction to non-singular Gaussian processes ensures that $|\rho_k| < 1$ and $|\alpha_k| < 1$ for all $k\in\mathbb{N}$, and this is henceforth always assumed.

We review three examples of well-known Gaussian processes from the point of view of s-vine processes.

Example 1 (Gaussian ARMA models). Any causal Gaussian ARMA($p$,$q$) model may be represented as an s-vine process, and full maximum likelihood estimation can be carried out using a joint density based on Eq. (5). If $\boldsymbol{\phi} = (\phi_1,\ldots,\phi_p)^\top$ and $\boldsymbol{\psi} = (\psi_1,\ldots,\psi_q)^\top$ denote the AR and MA parameters and $\rho_k(\boldsymbol{\phi},\boldsymbol{\psi})$ the acf, then we can use the transformation in Eq. (14) to parameterize Eq. (5) in terms of $\boldsymbol{\phi}$ and $\boldsymbol{\psi}$ using Gaussian pair copulas $C_k = C^{\text{Ga}}_{\alpha_k(\boldsymbol{\phi},\boldsymbol{\psi})}$. In practice, this approach is more of theoretical interest since standard estimation methods are generally much faster.
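The acf-to-pacf direction of the map (14) is the Durbin–Levinson recursion, which can be sketched as follows (our helper, assuming $|\alpha_k| < 1$ throughout); it also produces the coefficients $\phi_j^{(k)}$ that reappear in Proposition 3 below.

```python
import numpy as np

def acf_to_pacf(rho):
    """Map (rho_1, ..., rho_K) to (alpha_1, ..., alpha_K) via Durbin-Levinson."""
    K = len(rho)
    alpha, phi = np.zeros(K), np.zeros(K)
    alpha[0] = phi[0] = rho[0]
    v = 1 - rho[0]**2                     # prediction error variance
    for k in range(1, K):
        a = (rho[k] - phi[:k] @ rho[k-1::-1]) / v
        alpha[k] = a
        prev = phi[:k].copy()
        phi[:k] = prev - a * prev[::-1]   # coefficient update, cf. Eq. (18)
        phi[k] = a
        v *= 1 - a**2
    return alpha

# an AR(1)-type acf rho_k = 0.5^k has pacf (0.5, 0, 0, ...)
print(acf_to_pacf(0.5 ** np.arange(1, 6)))
```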
Example 2 (Fractional Gaussian noise (FGN)). This process has acf given by

$\rho_k(H) = \tfrac{1}{2}\bigl((k+1)^{2H} + (k-1)^{2H} - 2k^{2H}\bigr), \quad 0 < H < 1,$

where $H$ is the Hurst exponent [41]. Thus, the transformation in Eq. (14) may be used to parameterize Eq. (5) in terms of $H$ using Gaussian pair copulas $C_k = C^{\text{Ga}}_{\alpha_k(H)}$, and the FGN model may be fitted to data as an s-vine process and $H$ may be estimated.

Example 3 (Gaussian ARFIMA models). The ARFIMA($p,d,q$) model with $-1/2 < d < 1/2$ can be handled in a similar way to the ARMA($p,q$) model, of which it is a generalization. In the case where $p = q = 0$, it has been shown [21] that

(15)  $\alpha_k = \frac{d}{k-d}, \quad k\in\mathbb{N};$

see also Brockwell and Davis ([11], Theorem 13.2.1). The simple closed-form expression for the pacf means that the ARFIMA($0,d,0$) model is even more convenient to treat as an s-vine than FGN; the two models are in fact very similar in behaviour, although not identical. It is interesting to note that the pacf is not summable, and similar behaviour holds for some other ARFIMA processes. For example, for $p,q \in \mathbb{N}\cup\{0\}$ and $0 < d < 1/2$, the pacf satisfies $|\alpha_k| \sim d/k$ as $k \to \infty$ [23].
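Eq. (15) is easy to confirm numerically (our check): pushing the standard closed form of the ARFIMA($0,d,0$) acf, $\rho_k = \Gamma(k+d)\Gamma(1-d)/(\Gamma(k-d+1)\Gamma(d))$ (see, e.g., Brockwell and Davis [11]), through the Durbin–Levinson recursion of the previous sketch recovers $\alpha_k = d/(k-d)$.

```python
import numpy as np
from scipy.special import gammaln

def acf_to_pacf(rho):   # Durbin-Levinson, as in the earlier sketch
    K = len(rho)
    alpha, phi = np.zeros(K), np.zeros(K)
    alpha[0] = phi[0] = rho[0]
    v = 1 - rho[0]**2
    for k in range(1, K):
        a = (rho[k] - phi[:k] @ rho[k-1::-1]) / v
        alpha[k], prev = a, phi[:k].copy()
        phi[:k] = prev - a * prev[::-1]
        phi[k] = a
        v *= 1 - a**2
    return alpha

d, k = 0.2, np.arange(1, 51)
rho = np.exp(gammaln(k + d) - gammaln(k - d + 1) + gammaln(1 - d) - gammaln(d))
print(np.allclose(acf_to_pacf(rho), d / (k - d)))   # True
```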
4.2 New Gaussian processes from s-vines

A further implication of Proposition 2 is that it shows how we can create and estimate some new stationary and ergodic Gaussian processes without setting them up in the classical way using recurrence equations, lag operators, and Gaussian innovations. Instead we choose sequences of Gaussian pair copulas $(C_k)$ parameterized by sequences of partial correlations $(\alpha_k)$.

As in the previous section, we can begin with a parametric form for the acf $\rho_k(\boldsymbol{\theta})$ such that $\rho_k(\boldsymbol{\theta}) \to 0$ as $k \to \infty$ and build the model using pair copulas parameterized by the parameters $\boldsymbol{\theta}$ of the implied pacf $\alpha_k(\boldsymbol{\theta})$. Alternatively, we can choose a parametric form for the pacf $\alpha_k(\boldsymbol{\theta})$ directly.

Any finite set of values $\{\alpha_1,\ldots,\alpha_p\}$ yields an AR($p$) model, which is a special case of the finite-order s-vine models of Section 3. However, infinite-order processes that satisfy Assumption 2 are more delicate to specify. A necessary condition is that the sequence $(\alpha_k)$ satisfies $\alpha_k \to 0$ as $k \to \infty$, but this is not sufficient. To see this, note that if $\alpha_k = (k+1)^{-1}$, the relationship (14) implies that $\rho_k = 0.5$ for all $k$, which violates Assumption 2. A sufficient condition follows from a result of Debowski [16], although, in view of Example 3, it is not a necessary condition:

Assumption 3. The partial acf $(\alpha_k)_{k\in\mathbb{N}}$ satisfies $\sum_{k=1}^\infty |\alpha_k| < \infty$.

Debowski [16] showed that, if Assumption 3 holds, then the equality

(16)  $1 + 2\sum_{k=1}^\infty \rho_k = \prod_{k=1}^\infty \frac{1+\alpha_k}{1-\alpha_k}$

also holds. The rhs of Eq. (16) is a convergent product, since absolute summability ensures that the sums $\sum_{k=1}^\infty \ln(1 \pm \alpha_k)$ converge. This implies the convergence of $\sum_{k=1}^\infty \rho_k$, which implies $\rho_k \to 0$, which in turn implies that Assumption 2 also holds, as we require.

Assumption 3 still allows some quite pathological processes, as noted by Debowski [16]. For example, even for a finite-order AR($p$) process with $\alpha_k \geqslant a > 0$ for $k \in \{1,\ldots,p\}$ and $\alpha_k = 0$ for $k > p$, it follows that $\sum_{k=1}^\infty \rho_k \geqslant 0.5\bigl(((1+a)/(1-a))^p - 1\bigr)$, and this grows exponentially with $p$, leading to an exceptionally slow decay of the acf.

4.3 Rosenblatt functions for Gaussian processes

For Gaussian processes, the Rosenblatt functions and inverse Rosenblatt functions take relatively tractable forms.

Proposition 3. Let $(C_k)_{k\in\mathbb{N}}$ be a sequence of Gaussian pair copulas with parameters $(\alpha_k)_{k\in\mathbb{N}}$ and assume that Assumption 2 holds. The forward Rosenblatt functions are given by

(17)  $R_k(\boldsymbol{u},x) = \Phi\left(\frac{\Phi^{-1}(x) - \sum_{j=1}^k \phi_j^{(k)} \Phi^{-1}(u_{k+1-j})}{\sigma_k}\right),$

where $\sigma_k^2 = \prod_{j=1}^k (1-\alpha_j^2)$ and the coefficients $\phi_j^{(k)}$ are given recursively by

(18)  $\phi_j^{(k)} = \begin{cases} \phi_j^{(k-1)} - \alpha_k \phi_{k-j}^{(k-1)}, & j \in \{1,\ldots,k-1\}, \\ \alpha_k, & j = k. \end{cases}$

The inverse Rosenblatt functions are given by

(19)  $S_k(\boldsymbol{z},x) = \Phi\left(\sigma_k \Phi^{-1}(x) + \sum_{j=1}^k \psi_j^{(k)} \Phi^{-1}(z_{k+1-j})\right),$

where the coefficients $\psi_j^{(k)}$ are given recursively by

(20)  $\psi_j^{(k)} = \sum_{i=1}^j \phi_i^{(k)} \psi_{j-i}^{(k-i)}, \quad j \in \{1,\ldots,k\},$

where $\psi_0^{(k)} = \sigma_k$ for $k \geqslant 1$ and $\psi_0^{(0)} = 1$.

We can analyse the behaviour of the Rosenblatt and inverse Rosenblatt functions as $k \to \infty$ in a number of different cases.
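The closed forms (17)-(20) are straightforward to implement, and doing so gives a direct check of the causal representation (11): building $U$ recursively via Eq. (9), with $R_k^{-1}$ read off from Eq. (17), gives the same value as evaluating the inverse Rosenblatt function (19) directly on the innovations. Below is a sketch (ours, with an arbitrary pacf).

```python
import numpy as np
from scipy.stats import norm

alpha = np.array([0.5, 0.3, 0.2, 0.1])         # partial autocorrelations
K = len(alpha)
sigma = np.concatenate([[1.0], np.sqrt(np.cumprod(1 - alpha**2))])  # sigma_0..sigma_K

phi = [np.array([])]                           # phi[k] = (phi_1^{(k)},...,phi_k^{(k)})
for k in range(1, K + 1):                      # recursion (18)
    prev = phi[k - 1]
    phi.append(np.concatenate([prev - alpha[k-1] * prev[::-1], [alpha[k-1]]]))

psi = [np.array([1.0])]                        # psi[k][j], with psi[k][0] = sigma_k
for k in range(1, K + 1):
    row = [sigma[k]]
    for j in range(1, k + 1):                  # recursion (20)
        row.append(sum(phi[k][i-1] * psi[k-i][j-i] for i in range(1, j + 1)))
    psi.append(np.array(row))

rng = np.random.default_rng(0)
z = rng.uniform(size=K + 1)
u = [z[0]]
for k in range(1, K + 1):                      # updating equation (9), via (17)
    past = norm.ppf(np.array(u[::-1]))         # Phi^{-1}(u_k), ..., Phi^{-1}(u_1)
    u.append(norm.cdf(sigma[k] * norm.ppf(z[k]) + phi[k] @ past))

# direct evaluation of S_K((z_1,...,z_K), z_{K+1}) via Eq. (19)
s = norm.cdf(sigma[K] * norm.ppf(z[K]) + psi[K][1:] @ norm.ppf(z[K-1::-1]))
print(np.isclose(s, u[-1]))                    # True
```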
4.3.1 Gaussian processes of finite order

In the case of a Gaussian s-vine process of finite order $p$, we have, for $k > p$, that $\alpha_k = 0$, $\sigma_k = \sigma_p$, and $\phi_j^{(k)} = \phi_j^{(p)}$. If $(U_k)_{k\in\mathbb{N}}$ is constructed from $(Z_k)_{k\in\mathbb{N}}$ using the algorithm described by Eq. (9), and if we make the substitutions $X_k = \Phi^{-1}(U_k)$ and $\varepsilon_k = \Phi^{-1}(Z_k)$ as in the proof of Proposition 3, then it follows from Eq. (17) that $X_k = \sum_{j=1}^p \phi_j^{(p)} X_{k-j} + \sigma_p \varepsilon_k$ for $k > p$, which is the classical recurrence equation that defines a Gaussian AR($p$) process; from Eqs. (11) and (19), we also have that $X_k = \sum_{j=1}^{k-1} \psi_j^{(k-1)} \varepsilon_{k-j} + \sigma_p \varepsilon_k$ for $k > p$. These two representations can be written in invertible and causal forms as follows:

(21)  $\varepsilon_k = \sum_{j=0}^p \tilde{\phi}_j^{(p)} X_{k-j} \quad\text{and}\quad X_k = \sum_{j=0}^{k-1} \psi_j^{(k-1)} \varepsilon_{k-j}, \quad k > p,$

where $\tilde{\phi}_0^{(p)} = 1/\sigma_p$, $\tilde{\phi}_j^{(p)} = -\phi_j^{(p)}/\sigma_p$ for $j \geqslant 1$, and $\psi_0^{(k-1)} = \sigma_p$.

The first series in Eq. (21) is clearly a finite series, while the classical theory is concerned with conditions on the AR coefficients $\tilde{\phi}_j^{(p)}$ that allow us to pass to an infinite-order moving-average representation as $k \to \infty$ in the second series. In fact, by setting up our Gaussian models using partial autocorrelations, causality in the classical sense is guaranteed; this follows as a special case of Theorem 1.

4.3.2 Gaussian processes with absolutely summable partial autocorrelations

We next consider a more general case where the process may be of infinite order, but Assumption 3 holds. To consider infinite-order models, we now consider a process $(U_t)_{t\in\mathbb{Z}}$ defined on the integers. The result that follows is effectively a restating of a result by Debowski [16] in the particular context of Gaussian s-vine copula processes.

Theorem 1. Let $(U_t)_{t\in\mathbb{Z}}$ be a Gaussian s-vine copula process for which the parameters $(\alpha_k)_{k\in\mathbb{N}}$ of the Gaussian pair copula sequence $(C_k)_{k\in\mathbb{N}}$ satisfy Assumption 3. Then, for all $t$, we have the almost sure limiting representations

(22)  $U_t = \lim_{k\to\infty} S_k\bigl((Z_{t-k},\ldots,Z_{t-1})^\top, Z_t\bigr),$

(23)  $Z_t = \lim_{k\to\infty} R_k\bigl((U_{t-k},\ldots,U_{t-1})^\top, U_t\bigr),$

for an iid uniform innovation process $(Z_t)_{t\in\mathbb{Z}}$.

4.3.3 Long-memory ARFIMA processes

As noted earlier, the pacf of an ARFIMA($p,d,q$) model with $0 < d < 0.5$ is not absolutely summable [23], and so Theorem 1 does not apply in this case. Nevertheless, Brockwell and Davis ([11], Section 13.2) show that the Gaussian process has a causal representation of the form $X_t = \sum_{j=0}^\infty \psi_j \varepsilon_{t-j}$, where convergence is now in mean square and the coefficients are square summable, i.e., $\sum_{j=0}^\infty \psi_j^2 < \infty$. Since convergence in mean square implies convergence in probability, the continuous mapping theorem implies that a representation of the form $U_t = \lim_{k\to\infty} S_k((Z_{t-k},\ldots,Z_{t-1})^\top, Z_t)$ at least holds under convergence in probability.

4.3.4 A non-causal and non-invertible case

If $\alpha_k = 1/(k+1)$ for all $k$, then $\rho_k = 0.5$, and both Assumptions 2 and 3 are violated.
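This cautionary example can be confirmed numerically (our check): mapping the pacf $\alpha_k = 1/(k+1)$ back to the acf with the inverse direction of recursion (14) gives $\rho_k = 0.5$ at every lag, so Assumption 2 fails even though $\alpha_k \to 0$.

```python
import numpy as np

def pacf_to_acf(alpha):
    """Map (alpha_1, ..., alpha_K) to (rho_1, ..., rho_K), inverting (14)."""
    K = len(alpha)
    rho, phi = np.zeros(K), np.zeros(K)
    rho[0] = phi[0] = alpha[0]
    v = 1 - alpha[0]**2                  # prediction error variance
    for k in range(1, K):
        rho[k] = alpha[k] * v + phi[:k] @ rho[k-1::-1]
        prev = phi[:k].copy()
        phi[:k] = prev - alpha[k] * prev[::-1]
        phi[k] = alpha[k]
        v *= 1 - alpha[k]**2
    return rho

print(pacf_to_acf(1 / (np.arange(1, 21) + 1)))   # every entry equals 0.5
```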
It can be verified (for example by induction) that the recursive formulas (18) and (20) imply that $\phi_j^{(k)} = 1/(k+1)$ and $\psi_j^{(k)} = \sigma_{k-j}/(k+2-j)$ for $j \geqslant 1$ (recall that $\psi_0^{(k)} = \sigma_k$). These coefficient sequences are unusual; the coefficients $\phi_j^{(k)}$ of the Rosenblatt function in Eq. (17) place equal weight on all past values $X_{k+1-j} = \Phi^{-1}(U_{k+1-j})$, while the coefficients $\psi_j^{(k)}$ of the inverse Rosenblatt function on the innovations in Eq. (19) place weight $\psi_k^{(k)} = 1/2$ on the first value $\varepsilon_1 = \Phi^{-1}(Z_1)$ and decreasing weights on more recent values $\varepsilon_j$, $j > 1$.

As $k \to \infty$, we do have $\sigma_k^2 = \prod_{j=1}^k (1 - 1/(j+1)^2) \to 1/2$, but, for fixed $j \geqslant 1$, the terms $\phi_j^{(k)}$ and $\psi_j^{(k)}$ both converge to the trivial limiting value 0. In particular, we do not obtain a convergent limiting representation of the form in Eq. (22).

5 General s-vine processes

We now consider infinite-order s-vine copula processes constructed from general sequences $(C_k)_{k\in\mathbb{N}}$ of pair copulas.

5.1 Causality and invertibility

The key consideration for the stability of an infinite-order process is whether it admits a convergent causal representation. A process $(U_t)_{t\in\mathbb{Z}}$ with such a representation is a convergent non-linear filter of independent noise. It will have the property that $U_t$ and $U_{t-k}$ are independent in the limit as $k \to \infty$, implying mixing behaviour and ergodicity. We suggest the following definition of the causality and invertibility properties for a general s-vine process.

Definition 3. Let $(C_k)_{k\in\mathbb{N}}$ be a sequence of pair copulas and let $(R_k)_{k\in\mathbb{N}}$ and $(S_k)_{k\in\mathbb{N}}$ be the corresponding Rosenblatt forward functions and Rosenblatt inverse functions defined by Eqs. (4) and (12). An s-vine copula process $(U_t)_{t\in\mathbb{Z}}$ associated with the sequence $(C_k)_{k\in\mathbb{N}}$ is strongly causal if there exists a process of iid uniform random variables $(Z_t)_{t\in\mathbb{Z}}$ such that Eq. (22) holds almost surely for all $t$, and it is strongly invertible if the representation in Eq. (23) holds almost surely for all $t$. If convergence in Eqs. (22) and (23) only holds in probability, the process is weakly causal or weakly invertible.

We know that Gaussian ARMA processes defined as s-vine processes are always strongly causal (and invertible) and that the long-memory ARFIMA($p,d,q$) process with $0 < d < 0.5$ is weakly causal. When we consider sequences of Rosenblatt functions for sequences of non-Gaussian pair copulas, proving causality appears to be more challenging mathematically, since it is no longer a question of analysing the convergence of series.
In the next section, we use simulations to conjecture that causality holds for a class of processes defined via the Kendall correlations of the copula sequence.

In a finite-order process, the copula sequence for any lag $k$ greater than the order $p$ consists of independence copulas; it seems intuitively clear that, to obtain an infinite-order process with a convergent causal representation, the partial copula sequence $(C_k)_{k\in\mathbb{N}}$ should converge to the independence copula $C^\perp$ as $k \to \infty$. However, in view of Example 4.3.4, this is not a sufficient condition, and the speed of convergence of the copula sequence is also important. Ideally, we require conditions on the speed of convergence $C_k \to C^\perp$ so that the marginal copula $C^{(k)}$ in Eq. (8) also tends to $C^\perp$; in that case, the variables $U_t$ and $U_{t-k}$ are asymptotically independent as $k \to \infty$ and mixing behaviour follows.

5.2 A practical approach to non-Gaussian s-vines

Suppose we take a sequence of pair copulas $(C_k)_{k\in\mathbb{N}}$ from some parametric family and parameterize them in such a way that (i) the copulas converge uniformly to the independence copula as $k \to \infty$ and (ii) the level of dependence of each copula $C_k$ is identical to that of a Gaussian pair copula sequence that gives rise to an ergodic Gaussian process. The intuition here is that by sticking close to the pattern of decay of dependence in a well-behaved Gaussian process, we might hope to construct a stable causal process that is both mixing and ergodic.

A natural way of making "level of dependence" concrete is to consider the Kendall rank correlation function of the copula sequence, defined in the following way.

Definition 4. The Kendall partial autocorrelation function (kpacf) $(\tau_k)_{k\in\mathbb{N}}$ associated with a copula sequence $(C_k)_{k\in\mathbb{N}}$ is given by $\tau_k = \tau(C_k)$ for $k\in\mathbb{N}$, where $\tau(C)$ denotes the Kendall's tau coefficient for a copula $C$.

For a Gaussian copula sequence with $C_k = C^{\text{Ga}}_{\alpha_k}$, we have

(24)  $\tau_k = \frac{2}{\pi}\arcsin(\alpha_k).$

As in Section 4.2, suppose that $(\alpha_k(\boldsymbol{\theta}))_{k\in\mathbb{N}}$ is the pacf of a stationary and ergodic Gaussian process parametrized by the parameters $\boldsymbol{\theta}$, such as an ARMA or ARFIMA model; this implies a parametric form for the kpacf $(\tau_k(\boldsymbol{\theta}))_{k\in\mathbb{N}}$. The idea is to choose a sequence of non-Gaussian pair copulas that shares this kpacf.

A practical problem that may arise is that $\tau_k = \tau_k(\boldsymbol{\theta})$ can take any value in $(-1,1)$; only certain copula families, such as Gauss and Frank, are said to be comprehensive and yield any value for $\tau_k$. If we wish to use, for example, a sequence of Gumbel copulas to build our model, then we need to find a solution for negative values of Kendall's tau. One possibility is to allow 90- or 270-degree rotations of the copula at negative values of $\tau_k$, and another is to substitute a comprehensive copula at any position $k$ in the sequence at which $\tau_k$ is negative.
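The re-parameterization can be sketched as follows (our helper functions; the pacf values are arbitrary). The pacf of a target Gaussian process is turned into a kpacf via Eq. (24), and each single-parameter family is then calibrated through its own Kendall's tau formula (for the Gumbel copula, $\tau = 1 - 1/\theta$; for the Clayton copula, $\tau = \theta/(\theta+2)$).

```python
import numpy as np

def kpacf_from_pacf(alpha):
    """kpacf of a Gaussian copula sequence with partial correlations alpha, Eq. (24)."""
    return (2 / np.pi) * np.arcsin(np.asarray(alpha))

def gumbel_theta(tau):
    if tau < 0:
        raise ValueError("Gumbel needs tau >= 0: rotate or substitute a family")
    return 1 / (1 - tau)        # invert tau = 1 - 1/theta

def clayton_theta(tau):
    return 2 * tau / (1 - tau)  # invert tau = theta/(theta + 2)

alpha = np.array([0.5, 0.3, 0.1])      # pacf of the target Gaussian process
tau = kpacf_from_pacf(alpha)
print(tau)
print([gumbel_theta(t) for t in tau])  # Gumbel sequence sharing the kpacf
```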
Remark 2. Note that the assumption that the pair copulas $C_k$ converge to the independence copula has implications for using $t$ copulas $C^t_{\nu,\alpha}$ in this approach. The terms of the copula sequence $C_k = C^t_{\nu_k,\alpha_k}$ would have to satisfy $\nu_k \to \infty$ and $\alpha_k \to 0$ as $k \to \infty$; the sequence given by $C_k = C^t_{\nu,\alpha_k}$ for fixed $\nu$ does not converge to the independence copula as $\alpha_k \to 0$. While the sequence $(\alpha_k)_{k\in\mathbb{N}}$ can be connected to the kpacf by the same formula (24), the sequence $(\nu_k)_{k\in\mathbb{N}}$ is not fixed by the kpacf. It is simpler in this approach to work with copula families with a single parameter so that there is a one-to-one relationship between Kendall's tau and the copula parameter.

To compare the speed of convergence of the copula filter for different copula sequences sharing the same kpacf, we conduct some simulation experiments. For fixed $n$ and for a fixed realization $z_1,\ldots,z_n$ of independent uniform noise, we plot the points $(k, S_k(\boldsymbol{z}_{[n-k,n-1]}, z_n))$ for $k \in \{1,\ldots,n-1\}$. We expect the points to converge to a fixed value as $k \to n-1$, provided we take a sufficiently large value of $n$. When the copula sequence consists of Clayton copulas, we will refer to the model as a Clayton copula filter; similarly, Gumbel copulas yield a Gumbel copula filter, and so on. The following examples suggest that there are some differences in the convergence rates of the copula filters. This appears to relate to the tail dependence characteristics of the copulas [25,27]. We recall that the Gumbel and Joe copulas are upper tail dependent, while the Clayton copula is lower tail dependent; the Gauss and Frank copulas are tail independent. The filters based on sequences of tail-dependent copulas generally show slower convergence.

Example 4 (Non-Gaussian ARMA(1,1) models). In this example, we consider s-vine copula processes sharing the kpacf of the ARMA(1,1) model with autoregressive parameter 0.95 and moving-average parameter $-0.85$. Fixing $n = 201$, we obtain Figure 2. Convergence appears to be fastest for the Gaussian and Frank copula filters and slowest for the Clayton filter, followed by the Joe filter; the Gumbel filter is an intermediate case. We can also discern a tendency for jumps in the value of $S_k(\boldsymbol{z}_{[n-k,n-1]}, z_n)$ to be upward for the upper tail-dependent Gumbel and Joe copulas and downward for the lower tail-dependent Clayton copula.

Figure 2: Plots of $(k, S_k(\boldsymbol{z}_{[n-k,n-1]}, z_n))$ for $k \in \{1,\ldots,n-1\}$ for the copula filters of ARMA(1,1) models; see Example 4. Horizontal lines show the ultimate values $S_{n-1}(\boldsymbol{z}_{[1,n-1]}, z_n)$.
Example 5 (Non-Gaussian ARFIMA(1,$d$,1) models). In this example, we consider s-vine copula processes sharing the kpacf of the ARFIMA(1,$d$,1) model with autoregressive parameter 0.95, moving-average parameter $-0.85$, and fractional differencing parameter $d = 0.02$. The latter implies that the pacf of the Gaussian process satisfies $|\alpha_k| \sim 0.02/k$ as $k \to \infty$ [23]. The lack of absolute summability means that the Gaussian copula process does not satisfy the conditions of Theorem 1. It is an unresolved question whether any of these processes is causal. Fixing $n = 701$, we obtain Figure 3. For the realized series of innovations used in the picture, convergence appears to take place, but it is extremely slow. The tail-dependent Clayton and Joe copulas appear to take longest to settle down.

Figure 3: Plots of $(k, S_k(\boldsymbol{z}_{[n-k,n-1]}, z_n))$ for $k \in \{1,\ldots,n-1\}$ for the copula filters of ARFIMA(1,$d$,1) models; see Example 5. Horizontal lines show the ultimate values $S_{n-1}(\boldsymbol{z}_{[1,n-1]}, z_n)$.

An obvious practical solution that circumvents the issue of whether the infinite-order process has a convergent causal representation is to truncate the copula sequence $(C_k)_{k\in\mathbb{N}}$ so that $C_k = C^\perp$ for $k > p$ for some relatively large but fixed value $p$. This places us back in the setting of ergodic Markov chains but, by parameterizing models through the kpacf, we preserve the advantages of parsimony.

5.3 An example with real data

For this example, we have used data on the US CPI (consumer price index) taken from the OECD webpage. We analyse the log-differenced time series of quarterly CPI values from the first quarter of 1960 to the fourth quarter of 2020, which can be interpreted as measuring the rate of inflation ([46], Sections 14.2-14.4). The inflation data are shown in the upper-left panel of Figure 4; there are $n = 244$ observations.

Figure 4: Top row: log-differenced CPI data and estimated kpacf of the s-vine copula process using a Gumbel copula sequence. Middle row: QQ-plots for residuals from models based on Gaussian (left) and Gumbel (right) copula sequences. Bottom row: QQ-plots of the data against fitted normal (left) and skewed Student (right) marginal distributions.

To establish a baseline model, we use an automatic ARMA selection algorithm, and this selects an ARMA(5,1) model. We first address the issue of whether the implied Gaussian copula sequence in an ARMA(5,1) model can be replaced by Gumbel, Clayton, Frank, or Joe copula sequences (or 180-degree rotations thereof); for any lag $k$ at which the estimated kpacf $\tau_k$ is negative, we retain a Gaussian copula, and so the non-Gaussian copula sequences are actually hybrid sequences with some Gaussian terms.
The data $(x_1,\ldots,x_n)$ are transformed to pseudo-observations $(u_1,\ldots,u_n)$ on the copula scale using the empirical distribution function, and the s-vine copula process is estimated by maximum likelihood; this is the commonly used pseudo-maximum-likelihood method [12,19]. The best model results from replacing Gaussian copulas with Gumbel copulas, and the improvements in AIC and BIC are shown in the upper panel of Table 1; the improvement in fit is strikingly large. While the presented results relate to infinite-order processes, we note that very similar results (not tabulated) are obtained by fitting s-vine copula processes of finite order, where the kpacf is truncated at lag 30. Parameter estimates for the infinite-order models are presented in Table 2.

Table 1: Comparison of models by AIC and BIC: the top two lines relate to models for the pseudo-copula data $(u_1,\ldots,u_n)$, while the bottom three lines relate to full models of the original data $(x_1,\ldots,x_n)$.

| Model                                            | No. pars | AIC     | BIC     |
|--------------------------------------------------|----------|---------|---------|
| Gaussian copula process                          | 6        | -184.62 | -163.64 |
| Gumbel copula process                            | 6        | -209.28 | -188.30 |
| Gaussian process                                 | 8        | 372.73  | 400.71  |
| Gaussian copula process + skewed Student margin  | 10       | 352.50  | 387.47  |
| Gumbel copula process + skewed Student margin    | 10       | 319.17  | 354.14  |

Table 2: Parameter estimates and standard errors for s-vine copula processes with Gaussian and Gumbel copula sequences fitted to the pseudo-copula data $(u_1,\ldots,u_n)$.

| Parameter | $\theta^{\text{(Ga)}}$ | s.e.  | $\theta^{\text{(Gu)}}$ | s.e.  |
|-----------|------------------------|-------|------------------------|-------|
| $\phi_1$  | -0.381                 | 0.104 | -0.232                 | 0.130 |
| $\phi_2$  | 0.144                  | 0.081 | 0.136                  | 0.094 |
| $\phi_3$  | 0.197                  | 0.063 | 0.180                  | 0.061 |
| $\phi_4$  | 0.462                  | 0.075 | 0.410                  | 0.077 |
| $\phi_5$  | 0.324                  | 0.063 | 0.266                  | 0.061 |
| $\psi_1$  | 0.870                  | 0.098 | 0.771                  | 0.118 |

The residual QQ-plots in the middle row of Figure 4 give further insight into the improved fit of the process with Gumbel copulas. In the usual manner, residuals are reconstructions of the unobserved innovation variables. If $(\widehat{R}_k)_{k\in\mathbb{N}}$ denotes the sequence of estimated Rosenblatt forward functions, implied by the sequence $(\widehat{C}_k)_{k\in\mathbb{N}}$ of estimated copulas, then residuals $(z_1,\ldots,z_n)$ are constructed by setting $z_1 = u_1$ and $z_t = \widehat{R}_{t-1}(\boldsymbol{u}_{[1,t-1]}, u_t)$ for $t > 1$. To facilitate graphical analysis, these are transformed onto the standard normal scale, so that the QQ-plots in the middle row of Figure 4 relate to the values $(\Phi^{-1}(z_1),\ldots,\Phi^{-1}(z_n))$ and are against a standard normal reference distribution. The residuals from the baseline Gaussian copula appear to deviate from normality, whereas the residuals from the Gumbel copula model are much better behaved; the latter pass a Shapiro-Wilk test of normality ($p$-value = 0.97), whereas the former do not ($p$-value = 0.01).
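The residual construction can be carried out in $O(np)$ operations by propagating forward and backward Rosenblatt values with recursion (4). Below is a sketch (ours; Gaussian h-functions and arbitrary parameters stand in for the fitted copula sequence, and the sequence is truncated at lag $p$ as in a finite-order fit).

```python
import numpy as np
from scipy.stats import norm

def gaussian_h(alpha):
    s = np.sqrt(1 - alpha**2)
    def h(i, u1, u2):
        x1, x2 = norm.ppf(u1), norm.ppf(u2)
        return norm.cdf(((x2 - alpha*x1) if i == 1 else (x1 - alpha*x2)) / s)
    return h

def residuals(h_seq, u):
    """Rosenblatt residuals z_1 = u_1, z_t = R_{t-1}(u_{[1,t-1]}, u_t),
    with the copula sequence truncated at lag p = len(h_seq)."""
    p, n = len(h_seq), len(u)
    z, Bprev = np.empty(n), []
    for t in range(n):
        K = min(t, p)
        F, Bnew = [u[t]], [u[t]]
        for k in range(1, K + 1):         # recursion (4)
            F.append(h_seq[k-1](1, Bprev[k-1], F[k-1]))
            Bnew.append(h_seq[k-1](2, Bprev[k-1], F[k-1]))
        z[t] = F[K]                       # forward Rosenblatt value
        Bprev = Bnew[:p]                  # only lags 0..p-1 are needed next step
    return z

rng = np.random.default_rng(3)
u = rng.uniform(size=500)                 # placeholder for pseudo-observations
z = residuals([gaussian_h(0.4), gaussian_h(0.2)], u)
# under a correctly specified model, the values Phi^{-1}(z_t) would be
# approximately iid N(0,1), which is what the QQ-plots assess
print(z[:5])
```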
The plot of the kpacf in the top-right panel of Figure 4 requires further comment. It attempts to show how well the kpacf of the fitted copula sequence matches the empirical Kendall partial autocorrelations of the data: the continuous line is the kpacf of the Gumbel/Gaussian copula sequence used in the best-fitting vine copula model of $(u_1, \ldots, u_n)$, while the vertical bars show the empirical Kendall partial autocorrelations of the data at each lag $k$. The empirical method should really be considered "semi-empirical", however, since it uses the fitted parametric copulas at lags $1, \ldots, k-1$ to construct the necessary data for lag $k$. The data used to estimate the empirical lag-$k$ rank correlation are the points
$$\{(\widehat{R}_{k-1}^{(2)}(\boldsymbol{u}_{[j-k+1,j-1]}, u_{j-k}),\; \widehat{R}_{k-1}(\boldsymbol{u}_{[j-k+1,j-1]}, u_j)),\quad j = k+1, \ldots, n\},$$
where $\widehat{R}_k$ and $\widehat{R}_k^{(2)}$ denote the estimated forward and backward Rosenblatt functions; it may be noted that these data are precisely the points at which the copula density $c_k$ is evaluated when the model likelihood based on $c_{(n)}$ in Eq. (5) is maximized.

The kpacf shows positive dependence between inflation rates at the first five lags. Moreover, the choice of Gumbel copula suggests asymmetry and upper tail dependence in the bivariate distribution of inflation rates at time points that are close together; in other words, large values of inflation are particularly strongly associated with large values of inflation in previous quarters, while low values are more weakly associated.

We next consider composite models for the original data $(x_1, \ldots, x_n)$ consisting of a marginal distribution and an s-vine copula process. The baseline model is simply a Gaussian process with Gaussian copula sequence and Gaussian marginal distribution. We experimented with a number of alternatives to the normal margin and obtained good results with the skewed Student distribution from the family of skewed distributions proposed by Fernandez and Steel [18]. Table 1 contains results for models that combine the Gaussian and Gumbel copula sequences with the skewed Student margin; the improvement obtained by using a Gumbel sequence with a skewed Student margin is clear from the AIC and BIC values. The QQ-plots of the data against the fitted marginal distributions in the bottom row of Figure 4 also show the superiority of the skewed Student to the Gaussian distribution for this dataset.

The fitting method used for the composite model results in Table 1 is the two-stage IFM (inference functions for margins) method [25], in which the margin is estimated first, the data are transformed to approximately uniform observations using the marginal model, and the copula process is estimated by maximum likelihood in a second step. The estimated degree-of-freedom and skewness parameters of the skewed Student t marginal distribution are $\nu = 3.19$ and $\gamma = 1.47$, respectively. These values suggest that inflation rates (changes in log CPI) follow a heavy-tailed, infinite-kurtosis distribution (tail index = 3.19) that is skewed to the right.
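The marginal step of the two-stage IFM fit can be sketched as follows, assuming the Fernandez and Steel skewing construction applied to a location-scale Student t density; the function names are our own and the array `x` is a placeholder for the log-differenced CPI series.

```python
import numpy as np
from scipy import optimize, stats

def skew_t_logpdf(x, mu, sigma, nu, gamma):
    """Fernandez-Steel skew-t log-density: with z = (x - mu)/sigma, the density is
    (2/(gamma + 1/gamma)) * t_nu(z/gamma) for z >= 0 and t_nu(gamma*z) for z < 0."""
    z = (x - mu) / sigma
    zs = np.where(z >= 0, z / gamma, z * gamma)     # skewing transformation
    c = np.log(2.0 / (gamma + 1.0 / gamma))
    return c + stats.t.logpdf(zs, df=nu) - np.log(sigma)

def skew_t_cdf(x, mu, sigma, nu, gamma):
    """Corresponding cdf, used for the PIT in the second stage of IFM."""
    z = (x - mu) / sigma
    g2 = gamma**2
    lower = (2.0 / (g2 + 1.0)) * stats.t.cdf(gamma * z, df=nu)
    upper = 1.0 / (g2 + 1.0) + (2.0 * g2 / (g2 + 1.0)) * (stats.t.cdf(z / gamma, df=nu) - 0.5)
    return np.where(z < 0, lower, upper)

# Placeholder data; in practice x is the log-differenced CPI series (n = 244).
rng = np.random.default_rng(0)
x = rng.standard_t(df=4, size=244) * 0.005 + 0.008

# Stage 1: fit the margin by maximum likelihood (log-parameterized for positivity).
nll = lambda p: -np.sum(skew_t_logpdf(x, p[0], np.exp(p[1]), np.exp(p[2]), np.exp(p[3])))
p0 = np.array([x.mean(), np.log(x.std()), np.log(5.0), 0.0])
fit = optimize.minimize(nll, p0, method="Nelder-Mead")
mu, sigma, nu, gamma = fit.x[0], *np.exp(fit.x[1:])
print(f"nu = {nu:.2f}, gamma = {gamma:.2f}")

# Stage 2: PIT to approximately uniform data for the copula-process fit.
u = skew_t_cdf(x, mu, sigma, nu, gamma)
```

The second stage then proceeds exactly as in the pseudo-maximum-likelihood fit described above, but applied to these parametric PIT values rather than to the empirical pseudo-observations.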
6 Conclusion

S-vine processes provide a class of tractable stationary models that can capture non-linear and non-Gaussian serial dependence as well as any continuous marginal behaviour. By defining models of infinite order and adopting the approach based on the Kendall partial autocorrelation function (kpacf), we obtain a very natural generalization of classical Gaussian processes such as Gaussian ARMA or ARFIMA.

The models are straightforward to apply, and the parsimonious parametrization based on the kpacf makes maximum likelihood inference feasible. Analogues of many of the standard tools for time series analysis in the time domain are available, including estimation methods for the kpacf and residual plots that shed light on the quality of the fit of the copula model. By separating the issues of serial dependence and marginal modelling, we can obtain bespoke descriptions of both aspects that avoid the compromises of the more "off-the-shelf" classical approach. The example of Section 5.3 indicates the kind of gains that can be obtained; it seems likely that many empirical applications of classical ARMA models could be substantially improved by the use of models in the general s-vine class. In combination with v-transforms [33], s-vine models could also be used to model data showing stochastic volatility, following the approach developed by Bladt and McNeil [9].

To increase the practical options for model building, it would be of interest to consider how copulas with more than one parameter, such as the t copula or the symmetrized Joe-Clayton copula [37], could be incorporated into the methodology. The parameters would have to be allowed to change in a smooth, parsimonious manner such that the partial copula sequence $(C_k)_{k \in \mathbb{N}}$ converged to the independence copula while the Kendall correlations $(\tau_k)_{k \in \mathbb{N}}$ followed the chosen form of kpacf at every lag $k$. This is a topic for further research.

The approach we have adopted should also be of interest to theoreticians, as a number of challenging open questions remain. While we have proposed definitions of causality and invertibility for general s-vine processes, we currently lack a mathematical methodology for checking the convergence of causal and invertible representations for sequences of non-Gaussian pair copulas. There are also very interesting questions to address concerning the relationship between the partial copula sequence $(C_k)_{k \in \mathbb{N}}$, the rate of convergence of causal representations and the rate of ergodic mixing of the resulting processes. The example of Figure 1 indicates that, even for a finite-order process, some very extreme models can be constructed that mix extremely slowly. Moreover, Example 5 suggests that non-Gaussian copula sequences serve to further elongate memory in long-memory processes, which raises questions about the effect of the tail-dependence properties of the copula sequence on rates of convergence and length of memory.

It would also be of interest to confirm our conjecture that the pragmatic approach adopted in Section 5.2, in which the kpacf of the (infinite) partial copula sequence $(C_k)_{k \in \mathbb{N}}$ is matched to that of a stationary and ergodic Gaussian process, always yields a stationary and ergodic s-vine model, regardless of the choice of copula sequence. For practical applications, however, the problem can be obviated by truncating the copula sequence at some large finite lag $k$, so that we are dealing with an ergodic Markov chain, as shown in Section 3.

Dependence Modeling, De Gruyter. Published: Jan 1, 2022.

Keywords: time series; vine copulas; Gaussian processes; ARMA processes; ARFIMA processes; 62M10; 62M05; 62H05; 60G10; 60G15; 60G22
