Real-time data assimilation and control on mechanical systems under uncertainties

Correspondence: ludovic.chamoin@ens-paris-saclay.fr
Université Paris-Saclay, ENS Paris-Saclay, LMT, 4 Avenue des Sciences, 91190 Gif-sur-Yvette, France
Full list of author information is available at the end of the article

Abstract

This research work deals with the implementation of so-called Dynamic Data-Driven Application Systems (DDDAS) in structural mechanics activities. It aims at designing a real-time numerical feedback loop between a physical system of interest and its numerical simulator, so that (i) the simulation model is dynamically updated from sequential and in situ observations on the system; (ii) the system is appropriately driven and controlled in service using predictions given by the simulator. In order to build such a feedback loop and take various uncertainties into account, a suitable stochastic framework is considered for both data assimilation and control, with the propagation of these uncertainties from model updating up to command synthesis by using a specific and attractive sampling technique. Furthermore, reduced order modeling based on the Proper Generalized Decomposition (PGD) technique is used all along the process in order to reach the real-time constraint. This permits fast multi-query evaluations and predictions, by means of the parametrized physics-based model, in the online phase of the feedback loop. The control of a fusion welding process under various scenarios is considered to illustrate the proposed methodology and to assess the performance of the associated numerical architecture.

Keywords: Data assimilation, Real-time control, Model reduction, Uncertainty quantification and propagation, Bayesian inference, Proper generalized decomposition

Introduction

The continuous interaction between physical systems and high-fidelity simulation tools (i.e. virtual twins) has become a key enabler for industry as well as an appealing research topic over the last decade (see for instance [11]). This is at the heart of the Dynamic Data Driven Application System (DDDAS) concept [12], in which a simulation model is used to make decisions and drive an evolving physical system, and is at the same time fed by data collected on this system in order to update parameters and ensure continual consistency between numerical predictions and physical reality. In other words, the DDDAS concept aims at building a numerical feedback loop between the physical system and its simulator, with on-the-fly data assimilation and control (Fig. 1).

[Fig. 1 — Scheme of the DDDAS feedback control loop]
Nevertheless, there are two main numerical challenges in the implementation of such a loop for structural mechanics applications. On the one hand, the dialog between numerical models and physical systems is in practice subject to several sources of uncertainty, including measurement noise, modeling errors, or variabilities in the system properties and environment. On the other hand, a relevant feedback loop requires effective numerical methods such that real-time computations and interactions can be performed.

The paper presents a general strategy, addressing the two previous challenges, for the design of an effective numerical feedback loop between a physical system and its simulator. It considers a stochastic framework for sequential data assimilation and control, which uses Bayesian inference for model updating from in situ data as well as uncertainty propagation to make predictions from the model and synthesize control laws. Such a framework considers the parameters to be inferred as random variables, and it naturally takes all uncertainty sources into account [2,6,17,22,30,31]. The proposed strategy also relies on two ingredients which make it possible to achieve the real-time constraint. First, Transport Map sampling [13] is used as an alternative to Markov Chain Monte-Carlo (MCMC) [14,25] or Sequential Monte-Carlo [1] techniques in order to perform fast Bayesian inference, with convenient sampling of multi-dimensional posterior densities and associated adaptive strategies. The Transport Map technique consists in building a deterministic polynomial mapping between the posterior probability measure of interest and a simple reference measure (e.g. a Gaussian distribution) [21,23,29]. It thus permits an automatic exploration, from the constructed mapping, of the multi-parametric stochastic space in order to effectively derive useful information such as means, standard deviations, maxima, or marginals on model parameters. Such pieces of information can then be propagated to model outputs in order to quantify uncertainty, synthesize the appropriate command in a stochastic context, and thus make safe decisions on the evolving system. Second, model reduction by means of the Proper Generalized Decomposition (PGD) technique [9,10] is introduced in order to reduce the computational effort for the evaluation of multi-parametric numerical models, and therefore further speed up the overall process. The PGD approximation builds a modal representation of the multi-parametric model solution with separated variables and explicit dependency on model parameters. This representation is computed in an offline phase with controlled accuracy [8] before being evaluated at low cost in the online phase. It is shown in the paper that the PGD technique (i) facilitates the computation of the likelihood function involved in the Bayesian inference framework [3,26]; (ii) can be effectively coupled with Transport Map sampling for the calculation of the maps, as it directly provides information on solution derivatives [27,28]; (iii) is a particularly effective tool for performing uncertainty propagation through the forward model as well as command law synthesis. A particular focus is made here on the latter point dealing with effective command in a stochastic framework; this has been investigated in very few works in the literature, even though it is a major aspect of the DDDAS procedure.
The dynamic command synthesis we propose, using the advantages of Transport Map sampling and PGD model reduction, is the main novelty of the paper. It permits the construction and implementation of the full DDDAS feedback loop.

The constructed feedback loop is here illustrated in the context of a fusion welding process. It involves a simplified welding model introduced in [16] (and described in Fig. 2), which is supposed to be an accurate enough representation of the physical phenomena of interest. In this two-dimensional model, two metal plates are welded by a heat source whose center is moving along the geometry. The problem unknown is the dimensionless temperature field T in the space domain Ω and over the time domain I; T = 0 when the temperature is equal to the room temperature, and T = 1 when the temperature is equal to the melting temperature of the material. On the right-hand side boundary Γ (see Fig. 2), the temperature is supposed to be equal to the room temperature (T = 0). The other boundaries are supposed to be insulated. To solve the problem, the coordinate system is made to move at the same speed as the heat source. Thus, the model problem is described by the following heat equation with convective term:

$\frac{\partial T}{\partial t} + \mathbf{v}(Pe)\cdot\nabla T - \kappa\,\Delta T = s(\sigma)$   (1)

where $\mathbf{v} = [Pe;\,0]$ is the advection velocity, $Pe = v\,L_c/\kappa$ is the Peclet number ($L_c$ being the characteristic length of the problem), and κ is the thermal diffusivity of the material.

[Fig. 2 — Illustration of the considered welding model]

The volume heat source term s is defined by the following Gaussian distribution in the space domain:

$s(x,y;\sigma) = \frac{u}{2\pi\sigma^2}\,\exp\left(-\frac{(x-x_c)^2+(y-y_c)^2}{2\sigma^2}\right)$   (2)

where the coordinates $(x_c, y_c)$ represent the location of the heat source center, u is the magnitude, and σ is a scalar parameter that drives the source expansion.
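As a simple illustration of the source term (2), the following minimal Python sketch evaluates it on a small grid of points; the function name, the grid, and the default center location are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def heat_source(x, y, sigma, u=1.0, xc=0.0, yc=0.0):
    """Gaussian volume heat source of Eq. (2), centered at (xc, yc).

    u drives the magnitude (the control variable), sigma the spatial expansion.
    """
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    return u / (2.0 * np.pi * sigma ** 2) * np.exp(-r2 / (2.0 * sigma ** 2))

# Example: evaluate the source on a small grid around the source center
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, 5), np.linspace(-1.0, 1.0, 5))
print(heat_source(xs, ys, sigma=0.4))
```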
From the integration of (1) over Ω, the weak formulation in space of the problem is of the form: find T ∈ 𝒯 such that

$a(T,T^*) = l(T^*) \quad \forall\, T^* \in \mathcal{T}$   (3)

with:

$a(T,T^*) = \int_\Omega \left[\left(\frac{\partial T}{\partial t} + \mathbf{v}\cdot\nabla T\right)T^* + \kappa\,\nabla T\cdot\nabla T^*\right]\mathrm{d}\Omega; \qquad l(T^*) = \int_\Omega s\,T^*\,\mathrm{d}\Omega$   (4)

The functional space $\mathcal{T}$ is the Bochner space $L^2(I;\mathcal{S}) \simeq \mathcal{S}\otimes\mathcal{I}$, with $\mathcal{S} = H^1_{0|\Gamma}(\Omega)$ the Sobolev space of $H^1$ functions on Ω satisfying homogeneous Dirichlet boundary conditions on Γ, and $\mathcal{I} = L^2(I)$ the Lebesgue space.

[Fig. 3 — Illustration of the two model parameters (left and center), and time evolution of the command (right)]

The model parameters to be updated from indirect noisy data are p = {σ, Pe}, which are respectively related to the spatial spreading and the speed of the heat source, as illustrated in Fig. 3. They may vary over the time domain. Data consist in the measurement of temperatures T_1 and T_2 at two points in Ω (see Fig. 2). From these data assimilated sequentially in time, the purpose is twofold: (i) to dynamically update the model parameters p; (ii) to control from the updated model the temperature T_3 at another point in Ω, which is the output of interest assumed to be unreachable by direct measurement, and to perform corrections on the welding process if necessary. The control variable is the magnitude u of the heat source, which is supposed to be piecewise constant in time as illustrated in Fig. 3.

The paper outline is as follows: in "Reduced order modeling using PGD" section, the PGD model reduction applied to the above reference model is detailed. It is then employed in association with Bayesian inference and Transport Map sampling for fast data assimilation and model updating in "Real-time data assimilation with Bayesian inference and Transport Map sampling" section. All these tools are beneficially reused for on-the-fly command synthesis and system control in "Real-time control" section. Several numerical experiments are reported in "Results and discussion" section, which show the interest and performance of the proposed feedback loop by considering various welding scenarios. Sequential data assimilation, uncertainty propagation up to the output of interest, and real-time control of the welding process are illustrated for each of these scenarios. Eventually, conclusions and prospects are drawn in "Conclusions" section.

Methods

Reduced order modeling using PGD

Due to the increasing number of high-dimensional approximation problems, which naturally arise in many situations such as optimization or uncertainty quantification, model reduction techniques have been the object of growing interest and are now a mature technology [19,24]. Tensor methods are among the most prominent tools for the construction of model reduction techniques as, in many practical applications, the approximation of high-dimensional solutions of Partial Differential Equations (PDEs) is made computationally tractable by using low-rank tensor formats. In particular, an appealing technique based on a canonical format and referred to as Proper Generalized Decomposition (PGD) was introduced and successfully used in many applications of computational mechanics dealing with multiparametric problems [5,7,9,10,15,18,20]. Contrary to POD, the PGD approximation does not require any knowledge on the solution, and it operates in an iterative strategy in which basis functions (or modes) are computed from scratch by solving eigenvalue problems.

In the classical PGD framework, the reduced model is built directly from the weak formulation (here (3)) of the considered PDE, integrated over the parametric space. The approximate reduced solution $T^m$ at order m is then searched in a separated form with respect to space, time, and model parameters $\mathbf{p} = \{p_1, p_2, \dots, p_d\}$ seen as extra-coordinates [10]:

$T^m(\mathbf{x},t,\mathbf{p}) = \sum_{k=1}^{m}\Lambda_k(\mathbf{x})\,\lambda_k(t)\prod_{i=1}^{d}\alpha_k^i(p_i)$   (5)

The computation of the PGD modal representation is performed in an offline phase by using an iterative method [10], before being evaluated in an online phase at any space-time location and any parameter value from products and sums of one-parameter functions. For the multi-parametric problem of interest, the construction of the PGD solution is detailed in [26]. It reads:

$T^m(\mathbf{x},t,\sigma,Pe) = \sum_{k=1}^{m}\Lambda_k(\mathbf{x})\,\lambda_k(t)\,\alpha_k^1(\sigma)\,\alpha_k^2(Pe)$   (6)

Considering a heat source term with u = 1, the first four PGD modes are represented in Fig. 4 (spatial modes), Fig. 5 (parameter modes), and Fig. 6 (time modes).

[Fig. 4 — First four spatial modes of the PGD solution]
[Fig. 5 — First four parametric modes of the PGD solution: (a) modes in σ; (b) modes in Pe]
[Fig. 6 — First four time modes of the PGD solution]
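To make the online evaluation of the separated representation (6) concrete, the sketch below evaluates a PGD-like surrogate as a sum of products of precomputed one-variable modes. The modes here are random placeholders standing in for the output of the offline stage, and all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n_space, n_time = 4, 200, 500          # number of modes, space dofs, time steps

# Placeholder separated modes (produced by the offline stage in the actual method):
Lambda = rng.normal(size=(m, n_space))    # spatial modes Lambda_k(x)
lam = rng.normal(size=(m, n_time))        # time modes lambda_k(t)
# Parametric modes alpha_k^1(sigma), alpha_k^2(Pe), stored here as polynomials
alpha_sigma = [np.polynomial.Chebyshev(rng.normal(size=4), domain=[0.3, 0.5]) for _ in range(m)]
alpha_Pe = [np.polynomial.Chebyshev(rng.normal(size=4), domain=[-70.0, -50.0]) for _ in range(m)]

def pgd_eval(ix, it, sigma, Pe):
    """Online evaluation of the separated form (6): sum over modes of products of 1D functions."""
    return sum(Lambda[k, ix] * lam[k, it] * alpha_sigma[k](sigma) * alpha_Pe[k](Pe)
               for k in range(m))

print(pgd_eval(ix=10, it=250, sigma=0.4, Pe=-60.0))
```

Each call only involves m products of scalar quantities, which is what makes the multi-query evaluations of the online phase inexpensive.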
Real-time data assimilation with Bayesian inference and Transport Map sampling

Basics on Bayesian inference

The purpose of Bayesian inference is to characterize the posterior probability density function (pdf) $\pi(\mathbf{p}\,|\,\mathbf{d}^{\mathrm{obs}})$ of some model parameters p given some indirect and noisy observations $\mathbf{d}^{\mathrm{obs}}$. In this context, the Bayesian formulation of the inverse problem reads [17]:

$\pi(\mathbf{p}\,|\,\mathbf{d}^{\mathrm{obs}}) = \frac{1}{C}\,\pi(\mathbf{d}^{\mathrm{obs}}\,|\,\mathbf{p})\,\pi_0(\mathbf{p})$   (7)

where $\pi_0(\mathbf{p})$ is the prior pdf, related to the a priori knowledge on the parameters before the consideration of data $\mathbf{d}^{\mathrm{obs}}$, $\pi(\mathbf{d}^{\mathrm{obs}}\,|\,\mathbf{p})$ is the likelihood function that corresponds to the probability for the model $\mathcal{M}$ to predict the observations $\mathbf{d}^{\mathrm{obs}}$ given values of the parameters p, and $C = \int\pi(\mathbf{d}^{\mathrm{obs}}\,|\,\mathbf{p})\,\pi_0(\mathbf{p})\,\mathrm{d}\mathbf{p}$ is a normalization constant. No assumption is made on the probability densities (prior, measurement noise) or on the linearity of the model. We consider here the classical case of an additive measurement noise with density $\pi_{\mathrm{meas}}$. We also consider that there is no modeling error, even though such an error source could easily be taken into account in the Bayesian inference framework (provided quantitative information on this error source is available). The likelihood function thus reads:

$\pi(\mathbf{d}^{\mathrm{obs}}\,|\,\mathbf{p}) = \pi_{\mathrm{meas}}\left(\mathbf{d}^{\mathrm{obs}} - \mathcal{M}(\mathbf{p})\right)$   (8)

Furthermore, when considering sequential assimilation of measurements $\mathbf{d}^{\mathrm{obs}}_{t_i}$ at time steps $t_i$, $i \in \{1,\dots,N_t\}$, the Bayesian formulation is such that the prior at time $t_i$ corresponds to the posterior at time $t_{i-1}$:

$\pi(\mathbf{p}\,|\,\mathbf{d}^{\mathrm{obs}}_{t_1},\dots,\mathbf{d}^{\mathrm{obs}}_{t_i}) \propto \left(\prod_{j=1}^{i}\pi_j(\mathbf{d}^{\mathrm{obs}}_{t_j}\,|\,\mathbf{p})\right)\cdot\pi_0(\mathbf{p}); \qquad \pi_j(\mathbf{d}^{\mathrm{obs}}_{t_j}\,|\,\mathbf{p}) = \pi_{\mathrm{meas}}\left(\mathbf{d}^{\mathrm{obs}}_{t_j} - \mathcal{M}(\mathbf{p},t_j)\right)$   (9)

Once the PGD approximation $T^m(\mathbf{x},t,\mathbf{p})$ is built (see "Reduced order modeling using PGD" section), an explicit formulation of the non-normalized posterior density can be derived. Indeed, owing to the observation operator $\mathcal{O}$, the output $\mathbf{d}^m(\mathbf{p},t) = \mathcal{O}\left(T^m(\mathbf{x},t,\mathbf{p})\right)$ can be easily computed for any value of the parameter set p. The non-normalized posterior density $\tilde{\pi}$ thus reads:

$\tilde{\pi}\left(\mathbf{p}\,|\,\mathbf{d}^{\mathrm{obs}}_{t_1},\dots,\mathbf{d}^{\mathrm{obs}}_{t_i}\right) = \left[\prod_{j=1}^{i}\pi_{\mathrm{meas}}\left(\mathbf{d}^{\mathrm{obs}}_{t_j} - \mathbf{d}^m(\mathbf{p},t_j)\right)\right]\cdot\pi_0(\mathbf{p})$   (10)
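A minimal sketch of the non-normalized posterior (10) is given below, assuming an additive zero-mean Gaussian measurement noise; the callable model_output stands in for the PGD prediction of the sensor outputs, and all names and numerical values are illustrative.

```python
import numpy as np

def log_unnorm_posterior(p, data, times, model_output, noise_std, log_prior):
    """Non-normalized log-posterior (10) for measurements assimilated up to time t_i.

    p            : array of parameters, e.g. (sigma, Pe)
    data         : list of observation vectors d_obs at times t_1..t_i
    times        : assimilation times t_1..t_i
    model_output : callable (p, t) -> predicted sensor outputs (PGD surrogate in practice)
    noise_std    : standard deviations of the additive Gaussian measurement noise
    log_prior    : callable p -> log of the prior density pi_0
    """
    logp = log_prior(p)
    for d_obs, t in zip(data, times):
        residual = (np.asarray(d_obs) - model_output(p, t)) / noise_std
        logp += -0.5 * np.dot(residual, residual)   # Gaussian log-likelihood, up to a constant
    return logp

# Toy usage with a dummy linear "model" and a Gaussian prior (illustrative values only)
model = lambda p, t: np.array([p[0] * t, p[1] * 0.01 * t])
log_prior = lambda p: -0.5 * np.sum(((p - np.array([0.4, -60.0])) / np.array([0.055, 2.65])) ** 2)
print(log_unnorm_posterior(np.array([0.4, -60.0]),
                           data=[[0.4, -0.6], [0.8, -1.2]], times=[1.0, 2.0],
                           model_output=model, noise_std=np.array([0.02, 0.012]),
                           log_prior=log_prior))
```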
From the expression of $\pi(\mathbf{p}\,|\,\mathbf{d}^{\mathrm{obs}})$ (or $\pi(\mathbf{p}\,|\,\mathbf{d}^{\mathrm{obs}}_{t_1},\dots,\mathbf{d}^{\mathrm{obs}}_{t_i})$), stochastic features such as means, variances, or first-order marginals on parameters may be computed. These quantities are based on large-dimension integrals, and classical Monte-Carlo integration-based techniques such as Markov Chain Monte-Carlo (MCMC) require in practice to sample the posterior density a large number of times. This multi-query procedure is very time consuming and incompatible with fast computations; we thus deal with an alternative approach in the following section.

Transport Map sampling

The principle of the Transport Map strategy is to build a deterministic mapping M between a reference probability measure $\nu_\rho$ and a target measure $\nu_\pi$. The purpose is to find the change of variables such that:

$\int g\,\mathrm{d}\nu_\pi = \int g\circ M\,\mathrm{d}\nu_\rho$   (11)

In this framework, samples drawn according to the reference density are transported to become samples drawn according to the target density (Fig. 7). For the considered inference methodology, the target density corresponds to the posterior density $\pi(\mathbf{p}\,|\,\mathbf{d}^{\mathrm{obs}})$ derived from the Bayesian formulation, while a standard normal Gaussian density may be chosen as the reference density; for more details, we refer to [29] with effective computation tools (see http://transportmaps.mit.edu).

[Fig. 7 — Illustration of the Transport Map principle for sampling a target density]

From the reference density ρ, the purpose is thus to build the map $M:\mathbb{R}^d\rightarrow\mathbb{R}^d$ such that:

$\nu_\pi \approx M_\sharp\nu_\rho = \rho\circ M^{-1}\,\left|\det\nabla M^{-1}\right|$   (12)

where ♯ denotes the push-forward operator. Once the map M is found, it can be used for sampling purposes by transporting samples drawn from ρ to samples drawn from π. Similarly, a Gaussian quadrature $(\omega_i,\mathbf{p}_i)_{i=1}^{N}$ for ρ can be transported to a quadrature $(\omega_i,M(\mathbf{p}_i))_{i=1}^{N}$ for π. Such a (deterministic) numerical integration with a quadrature rule from the reference Gaussian density is therefore a technique of choice used in the present work for the calculation of statistics, marginals, or any other information from the posterior pdf.

Maps M are searched among Knothe–Rosenblatt rearrangements (i.e. lower triangular and monotone maps). This particular choice of structure is motivated by the following properties (see [4,21,29] for all details):
• Uniqueness and existence under mild conditions on $\nu_\pi$ and $\nu_\rho$;
• Easily invertible map and Jacobian ∇M simple to evaluate;
• Optimality regarding the weighted quadratic cost;
• Monotonicity essentially one-dimensional ($\partial_{p_k} M^k > 0$).

The maps M are therefore parametrized as:

$M(\mathbf{p}) = \begin{bmatrix} M^1(a_c^1,a_e^1,p_1)\\ M^2(a_c^2,a_e^2,p_1,p_2)\\ \vdots\\ M^d(a_c^d,a_e^d,p_1,p_2,\dots,p_d)\end{bmatrix}$   (13)

with $M^k(a_c^k,a_e^k,\mathbf{p}) = \Phi_c(\mathbf{p})\,a_c^k + \int_0^{p_k}\left(\Phi_e(p_1,\dots,p_{k-1},\theta)\,a_e^k\right)^2\mathrm{d}\theta$. The functions $\Phi_c$ and $\Phi_e$ are chosen as Hermite polynomials with coefficients $a_c^k$ and $a_e^k$. This integrated-squared parametrization is a classical choice that automatically ensures the monotonicity of the map, and using Hermite polynomials leads to an integration that can be performed analytically. With this parametrization, the optimal map M is found by minimizing the following Kullback–Leibler (K–L) divergence:

$D_{KL}\left(M_\sharp\nu_\rho\,\|\,\nu_\pi\right) = \mathbb{E}_\rho\!\left[\log\frac{\nu_\rho}{M^{-1}_\sharp\nu_\pi}\right] = \int\left[\log(\rho(\mathbf{p})) - \log([\pi\circ M](\mathbf{p})) - \log\left(\left|\det\nabla M(\mathbf{p})\right|\right)\right]\rho(\mathbf{p})\,\mathrm{d}\mathbf{p}$   (14)

which quantifies the difference between the two distributions $\nu_\pi$ and $M_\sharp\nu_\rho$. Still using a Gaussian quadrature rule $(\omega_i,\mathbf{p}_i)_{i=1}^{N}$ over the reference probability space associated with ρ, the minimization problem reads:

$\min_{a_c^{1,\dots,d},\,a_e^{1,\dots,d}}\ \sum_{i=1}^{N}\omega_i\left[-\log\left(\tilde{\pi}\circ M(a_c^{1,\dots,d},a_e^{1,\dots,d},\mathbf{p}_i)\right) - \log\left(\left|\det\nabla M(a_c^{1,\dots,d},a_e^{1,\dots,d},\mathbf{p}_i)\right|\right)\right]$   (15)

where $\tilde{\pi}$ is the non-normalized version of the target density. This minimization problem is fully deterministic and may be solved using classical algorithms (such as BFGS) using gradient or Hessian information on the density π(p).

It is important to notice that the reduced PGD representation (6) of the solution is highly beneficial to solve (15). Partial derivatives of the model with respect to the parameters p can indeed be easily computed as:

$\frac{\partial T^m}{\partial p_j}(\mathbf{x},t,\mathbf{p}) = \sum_{k=1}^{m}\Lambda_k(\mathbf{x})\,\lambda_k(t)\,\frac{\partial \alpha_k^j}{\partial p_j}(p_j)\prod_{i=1,\ i\neq j}^{d}\alpha_k^i(p_i)$   (16)

and stored in the offline phase. Thanks to the separated representation of the PGD, cross-derivatives are computed by combination of univariate mode derivatives. As a result, the use of PGD also speeds up the computation of transport maps.
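To illustrate the minimization (15) in a self-contained way, the sketch below fits a first-order (affine, lower-triangular) map to a two-dimensional non-Gaussian toy target by minimizing the quadrature-discretized K–L objective with a BFGS solver. The affine parametrization replaces the Hermite-polynomial parametrization (13) for brevity, and the banana-shaped target stands in for the actual posterior; everything here is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

# Tensorized Gauss-Hermite quadrature (probabilists' version) for the 2D standard normal
nodes, w = np.polynomial.hermite_e.hermegauss(10)
w = w / np.sqrt(2.0 * np.pi)                           # normalize to expectations under N(0,1)
X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")
W = np.outer(w, w).ravel()                             # quadrature weights omega_i
P = np.column_stack([X1.ravel(), X2.ravel()])          # quadrature nodes p_i for rho

def log_target(p):
    """Unnormalized log-density of a toy 'banana' target (stands in for the posterior)."""
    x, y = p[:, 0], p[:, 1]
    return -0.5 * (x ** 2 + 4.0 * (y - 0.5 * x ** 2) ** 2)

def unpack(a):
    """Affine lower-triangular map M(p) = c + L p with positive diagonal."""
    c = a[0:2]
    L = np.array([[np.exp(a[2]), 0.0],
                  [a[3], np.exp(a[4])]])
    return c, L

def kl_objective(a):
    """Quadrature-discretized objective of Eq. (15)."""
    c, L = unpack(a)
    Mp = c + P @ L.T                                   # transported quadrature nodes M(p_i)
    logdet = np.log(L[0, 0]) + np.log(L[1, 1])         # |det grad M| is constant for affine maps
    return np.sum(W * (-log_target(Mp) - logdet))

res = minimize(kl_objective, x0=np.zeros(5), method="BFGS")
c, L = unpack(res.x)
print("map offset:", c, "\nmap matrix:\n", L)
```

In the actual procedure, the gradient of this objective would be assembled from the PGD derivatives (16) rather than approximated numerically by the optimizer.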
The quality of the approximation $M_\sharp\nu_\rho$ of the measure $\nu_\pi$ can be estimated by the convergence criterion $\epsilon_\sigma$ (variance diagnostic) defined in [29] as:

$\epsilon_\sigma = \frac{1}{2}\,\mathrm{Var}_\rho\!\left[\log\frac{\nu_\rho}{M^{-1}_\sharp\nu_\pi}\right]$   (17)

The numerical cost for computing this criterion is very low, as the integration is performed using the reference density and with the same quadrature rule as the one used in the computation of the K–L divergence. Therefore, an adaptive strategy regarding the order of the map can be used to derive an automatic algorithm that guarantees the quality of the approximation $M_\sharp\nu_\rho$.
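The following self-contained sketch evaluates the variance diagnostic (17) for a one-dimensional Gaussian target and an affine candidate map, using a Gauss–Hermite quadrature under the reference density; the diagnostic vanishes when the map is exact and becomes positive otherwise. The target and the map are toy assumptions chosen only to illustrate the computation.

```python
import numpy as np

# Probabilists' Gauss-Hermite quadrature for the 1D standard normal reference rho
nodes, w = np.polynomial.hermite_e.hermegauss(20)
w = w / np.sqrt(2.0 * np.pi)

def log_target(x, mu=1.0, s=0.5):
    """Unnormalized log-density of the target (here a Gaussian N(mu, s^2))."""
    return -0.5 * ((x - mu) / s) ** 2

def variance_diagnostic(a, b):
    """Diagnostic (17) for the affine candidate map M(x) = a + b*x (b > 0).

    epsilon = 0.5 * Var_rho[ log rho(x) - log( pi(M(x)) |M'(x)| ) ],
    which is insensitive to the unknown normalizing constant of pi.
    """
    x = nodes
    log_rho = -0.5 * x ** 2                       # log of rho, up to a constant
    log_pullback = log_target(a + b * x) + np.log(b)
    f = log_rho - log_pullback
    mean_f = np.sum(w * f)
    return 0.5 * np.sum(w * (f - mean_f) ** 2)

print(variance_diagnostic(1.0, 0.5))   # exact map   -> ~0
print(variance_diagnostic(0.8, 0.7))   # inexact map -> positive value
```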
In the case of sequential inference, the Transport Map method exploits the Markov structure of the posterior density (9). Indeed, instead of being fully computed, the map between the reference density ρ and the posterior density at time $t_i$ is obtained by composition of low-order maps (see Fig. 8):

$(M_1\circ\cdots\circ M_i)_\sharp\,\rho(\mathbf{p}) = (\mathcal{M}_i)_\sharp\,\rho(\mathbf{p}) \approx \pi(\mathbf{p}\,|\,\mathbf{d}^{\mathrm{obs}}_{t_1},\dots,\mathbf{d}^{\mathrm{obs}}_{t_i})$   (18)

Therefore, at each assimilation step $t_i$, only the last map component $M_i$ is computed, between ρ and the density $\pi_i^*$ defined as:

$\pi_i^*(\mathbf{p}) = \pi_i(\mathbf{d}^{\mathrm{obs}}_{t_i}\,|\,\mathcal{M}_{i-1}(\mathbf{p}))\cdot\rho(\mathbf{p})$   (19)

which leads to a process with almost constant CPU effort.

[Fig. 8 — Flowchart of sequential inference using transport maps (L is a normalizing linear map)]

Real-time control

In addition to the mean, maximum a posteriori (MAP), or other estimates on model parameters, another major post-processing in the DDDAS feedback loop is the prediction of some quantities of interest from the model, such as the temperature T_3 at the remote point x_3 in the present context (see Fig. 2). Once the parameters p (σ and Pe here) are inferred in a probabilistic way at each assimilation time point t_i (1 ≤ i ≤ N_t), it is indeed valuable to propagate uncertainties a posteriori in order to know their impact on the output of interest T_3 during the process, and consequently to assess the welding quality. As the PGD model gives an explicit prediction of the temperature field over the whole space-time-parameter domain, the output T_3 can be easily computed for all values of the parameter samples and at each physical time point τ_j, j ∈ {1,...,N_τ}. For a given physical time point τ_j, the pdf π(T_3|τ_j | p, t_i) of the value of the temperature T_3, knowing the uncertainties on the parameter set p from data assimilation up to time point t_i, can thus be computed in real-time and used to determine if the plates are correctly welded and with which confidence. In practice, this computation may be performed for all physical time points τ_j ≥ t_i, and the density π(T_3|τ_j | p, t_i) is characterized by a (Gaussian) quadrature rule using the Transport Map method. With this knowledge, a stochastic computation of the predicted temperature evolution can be obtained, and the control of the welding process from the numerical model can be performed.

We detail below the procedure to dynamically determine the value of the control variable u (magnitude of the heat source) in the case where the welding objective is to satisfy a sufficient welding depth. The quantity of interest is then the maximal value of the temperature T_3 obtained at the final time τ*, which is an indicator of the welding quality. When T_3|τ* ≥ 1, the welding depth is supposed to be sufficient. Other welding objectives will be considered in "Results and discussion" section, associated with similar strategies for command synthesis. Due to the stochastic framework which is employed, the quantity of interest is actually a random variable with pdf π(T_3|τ* | p, t_i) evolving at each data assimilation time t_i. The proposed quantity q to monitor is:

$q = \mathrm{mean}(T_{3|\tau^*}) - 3\cdot\mathrm{std}(T_{3|\tau^*}) = Q(T_{3|\tau^*})$   (20)

where Q is an operator defined in the stochastic space. This way, setting the objective $q_{\mathrm{obj}} = 1$ ensures that the temperature T_3|τ* is larger than the melting temperature with a confidence of 99%, while using the minimal energy (no overheating).

Using the PGD solution computed in "Reduced order modeling using PGD" section for a unit magnitude of the heat source (u = 1) and zero initial conditions, the predicted (stochastic) maximal value T_3|τ* for a given constant magnitude u and for fixed pdfs of p reads:

$T_{3|\tau^*} \approx u\cdot T^m(\mathbf{x}_3,\tau^*,\mathbf{p}) = u\cdot\sum_{k=1}^{m}\Lambda_k(\mathbf{x}_3)\,\lambda_k(\tau^*)\prod_{i=1}^{d}\alpha_k^i(p_i)$   (21)

so that $q = u\cdot Q\left(T^m(\mathbf{x}_3,\tau^*,\mathbf{p})\right)$ can be obtained in a straightforward manner. This way, setting the source magnitude u to $u_0 = q_{\mathrm{obj}}/Q\left(T^m(\mathbf{x}_3,\tau^*,\mathbf{p})\right)$ would enable the welding objective to be reached. Nevertheless, in practice the pdfs on the parameters p are updated at each assimilation time point t_i, based on additional experimental information, so that the value of u needs to be tuned with time accordingly. In order to do so, the control variable u(t) is made piecewise constant in time, under the form:

$u(t) = u_0\,H(t) + \sum_{i}\delta u_i\,H(t-t_i)$   (22)

where H is the Heaviside function, $u_0$ is the initial command on the source magnitude (defined from the prior pdfs on p), and $\delta u_i$ is the correction to the current command at each assimilation time $t_i$. Using the linearity of the problem with respect to the loading, a PGD solution associated with the command is made of a series of PGD solutions translated in time; it reads:

$u_0\cdot T^m(\mathbf{x},t,\mathbf{p}) + \sum_{n\geq 1}\delta u_n\cdot T^m(\mathbf{x},t-t_n,\mathbf{p})$   (23)

Therefore, after each assimilation time point $t_i$, the new prediction of the quantity of interest T_3|τ* can be easily obtained from PGD:

$T_{3|\tau^*} \approx u_0\cdot T^m(\mathbf{x}_3,\tau^*,\mathbf{p}) + \sum_{n=1}^{i}\delta u_n\cdot T^m(\mathbf{x}_3,\tau^*-t_n,\mathbf{p}) = T^{\mathrm{pred},[0,i-1]}_{3|\tau^*}(\mathbf{p}) + \delta u_i\cdot T^m(\mathbf{x}_3,\tau^*-t_i,\mathbf{p})$   (24)

where $T^{\mathrm{pred},[0,i-1]}_{3|\tau^*}(\mathbf{p}) = u_0\cdot T^m(\mathbf{x}_3,\tau^*,\mathbf{p}) + \sum_{n=1}^{i-1}\delta u_n\cdot T^m(\mathbf{x}_3,\tau^*-t_n,\mathbf{p})$ is the prediction on T_3|τ* considering the history of the control variable u(t) until time $t_i$. Consequently, the correction $\delta u_i$ is defined such that $Q(T_{3|\tau^*}) = q_{\mathrm{obj}}$, using (24) and considering the current pdfs of the parameter set p (i.e. those obtained after the last Bayesian data assimilation at time $t_i$).
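A possible sketch of this command-correction step is given below, under the assumption that the stochastic prediction of T_3|τ* is represented by weighted samples (e.g. transported quadrature points) and that the PGD evaluations are replaced by placeholder linear surrogates. Since Q in (20) involves a standard deviation, the equation Q(T_3|τ*) = q_obj is solved for δu_i with a scalar root finder; all names and numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def Q(samples, weights):
    """Operator (20): Q(T) = mean(T) - 3*std(T) over the current parameter uncertainty."""
    mean = np.sum(weights * samples)
    var = np.sum(weights * (samples - mean) ** 2)
    return mean - 3.0 * np.sqrt(var)

def command_correction(T_pred, T_unit, weights, q_obj=1.0):
    """Correction delta_u_i such that Q(T_pred + delta_u * T_unit) = q_obj, cf. (24).

    T_pred  : samples of the prediction accounting for the command history up to t_{i-1}
    T_unit  : samples of the unit-magnitude response T^m(x_3, tau* - t_i, p)
    weights : weights of the (transported) quadrature points / samples
    """
    g = lambda du: Q(T_pred + du * T_unit, weights) - q_obj
    return brentq(g, -10.0, 10.0)          # bracket assumed wide enough for this toy setup

# Toy data standing in for transported quadrature points (purely illustrative values)
rng = np.random.default_rng(1)
n = 200
weights = np.full(n, 1.0 / n)
p_samples = rng.normal([0.4, -60.0], [0.02, 1.0], size=(n, 2))     # posterior samples of (sigma, Pe)
T_pred = 0.8 + 0.5 * (p_samples[:, 0] - 0.4) + 0.005 * (p_samples[:, 1] + 60.0)  # placeholder surrogate
T_unit = 0.5 + 0.2 * (p_samples[:, 0] - 0.4)                                      # placeholder surrogate

du = command_correction(T_pred, T_unit, weights)
print("correction delta_u_i =", du, " -> Q =", Q(T_pred + du * T_unit, weights))
```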
Results and discussion

We now implement the DDDAS procedure proposed in "Methods" section on the model problem defined in "Introduction" section. We investigate three test cases involving different welding scenarios, in order to illustrate the flexibility of the approach and show its performance. For all scenarios, two temperature data T_1^obs and T_2^obs are assimilated at each assimilation time point t_i in order to refine the knowledge on the parameters σ and Pe, and further predict the value of the quantity of interest for control purposes. Without loss of generality, we assume that the assimilation time points t_i, i ∈ {1,...,N_t}, coincide with the discretization time points τ_i.

Case 1: control of the welding depth with constant physical process parameters

In this first test case, the control objective is the one mentioned in "Real-time control" section, that is Q(T_3|τ*) = 1, with Q the operator defined in (20) and τ* = 45. This ensures that the temperature T_3 at the final time τ* is larger than the melting temperature with a confidence of 99%, while using the minimal source energy.

We use synthetic data, measurements being simulated using the PGD model with reference parameter values (σ_ref = 0.4, Pe_ref = −60) that are supposed to be constant in time in this section. An independent random normal noise is added with zero mean and standard deviations σ_1^meas = 0.01925 and σ_2^meas = 0.01245. Figure 9 shows the model outputs T_1 and T_2 at each time step as well as the perturbed outputs which provide the measurements used for the considered example, in the case where the control on the system is not activated (i.e. u = 1). When this control is implemented (see "On-the-fly control of the welding process" section), synthetic data are generated by taking into account the applied control law. The goal of the test case is to perform a detailed analysis of the proposed DDDAS approach, in terms of dynamical model updating, uncertainty propagation on the quantity of interest, and on-the-fly command synthesis.

[Fig. 9 — Measurements simulated with the numerical model, when the control is not activated: (a) output T_1; (b) output T_2 – Case 1]

Dynamical updating of model parameters

The prior density on the parameters (σ, Pe) is chosen as the product of two independent Gaussian densities with means (μ_σ = 0.4, μ_Pe = −60) and variances (σ_σ² = 0.003, σ_Pe² = 7). The Transport Map strategy detailed in "Real-time data assimilation with Bayesian inference and Transport Map sampling" section and coupled with PGD is then applied for sequential data assimilation, assuming for the moment a constant magnitude u = 1 of the heat source. The solution of the heat equation (1) is used in its PGD form, and derivatives of the approximate solution T^m with respect to the parameters to be inferred are computed in order to derive the transport maps (i.e. the successive maps M_1, ..., M_{N_t}) effectively. In Table 1 we report the computation times required to build the transport maps at each assimilation step. We compare computation times when different information on derivative orders is provided to the minimization algorithm. With order 0, the minimization problem (15) is solved using a BFGS algorithm where the gradient is computed numerically. With order 1, the minimization is also performed using a BFGS algorithm but with the gradient given explicitly with respect to the PGD mode derivatives. With order 2, a conjugate gradient algorithm is used with an explicit formulation of both gradient and Hessian. The stopping criterion is a tolerance of 10^-3 on the variance diagnostic (17), and the complexity of the maps (order of the Hermite polynomials) is increased until this tolerance is fulfilled.

Table 1  Computation costs of the transport maps depending on the derivative-order information given to the minimization algorithm

  Derivative-order information                           0          1          2
  Number of iterations for step 1                        107        33         10
  Computation time for step 1                            33.85 s    6.18 s     4.60 s
  Average number of iterations for steps {2, ..., 45}    4.2        4.16       4.13
  Average computation time for steps {2, ..., 45}        1.24 s     0.92 s     0.90 s
It appears that the first assimilation step is the most expensive, as the complexity of the transformation between the reference and the first posterior density is large (a 4th-order map is required to fulfill the variance diagnostic criterion). The other transformations computed at the other assimilation time steps are much less expensive (time less than 1 s), as they are built between intermediate posteriors which differ only slightly from one step to the next and can thus be easily represented by a linear (i.e. first-order) transformation. The speed-up for the first iteration is about 5.5 between zeroth-order information and first-order information. Between first-order information and second-order information, the speed-up is about 1.34. For the other time steps, the speed-up is very small as the computed map is very simple. We observe that using gradient and Hessian information to solve the minimization problem related to the computation of the transport maps leads to low computation times.

In Fig. 10, information on the computation cost over the time steps and using both gradient and Hessian information (order 2 information) is provided: Fig. 10a shows the computation time to build each map M_i, i ∈ {1,...,N_t}, while the cost in terms of model evaluations to compute each map is displayed in Fig. 10b. A level-10 Gauss–Hermite quadrature is used. From the second step to the final step, we observe that the computation time slowly increases (Fig. 10a) while the evaluation cost slowly decreases (Fig. 10b). This is due to the fact that the cost of evaluating the composition of maps grows with the number of steps. One way to circumvent this issue would consist in performing regression on the map composition.

[Fig. 10 — Cost of the transport map computations using Hessian information at each assimilation step: (a) computation time for each time step; (b) number of iterations of the minimization algorithm for each time step – Case 1]

Figures 11 and 12 represent the marginals at each time step and for both parameters σ and Pe, respectively. The color map informs on the probability density function values.

[Fig. 11 — Marginals on σ computed with 20,000 samples and kernel density estimation for each assimilation time step – Case 1]
[Fig. 12 — Marginals on Pe computed with 20,000 samples and kernel density estimation for each assimilation time step – Case 1]

During the iterations over the time steps, we observe that the marginals become thinner with larger maximal pdf values, giving more confidence in the parameter estimation. We also observe that the parameter σ is less sensitive than the parameter Pe regarding the inference process. After 45 assimilation time steps, the algorithm gives a maximum estimator [0.394, −60.193] and a mean estimator [0.392, −59.949]. These values are very close to the reference values [0.40, −60] used to simulate the measurements.
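As an indication of how marginals such as those of Figs. 11 and 12 can be obtained, the sketch below draws samples from the standard normal reference, pushes them through a hypothetical (here affine) transport map, and estimates the first-order marginals by kernel density estimation; in the actual procedure the map is the one fitted at the current assimilation step, and the numerical values of the map used here are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Hypothetical fitted map, here affine: M(x) = c + L x (lower triangular, positive diagonal)
c = np.array([0.4, -60.0])
L = np.array([[0.01, 0.0],
              [0.002, 0.8]])

# Draw reference samples from rho = N(0, I) and transport them to posterior samples
x = rng.standard_normal(size=(20000, 2))
p = c + x @ L.T                    # samples of (sigma, Pe) under the approximate posterior

# First-order marginals by kernel density estimation
kde_sigma = gaussian_kde(p[:, 0])
kde_Pe = gaussian_kde(p[:, 1])

sigma_grid = np.linspace(0.35, 0.45, 200)
Pe_grid = np.linspace(-65.0, -55.0, 200)
print("mean estimate:", p.mean(axis=0))
print("sigma marginal peak near:", sigma_grid[np.argmax(kde_sigma(sigma_grid))])
print("Pe marginal peak near:", Pe_grid[np.argmax(kde_Pe(Pe_grid))])
```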
Uncertainty propagation on the quantity of interest

Still assuming a constant magnitude u = 1 of the heat source, uncertainty propagation is performed in real-time in order to predict the evolution of the temperature T_3 (in terms of pdf) in the region of interest. Knowing the uncertainties on the parameters, the goal is to predict at each assimilation time point the evolution of the temperature T_3 during the next physical time steps. This is easily done owing to the PGD model, as the temperature field is then globally and explicitly known over the time domain and with respect to the values of σ and Pe. The computation is performed after each assimilation time point t_i and for all the physical time points τ_j ≥ t_i.

[Fig. 13 — Prediction of the output T_3 for all time steps after the considered assimilation step – Case 1]

Figure 13a shows the prediction result with uncertainty propagation after the first assimilation time point t_1, for all the physical steps τ_j, j > 1. To that end, samples are drawn according to the first posterior π(σ, Pe | T_1^{obs,1}, T_2^{obs,1}) ∝ π_1(T_1^{obs,1}, T_2^{obs,1} | σ, Pe)·π_0(σ, Pe). The slice [τ_0, τ_1] represents the guess on the temperature T_3 from the prior uncertainty knowledge on the parameters (σ, Pe), before the first assimilation step t_1. For τ_j > τ_1, the graph represents the prediction of the output T_3 considering the current knowledge on the parameter uncertainty (i.e. with the assimilation of the first set of measurements T_1^{obs,1} and T_2^{obs,1} alone). The dashed line represents the evolution of the temperature T_3 with the true value of the parameters (σ = 0.4, Pe = −60).

The other graphs (Fig. 13b–d) show the refinement of the prediction with the improvement of the knowledge of the parameter uncertainty. The current measurement assimilation step is indicated by the vertical cursor. On the right of the cursor τ = t_i, the graphs represent the prediction of the temperature T_3 from the model after the assimilation of the measurements T_1^{obs,1:i} and T_2^{obs,1:i}. On the left of the cursor, each slice [t_{j−1}, t_j] (j ≤ i) represents the prediction made at the assimilation time t_j (the predictions of the temperature T_3 for physical time steps anterior to the assimilation time step t_j are not updated).

Figure 14 shows the convergence of the prediction on the quantity of interest T_3|τ* at the steady-state regime (τ* = 45) with respect to the assimilation steps. We observe that, as foreseen, more confidence is given to this output along the real-time data assimilation process.

[Fig. 14 — Prediction of temperature T_3 at physical time step τ = 45 after each assimilation time step t_i, i ∈ {1,...,45} – Case 1]

On-the-fly control of the welding process

The previously described assimilation procedure, performed in situ and in real-time, can be used in the context of welding control. If the stochastic prediction on the quantity of interest T_3|τ* is not satisfactory with regard to the criterion Q(T_3|τ*) = 1, a change in the command u(t) can be implemented as described in "Real-time control" section. This implementation is performed here.

In Fig. 15, we show the time evolution of the pdf associated with the prediction on T_3|t, with or without control. In the case without control, the sharp time evolution is due to changes in the pdfs of σ and Pe along the data assimilation steps. We observe that the quantity Q(T_3|τ*) is much larger than 1, indicating overheating and wasted energy. On the contrary, implementing the control by varying the magnitude u of the heat source enables the criterion Q(T_3|τ*) = 1 to be reached perfectly, and it also speeds up the convergence of the pdf on T_3|t to the target.

[Fig. 15 — Evolution in time of T_3|t without control (left) and with control (right) – Case 1]
In Fig. 16, we indicate the evolution of the command variable along the welding process (in terms of corrections δu_i at each assimilation time point t_i). We again observe that the feedback loop is effective and quickly (i.e. much before the final time τ*) leads to an asymptotic regime in which the command remains almost constant (i.e. δu_i ≈ 0). We also show in Fig. 16 the map orders which are used along the data assimilation process when the control is performed. This indicates that an order 1 map is still usually sufficient, but that a few more maps with higher order are required compared to the case with no control (where only the first map was of order 4).

[Fig. 16 — Evolution of the command variable in terms of incremental corrections (left), and map order required at each assimilation time step in the case of system control (right) – Case 1]

Eventually, we display in Fig. 17 the evolution in time of the overall CPU cost required to implement the feedback loop, which includes both data assimilation and command synthesis steps. As foreseen, this cost is higher during the first assimilation times, when the pdfs on the parameters σ and Pe significantly evolve (i.e. when much is learnt from measurement data). Once the asymptotic regime is reached in the model updating procedure, the CPU cost is low (< 1 s), which is compatible with real-time constraints for the considered welding application.

[Fig. 17 — Computation time including the computation of the transport maps and the command synthesis – Case 1]

Case 2: control of the welding depth with evolving physical process parameters

This second test case has many similarities with the previous one, the control objective still being Q(T_3|τ*) = 1. Nevertheless, we now take τ* = 100 and we assume that the welding process experiences an unexpected change in the Peclet number value during service (e.g. due to a change in the source velocity or in the material thermal properties), at t = 40. Consequently, the reference parameter values which are now used to get synthetic (noisy) data are:

$\sigma_{\mathrm{ref}} = 0.4; \qquad Pe_{\mathrm{ref}} = \begin{cases} -60 & \text{for } t < 40 \\ -55 & \text{for } t \geq 40 \end{cases}$   (25)
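For clarity, a small sketch of how synthetic observations consistent with the reference evolution (25) could be generated is given below; the noise levels reuse those of Case 1 as an assumption, and the output function is a placeholder standing in for the PGD surrogate.

```python
import numpy as np

rng = np.random.default_rng(3)

def Pe_ref(t):
    """Reference Peclet number of Eq. (25): drops from -60 to -55 at t = 40."""
    return -60.0 if t < 40 else -55.0

def synthetic_measurements(t, model_output, noise_std=(0.01925, 0.01245)):
    """Noisy synthetic observations (T1, T2) at assimilation time t for the Case 2 setup."""
    clean = model_output(sigma=0.4, Pe=Pe_ref(t), t=t)
    return clean + rng.normal(0.0, noise_std)

# Placeholder output function standing in for the PGD prediction of the two sensors
dummy_model = lambda sigma, Pe, t: np.array([0.55 + 0.001 * t, 0.90 - 0.0005 * t])
for t in (10, 39, 40, 80):
    print(t, synthetic_measurements(t, dummy_model))
```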
[Fig. 18 — Marginals on σ (left) and Pe (right) at each assimilation time step – Case 2]

Starting from the same prior distribution of parameters as in test case 1, sequential data assimilation using Transport Map sampling and PGD is again performed. The minimization problem associated with the computation of the maps is solved with order 1 information on the derivatives, that is a BFGS algorithm with explicit computation of the gradient from the PGD representation. The complexity of the maps (that is, the degree of the employed Hermite polynomials) is increased until reaching a tolerance of 10^-3 on the variance diagnostic. We represent in Fig. 18 the evolution in time of the marginals on both parameters σ and Pe. Again, we observe that they become thinner with larger maximal pdf values when the number of data assimilation times increases. We also observe that after the change of the reference value for Pe, the data assimilation algorithm is able to detect this change and infers a mean value that slowly tends to the new reference value (even though, right after t = 40, the reference parameter value Pe_ref = −55 appears in the tail of the pdf). Meanwhile, during this transient regime, it seems that no additional knowledge is brought for the inference of σ, as the associated marginals are stagnating. We also show in Fig. 19 the map orders which are used along the data assimilation process. This particularly indicates that an order 1 map remains sufficient to follow the sudden change in the reference value for Pe.

[Fig. 19 — Map order required at each assimilation time step – Case 2]

From the dynamical updating of model parameters and with respect to the objective, the control of the process with on-the-fly command synthesis is implemented. We show in Fig. 20 the time evolution of the pdf of T_3|t in the case of a controlled welding process. We observe that the control objective is reached even though the pdfs of the model parameters have not yet converged around the reference parameter values. This illustrates the interest of the control in a stochastic framework, in which uncertainty on the inferred parameters is taken into account in the synthesis of the command in order to make safe decisions. We also plot in Fig. 21 the evolution of the command variable u(t) along the process as well as its incremental corrections δu_i at each time point t_i; we clearly observe the change in the command when the physical value of the Peclet number drops at t = 40.

[Fig. 20 — Evolution in time of T_3|t when the control is implemented – Case 2]
[Fig. 21 — Evolution of the command variable (left) and its incremental corrections (right) along the controlled welding process – Case 2]

Case 3: control of the welding temperature evolution with prescribed time path

In this last test case, the control objective is to make the temperature T_3|t follow a predefined time path, which comes down to imposing the welding history along the process. We set the final time τ = 100 and we assume that the reference parameter values are σ_ref = 0.4 and Pe_ref = −60 (constant in time). Synthetic measurement data are simulated from these values, with additive measurement noise.

The prescribed evolution curve for T_3|t is shown in Fig. 22 (dashed red line). It is a ramp increase up to t = 20, then a plateau evolution. In our stochastic framework, the command law is designed so that the predicted mean value of T_3|t follows this target evolution. In practice, at each assimilation time point t_i, and from the inferred pdfs on the model parameters at this time, a command correction δu_i is computed so that the prediction on mean(T_3|t_{i+1}) coincides with the target value at the next assimilation time point t_{i+1}. The evolution of T_3|t predicted from the model with reference parameter values, and without any control, is also shown in Fig. 22 (solid black line).

[Fig. 22 — Target (dashed red line) and free system (solid black line) evolution curves for T_3|t – Case 3]

Starting from the same prior distribution of parameters as in the previous test cases, sequential data assimilation using Transport Map sampling and PGD is performed.
The minimization problem associated with the computation of the maps is solved with order 1 information on the derivatives, and the complexity of the maps is increased until reaching a tolerance of 10^-3 on the variance diagnostic. We represent in Fig. 23 the evolution in time of the marginals on both parameters σ and Pe. As expected, we observe that they become thinner, with larger maximal pdf values tending to the reference parameter values along the data assimilation process. The map orders which are used along this process are shown in Fig. 24; they again indicate that an order 1 map is sufficient, except for the first assimilation steps where the complexity of the transformation between the reference density and the first posterior densities is higher.

[Fig. 23 — Marginals on σ (left) and Pe (right) at each assimilation time step – Case 3]
[Fig. 24 — Map order required at each assimilation time step – Case 3]

From the dynamical updating of model parameters and with respect to the objective, the control of the process with on-the-fly command synthesis is implemented. We show in Fig. 25 the resulting time evolution of the pdf of T_3|t. We observe that mean(T_3|t) quite perfectly matches the target evolution. We also plot in Fig. 26 the evolution of the command variable u(t) along the process as well as its incremental corrections δu_i at each time point t_i. We observe that during the transient phase (ramp evolution of the target), fast modifications in the command are required, while command increments tend to zero once the steady-state target regime is reached. Anyhow, this test case shows that the proposed DDDAS strategy is capable of generating complex and effective command laws.

[Fig. 25 — Evolution in time of T_3|t when the control is implemented – Case 3]
[Fig. 26 — Evolution of the command variable (left) and its incremental corrections (right) along the controlled welding process – Case 3]

Conclusions

In this work we presented a procedure to build a numerical feedback loop for the control of a fusion welding process from modeling and simulation, while taking uncertainties into account. In order to perform fast computations and permit real-time exchanges between the physical system and its virtual twin, PGD model reduction and Transport Map sampling were used in several numerical tasks along the feedback loop. In particular, the explicit dependency on the model parameters inside the PGD model, as well as the suitable sampling and integration framework offered by transport maps, enabled data assimilation, uncertainty quantification, and predictive control to be performed effectively. The implementation of the feedback loop for various control scenarios illustrated the interest and performance of the proposed approach. This approach thus appears to be a relevant tool for real-time feedback control in the DDDAS framework. Future works should focus on the extension of the approach to more complex (e.g. nonlinear) models, associated with modeling errors that may be a priori considered in the Bayesian framework but also a posteriori corrected from data-based learning and enrichment. Dealing with a larger number of model parameters and control variables in the DDDAS context is also a research topic of interest that will be investigated in forthcoming works.
Authors' contributions
All authors discussed the content of the article and were involved in the definition of techniques and algorithms. All authors read and approved the final manuscript.

Funding
No specific funding has to be declared for this work.

Availability of data and material
The datasets used during the current study are available from the corresponding author on reasonable request. The interested reader is thus invited to contact the corresponding author.

Competing interests
The authors declare that they have no competing interests.

Author details
1 Université Paris-Saclay, ENS Paris-Saclay, LMT, 4 Avenue des Sciences, 91190 Gif-sur-Yvette, France. 2 Institut Universitaire de France (IUF), 1 rue Descartes, 75005 Paris, France.

Received: 11 December 2019. Accepted: 22 January 2021.

References
1. Arulampalam MS, Maskell S, Gordon N, Clapp T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing. 2002;50(2):174–88.
2. Beck JL. Bayesian system identification based on probability logic. Structural Control and Health Monitoring. 2010;17(7):825–47.
3. Berger J, Orlande HRB, Mendes N. Proper Generalized Decomposition model reduction in the Bayesian framework for solving inverse heat transfer problems. Inverse Problems in Science and Engineering. 2017;25(2):260–78.
4. Bogachev VI, Kolesnikov AV, Medvedev KV. Triangular transformations of measures. Sbornik: Mathematics. 2005;196:309.
5. Bouclier R, Louf F, Chamoin L. Real-time validation of mechanical models coupling PGD and constitutive relation error. Computational Mechanics. 2013;52(4):861–83.
6. Calvetti D, Dunlop M, Somersalo E, Stuart A. Iterative updating of model error for Bayesian inversion. Inverse Problems. 2018;34(2).
7. Chamoin L, Allier PE, Marchand B. Synergies between the Constitutive Relation Error concept and PGD model reduction for simplified V&V procedures. Advanced Modeling and Simulation in Engineering Sciences. 2016;3:18.
8. Chamoin L, Pled F, Allier PE, Ladevèze P. A posteriori error estimation and adaptive strategy for PGD model reduction applied to parametrized linear parabolic problems. Computer Methods in Applied Mechanics and Engineering. 2017;327:118–46.
9. Chinesta F, Ladevèze P, Cueto E. A short review on model order reduction based on Proper Generalized Decomposition. Archives of Computational Methods in Engineering. 2011;18(4):395–404.
10. Chinesta F, Keunings R, Leygue A. The Proper Generalized Decomposition for Advanced Numerical Simulations: A Primer. SpringerBriefs in Applied Sciences and Technology; 2014.
11. Chinesta F, Cueto E, Abisset-Chavanne E, Duval J-L, Khaldi FE. Virtual, digital and hybrid twins: a new paradigm in data-based engineering and engineered data. Archives of Computational Methods in Engineering. 2020;27:105–34.
12. Darema F. Dynamic Data Driven Applications Systems: a new paradigm for application simulations and measurements. Computational Science - ICCS. 2004:662–669.
13. El Moselhy TA, Marzouk Y. Bayesian inference with optimal maps. Journal of Computational Physics. 2012;231(23):7815–50.
14. Gamerman D, Lopes HF. Markov Chain Monte Carlo - Stochastic Simulation for Bayesian Inference. CRC Press; 2006.
15. Gonzalez D, Masson F, Poulhaon F, Leygue A, Cueto E, Chinesta F. Proper generalized decomposition based dynamic data driven inverse identification. Mathematics and Computers in Simulation. 2012;82(9):1677–95.
16. Grepl M. Reduced-Basis Approximation and A Posteriori Error Estimation. PhD Thesis; 2005.
17. Kaipio J, Somersalo E. Statistical and Computational Inverse Problems. New York: Springer-Verlag; 2004.
18. Ladevèze P. On reduced models in nonlinear solid mechanics. European Journal of Mechanics - A/Solids. 2016;60:227–.
19. Manzoni A, Pagani S, Lassila T. Accurate solution of Bayesian inverse uncertainty quantification problems combining reduced basis methods and reduction error models. SIAM/ASA Journal on Uncertainty Quantification. 2016;4(1):380–.
20. Marchand B, Chamoin L, Rey C. Real-time updating of structural mechanics models using Kalman filtering, modified Constitutive Relation Error and Proper Generalized Decomposition. International Journal for Numerical Methods in Engineering. 2016;107(9):786–810.
21. Marzouk Y, Moselhy T, Parno M, Spantini A. Sampling via measure transport: an introduction. Handbook of Uncertainty Quantification. 2016:1–41.
22. Matthies HG, Zander E, Rosic BV, Litvinenko A, Pajonk O. Inverse problems in a Bayesian setting. Computational Methods for Solids and Fluids. 2016;41:245–86.
23. Parno MD, Marzouk YM. Transport map accelerated Markov Chain Monte-Carlo. SIAM/ASA Journal on Uncertainty Quantification. 2018;6(2):645–82.
24. Peherstorfer B, Willcox K. Dynamic data-driven reduced-order models. Computer Methods in Applied Mechanics and Engineering. 2015;291:21–41.
25. Robert CP, Casella G. Monte Carlo Statistical Methods. New York: Springer Texts in Statistics; 2004.
26. Rubio PB, Louf F, Chamoin L. Fast model updating coupling Bayesian inference and PGD model reduction. Computational Mechanics. 2018;62(6):1485–509.
27. Rubio PB, Louf F, Chamoin L. Transport Map sampling with PGD model reduction for fast dynamical Bayesian data assimilation. International Journal for Numerical Methods in Engineering. 2019;120(4):447–72.
28. Rubio PB, Chamoin L, Louf F. Real-time Bayesian data assimilation with data selection, correction of model bias, and on-the-fly uncertainty propagation. Comptes Rendus Mécanique. 2019;347:762–79.
29. Spantini A, Bigoni D, Marzouk Y. Inference via low-dimensional couplings. Journal of Machine Learning Research. 2018;19:1–71.
30. Stuart AM. Inverse problems: a Bayesian perspective. Acta Numerica. 2010;19:451–559.
31. Tarantola A. Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics; 2005.

Real-time data assimilation and control on mechanical systems under uncertainties

Loading next page...
 
/lp/springer-journals/real-time-data-assimilation-and-control-on-mechanical-systems-under-erjIeT08F5

References (37)

Publisher
Springer Journals
Copyright
Copyright © The Author(s) 2021
eISSN
2213-7467
DOI
10.1186/s40323-021-00188-3
Publisher site
See Article on Publisher Site

Abstract

ludovic.chamoin@ens-paris- saclay.fr This research work deals with the implementation of so-called Dynamic Data-Driven Université Paris-Saclay, ENS Application Systems (DDDAS) in structural mechanics activities. It aims at designing a Paris-Saclay, LMT, 4 Avenue des Sciences, 91190 Gif-sur-Yvette, real-time numerical feedback loop between a physical system of interest and its France numerical simulator, so that (i) the simulation model is dynamically updated from Full list of author information is sequential and in situ observations on the system; (ii) the system is appropriately driven available at the end of the article and controlled in service using predictions given by the simulator. In order to build such a feedback loop and take various uncertainties into account, a suitable stochastic framework is considered for both data assimilation and control, with the propagation of these uncertainties from model updating up to command synthesis by using a specific and attractive sampling technique. Furthermore, reduced order modeling based on the Proper Generalized Decomposition (PGD) technique is used all along the process in order to reach the real-time constraint. This permits fast multi-query evaluations and predictions, by means of the parametrized physics-based model, in the online phase of the feedback loop. The control of a fusion welding process under various scenarios is considered to illustrate the proposed methodology and to assess the performance of the associated numerical architecture. Keywords: Data assimilation, Real-time control, Model reduction, Uncertainty quantification and propagation, Bayesian inference, Proper generalized decomposition Introduction The continuous interaction between physical systems and high-fidelity simulation tools (i.e. virtual twins) has become a key enabler for industry as well as an appealing research topic along the last decade (see for instance [11]). This is at the heart of the Dynamic Data Driven Application System (DDDAS) concept [12], in which a simulation model is used to make decisions and drive an evolving physical system, and is in the same time fed by data collected on this system in order to update parameters and ensure the con- tinual consistency between numerical predictions and physical reality. In other words, the DDDAS concept aims at building a numerical feedback loop between the physical system and its simulator, with on-the-fly data assimilation and control (Fig. 1). Neverthe- less, there are two main numerical challenges in the implementation of such a loop for © The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. 0123456789().,–: volV Rubio et al. Adv. Model. and Simul. in Eng. Sci. (2021) 8:4 Page 2 of 25 Fig. 
Fig. 1 Scheme of the DDDAS feedback control loop

structural mechanics applications. On the one hand, the dialog between numerical models and physical systems is in practice subject to several sources of uncertainty, including measurement noise, modeling errors, or variabilities in the system properties and environment. On the other hand, a relevant feedback loop requires effective numerical methods so that real-time computations and interactions can be performed.

The paper presents a general strategy, addressing the two previous challenges, for the design of an effective numerical feedback loop between a physical system and its simulator. It considers a stochastic framework for sequential data assimilation and control, which uses Bayesian inference for model updating from in situ data as well as uncertainty propagation to make predictions from the model and synthesize control laws. Such a framework considers the parameters to be inferred as random variables, and it naturally takes all uncertainty sources into account [2,6,17,22,30,31]. The proposed strategy also leans on two ingredients which make it possible to meet the real-time constraint.

First, Transport Map sampling [13] is used as an alternative to Markov Chain Monte-Carlo (MCMC) [14,25] or Sequential Monte-Carlo [1] techniques in order to perform fast Bayesian inference, with convenient sampling of multi-dimensional posterior densities and associated adaptive strategies. The Transport Map technique consists in building a deterministic polynomial mapping between the posterior probability measure of interest and a simple reference measure (e.g. a Gaussian distribution) [21,23,29]. It thus permits an automatic exploration, from the constructed mapping, of the multi-parametric stochastic space in order to effectively derive useful information such as means, standard deviations, maxima, or marginals on model parameters. These pieces of information can then be propagated to model outputs in order to quantify uncertainty, synthesize the appropriate command in a stochastic context, and thus make safe decisions on the evolving system.

Second, model reduction by means of the Proper Generalized Decomposition (PGD) technique [9,10] is introduced in order to reduce the computational effort for the evaluation of multi-parametric numerical models, and therefore further speed up the overall process. The PGD approximation builds a modal representation of the multi-parametric model solution with separated variables and explicit dependency on model parameters. This representation is computed in an offline phase with controlled accuracy [8] before being evaluated at low cost in the online phase. It is shown in the paper that the PGD technique (i) facilitates the computation of the likelihood function involved in the Bayesian inference framework [3,26]; (ii) can be effectively coupled with Transport Map sampling for the calculation of the maps, as it directly provides information on solution derivatives [27,28]; (iii) is a particularly effective tool for performing uncertainty propagation through the forward model as well as command law synthesis. A particular focus is made here on the latter point, dealing with effective command in a stochastic framework; this has been investigated in very few works of the literature, even though it is a major aspect of the DDDAS procedure.
The dynamic command synthesis we propose, which exploits the advantages of Transport Map sampling and PGD model reduction, is the main novelty of the paper. It permits the construction and implementation of the full DDDAS feedback loop.

The constructed feedback loop is here illustrated in the context of a fusion welding process. It involves a simplified welding model introduced in [16] (and described in Fig. 2), which is supposed to be an accurate enough representation of the physical phenomena of interest.

Fig. 2 Illustration of the considered welding model

In this two-dimensional model, two metal plates are welded by a heat source whose center is moving along the geometry. The problem unknown is the dimensionless temperature field T in the space domain \Omega and over the time domain I; T = 0 when the temperature is equal to the room temperature, and T = 1 when the temperature is equal to the melting temperature of the material. On the right-hand side boundary \Gamma (see Fig. 2), the temperature is supposed to be equal to the room temperature (T = 0). The other boundaries are supposed to be insulated. To solve the problem, the system of coordinates is made to move at the same speed as the heat source. Thus, the model problem is described by the following heat equation with a convective term:

\frac{\partial T}{\partial t} + \mathbf{v}(Pe) \cdot \mathrm{grad}\, T - \kappa\, \Delta T = s(\sigma)    (1)

where \mathbf{v} = [Pe; 0] is the advection velocity, Pe = v_c L_c / \kappa is the Peclet number (L_c being the characteristic length of the problem), and \kappa is the thermal diffusivity of the material. The volume heat source term s is defined by the following Gaussian repartition in the space domain:

s(x, y; \sigma) = \frac{u}{2\pi\sigma^2} \exp\left(-\frac{(x - x_c)^2 + (y - y_c)^2}{2\sigma^2}\right)    (2)

where the coordinates (x_c, y_c) represent the location of the heat source center, u is the magnitude, and \sigma is a scalar parameter that drives the source expansion.

From the integration of (1) over \Omega, the weak formulation in space of the problem is of the form: find T \in \mathcal{T} such that

a(T, T^*) = l(T^*) \quad \forall T^* \in \mathcal{T}    (3)

with:

a(T, T^*) = \int_\Omega \left(\frac{\partial T}{\partial t} + \mathbf{v} \cdot \mathrm{grad}\, T\right) T^* + \kappa\, \mathrm{grad}\, T \cdot \mathrm{grad}\, T^* \, d\Omega ; \qquad l(T^*) = \int_\Omega s\, T^* \, d\Omega    (4)

The functional space \mathcal{T} is the Bochner space L^2(I; \mathcal{S}) \simeq \mathcal{S} \otimes \mathcal{I}, with \mathcal{S} = H^1_{0|\Gamma} the Sobolev space of H^1 functions on \Omega satisfying homogeneous Dirichlet boundary conditions on \Gamma, and \mathcal{I} = L^2(I) the Lebesgue space.

Fig. 3 Illustration of the two model parameters (left and center), and time evolution of the command (right)

The model parameters to be updated from indirect noisy data are p = {\sigma, Pe}, which are respectively related to the spatial spreading and the speed of the heat source, as illustrated in Fig. 3. They may vary over the time domain. Data consist in the measurement of temperatures T_1 and T_2 at two points in \Omega (see Fig. 2). From these data, assimilated sequentially in time, the purpose is twofold: (i) to dynamically update the model parameters p; (ii) to control from the updated model the temperature T_3 at another point in \Omega, which is the output of interest assumed to be unreachable by direct measurement, and to perform corrections on the welding process if necessary. The control variable is the magnitude u of the heat source, which is supposed to be piecewise constant in time, as illustrated in Fig. 3.
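To make the forward model concrete, the following is a minimal sketch of Eqs. (1)–(2): the Gaussian heat source and an explicit finite-difference update of the advection–diffusion equation in the frame moving with the source. The grid, the time step, and all numerical values are illustrative assumptions, and this discretization is not the one used in the paper, which relies on the weak form (3)–(4) solved with PGD.

```python
import numpy as np

# Illustrative grid and parameter values (not those of the paper)
Lx, Ly, nx, ny = 2.0, 1.0, 81, 41
dx, dy = Lx / (nx - 1), Ly / (ny - 1)
kappa, Pe, sigma, u_mag = 1.0, -0.5, 0.1, 1.0       # hypothetical dimensionless values
xc, yc = 1.0, 0.5                                   # heat-source center in the moving frame
dt = 0.2 * min(dx, dy) ** 2 / kappa                 # stable explicit time step

x, y = np.linspace(0.0, Lx, nx), np.linspace(0.0, Ly, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

def source(sig, u):
    """Gaussian heat source of Eq. (2), centered at (xc, yc)."""
    return u / (2.0 * np.pi * sig**2) * np.exp(-((X - xc)**2 + (Y - yc)**2) / (2.0 * sig**2))

def step(T, sig, u):
    """One explicit finite-difference step of Eq. (1) in the frame moving with the source."""
    Tn = T.copy()
    lap = ((T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2
           + (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dy**2)
    adv = Pe * (T[2:, 1:-1] - T[:-2, 1:-1]) / (2 * dx)      # advection with v = [Pe, 0]
    Tn[1:-1, 1:-1] += dt * (kappa * lap - adv + source(sig, u)[1:-1, 1:-1])
    Tn[-1, :] = 0.0                 # room temperature imposed on the right-hand boundary
    Tn[0, :] = Tn[1, :]             # insulated boundaries: zero normal flux
    Tn[:, 0], Tn[:, -1] = Tn[:, 1], Tn[:, -2]
    return Tn

T = np.zeros((nx, ny))
for _ in range(500):
    T = step(T, sigma, u_mag)
```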
The paper outline is as follows: in "Reduced order modeling using PGD" section, the PGD model reduction applied to the above reference model is detailed. It is then employed in association with Bayesian inference and Transport Map sampling for fast data assimilation and model updating in "Real-time data assimilation with Bayesian inference and Transport Map sampling" section. All these tools are beneficially reused for on-the-fly command synthesis and system control in "Real-time control" section. Several numerical experiments are reported in "Results and discussion" section, which show the interest and performance of the proposed feedback loop by considering various welding scenarios. Sequential data assimilation, uncertainty propagation up to the output of interest, and real-time control of the welding process are illustrated for each of these scenarios. Eventually, conclusions and prospects are drawn in "Conclusions" section.

Methods

Reduced order modeling using PGD

Due to the increasing number of high-dimensional approximation problems, which naturally arise in many situations such as optimization or uncertainty quantification, model reduction techniques have been the object of growing interest and are now a mature technology [19,24]. Tensor methods are among the most prominent tools for the construction of model reduction techniques as, in many practical applications, the approximation of high-dimensional solutions of Partial Differential Equations (PDEs) is made computationally tractable by using low-rank tensor formats. In particular, an appealing technique based on a canonical format and referred to as Proper Generalized Decomposition (PGD) was introduced and successfully used in many applications of computational mechanics dealing with multiparametric problems [5,7,9,10,15,18,20]. Contrary to POD, the PGD approximation does not require any knowledge on the solution, and it operates in an iterative strategy in which basis functions (or modes) are computed from scratch by solving eigenvalue problems.

In the classical PGD framework, the reduced model is built directly from the weak formulation (here (3)) of the considered PDE, integrated over the parametric space. The approximate reduced solution T^m at order m is then searched in a separated form with respect to space, time, and model parameters p = {p_1, p_2, ..., p_d} seen as extra-coordinates [10]:

T^m(x, t, p) = \sum_{k=1}^{m} \Lambda_k(x)\, \lambda_k(t) \prod_{i=1}^{d} \alpha_k^i(p_i)    (5)

The computation of the PGD modal representation is performed in an offline phase by using an iterative method [10], before being evaluated in an online phase at any space-time location and any parameter value from products and sums of one-parameter functions. For the multi-parametric problem of interest, the construction of the PGD solution is detailed in [26]. It reads:

T^m(x, t, \sigma, Pe) = \sum_{k=1}^{m} \Lambda_k(x)\, \lambda_k(t)\, \alpha_k^1(\sigma)\, \alpha_k^2(Pe)    (6)

Considering a heat source term with u = 1, the first four PGD modes are represented in Fig. 4 (spatial modes), Fig. 5 (parameter modes), and Fig. 6 (time modes).

Fig. 4 First four spatial modes of the PGD solution
Fig. 5 First four parametric modes of the PGD solution ((a) modes in σ, (b) modes in Pe)
Fig. 6 First four time modes of the PGD solution
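To illustrate the online use of the separated representation (5)–(6), the sketch below evaluates a PGD expansion at a given space-time location and at arbitrary parameter values by multiplying and summing one-dimensional modes. The mode arrays here are random placeholders standing in for the actual offline computation, and the grids and sizes are assumptions.

```python
import numpy as np

# Hypothetical precomputed PGD modes (offline phase); shapes and grids are illustrative.
m = 4                                    # PGD order
n_space, n_time, n_sig, n_pe = 200, 500, 50, 50
rng = np.random.default_rng(0)
Lam  = rng.random((m, n_space))          # spatial modes   Lambda_k(x)
lam  = rng.random((m, n_time))           # time modes      lambda_k(t)
a_s  = rng.random((m, n_sig))            # parameter modes alpha_k^1(sigma)
a_pe = rng.random((m, n_pe))             # parameter modes alpha_k^2(Pe)
sig_grid = np.linspace(0.3, 0.5, n_sig)
pe_grid  = np.linspace(-70.0, -50.0, n_pe)

def pgd_eval(ix, it, sigma, Pe):
    """Online evaluation of the separated form (6) at space index ix, time index it and
    arbitrary parameter values (1D interpolation of the parametric modes)."""
    a1 = np.array([np.interp(sigma, sig_grid, a_s[k]) for k in range(m)])
    a2 = np.array([np.interp(Pe, pe_grid, a_pe[k]) for k in range(m)])
    return np.sum(Lam[:, ix] * lam[:, it] * a1 * a2)

T_at_sensor = pgd_eval(ix=120, it=300, sigma=0.4, Pe=-60.0)
```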
Real-time data assimilation with Bayesian inference and Transport Map sampling

Basics on Bayesian inference

The purpose of Bayesian inference is to characterize the posterior probability density function (pdf) \pi(p | d^{obs}) of some model parameters p given some indirect and noisy observations d^{obs}. In this context, the Bayesian formulation of the inverse problem reads [17]:

\pi(p \mid d^{obs}) = \frac{1}{C}\, \pi(d^{obs} \mid p)\, \pi_0(p)    (7)

where \pi_0(p) is the prior pdf, related to the a priori knowledge on the parameters before the consideration of the data d^{obs}, \pi(d^{obs} | p) is the likelihood function that corresponds to the probability for the model \mathcal{M} to predict the observations d^{obs} given values of the parameters p, and C = \int \pi(d^{obs} \mid p)\, \pi_0(p)\, dp is a normalization constant. No assumption is made on the probability densities (prior, measurement noise) or on the linearity of the model. We consider here the classical case of an additive measurement noise with density \pi_{meas}. We also consider that there is no modeling error, even though such an error source could easily be taken into account in the Bayesian inference framework (provided quantitative information on this error source is available). The likelihood function thus reads:

\pi(d^{obs} \mid p) = \pi_{meas}\left(d^{obs} - \mathcal{M}(p)\right)    (8)

Furthermore, when considering sequential assimilation of measurements d^{obs}_i at time steps t_i, i \in \{1, \ldots, N_t\}, the Bayesian formulation is such that the prior at time t_i corresponds to the posterior at time t_{i-1}:

\pi(p \mid d^{obs}_1, \ldots, d^{obs}_i) \propto \left( \prod_{j=1}^{i} \pi_{t_j}(d^{obs}_j \mid p) \right) \cdot \pi_0(p); \qquad \pi_{t_j}(d^{obs}_j \mid p) = \pi_{meas}\left(d^{obs}_j - \mathcal{M}(p, t_j)\right)    (9)

Once the PGD approximation T^m(x, t, p) is built (see "Reduced order modeling using PGD" section), an explicit formulation of the non-normalized posterior density can be derived. Indeed, owing to the observation operator O, the output d^m(p, t) = O(T^m(x, t, p)) can be easily computed for any value of the parameter set p. The non-normalized posterior density \bar{\pi} thus reads:

\bar{\pi}(p \mid d^{obs}_1, \ldots, d^{obs}_i) = \left( \prod_{j=1}^{i} \pi_{meas}\left(d^{obs}_j - d^m(p, t_j)\right) \right) \cdot \pi_0(p)    (10)

From the expression of \pi(p | d^{obs}) (or \pi(p | d^{obs}_1, ..., d^{obs}_i)), stochastic features such as means, variances, or first-order marginals on the parameters may be computed. These quantities are based on large-dimension integrals, and classical Monte-Carlo integration-based techniques such as Markov Chain Monte-Carlo (MCMC) require in practice to sample the posterior density a large number of times. This multi-query procedure is very time consuming and incompatible with fast computations; we thus deal with an alternative approach in the following section.
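A minimal sketch of the sequential unnormalized posterior (8)–(10) is given below, assuming additive Gaussian measurement noise. The function model_outputs is a toy stand-in for the PGD-based observation operator O(T^m(x, t_j, p)); the noise levels and prior moments are those quoted later for test case 1, and the synthetic observation history is illustrative.

```python
import numpy as np

noise_std = np.array([0.01925, 0.01245])              # measurement noise on (T1, T2), from Case 1
prior_mean = np.array([0.4, -60.0])                   # prior means on (sigma, Pe)
prior_std = np.array([np.sqrt(0.003), np.sqrt(7.0)])  # prior standard deviations

def model_outputs(p, t_idx):
    """Toy stand-in for the PGD-based observation operator O(T^m(x, t_j, p))."""
    sigma, Pe = p
    return np.array([0.5 + 0.1 * sigma - 0.001 * Pe,
                     0.9 + 0.2 * sigma + 0.002 * Pe])

def log_prior(p):
    return -0.5 * np.sum(((p - prior_mean) / prior_std) ** 2)

def log_posterior(p, observations):
    """Unnormalized log-posterior of Eq. (10) for a list of (t_idx, d_obs) pairs,
    assuming additive Gaussian measurement noise as in Eq. (8)."""
    lp = log_prior(p)
    for t_idx, d_obs in observations:
        resid = d_obs - model_outputs(p, t_idx)
        lp += -0.5 * np.sum((resid / noise_std) ** 2)
    return lp

obs_history = [(1, np.array([0.51, 0.92])), (2, np.array([0.52, 0.93]))]  # synthetic example
print(log_posterior(np.array([0.4, -60.0]), obs_history))
```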
Transport Map sampling

The principle of the Transport Map strategy is to build a deterministic mapping M between a reference probability measure \nu_\rho and a target measure \nu_\pi. The purpose is to find the change of variables such that:

\int g\, d\nu_\pi = \int g \circ M\, d\nu_\rho    (11)

Fig. 7 Illustration of the Transport Map principle for sampling a target density

In this framework, samples drawn according to the reference density are transported to become samples drawn according to the target density (Fig. 7). For the considered inference methodology, the target density corresponds to the posterior density \pi(p | d^{obs}) derived from the Bayesian formulation, while a standard normal Gaussian density may be chosen as the reference density; for more details, we refer to [29] with effective computation tools (see http://transportmaps.mit.edu).

From the reference density \rho, the purpose is thus to build the map M: R^d \to R^d such that:

\nu_\pi \approx M_\sharp \nu_\rho = \left(\rho \circ M^{-1}\right) \left|\det \nabla M^{-1}\right|    (12)

where \sharp denotes the push-forward operator. Once the map M is found, it can be used for sampling purposes by transporting samples drawn from \rho to samples drawn from \pi. Similarly, a Gaussian quadrature (\omega_i, p_i)_{i=1}^{N} for \rho can be transported to a quadrature (\omega_i, M(p_i))_{i=1}^{N} for \pi. Such a (deterministic) numerical integration with a quadrature rule from the reference Gaussian density is therefore a technique of choice, used in the present work for the calculation of statistics, marginals, or any other information from the posterior pdf.

Maps M are searched among Knothe–Rosenblatt rearrangements (i.e. lower triangular and monotonic maps). This particular choice of structure is motivated by the following properties (see [4,21,29] for all details):
• Uniqueness and existence under mild conditions on \nu_\pi and \nu_\rho;
• Easily invertible map and Jacobian \nabla M simple to evaluate;
• Optimality regarding the weighted quadratic cost;
• Monotonicity essentially one-dimensional (\partial_{p_k} M^k > 0).

The maps M are therefore parametrized as:

M(p) = \begin{bmatrix} M^1(a_c^1, a_e^1; p_1) \\ M^2(a_c^2, a_e^2; p_1, p_2) \\ \vdots \\ M^d(a_c^d, a_e^d; p_1, p_2, \ldots, p_d) \end{bmatrix}    (13)

with M^k(a_c^k, a_e^k; p) = \Phi_c(p)\, a_c^k + \int_0^{p_k} \left( \Phi_e(p_1, \ldots, p_{k-1}, \theta)\, a_e^k \right)^2 d\theta. The functions \Phi_c and \Phi_e are chosen as Hermite polynomials with coefficients a_c and a_e. This integrated-squared parametrization is a classical choice that automatically ensures the monotonicity of the map, and using Hermite polynomials leads to an integration that can be performed analytically. With this parametrization, the optimal map M is found by minimizing the following Kullback–Leibler (K–L) divergence:

D_{KL}(M_\sharp \nu_\rho \,\|\, \nu_\pi) = \mathbb{E}_\rho\!\left[ \log \frac{\nu_\rho}{M^{-1}_\sharp \nu_\pi} \right] = \int \left[ \log(\rho(p)) - \log([\pi \circ M](p)) - \log(|\det \nabla M(p)|) \right] \rho(p)\, dp    (14)

that quantifies the difference between the two distributions \nu_\pi and M_\sharp \nu_\rho. Still using a Gaussian quadrature rule (\omega_i, p_i)_{i=1}^{N} over the reference probability space associated with \rho, the minimization problem reads:

\min_{a_c^{1,\ldots,d},\, a_e^{1,\ldots,d}} \ \sum_{i=1}^{N} \omega_i \left[ -\log\left( \bar{\pi} \circ M(a_c^{1,\ldots,d}, a_e^{1,\ldots,d}; p_i) \right) - \log\left( \left|\det \nabla M(a_c^{1,\ldots,d}, a_e^{1,\ldots,d}; p_i)\right| \right) \right]    (15)

where \bar{\pi} is the non-normalized version of the target density. This minimization problem is fully deterministic and may be solved using classical algorithms (such as BFGS) using gradient or Hessian information on the density \pi(p).

It is important to notice that the reduced PGD representation (6) of the solution is highly beneficial to solve (15). Partial derivatives of the model with respect to the parameters p can indeed be easily computed as:

\frac{\partial^n T^m}{\partial p_j^n}(x, t, p) = \sum_{k=1}^{m} \Lambda_k(x)\, \lambda_k(t)\, \frac{\partial^n \alpha_k^j}{\partial p_j^n}(p_j) \prod_{i=1, i \neq j}^{d} \alpha_k^i(p_i)    (16)

and stored in the offline phase. Thanks to the separated representation of the PGD, cross-derivatives are computed by combining univariate mode derivatives. As a result, the use of PGD also speeds up the computation of transport maps.
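The following is a minimal one-dimensional sketch of the map construction (13)–(15): a monotone map with an integrated-squared polynomial parametrization is fitted to an arbitrary unnormalized target density by minimizing the quadrature-discretized K–L objective over Gauss–Hermite points of the reference Gaussian. A monomial basis and numpy's polynomial integration are used here instead of the Hermite basis and analytic integration of the paper, and the target density is a toy example.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.optimize import minimize

# Gauss-Hermite quadrature adapted to the standard normal reference density rho
xq, wq = np.polynomial.hermite_e.hermegauss(30)
wq = wq / np.sqrt(2.0 * np.pi)                      # weights now sum to 1 (expectation under N(0,1))

def log_pi(p):
    """Unnormalized log target density (toy, slightly non-Gaussian posterior)."""
    return -0.5 * ((p - 0.4) / 0.05) ** 2 + 0.3 * np.sin(10.0 * p)

def map_and_deriv(a, x):
    """Monotone 1D map M(x) = a0 + int_0^x (sum_j b_j s^j)^2 ds and its derivative."""
    a0, b = a[0], a[1:]
    c = P.polymul(b, b)                             # coefficients of the squared polynomial
    C = P.polyint(c)                                # antiderivative vanishing at 0
    return a0 + P.polyval(x, C), P.polyval(x, c)    # M(x) and M'(x) >= 0

def kl_objective(a):
    """Quadrature-discretized K-L objective of Eq. (15), up to additive constants."""
    M, dM = map_and_deriv(a, xq)
    return np.sum(wq * (-log_pi(M) - np.log(dM + 1e-300)))

a_init = np.array([0.0, 1.0, 0.0, 0.0])             # near-identity initial map
res = minimize(kl_objective, a_init, method="BFGS")
M_opt, _ = map_and_deriv(res.x, xq)                 # transported quadrature points
post_mean = np.sum(wq * M_opt)
post_std = np.sqrt(np.sum(wq * (M_opt - post_mean) ** 2))
```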
The quality of the approximation M_\sharp \nu_\rho of the measure \nu_\pi can be estimated by the convergence criterion \epsilon_\sigma (variance diagnostic) defined in [29] as:

\epsilon_\sigma = \frac{1}{2} \mathrm{Var}_\rho\!\left[ \log \frac{\nu_\rho}{M^{-1}_\sharp \nu_\pi} \right]    (17)

The numerical cost for computing this criterion is very low, as the integration is performed using the reference density and with the same quadrature rule as the one used in the computation of the K–L divergence. Therefore, an adaptive strategy regarding the order of the map can be used to derive an automatic algorithm that guarantees the quality of the approximation M_\sharp \nu_\rho.

In the case of sequential inference, the Transport Map method exploits the Markov structure of the posterior density (9). Indeed, instead of being fully computed, the map between the reference density \rho and the posterior density at time t_i is obtained by composition of low-order maps (see Fig. 8):

(M_1 \circ \ldots \circ M_i)_\sharp\, \rho(p) = (\mathcal{M}_i)_\sharp\, \rho(p) \approx \pi(p \mid d^{obs}_1, \ldots, d^{obs}_i)    (18)

Therefore, at each assimilation step t_i, only the last map component M_i is computed, between \rho and the density \pi^*_i defined as:

\pi^*_i(p) = \pi_{t_i}\!\left(d^{obs}_i \mid \mathcal{M}_{i-1}(p)\right) \cdot \rho(p)    (19)

which leads to a process with almost constant CPU effort.

Fig. 8 Flowchart of sequential inference using transport maps (L is a normalizing linear map)
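The sequential structure (18)–(19) can be sketched as follows: at each assimilation step only a new low-order map is fitted to the pulled-back intermediate target, and the full posterior map is the composition of all maps built so far. The build_map function below is an identity placeholder standing in for the K–L minimization sketched above, and log-densities are used for numerical robustness.

```python
import numpy as np

def build_map(log_target):
    """Placeholder for the K-L minimization of Eq. (15): should return a monotone map
    M_i such that (M_i)#rho approximates the given unnormalized target density."""
    return lambda p: p                           # identity stand-in for illustration

def compose(maps):
    """M_1 o ... o M_i of Eq. (18): the most recent map acts first on reference points."""
    def M(p):
        for Mi in reversed(maps):
            p = Mi(p)
        return p
    return M

def log_rho(p):
    """Standard normal reference density (up to an additive constant), in log form."""
    return -0.5 * np.sum(np.asarray(p) ** 2, axis=-1)

def assimilate(log_likelihoods):
    """Sequential inference: one low-order map per step, built against the pulled-back
    intermediate target of Eq. (19), log pi_i*(p) = log L_i(M_{1:i-1}(p)) + log rho(p)."""
    maps = []
    for log_L in log_likelihoods:                # log_L(q) = log pi_meas(d_i | q)
        M_prev = compose(maps)
        log_target = lambda p, L=log_L, Mp=M_prev: L(Mp(p)) + log_rho(p)
        maps.append(build_map(log_target))
    return compose(maps)                         # approximates the map to the full posterior
```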
Real-time control

In addition to the mean, maximum a posteriori (MAP), or other estimates on model parameters, another major post-processing in the DDDAS feedback loop is the prediction of some quantities of interest from the model, such as the temperature T_3 at the remote point x_3 in the present context (see Fig. 2). Once the parameters p (\sigma and Pe here) are inferred in a probabilistic way at each assimilation time point t_i (1 \leq i \leq N_t), it is indeed valuable to propagate uncertainties a posteriori in order to know their impact on the output of interest T_3 during the process, and consequently to assess the welding quality. As the PGD model gives an explicit prediction of the temperature field over the whole space-time-parametric domain, the output T_3 can be easily computed for all values of the parameter samples and at each physical time point \tau_j, j \in \{1, \ldots, N_\tau\}. For a given physical time point \tau_j, the pdf \pi(T_{3|\tau_j} | p, t_i) of the value of the temperature T_3, knowing the uncertainties on the parameter set p from data assimilation up to time point t_i, can thus be computed in real-time and used to determine whether the plates are correctly welded and with which confidence. In practice, this computation may be performed for all physical time points \tau_j \geq t_i, and the density \pi(T_{3|\tau_j} | p, t_i) is characterized by a (Gaussian) quadrature rule using the Transport Map method. With this knowledge, a stochastic computation of the predicted temperature evolution can be obtained, and the control of the welding process from the numerical model can be performed.

We detail below the procedure to dynamically determine the value of the control variable u (magnitude of the heat source) in the case where the welding objective is to reach a sufficient welding depth. The quantity of interest is then the maximal value of the temperature T_3 obtained at the final time \tau^*, which is an indicator of the welding quality. When T_{3|\tau^*} \geq 1, the welding depth is supposed to be sufficient. Other welding objectives will be considered in "Results and discussion" section, associated with similar strategies for command synthesis. Due to the stochastic framework which is employed, the quantity of interest is actually a random variable with pdf \pi(T_{3|\tau^*} | p, t_i) evolving at each data assimilation time t_i. The proposed quantity q to monitor is:

q = \mathrm{mean}(T_{3|\tau^*}) - 3 \cdot \mathrm{std}(T_{3|\tau^*}) = Q(T_{3|\tau^*})    (20)

where Q is an operator defined in the stochastic space. This way, setting the objective q_{obj} = 1 ensures that the temperature T_{3|\tau^*} is larger than the melting temperature with a confidence of 99%, while using minimal energy (no overheating).

Using the PGD solution computed in "Reduced order modeling using PGD" section for a unit magnitude of the heat source (u = 1) and zero initial conditions, the predicted (stochastic) maximal value T_{3|\tau^*} for a given constant magnitude u and for fixed pdfs of p reads:

T_{3|\tau^*} \approx u \cdot T^m(x_3, \tau^*, p) = u \cdot \sum_{k=1}^{m} \Lambda_k(x_3)\, \lambda_k(\tau^*) \prod_{i=1}^{d} \alpha_k^i(p_i)    (21)

so that q = u \cdot Q(T^m(x_3, \tau^*, p)) can be obtained in a straightforward manner. This way, setting the source magnitude u to u_0 = q_{obj} / Q(T^m(x_3, \tau^*, p)) would enable one to reach the welding objective. Nevertheless, in practice the pdfs on the parameters p are updated at each assimilation time point t_i, based on additional experimental information, so that the value of u needs to be tuned in time accordingly. In order to do so, the control variable u(t) is made piecewise constant in time, under the form:

u(t) = u_0 \cdot H(t) + \sum_{i \geq 1} \delta u_i \cdot H(t - t_i)    (22)

where H is the Heaviside function, u_0 is the initial command on the source magnitude (defined from the prior pdfs on p), and \delta u_i is the correction to the current command at each assimilation time t_i. Using the linearity of the problem with respect to the loading, a PGD solution associated with the command is made of a series of PGD solutions translated in time; it reads:

u_0 \cdot T^m(x, t, p) + \sum_{n \geq 1} \delta u_n \cdot T^m(x, t - t_n, p)    (23)

Therefore, after each assimilation time point t_i, the new prediction of the quantity of interest T_{3|\tau^*} can be easily obtained from the PGD:

T_{3|\tau^*} \approx u_0 \cdot T^m(x_3, \tau^*, p) + \sum_{n=1}^{i} \delta u_n \cdot T^m(x_3, \tau^* - t_n, p) = T^{pred,[0,i-1]}_{3|\tau^*}(p) + \delta u_i \cdot T^m(x_3, \tau^* - t_i, p)    (24)

where T^{pred,[0,i-1]}_{3|\tau^*}(p) = u_0 \cdot T^m(x_3, \tau^*, p) + \sum_{n=1}^{i-1} \delta u_n \cdot T^m(x_3, \tau^* - t_n, p) is the prediction on T_{3|\tau^*} considering the history of the control variable u(t) until time t_i. Consequently, the correction \delta u_i is defined such that Q(T_{3|\tau^*}) = q_{obj}, using (24) and considering the current pdfs of the parameter set p (i.e. those obtained after the last Bayesian data assimilation at time t_i).
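A minimal sketch of the command synthesis (20)–(24) is given below: the operator Q is evaluated on transported quadrature points, the initial command u_0 follows from the linearity of (21), and each correction δu_i is obtained by a scalar root-find on (24). The quadrature weights, the PGD evaluations, and the root-finding bracket are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def Q(samples, weights):
    """Operator of Eq. (20): mean - 3*std over the current transported quadrature."""
    mean = np.sum(weights * samples)
    std = np.sqrt(np.sum(weights * (samples - mean) ** 2))
    return mean - 3.0 * std

rng = np.random.default_rng(0)
w = np.full(16, 1.0 / 16)                            # placeholder quadrature weights
Tm_final = 0.9 + 0.05 * rng.standard_normal(16)      # T^m(x3, tau*, p_i) for a unit source
q_obj = 1.0

u0 = q_obj / Q(Tm_final, w)                          # initial command from Eq. (21)

def correction(T_pred, Tm_shifted, weights, q_obj):
    """delta_u_i such that Q(T_pred + delta_u * T^m(x3, tau* - t_i, p)) = q_obj (Eq. (24))."""
    f = lambda du: Q(T_pred + du * Tm_shifted, weights) - q_obj
    return brentq(f, -10.0, 10.0)                    # assumed bracket for the scalar root

T_pred = u0 * Tm_final                               # prediction with the command history so far
Tm_shift = 0.8 + 0.05 * rng.standard_normal(16)      # T^m(x3, tau* - t_i, p_i), placeholder
du_1 = correction(T_pred, Tm_shift, w, q_obj)
```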
Results and discussion

We now implement the DDDAS procedure proposed in "Methods" section on the model problem defined in "Introduction" section. We investigate three test cases involving different welding scenarios, in order to illustrate the flexibility of the approach and show its performance. For all scenarios, two temperature data T_1^{obs} and T_2^{obs} are assimilated at each assimilation time point t_i in order to refine the knowledge on parameters \sigma and Pe, and further predict the value of the quantity of interest for control purposes. Without any limitation, we assume that the assimilation time points t_i, i \in \{1, \ldots, N_t\}, coincide with the discretization time points \tau_j.

Case 1: control of the welding depth with constant physical process parameters

In this first test case, the control objective is the one mentioned in "Real-time control" section, that is Q(T_{3|\tau^*}) = 1, with Q the operator defined in (20) and \tau^* = 45. This ensures that the temperature T_3 at the final time \tau^* is larger than the melting temperature with a confidence of 99%, while using the minimal source energy.

We use synthetic data, measurements being simulated using the PGD model with reference parameter values (\sigma_{ref} = 0.4, Pe_{ref} = -60) that are supposed to be constant in time in this section. An independent random normal noise is added with zero mean and standard deviations \sigma_1^{meas} = 0.01925 and \sigma_2^{meas} = 0.01245.

Fig. 9 Measurements simulated with the numerical model, when the control is not activated ((a) output T_1, (b) output T_2) - Case 1

Figure 9 shows the model outputs T_1 and T_2 at each time step as well as the perturbed outputs which provide the measurements used for the considered example, in the case where the control on the system is not activated (i.e. u = 1). When this control is implemented (see "On-the-fly control of the welding process" section), synthetic data are generated by taking into account the applied control law. The goal of this test case is to perform a detailed analysis of the proposed DDDAS approach, in terms of dynamical model updating, uncertainty propagation on the quantity of interest, and on-the-fly command synthesis.

Dynamical updating of model parameters

The prior density on the parameters (\sigma, Pe) is chosen as the product of two independent Gaussian densities with means (\mu_\sigma = 0.4, \mu_{Pe} = -60) and variances (\sigma_\sigma^2 = 0.003, \sigma_{Pe}^2 = 7). The Transport Map strategy detailed in "Real-time data assimilation with Bayesian inference and Transport Map sampling" section and coupled with PGD is then applied for sequential data assimilation, assuming for the moment a constant magnitude u = 1 of the heat source. The solution of the heat equation (1) is used in its PGD form, and derivatives of the approximate solution T^m with respect to the parameters to be inferred are computed in order to derive the transport maps (i.e. the successive maps M_1, ..., M_{N_t}) effectively.

In Table 1 we report the computation time required to compute the transport maps at each assimilation step. We compare computation times when different information on derivative orders is provided to the minimization algorithm. With order 0, the minimization problem (15) is solved using a BFGS algorithm where the gradient is computed numerically. With order 1, the minimization is also performed using a BFGS algorithm but with the gradient given explicitly with respect to the PGD mode derivatives. With order 2, a conjugate gradient algorithm is used with an explicit formulation of both gradient and Hessian. The stopping criterion is a tolerance of 10^{-3} on the variance diagnostic (17), and the complexity of the maps (order of the Hermite polynomials) is increased until this tolerance is fulfilled.

Table 1 Computation costs of the transport maps depending on the derivatives order information given to the minimization algorithm

Derivatives order information                        | 0       | 1      | 2
Number of iterations for step 1                      | 107     | 33     | 10
Computation time for step 1                          | 33.85 s | 6.18 s | 4.60 s
Average number of iterations for steps {2, ..., 45}  | 4.2     | 4.16   | 4.13
Average computation time for steps {2, ..., 45}      | 1.24 s  | 0.92 s | 0.90 s
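The adaptive strategy on the map order mentioned above can be sketched as the loop below, which raises the polynomial order until the variance diagnostic (17) meets the tolerance. The map builder fit_map_of_order is assumed to be supplied by the user (for instance the K–L minimization sketched earlier), returning the fitted map together with the log of the pulled-back target density.

```python
import numpy as np

def variance_diagnostic(log_rho, log_pullback, xq, wq):
    """Quadrature estimate of criterion (17): half the variance, under the reference
    density, of log(rho) - log(pullback of the target through the map)."""
    r = log_rho(xq) - log_pullback(xq)
    mean = np.sum(wq * r)
    return 0.5 * np.sum(wq * (r - mean) ** 2)

def build_map_adaptive(fit_map_of_order, log_target, xq, wq, tol=1e-3, max_order=6):
    """Raise the polynomial order of the map until the variance diagnostic meets 'tol'.
    'fit_map_of_order(log_target, order, xq, wq)' is an assumed user-supplied builder
    returning (map, log_pullback), e.g. based on the K-L minimization of Eq. (15)."""
    log_rho = lambda x: -0.5 * x ** 2        # standard normal reference, up to a constant
    for order in range(1, max_order + 1):
        M, log_pullback = fit_map_of_order(log_target, order, xq, wq)
        if variance_diagnostic(log_rho, log_pullback, xq, wq) < tol:
            return M, order
    return M, max_order
```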
It appears that the first assimilation step is the most expensive, as the complexity of the transformation between the reference and the first posterior density is large (a 4th-order map is required to fulfill the variance diagnostic criterion). The other transformations, computed at the other assimilation time steps, are much less expensive (less than 1 s) as they are built between intermediate posteriors which only slightly differ at each step and can thus be easily represented by a linear (i.e. first-order) transformation. The speed-up for the first iteration is about 5.5 between zeroth-order information and first-order information. Between first-order information and second-order information, the speed-up is about 1.34. For the other time steps, the speed-up is very small as the computed map is very simple. We observe that using gradient and Hessian information to solve the minimization problem related to the computation of the transport maps leads to low computation times.

In Fig. 10, information on the computation cost over the time steps and using both gradient and Hessian information (order 2 information) is provided: Fig. 10a shows the computation time to build each map M_i, i \in \{1, \ldots, N_t\}, while the cost in terms of model evaluations to compute each map is displayed in Fig. 10b. A level 10 Gauss–Hermite quadrature is used. From the second step to the final step, we observe that the computation time slowly increases (Fig. 10a) while the evaluation cost slowly decreases (Fig. 10b). This is due to the fact that the cost of evaluating the composition of maps grows with the number of steps. One way to circumvent this issue would consist in performing a regression on the map composition.

Fig. 10 Cost of the transport map computations using Hessian information for each assimilation step ((a) computation time for each time step, (b) number of iterations of the minimization algorithm for each time step) - Case 1

Figures 11 and 12 represent the marginals at each time step for parameters \sigma and Pe, respectively. The color map gives the probability density function values. Over the iterations along the time steps, we observe that the marginals become thinner, with larger maximal pdf values, giving more confidence in the parameter estimation. We also observe that the parameter \sigma is less sensitive than the parameter Pe regarding the inference process. After 45 assimilation time steps, the algorithm gives a maximum a posteriori estimator [0.394, −60.193] and a mean estimator [0.392, −59.949]. These values are very close to the reference values [0.40, −60] used to simulate the measurements.

Fig. 11 Marginals on σ computed with 20,000 samples and kernel density estimation for each assimilation time step - Case 1
Fig. 12 Marginals on Pe computed with 20,000 samples and kernel density estimation for each assimilation time step - Case 1
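The marginals of Figs. 11 and 12 can be reproduced along the lines of the sketch below: reference Gaussian samples are pushed through the current composed map, and each one-dimensional marginal is smoothed by kernel density estimation. The affine map used here is a toy stand-in for the actual composition of transport maps.

```python
import numpy as np
from scipy.stats import gaussian_kde

# 'current_map' is a toy affine stand-in for the composed transport map M_1 o ... o M_i
current_map = lambda p: np.column_stack([0.4 + 0.05 * p[:, 0], -60.0 + 2.0 * p[:, 1]])

rng = np.random.default_rng(0)
ref_samples = rng.standard_normal((20000, 2))    # samples from the reference Gaussian
post_samples = current_map(ref_samples)          # samples from the current posterior

kde_sigma = gaussian_kde(post_samples[:, 0])     # smoothed marginal on sigma
kde_Pe = gaussian_kde(post_samples[:, 1])        # smoothed marginal on Pe
sig_axis = np.linspace(0.3, 0.5, 200)
marginal_sigma = kde_sigma(sig_axis)             # pdf values of the kind plotted in Fig. 11
```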
Uncertainty propagation on the quantity of interest

Still assuming a constant magnitude u = 1 of the heat source, uncertainty propagation is performed in real-time in order to predict the evolution of the temperature T_3 (in terms of pdf) in the region of interest. Knowing the uncertainties on the parameters, the goal is to predict at each assimilation time point the evolution of the temperature T_3 during the next physical time steps. This is easily done owing to the PGD model, as the temperature field is then globally and explicitly known over the time domain and with respect to the values of \sigma and Pe. The computation is performed after each assimilation time point t_i and for all the physical time points \tau_j \geq t_i.

Fig. 13 Prediction of the output T_3 for all time steps after the considered assimilation step - Case 1

Figure 13a shows the prediction result with uncertainty propagation after the first assimilation time point t_1, for all the physical steps \tau_j, j > 1. To that end, samples are drawn according to the first posterior \pi(\sigma, Pe | T_1^{obs,1}, T_2^{obs,1}) \propto \pi_{t_1}(T_1^{obs,1}, T_2^{obs,1} | \sigma, Pe) \cdot \pi_0(\sigma, Pe). The slice [\tau_0, \tau_1] represents the guess on the temperature T_3 from the prior uncertainty knowledge on the parameters (\sigma, Pe), before the first assimilation step t_1. For \tau_j > \tau_1 the graph represents the prediction of the output T_3 considering the current knowledge on the parameter uncertainty (i.e. with the assimilation of the first set of measurements T_1^{obs,1} and T_2^{obs,1} alone). The discontinuous line represents the evolution of the temperature T_3 with the true value of the parameters (\sigma = 0.4, Pe = -60).

The other graphs (Fig. 13b–d) show the refinement of the prediction as the knowledge of the parameter uncertainty improves. The current measurement assimilation step is indicated by the vertical cursor. On the right of the cursor \tau = t_i, the graphs represent the prediction of the temperature T_3 from the model after the assimilation of the measurements T_1^{obs,1:i} and T_2^{obs,1:i}. On the left of the cursor, each slice [t_{j-1}, t_j] (j \leq i) represents the prediction made at the assimilation time t_j (the predictions of the temperature T_3 for physical time steps anterior to the assimilation time step t_j are not updated).

Figure 14 shows the convergence of the prediction on the quantity of interest T_{3|\tau^*} at the steady-state regime (\tau^* = 45) with respect to the assimilation steps. We observe that, as foreseen, more confidence is given to this output along the real-time data assimilation process.

Fig. 14 Prediction of temperature T_3 at physical time step τ* = 45 after each assimilation time step t_i, i ∈ {1, ..., 45} - Case 1

On-the-fly control of the welding process

The previously described assimilation procedure, performed in situ and in real-time, can be used in the context of welding control. If the stochastic prediction on the quantity of interest T_{3|\tau^*} is not satisfactory with regard to the criterion Q(T_{3|\tau^*}) = 1, a change in the command u(t) can be implemented as described in "Real-time control" section. This implementation is performed here.

In Fig. 15, we show the time evolution of the pdf associated with the prediction on T_{3|t}, with or without control. In the case without control, the sharp time evolution is due to changes in the pdfs of \sigma and Pe along the data assimilation steps. We observe that the quantity Q(T_{3|\tau^*}) is much larger than 1, indicating overheating and wasted energy. On the contrary, implementing the control by varying the magnitude u of the heat source enables the criterion Q(T_{3|\tau^*}) = 1 to be reached perfectly, and it also speeds up the convergence of the pdf on T_{3|t} to the target.

Fig. 15 Evolution in time of T_{3|t} without control (left) and with control (right) - Case 1
In Fig. 16, we indicate the evolution of the command variable along the welding process (in terms of corrections \delta u_i at each assimilation time point t_i). We again observe that the feedback loop is effective and quickly (i.e. much before the final time \tau^*) leads to an asymptotic regime in which the command remains almost constant (i.e. \delta u_i \approx 0). We also show in Fig. 16 the map orders which are used along the data assimilation process when the control is performed. This indicates that an order 1 map is still usually sufficient, but that a few more maps with higher order are required compared to the case with no control (where only the first map was of order 4).

Fig. 16 Evolution of the command variable in terms of incremental corrections (left), and map order required at each assimilation time step in the case of system control (right) - Case 1

Eventually, we display in Fig. 17 the evolution in time of the overall CPU cost required to implement the feedback loop, which includes both data assimilation and command synthesis steps. As foreseen, this cost is higher during the first assimilation times, when the pdfs on parameters \sigma and Pe significantly evolve (i.e. when much is learnt from measurement data). Once the asymptotic regime is reached in the model updating procedure, the CPU cost is low (< 1 s), which is compatible with real-time constraints for the considered welding application.

Fig. 17 Computation time including the computation of the transport maps and the command synthesis - Case 1

Case 2: control of the welding depth with evolving physical process parameters

This second test case has many similarities with the previous one, the control objective still being Q(T_{3|\tau^*}) = 1. Nevertheless, we now take \tau^* = 100 and we assume that the welding process experiences an unexpected change in the Peclet number value during service (e.g. due to a change in the source velocity or in the material thermal properties), at t = 40. Consequently, the reference parameter values which are now used to generate synthetic (noisy) data are:

\sigma_{ref} = 0.4; \qquad Pe_{ref} = \begin{cases} -60 & \text{for } t < 40 \\ -55 & \text{for } t \geq 40 \end{cases}    (25)

Starting from the same prior distribution of parameters as in test case 1, sequential data assimilation using Transport Map sampling and PGD is again performed. The minimization problem associated with the computation of the maps is solved with order 1 information on the derivatives, that is, a BFGS algorithm with explicit computation of the gradient from the PGD representation. The complexity of the maps (that is, the degree of the employed Hermite polynomials) is increased until reaching a tolerance of 10^{-3} on the variance diagnostic.

We represent in Fig. 18 the evolution in time of the marginals on both parameters \sigma and Pe. Again, we observe that they become thinner, with larger maximal pdf values, when the number of data assimilation times increases.

Fig. 18 Marginals on σ (left) and Pe (right) at each assimilation time step - Case 2
We also observe that, after the change of the reference value for Pe, the data assimilation algorithm is able to detect this change and infers a mean value that slowly tends to the new reference value (even though, right after t = 40, the reference parameter value Pe_{ref} = -55 appears in the tail of the pdf). Meanwhile, during this transient regime, it seems that no additional knowledge is brought for the inference of \sigma, as the associated marginals stagnate. We also show in Fig. 19 the map orders which are used along the data assimilation process. This particularly indicates that an order 1 map remains sufficient to follow the sudden change in the reference value for Pe.

Fig. 19 Map order required at each assimilation time step - Case 2

From the dynamical updating of the model parameters and with respect to the objective, the control of the process with on-the-fly command synthesis is implemented. We show in Fig. 20 the time evolution of the pdf of T_{3|t} in the case of a controlled welding process. We observe that the control objective is reached even though the pdfs of the model parameters have not yet converged around the reference parameter values. This illustrates the interest of the control in a stochastic framework, in which the uncertainty on the inferred parameters is taken into account in the synthesis of the command in order to make safe decisions. We also plot in Fig. 21 the evolution of the command variable u(t) along the process as well as its incremental corrections \delta u_i at each time point t_i; we clearly observe the change in the command when the physical value of the Peclet number drops at t = 40.

Fig. 20 Evolution in time of T_{3|t} when the control is implemented - Case 2
Fig. 21 Evolution of the command variable (left) and its incremental corrections (right) along the controlled welding process - Case 2

Case 3: control of the welding temperature evolution with prescribed time path

In this last test case, the control objective is to make the temperature T_{3|t} follow a predefined time path, which comes down to imposing the welding history along the process. We set the final time \tau^* = 100 and we assume that the reference parameter values are \sigma_{ref} = 0.4 and Pe_{ref} = -60 (constant in time). Synthetic measurement data are simulated from these values, with additive measurement noise.

Fig. 22 Target (dashed red line) and free system (solid black line) evolution curves for T_{3|t} - Case 3

The prescribed evolution curve for T_{3|t} is shown in Fig. 22 (dashed red line). It is a ramp increase up to t = 20, followed by a plateau. In our stochastic framework, the command law is designed so that the predicted mean value of T_{3|t} follows this target evolution. In practice, at each assimilation time point t_i, and from the inferred pdfs on the model parameters at this time, a command correction \delta u_i is computed so that the prediction on mean(T_{3|t_{i+1}}) coincides with the target value at the next assimilation time point t_{i+1}. The evolution of T_{3|t} predicted from the model with reference parameter values, and without any control, is also shown in Fig. 22 (solid black line).
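The Case 3 correction rule can be sketched as follows: since the mean is linear in the command increment, the δu_i that makes the predicted mean of T_3 match the target at the next assimilation time has a closed form. All inputs (quadrature weights, PGD evaluations, target value) are illustrative placeholders.

```python
import numpy as np

def tracking_correction(target_next, T_pred_next, Tm_unit_next, weights):
    """delta_u_i such that mean(T_pred + delta_u * T^m(x3, t_{i+1} - t_i, p)) equals the
    target value at the next assimilation time (the mean is linear in the increment)."""
    mean_pred = np.sum(weights * T_pred_next)
    mean_unit = np.sum(weights * Tm_unit_next)
    return (target_next - mean_pred) / mean_unit

rng = np.random.default_rng(0)
w = np.full(16, 1.0 / 16)                                 # placeholder quadrature weights
du = tracking_correction(target_next=0.55,                # target value on the ramp
                         T_pred_next=0.50 + 0.02 * rng.standard_normal(16),
                         Tm_unit_next=0.80 + 0.02 * rng.standard_normal(16),
                         weights=w)
```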
Starting from the same prior distribution of parameters as in the previous test cases, sequential data assimilation using Transport Map sampling and PGD is performed. The minimization problem associated with the computation of the maps is solved with order 1 information on the derivatives, and the complexity of the maps is increased until reaching a tolerance of 10^{-3} on the variance diagnostic. We represent in Fig. 23 the evolution in time of the marginals on both parameters \sigma and Pe. As expected, we observe that they become thinner, with larger maximal pdf values, tending to the reference parameter values along the data assimilation process. The map orders which are used along this process are shown in Fig. 24; they again indicate that an order 1 map is sufficient, except for the first assimilation steps where the complexity of the transformation between the reference density and the first posterior densities is higher.

Fig. 23 Marginals on σ (left) and Pe (right) at each assimilation time step - Case 3
Fig. 24 Map order required at each assimilation time step - Case 3

From the dynamical updating of the model parameters and with respect to the objective, the control of the process with on-the-fly command synthesis is implemented. We show in Fig. 25 the resulting time evolution of the pdf of T_{3|t}. We observe that mean(T_{3|t}) matches the target evolution almost perfectly. We also plot in Fig. 26 the evolution of the command variable u(t) along the process as well as its incremental corrections \delta u_i at each time point t_i. We observe that during the transient phase (ramp evolution of the target), fast modifications of the command are required, while the command increments tend to zero once the steady-state target regime is reached. In any case, this test case shows that the proposed DDDAS strategy is capable of generating complex and effective command laws.

Fig. 25 Evolution in time of T_{3|t} when the control is implemented - Case 3
Fig. 26 Evolution of the command variable (left) and its incremental corrections (right) along the controlled welding process - Case 3

Conclusions

In this work we presented a procedure to build a numerical feedback loop for the control of a fusion welding process from modeling and simulation, while taking uncertainties into account. In order to perform fast computations and permit real-time exchanges between the physical system and its virtual twin, PGD model reduction and Transport Map sampling were used in several numerical tasks along the feedback loop. In particular, the explicit dependency on the model parameters inside the PGD model, as well as the suitable sampling and integration framework offered by transport maps, made it possible to effectively perform data assimilation, uncertainty quantification, and predictive control. The implementation of the feedback loop for various control scenarios illustrated the interest and performance of the proposed approach. This approach thus appears to be a relevant tool for real-time feedback control in the DDDAS framework. Future works should focus on the extension of the approach to more complex (e.g. nonlinear) models, associated with modeling errors that may be a priori considered in the Bayesian framework but also a posteriori corrected from data-based learning and enrichment. Dealing with a larger number of model parameters and control variables in the DDDAS context is also a research topic of interest that will be investigated in forthcoming works.
Authors' contributions
All authors discussed the content of the article and were involved in the definition of techniques and algorithms. All authors read and approved the final manuscript.

Funding
No specific funding has to be declared for this work.

Availability of data and material
The datasets used during the current study are available from the corresponding author on reasonable request. The interested reader is thus invited to contact the corresponding author.

Competing interests
The authors declare that they have no competing interests.

Author details
1 Université Paris-Saclay, ENS Paris-Saclay, LMT, 4 Avenue des Sciences, 91190 Gif-sur-Yvette, France. 2 Institut Universitaire de France (IUF), 1 rue Descartes, 75005 Paris, France.

Received: 11 December 2019. Accepted: 22 January 2021.

References
1. Arulampalam MS, Maskell S, Gordon N, Clapp T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing. 2002;50(2):174–88.
2. Beck JL. Bayesian system identification based on probability logic. Structural Control and Health Monitoring. 2010;17(7):825–47.
3. Berger J, Orlande HRB, Mendes N. Proper Generalized Decomposition model reduction in the Bayesian framework for solving inverse heat transfer problems. Inverse Problems in Science and Engineering. 2017;25(2):260–78.
4. Bogachev VI, Kolesnikov AV, Medvedev KV. Triangular transformations of measures. Sbornik: Mathematics. 2005;196:309.
5. Bouclier R, Louf F, Chamoin L. Real-time validation of mechanical models coupling PGD and constitutive relation error. Computational Mechanics. 2013;52(4):861–83.
6. Calvetti D, Dunlop M, Somersalo E, Stuart A. Iterative updating of model error for Bayesian inversion. Inverse Problems. 2018;34(2).
7. Chamoin L, Allier PE, Marchand B. Synergies between the Constitutive Relation Error concept and PGD model reduction for simplified V&V procedures. Advanced Modeling and Simulation in Engineering Sciences. 2016;3:18.
8. Chamoin L, Pled F, Allier PE, Ladevèze P. A posteriori error estimation and adaptive strategy for PGD model reduction applied to parametrized linear parabolic problems. Computer Methods in Applied Mechanics and Engineering. 2017;327:118–46.
9. Chinesta F, Ladevèze P, Cueto E. A short review on model order reduction based on Proper Generalized Decomposition. Archives of Computational Methods in Engineering. 2011;18(4):395–404.
10. Chinesta F, Keunings R, Leygue A. The Proper Generalized Decomposition for Advanced Numerical Simulations: A Primer. SpringerBriefs in Applied Sciences and Technology; 2014.
11. Chinesta F, Cueto E, Abisset-Chavanne E, Duval J-L, Khaldi FE. Virtual, digital and hybrid twins: a new paradigm in data-based engineering and engineered data. Archives of Computational Methods in Engineering. 2020;27:105–34.
12. Darema F. Dynamic Data Driven Applications Systems: a new paradigm for application simulations and measurements. Computational Science - ICCS. 2004;662–9.
13. El Moselhy TA, Marzouk Y. Bayesian inference with optimal maps. Journal of Computational Physics. 2012;231(23):7815–50.
14. Gamerman D, Lopes HF. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference. CRC Press; 2006.
15. Gonzalez D, Masson F, Poulhaon F, Leygue A, Cueto E, Chinesta F. Proper Generalized Decomposition based dynamic data driven inverse identification. Mathematics and Computers in Simulation. 2012;82(9):1677–95.
16. Grepl M. Reduced-Basis Approximation and A Posteriori Error Estimation. PhD Thesis; 2005.
17. Kaipio J, Somersalo E. Statistical and Computational Inverse Problems. New York: Springer-Verlag; 2004.
18. Ladevèze P. On reduced models in nonlinear solid mechanics. European Journal of Mechanics - A/Solids. 2016;60:227–
19. Manzoni A, Pagani S, Lassila T. Accurate solution of Bayesian inverse uncertainty quantification problems combining reduced basis methods and reduction error models. SIAM/ASA Journal on Uncertainty Quantification. 2016;4(1):380–
20. Marchand B, Chamoin L, Rey C. Real-time updating of structural mechanics models using Kalman filtering, modified Constitutive Relation Error and Proper Generalized Decomposition. International Journal for Numerical Methods in Engineering. 2016;107(9):786–810.
21. Marzouk Y, Moselhy T, Parno M, Spantini A. Sampling via measure transport: an introduction. Handbook of Uncertainty Quantification. 2016;1–41.
22. Matthies HG, Zander E, Rosic BV, Litvinenko A, Pajonk O. Inverse problems in a Bayesian setting. Computational Methods for Solids and Fluids. 2016;41:245–86.
23. Parno MD, Marzouk YM. Transport map accelerated Markov Chain Monte-Carlo. SIAM/ASA Journal on Uncertainty Quantification. 2018;6(2):645–82.
24. Peherstorfer B, Willcox K. Dynamic data-driven reduced-order models. Computer Methods in Applied Mechanics and Engineering. 2015;291:21–41.
25. Robert CP, Casella G. Monte Carlo Statistical Methods. New York: Springer Texts in Statistics; 2004.
26. Rubio PB, Louf F, Chamoin L. Fast model updating coupling Bayesian inference and PGD model reduction. Computational Mechanics. 2018;62(6):1485–509.
27. Rubio PB, Louf F, Chamoin L. Transport Map sampling with PGD model reduction for fast dynamical Bayesian data assimilation. International Journal for Numerical Methods in Engineering. 2019;120(4):447–72.
28. Rubio PB, Chamoin L, Louf F. Real-time Bayesian data assimilation with data selection, correction of model bias, and on-the-fly uncertainty propagation. Comptes Rendus Mécanique. 2019;347:762–79.
29. Spantini A, Bigoni D, Marzouk Y. Inference via low-dimensional couplings. Journal of Machine Learning Research. 2018;19:1–71.
30. Stuart AM. Inverse problems: a Bayesian perspective. Acta Numerica. 2010;19:451–559.
31. Tarantola A. Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics; 2005.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
