Evidence-Theory-Based Robust Optimization and Its Application in Micro-Electromechanical Systems

applied sciences | Article

Zhiliang Huang 1,2, Jiaqi Xu 1, Tongguang Yang 1, Fangyi Li 3,* and Shuguang Deng 1

1 School of Mechanical and Electrical Engineering, Hunan City University, Yiyang 413002, China; 13787181710@163.com (Z.H.); 18552122421@163.com (J.X.); yangtongguang1@163.com (T.Y.); shuguangdeng@163.com (S.D.)
2 College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
3 School of Vehicle and Mechanical Engineering, Changsha University of Science and Technology, Changsha 410076, China
* Correspondence: fangyi.li@csust.edu.cn

Received: 21 February 2019; Accepted: 1 April 2019; Published: 7 April 2019

Featured Application: This paper develops an evidence-theory-based robust optimization (EBRO) method, which aims to provide a potential computational tool for engineering problems with epistemic uncertainty. This method is especially suitable for the robust design of micro-electromechanical systems (MEMS). On one hand, unlike traditional engineering structural problems, the design of MEMS usually involves micro structures, novel materials, and extreme operating conditions, where multi-source uncertainties inevitably exist. Evidence theory is well suited to deal with such uncertainties. On the other hand, high performance and insensitivity to uncertainties are the fundamental requirements for MEMS design. Robust optimization can improve performance by minimizing the effects of uncertainties without eliminating their causes.

Abstract: The conventional engineering robust optimization approach considering uncertainties is generally based on a probabilistic model. However, a probabilistic model faces obstacles when handling problems with epistemic uncertainty.
This paper presents an evidence-theory-based robust optimization (EBRO) model and a corresponding algorithm, which provide a potential computational tool for engineering problems with multi-source uncertainty. An EBRO model with the twin objectives of performance and robustness is formulated by introducing the performance threshold. After providing multiple target belief measures (Bel), the original model is transformed into a series of sub-problems, which are solved by the proposed iterative strategy, driving the robustness analysis and the deterministic optimization alternately. The proposed method is applied to three problems of micro-electromechanical systems (MEMS), including a micro-force sensor, an image sensor, and a capacitive accelerometer. In the applications, both finite element simulation models and surrogate models are used. Numerical results show that the proposed method has good engineering practicality due to its comprehensive performance in terms of efficiency, accuracy, and convergence.

Keywords: epistemic uncertainty; evidence theory; robust optimization; sensor design

Appl. Sci. 2019, 9, 1457; doi:10.3390/app9071457; www.mdpi.com/journal/applsci

1. Introduction

In practical engineering problems, various uncertainties exist in terms of the operating environment, manufacturing process, material properties, etc. Under the combined action of these uncertainties, the performance of engineering structures or products may fluctuate greatly. Robust optimization [1,2] is a methodology whose fundamental principle is to improve the performance of a product by minimizing the effects of uncertainties without eliminating their causes. The concept of robust optimization has long been embedded in engineering design. In recent years, thanks to the rapid development of computer technology, it has been widely applied to many engineering fields, such as electronics [3], vehicles [4], aerospace [5], and civil engineering [6].
The core of robust optimization lies in understanding, measuring, and controlling the uncertainty in the product design process. In mechanical engineering disciplines, uncertainty is usually differentiated into objective and subjective types from an epistemological perspective [7]. The former, also called aleatory uncertainty, comes from an inherently irreducible physical nature, e.g., material properties (elasticity modulus, thermal conductivity, expansion coefficient) and operating conditions (temperature, humidity, wind load). A probabilistic model [8–10] is an appropriate way to describe such uncertain parameters, provided that sufficient samples are obtained for the construction of an accurate random distribution. Conventional robust optimization methods [11–13] are based on probabilistic models, in which the statistical moments (e.g., mean, variance) are employed to formulate the robustness function for performance assessment under uncertainties. On the other hand, designers may lack knowledge about the issues of concern in practice, which leads to subjective uncertainty, also known as epistemic uncertainty. This uncertainty is caused by cognitive limitations or a lack of information, and could theoretically be reduced as effort increases. At present, the methods of dealing with epistemic uncertainty mainly include possibility theory [14,15], fuzzy sets [16,17], convex models [18,19], and evidence theory [20,21]. Among them, evidence theory is an extension of probability theory, which can properly model information that is incomplete, uncertain, unreliable, and even conflicting [22]. When evidence theory treats a general structural problem, all possible values of an uncertain variable are assigned to several sub-intervals, and a corresponding probability is assigned to each sub-interval according to existing statistics and expert experience.
After synthesizing the probabilities of all the sub-intervals, the belief measure and plausibility measure are obtained, which constitute the confidence interval of the proposition that the structural performance satisfies a given requirement. Compared with other uncertainty analysis theories, evidence theory may be more general. For example, when the sub-interval of each uncertain variable is infinitely small, evidence theory is equivalent to probability theory; when the sub-interval is unique, it is equivalent to convex model theory; when no conflict occurs among the information from different sources, it is equivalent to possibility theory [23]. In the past decade, some progress has been made in evidence-theory-based robust optimization (EBRO). For instance, Vasile [24] employed evidence theory to model the uncertainties of spacecraft subsystems and trajectory parameters in the robust design of space trajectories and presented a hybrid co-evolutionary algorithm to obtain the optimal results. For the preliminary design of a space mission, Croisard et al. [25] formulated the robust optimization model using evidence theory and proposed three practical solving technologies, whose efficiency and accuracy were discussed through the application to a space mission. Zuiani et al. [26] presented a multi-objective robust optimization approach for the deflection action design of near-Earth objects, in which the uncertainties involved in the orbit and system were quantified by evidence theory. A deflection design application of a spacecraft swarm with Apophis verified the effectiveness of this approach. Hou et al. [27] introduced evidence-theory-based robust optimization (EBRO) into multidisciplinary aerospace design, and an artificial neural network strategy was used to establish surrogate models to balance efficiency and accuracy during the optimization.
This method was applied to two preliminary designs: a micro entry probe and an orbital debris removal system. The above studies employed evidence theory to measure the epistemic uncertainties involved in engineering design, and expanded robust optimization into the design of complex systems. However, the studies of EBRO are still at a preliminary stage. The existing research has mainly aimed at the preliminary design of engineering systems, and most of the treated problems have been simplified and idealized to a great extent; in other words, the performance functions are based on surrogate models or even empirical formulas. So far, EBRO applications in actual product design, where a time-consuming simulation model must be created for the performance function, are actually quite few. After all, computational cost is a major technical bottleneck limiting EBRO applications. First, evidence theory describes uncertainty through a series of discontinuous sets, rather than a continuous function similar to a probability density function. This usually leads to a combinatorial explosion in a multidimensional robustness analysis, and finally results in a heavy computational burden. Secondly, EBRO is essentially a nested optimization problem, with performance optimization in the outer layer and robustness analysis in the inner layer. A direct solving strategy entails a large number of robustness evaluations using evidence theory. As a result, the issue of EBRO efficiency is further exacerbated. Therefore, there is great engineering significance in developing an efficient EBRO method for actual product design problems. In this paper, a general EBRO model and an efficient algorithm are proposed, which provide a computational tool for robust product optimization with epistemic uncertainty. The proposed method is applied to three design problems of MEMS, in which its engineering practicability is discussed. The remainder of this paper is organized as follows.
Section 2 briefly introduces the basic concepts and principles of robustness analysis using evidence theory. The EBRO model is formulated in Section 3. The corresponding algorithm is proposed in Section 4. In Section 5, this method is validated through the three applications of MEMS: a micro-force sensor, a low-noise image sensor, and a capacitive accelerometer. Conclusions are drawn in Section 6.

2. Robustness Analysis Using Evidence Theory

Consider an uncertainty problem given as f(Z), where Z represents the n_Z-dimensional uncertain vector and f is the performance function, which is uncertain due to Z. Conventional methods [16–18] of robust optimization employ probability theory to deal with the uncertainties. The typical strategy is to consider the uncertain parameters of a problem as random variables, whereby the performance value is also a random variable; the mean and variance are used to formulate the robustness model. In practical engineering, it is sometimes hard to construct accurate probability models due to limited information. Thus, evidence theory [20,21] is adopted here to model the robustness. In evidence theory, the frame of discernment (FD) needs to be established first, which contains several independent basic propositions. It is similar to the sample space of a random parameter in probability theory. Here, $2^{\Theta}$ denotes the power set of the FD (namely $\Theta$), and $2^{\Theta}$ consists of all possible propositions contained in $\Theta$. For example, for a FD with the two basic propositions $\Theta_1$ and $\Theta_2$, the corresponding power set is $2^{\Theta} = \{\varnothing, \{\Theta_1\}, \{\Theta_2\}, \{\Theta_1, \Theta_2\}\}$. Evidence theory adopts a basic probability assignment (BPA) to measure the confidence level of each proposition. For a certain proposition A, the BPA is a mapping function that satisfies the following axioms:

$$0 \le m(A) \le 1,\ \forall A \in 2^{\Theta}; \qquad m(\varnothing) = 0; \qquad \sum_{A \in 2^{\Theta}} m(A) = 1 \qquad (1)$$

where, if m(A) > 0, A is called a focal element of m. The BPA m(A) denotes the extent to which the evidence supports Proposition A.
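As a small illustration (not from the paper), the axioms in Equation (1) can be checked numerically for a two-proposition frame of discernment; the proposition names and BPA values below are hypothetical.

```python
from itertools import chain, combinations

def power_set(theta):
    """All subsets of the frame of discernment, as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(theta, r) for r in range(len(theta) + 1))]

# Hypothetical FD with two basic propositions and a BPA over its power set.
theta = {"Theta1", "Theta2"}
m = {frozenset(): 0.0,
     frozenset({"Theta1"}): 0.3,
     frozenset({"Theta2"}): 0.5,
     frozenset({"Theta1", "Theta2"}): 0.2}

# Axioms of Equation (1): 0 <= m(A) <= 1, m(empty) = 0, BPAs sum to 1.
assert all(0.0 <= m[a] <= 1.0 for a in power_set(theta))
assert m[frozenset()] == 0.0
assert abs(sum(m.values()) - 1.0) < 1e-12

# Focal elements are the propositions with m(A) > 0.
focal = [a for a in power_set(theta) if m[a] > 0]
print(len(focal))  # 3
```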
When the information comes from multiple sources, m(A) can be obtained by evidence combination rules [28]. Evidence theory uses an interval consisting of the belief measure (Bel) and the plausibility measure (Pl) to describe the trueness of a proposition. The two measures are defined as:

$$Bel(A) = \sum_{C \subseteq A} m(C), \qquad Pl(A) = \sum_{C \cap A \ne \varnothing} m(C) \qquad (2)$$

As can be seen from Equation (2), Bel(A) is the sum of all the BPAs that totally support Proposition A, while Pl(A) is the sum of the BPAs that support Proposition A totally or partially.

A two-dimensional design problem is taken as an example to illustrate the process of robustness analysis using evidence theory. The performance function contains two uncertain parameters (a, b), which are both considered as evidence variables. The FDs of a and b are the two closed intervals $A = [A^L, A^R]$ and $B = [B^L, B^R]$. A contains $n_A$ focal elements, and the subinterval $A_i = [A_i^L, A_i^R]$ represents the i-th focal element of A. The definitions of $n_B$ and $B_j$ are similar. Thus, a Cartesian product can be constructed:

$$D = A \times B = \{ D_k = (A_i, B_j),\ A_i \in A,\ B_j \in B \} \qquad (3)$$

where $D_k$ is the k-th focal element of D, and the total number of focal elements is $n_A \times n_B$. For ease of presentation, assuming that a and b are independent, a two-dimensional joint BPA is obtained:

$$m(D_k) = m(A_i) \cdot m(B_j) \qquad (4)$$

More general problems with parametric correlation can be handled using the mathematical tool of copula functions [29]. As analyzed above, the performance function f is uncertain. The performance threshold v is given to evaluate its robustness. Given that the design objective is to minimize the value of f, the higher the trueness of Proposition $f \le v$, the higher the robustness of f relative to v.
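For instance, the joint structure of Equations (3) and (4) can be built directly from the marginal focal elements; the intervals and BPA values below are hypothetical, not taken from the paper.

```python
from itertools import product

# Hypothetical marginal focal elements: (interval, BPA) pairs for a and b.
A = [((9.0, 10.0), 0.3), ((10.0, 11.0), 0.7)]
B = [((1.8, 2.0), 0.5), ((2.0, 2.2), 0.5)]

# Cartesian product of Equation (3) with the joint BPA of Equation (4),
# assuming a and b are independent evidence variables.
D = [((ia, ib), ma * mb) for (ia, ma), (ib, mb) in product(A, B)]

assert len(D) == len(A) * len(B)                 # n_A x n_B focal elements
assert abs(sum(m for _, m in D) - 1.0) < 1e-9    # joint BPA still sums to 1
for (ia, ib), m in D:
    print(ia, ib, m)
```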
Proposition $f \le v$ is defined over the feasible domain:

$$F = \{ f : f(a, b) \le v \},\ (a, b) \in D_k,\ D_k = (A_i, B_j) \in D \qquad (5)$$

Substituting A and C with F and $D_k$ in Equation (2), the belief measure and plausibility measure of Proposition $f \le v$ are expressed as follows:

$$Bel(F) = \sum_{D_k \subseteq F} m(D_k), \qquad Pl(F) = \sum_{D_k \cap F \ne \varnothing} m(D_k) \qquad (6)$$

In evidence theory, the probabilistic interval composed of the two measures describes the trueness of $f \le v$, written as $R(F) \in [Bel(F), Pl(F)]$. The accumulation of Bel and Pl requires determining the positional relationship between each focal element and the F domain. As a result, the performance-function extrema over each focal element must be searched. For this example, $n_A \times n_B$ pairs of extremum problems are established:

$$f_k^{\min} = \min_{(a,b) \in D_k} f(a, b), \qquad f_k^{\max} = \max_{(a,b) \in D_k} f(a, b), \qquad k = 1, 2, \ldots, n_A \times n_B \qquad (7)$$

where $f_k^{\min}$ and $f_k^{\max}$ are the minimum and maximum over the k-th focal element. The vertex method [30] can efficiently solve the problems in Equation (7) one by one. If $f_k^{\max} \le v$, then $D_k \subseteq F$, and $m(D_k)$ is simultaneously accounted into Bel(F) and Pl(F); if $f_k^{\min} \le v < f_k^{\max}$, then $D_k \cap F \ne \varnothing$, and $m(D_k)$ is only accounted into Pl(F). After calculating the extrema for all focal elements, Bel and Pl can be totaled.

3. Formulation of the EBRO Model

As mentioned above, evidence theory uses a pair of probabilistic values [Bel, Pl] to measure the robustness of the performance value relative to the given threshold. However, engineers generally tend to adopt conservative strategies to deal with uncertainties in the product design process. Thus, the robustness objective of EBRO can be established as $\max Bel(f \le v)$. Meanwhile, in order to improve product performance, the performance threshold is minimized. The EBRO model is formulated as a double-objective optimization problem:

$$\min v, \quad \max Bel\left( f(d, \bar{X}, P) \le v \right) \quad \text{s.t.} \quad d^l \le d \le d^u, \quad \bar{X}^l \le \bar{X} \le \bar{X}^u \qquad (8)$$
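The accumulation described by Equations (5)–(7) can be sketched as follows; the performance function and the joint BPAs are hypothetical, and the vertex method is applicable here because the chosen f is monotonic over each focal element.

```python
from itertools import product

def f(a, b):
    # Hypothetical monotonic performance function.
    return a / b

# Joint focal elements ((a-interval, b-interval), joint BPA), as in Eq. (4).
D = [(((9.0, 10.0), (1.8, 2.0)), 0.15), (((9.0, 10.0), (2.0, 2.2)), 0.15),
     (((10.0, 11.0), (1.8, 2.0)), 0.35), (((10.0, 11.0), (2.0, 2.2)), 0.35)]

def bel_pl(v):
    """Accumulate Bel and Pl of the proposition f <= v (Eqs. (5)-(7))."""
    bel = pl = 0.0
    for ((aL, aR), (bL, bR)), m in D:
        # Vertex method: extrema of a monotonic f lie at the interval vertices.
        vals = [f(a, b) for a, b in product((aL, aR), (bL, bR))]
        fmin, fmax = min(vals), max(vals)
        if fmax <= v:        # focal element lies entirely inside F: Bel and Pl
            bel += m
            pl += m
        elif fmin <= v:      # focal element crosses the boundary: Pl only
            pl += m
    return bel, pl

bel, pl = bel_pl(5.0)
print(bel, pl)
```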
where d is the $n_d$-dimensional deterministic design vector; X is the $n_X$-dimensional uncertain design vector; P is the $n_P$-dimensional uncertain parameter vector; the superscripts l and u bound the value range of a design variable; and $\bar{X}$ represents the nominal value of X. Note that it is usually difficult to give the threshold v a fixed value, so it should be treated as a deterministic design variable. The proposed model is an improvement on the existing model [24] because it can handle more types of uncertainty, such as perturbations of design variables resulting from production tolerances, and variations of parameters due to changing operating conditions. As for the solving process, EBRO involves nested optimization, with the double-objective optimization in the outer layer and the robustness assessment in the inner layer. Due to the discreteness introduced by the evidence variables, each robustness analysis needs to calculate the performance extrema over all focal elements. Essentially, each extremum evaluation is an optimization problem involving a performance function based on time-consuming simulation models, and therefore the robustness analysis bears a high computational cost. More seriously, the double-objective optimization in the outer layer requires a large number of robustness evaluations in the inner layer. Eventually, solving the EBRO problem directly becomes extremely inefficient.

4. The Proposed Algorithm

To improve efficiency, this paper proposes a decoupling algorithm for EBRO, whose basic idea is to convert the nested optimization into a sequential iteration process. Firstly, the original problem is decomposed into a series of sub-problems. Secondly, the uncertainty analysis and the deterministic optimization are driven alternately until convergence. The framework of the proposed method is detailed below.

4.1. Decomposition into Sub-Problems

Robust optimization is essentially a multi-objective problem that trades product performance against its robustness. Therefore, robust optimization generally does not have a unique solution, but a set of solutions called the Pareto optimal set [2]: a family of solutions that is optimal in the sense that no improvement can be achieved in any objective without degradation in another. The Pareto-optimal solutions can be obtained by solving appropriately formulated single-objective optimization problems on a one-at-a-time basis. At present, a number of multi-objective genetic algorithms have been suggested, primarily because of their ability to find multiple Pareto-optimal solutions in parallel. From the viewpoint of mathematical optimization, genetic algorithms are a suitable method for solving a general multi-objective optimization. However, the efficiency of a genetic algorithm is usually much lower than that of gradient-based optimization algorithms, which has become the main technical bottleneck limiting its practical application [31,32]. Although a priori information is not required when using genetic algorithms, most designers have some engineering experience in practice. Therefore, for the specific problem shown in Equation (8), the robustness objective $Bel(f(d, \bar{X}, P) \le v)$ is often handled as a reliability constraint [2,33]. In this paper, the EBRO problem is transformed into a series of sub-problems under the given target belief measures:

$$\min v \quad \text{s.t.} \quad Bel\left( f(d, \bar{X}, P) \le v \right) \ge Bel_j^T, \quad j = 1, 2, \ldots, n_T; \quad d^l \le d \le d^u, \quad \bar{X}^l \le \bar{X} \le \bar{X}^u \qquad (9)$$

where $Bel_j^T$ represents the j-th target belief measure, and $Bel(f \le v) \ge Bel_j^T$ is the reliability constraint derived from the robustness objective. In many cases, the designer may focus on the performance values under some given conditions, based on experience or quality standards.
This condition is usually a certain probability of $f \le v$, namely $Bel_j^T$.

4.2. Iteration Framework

Theoretically, the EBRO problems in Equation (9) can be solved by existing methods [34]. However, the resulting computational burden would be extremely heavy. To address this issue, a novel iteration framework is developed, in which the uncertainty analysis and design optimization alternate until convergence. In the k-th iteration, each optimization problem in Equation (9) requires an uncertainty analysis at the previous design point:

$$Bel\left( f(Z) \le v_j^{(k-1)} \right), \quad Z = (X, P), \quad j = 1, 2, \ldots, n_T \qquad (10)$$

This mainly consists of two steps, illustrated by the example in Figure 1. Step 1 is to search for the most probable focal element (MPFE) along the limit-state boundary $f(Z) = v_j^{(k-1)}$. The MPFE [35] is similar to the most probable point (MPP) in probability theory, which is the point with the greatest probability density on the limit-state boundary. Compared to other points on the boundary, the minimal error of reliability analysis is achieved by establishing a linear approximation of the performance function at the MPP [36]. Similarly, the MPFE carries the maximal BPA among the focal elements crossed by the limit-state boundary. The search for the MPFE is formulated as:

$$\max m(D_Z) \quad \text{s.t.} \quad f(Z) = v_j^{(k-1)}, \quad j = 1, 2, \ldots, n_T \qquad (11)$$

where $m(D_Z)$ represents the BPA of the focal element in which the point Z is located. Note that the $v_j^{(k-1)}$, $j = 1, 2, \ldots, n_T$ differ slightly at each iteration step due to the minor differences among the $Bel_j^T$. Consequently, different MPFEs may be obtained from Equation (11). However, the difference between the MPFEs is minor relative to the entire design domain. To ensure efficiency, a unique MPFE is investigated at each iteration, and Equation (11) is rewritten as:

$$\max m(D_Z) \quad \text{s.t.} \quad f(Z) = v^{(k-1)} \qquad (12)$$

where $v^{(k-1)}$ represents the performance threshold that has not yet converged.
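In a discrete setting, the MPFE search of Equation (12) reduces to scanning the focal elements crossed by the limit-state boundary and keeping the one with the largest joint BPA. A minimal sketch, with a hypothetical performance function, BPAs, and threshold:

```python
from itertools import product

def f(a, b):
    # Hypothetical performance function.
    return a + 2.0 * b

# Joint focal elements ((a-interval, b-interval), BPA) and current threshold.
D = [(((0.0, 1.0), (0.0, 1.0)), 0.1), (((1.0, 2.0), (0.0, 1.0)), 0.4),
     (((0.0, 1.0), (1.0, 2.0)), 0.3), (((1.0, 2.0), (1.0, 2.0)), 0.2)]
v_prev = 2.5  # v^(k-1), the threshold that has not yet converged

def find_mpfe(v):
    """Equation (12): among cells crossed by f(Z) = v, take the max-BPA one."""
    crossed = []
    for ((aL, aR), (bL, bR)), m in D:
        vals = [f(a, b) for a, b in product((aL, aR), (bL, bR))]
        if min(vals) <= v <= max(vals):  # boundary passes through this cell
            crossed.append((m, ((aL, aR), (bL, bR))))
    return max(crossed)                  # maximal BPA among crossed cells

bpa, mpfe = find_mpfe(v_prev)
print(bpa, mpfe)
```

The center point of the returned cell would then serve as the expansion point $Z^{M(k)}$ for the linearization in the next step.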
Step 2 is to establish a linear approximation of the performance function at the central point $Z^{M(k)}$ of the MPFE:

$$L^{(k)}(Z) = f\left(Z^{M(k)}\right) + \left(Z - Z^{M(k)}\right) \nabla f\left(Z^{M(k)}\right) \qquad (13)$$

The L-function is used to replace the f-function to calculate Bel, and thereby the optimization processes in Equation (7) no longer require any performance-function evaluations.

Figure 1. Uncertainty analysis for the performance function. FD: frame of discernment; MPFE: most probable focal element.

The efficient calculation of Bel has been achieved in the iterative process, but the overall process of EBRO still requires dozens or even hundreds of Bel evaluations due to the nested optimization. To eliminate the nested optimization, a decoupling strategy is proposed, similar to that in the probabilistic method [37]. At each iteration step, the reliability constraint is transformed into a deterministic constraint by constructing the shifting vector $S_j^{(k)}$; then a deterministic optimization is updated and solved to obtain the current solution. In the k-th iteration, the deterministic optimization can be written as:

$$\min v \quad \text{s.t.} \quad f\left(d, \bar{Z} - S_j^{(k)}\right) \le v, \quad j = 1, 2, \ldots, n_T; \quad d^l \le d \le d^u, \quad \bar{X}^l \le \bar{X} \le \bar{X}^u \qquad (14)$$

The shifting vector determines the deviation between the original reliability boundary and the deterministic boundary at the k-th iteration step. For the j-th problem in Equation (14), the formulation of the shifting vector is explained in Figure 2. For convenience of presentation, the constraint contains only two evidence variables, Z = (a, b). F represents the domain of $f \le v^{(k-1)}$. $Z^{(k-1)}$ is the previous design point, which is based on the previous equivalent boundary $f(Z - S_j^{(k-1)}) = v_j^{(k-1)}$. The rectangular domain represents the FD at the previous design point. If the FD were entirely in the F domain, $Bel(f \le v_j^{(k-1)}) = 100\%$. In Figure 2, the FD of $Z^{(k-1)}$ is only partially in the F domain, and $Bel(f \le v_j^{(k-1)})$ is still less than $Bel_j^T$. To satisfy $Bel(f(Z) \le v_j^{(k-1)}) \ge Bel_j^T$, $Z^{(k-1)}$ needs to move further into the F domain; therefore, the equivalent boundary needs to move further toward the F domain. The updated equivalent boundary is constructed as follows:

$$f\left(Z - S_j^{(k)}\right) = v^{(k)}, \qquad S_j^{(k)} = S_j^{(k-1)} + \Delta S_j^{(k)} \qquad (15)$$

where $\Delta S_j^{(k)}$ denotes the increment of the previous shifting vector. The principle for calculating $\Delta S_j^{(k)}$ is that $Bel(f(Z) \le v^{(k-1)}) \ge Bel_j^T$ is just satisfied. Thus, the mathematical model of $\Delta S_j^{(k)}$ is created as:

$$\min \|s\| \quad \text{s.t.} \quad Bel\left( f(\bar{Z} + s) \le v^{(k-1)} \right) = Bel_j^T \qquad (16)$$

Equation (16) can be solved by multivariable optimization methods [38]. To further improve efficiency, the f-function is replaced by the L-function formulated in Equation (13).

Figure 2. Formulation of the shifting vector. FD: frame of discernment.
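A one-dimensional sketch of the shifting-vector sub-problem in Equation (16): with a hypothetical linear L-function and hypothetical focal elements, the minimal shift s that just meets the target belief can be found by bisection between a feasible and an infeasible shift.

```python
def L(z):
    # Hypothetical linearized performance function standing in for Eq. (13).
    return 2.0 * z

# Focal elements (interval, BPA) of one evidence variable, centered on the
# current design point; v is the threshold, Bel_T the target belief measure.
FE = [((-0.3, -0.1), 0.3), ((-0.1, 0.1), 0.4), ((0.1, 0.3), 0.3)]
v, Bel_T = 0.0, 0.7

def bel_shifted(s):
    """Bel(L(z + s) <= v) after shifting the whole frame by s."""
    # A focal element supports the proposition only if its worst case does.
    return sum(m for (_, zR), m in FE if L(zR + s) <= v)

# Bisection for the smallest-magnitude shift with bel_shifted(s) >= Bel_T.
good, bad = -2.0, 0.0   # bel_shifted(-2.0) = 1.0 meets the target; 0.0 does not
for _ in range(60):
    mid = 0.5 * (good + bad)
    if bel_shifted(mid) >= Bel_T:
        good = mid
    else:
        bad = mid
print(round(good, 6))   # minimal shift that just satisfies the target belief
```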
Uncertainty analysis and design optimization are carried out alternately until they meet the following convergence criteria:

$$Bel_j^{(k)} \ge Bel_j^T, \qquad \frac{\left| v_j^{(k)} - v_j^{(k-1)} \right|}{v_j^{(k)}} \le \varepsilon_r, \qquad j = 1, 2, \ldots, n_T \qquad (17)$$

where $\varepsilon_r$ is the minimal error limit. The solutions $(d_j^*, \bar{X}_j^*)$, $j = 1, 2, \ldots, n_T$, form the final optimal set. The flowchart of the EBRO algorithm is summarized in Figure 3.

5. Application Discussion

In the previous sections, an EBRO method was developed for engineering problems with epistemic uncertainty. This method is especially suitable for the robust design of micro-electromechanical systems (MEMS). On one hand, unlike traditional engineering structural problems, the design of MEMS usually involves micro structures, novel materials, and extreme operating conditions, where epistemic uncertainties inevitably exist. Evidence theory is well suited to deal with such uncertainties. On the other hand, high performance and insensitivity to uncertainties are the fundamental requirements for MEMS design. Over the past two decades, robust optimization for MEMS has gradually attracted attention in both academia and engineering practice [39–41]. In this section, the method is applied to three applications of MEMS: a micro-force sensor, a low-noise image sensor, and a capacitive accelerometer. The features of the proposed approach are investigated in terms of efficiency and accuracy. Performance function evaluations are counted to indicate efficiency, and a reference solution is used to verify accuracy. The reference solution is obtained by the double-loop method, where sequential quadratic programming [38] is employed for performance optimization, and Monte Carlo simulation [42] is used for robustness assessment.
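The stopping test of Equation (17) is straightforward to implement; the error limit and the iterate values below are hypothetical examples.

```python
def converged(bel_k, bel_T, v_k, v_prev, eps_r=1e-3):
    """Equation (17): target belief met and relative threshold change small."""
    return bel_k >= bel_T and abs(v_k - v_prev) / abs(v_k) <= eps_r

print(converged(0.952, 0.95, 1.2004, 1.2001))  # both criteria hold -> True
print(converged(0.930, 0.95, 1.2004, 1.2001))  # belief target missed -> False
```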
Figure 3. The flowchart of the proposed method. EBRO: evidence-theory-based robust optimization; BPA: basic probability assignment.

5.1. A Micro-Force Sensor

A piezoelectric micro-force sensor [43] has several advantages, including a reliable structure, fast response, and simple driving circuits. It has been extensively applied in the fields of precision positioning, ultrasonic devices, micro-force measurement, etc. Given that uncertainties are inevitable in structural sizes and material parameters, robust optimization is essential to ensure the performance of the sensor. As shown in Figure 4, the core part of the micro-force sensor is a piezoelectric cantilever beam, which consists of a piezoelectric film, a silicon-based layer, and two electrodes. The force at the free end causes bending deformation of the beam, which drives the piezoelectric film to output polarization charges through the piezoelectric effect.
The charge is transmitted to the circuit by the electrodes and converted into a voltage signal. According to the theoretical model proposed by Smits et al. [43], this voltage can be formulated as:

$$
U = \frac{3\, d_{31}^{P} S_{11}^{Si} S_{11}^{P}\, h\, h^{P} \left( h + h^{P} \right) L F}{K\, \varepsilon_{33}^{P}\, w}
\tag{18}
$$

where

$$
K = \left( S_{11}^{Si} \right)^2 \left( h^{P} \right)^4 + 4 S_{11}^{Si} S_{11}^{P} h \left( h^{P} \right)^3 + 6 S_{11}^{Si} S_{11}^{P} h^2 \left( h^{P} \right)^2 + 4 S_{11}^{Si} S_{11}^{P} h^3 h^{P} + \left( S_{11}^{P} \right)^2 h^4
\tag{19}
$$

where F is the concentrated force; L and w represent the length and width of the beam; h and h^P denote the thicknesses of the silicon-based layer and the piezoelectric film; S_11^Si and S_11^P are the compliance coefficients of the silicon-based layer and the piezoelectric film; and d_31^P and ε_33^P are the piezoelectric coefficient and dielectric constant of the piezoelectric film. The constants in Equation (18) include h^P = 5 × 10⁻⁴ mm, S_11^P = 18.97 × 10⁻¹² m²/N, and S_11^Si = 7.70 × 10⁻¹² m²/N. The structural sizes L, w, h and the material parameters d_31^P, ε_33^P are viewed as evidence variables. The marginal BPAs of the variables are shown in Figure 5, and the nominal values of d_31^P, ε_33^P are, respectively, 1.8 C/N and 1.6 F/m.

Figure 4. A piezoelectric cantilever beam.

Figure 5. Marginal BPAs of variables in the micro-force sensor problem. BPA: basic probability assignment; FD: frame of discernment.
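Equations (18) and (19), as reconstructed above, are straightforward to evaluate numerically. The sketch below assumes SI units throughout and is only as reliable as that reconstruction; the input values used in testing are illustrative, not the paper's nominal data.

```python
# Sketch: output voltage of the piezoelectric cantilever, Equations (18)-(19).
# Assumes the Smits-type expressions reconstructed above; all inputs in SI units.

def beam_constant(s_si, s_p, h, h_p):
    """Denominator K of Equation (19)."""
    return (s_si**2 * h_p**4
            + 4 * s_si * s_p * h * h_p**3
            + 6 * s_si * s_p * h**2 * h_p**2
            + 4 * s_si * s_p * h**3 * h_p
            + s_p**2 * h**4)

def output_voltage(F, L, w, h, h_p, s_si, s_p, d31, eps33):
    """Open-circuit voltage U of Equation (18)."""
    K = beam_constant(s_si, s_p, h, h_p)
    return 3 * d31 * s_si * s_p * h * h_p * (h + h_p) * L * F / (K * eps33 * w)
```

Note that U scales linearly with the applied force F, which is the basis of the force measurement.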
In engineering, the greater the output voltage, the higher the theoretical accuracy of the sensor. Thus, U is regarded as the objective function. The design variables are L, w, and h. The constraints of shape, stiffness, and strength are considered, which are expressed as η ≥ 0.83, δ ≤ 2.5 μm, and σ ≤ 32.0 MPa, where η is the ratio of w to h, δ denotes the displacement at the free end of the beam, and σ denotes the maximum stress of the beam. σ and δ can be written as [43]:

$$
\sigma = \frac{6 F L\, S_{11}^{Si} \left( S_{11}^{P} h + S_{11}^{Si} h^{P} \right) \left( h + h^{P} \right)}{K w},
\qquad
\delta = \frac{4 F L^{3}\, S_{11}^{Si} S_{11}^{P} \left( S_{11}^{P} h + S_{11}^{Si} h^{P} \right)}{K w}
\tag{20}
$$

Due to uncertainties in the structure, η, δ, and σ are also uncertain.
Theoretically, the three constraints should be modeled as reliability constraints. To focus on the topic of robust optimization, the constraints are treated as deterministic in this example. That is, the nominal values of the uncertain variables are used to calculate η, δ, and σ. In summary, the EBRO problem is formulated as follows:

$$
\begin{aligned}
& \max U_0, \quad \max Bel\left( U(\mathbf{X}, \mathbf{P}) \geq U_0 \right) \\
& \text{s.t.} \quad \bar{\eta} \geq 0.83, \quad \bar{\delta}(\bar{\mathbf{X}}) \leq 2.5\ \mu\text{m}, \quad \bar{\sigma}(\bar{\mathbf{X}}) \leq 32.0\ \text{MPa} \\
& \quad 0.40\ \text{mm} \leq L \leq 1.20\ \text{mm}, \quad 0.06\ \text{mm} \leq w \leq 0.10\ \text{mm}, \quad 0.04\ \text{mm} \leq h \leq 0.10\ \text{mm}
\end{aligned}
\tag{21}
$$

where X = (L, w, h), P = (d_31^P, ε_33^P); U_0 represents the performance threshold, which is set as a deterministic design variable. The steps to solve this problem using the proposed method are detailed below. Firstly, according to the marginal BPAs of the five variables in Figure 5, the joint BPAs of the focal elements (8⁵ = 32768 in total) are calculated by Equation (4). Secondly, Equation (21) is converted into a series of sub-problems, which are expressed as:

$$
\begin{aligned}
& \max U_0, \quad j = 1, 2, \ldots, 5 \\
& \text{s.t.} \quad Bel\left( U(\mathbf{X}, \mathbf{P}) \geq U_0 \right) \geq \overline{Bel}_j^T \\
& \quad \bar{\eta} \geq 0.83, \quad \bar{\delta}(\bar{\mathbf{X}}) \leq 2.5\ \mu\text{m}, \quad \bar{\sigma}(\bar{\mathbf{X}}) \leq 32.0\ \text{MPa} \\
& \quad 0.40\ \text{mm} \leq L \leq 1.20\ \text{mm}, \quad 0.06\ \text{mm} \leq w \leq 0.10\ \text{mm}, \quad 0.04\ \text{mm} \leq h \leq 0.10\ \text{mm} \\
& \quad \overline{Bel}^T = (80\%,\ 85\%,\ 90\%,\ 95\%,\ 99.9\%)
\end{aligned}
\tag{22}
$$

where Bel̄^T represents a series of target Bel values for the proposition U ≥ U_0, which are given by the designer according to engineering experience or quality standards. Thirdly, the iteration starts from the initial point (L^(0), w^(0), h^(0), U_0^(0)) = (0.60 mm, 0.08 mm, 0.06 mm, 35.6 mV), where L^(0), w^(0), h^(0) are selected by the designer and U_0^(0) is calculated by Equation (18). At each iteration step, the approximate function of U is established as Equation (13), and then 10 subdomains D^S are obtained through Equation (16). Correspondingly, the 10 optimization problems of Equation (14) are updated. By solving them, the optimal set in the current iteration is obtained. After four iteration steps, the optimal set converges, as listed in Table 1. The results show that the performance threshold decreases gradually with increasing Bel̄.
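The first step above — combining marginal BPAs into 8⁵ = 32768 joint BPAs by Equation (4) — amounts to taking products of marginal masses over all combinations of focal elements, assuming independent evidence variables. The marginal structure below is hypothetical (Figure 5 is not reproduced in the text); it only illustrates the combination rule.

```python
# Sketch: combining marginal BPAs into joint BPAs (Equation (4)), assuming
# independent evidence variables so that each joint BPA is the product of
# the marginal BPAs. With five variables of 8 focal elements each there are
# 8**5 = 32768 joint focal elements. The marginal below is hypothetical.
from itertools import product

def joint_bpas(marginals):
    """marginals: list of {focal_element: bpa} dicts, one per variable."""
    joint = {}
    for combo in product(*(m.items() for m in marginals)):
        elements = tuple(fe for fe, _ in combo)   # joint focal element
        mass = 1.0
        for _, p in combo:
            mass *= p                              # product of marginal BPAs
        joint[elements] = mass
    return joint

# Hypothetical marginal: 8 equal-width subintervals with equal BPAs.
marginal = {(k / 8, (k + 1) / 8): 0.125 for k in range(8)}
joint = joint_bpas([marginal] * 5)
```

Since each marginal sums to one, the joint BPAs also sum to one over all 32768 focal elements.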
In engineering, a designer can intuitively select the optimal design option from the optimal set by balancing product performance and robustness. In terms of accuracy, the solutions of the proposed method are very close to the corresponding reference solutions; the maximal error is only 2.5%, occurring under the condition Bel̄ = 95%. In terms of efficiency, the proposed method evaluates the performance function only 248 times, a computational cost much lower than that of evolutionary algorithms [31]. From a mathematical point of view, it is unfair to compare the efficiency of the proposed method with evolutionary algorithms. From the view of engineering practicality, however, the solutions of the proposed method may help designers form a relatively clear picture of the problem with high efficiency and acceptable accuracy.

Table 1. Optimal set of the micro-force sensor problem.

Results                                 Bel̄₁ = 80%        Bel̄₂ = 85%        Bel̄₃ = 90%        Bel̄₄ = 95%        Bel̄₅ = 99.9%
Proposed method   X* (mm)               (0.937, 0.086, 0.072) for all five target levels
                  (v_j*, Bel_j*)        22.1 mV, 80.4%    20.0 mV, 85.3%    18.2 mV, 90.6%    15.7 mV, 95.5%    9.6 mV, 99.9%
Reference solution (v_j^r, Bel_j^r)     22.4 mV, 80.0%    20.0 mV, 85.3%    18.5 mV, 90.0%    16.1 mV, 95.0%    9.6 mV, 99.9%

5.2. An Ultra-Low-Noise Image Sensor

Recently, a type of ultra-low-noise image sensor [44] was developed for applications requiring high-quality imaging under extremely low light conditions. Such a sensor is ideally suited to a variety of low-light-level cameras for surveillance, industrial, and medical applications. In application, the sensor and other components are assembled on a printed circuit board (PCB). Due to the mismatch in the thermal expansion coefficients of the various materials, thermal deformation occurs on the PCB under the combined action of self-heating and the thermal environment.
As a result, the imaging quality of the sensor is reduced. Moreover, to acquire more image information under low-light conditions, the sensor is designed in a large format. Thus, the imaging quality is all the more susceptible to deformation. This issue has become a challenging problem in this field and needs to be solved urgently.

A robust optimization problem is considered for the camera module shown in Figure 6, in which the image-sensor-mounted PCB is fastened to the housing. The sensor is designed in a 4/3-inch optical format and features an array of five-transistor pixels on a 6.5 μm pitch with an active imaging area of 2560 (H) × 2160 (V) pixels. It delivers extreme low-light sensitivity with a read noise of less than 2.0 electrons root mean square (RMS) and a quantum efficiency above 55%. In order to analyze the thermal deformation of the sensor under the operating temperature (20 °C ~ 45 °C), the finite element model (FEM) is created as shown in Figure 7, in which the power dissipation P = (P1, P2) of
the codec chip and the converter is given as 1.2 W and 0.2 W, according to the test data. It can be observed that a certain deformation appears on the sensor die, and the peak-to-peak value (PPV) of the displacement response reaches about 3.0 μm. Consequently, the image quality of the sensor will decrease. To address this issue, the design objective is set to minimize the PPV, and the design variables X = (X1, X2, X3) are the normal positions of the PCB-fixed points. In engineering, manufacturing errors are unavoidable and the power dissipation fluctuates with changing loads; thereby, X and P are treated as evidence variables. Their BPAs are summarized on the basis of limited samples, as listed in Table 2. The robust optimization is constructed as follows:

$$
\min v, \quad \max Bel\left( PPV(\mathbf{X}, \mathbf{P}) \leq v \right)
\tag{23}
$$
As mentioned above, the performance function of PPV is implicit and based on the time-consuming FEM, which consists of 88,289 8-node thermally coupled hexahedron elements. The computational time for solving the FEM is about 0.1 h using a computer with an i7-4710HQ CPU and 8 GB of RAM. To realize parameterization and to reduce the computational cost of obtaining the reference solutions, a second-order polynomial response surface is created for the performance function by sampling 200 times on the FEM:

$$
\begin{aligned}
PPV ={} & 2.396 - 9.924X_1 - 6.495X_2 + 14.178X_3 + 0.311P_1 - 0.226P_2 + 8.564X_1^2 + 16.960X_2^2 \\
& + 14.104X_3^2 - 0.019P_1^2 + 0.794P_2^2 - 1.540X_1X_2 - 13.168X_1X_3 - 13.822X_2X_3 - 0.074P_1P_2
\end{aligned}
\tag{24}
$$

Figure 6. The camera module (a) with an ultra-low-noise image sensor (b). PCB: printed circuit board.

Figure 7. The finite element model (FEM) of the image-sensor-mounted PCB. PCB: printed circuit board.

Table 2. The marginal BPA of variables in the image sensor problem.
Xi, i = 1, 2, 3 (mm)                    P1 (W)                     P2 (W)
Subinterval              BPA            Subinterval     BPA        Subinterval     BPA
[X̄i − 0.05, X̄i − 0.03]    6.7%          [0.95, 1.05]    37.5%      [0.55, 0.65]     6.5%
[X̄i − 0.03, X̄i − 0.01]   24.2%          [1.05, 1.15]    17.6%      [0.65, 0.75]    23.8%
[X̄i − 0.01, X̄i + 0.01]   38.3%          [1.15, 1.25]     5.4%      [0.75, 0.85]    32.2%
[X̄i + 0.01, X̄i + 0.03]   24.2%          [1.25, 1.35]    19.1%      [0.85, 0.95]    12.5%
[X̄i + 0.03, X̄i + 0.05]    6.7%          [1.35, 1.45]    20.4%      [0.95, 1.05]    24.9%

In order to analyze the efficiency of the proposed method for problems with different numbers of uncertain dimensions, three cases are considered: only P is uncertain in Case 1; only X is uncertain in Case 2; and both are uncertain in Case 3. The initial design option is selected as X^(0) = (0.10 mm, 0.10 mm, 0.10 mm), and v^(0) = 2.92 μm is obtained by PPV(X̄^(0), P̄).
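Given the Table 2 marginal BPAs and the Equation (24) response surface, the belief Bel(PPV(X, P) ≤ v) at a fixed design can be approximated by summing the joint BPAs of all focal elements whose worst-case PPV satisfies the threshold. The sketch below evaluates the worst case at the box vertices, in the spirit of the vertex method [30]; since PPV is quadratic, this is only an approximation, and the nominal design X̄ = (0.10, 0.10, 0.10) mm is assumed for illustration.

```python
# Sketch: Bel(PPV <= v) for Case 3 of the image sensor problem, combining the
# Table 2 marginal BPAs (independence assumed) with the Equation (24) response
# surface. Worst-case PPV over each focal element is approximated at the box
# vertices (a vertex-method-style simplification, not exact for quadratics).
from itertools import product

def ppv(x1, x2, x3, p1, p2):
    """Second-order response surface of Equation (24)."""
    return (2.396 - 9.924*x1 - 6.495*x2 + 14.178*x3 + 0.311*p1 - 0.226*p2
            + 8.564*x1**2 + 16.960*x2**2 + 14.104*x3**2 - 0.019*p1**2
            + 0.794*p2**2 - 1.540*x1*x2 - 13.168*x1*x3 - 13.822*x2*x3
            - 0.074*p1*p2)

def x_marginal(xn):
    """Table 2 subintervals around the nominal position xn (mm)."""
    offs = [(-0.05, -0.03, 0.067), (-0.03, -0.01, 0.242), (-0.01, 0.01, 0.383),
            (0.01, 0.03, 0.242), (0.03, 0.05, 0.067)]
    return {(xn + a, xn + b): m for a, b, m in offs}

p1_marginal = {(0.95, 1.05): 0.375, (1.05, 1.15): 0.176, (1.15, 1.25): 0.054,
               (1.25, 1.35): 0.191, (1.35, 1.45): 0.204}
p2_marginal = {(0.55, 0.65): 0.065, (0.65, 0.75): 0.238, (0.75, 0.85): 0.322,
               (0.85, 0.95): 0.125, (0.95, 1.05): 0.249}

def belief(v, xn=(0.10, 0.10, 0.10)):
    """Approximate Bel(PPV <= v) over all 5**5 = 3125 focal elements."""
    marginals = [x_marginal(xn[0]), x_marginal(xn[1]), x_marginal(xn[2]),
                 p1_marginal, p2_marginal]
    bel = 0.0
    for focal in product(*(m.items() for m in marginals)):
        boxes = [interval for interval, _ in focal]
        mass = 1.0
        for _, m in focal:
            mass *= m
        worst = max(ppv(*vertex) for vertex in product(*boxes))
        if worst <= v:
            bel += mass
    return bel
```

By construction, `belief` is non-decreasing in v and bounded by the total combined mass of the listed subintervals.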
After giving Bel̄^T = (85%, 95%, 99.9%), the original problem is converted into three sub-problems, as in Equation (9). They are solved by the proposed method and the double-loop method; all results are listed in Table 3. Firstly, the results of the proposed method and the reference solutions are almost identical for all cases, which demonstrates the validity of the results. Secondly, each of the cases converges
into a stable optimal set after three or four iteration steps. For this problem, the convergence of the proposed method is little affected by the number of uncertain variables. Thirdly, the number of performance function evaluations (N) increases with the dimension of the uncertainty, while the overall efficiency of the proposed method remains relatively high. Taking Case 3 as an example, even if the FEM were called directly by EBRO, N = 198 would mean a computational time of only about 20 h.

Table 3. The optimal set of the image sensor problem.

Results                          Case 1: 2 Dimensions     Case 2: 3 Dimensions     Case 3: 5 Dimensions
N                                128                      142                      198
Iterations                       3                        4                        4
Proposed method
  X_j* (mm)                      (0.200, 0.185, 0.000)    (0.200, 0.185, 0.000)    (0.200, 0.187, 0.000)
                                 (0.200, 0.186, 0.000)    (0.200, 0.186, 0.000)    (0.200, 0.188, 0.000)
                                 (0.200, 0.188, 0.000)    (0.200, 0.187, 0.000)    (0.200, 0.190, 0.000)
  (v_j*, Bel_j*)                 (1.35 μm, 85.1%)         (1.41 μm, 85.8%)         (1.74 μm, 86.4%)
                                 (1.70 μm, 96.5%)         (1.55 μm, 96.4%)         (2.06 μm, 96.1%)
                                 (2.00 μm, 100.0%)        (1.65 μm, 100.0%)        (2.62 μm, 100.0%)
Reference solution
  (v_j^r, Bel_j^r)               (1.35 μm, 85.1%)         (1.41 μm, 85.8%)         (1.71 μm, 85.8%)
                                 (1.69 μm, 95.5%)         (1.55 μm, 96.4%)         (2.02 μm, 95.4%)
                                 (2.00 μm, 100.0%)        (1.65 μm, 100.0%)        (2.56 μm, 99.9%)

5.3. A Capacitive Accelerometer

The capacitive accelerometer [45] has become very attractive for high-precision applications due to its high sensitivity, low power consumption, wide dynamic range of operation, and simple structure.
The capacitive accelerometer is not only the central element of inertial guidance systems, but also has applications in a wide variety of industrial and commercial problems, including crash detection for vehicles, vibration analysis for industrial machinery, and hovering control for unmanned aerial systems. Most capacitive accelerometers consist of two main modules: the sensing structure and the signal processing circuit; the former plays a critical role in the overall product performance. The sensing structure in this example, shown in Figure 8, mainly includes the following parts: a fixed electrode, a movable electrode, a coil, a counterweight, block 1, and block 2. The materials they are made of are listed in Table 4. The capacitance between the two electrodes varies with the vertical displacement of the movable plate under the excitation of acceleration, which can be clearly presented through the finite element simulation in Figure 9. The nodes in the effective area on the movable electrode are offset relative to their original positions under the excitation of acceleration. The increment of capacitance is expressed as [44]:

$$
\Delta C = \varepsilon A, \qquad A = \sum_{i=1}^{n} \frac{S_i}{h + d_i} - \frac{S}{h}
\tag{25}
$$

where ε is the dielectric constant; h represents the original distance between the electrodes; S denotes the effective area on the movable electrode; d_i is the displacement response of the i-th node; and S_i is the area of the corresponding element. Note that the performance function of A is based on the FEM, which contains 114,517 8-node thermally coupled hexahedron elements in total and takes about 1/3 h to solve each time when using a personal computer. The displacement of the movable electrode, in addition to the response to acceleration, may be caused by varying ambient temperature. This can be seen from the simulation result in Figure 9, where the load is changed from acceleration to varying temperature.
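A discrete evaluation of Equation (25) from nodal FEM results can be sketched as follows; this is a reconstruction based on the definitions above, and the node data used in testing are purely illustrative.

```python
# Sketch: capacitance increment of Equation (25) from nodal displacements.
# eps is the dielectric constant, h the nominal electrode gap, areas[i] and
# disp[i] the element area and displacement response of the i-th node.
# Inputs are illustrative, not the paper's FEM data.

def capacitance_increment(eps, h, areas, disp):
    """Delta C = eps * (sum_i S_i / (h + d_i) - S / h), with S = sum_i S_i."""
    total_area = sum(areas)
    deformed = sum(s / (h + d) for s, d in zip(areas, disp))
    return eps * (deformed - total_area / h)
```

With zero nodal displacement the two sums cancel and the increment vanishes, as expected; a uniformly widened gap reduces the capacitance.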
Reducing the effect of thermal deformation on accuracy has become a problem that must be faced in the design process. Therefore, the EBRO model of the capacitive accelerometer is formulated as:

$$
\min v, \quad \max Bel\left( f = \frac{A(\mathbf{X}, \boldsymbol{\alpha})}{\Delta T} \leq v \right)
\tag{26}
$$

where f represents the sensitivity of the error to temperature at 35 °C; v denotes the performance threshold; and X, α denote the design vector and parameter vector. The components of X, α are the structural sizes shown in Figure 8, where α1, α2, α3 are the mounting angles of the counterweight, block 1, and block 2, respectively. The value ranges are given as 6.0 mm ≤ X1, X2 ≤ 12.0 mm, and the nominal values are ᾱi = 0 (i = 1, 2, 3). All of them are uncertain variables, caused by machining errors and assembly errors, respectively. According to the existing samples, the marginal BPAs are listed in Figure 10.

Figure 8. The sensing structure of a capacitive accelerometer.

Table 4. Material of accelerometer parts.

Part       Fixed Electrode   Movable Electrode   Coil     Counter Weight   Block 1   Block 2
Material   Silicon           Silicon             Copper   Wolfram          Wolfram   Aluminium
Figure 9. The FEM of the capacitive accelerometer.

Figure 10. The marginal BPAs of variables in the accelerometer problem. BPA: basic probability assignment; FD: frame of discernment.

After being given a series of Bel̄^T = (80%, 85%, 90%, 95%, 99.9%), Equation (26) can be rewritten as Equation (27):

$$
\begin{aligned}
& \min v_j, \quad j = 1, 2, \ldots, 6 \\
& \text{s.t.} \quad Bel\left( f = \frac{A(\mathbf{X}, \boldsymbol{\alpha})}{\Delta T} \leq v \right) \geq \overline{Bel}_j^T
\end{aligned}
\tag{27}
$$
For easy reproduction of the results, the response surface of A(X, α) is constructed as follows:

$$
\begin{aligned}
A ={} & 1.207X_1^2 - 0.430X_1X_2 - 18.06X_1 + 1.004X_2^2 - 13.974X_2 \\
& + 100.9\alpha_1^2 - 9.0\alpha_1 + 89.0\alpha_2^2 - 7.2\alpha_2 + 40.9\alpha_3^2 - 6.7\alpha_3 + 144.0
\end{aligned}
\tag{28}
$$

Next, the EBRO is performed by the proposed method and the double-loop method. The initial design point is selected as X^(0) = (9.6 mm, 9.6 mm), and f(X̄^(0)) = 0.565 μm/°C. All results are given in Table 5. The proposed method converges to the optimal set after four iteration steps. Each element of the optimal set is very close to that of the reference solution, which indicates, to some extent, the convergence and accuracy of the proposed method. As for efficiency, the proposed method evaluates the performance function 171 times. Compared to the double-loop method (12,842 times), the efficiency of this method has a definite advantage. Given that hundreds of simulations or dozens of hours of computation are acceptable for most engineering applications, it is feasible to directly call the time-consuming simulation model when performing EBRO in practice.

Table 5. The optimal set of the accelerometer problem.

Results             Proposed Method        Reference Solution
X_j* (mm)           (9.0661, 8.9042)       (9.0661, 8.9042)
                    (9.0659, 8.9040)       (9.0660, 8.9040)
                    (9.0657, 8.9038)       (9.0657, 8.9039)
                    (9.0653, 8.9035)       (9.0654, 8.9036)
                    (9.0653, 8.9035)       (9.0648, 8.9031)
                    (9.0653, 8.9035)       (9.0644, 8.9028)
(v_j*, Bel_j*)      (0.244 μm, 81.8%)      (0.244 μm, 81.8%)
                    (0.302 μm, 86.3%)      (0.279 μm, 85.2%)
                    (0.350 μm, 90.8%)      (0.338 μm, 90.1%)
                    (0.449 μm, 96.1%)      (0.424 μm, 95.5%)
                    (0.607 μm, 99.4%)      (0.594 μm, 99.2%)
                    (0.776 μm, 100.0%)     (0.733 μm, 99.9%)

On the other hand, the proposed method provides six design options under different robustness requirements in this example. The higher the robustness, the greater the performance threshold.
From a designer's point of view, choosing a lower yield (i.e., Bel = 81.8%) means a higher cost but a smaller error introduced by temperature variation (i.e., v = 0.244 μm/°C). Usually, the final design option is selected from the optimal set after balancing the cost and performance of the accelerometer. Objectively speaking, the proposed method does not provide a complete Pareto optimal set, but rather solutions under the given conditions. However, for the design of an actual product, the information of Bel̄ can usually be obtained on the basis of engineering experience or quality standards. Therefore, the proposed method is suitable for most product design problems.

6. Conclusions

Due to inevitable uncertainties from various sources, the concept of robust optimization has become deeply rooted in engineering design. Compared to traditional probability models, evidence theory may be an alternative for modeling uncertainties in robust optimization, especially in cases of limited samples or conflicting information. In this paper, an effective EBRO method is developed, which can provide a computational tool for engineering problems with epistemic uncertainty. The contribution of this study is summarized as follows. Firstly, the improved EBRO model is formulated by introducing the performance threshold as a newly added design variable, and this model can handle the uncertainties involved in both design variables and parameters. Secondly, the original EBRO problem is transformed into a series of sub-problems to avoid double-objective optimization, and thus the difficulty of solving is reduced greatly. Thirdly, an iterative strategy is proposed to drive the robustness analysis and the optimization solution alternately, so that the nested optimization in the sub-problems is decoupled. The proposed method is applied to three MEMS design problems, including a micro-force sensor, an image sensor, and a capacitive accelerometer.
In the applications, both finite element simulation models and surrogate models are provided. Numerical results show that the proposed method has good engineering practicality due to its comprehensive performance in terms of efficiency, accuracy, and convergence. This work also provides targeted engineering examples for peers developing novel algorithms. In the future, the proposed method may be extended to more complex engineering problems with dynamic characteristics or coupled multiphysics.

Author Contributions: Conceptualization, Z.H. and S.D.; Data curation, J.X.; Formal analysis, J.X.; Funding acquisition, T.Y.; Investigation, Z.H.; Methodology, Z.H.; Project administration, T.Y.; Resources, S.D. and F.L.; Software, J.X.; Validation, S.D. and F.L.; Visualization, J.X.; Writing–original draft, Z.H.; Writing–review & editing, S.D.

Funding: This research was supported by the Major Program of National Natural Science Foundation of China (51490662); the Educational Commission of Hunan Province of China (18A403, 17A036, 17C0044); and the Natural Science Foundation of Hunan Province of China (2016JJ2012, 2017JJ2022, 2019JJ40296, 2019JJ40014).

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Taguchi, G.; Phadke, M.S. Quality Control, Robust Design, and the Taguchi Method. In Quality Engineering through Design Optimization; Springer: Berlin, Germany, 1989; pp. 77–96.
2. Beyer, H.G.; Sendhoff, B. Robust optimization—A comprehensive survey. Comput. Methods Appl. Mech. Eng. 2007, 196, 3190–3218.
3. Fowlkes, W.Y.; Creveling, C.M.; Derimiggio, J. Engineering Methods for Robust Product Design: Using Taguchi Methods in Technology and Product Development; Addison-Wesley: Reading, MA, USA; Boston, MA, USA, 1995; pp. 121–123.
4. Gu, X.; Sun, G.; Li, G.; Mao, L.; Li, Q. A comparative study on multiobjective reliable and robust optimization for crashworthiness design of vehicle structure. Struct. Multidiscip. Optim. 2013, 48, 669–684. [CrossRef]
5.
Yao, W.; Chen, X.; Luo, W.; van Tooren, M.; Guo, J. Review of uncertainty-based multidisciplinary design optimization methods for aerospace vehicles. Prog. Aerosp. Sci. 2011, 47, 450–479. [CrossRef]
6. Nguyen, A.T.; Reiter, S.; Rigo, P. A review on simulation-based optimization methods applied to building performance analysis. Appl. Energy 2014, 113, 1043–1058. [CrossRef]
7. Hoffman, F.O.; Hammonds, J.S. Propagation of uncertainty in risk assessments: The need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability. Risk Anal. 1994, 14, 707–712. [CrossRef] [PubMed]
8. Grubbs, F. An introduction to probability theory and its applications. Technometrics 1958, 9, 342. [CrossRef]
9. Gao, W.; Chen, J.J.; Sahraee, S. Reliability-based optimization of trusses with random parameters under dynamic loads. Comput. Mech. 2011, 47, 627–640.
10. Haldar, A.; Mahadevan, S. Probability, Reliability and Statistical Methods in Engineering Design. Bautechnik 2013, 77, 379.
11. Chen, W.; Wiecek, M.M.; Zhang, J. Quality utility—A compromise programming approach to robust design. J. Mech. Des. 1999, 121, 179–187. [CrossRef]
12. Lee, K.H.; Park, G.J. Robust optimization considering tolerances of design variables. Comput. Struct. 2001, 79, 77–86. [CrossRef]
13. Zheng, J.; Luo, Z.; Jiang, C.; Gao, J. Robust topology optimization for concurrent design of dynamic structures under hybrid uncertainties. Mech. Syst. Signal Process. 2018, 120, 540–559. [CrossRef]
14. Tzvieli, A. Possibility theory: An approach to computerized processing of uncertainty. J. Assoc. Inf. Sci. Technol. 1990, 41, 153–154. [CrossRef]
15. Georgescu, I. Possibility Theory and the Risk; Springer: Berlin, Germany, 2012.
16. Gupta, M.M. Fuzzy set theory and its applications. Fuzzy Sets Syst. 1992, 47, 396–397. [CrossRef]
17. Sun, B.; Ma, W.; Zhao, H. Decision-theoretic rough fuzzy set model and application. Inf. Sci. 2014, 283, 180–196.
[CrossRef] Appl. Sci. 2019, 9, 1457 18 of 19 18. Elishakoff, I.; Colombi, P. Combination of probabilistic and convex models of uncertainty when scarce knowledge is present on acoustic excitation parameters. Comput. Methods Appl. Mech. Eng. 1993, 104, 187–209. [CrossRef] 19. Ni, B.Y.; Jiang, C.; Huang, Z.L. Discussions on non-probabilistic convex modelling for uncertain problems. Appl. Math. Model. 2018, 59, 54–85. [CrossRef] 20. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. Ann. Math. Stat. 1967, 38, 325–339. [CrossRef] 21. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NY, USA, 1976. 22. Caselton, W.F.; Luo, W. Decision making with imprecise probabilities: Dempster-shafer theory and application. Water Resour. Res. 1992, 28, 3071–3083. [CrossRef] 23. Du, X. Uncertainty analysis with probability and evidence theories. In Proceedings of the 2006 ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Philadelphia, PA, USA, 10–13 September 2006. 24. Vasile, M. Robust mission design through evidence theory and multiagent collaborative search. Ann. N. Y. Acad. Sci. 2005, 1065, 152–173. [CrossRef] 25. Croisard, N.; Vasile, M.; Kemble, S.; Radice, G. Preliminary space mission design under uncertainty. Acta Astronaut. 2010, 66, 654–664. [CrossRef] 26. Zuiani, F.; Vasile, M.; Gibbings, A. Evidence-based robust design of deflection actions for near Earth objects. Celest. Mech. Dyn. Astron. 2012, 114, 107–136. [CrossRef] 27. Hou, L.; Pirzada, A.; Cai, Y.; Ma, H. Robust design optimization using integrated evidence computation—With application to Orbital Debris Removal. In Proceedings of the IEEE Congress on Evolutionary Computation, Sendai, Japan, 25–28 May 2015. 28. Sentz, K.; Ferson, S. Combination of Evidence in Dempster-Shafer Theory; Sandia National Laboratories: Albuquerque, NM, USA, 2002; Volume 4015. 29. 
Jiang, C.; Zhang, W.; Han, X.; Ni, B.Y.; Song, L.J. A vine-copula-based reliability analysis method for structures with multidimensional correlation. J. Mech. Des. 2015, 137, 061405. [CrossRef] 30. Dong, W.; Shah, H.C. Vertex method for computing functions of fuzzy variables. Fuzzy Sets Syst. 1987, 24, 65–78. [CrossRef] 31. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments—A survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317. [CrossRef] 32. Marler, R.T.; Arora, J.S. Survey of multi-objective optimization methods for engineering. Struct. Multidiscip. Optim. 2004, 26, 369–395. [CrossRef] 33. Park, G.J.; Lee, T.H.; Lee, K.H.; Hwang, K.H. Robust design: An overview. AIAA J. 2006, 44, 181–191. [CrossRef] 34. Huang, Z.L.; Jiang, C.; Zhang, Z.; Fang, T.; Han, X. A decoupling approach for evidence-theory-based reliability design optimization. Struct. Multidiscip. Optim. 2017, 56, 647–661. [CrossRef] 35. Zhang, Z.; Jiang, C.; Wang, G.G.; Han, X. First and second order approximate reliability analysis methods using evidence theory. Reliab. Eng. Syst. Saf. 2015, 137, 40–49. [CrossRef] 36. Breitung, K. Probability approximations by log likelihood maximization. J. Eng. Mech. 1991, 117, 457–477. [CrossRef] 37. Du, X.; Chen, W. Sequential optimization and reliability assessment method for efficient probabilistic design. J. Mech. Des. 2002, 126, 871–880. 38. Fletcher, R. Practical Methods of Optimization; John Wiley & Sons: Somerset, NJ, USA, 2013; pp. 127–156. 39. Coultate, J.K.; Fox, C.H.J.; Mcwilliam, S.; Malvern, A.R. Application of optimal and robust design methods to a MEMS accelerometer. Sens. Actuators A Phys. 2008, 142, 88–96. [CrossRef] 40. Akbarzadeh, A.; Kouravand, S. Robust design of a bimetallic micro thermal sensor using taguchi method. J. Optim. Theory Appl. 2013, 157, 188–198. [CrossRef] 41. Li, F.; Liu, J.; Wen, G.; Rong, J. Extending sora method for reliability-based design optimization using probability and convex set mixed models. Struct. 
Multidiscip. Optim. 2008, 59, 1–17. [CrossRef] 42. Fishman, G. Monte Carlo: Concepts, Algorithms, and Application; Springer Science & Business Media: Berlin, Germany, 2013; pp. 493–583. 43. Smits, J.G.; Dalke, S.I.; Cooney, T.K. The constituent equations of piezoelectric bimorphs. Sens. Actuators A Phys. 1991, 28, 41–61. [CrossRef] Appl. Sci. 2019, 9, 1457 19 of 19 44. Fossum, E.R.; Hondongwa, D.B. A review of the pinned photodiode for CCD and CMOS image sensors. IEEE J. Electron. Devices Soc. 2014, 2, 33–43. [CrossRef] 45. Benmessaoud, M.; Nasreddine, M.M. Optimization of MEMS capacitive accelerometer. Microsyst. Technol. 2013, 19, 713–720. [CrossRef] © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Applied Sciences Multidisciplinary Digital Publishing Institute


Publisher
Multidisciplinary Digital Publishing Institute
Copyright
© 1996-2019 MDPI (Basel, Switzerland) unless otherwise stated
ISSN
2076-3417
DOI
10.3390/app9071457

Abstract

The conventional engineering robustness optimization approach considering uncertainties is generally based on a probabilistic model. However, a probabilistic model faces obstacles when handling problems with epistemic uncertainty.
This paper presents an evidence-theory-based robustness optimization (EBRO) model and a corresponding algorithm, which provide a potential computational tool for engineering problems with multi-source uncertainty. An EBRO model with the twin objectives of performance and robustness is formulated by introducing the performance threshold. After providing multiple target belief measures (Bel), the original model is transformed into a series of sub-problems, which are solved by the proposed iterative strategy driving the robustness analysis and the deterministic optimization alternately. The proposed method is applied to three problems of micro-electromechanical systems (MEMS), including a micro-force sensor, an image sensor, and a capacitive accelerometer. In the applications, both finite element simulation models and surrogate models are given. Numerical results show that the proposed method has good engineering practicality owing to its comprehensive performance in terms of efficiency, accuracy, and convergence.

Keywords: epistemic uncertainty; evidence theory; robust optimization; sensor design

1. Introduction

In practical engineering problems, various uncertainties exist in terms of the operating environment, manufacturing process, material properties, etc. Under the combined action of these uncertainties, the performance of engineering structures or products may fluctuate greatly. Robust optimization [1,2] is a methodology whose fundamental principle is to improve the performance of a product by minimizing the effects of uncertainties without eliminating their causes. The concept of robustness optimization has long been embedded in engineering design. In recent years, thanks to the rapid development of computer technology, it has been widely applied to many engineering fields, such as electronics [3], vehicle [4], aerospace [5], and civil engineering [6].
The core of robustness optimization lies in understanding, measuring, and controlling the uncertainty in the product design process. In mechanical engineering disciplines, uncertainty is usually differentiated into objective and subjective types from an epistemological perspective [7]. The former, also called aleatory uncertainty, comes from an inherently irreducible physical nature, e.g., material properties (elasticity modulus, thermal conductivity, expansion coefficient) and operating conditions (temperature, humidity, wind load). A probabilistic model [8–10] is an appropriate way to describe such uncertain parameters, provided that sufficient samples are obtained for the construction of an accurate random distribution. Conventional robustness optimization methods [11–13] are based on probabilistic models, in which the statistical moments (e.g., mean, variance) are employed to formulate the robustness function for performance assessment under uncertainties. On the other hand, designers may lack knowledge about the issues of concern in practice, which leads to subjective uncertainty, also known as epistemic uncertainty. This uncertainty is caused by cognitive limitations or a lack of information, and could theoretically be reduced as effort is increased. At present, the methods for dealing with epistemic uncertainty mainly include possibility theory [14,15], fuzzy sets [16,17], convex models [18,19], and evidence theory [20,21]. Among them, evidence theory is an extension of probability theory, which can properly model incomplete, uncertain, unreliable, and even conflicting information [22]. When evidence theory treats a general structural problem, all possible values of an uncertain variable are assigned to several sub-intervals, and a corresponding probability is assigned to each sub-interval according to existing statistics and expert experience.
After synthesizing the probabilities of all the sub-intervals, the belief measure and plausibility measure are obtained, which constitute the confidence interval of the proposition that the structural performance satisfies a given requirement. Compared with other uncertainty analysis theories, evidence theory may be more general. For example, when the sub-interval of each uncertain variable is infinitely small, evidence theory is equivalent to probability theory; when the sub-interval is unique, it is equivalent to convex model theory; when no conflict occurs among the information from different sources, it is equivalent to possibility theory [23]. In the past decade, some progress has been made in evidence-theory-based robust optimization (EBRO). For instance, Vasile [24] employed evidence theory to model the uncertainties of spacecraft subsystems and trajectory parameters in the robust design of space trajectories and presented a hybrid co-evolutionary algorithm to obtain the optimal results. For the preliminary design of a space mission, Croisard et al. [25] formulated the robust optimization model using evidence theory and proposed three practical solving technologies, whose efficiency and accuracy were discussed through the application to a space mission. Zuiani et al. [26] presented a multi-objective robust optimization approach for the deflection action design of near-Earth objects, in which the uncertainties involved in the orbit and the system were quantified by evidence theory. A deflection design application of a spacecraft swarm with Apophis verified the effectiveness of this approach. Hou et al. [27] introduced EBRO into multidisciplinary aerospace design, and an artificial neural network strategy was used to establish surrogate models to balance efficiency and accuracy during the optimization.
This method was applied to two preliminary designs: a micro entry probe and an orbital debris removal system. The above studies employed evidence theory to measure the epistemic uncertainties involved in engineering design and expanded robustness optimization into the design of complex systems. However, the studies of EBRO are still at a preliminary stage. The existing research has mainly aimed at the preliminary design of engineering systems, and most of these problems have been greatly simplified; in other words, the performance functions are based on surrogate models and even empirical formulas. So far, EBRO applications in actual product design, where a time-consuming simulation model must be created for the performance function, are actually quite few. After all, computational cost is a major technical bottleneck limiting EBRO applications. First, evidence theory describes uncertainty through a series of discontinuous sets, rather than a continuous function similar to a probability density function. This usually leads to a combinatorial explosion in a multidimensional robustness analysis, and finally results in a heavy computational burden. Secondly, EBRO is essentially a nested optimization problem, with performance optimization in the outer layer and robustness analysis in the inner layer. The direct solving strategy implies a large number of robustness evaluations using evidence theory. As a result, the issue of EBRO efficiency is further exacerbated. Therefore, there is great engineering significance in developing an efficient EBRO method in view of actual product design problems. In this paper, a general EBRO model and an efficient algorithm are proposed, which provide a computational tool for robust product optimization with epistemic uncertainty. The proposed method is applied to three design problems of MEMS, in which its engineering practicability is discussed. The remainder of this paper is organized as follows.
Section 2 briefly introduces the basic concepts and principles of robustness analysis using evidence theory. The EBRO model is formulated in Section 3, and the corresponding algorithm is proposed in Section 4. In Section 5, the method is validated through three MEMS applications: a micro-force sensor, a low-noise image sensor, and a capacitive accelerometer. Conclusions are drawn in Section 6.

2. Robustness Analysis Using Evidence Theory

Consider an uncertain problem given as f(Z), where Z represents the n_Z-dimensional uncertain vector and f is the performance function, which is uncertain due to Z. Conventional methods [11–13] of robust optimization employ probability theory to deal with the uncertainties. The typical strategy is to consider the uncertain parameters of a problem as random variables, so that the performance value is also a random variable, and the mean and variance are used to formulate the robustness model. In practical engineering, it is sometimes hard to construct accurate probability models due to limited information. Thus, evidence theory [20,21] is adopted here to model the robustness. In evidence theory, the frame of discernment (FD) needs to be established first, which contains several independent basic propositions; it is similar to the sample space of a random parameter in probability theory. Here, 2^Θ denotes the power set of the FD (namely Θ), and 2^Θ consists of all possible propositions contained in Θ. For example, for an FD with the two basic propositions Θ_1 and Θ_2, the corresponding power set is 2^Θ = {∅, {Θ_1}, {Θ_2}, {Θ_1, Θ_2}}. Evidence theory adopts a basic probability assignment (BPA) to measure the confidence level of each proposition. For a certain proposition A, the BPA is a mapping function that satisfies the following axioms:

0 ≤ m(A) ≤ 1, ∀A ∈ 2^Θ
m(∅) = 0    (1)
Σ_{A ∈ 2^Θ} m(A) = 1

where, if m(A) > 0, A is called a focal element of m. The BPA m(A) denotes the extent to which the evidence supports Proposition A.
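As an illustration, the BPA axioms of Equation (1) can be checked numerically. The following is a minimal Python sketch, not part of the original paper; the frame labels and mass values are hypothetical:

```python
from itertools import chain, combinations

def power_set(frame):
    """Enumerate the power set 2^Theta of a frame of discernment."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(frame), r) for r in range(len(frame) + 1))]

def is_valid_bpa(bpa, frame):
    """Check the axioms of Equation (1): every mass lies in [0, 1],
    the empty set carries zero mass, and all masses sum to one."""
    subsets = set(power_set(frame))
    if any(a not in subsets for a in bpa):
        return False
    if any(not 0.0 <= m <= 1.0 for m in bpa.values()):
        return False
    if bpa.get(frozenset(), 0.0) != 0.0:
        return False
    return abs(sum(bpa.values()) - 1.0) < 1e-12

# A frame with two basic propositions has a power set of 4 elements.
frame = {"T1", "T2"}
bpa = {frozenset({"T1"}): 0.6, frozenset({"T2"}): 0.3,
       frozenset({"T1", "T2"}): 0.1}
print(len(power_set(frame)))     # 4
print(is_valid_bpa(bpa, frame))  # True
```

Note that the mass 0.1 assigned to {T1, T2} expresses ignorance between the two basic propositions, which a single probability distribution cannot represent directly.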
When the information comes from multiple sources, m(A) can be obtained by evidence combination rules [28]. Evidence theory uses an interval consisting of the belief measure (Bel) and the plausibility measure (Pl) to describe the true extent of a proposition. The two measures are defined as:

Bel(A) = Σ_{C ⊆ A} m(C), Pl(A) = Σ_{C ∩ A ≠ ∅} m(C)    (2)

As can be seen from Equation (2), Bel(A) is the sum of all the BPAs that totally support Proposition A, while Pl(A) is the sum of the BPAs that support Proposition A totally or partially.

A two-dimensional design problem is taken as an example to illustrate the process of robustness analysis using evidence theory. The performance function contains two uncertain parameters (a, b), which are both considered as evidence variables. The FDs of a and b are the two closed intervals A = [A^L, A^R] and B = [B^L, B^R]. A contains n_A focal elements, and the subinterval A_i = [A_i^L, A_i^R] represents the i-th focal element of A; the definitions of n_B and B_j are similar. Thus, a Cartesian product can be constructed:

D = A × B = {D_k = (A_i, B_j), A_i ∈ A, B_j ∈ B}    (3)

where D_k is the k-th focal element of D, and the total number of focal elements is n_A × n_B. For ease of presentation, assuming that a and b are independent, a two-dimensional joint BPA is obtained:

m(D_k) = m(A_i) · m(B_j)    (4)

More general problems with parametric correlation can be handled using the mathematical tool of copula functions [29]. As analyzed above, the performance function f is uncertain, and the performance threshold v is given to evaluate its robustness. Given that the design objective is to minimize the value of f, the higher the trueness of Proposition f ≤ v, the higher the robustness of f relative to v.
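Under the independence assumption, the Cartesian-product construction of Equations (3) and (4) reduces to a few lines of code. A minimal sketch, with hypothetical interval bounds and BPA values:

```python
# Marginal BPAs: interval focal elements [L, R] of a and b with their masses
# (the interval bounds and mass values here are hypothetical).
A = [((1.0, 2.0), 0.3), ((2.0, 3.0), 0.7)]
B = [((0.0, 0.5), 0.5), ((0.5, 1.0), 0.5)]

def joint_bpa(A, B):
    """Cartesian product of focal elements (Equation (3)); assuming
    independence, each joint mass is the product of the marginal
    masses (Equation (4))."""
    return [((ai, bj), ma * mb) for ai, ma in A for bj, mb in B]

D = joint_bpa(A, B)
print(len(D))                # n_A x n_B = 4 focal elements
print(sum(m for _, m in D))  # joint masses still sum to one (up to rounding)
```

For correlated variables the product in Equation (4) would be replaced by a copula-based joint mass, as noted in the text.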
Proposition f ≤ v defines the feasible domain:

F = {f : f(a, b) ≤ v}, (a, b) ∈ D_k, D_k = (A_i, B_j) ∈ D    (5)

Substituting A and C with F and D_k in Equation (2), the belief measure and plausibility measure of Proposition f ≤ v are expressed as follows:

Bel(F) = Σ_{D_k ⊆ F} m(D_k), Pl(F) = Σ_{D_k ∩ F ≠ ∅} m(D_k)    (6)

In evidence theory, the probabilistic interval composed of the two measures describes the trueness of f ≤ v, written as R(F) ∈ [Bel(F), Pl(F)]. The accumulation of Bel and Pl needs to determine the positional relationship between each focal element and the F domain; as a result, the performance function extrema over each focal element must be searched. For this example, n_A × n_B pairs of extremum problems are established as:

f_k^min = min_{(a,b) ∈ D_k} f(a, b), f_k^max = max_{(a,b) ∈ D_k} f(a, b), k = 1, 2, ..., n_A × n_B    (7)

where f_k^min and f_k^max are the minimum and maximum over the k-th focal element. The vertex method [30] can efficiently solve the problems in Equation (7) one by one. If f_k^max ≤ v, then D_k ⊆ F, and m(D_k) is simultaneously counted into Bel(F) and Pl(F); if f_k^min ≤ v < f_k^max, then D_k ∩ F ≠ ∅, and m(D_k) is only counted into Pl(F). After calculating the extrema for all focal elements, Bel and Pl can be totaled.

3. Formulation of the EBRO Model

As mentioned above, evidence theory uses a pair of probabilistic values [Bel, Pl] to measure the robustness of the performance value relative to the given threshold. However, engineers generally tend to adopt conservative strategies to deal with uncertainties in the product design process. Thus, the robustness objective of EBRO can be established as max Bel(f ≤ v). Meanwhile, in order to improve product performance, the performance threshold is minimized. The EBRO model is formulated as a double-objective optimization problem:

min v, max Bel(f(d, X, P) ≤ v)
s.t. d^l ≤ d ≤ d^u, X̄^l ≤ X̄ ≤ X̄^u    (8)

where d is the n_d-dimensional deterministic design vector; X is the n_X-dimensional uncertain design vector; P is the n_P-dimensional uncertain parameter vector; the superscripts l and u represent the value range of a design variable; and X̄ represents the nominal value of X. Note that it is usually difficult to give the threshold v a fixed value, so it should be treated as a deterministic design variable. The proposed model is an improvement on the existing model [24] because it can handle more types of uncertainty, such as the perturbations of design variables resulting from production tolerances and the variations of parameters due to changing operating conditions. As for the solving process, EBRO involves nested optimization, with the double-objective optimization in the outer layer and the robustness assessment in the inner layer. Due to the discreteness introduced by the evidence variables, each robustness analysis needs to calculate the performance extrema of all focal elements. Essentially, each extremum evaluation is an optimization problem involving the performance function based on time-consuming simulation models, and therefore the robustness analysis bears a high computational cost. More seriously, the double-objective optimization in the outer layer requires a large number of robustness evaluations in the inner layer. Eventually, the EBRO solving process becomes extremely inefficient.

4. The Proposed Algorithm

To improve efficiency, this paper proposes a decoupling algorithm for EBRO, whose basic idea is to convert the nested optimization into a sequential iteration process. Firstly, the original problem is decomposed into a series of sub-problems. Secondly, the uncertainty analysis and the deterministic optimization are driven alternately until convergence. The framework of the proposed method is detailed below.

4.1. Decomposition into Sub-Problems

Robust optimization is essentially a multi-objective problem that increases product performance at the expense of its robustness. Therefore, robust optimization generally does not have a unique solution, but a set of solutions called the Pareto optimal set [2]: a family of solutions that is optimal in the sense that no improvement can be achieved in any objective without degradation in the others. The Pareto-optimal solutions can be obtained by solving appropriately formulated single-objective optimization problems on a one-at-a-time basis. At present, a number of multi-objective genetic algorithms have been suggested, primarily because of their ability to find multiple Pareto-optimal solutions in parallel. From the viewpoint of mathematical optimization, genetic algorithms are a suitable method for solving a general multi-objective optimization. However, the efficiency of a genetic algorithm is usually much lower than that of gradient-based optimization algorithms, which has become the main technical bottleneck limiting its practical application [31,32]. Although no a priori information is required when using genetic algorithms, most designers do have some engineering experience in practice. Therefore, for the specific problem shown in Equation (8), the robustness objective Bel(f(d, X, P) ≤ v) is often handled as a reliability constraint [2,33]. In this paper, the EBRO problem is transformed into a series of sub-problems under the given target belief measures:

min v
s.t. Bel(f(d, X, P) ≤ v) ≥ Bel_j^T, j = 1, 2, ..., n_T    (9)
d^l ≤ d ≤ d^u, X̄^l ≤ X̄ ≤ X̄^u

where Bel_j^T represents the j-th target belief measure, and Bel(f ≤ v) ≥ Bel_j^T is the reliability constraint derived from the robustness objective. In many cases, the designer may focus on the performance values under some given conditions based on experience or a quality standard.
This condition is usually a certain probability of f ≤ v, namely Bel_j^T.

4.2. Iteration Framework

Theoretically, the sub-problems in Equation (9) could be solved by existing methods [34]; however, the resulting computational burden would be extremely heavy. To address this issue, a novel iteration framework is developed, in which the uncertainty analysis and the design optimization alternate until convergence. In the k-th iteration, each optimization problem in Equation (9) requires an uncertainty analysis at the previous design point:

Bel(f(Z) ≤ v_j^(k−1)), Z = (X, P), j = 1, 2, ..., n_T    (10)

This mainly consists of two steps, illustrated by the example in Figure 1. Step 1 is to search for the most probable focal element (MPFE) along the limit-state boundary f(Z) = v_j^(k−1). The MPFE [35] is similar to the most probable point (MPP) in probability theory, which is the point with the largest probability density on the limit-state boundary. Compared to other points on the boundary, the minimal error of reliability analysis can be achieved by establishing a linear approximation of the performance function at the MPP [36]. Similarly, the MPFE carries the maximal BPA among the focal elements that are crossed by the limit-state boundary. The search for the MPFE is formulated as:

max m(D_Z)
s.t. f(Z) = v_j^(k−1), j = 1, 2, ..., n_T    (11)

where m(D_Z) represents the BPA of the focal element in which the point Z is located. Note that the thresholds v_j^(k−1), j = 1, 2, ..., n_T differ slightly at each iteration step due to the minor differences among the targets Bel_j^T. Consequently, different MPFEs may be obtained from Equation (11). However, the difference between the MPFEs is minor relative to the entire design domain. To ensure efficiency, a unique MPFE is investigated at each iteration, and Equation (11) can be rewritten as:

max m(D_Z)
s.t. f(Z) = v^(k−1)    (12)

where v^(k−1) represents the performance threshold that has not yet converged.
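For a problem with interval focal elements, the simplified MPFE search of Equation (12) can be sketched by brute force: the vertex extrema of Equation (7) detect whether the limit-state boundary crosses a focal element, and the crossed element with the largest BPA is returned. A toy sketch, in which the performance function f = a + b and all numbers are hypothetical:

```python
from itertools import product

def find_mpfe(f, focal_elements, v):
    """Brute-force sketch of Equation (12): among the focal elements crossed
    by the limit-state boundary f(Z) = v, return the one with maximal BPA.
    A focal element is crossed when its vertex extrema bracket v (valid when
    f is monotonic over each focal element, as in the vertex method [30])."""
    mpfe, best_mass = None, -1.0
    for intervals, mass in focal_elements:
        values = [f(*z) for z in product(*intervals)]  # vertices of D_k
        if min(values) <= v <= max(values) and mass > best_mass:
            mpfe, best_mass = intervals, mass
    return mpfe, best_mass

f = lambda a, b: a + b
D = [(((1.0, 2.0), (0.0, 0.5)), 0.15), (((1.0, 2.0), (0.5, 1.0)), 0.15),
     (((2.0, 3.0), (0.0, 0.5)), 0.35), (((2.0, 3.0), (0.5, 1.0)), 0.35)]
mpfe, mass = find_mpfe(f, D, v=3.2)
center = tuple(0.5 * (lo + hi) for lo, hi in mpfe)  # Z^M for Equation (13)
print(mpfe, mass, center)
```

The center point Z^M is where the linear approximation of Equation (13) would then be built; in the paper this search is an optimization over the simulation model rather than an enumeration.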
Step 2 is to establish linear approximation for the performance function at the central point Z of MPFE: (k) M(k) M(k) M(k) L (Z) = f Z + Z Z r f Z (13) The L-function is used to replace the f -function to calculate Bel, and thereby the optimization processes in Equation (7) no longer requires the calculation of any performance function. The efficient calculation of Bel has been achieved in the iterative process, but the overall process of EBDO still requires dozens or even hundreds of Bel evaluations due to the nested optimization. To eliminate the nested optimization, a decoupling strategy is proposed similar to that in the probabilistic method [37]. At each iteration step, the reliability constraint is transformed (k) into a deterministic constraint by constructing the shifting vector of S ; and then a deterministic j Appl. Sci. 2019, 9, 1457 7 of 19 optimization is updated and solved to obtain the current solution. In the k-th iteration, the deterministic optimization can be written as: Appl. Sci. 2019, 9, x FOR PEER REVIEW 6 of 18 min v Theoretically, the EBDO problems in Equation (9) = can be solved by existing methods [34]. (k) s.t. f d, Z S  v j = 1, 2, . . . , n (14) However, the resulting computational burden will be extremely heavy. To address this issue, a l u novel iteration framework is developed, in which the uncertainty analysis and design optimization l u d  d  d , X  X  X alternate until convergence. In the k-th iteration, each optimization problem in Equation (9) requires the performance of an The shifting vector determines the deviation between the original reliability boundary and the uncertainty analysis at the previous design point: deterministic boundary at the k-th iteration step. For the j-th problem in Equation (14), the formulation k−1 ( ) of the shifting vector is explained as in Figure 2. 
For convenience of presentation, the constraint Bel f Z  v , Z = X , P , j = 1, 2,...,n ( ) ( ) ( ) (10) jT (k1) (k1) This mainly consists of two steps, illustrated by the example in Figure 1. Step 1 is to search for contains only two evidence variables Z = (a, b). F represents the domain of f  v . Z is the (k−1) (k1) (k1) the most probable focal element (MPFE) along the limit-state boundary of fv (Z) . The MPFE previous design point, which is based on the previous equivalent boundary of f Z S = v . j j [35] is similar to the most probable point (MPP) in probability theory, which is the point with the The rectangular domain represents the FD at the previous design point. F represents the domain of (k1) most probability density on the limit-state boundary. Compared to other points on the boundary, (k1) (k1) f  v . If the FD is entirely in the F domain, Bel f  v = 100%. In Figure 2, the FD of Z the minimal error of reliability analysis can be achieved by establishing the linear approximation j j (k1) (k1) for the performance function at the MPP [36]. Similarly, the MPFE contains the maximal BPA is partially in the F domain, and Bel f  v is still less than Bel . To satisfy Bel f (Z)  v j j j among the focal elements that are crossed by the limit-state boundary. The searching process of (k1) MPFE Bel , Z is form needs ulated toas move : further into the F domain. Therefore, the equivalent boundary needs to move further toward the F domain. The updated equivalent boundary is constructed as follows: maxmD  ( ) D  jn = 1, 2,...,  (11) (k−1) (k) (k1) (k) (k1) (k) s.t.fv Z =  ( ) f Z S = v , S = S + DS (15) j j j j j where mD ( ) represents the BPA of the focal element where the Z point is located. Note that there (k) (k) (k−1) T where DS denotes the increment of the previous shifting vector. The principle for calculating DS is a difference between v , j = 1, 2,...,n at each iteration step due to the minor difference of Bel . 
j j jT j (k1) (k) is set as Bel f (Z)  v  Bel and is just satisfied. Thus, the mathematical model of DS is Consequently, different MPFEs may be obtained for Equation (11). However, the difference between j j th cre eated MPFEs as: is minor relative to the entire design domain. To ensure efficiency, the unique MPFE is mink sk investigated at each iteration. Equation (11) can be rewritten as: (16) (k1) s.t. Bel f (m Za+ x mD s )  v = Bel ( ) j j (12) k−1 ( ) Equation (16) can be solved by multivariable optimization methods [38]. To further improve s.t.fv Z = ( ) efficiency, the f -function is replaced by the L-function formulated in Equation (13). k−1 ( ) where v represents the performance threshold that has not yet converged. Figure 1. Uncertainty analysis for the performance function. FD: frame of discernment; MPFE: most Figure 1. Uncertainty analysis for the performance function. FD: frame of discernment; MPFE: most probable focal element. probable focal element. Step 2 is to establish linear approximation for the performance function at the central point Z of MPFE: ( ) (k) M(k) M(k) M(k) L Z = f Z + Z− Z f Z (13) ( ) ( ) ( ) ( ) The L-function is used to replace the f-function to calculate Bel, and thereby the optimization processes in Equation (7) no longer requires the calculation of any performance function. Appl. Sci. 2019, 9, x FOR PEER REVIEW 7 of 18 The efficient calculation of Bel has been achieved in the iterative process, but the overall process of EBDO still requires dozens or even hundreds of Bel evaluations due to the nested optimization. To eliminate the nested optimization, a decoupling strategy is proposed similar to that in the probabilistic method [37]. At each iteration step, the reliability constraint is transformed into a (k) deterministic constraint by constructing the shifting vector of S ; and then a deterministic optimization is updated and solved to obtain the current solution. 
In the k-th iteration, the deterministic optimization can be written as:

min v_j
s.t. f(d, Z − S_j^(k)) ≤ v_j, j = 1, 2, ..., n
d^l ≤ d ≤ d^u, X^l ≤ X ≤ X^u   (14)

The shifting vector determines the deviation between the original reliability boundary and the deterministic boundary at the k-th iteration step. For the j-th problem in Equation (14), the formulation of the shifting vector is explained in Figure 2. For convenience of presentation, the constraint contains only two evidence variables, Z = (a, b). F represents the domain of f ≤ v_j^(k−1), and F̄ represents the domain of f > v_j^(k−1). Z^(k−1) is the previous design point, which is based on the previous equivalent boundary f(Z − S_j^(k−1)) = v_j^(k−1). The rectangular domain represents the FD at the previous design point. If the FD is entirely in the F domain, Bel(f ≤ v_j^(k−1)) = 100%. In Figure 2, the FD of Z^(k−1) is partially in the F̄ domain, and Bel(f ≤ v_j^(k−1)) is still less than Bel_j^T. To satisfy Bel(f(Z) ≤ v_j^(k−1)) ≥ Bel_j^T, Z needs to move further into the F domain; therefore, the equivalent boundary needs to move further toward the F̄ domain. The updated equivalent boundary is constructed as follows:

f(Z − S_j^(k)) = v_j^(k−1),  S_j^(k) = S_j^(k−1) + ΔS_j^(k)   (15)

where ΔS_j^(k) denotes the increment of the previous shifting vector. The principle for calculating ΔS_j^(k) is that Bel(f(Z) ≤ v_j^(k−1)) ≥ Bel_j^T is just satisfied. Thus, the mathematical model of ΔS_j^(k) is created as:

min ‖s‖
s.t. Bel( f(Z^(k−1) + s) ≤ v_j^(k−1) ) = Bel_j^T   (16)

Equation (16) can be solved by multivariable optimization methods [38]. To further improve efficiency, the f-function is replaced by the L-function formulated in Equation (13).

Figure 2. Formulation of the shifting vector. FD: frame of discernment.
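Because Bel is a piecewise-constant function of the shift, the equality in Equation (16) can only be met approximately in practice. As a sketch, and only under the assumption that the search is restricted to one fixed direction pointing into the feasible domain (the paper itself uses general multivariable optimization [38]), the smallest sufficient step can be found by bisection:

```python
def shift_increment(bel_of_step, target, t_hi=1.0, iters=60):
    """Approximate Equation (16) along a fixed search direction: find the
    smallest step t such that Bel(f(Z + t * direction) <= v) >= target.
    Along a direction moving into the feasible domain, Bel is a
    non-decreasing piecewise-constant function of t, so bisection applies.
    `bel_of_step(t)` is a hypothetical callable returning Bel at step t."""
    if bel_of_step(0.0) >= target:        # previous shift already sufficient
        return 0.0
    while bel_of_step(t_hi) < target:     # grow the bracket until target is met
        t_hi *= 2.0
    t_lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (t_lo + t_hi)
        if bel_of_step(mid) >= target:
            t_hi = mid
        else:
            t_lo = mid
    return t_hi                           # the shift increment is t_hi * direction
```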
Uncertainty analysis and design optimization are carried out alternately until they meet the following convergence criteria:

Bel_j^(k) ≥ Bel_j^T
| v_j^(k) − v_j^(k−1) | / v_j^(k) ≤ ε_r,  j = 1, 2, ..., n   (17)

where ε_r is the minimal error limit. The solutions (d_j^*, X_j^*), j = 1, 2, ..., n form the final optimal set. The flowchart of the EBRO algorithm is summarized in Figure 3.

5. Application Discussion

In the previous sections, an EBRO method was developed for engineering problems with epistemic uncertainty. This method is especially suitable for the robust design of micro-electromechanical systems (MEMS). On one hand, unlike traditional engineering structural problems, the design of MEMS usually involves micro structure, novel materials, and extreme operating conditions, where epistemic uncertainties inevitably exist. Evidence theory is well suited to deal with such uncertainties. On the other hand, high performance and insensitivity to uncertainties are the fundamental requirements for MEMS design. Over the past two decades, robust optimization for MEMS has gradually attracted the attention of both academia and engineering practice [39–41]. In this section, the method is applied to three MEMS applications: a micro-force sensor, a low-noise image sensor, and a capacitive accelerometer. The features of the proposed approach are investigated in terms of efficiency and accuracy. Performance function evaluations are counted to indicate efficiency, and a reference solution is compared to verify accuracy. The reference solution is obtained by the double-loop method, where sequential quadratic programming [38] is employed for performance optimization and a Monte Carlo simulation [42] is used for robustness assessment.
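The alternating scheme summarized in Figure 3, with the stopping test of Equation (17), can be sketched as follows; every callable and its signature is a hypothetical placeholder, not the paper's implementation:

```python
def ebro_loop(solve_deterministic, assess_bel, update_shift, bel_targets,
              eps_r=1e-3, max_iter=50):
    """Skeleton of the decoupled EBRO iteration: alternate the shifted
    deterministic optimization (Equation (14)) with the robustness
    assessment, and stop by the criteria of Equation (17)."""
    shifts = [0.0] * len(bel_targets)
    v_prev = None
    for _ in range(max_iter):
        designs, v = solve_deterministic(shifts)   # one sub-problem per target
        bels = [assess_bel(d, vj) for d, vj in zip(designs, v)]
        converged = v_prev is not None and all(
            b >= t and abs(vj - vp) <= eps_r * abs(vj)   # Equation (17)
            for b, t, vj, vp in zip(bels, bel_targets, v, v_prev))
        if converged:
            break
        shifts = [s + update_shift(b, t)           # grow shift while Bel < target
                  for s, b, t in zip(shifts, bels, bel_targets)]
        v_prev = v
    return designs, v, bels
```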
Figure 3. The flowchart of the proposed method. EBRO: evidence-theory-based robust optimization; BPA: basic probability assignment. The flowchart comprises the following steps: formulate the EBRO model as Equation (8); calculate the joint BPAs m(D) by Equation (4); convert the model into a series of sub-problems as Equation (9); set k = 1 and S_j^(1) = 0, and solve Equation (14) to obtain (d_j^(1), X_j^(1)), j = 1, 2, ..., n; then, with k := k + 1, establish the approximate function as Equation (13), convert the constraints in Equation (9) into the deterministic constraints as Equation (15), and update and solve Equation (14) to obtain (d_j^(k), X_j^(k)); on convergence, end and output (d_j^*, X_j^*).

5.1. A Micro-Force Sensor

A piezoelectric micro-force sensor [43] has several advantages, including a reliable structure, fast response, and simple driving circuits. It has been extensively applied in the fields of precision positioning, ultrasonic devices, micro-force measurement, etc. Given that uncertainties are inevitable in structural sizes and material parameters, robust optimization is essential to ensure the performance of the sensor. As shown in Figure 4, the core part of the micro-force sensor is a piezoelectric cantilever beam, which consists of a piezoelectric film, a silicon-based layer, and two electrodes. The force at the free end causes bending deformation of the beam, which drives the piezoelectric film to output polarization charges through the piezoelectric effect.
The charge is transmitted to the circuit by the electrodes and converted into a voltage signal. According to the theoretical model proposed by Smits et al. [43], this voltage can be formulated as:

U = 3 d31^P S11^Si S11^P h^P h (h + h^P) L F / (K ε33^P w)   (18)

where

K = 4 S11^Si S11^P h^3 h^P + 4 S11^Si S11^P h (h^P)^3 + (S11^P)^2 h^4 + (S11^Si)^2 (h^P)^4 + 6 S11^Si S11^P h^2 (h^P)^2   (19)

where F is the concentrated force; L and w represent the length and width of the beam; h and h^P denote the thicknesses of the silicon-based layer and the piezoelectric film; S11^Si and S11^P are the compliance coefficients of the silicon-based layer and the piezoelectric film; and d31^P and ε33^P are the piezoelectric coefficient and dielectric constant of the piezoelectric film. The constants in Equation (18) include h^P = 5 × 10^−4 mm, S11^P = 18.97 × 10^−12 m²/N, and S11^Si = 7.70 × 10^−12 m²/N. The structural sizes L, w, h and the material parameters d31^P, ε33^P are viewed as evidence variables. The marginal BPAs of the variables are shown in Figure 5, and the nominal values of d31^P and ε33^P are, respectively, 1.8 × 10^−10 C/N and 1.6 × 10^−8 F/m.

Figure 4. A piezoelectric cantilever beam.

Figure 5. Marginal BPAs of variables in the micro-force sensor problem. BPA: basic probability assignment; FD: frame of discernment.
In engineering, the greater the output voltage, the higher the theoretical accuracy of the sensor. Thus, U is regarded as the objective function. The design variables are L, w and h. The constraints of shape, stiffness and strength are considered, which are expressed as η ≥ 0.83, δ ≤ 2.5 μm, and σ ≤ 32.0 MPa, where η is the ratio of w to h, δ denotes the displacement at the free end of the beam, and σ denotes the maximum stress of the beam. σ and δ can be written as [43]:

σ = 6 F L S11^Si (S11^P h + S11^Si h^P)(h + h^P) / (K w)
δ = 4 F L^3 S11^Si S11^P (S11^P h + S11^Si h^P) / (K w)   (20)

Due to the uncertainties in the structure, η, δ and σ are also uncertain.
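Equations (18)-(20) can be bundled into one helper for experimentation. The function name and signature are illustrative; the exponents of d31 and ε33 below are assumptions (the extracted text preserves only the mantissas), and K follows Equation (19), so absolute magnitudes should be treated as indicative only:

```python
def beam_response(F, L, w, h,
                  hp=5e-4, s11_si=7.70e-12, s11_p=18.97e-12,
                  d31=1.8e-10, eps33=1.6e-8):
    """Output voltage U (Equation (18)), tip displacement delta and maximum
    stress sigma (Equation (20)) of the piezoelectric cantilever, with K
    from Equation (19). Default material constants follow the text; the
    d31/eps33 exponents are assumed, not confirmed by the source."""
    K = (4 * s11_si * s11_p * h**3 * hp + 4 * s11_si * s11_p * h * hp**3
         + s11_p**2 * h**4 + s11_si**2 * hp**4
         + 6 * s11_si * s11_p * h**2 * hp**2)
    U = 3 * d31 * s11_si * s11_p * hp * h * (h + hp) * L * F / (K * eps33 * w)
    sigma = 6 * F * L * s11_si * (s11_p * h + s11_si * hp) * (h + hp) / (K * w)
    delta = 4 * F * L**3 * s11_si * s11_p * (s11_p * h + s11_si * hp) / (K * w)
    return U, delta, sigma
```

Consistent units must be supplied by the caller; the source mixes mm-based sizes with SI material constants.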
Theoretically, the three constraints should be modeled as reliability constraints. To focus on the topic of robust optimization, the constraints are considered deterministic in this example; that is, the nominal values of the uncertain variables are used to calculate η, δ and σ. In summary, the EBRO problem is formulated as follows:

max U_0, max Bel( U(X, P) ≥ U_0 )
s.t. η ≥ 0.83, δ(X̄) ≤ 2.5 μm, σ(X̄) ≤ 32.0 MPa
0.40 mm ≤ L ≤ 1.20 mm, 0.06 mm ≤ w ≤ 0.10 mm, 0.04 mm ≤ h ≤ 0.10 mm   (21)

where X = (L, w, h) and P = (d31^P, ε33^P); U_0 represents the performance threshold, which is set as a deterministic design variable. The steps to solve this problem using the proposed method are detailed below. Firstly, according to the marginal BPAs of the five variables in Figure 5, the joint BPAs of the focal elements (8^5 = 32,768 in total) are calculated by Equation (4). Secondly, Equation (21) is converted into a series of sub-problems, which are expressed as:

max U_0, j = 1, 2, ..., 5
s.t. Bel( U(X, P) ≥ U_0 ) ≥ Bel_j^T
η ≥ 0.83, δ(X̄) ≤ 2.5 μm, σ(X̄) ≤ 32.0 MPa
0.40 mm ≤ L ≤ 1.20 mm, 0.06 mm ≤ w ≤ 0.10 mm, 0.04 mm ≤ h ≤ 0.10 mm
Bel^T = (80%, 85%, 90%, 95%, 99.9%)   (22)

where Bel^T represents a series of target Bel values for the proposition U ≥ U_0, which are given by the designer according to engineering experience or quality standards. Thirdly, the iteration starts from the initial point (L^(0), w^(0), h^(0), U^(0)) = (0.60 mm, 0.08 mm, 0.06 mm, 35.6 mV), where L^(0), w^(0), h^(0) are selected by the designer and U^(0) is calculated by Equation (18). At each iteration step, the approximate function of U is established as in Equation (13), and then 10 values of ΔS^(k) are obtained through Equation (16); correspondingly, the 10 optimization problems of the form of Equation (14) are updated. By solving them, the optimal set in the current iteration is obtained. After four iteration steps, the optimal set has converged, as listed in Table 1. The results show that the performance threshold decreases gradually with increasing Bel^T.
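The first step above, building the joint BPAs of the 8^5 = 32,768 focal elements, can be sketched as follows. Equation (4) itself is not reproduced in this excerpt; the sketch assumes the usual construction for independent evidence variables, i.e. the product of the marginal BPAs:

```python
import itertools

def joint_bpas(marginals):
    """Joint BPA of each focal element as the product of marginal BPAs,
    assuming independent evidence variables (the sketch's assumption for
    Equation (4)). `marginals` is a list of {interval: bpa} dicts, one
    per variable."""
    joint = {}
    for combo in itertools.product(*(mv.items() for mv in marginals)):
        focal = tuple(interval for interval, _ in combo)   # a hyperrectangle
        bpa = 1.0
        for _, p in combo:
            bpa *= p
        joint[focal] = bpa
    return joint

# Five variables with eight equal-BPA subintervals each, as in this example:
eight = {(i, i + 1): 1.0 / 8 for i in range(8)}
focal_elements = joint_bpas([eight] * 5)   # 8^5 = 32,768 focal elements
```

The interval endpoints above are placeholders; the actual subintervals come from the marginal BPAs in Figure 5.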
In engineering, a designer can intuitively select the optimal design option from the optimal set by balancing product performance and robustness. In terms of accuracy, the solutions of the proposed method are very close to the corresponding reference solutions, and the maximal error is only 2.5%, under the condition Bel^T = 95%. In terms of efficiency, the proposed method calls the performance function only 248 times, and the computational cost is much less than that of evolutionary algorithms [31]. From a mathematical point of view, it is unfair to compare the efficiency of the proposed method with evolutionary algorithms. From the view of engineering practicality, however, the solutions of the proposed method may help designers create a relatively clear picture of the problem with high efficiency and acceptable accuracy.

Table 1. Optimal set of the micro-force sensor problem.

Results | Bel_1^T = 80% | Bel_2^T = 85% | Bel_3^T = 90% | Bel_4^T = 95% | Bel_5^T = 99.9%
Proposed method, X* (mm) | (0.937, 0.086, 0.072) | (0.937, 0.086, 0.072) | (0.937, 0.086, 0.072) | (0.937, 0.086, 0.072) | (0.937, 0.086, 0.072)
Proposed method, (v_j^*, Bel_j^*) | 22.1 mV, 80.4% | 20.0 mV, 85.3% | 18.2 mV, 90.6% | 15.7 mV, 95.5% | 9.6 mV, 99.9%
Reference solution, (v_j^r, Bel_j^r) | 22.4 mV, 80.0% | 20.0 mV, 85.3% | 18.5 mV, 90.0% | 16.1 mV, 95.0% | 9.6 mV, 99.9%

5.2. An Ultra-Low-Noise Image Sensor

Recently, a type of ultra-low-noise image sensor [44] was developed for applications requiring high-quality imaging under extremely low light conditions. Such a sensor is ideally suited to a variety of low-light-level cameras for surveillance, industrial, and medical applications. In application, the sensor and other components are assembled on a printed circuit board (PCB). Due to the mismatch in the thermal expansion coefficients of the various materials, thermal deformation occurs on the PCB under the combined action of self-heating and the thermal environment.
As a result, the imaging quality of the sensor is reduced. Moreover, to acquire more image information under low-light conditions, the sensor is designed in a large format; thus, the imaging quality is more susceptible to deformation. This issue has become a challenging problem in the field and needs to be solved urgently.

A robust optimization problem is considered for the camera module in Figure 6, in which the image-sensor-mounted PCB is fastened to the housing. The sensor is designed in a 4/3-inch optical format and features an array of five-transistor pixels on a 6.5 μm pitch with an active imaging area of 2560 (H) × 2160 (V) pixels. It delivers extreme low-light sensitivity with a read noise of less than 2.0 electrons root mean square (RMS) and a quantum efficiency above 55%. In order to analyze the thermal deformation of the sensor under the operating temperature (20 °C ~ 45 °C), the finite element model (FEM) is created as shown in Figure 7, in which the power dissipation P = (P1, P2) of
the codec chip and the converter is given as 1.2 W and 0.2 W, according to the test data. It can be observed that a certain deformation appears on the sensor die, and the peak–peak value (PPV) of the displacement response reaches about 3.0 μm. Consequently, the image quality of the sensor will decrease. To address the issue, the design objective is set to minimize the PPV, and the design variables X = (X1, X2, X3) are the normal positions of the PCB-fixed points. In engineering, manufacturing errors are unavoidable and power dissipation fluctuates with changing loads, and thereby X and P are treated as evidence variables. Their BPAs are summarized on the basis of limited samples, as listed in Table 2. This robust optimization is constructed as follows:

min v, max Bel( PPV(X, P) ≤ v )   (23)

Figure 6. The camera module (a) with an ultra-low-noise image sensor (b). PCB: printed circuit board.

As mentioned above, the performance function of PPV is implicit and based on the time-consuming FEM, which consists of 88,289
8-node thermally coupled hexahedron elements. The computational time for solving the FEM is about 0.1 h when using a computer with an i7-4710HQ CPU and 8 GB of RAM. To realize parameterization and to reduce the computational cost of obtaining the reference solutions, a second-order polynomial response surface is created for the performance function by sampling the FEM 200 times:

PPV = 2.396 − 9.924 X1 − 6.495 X2 + 14.178 X3 + 0.311 P1 − 0.226 P2 + 8.564 X1^2 + 16.960 X2^2 + 14.104 X3^2 − 0.019 P1^2 + 0.794 P2^2 − 1.540 X1 X2 − 13.168 X1 X3 − 13.822 X2 X3 − 0.074 P1 P2   (24)

Figure 7. The finite element model (FEM) of the image-sensor-mounted PCB. PCB: printed circuit board.
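The response surface of Equation (24) is straightforward to evaluate; the helper below simply transcribes the coefficients as printed (the sign grouping was reconstructed from the garbled extraction):

```python
def ppv_surface(x, p):
    """Second-order response surface for the PPV (Equation (24)).
    x = (X1, X2, X3) in mm, p = (P1, P2) in W; PPV in micrometres."""
    x1, x2, x3 = x
    p1, p2 = p
    return (2.396 - 9.924 * x1 - 6.495 * x2 + 14.178 * x3
            + 0.311 * p1 - 0.226 * p2
            + 8.564 * x1**2 + 16.960 * x2**2 + 14.104 * x3**2
            - 0.019 * p1**2 + 0.794 * p2**2
            - 1.540 * x1 * x2 - 13.168 * x1 * x3 - 13.822 * x2 * x3
            - 0.074 * p1 * p2)
```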
Table 2. The marginal BPAs of variables in the image sensor problem.

Subinterval (X_i, i = 1, 2, 3; mm) | BPA | Subinterval (P1; W) | BPA | Subinterval (P2; W) | BPA
[X̄_i − 0.05, X̄_i − 0.03] | 6.7% | [0.95, 1.05] | 37.5% | [0.55, 0.65] | 6.5%
[X̄_i − 0.03, X̄_i − 0.01] | 24.2% | [1.05, 1.15] | 17.6% | [0.65, 0.75] | 23.8%
[X̄_i − 0.01, X̄_i + 0.01] | 38.3% | [1.15, 1.25] | 5.4% | [0.75, 0.85] | 32.2%
[X̄_i + 0.01, X̄_i + 0.03] | 24.2% | [1.25, 1.35] | 19.1% | [0.85, 0.95] | 12.5%
[X̄_i + 0.03, X̄_i + 0.05] | 6.7% | [1.35, 1.45] | 20.4% | [0.95, 1.05] | 24.9%

In order to analyze the efficiency of the proposed method for problems with different dimensions of uncertainty, three cases are considered: only P is uncertain in Case 1; only X is uncertain in Case 2; and both are uncertain in Case 3. The initial design option is selected as X^(0) = (0.10 mm, 0.10 mm, 0.10 mm), and v^(0) = 2.92 μm is obtained by PPV(X̄^(0), P̄).
After giving Bel^T = (85%, 95%, 99.9%), the original problem is converted into three sub-problems, as in Equation (9). They are solved by the proposed method and by the double-loop method; all results are listed in Table 3. Firstly, the results of the proposed method and the reference solutions are almost identical for all cases, which establishes the validity of the results. Secondly, each of the cases converges
into a stable optimal set after three or four iteration steps; for this problem, the convergence of the proposed method is little affected by the number of uncertain variables. Thirdly, the performance function evaluations (N) increase with the number of uncertain dimensions, while overall the efficiency of the proposed method remains relatively high. Taking Case 3 as an example, even if the FEM is called directly by EBRO, N = 198 means a computational time of only about 20 h.

Table 3. The optimal set of the image sensor problem.

Results | Case 1: 2 Dimensions | Case 2: 3 Dimensions | Case 3: 5 Dimensions
N | 128 | 142 | 198
Iterations | 3 | 4 | 4
Proposed method, X* (mm) | (0.200, 0.185, 0.000); (0.200, 0.186, 0.000); (0.200, 0.188, 0.000) | (0.200, 0.185, 0.000); (0.200, 0.186, 0.000); (0.200, 0.187, 0.000) | (0.200, 0.187, 0.000); (0.200, 0.188, 0.000); (0.200, 0.190, 0.000)
Proposed method, (v_j^*, Bel_j^*) | (1.35 μm, 85.1%); (1.70 μm, 96.5%); (2.00 μm, 100.0%) | (1.41 μm, 85.8%); (1.55 μm, 96.4%); (1.65 μm, 100.0%) | (1.74 μm, 86.4%); (2.06 μm, 96.1%); (2.62 μm, 100.0%)
Reference solution, (v_j^r, Bel_j^r) | (1.35 μm, 85.1%); (1.69 μm, 95.5%); (2.00 μm, 100.0%) | (1.41 μm, 85.8%); (1.55 μm, 96.4%); (1.65 μm, 100.0%) | (1.71 μm, 85.8%); (2.02 μm, 95.4%); (2.56 μm, 99.9%)

5.3. A Capacitive Accelerometer

The capacitive accelerometer [45] has become very attractive for high-precision applications due to its high sensitivity, low power consumption, wide dynamic range of operation, and simple structure.
The capacitive accelerometer is not only the central element of inertial guidance systems, but also has applications in a wide variety of industrial and commercial problems, including crash detection for vehicles, vibration analysis for industrial machinery, and hovering control for unmanned aerial systems. Most capacitive accelerometers consist of two main modules, the sensing structure and the signal processing circuit; the former plays a critical role in the overall product performance. The sensing structure in this example, shown in Figure 8, mainly includes six parts: a fixed electrode, a movable electrode, a coil, a counter weight, block 1, and block 2. The materials they are made of are listed in Table 4. The capacitance between the two electrodes varies with the vertical displacement of the movable plate under the excitation of acceleration, which can be clearly presented through the finite element simulation in Figure 9: the nodes in the effective area of the movable electrode are offset relative to their original positions under the excitation of acceleration. The increment of capacitance is expressed as [44]:

ΔC = Σ_{i=1}^{N} ε S_i / (h + d_i) − ε S / h   (25)

where ε is the dielectric constant; h represents the original distance between the electrodes; S denotes the effective area of the movable electrode; d_i is the displacement response of the i-th node, and S_i is the area of the corresponding element. Note that the performance function is based on the FEM, which contains 114,517 8-node thermally coupled hexahedron elements in total, and it takes about 1/3 h to solve each time when using a personal computer. The displacement of the movable electrode, in addition to the response to acceleration, may be caused by varying ambient temperature. This can be seen from the simulation result in Figure 9, where the load is changed from acceleration to varying temperature.
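Equation (25) is garbled in the extracted text; the sketch below assumes it sums parallel-plate contributions element by element and compares them with the undeformed gap, which is consistent with the symbol definitions given after it. The function name and signature are illustrative:

```python
def delta_capacitance(eps, h, areas, disps):
    """Increment of capacitance under deformation, as reconstructed from
    Equation (25): each element of area S_i at the deformed gap h + d_i
    contributes eps * S_i / (h + d_i), and the undeformed capacitance
    eps * S / h (S = sum of S_i) is subtracted."""
    c_deformed = sum(eps * s / (h + d) for s, d in zip(areas, disps))
    c_original = eps * sum(areas) / h
    return c_deformed - c_original
```

In practice the areas and nodal displacements d_i would be read from the FEM result; widening the gap (d_i > 0) reduces the capacitance.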
Reducing the effect of thermal deformation on accuracy has become a problem that must be faced in the design process. Therefore, the EBRO model of the capacitive accelerometer is formulated as:

min v, max Bel( f = ΔA(X, α)/ΔT ≤ v )   (26)

where f represents the sensitivity of the error to temperature at 35 °C; v denotes the performance threshold; and X, α denote the design vector and the parameter vector. The components of X are the structural sizes shown in Figure 8, and α1, α2, α3 are the mounting angles of the counter weight, block 1, and block 2, respectively. The value ranges are given as 6.0 mm ≤ X1, X2 ≤ 12.0 mm, and the nominal values are ᾱ_i = 0, i = 1, 2, 3. All of them are uncertain variables, caused respectively by machining errors and assembly errors. According to the existing samples, the marginal BPAs are listed in Figure 10.

Table 4. Materials of the accelerometer parts.

Part | Fixed Electrode | Movable Electrode | Coil | Counter Weight | Block 1 | Block 2
Material | Silicon | Silicon | Copper | Wolfram | Wolfram | Aluminium

Figure 8. The sensing structure of a capacitive accelerometer.
Figure 9. The FEM of the capacitive accelerometer.

Figure 10. The marginal BPAs of variables in the accelerometer problem. BPA: basic probability assignment; FD: frame of discernment.

After being given a series of targets Bel^T = (80%, 85%, 90%, 95%, 99%, 99.9%), Equation (26) can be rewritten as Equation (27):

min v_j, j = 1, 2, ..., 6
s.t. Bel( f = ΔA(X, α)/ΔT ≤ v_j ) ≥ Bel_j^T   (27)

For easy reproduction of the results, the response surface of A(X, α) is constructed as follows:
A = 1.207 X1^2 − 0.430 X1 X2 − 18.06 X1 + 1.004 X2^2 − 13.974 X2 + 100.9 α1^2 − 9.0 α1 + 89.0 α2^2 − 7.2 α2 + 40.9 α3^2 − 6.7 α3 + 144.0   (28)

Next, the EBRO is performed by the proposed method and by the double-loop method. The initial design point is selected as X^(0) = (9.6 mm, 9.6 mm), with f(X̄^(0)) = 0.565 μm/°C. All results are given in Table 5. The proposed method converges to the optimal set after four iteration steps, and each element of the optimal set is very close to that of the reference solution; this indicates, to some extent, the convergence and accuracy of the proposed method. As for efficiency, the proposed method evaluates the performance function 171 times. Compared to the double-loop method (12,842 evaluations), the efficiency of this method has a definite advantage. Given that hundreds of simulations or dozens of hours of computation are acceptable for most engineering applications, it is feasible to directly call the time-consuming simulation model when performing EBRO in practice.

Table 5. The optimal set of the accelerometer problem.

Results | Proposed Method | Reference Solution
X* (mm) | (9.0661, 8.9042); (9.0659, 8.9040); (9.0657, 8.9038); (9.0653, 8.9035); (9.0653, 8.9035); (9.0653, 8.9035) | (9.0661, 8.9042); (9.0660, 8.9040); (9.0657, 8.9039); (9.0654, 8.9036); (9.0648, 8.9031); (9.0644, 8.9028)
(v_j^*, Bel_j^*) | (0.244 μm, 81.8%); (0.302 μm, 86.3%); (0.350 μm, 90.8%); (0.449 μm, 96.1%); (0.607 μm, 99.4%); (0.776 μm, 100.0%) | (0.244 μm, 81.8%); (0.279 μm, 85.2%); (0.338 μm, 90.1%); (0.424 μm, 95.5%); (0.594 μm, 99.2%); (0.733 μm, 99.9%)

On the other hand, the proposed method provides six design options under different robustness requirements in this example. The higher the robustness, the greater the performance threshold.
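As a consistency check, Equation (28) can be evaluated directly. One α1 term is ambiguous in the extraction and is taken here as linear (−9.0 α1) to match the α2, α3 pattern; reassuringly, at the nominal angles the gradient of this surface with respect to (X1, X2) nearly vanishes at the optimum (9.0661, 8.9042) reported in Table 5:

```python
def a_surface(x, a):
    """Response surface of Equation (28) for the accelerometer error
    sensitivity. x = (X1, X2) in mm; a = (a1, a2, a3) are the mounting
    angles (the -9.0*a1 term is a reconstruction of a garbled printing)."""
    x1, x2 = x
    a1, a2, a3 = a
    return (1.207 * x1**2 - 0.430 * x1 * x2 - 18.06 * x1
            + 1.004 * x2**2 - 13.974 * x2
            + 100.9 * a1**2 - 9.0 * a1
            + 89.0 * a2**2 - 7.2 * a2
            + 40.9 * a3**2 - 6.7 * a3 + 144.0)
```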
From a designer's point of view, choosing a lower yield (i.e., Bel = 81.8%) means accepting a higher cost in exchange for a smaller temperature-induced error (i.e., v = 0.244 μm/°C). Usually, the final design option is selected from the optimal set after balancing the cost and performance of the accelerometer. Objectively speaking, the proposed method does not provide a complete Pareto optimal set, but rather solutions under the given conditions. However, for the design of an actual product, the information on Bel can usually be obtained from engineering experience or quality standards. Therefore, the proposed method is suitable for most product design problems.

6. Conclusions

Due to inevitable uncertainties from various sources, the concept of robust optimization is deeply rooted in engineering design. Compared with traditional probability models, evidence theory may be a preferable alternative for modeling uncertainties in robustness optimization, especially in cases of limited samples or conflicting information. In this paper, an effective EBRO method is developed, which provides a computational tool for engineering problems with epistemic uncertainty. The contribution of this study is summarized as follows. Firstly, the improved EBDO model is formulated by introducing the performance threshold as a newly added design variable, and this model can handle the uncertainties involved in both design variables and parameters. Secondly, the original EBRO problem is transformed into a series of sub-problems to avoid double-objective optimization, which greatly reduces the difficulty of solving. Thirdly, an iterative strategy is proposed that drives the robustness analysis and the optimization solution alternately, so that the nested optimization in the sub-problems is decoupled. The proposed method is applied to three MEMS design problems: a micro-force sensor, an image sensor, and a capacitive accelerometer.
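The alternating strategy summarized above can be sketched in miniature: a robustness analysis fixes the performance threshold at the current design, a deterministic sub-optimization then updates the design with the threshold held fixed, and the two steps repeat until both stabilize. The performance function and update rules below are illustrative placeholders, not the paper's models:

```python
def ebro_decoupled(opt_step, robust_step, x0, max_iter=50, tol=1e-8):
    """Alternate (1) robustness analysis at the current design with
    (2) a deterministic optimization holding the threshold fixed,
    so the nested double loop is decoupled into a sequence."""
    x = x0
    v = robust_step(x)               # threshold at the initial design
    for _ in range(max_iter):
        x_new = opt_step(x, v)       # deterministic sub-problem
        v_new = robust_step(x_new)   # updated robustness threshold
        if abs(x_new - x) < tol and abs(v_new - v) < tol:
            x, v = x_new, v_new
            break
        x, v = x_new, v_new
    return x, v

# toy stand-ins: performance (x - 3)^2 + a with a uncertain in [0, 0.5];
# the worst case over a defines the robustness threshold
robust_step = lambda x: (x - 3.0) ** 2 + 0.5
# deterministic step: a damped move toward the unconstrained optimum
# (v is unused in this toy, but a real sub-problem would constrain on it)
opt_step = lambda x, v: x + 0.5 * (3.0 - x)
x_opt, v_opt = ebro_decoupled(opt_step, robust_step, x0=0.0)
```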
In the applications, both finite element simulation models and surrogate models are given. Numerical results show that the proposed method has good engineering practicality owing to its comprehensive performance in terms of efficiency, accuracy, and convergence. This work also provides targeted engineering examples for peers developing novel algorithms. In the future, the proposed method may be extended to more complex engineering problems with dynamic characteristics or coupled multiphysics.

Author Contributions: Conceptualization, Z.H. and S.D.; Data curation, J.X.; Formal analysis, J.X.; Funding acquisition, T.Y.; Investigation, Z.H.; Methodology, Z.H.; Project administration, T.Y.; Resources, S.D. and F.L.; Software, J.X.; Validation, S.D. and F.L.; Visualization, J.X.; Writing–original draft, Z.H.; Writing–review & editing, S.D.

Funding: This research was supported by the Major Program of the National Natural Science Foundation of China (51490662); the Educational Commission of Hunan Province of China (18A403, 17A036, 17C0044); and the Natural Science Foundation of Hunan Province of China (2016JJ2012, 2017JJ2022, 2019JJ40296, 2019JJ40014).

Conflicts of Interest: The authors declare no conflict of interest.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
