INTRODUCTION
Unlike fossil fuels, wind energy is a renewable resource with abundant reserves and broad commercial prospects, and it has been widely applied to power generation [1]. The permanent magnet synchronous generator (PMSG), as the core equipment of the power generation chain, enjoys high market acceptance owing to its suitability for low wind speeds, its low energy consumption and its low subsequent maintenance cost [2, 3]. Unfortunately, the parameters of a PMSG vary under different operating conditions [4], which prevents the controller from working well and may even cause malfunctions; precise estimation of the parameters is therefore essential.
Online parameter estimation methods mainly include the extended Kalman filter [5, 6], the model reference adaptive system [7, 8], the least squares method [9, 10], neural networks [11, 12], genetic algorithms [13, 14] and observer-based methods [15]. In [5], the extended Kalman filter (EKF) was used to estimate the resistance and flux linkage of the generator, and its estimation model was designed in detail; similarly, [6] used the EKF to estimate the machine state and to construct a closed-loop controller with better performance and improved tracking capability. However, the matrix computation of the EKF is relatively heavy, and appropriate Q and R matrices are difficult to choose. In [7, 8], the model reference adaptive system (MRAS) was used for parameter estimation; it solved the equation under-rank problem by fixing one parameter and estimating the other two, and experiments showed good estimation performance. However, MRAS ignores the fact that the machine parameters are strongly coupled, which limits its dynamic performance, and the design of the adaptive law is complicated.
Compared with other methods, the least squares (LS) method in [9] has the advantages of fast convergence and easy implementation, but the accuracy and robustness of its parameter estimates are hard to guarantee. In [10], better estimates were obtained by optimizing the regression and vector matrices of LS with an EKF, but the data saturation problem of LS persists. Ref. [11] used a neural network (NN) to estimate the machine resistance and flux linkage, while [12] estimated only the flux linkage. Both designed the estimation model in detail and obtained good accuracy thanks to the high estimation capability of NNs; however, if the tuning criterion is not implemented properly, the method falls into a local minimum or overfits. Refs. [13, 14] proposed genetic algorithms (GA) to estimate parameters; this approach has small error and eliminates the need for a priori knowledge of the machine parameters, but it suffers from premature convergence and heavy computation, and the under-rank problem of the equations is ignored. The observer-based method in [15] estimates parameters well, but its robustness is insufficient when dealing with strongly coupled parameters.
With measured data and a suitable objective function, automatic parameter estimation can be realised by bionic search optimization. In particular, the PSO algorithm benefits from simple implementation, high search speed and parallel search of the solution space, and it is powerful for multi-parameter estimation problems [16, 17]. In [18], PSO was used to estimate machine parameters; however, PSO tends to fall into local optima and cannot estimate all parameters well.
The low precision of PSO parameter estimation is attributed to its constant algorithm parameters; hence, [19] changed the constant inertia weight of PSO to a linearly decreasing one, and experimental results showed better performance. Ref. [20] applied a Gaussian mutation to the extreme values of PSO to help it jump out of local optima, with superior estimation accuracy. Ref. [21] simplified the velocity term of PSO to improve its convergence speed. Ref. [22] incorporated SA into PSO, and the results showed enhanced accuracy. These improvements perform better than traditional PSO, but they still struggle to prevent PSO from falling into local optima. In addition, an ill-timed start of the optimization operations makes the desired result harder to obtain and often requires human intervention, increasing the workload of researchers. Moreover, a theoretical basis for estimating multiple parameters when the equations are under-ranked is lacking.
To address the under-rank problem and further improve the performance of PSO parameter estimation, a parameter estimation method based on SLPSO is proposed; its self-learning capability is enhanced and human intervention is reduced. The main contributions are summarized as follows:
The negative sequence field-weakening current and the id = 0 current are injected into the d-axis in a time-sharing manner, the same amount of data is collected in the two states, and a full-rank mathematical model for parameter estimation is developed.
The velocity term of PSO is simplified to improve the convergence speed in the later stage.
Moreover, a chaotic decreasing strategy is adopted for the inertia weight to strengthen the global search ability.
The self-learning dense fleeing (SLDF) strategy, based on population density information and Levy flight, is designed to let particles learn deeply according to the population density, preventing premature convergence and the need for human intervention during evolution.
Memory tempering annealing (MTA) is integrated into the PSO so that it explores potentially better areas (exploration), and the greedy algorithm (GA) is introduced late in the evolution to accelerate convergence towards better regions (exploitation). Simulation and experimental results show that the proposed method has better estimation robustness and precision than other PSO variants.
The remainder of this paper is organized as follows. The full-rank estimation mathematical model is designed in Section 2. The principle of the proposed method is described in detail in Section 3. The parameter estimation scheme and the optimization process and steps are described in Section 4. Simulation and experimental results and their analysis are given in Section 5.
Finally, some conclusions are presented in Section 6.
PMSG MODEL
The voltage equation of the PMSG in the d–q coordinate system can be expressed as
(1)
$$\begin{equation} \left\{ \def\eqcellsep{&}\begin{array}{l} u_d = R{i_d} + {L_d} \displaystyle\frac{{{\rm{d}}{i_d}}}{{{\rm{d}}t}} - {\omega _e}{L_q}{i_q}\\[10pt] u_q = R{i_q} + {L_q} \displaystyle\frac{{{\rm{d}}{i_q}}}{{{\rm{d}}t}} + {\omega _e}({L_d}{i_d} + {\psi _m}) \end{array} \right.\end{equation}$$
where R is the stator resistance; ud, uq, id, iq are the d- and q-axis voltages and currents; Ld, Lq are the d- and q-axis stator inductances; ωe is the electrical angular velocity; and ψm is the permanent magnet flux linkage.
The steady-state voltage equation in the d–q coordinate system is usually expressed as
(2)
$$\begin{equation}\left\{ \def\eqcellsep{&}\begin{array}{l} u_d = R{i_d} - {\omega _e}{L_q}{i_q}\\[4pt] u_q = R{i_q} + {\omega _e}({L_d}{i_d} + {\psi _m}) \end{array} \right.\end{equation}$$
The rank of (2) is 2, which poses an under-rank problem when estimating four parameters (the resistance R, the d-axis inductance Ld, the q-axis inductance Lq and the permanent magnet flux ψm). Most scholars employ the strategy of injecting id ≠ 0 (negative sequence field-weakening current) and id = 0 currents into the d-axis to solve the under-rank problem [23].
The injection form is shown in Figure 1.
FIGURE 1 Diagram of the injected form
The same amount of data is collected in the two states id = 0 and id = −2 to obtain the fourth-order full-rank discrete equation, which can be expressed as
(3)
$$\begin{equation}\left\{ \def\eqcellsep{&}\begin{array}{l} {u_{d0}}(k) = - {\omega _e}(k){L_q}{i_{q0}}(k)\\[6pt] {u_{q0}}(k) = R{i_{q0}}(k) + {\omega _e}(k){\psi _m}\\[6pt] {u_{d1}}(k) = R{i_{d1}}(k) - {\omega _e}(k){L_q}{i_{q1}}(k)\\[6pt] {u_{q1}}(k) = R{i_{q1}}(k) + {\omega _e}(k)({L_d}{i_{d1}}(k) + {\psi _m}) \end{array} \right.\end{equation}$$
where k is the current sample index; ud0(k), uq0(k), iq0(k) and ωe(k) are the data sampled at the kth instant during 0–t1 in Figure 1; and ud1(k), uq1(k), id1(k) and iq1(k) are the kth collected data during t1–t2.
ENHANCED SELF-LEARNING PARTICLE SWARM OPTIMIZATION ALGORITHM WITH LEVY FLIGHT
Simple particle swarm optimization
The principle of PSO is to continuously approach the position with the smaller fitness value to obtain the optimal solution of the problem.
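As an illustration, the full-rank discrete model (3) can be evaluated for a candidate parameter set as below. This is a minimal sketch, not the authors' code: the sample values (electrical angular velocity, current samples) are made-up placeholders, and the nominal parameters are taken from Table 1 of this paper.

```python
import numpy as np

def model_voltages(p, we, iq0, id1, iq1):
    """Evaluate the full-rank discrete model (3) for p = (R, Ld, Lq, psi_m).

    Returns the predicted (ud0, uq0, ud1, uq1) for each sample:
    the id = 0 segment uses iq0, the id = -2 segment uses id1, iq1.
    """
    R, Ld, Lq, psi = p
    ud0 = -we * Lq * iq0                    # d-axis voltage, id = 0
    uq0 = R * iq0 + we * psi                # q-axis voltage, id = 0
    ud1 = R * id1 - we * Lq * iq1           # d-axis voltage, id = -2
    uq1 = R * iq1 + we * (Ld * id1 + psi)   # q-axis voltage, id = -2
    return ud0, uq0, ud1, uq1

# Nominal parameters (R, Ld, Lq, psi_m) from Table 1; sample data are assumed.
p_true = (2.875, 4.5e-3, 13.5e-3, 0.17858)
we = np.full(4, 2 * np.pi * 50.0)           # electrical angular velocity samples
iq0 = np.array([5.0, 5.1, 4.9, 5.0])        # q-axis current, id = 0 segment
id1 = np.full(4, -2.0)                      # injected d-axis current
iq1 = np.array([5.2, 5.0, 5.1, 5.0])        # q-axis current, id = -2 segment
ud0, uq0, ud1, uq1 = model_voltages(p_true, we, iq0, id1, iq1)
```

The same function serves as the adjustable model later in the paper: feeding it a candidate parameter vector yields the predicted voltages that are compared against the measured ones.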
The velocity and position of particle i are updated as
(4)
$$\begin{equation}\left\{ \def\eqcellsep{&}\begin{array}{l} v_{{\rm{i}}}^{k + 1} = wv_{{\rm{i}}}^k + {c_1}{r_1} \big(P_{{ibest}}^k - x_{i}^k\big) + {c_2}{r_2}\big(P_{{gbest}}^k - x_{i}^k\big)\\[6pt] x_{i}^{k + 1} = x_{i}^k + v_{{\rm{i}}}^{k + 1} \end{array} \right.\end{equation}$$
Excessively divergent particle velocities in the late stage of the algorithm lead to slow convergence [24]; therefore, the simplified particle swarm optimization (SPSO) is adopted, and (4) is simplified to
(5)
$$\begin{equation}x_{{\rm{i}}}^{k + 1} = wx_{{\rm{i}}}^k + {c_1}{r_1}\big(P_{{ibest}}^k - x_{i}^k\big) + {c_2}{r_2}\big(P_{{gbest}}^k - x_{i}^k\big).\end{equation}$$
Chaos decreasing strategy
The inertia weight w is an important parameter affecting the performance of SPSO. It generally decreases linearly from 0.9 to 0.4, expressed as
(6)
$$\begin{equation}w = w_{\max } - (w_{\max } - w_{\min })\frac{k}{{k_{\max }}},\end{equation}$$
where kmax is the maximum number of iterations.
A larger inertia weight w favours global search; conversely, a smaller one favours local search and convergence. However, SPSO with the linearly decreasing strategy progressively strengthens local search and is prone to falling into local optima. Chaotic mappings, with their merits of ergodicity and randomness, can enhance evolutionary diversity, and the effectiveness of introducing chaotic optimization into PSO was verified in [25, 26].
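The simplified update (5) with the linearly decreasing inertia weight (6) can be sketched as follows. This is an illustrative sketch under assumed defaults (the acceleration factors 1.6 match the paper's experimental settings; the toy positions are placeholders).

```python
import numpy as np

rng = np.random.default_rng(0)

def spso_step(x, p_ibest, p_gbest, k, k_max,
              w_max=0.9, w_min=0.4, c1=1.6, c2=1.6):
    """One simplified-PSO position update: eq. (5) with the linear weight (6).

    x       : (n_particles, n_dims) current positions
    p_ibest : per-particle best positions, same shape as x
    p_gbest : (n_dims,) swarm-best position
    """
    w = w_max - (w_max - w_min) * k / k_max          # eq. (6)
    r1 = rng.random(x.shape)                         # fresh randoms per dimension
    r2 = rng.random(x.shape)
    return w * x + c1 * r1 * (p_ibest - x) + c2 * r2 * (p_gbest - x)  # eq. (5)

# toy usage: 20 particles searching a 4-dimensional parameter space
x = rng.uniform(0, 1, size=(20, 4))
x_new = spso_step(x, p_ibest=x.copy(), p_gbest=x[0], k=1, k_max=200)
```

Note that, unlike standard PSO (4), the position itself is damped by w, so no velocity state needs to be stored.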
The logistic mapping can be expressed as
(7)
$$\begin{equation}z = 4 \times z(1 - z),\end{equation}$$
where the initial value of z lies in (0, 1) and is not equal to 0, 0.25, 0.5 or 1.
The inertia weight improved by chaos theory is updated as
(8)
$$\begin{equation}w = z \times w_{\max } - (w_{\max } - w_{\min })\frac{k}{{k_{\max }}}.\end{equation}$$
This strategy combines the logistic mapping with the linearly decreasing strategy and the random strategy, improving on both, and the monotony of the SPSO population is effectively suppressed.
Self-learning dense fleeing
Organisms flee from areas whose living density grows unsuitable, and the population density is expressed as
(9)
$$\begin{equation}s(i,{P_{ibest}}) = 1 - \frac{{d(i,{P_{ibest}})}}{{{d_{\max }}}},\end{equation}$$
where d(i, Pibest) is the Euclidean distance between particle i and its individual extremum, and dmax is the maximum distance between a particle and the extremum.
As the iteration proceeds, the population gradually becomes denser and new living space urgently needs to be exploited. Levy flight is a random search strategy alternating between short-distance flights and occasional long-distance exploration, obeying the Levy distribution.
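The chaotic weight (7)–(8) and the density measure (9) can be sketched as below. This is a minimal sketch; the initial chaotic state and the example vectors are assumptions for illustration.

```python
import numpy as np

def logistic_chaos(z):
    """Logistic mapping (7); fully chaotic on (0, 1) for the coefficient 4."""
    return 4.0 * z * (1.0 - z)

def chaotic_weight(z, k, k_max, w_max=0.9, w_min=0.4):
    """Chaos-decreasing inertia weight (8): chaotic term plus linear decline."""
    return z * w_max - (w_max - w_min) * k / k_max

def population_density(x_i, p_ibest, d_max):
    """Population density (9): closer to the extremum means a denser value."""
    d = np.linalg.norm(x_i - p_ibest)   # Euclidean distance d(i, Pibest)
    return 1.0 - d / d_max

z = 0.7                                 # initial state: in (0,1), not 0/0.25/0.5/1
for k in range(3):
    z = logistic_chaos(z)               # iterate the chaotic map each generation
    w = chaotic_weight(z, k, k_max=200)
```

In the full algorithm, `population_density` is compared against `rand()` each iteration to decide whether the dense-fleeing (Levy flight) step is triggered.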
Levy flight is introduced into the SPSO update to deepen the evolution of the population, and the Levy flight position update is expressed as
(10)
$$\begin{equation}\left\{ \def\eqcellsep{&}\begin{array}{l} X_i^{k + 1} = X_i^k + Levy \oplus \alpha \\[6pt] Levy = {S_l} \oplus X_i^k \end{array} \right.\end{equation}$$
where α is a step size associated with the scale of the problem and is a random number in every dimension of the particle; (10) can be rewritten as
(11)
$$\begin{equation}X_i^{k + 1} = X_i^k + {S_l} \oplus X_i^k \oplus random\big(size\big(X_i^k\big)\big)\end{equation}$$
The step size Sl is calculated as
(12)
$$\begin{equation}{S_l} = 0.01 \cdot S\end{equation}$$
where the factor 0.01 comes from L/100, L being the typical step length of a walk; otherwise, the Levy flight may become too aggressive and the new solution may jump out of the search domain, wasting computational power.
The step length S is calculated by the Mantegna algorithm for random walks, which can be expressed as [27]
(13)
$$\begin{equation}S = \frac{\mu }{{{{\left| \nu \right|}^{\frac{1}{\beta }}}}}\;\;\;\;\beta \in (0,2]\end{equation}$$
where μ and ν follow Gaussian distributions:
(14)
$$\begin{equation} \mu \sim N\big(0,\sigma _u^2\big),\;\nu \sim N\big(0,\sigma _v^2\big) \end{equation}$$
where
(15)
$$\begin{equation} \sigma _u = {\left\{ {\frac{{\Gamma (1 + \beta )\sin \left(\frac{{\pi \beta }}{2}\right)}}{{\Gamma \left(\frac{{1 + \beta }}{2}\right)\beta {2^{(\beta - 1)/2}}}}} \right\}^{1/\beta }}\;\;\;\;\sigma _v = 1\end{equation}$$
At high PSO population density (s(i, Pibest) > rand()), Levy flight helps PSO flee from overcrowded areas and protects the evolutionary vitality of the population.
Global-domain enhancement
SA accepts
the position of poor fitness with a certain probability [28]. At temperature T, with fitness values fi and fj at the original position i and the new position j, the probability Pij of accepting the new position is
(16)
$$\begin{equation}P_{ij} = {e^{ - \frac{{{f_j} - {f_i}}}{T}}}\end{equation}$$
When Pij > rand(), the new position j is accepted; otherwise, the original position i is kept. The probability of accepting inferior solutions is larger in the early stage of SA, which helps PSO escape local optima.
When a new solution is accepted, the temperature is raised (tempering) to continue searching the promising area. To avoid repeated computation, the number of temperings should not be too large; it is set to 5. Moreover, a memory records the solution with the best fitness value to prevent the forgetfulness of SA.
The above is the MTA, which contributes to the global exploration of PSO. However, PSO evolution degenerates to neighbourhood search in the later stage (when MTA rejects inferior solutions five consecutive times) and its evolutionary potential becomes limited; it is therefore replaced in the later stage by the greedy algorithm (GA), whose principle is simple and whose local search is efficient, to enhance fine local exploitation.
Moreover, the initial value of the GA is the optimal value found by PSO in the late stage, which is closer to the true value than a random initial value and helps acquire the global optimum.
The proposed method
In SLPSO, the chaotic decreasing inertia weight strategy is used to enhance the global search capability, and the dense fleeing learning strategy based on population density is designed to help particles explore new living space.
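The dense-fleeing Levy step, (11)–(15), can be sketched as follows. This is a minimal sketch of Mantegna's algorithm under stated assumptions: β = 1.5 is a common choice (the paper only requires β ∈ (0, 2]), and the candidate vector is a placeholder.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(1)

def levy_step(x, beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm, (13)-(15),
    scale it by 0.01 (= L/100, eq. (12)) and apply the update (11)."""
    sigma_u = ((gamma(1 + beta) * sin(pi * beta / 2))
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=x.shape)   # mu ~ N(0, sigma_u^2), eq. (14)
    v = rng.normal(0.0, 1.0, size=x.shape)       # nu ~ N(0, 1), sigma_v = 1
    s = u / np.abs(v) ** (1 / beta)              # step length S, eq. (13)
    s_l = 0.01 * s                               # damped step S_l, eq. (12)
    return x + s_l * x * rng.random(x.shape)     # eq. (11), element-wise products

# toy usage: perturb a candidate parameter vector (nominal values from Table 1)
x = np.array([2.875, 4.5e-3, 13.5e-3, 0.17858])
x_new = levy_step(x)
```

Because the denominator |ν|^(1/β) can be very small, occasional long jumps occur; the 0.01 factor keeps most steps short, matching the exploration/damping trade-off described above.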
Moreover, MTA is introduced to help the algorithm explore potentially better regions, and the GA is used to enhance the depth and speed of evolution in the later stages of PSO.
The basic steps of SLPSO are as follows:
Algorithm: SLPSO
1: Initialize parameters, sample and record data as in Figure 1, and obtain the initial individual and population extrema.
2: for 1 < k < kmax
3: Update the particle position xi by (5) and evaluate its fitness value f(xi).
4: Compute the fitness difference between the new and old positions, Δf = fj − fi.
5: if Δf < 0 or (Δf > 0 && exp(−Δf/T) > rand()), the particle enters the new position and the annealing operation T = CT is performed; otherwise, keep the original position. // T is the initial temperature and C is the annealing coefficient.
6: if Nt < 5, perform the tempering annealing T = 2CT; otherwise, carry out detailed local exploitation by the GA, inheriting the optimal solution of PSO. // Nt is the number of temperings.
7: Decrease the inertia weight chaotically by (8), and obtain the population density s(i, Pibest) by (9).
8: if s(i, Pibest) > rand(), launch the SLDF strategy to explore new living areas and enhance population diversity.
9: if f(xi) < f(Pibest), update Pibest (Pibest ← xi).
10: if f(Pibest) < f(Pgbest), update Pgbest (Pgbest ← Pibest).
11: if the maximum number of iterations is reached, the memory outputs the optimal parameters; otherwise, continue iterating.
PRINCIPLE OF PARAMETER ESTIMATION
The parameter estimation problem can be transformed into an optimization problem. The basic idea is that the parameters of the adjustable model are continuously adjusted by SLPSO so that the output difference between the reference model and the adjustable model is minimised. Finally, the optimal solution output by SLPSO is taken as the identified parameters.
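The core of the search loop described above can be sketched as follows. This is a compact illustrative sketch, not the authors' implementation: it combines the simplified update (5), the chaotic weight (8) and the Metropolis acceptance (16) with tempering and a best-solution memory, while the SLDF and GA stages are omitted for brevity; the test fitness function and bounds are placeholders.

```python
import math
import random

def slpso(fitness, bounds, n=20, k_max=200, c1=1.6, c2=1.6,
          T=1000.0, C=0.95, w_max=0.9, w_min=0.4, seed=0):
    """Compact SLPSO-style loop: simplified update (5), chaotic weight (8),
    Metropolis acceptance (16) with annealing, best-solution memory.
    SLDF and the late-stage GA of the paper are omitted for brevity."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    f = [fitness(p) for p in x]
    pbest, fbest = [p[:] for p in x], f[:]
    g = min(range(n), key=lambda i: fbest[i])
    gbest, fg = pbest[g][:], fbest[g]
    z = 0.7                                             # chaotic state for (7)
    for k in range(k_max):
        z = 4 * z * (1 - z)                             # logistic map (7)
        w = z * w_max - (w_max - w_min) * k / k_max     # chaotic weight (8)
        for i in range(n):
            cand = [w * x[i][d]
                    + c1 * rng.random() * (pbest[i][d] - x[i][d])
                    + c2 * rng.random() * (gbest[d] - x[i][d])
                    for d in range(dim)]                # simplified update (5)
            fc = fitness(cand)
            dfit = fc - f[i]
            # Metropolis rule (16): always accept better, sometimes accept worse
            if dfit < 0 or math.exp(-dfit / T) > rng.random():
                x[i], f[i] = cand, fc
                T *= C                                  # annealing step
            if f[i] < fbest[i]:
                pbest[i], fbest[i] = x[i][:], f[i]
                if f[i] < fg:
                    gbest, fg = x[i][:], f[i]           # memory of best solution
    return gbest, fg

# toy usage: minimise a sphere function over [-5, 5]^2
best, val = slpso(lambda p: sum(t * t for t in p), [(-5, 5)] * 2)
```

The memory (`gbest`, `fg`) only ever improves, so the occasional acceptance of worse positions aids exploration without losing the best solution found.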
The reference model is expressed as
(17)
$$\begin{equation}y = h(p,I)\end{equation}$$
where h is the function (3); p is the vector of machine parameters, p = (R, Ld, Lq, ψm); I is the system input, I = (id, iq, ωe); and y is the system output, y = (ud, uq).
To estimate the machine parameters, a model with the same structure and adjustable parameters is designed, which can be expressed as
(18)
$$\begin{equation}\hat y = h(\hat p,I)\end{equation}$$
where $\hat p = (\hat R,{\hat L_d},{\hat L_q},{\hat \psi _m})$ are the adjustable model parameters and $\hat y = ({\hat u_d},{\hat u_q})$ is the adjustable model output.
The outputs of the reference model and the adjustable model must be compared to estimate the parameters accurately. The fitness function serves as the criterion by which PSO gauges the precision of the estimated parameters; it can be expressed as
(19)
$$\begin{equation}\left\{ \def\eqcellsep{&}\begin{array}{l} {f_1}({{\hat L}_q}) = \displaystyle\frac{1}{{{{\rm{k}}_{\max }}}}{(u_{d0}(k) - \hat u_{d0}(k))^2}\\[10pt] {f_2}(\hat R,{{\hat \psi }_m}) = \displaystyle\frac{1}{{{{\rm{k}}_{\max }}}}{(u_{q0}(k) - \hat u_{q0}(k))^2}\\[10pt] {f_3}(\hat R,{{\hat L}_q}) = \displaystyle\frac{1}{{{{\rm{k}}_{\max }}}}{(u_{d1}(k) - \hat u_{d1}(k))^2}\\[10pt] {f_4}(\hat R,{{\hat L}_d},{{\hat \psi }_m}) = \displaystyle\frac{1}{{{{\rm{k}}_{\max }}}}{(u_{q1}(k) - \hat u_{q1}(k))^2} \end{array} \right.\end{equation}$$
where $\hat u_{d0}(k)$, $\hat u_{q0}(k)$, $\hat u_{d1}(k)$ and $\hat u_{q1}(k)$ are the d–q axis voltages output by the adjustable model.
All parameters are estimated simultaneously by
(20)
$$\begin{equation}\min [f(\hat p)] = \sum\limits_{i = 1}^4 {{a_i}{f_i}} \end{equation}$$
where ai are the weighting factors, which are all
0.25 because the estimated parameters are equally important.
The principle block diagram of parameter estimation is shown in Figure 2.
FIGURE 2 Block diagram of parameter identification
The steps of parameter estimation are:
1. Initialize the SLPSO parameters.
2. Collect the electrical signals, and obtain the outputs $\hat u_{d0}(k)$, $\hat u_{q0}(k)$, $\hat u_{d1}(k)$ and $\hat u_{q1}(k)$ of the adjustable model from (18).
3. Obtain the initial fitness value f($\hat p$(k)) from (20).
4. Determine the current individual and group parameter extrema $\hat p^{pbest}$ and $\hat p^{gbest}$ from the fitness values, and update the parameters by (5); for example, the update of $\hat R$ can be expressed as
(21)
$$\begin{eqnarray} \hat R(k + 1) &=& w\hat R(k) + {c_1}{r_1}\left[ {\hat R^{pbest} - {{\hat R}}(k)} \right]\nonumber\\ && +\ {c_2}{r_2}\left[ {\hat R^{gbest} - \hat R(k)} \right] \end{eqnarray}$$
where $\hat R^{pbest}$ and $\hat R^{gbest}$ are the individual and group optimal values of $\hat R$, respectively; the other parameters are updated in the same way.
5. Update the inertia weight by (8), and obtain the population density s(i, Pibest) by (9).
6. If the population density is high, launch the SLDF, and use the Metropolis principle to judge whether to accept the new parameters:
(22)
$$\begin{equation}\left\{ \def\eqcellsep{&}\begin{array}{l} \Delta f = f(\hat p(k + 1)) - f(\hat p(k))\\[6pt] p = {\rm{ }}\left\{ \def\eqcellsep{&}\begin{array}{ll} 1 &\Delta f \le 0\\[6pt] {e^{\frac{{ - \Delta f}}{T}}} & \Delta f > 0 \end{array} \right. \end{array} \right.
\end{equation}$$
If p > rand(), update the parameter value and perform the annealing operation; otherwise, keep the original parameter value.
7. Perform the tempering annealing or the GA optimization operations according to the rules.
8. If the maximum number of iterations is reached, the memory outputs the optimal parameters; otherwise, continue iterating.
SIMULATION AND EXPERIMENTAL ANALYSIS
Simulation analysis
To verify the effectiveness of the proposed method, a PMSG vector control system is established in Matlab/Simulink, as shown in Figure 3. The generator parameters are listed in Table 1.
FIGURE 3 Vector control system block diagram
TABLE 1 Generator parameters
Pole pairs: 2
Resistance: 2.875 Ω
Stator d-axis inductance: 4.5 mH
Stator q-axis inductance: 13.5 mH
Permanent magnet flux: 0.17858 Wb
Rated power: 1.0 kW
Rated speed: 1500 rpm
Rated torque: 15 N·m
The parameters of the tested algorithms are all set as follows: the population size is 20, the number of iterations is the ratio of the running time to the sampling time, the acceleration factors c1 and c2 are 1.6, the annealing temperature T and the coefficient C are 1000 and 0.95, respectively, the simulation runs for 0.2 s, and the system sampling frequency is 10 kHz.
The actual system is disturbed by uncertain factors and contains random errors.
Therefore, SLPSO, memory tempering annealing PSO (MTAPSO), SAPSO and PSO are tested under different working conditions; each independently estimates the machine parameters ten times, and the average value is taken as the final output.
Working condition 1
The estimation results and errors in the operating state with a torque of 10 N·m and a speed of 1000 r/min are shown in Table 2.
TABLE 2 Results of parameter estimation under condition 1 (PSO / SAPSO / MTAPSO / SLPSO)
R (Ω): 3.203 / 3.111 / 3.005 / 2.929
Error (%): 11.409 / 8.209 / 4.522 / 1.878
Ld (mH): 4.157 / 4.350 / 4.424 / 4.557
Error (%): −7.622 / −3.333 / −1.689 / 1.289
Lq (mH): 13.255 / 13.652 / 13.621 / 13.592
Error (%): −1.815 / 1.126 / 0.896 / 0.681
ψm (Wb): 0.1703 / 0.1727 / 0.1747 / 0.1759
Error (%): −4.637 / −3.293 / −2.173 / −1.501
Estimation time (s): 0.068 / 0.062 / 0.055 / 0.045
Fitness value: 7.388 / 5.325 / 3.310 / 2.502
Working condition 2
Temperature has a great influence on the machine parameters; after the test machine runs for a period, its parameters become R = 3.1625 Ω, Ld = 4.635 mH, Lq = 14.175 mH and ψm = 0.169651 Wb. Table 3 gives the estimation results and errors in the operating state with a torque of 15 N·m and a speed of 1500 r/min.
TABLE 3 Results of parameter estimation under condition 2 (PSO / SAPSO / MTAPSO / SLPSO)
R (Ω): 2.794 / 2.886 / 2.990 / 3.225
Error (%): −11.653 / −8.743 / −5.455 / 1.976
Ld (mH): 4.087 / 4.304 / 4.723 / 4.711
Error (%): 11.823 / −7.141 / 1.899 / 1.640
Lq (mH): 13.740 / 14.392 / 14.355 / 14.289
Error (%): −3.069 / 1.531 / 1.270 / 0.804
ψm (Wb): 0.1785 / 0.1757 / 0.1737 / 0.1724
Error (%): 5.216 / 3.566 / 2.387 / 1.620
Estimation time (s): 0.076 / 0.064 / 0.057 / 0.046
Fitness value: 8.094 / 5.892 / 3.681 / 2.651
The data in Tables 2 and 3 show that PSO is prone to falling into local optima when dealing with optimization problems with strongly coupled parameters; its accuracy is poor (the maximum estimation error exceeds 11%) and its convergence is slow. The improved SAPSO, MTAPSO and SLPSO are more accurate than PSO, and the estimation accuracy of SLPSO is within 2%, which is 3.44% better than MTAPSO and 6.29% better than SAPSO.
When the working conditions change, the accuracy of SLPSO remains within 2%; its performance is less affected by external disturbances, and its robustness is better.
Experimental verification
This paper uses RT-LAB to implement hardware-in-the-loop simulation (HILS) of the machine drive system. The RT-LAB experimental platform is shown in Figure 4. The DSP controller is a TMS320F2812, which runs the algorithm, and RT-LAB (OP5600) is used to model the machine and the inverter. The experimental test conditions are consistent with the simulation.
FIGURE 4 RT-LAB experiment platform. (a) Test bench (b) HILS configuration
Working condition 1
Figures 5–7 show the parameter estimation results and the fitness curves; the estimated parameters are listed in Table 4.
FIGURE 5 Parameter estimation results of R and ψm under condition 1. (a) PSO (b) SAPSO (c) MTAPSO (d) SLPSO
FIGURE 6 Parameter estimation results of Ld and Lq under condition 1. (a) PSO (b) SAPSO (c) MTAPSO (d) SLPSO
FIGURE 7 Curve of the fitness function under condition 1
TABLE 4 Experimental results of parameter estimation under condition 1 (PSO / SAPSO / MTAPSO / SLPSO)
R (Ω): 3.211 / 3.116 / 3.011 / 2.930
Error (%): 11.687 / 8.383 / 4.730 / 1.913
Ld (mH): 4.152 / 4.347 / 4.579 / 4.558
Error (%): −7.733 / −3.400 / 1.756 / 1.289
Lq (mH): 13.251 / 13.653 / 13.628 / 13.593
Error (%): −1.844 / 1.133 / 0.948 / 0.689
ψm (Wb): 0.1690 / 0.1720 / 0.1745 / 0.1759
Error (%): −5.365 / −3.685 / −2.285 / −1.501
Estimation time (s): 0.390 / 0.380 / 0.370 / 0.260
Fitness value: 7.423 / 5.337 / 3.319 / 2.504
The estimation curves of R and ψm by PSO approach steady state at 390 ms, and the estimate of R deviates from the true value by nearly 11.7%; the SAPSO estimation curve stabilizes within 8.5% of the true value at 380 ms, and MTAPSO stabilizes within 4.8% of the true value at 370 ms.
Compared with the other three methods, the SLPSO estimation curve converges faster and its estimation error is within 2% at 260 ms; moreover, its estimation error of 1.913% is 0.6, 0.77 and 0.84 times smaller than that of MTAPSO, SAPSO and PSO, respectively, demonstrating that the proposed method has a favourable global self-decoupling ability when dealing with strongly coupled parameters.
Figure 6 shows that the inductance estimation curve of PSO fluctuates greatly, and its error is 4.5% higher than that of SAPSO. The inductance estimation accuracy of MTAPSO is 1.5% higher than that of SAPSO. The search domain of SLPSO, with its chaotically decreasing inertia weight, is broader, which helps it escape local optima, and its final inductance estimation accuracy remains within 1.3%.
Figure 7 shows that PSO falls into a local optimum, causing its fitness curve to converge to 7.388 at 390 ms. The fitness values of SAPSO and MTAPSO are smaller than that of PSO, at 5.337 and 3.319, respectively.
MTAPSO stabilizes at 370 ms, while SLPSO converges to 2.504 within 260 ms, showing that SLPSO has better accuracy and convergence speed than the other methods.
Working condition 2
Figures 8–10 show the parameter estimation results and the fitness curves under condition 2; the estimated parameters are listed in Table 5.
FIGURE 8 Parameter estimation results of R and ψm under condition 2. (a) PSO (b) SAPSO (c) MTAPSO (d) SLPSO
FIGURE 9 Parameter estimation results of Ld and Lq under condition 2. (a) PSO (b) SAPSO (c) MTAPSO (d) SLPSO
FIGURE 10 Curve of the fitness function under condition 2
TABLE 5 Experimental results of parameter estimation under condition 2 (PSO / SAPSO / MTAPSO / SLPSO)
R (Ω): 2.782 / 2.884 / 2.977 / 3.225
Error (%): −12.032 / −8.806 / −5.866 / 1.976
Ld (mH): 4.082 / 4.387 / 4.725 / 4.713
Error (%): 11.931 / −5.351 / −1.942 / 1.683
Lq (mH): 13.701 / 14.408 / 14.373 / 14.290
Error (%): −3.344 / 1.644 / 1.397 / 0.811
ψm (Wb): 0.1791 / 0.1761 / 0.1743 / 0.1724
Error (%): 5.570 / 3.801 / 2.740 / 1.620
Estimation time (s): 0.48 / 0.42 / 0.380 / 0.260
Fitness value: 8.318 / 5.990 / 3.747 / 2.657
Figure 8 shows the estimation curves of R and ψm by the four algorithms when the machine parameters and operating conditions change. The estimation curve of PSO fluctuates greatly, with an error exceeding 12%. The curves of SAPSO, MTAPSO and SLPSO also fluctuate; SLPSO fluctuates least, its error is only 0.095% higher than under working condition 1, and its estimation accuracy remains better than the other methods.
Figure 9 shows the inductance estimation curves of the four algorithms. Changes in working conditions and parameters cause the system to fluctuate. PSO is strongly affected, its inductance estimation accuracy dropping by 4.2%; the accuracies of the optimized SAPSO, MTAPSO and SLPSO drop by 1.95%, 0.45% and 0.39%, respectively. The accuracy of SLPSO decreases least, and its error remains within 2%.
Figure 10 shows that the fitness curve of PSO fluctuates greatly under the increased disturbance.
SAPSO and MTAPSO incorporate SA, which improves their estimation accuracy, and MTAPSO, with tempering and memory, is more accurate than SAPSO. The fitness value of SLPSO, with the SLDF and global-domain enhancement strategies, is smaller than that of the other methods, showing that its estimation accuracy and speed are better and that changes in working conditions have little effect on it.
In conclusion, the accuracy and speed of PSO parameter estimation are unsatisfactory. The estimation accuracy of SAPSO and MTAPSO is better than that of PSO. Moreover, the estimation accuracy and speed of the proposed SLPSO are better than those of the other three schemes, and it exhibits better robustness when the working conditions and parameters change.
It is worth noting that hardware circuit connections, signal transmission, electromagnetic interference and noise can affect the parameter estimation, which explains some differences between experiment and simulation. Robust estimation methods based on the maximum correntropy criterion (MCC) were proposed in [29–31] to eliminate unreasonable estimates caused by data errors. Therefore, robust estimation methods, longer distances between devices, additional shielding and repeated experimental tests can better suppress the effects of various interferences. Certainly, as technology develops, this approach will become easier to implement and more effective in practice.
In addition, the inverter nonlinearity reduces the estimation accuracy of every parameter when the PMSG runs at low speed; otherwise, it mainly affects the estimation accuracy of R. This explains why the estimation accuracy of R in this paper is worse than that of the other parameters.
Therefore, compensation of the inverter nonlinearity and implementation of robust estimation will be key directions of future research.
CONCLUSION
To overcome the under-rank problem of the estimation equations and the vulnerability of PSO to local optima, a novel SLPSO-based machine parameter estimation method is proposed. The following conclusions are drawn from the analysis of the experimental results under different scenarios.
The full-rank estimation equations are obtained by injecting the id = 0 and negative sequence field-weakening currents in a time-sharing manner, and the potential divergence of the optimal solution is avoided.
The chaotic inertia weight helps SLPSO explore potentially better regions, and the SLDF based on population density information and Levy flight is designed so that the algorithm can adaptively perform deep learning or exploitation operations, avoiding population monotony and the need for human intervention.
A global-domain enhancement strategy is devised, i.e.
MTA as a tool to facilitate the algorithm in enhancing deep learning and guaranteeing eco‐activity, and GA for accelerating the algorithm in fine‐grained mining and guaranteeing better convergence to better confidence intervals.It is still able to estimate the parameters well under different parameters and working conditions, its estimation accuracy is controlled at more than 98% (estimated error is 1.976%), the estimation error is attenuated by a multiple of 0.67, 0.78 and 0.84 compared to the conventional MTAPSO, SAPSO and PSO, respectively, and the requirements of high‐performance controllers and fault detection demands can be better fulfilled.NOMENCLATURER̂$\hat R$The resistance as to be estimatedL̂d${\hat L_d}$The d‐axis inductance as to be estimatedL̂q${\hat L_q}$The q‐axis inductance as to be estimatedψ̂m${\hat \psi _m}$The flux linkage as to be estimatedΓThe standard Gamma function⊕The element‐by‐element multiplicationc1The acceleration coefficientc2The acceleration coefficientLThe typical length scaleMTAMemory tempering annealingPgbestThe best position of the particle swarmPibestThe best position found by the particle iPSOParticle swarm optimizationr1The random number between 0 and 1r2The random number between 0 and 1rand()The random number between 0 and 1SASimulated annealingSAPSOSimulated annealed PSOviThe velocity of the particle iwmaxThe initial inertia weightwminThe minimum inertia weightxiThe position of the particle iACKNOWLEDGEMENTSThis work was supported by the National Natural Science Foundation of China under Grant 52277034, Hunan Education Department Science Research Project under Grant Number 21C0747 and 20C0170, Changsha Science and Technology Plan Project under Grant kq2105001.CONFLICT OF INTERESTThe authors declare no conflict of interest.DATA AVAILABILITY STATEMENTThe data that support the findings of this study are available from the corresponding author upon reasonable request.REFERENCESWang, T., Gao, M., Mi, D., et al.: Dynamic 
equivalent method of PMSG‐based wind farm for power system stability analysis. IET Gener. Transm. Distrib. 14(17), 3488–3497 (2020)Xing, P., Fu, L., Wang, G., et al.: A compositive control method of low‐voltage ride through for PMSG‐based wind turbine generator system. IET Gener. Transm. Distrib. 12(1), 117–125 (2018)Ahmed, H., Bhattacharya, A.: PMSG‐based VS‐WECS for constant active power delivery to standalone load using direct matrix converter‐based SST with BESS. IET Gener. Transm. Distrib. 13(10), 1757–1767 (2019)Sel, A., Sel, B., Kasnakoglu, C.: GLSDC based parameter estimation algorithm for a PMSM model. Energies 14(3), 611 (2021)Li, X., Kennel, R.: General formulation of Kalman‐filter‐based online parameter identification methods for VSI‐fed PMSM. IEEE Trans. Ind. Electron. 68(4), 2856–2864 (2020)Yang, H., Yang, R., Hu, W., et al.: FPGA‐based sensorless speed control of PMSM using enhanced performance controller based on the reduced‐order EKF. IEEE J. Emerging Sel. Top. Power Electron. 9(1), 289–301 (2019)Loria, A., Panteley, E., Maghenem, M.: Strict Lyapunov functions for model reference adaptive control: Application to Lagrangian systems. IEEE Trans. Autom. Control 64(7), 3040–3045 (2018)Zhao, H., Eldeeb, H.H., Wang, J., et al.: Parameter identification based online noninvasive estimation of rotor temperature in induction motors. IEEE Trans. Ind. Appl. 57(1), 417–426 (2021)Feng, G., Lai, C., Mukherjee, K., et al.: Current injection‐based online parameter and VSI nonlinearity estimation for PMSM drives using current and voltage DC components. IEEE Trans. Transp. Electrif. 2(2), 119–128 (2016)De Souza, D.A., Batista, J.G., Vasconcelos, F.J., et al.: Identification by recursive least squares with Kalman Filter (RLS‐KF) applied to a robotic manipulator. IEEE Access 9, 63779–63789 (2021)Liu, K., Zhu, Z.Q.: Position‐offset‐based parameter estimation using the Adaline NN for condition monitoring of permanent‐magnet synchronous machines. IEEE Trans. Ind. 
Electron. 62(4), 2372–2383 (2015)Ortombina, L., Pasqualotto, D., Tinazzi, F., et al.: Magnetic model identification of synchronous motors considering speed and load transients. IEEE Trans. Ind. Appl. 56(5), 4945–4954 (2020)Accetta, A., Alonge, F., Cirrincione, M., et al.: GA‐based off‐line parameter estimation of the induction motor model including magnetic saturation and iron losses. IEEE Open J. Ind. Appl. 1, 135–147 (2020)Wei, H., Tang, X.S.: A genetic‐algorithm‐based explicit description of object contour and its ability to facilitate recognition. IEEE Trans. Cybern. 45(11), 2558–2571 (2015)Chen, C.S., Chen, S.K., Chen, L.Y.: Disturbance observer‐based modeling and parameter identification for synchronous dual‐drive ball screw gantry stage. IEEE/ASME Trans. Mechatron. 24(6), 2839–2849 (2019)Liu, Z.H., Wei, H.L., Zhong, Q.C., et al.: GPU implementation of DPSO‐RE algorithm for parameters identification of surface PMSM considering VSI nonlinearity. IEEE J. Emerging Sel. Top. Power Electron. 5(3), 1334–1345 (2017)Liu, Z.H., Wei, H.L., Li, X.H., et al.: Global identification of electrical and mechanical parameters in PMSM Drive based on dynamic Self‐Learning PSO. IEEE Trans. Power Electron. 33(12), 10858–10871 (2018)Calvini, M., Carpita, M.: PSO‐based self‐commissioning of electrical motor drives. IEEE Trans. Ind. Electron. 62(2), 768–776 (2015)Tang, X., Xie, X., Fan, B., et al.: A fault‐tolerant flow measuring method based on PSO‐SVM with transit‐time multipath ultrasonic gas flowmeters. IEEE Trans. Instrum. Meas. 67(5), 992–1005 (2018)Sarangi, A., Samal, S., Sarangi, S.K.: Analysis of Gaussian & Cauchy mutations in modified particle swarm optimization algorithm. In: IEEE International Conference on Advanced Computing and Communication Systems (ICACCS), pp. 463–467. 
IEEE, Piscataway, NJ (2019)Lin, C.C., Deng, D.J., Kang, J.R., et al.: A dynamical simplified swarm optimization algorithm for the multi‐objective annual crop planning problem conserving groundwater for sustainability. IEEE Trans. Ind. Inf. 17(6), 4401–4410 (2021)Pan, X., Xue, L., Lu, Y., et al.: Hybrid particle swarm optimization with simulated annealing. Multimed. Tools. Appl. 78(8), 29921–29936 (2019)Liu, Z.H., Wei, H.L., Zhong, Q.C., et al.: Parameter estimation for VSI‐Fed PMSM based on a dynamic PSO with learning strategies. IEEE Trans. Power Electron. 32(4), 3154–3165 (2017)Yuan, Q., Yin, G.: Analyzing convergence and rates of convergence of particle swarm optimization algorithms using stochastic approximation methods. IEEE Trans. Autom. Control 60(7), 1760–1773 (2015)Li, Y., Feng, B., Wang, B.: Joint planning of distributed generations and energy storage in active distribution networks: A Bi‐Level programming approach. Energy 245, 123226 (2022)Peng, F., Hu, S., Gao, Z., et al.: Chaotic particle swarm optimization algorithm with constraint handling and its application in combined bidding model. Comput. Electr. Eng. 95, 107407 (2021)Mao, X., Song, S., Ding, F.: Optimal BP neural network algorithm for state of charge estimation of lithium‐ion battery using PSO with Levy flight. J. Energy Storage 49, 104139 (2022)Koshka, Y., Novotny, M.A.: Comparison of D‐wave quantum annealing and classical simulated annealing for local minima determination. IEEE J. Sel. Areas Inf. Theory 1(2), 515–525 (2020)Chen, Y., Ma, J., Zhang, P., et al.: Robust state estimator based on maximum exponential absolute value. IEEE Trans. Smart Grid 8(4), 1537–1544 (2015)Chen, Y., Liu, F., Mei, S., et al.: A robust WLAV state estimation using optimal transformations. IEEE Trans. Power Syst. 30(4), 2190–2191 (2014)Chen, Y., Yao, Y., Zhang, Y.: A robust state estimation method based on SOCP for integrated electricity‐heat system. IEEE Trans. Smart Grid 12(1), 810–820 (2020)
IET Generation Transmission & Distribution – Wiley
Published: Mar 1, 2023
Keywords: local optimization; memory tempering annealing; particle swarm optimization; permanent magnet synchronous generator; self‐learning dense fleeing strategy