An Algorithm Derivative-Free to Improve the Steffensen-Type Methods
Hernández-Verón, Miguel A.; Yadav, Sonia; Magreñán, Ángel Alberto; Martínez, Eulalia; Singh, Sukhjit
2021-12-21
1 Department of Mathematics and Computation, University of La Rioja, 26006 Logroño, Spain; mahernan@unirioja.es
2 Department of Mathematics, Dr BR Ambedkar National Institute of Technology, Jalandhar 144011, India; sonia.ma.19@nitj.ac.in (S.Y.); sukhjitmath@gmail.com (S.S.)
3 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 València, Spain; eumarti@mat.upv.es
* Correspondence: angel-alberto.magrenan@unirioja.es

Abstract: Solving equations of the form $H(x) = 0$ is one of the most frequently faced problems in mathematics and in other sciences such as chemistry or physics. In general, this kind of equation cannot be solved without iterative methods. Steffensen-type methods, which are defined by means of divided differences and are therefore derivative-free, are usually considered for these problems when $H$ is a non-differentiable operator, due to their accuracy and efficiency. However, in general, the accessibility of these iterative methods is small. The main interest of this paper is to improve the accessibility of Steffensen-type methods, that is, the set of starting points from which those methods converge to a root. We achieve this improvement by means of a predictor-corrector iterative process: as predictor we use an iterative process based on symmetric divided differences, which has good accessibility, and as corrector we consider the Center-Steffensen method, which has quadratic convergence. In addition, the dynamical studies presented show, in an experimental way, that this iterative process also improves the region of accessibility of Steffensen-type methods.
Moreover, we analyze the semilocal convergence of the proposed predictor-corrector iterative process in two cases: when $H$ is differentiable and when $H$ is non-differentiable. Summing up, we present an effective alternative to Newton's method for non-differentiable operators, where Newton's method cannot be applied. The theoretical results are illustrated with numerical experiments.

Keywords: iterative method; local convergence; non-differentiable operator; dynamics; Steffensen's method

MSC: 47H99; 65H10

Citation: Hernández-Verón, M.A.; Yadav, S.; Magreñán, Á.A.; Martínez, E.; Singh, S. An Algorithm Derivative-Free to Improve the Steffensen-Type Methods. Symmetry 2022, 14, 4. https://doi.org/10.3390/sym14010004
Academic Editor: Vasile Berinde
Received: 13 November 2021; Accepted: 13 December 2021; Published: 21 December 2021
Copyright © 2021 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the CC BY license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

One of the most studied problems in numerical mathematics is finding the solution of nonlinear systems of equations

$H(x) = 0$,   (1)

where $H : W \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is a nonlinear operator, $H \equiv (H_1, H_2, \ldots, H_m)$ with $H_i : W \subseteq \mathbb{R}^m \to \mathbb{R}$, $1 \leq i \leq m$, and $W$ is a non-empty open convex domain. In this context, iterative methods are a powerful tool for solving these equations [1]. Many applied problems can be reduced to solving systems of nonlinear equations, which is one of the most basic problems in mathematics. These problems arise in all scientific areas, in mathematics and physics and especially in a diverse range of engineering applications [2,3].
Applications in the geometric theory of the relativistic string can be found in [4], as well as in solving nonlinear equations in porous media problems [5,6], nonlinear stochastic differential equations (by the first-order finite difference method) [7], nonlinear Volterra integral equations [8], and many others.

In general, there are two aspects that must be considered when we choose an iterative process to approximate a solution of Equation (1). The first one is related to the computational efficiency of the iterative process [9]. The other one, of equal importance, is known as the accessibility of the iterative process [10], which represents the possibility of locating starting points that ensure the convergence of the sequence generated by the iterative process to a solution of Equation (1). Newton's method, due to its characteristics, is usually considered as a reference in the measure of these two aspects. However, this method has a serious shortcoming: the derivative $H'(x)$ has to be computed and evaluated at each iteration. This makes it inapplicable when the equations involve non-differentiable operators, and also in situations when the evaluation of the derivative is too expensive in terms of computation and time. In these cases, one alternative commonly used is to approximate the derivatives by divided differences, so that iterative processes free of derivatives are obtained. For this purpose, authors use first-order divided differences [9,11]. First, we denote by $\mathcal{L}(\mathbb{R}^m, \mathbb{R}^m)$ the space of bounded linear operators from $\mathbb{R}^m$ to $\mathbb{R}^m$. An operator $[x, y; D] \in \mathcal{L}(\mathbb{R}^m, \mathbb{R}^m)$ is called a first-order divided difference for the operator $D : W \subseteq \mathbb{R}^m \to$
$\mathbb{R}^m$ at the points $x$ and $y$ ($x \neq y$) if it satisfies

$[x, y; D](x - y) = D(x) - D(y)$.   (2)

In this paper, we consider derivative-free iterative processes based on the previous ideas. However, these methods also have a serious shortcoming: they have a reduced region of accessibility. In [10], the accessibility of an iterative process is increased by means of an analytical procedure, which consists of modifying the convergence conditions. In this work, instead, we increase accessibility by constructing a predictor-corrector iterative process. This iterative process has a first prediction phase and then a second, accurate approximation phase. The first phase allows us, by applying the predictor method, to locate a starting point from which the corrector method converges to a solution of the equation.

Kung and Traub presented in [12] a class of iterative processes without derivatives. These iterative processes contain Steffensen-type methods as special cases. In [13], a generalized Steffensen-type method is considered, with the following algorithm: given $x_0 \in W$ and $a, b \in [0, 1]$,

$y_n = x_n - a\,H(x_n)$,
$z_n = x_n + b\,H(x_n)$,   (3)
$x_{n+1} = x_n - [y_n, z_n; H]^{-1} H(x_n)$, $n \geq 0$.

As special cases of the previous algorithm we obtain the three most well-known Steffensen-type methods: for $a = 0$ and $b = 1$ we obtain the original Steffensen method, the Backward-Steffensen method is obtained for $a = 1$ and $b = 0$, and the Center-Steffensen method is obtained for $a = 1$ and $b = 1$. Notice that, if we consider Newton's method,

$x_{n+1} = x_n - H'(x_n)^{-1} H(x_n)$, $n \geq 0$; $x_0 \in W$ given,   (4)

which is one of the most used iterative methods [14-18] to approximate a solution $x^*$ of $H(x) = 0$, the Steffensen-type methods are obtained as special cases of this method, in which the evaluation of $H'(x_n)$ at each step is approximated by the first-order divided difference $[x_n - a\,H(x_n), x_n + b\,H(x_n); H]$.
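For illustration, property (2) and the family (3) can be sketched numerically. The componentwise divided-difference matrix below is one standard construction for operators on $\mathbb{R}^m$; the test system $H(x) = (x_1^2 - 2,\; x_1 x_2 - 1)$ is our own toy example, not one taken from the paper.

```python
import numpy as np

def divided_difference(H, x, y):
    """First-order divided difference [x, y; H] as an m x m matrix.

    Column j mixes coordinates of x and y so that property (2) holds exactly
    by telescoping: [x, y; H] (x - y) = H(x) - H(y).
    Requires x[j] != y[j] for every j.
    """
    m = len(x)
    A = np.empty((m, m))
    for j in range(m):
        u = np.concatenate((x[:j + 1], y[j + 1:]))  # x_1..x_{j+1}, y_{j+2}..y_m
        v = np.concatenate((x[:j], y[j:]))          # x_1..x_j, y_{j+1}..y_m
        A[:, j] = (H(u) - H(v)) / (x[j] - y[j])
    return A

def steffensen_type(H, x0, a, b, eps=1e-12, max_iter=100):
    """Generalized Steffensen method (3):
    y_n = x_n - a H(x_n),  z_n = x_n + b H(x_n),
    x_{n+1} = x_n - [y_n, z_n; H]^{-1} H(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Hx = H(x)
        if np.linalg.norm(Hx, np.inf) < eps:
            break
        y, z = x - a * Hx, x + b * Hx
        x = x - np.linalg.solve(divided_difference(H, y, z), Hx)
    return x

# Toy system (hypothetical example): H(x) = (x_1^2 - 2, x_1 x_2 - 1).
H = lambda v: np.array([v[0] ** 2 - 2.0, v[0] * v[1] - 1.0])

# Property (2) at two arbitrary points.
x, y = np.array([1.3, 0.4]), np.array([0.7, 1.1])
A = divided_difference(H, x, y)

# Center-Steffensen (a = b = 1) from a nearby starting point.
root = steffensen_type(H, [1.2, 0.8], 1.0, 1.0)
```

With a good starting point the iteration reaches the root $(\sqrt{2}, 1/\sqrt{2})$ in a handful of steps, consistent with the quadratic convergence claimed for the Center-Steffensen method.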
Steffensen-type methods have been widely studied by many recognized researchers, such as Alarcón, Amat, Busquier and López [19], who presented a study and applications of Steffensen's method to boundary-value problems; Argyros [20], who gave an improved convergence theorem related to Steffensen's method; and Ezquerro, Hernández, Romero and Velasco [21], who studied the generalization of Steffensen's method to Banach spaces.

Symmetric divided differences generally perform better. This fact can be seen in the dynamical behavior of the Center-Steffensen method (see Section 2), which is the best one, in terms of convergence, of the Steffensen-type methods given previously. Moreover, by approximating the derivative through divided differences that are symmetric with respect to $x_n$, this method maintains the quadratic convergence of Newton's method, and it also has the same computational efficiency as Newton's method. However, to achieve second order in practice, an iterate close enough to the solution is needed, so that the divided difference is a good approximation of the first derivative of $H$ used in Newton's method. Otherwise, some extra iterations in comparison with Newton's method are required. Basically, when the norm of $H(x_n)$ is large, the approximation of the first derivative of $H$ by the divided difference is bad. So, in general, the set of valid starting points of the Steffensen-type methods is poor. This can be observed experimentally by means of the basins of attraction shown in Section 2, and it explains why Steffensen-type methods are less used than Newton's method to approximate solutions of equations with differentiable operators. Thus, we have two main objectives in this work: on the one hand, in the case of differentiable operators, where Newton's method can also be applied, our objective is to construct a predictor-corrector iterative process with accessibility and efficiency similar to those of Newton's method.
On the other hand, the second objective is to ensure that this predictor-corrector iterative process behaves like Newton's method in the case of non-differentiable operators, where Newton's method cannot be applied.

Following this idea, in this paper we consider the derivative-free point-to-point iterative process given by

$x_0$ given in $W$,
$x_{n+1} = x_n - [x_n - \mathrm{Tol}, x_n + \mathrm{Tol}; H]^{-1} H(x_n)$, $n \geq 0$,   (5)

where $\mathrm{Tol} = (tol, tol, \ldots, tol) \in \mathbb{R}^m$ for a real number $tol > 0$. Thus, we use a symmetric divided difference to approximate the derivative that appears in Newton's method. Furthermore, by varying the parameter $tol$, we can approach the value of $H'(x_n)$. Notice that, in the differentiable case, for $tol = 0$ we recover Newton's method. The dynamical behavior of this simple iterative process is similar to that of Newton's method as the parameter $tol$ varies. However, although by reducing the value of $tol$ we can reach a speed of convergence close to that of Newton's method, its order of convergence is linear. That is why we will consider this method, with its good accessibility, as a predictor, and we will then consider the Center-Steffensen method,

$x_0 \in W$,
$y_n = x_n - H(x_n)$,
$z_n = x_n + H(x_n)$,   (6)
$x_{n+1} = x_n - [y_n, z_n; H]^{-1} H(x_n)$, $n \geq 0$,

as a corrector method, whose order of convergence is quadratic.

The paper is organized as follows. Section 2 contains the motivation of the paper. In Section 3, we present a semilocal convergence analysis of the new method when the operator $H$ is differentiable and when it is non-differentiable. Moreover, some numerical experiments are shown in which the theoretical results are verified numerically. Next, Section 4 contains the study of the dynamical behavior of the predictor-corrector method. Finally, in Section 5, we present the conclusions of the work carried out.
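In one dimension the predictor (5) is just a Newton-type iteration with the symmetric divided difference on the fixed interval $[x_n - tol, x_n + tol]$. The following sketch, our own scalar illustration on $x^3 - 1$ rather than an experiment from the paper, shows the predictor converging for a fixed $tol$ (only linearly) and the Center-Steffensen corrector (6) then sharpening the result:

```python
def predictor(H, x0, tol, n_steps):
    """Predictor (5), scalar form: the derivative in Newton's method is
    replaced by the symmetric divided difference
    (H(x + tol) - H(x - tol)) / (2 tol) with a fixed tol."""
    x = x0
    for _ in range(n_steps):
        dd = (H(x + tol) - H(x - tol)) / (2.0 * tol)
        x -= H(x) / dd
    return x

def center_steffensen(H, x0, n_steps):
    """Corrector (6), scalar form: divided difference on [x - H(x), x + H(x)]."""
    x = x0
    for _ in range(n_steps):
        Hx = H(x)
        if Hx == 0.0:            # already at a root
            break
        dd = (H(x + Hx) - H(x - Hx)) / (2.0 * Hx)
        x -= Hx / dd
    return x

H = lambda t: t ** 3 - 1.0
u = predictor(H, 2.0, 0.5, 10)     # predictor phase: lands near the root 1
r = center_steffensen(H, u, 10)    # corrector phase: quadratic refinement
```

For $H(t) = t^3 - 1$ the symmetric divided difference equals $3x^2 + tol^2$, so the asymptotic error is contracted by roughly $tol^2/(3 + tol^2)$ per predictor step: linear convergence whose rate improves as $tol$ shrinks, exactly the behavior the text describes.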
2. Motivation

When iterative processes defined by divided differences are applied to find the solutions of nonlinear equations, it is important to note that the region of accessibility is reduced with respect to Newton's method. In practice, we can see this circumstance with the basins of attraction (the set of points of the plane such that initial conditions chosen in the set dynamically evolve to a particular attractor [22,23]) of iterative methods when they are applied to solve a complex equation $H(z) = 0$, where $H : \mathbb{C} \to \mathbb{C}$ and $z \in \mathbb{C}$.

First, in the differentiable case, we compare the dynamical behavior of Newton's method, the Steffensen-type methods (3) and the iterative process given in (5) for solving the complex equation $H_1(z) = z^3 - 1 = 0$. In the non-differentiable case, we compare the Steffensen-type methods (3) and the iterative process given in (5) for solving the complex equation $H_2(z) = z(z^2 + 2|z| - 5) = 0$. Our objective is to justify that the accessibility region of the iterative process (5) is comparable to the one associated with Newton's method in the differentiable case, and notably greater than that of the Steffensen-type methods (3) in both cases, differentiable and non-differentiable. In each case, the favorable choice of the iterative process (5) as a predictor method is confirmed.

We will show the fractal pictures generated when approximating the three solutions of $H_1(z)$, namely $z^* = 1$, $z^* = -0.5 - 0.866025i$ and $z^* = -0.5 + 0.866025i$, and the ones generated when approximating the three solutions of $H_2(z)$, namely $z^* = 0$, $z^* = 1 - \sqrt{6}$ and $z^* = -1 + \sqrt{6}$. We are interested in identifying the attraction basins of these solutions [23]. These basins also allow us to compare the regions of accessibility of the methods. In all cases, a tolerance of $10^{-3}$ and a maximum of 100 iterations are used.
If we have not obtained the desired tolerance after 100 iterations, we do not continue and decide that the iterative method starting at $z_0$ does not converge to any zero.

The regions of accessibility of the iterative methods, when they are applied to approximate the solutions of $H_1(z) = z^3 - 1 = 0$, are shown in Figures 1 and 2. The strategy used is the following: a color is assigned to each solution of the equation, and if the iteration does not converge, black is used. To obtain the pictures, red, yellow and blue have been assigned to the attraction basins of the three zeros. The basins shown have been generated using Mathematica 10 [24]. If we observe the behavior of these methods, it is clear that the methods (3) are stricter with respect to the starting point than Newton's method (see the black zones). However, if we consider the iterative process (5) (see Figure 3), varying the parameter $tol$, a dynamical behavior similar to that of Newton's method can be obtained. Figures 1 and 2 show the dynamical behavior of Newton's method and of the Steffensen, Backward-Steffensen and Center-Steffensen methods, where the predictor method (5) performs better than the Steffensen-type methods (3).

Figure 1. Newton's method applied to $H_1(z) = z^3 - 1$.

Figure 2. Basins of attraction for the polynomial $H_1(z) = z^3 - 1$: Steffensen's method (top), Backward-Steffensen method (middle), Center-Steffensen method (bottom).

Once the accessibility has been graphically analyzed, showing that method (5) is better than the Steffensen-type methods (3) and similar to Newton's method in terms of convergence, we want to verify it numerically and, for that purpose, we compute the percentage of starting points that converge. This information is presented in Table 1.
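The convergence percentages of the kind reported in Tables 1 and 2 can be reproduced in spirit with a simple grid experiment. The sketch below is an assumption-laden reconstruction: the region plotted in the figures is not stated in the text we have, so the square $[-2.5, 2.5]^2$, the grid resolution and the tolerance are our own choices, and only the qualitative gap between Newton's method and the Center-Steffensen method is asserted.

```python
import numpy as np

def convergence_percentage(step, zeros, lo=-2.5, hi=2.5, n=41,
                           tol=1e-3, max_iter=100):
    """Percentage of starting points z_0 on an n x n grid whose iteration
    z <- step(z) lands within tol of some zero in at most max_iter steps."""
    hits, total = 0, n * n
    for a in np.linspace(lo, hi, n):
        for b in np.linspace(lo, hi, n):
            z = complex(a, b)
            for _ in range(max_iter):
                try:
                    z = step(z)
                except (ZeroDivisionError, OverflowError, ValueError):
                    break  # singular divided difference or overflow: no hit
                if min(abs(z - r) for r in zeros) < tol:
                    hits += 1
                    break
    return 100.0 * hits / total

f = lambda z: z ** 3 - 1.0
roots = [1.0, -0.5 + 0.8660254j, -0.5 - 0.8660254j]

newton = lambda z: z - f(z) / (3.0 * z ** 2)
# Center-Steffensen: divided difference on [z - f(z), z + f(z)].
center = lambda z: z - f(z) / ((f(z + f(z)) - f(z - f(z))) / (2.0 * f(z)))

p_newton = convergence_percentage(newton, roots)
p_center = convergence_percentage(center, roots)
```

On such a grid Newton's method converges from essentially every starting point, while the Center-Steffensen iteration stagnates far from the unit circle (there its divided difference behaves like $3z^2 + f(z)^2$, which is huge when $|f(z)|$ is large), reproducing the accessibility gap discussed above.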
Figure 3. Basins of attraction for the polynomial $H_1(z) = z^3 - 1$ for method (5) with $tol = 0.002$ (top), $tol = 0.5$ (middle) and $tol = 0.75$ (bottom).

Nevertheless, the use of derivative-free iterative methods is necessary when the operator $H$ is non-differentiable. For this reason, one aim of this work is to preserve, in some way, the good accessibility of Newton's method by means of the predictor method (5). In the non-differentiable case, if we use the Steffensen-type methods defined in (3) to solve the equation $H_2(z) = z(z^2 + 2|z| - 5) = 0$, the predictor method (5) improves the accessibility region of the Steffensen-type methods (3), as we can see in Figures 4 and 5, where the basins of attraction of the solutions of this equation are drawn for the mentioned methods.

Figure 4. Basins of attraction for the equation $H_2(z) = z(z^2 + 2|z| - 5) = 0$: Steffensen's method (top), Backward-Steffensen method (middle), Center-Steffensen method (bottom).

Once the accessibility has been graphically analyzed, showing that method (5) is better than the other ones, we want to verify it numerically and, for that purpose, we compute the percentage of starting points that converge. This information is given in Table 2.

Table 1. Percentage of convergent points for $H_1(z) = z^3 - 1 = 0$.
Method                         Percentage of Convergent Points
Newton's method                100%
Steffensen                     4.35%
Backward-Steffensen            6.17%
Center-Steffensen              9.15%
Method (5) with tol = 0.002    100%
Method (5) with tol = 0.5      100%
Method (5) with tol = 0.75     100%

Figure 5. Basins of attraction for the equation $H_2(z) = z(z^2 + 2|z| - 5) = 0$ for method (5) with $tol = 0.002$ (top), $tol = 0.5$ (middle) and $tol = 0.75$ (bottom).

Table 2. Percentage of convergent points for $H_2(z) = z(z^2 + 2|z| - 5) = 0$.

Method                         Percentage of Convergent Points
Steffensen                     5.37%
Backward-Steffensen            5.91%
Center-Steffensen              9.38%
Method (5) with tol = 0.002    100%
Method (5) with tol = 0.5      100%
Method (5) with tol = 0.75     100%

As we have just seen, the predictor iterative process (5) has a significantly better dynamical behavior than the Steffensen-type methods, being similar to Newton's method in the differentiable case. Therefore, we can say that the predictor iterative process has good accessibility, improving that of the Steffensen-type methods in both cases, differentiable and non-differentiable. This leads us to construct a predictor-corrector iterative process, using the Center-Steffensen method as the corrector, which maintains its quadratic convergence. Consequently, we consider the predictor-corrector method:

Given an initial guess $u_0 \in W$,
$u_{j+1} = u_j - [u_j - \mathrm{Tol}, u_j + \mathrm{Tol}; H]^{-1} H(u_j)$, $j = 0, \ldots, N_0 - 1$,
$x_0 = u_{N_0}$,
$y_n = x_n - H(x_n)$, $n \geq 0$,   (7)
$z_n = x_n + H(x_n)$, $n \geq 0$,
$x_{n+1} = x_n - [y_n, z_n; H]^{-1} H(x_n)$, $n \geq 0$,

where $\mathrm{Tol} = (tol, tol, \ldots, tol) \in \mathbb{R}^m$ for a real number $tol > 0$. Thus, this predictor-corrector method will be a Steffensen-type method with good accessibility and quadratic convergence from an iterate to be determined.
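Scheme (7) can be transcribed directly, again with the componentwise divided-difference matrix and a toy two-dimensional system of our own choosing (the paper's actual experiments use the integral equation of Section 3.3):

```python
import numpy as np

def dd(H, x, y):
    """Componentwise first-order divided difference [x, y; H] (satisfies (2))."""
    m = len(x)
    A = np.empty((m, m))
    for j in range(m):
        u = np.concatenate((x[:j + 1], y[j + 1:]))
        v = np.concatenate((x[:j], y[j:]))
        A[:, j] = (H(u) - H(v)) / (x[j] - y[j])
    return A

def predictor_corrector(H, u0, tol, n_pred, eps=1e-12, max_corr=50):
    """Method (7): N_0 = n_pred predictor steps (5) with fixed Tol = (tol,...,tol),
    then Center-Steffensen corrector steps (6) started at x_0 = u_{N_0}."""
    u = np.asarray(u0, dtype=float)
    T = np.full_like(u, tol)
    for _ in range(n_pred):                                  # predictor phase (5)
        u = u - np.linalg.solve(dd(H, u - T, u + T), H(u))
    x = u                                                    # x_0 = u_{N_0}
    for _ in range(max_corr):                                # corrector phase (6)
        Hx = H(x)
        if np.linalg.norm(Hx, np.inf) < eps:
            break
        x = x - np.linalg.solve(dd(H, x - Hx, x + Hx), Hx)
    return x

# Toy system (hypothetical): H(x) = (x_1^2 - 2, x_1 x_2 - 1), far starting point.
H = lambda v: np.array([v[0] ** 2 - 2.0, v[0] * v[1] - 1.0])
sol = predictor_corrector(H, [3.0, 3.0], 0.1, 5)
```

The predictor phase uses a fixed, well-separated pair of points $u_j \pm \mathrm{Tol}$, so its divided difference stays close to the Jacobian even when $\|H(u_j)\|$ is large; only once the residual is small does the corrector switch to the pair $x_n \pm H(x_n)$ that yields quadratic convergence.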
3. Semilocal Convergence

From the dynamical study carried out previously, it is evident that, if we denote by $D_{Corr} = \{x_0 \in W : \{x_n\}$, given by (6), converges$\}$ and $D_{Pred} = \{x_0 \in W : \{x_n\}$, given by (5), converges$\}$ the accessibility domains of the iterative processes (6) and (5), then $D_{Corr} \subseteq D_{Pred}$. That is, the set of starting points that ensure convergence for method (6) is smaller than the corresponding set for method (5). In this section we show that, starting from an element $u_0 \in D_{Pred}$, we can locate a point $x_0 = u_{N_0}$ such that $x_0 \in D_{Corr}$. Therefore, we obtain a starting point that ensures the convergence of method (6). Thus, by performing some iterations with the predictor method, we locate a point $x_0$ that ensures the convergence of method (6), and we thereby increase the accessibility of the Center-Steffensen method.

The semilocal study of the convergence is based on demanding conditions on the initial approximation $u_0$, derived from certain conditions on the operator $H$, that guarantee the convergence of the sequence (7) to the solution $x^*$. In order to analyze the semilocal convergence of iterative processes that do not use derivatives in their algorithms, the conditions are usually required on the divided difference operator, although in the case that the operator $H$ is Fréchet differentiable, the divided difference operator can be defined from the Fréchet derivative of the operator $H$.

3.1. Differentiable Operators

Next, we establish the semilocal convergence of the iterative process given in (7) for differentiable operators. So, we consider $H : W \subseteq \mathbb{R}^m \to \mathbb{R}^m$ a Fréchet differentiable operator such that there exists

$[v, w; H] = \int_0^1 H'(tv + (1 - t)w)\,dt$,   (8)

for each pair of distinct points $v, w \in W$. Notice that, as $H$ is Fréchet differentiable, $[x, x; H] = H'(x)$. Now, we suppose the following initial conditions:

(D1) Let $u_0 \in W$ be such that $\Gamma_0 = [H'(u_0)]^{-1}$ exists, with $\|\Gamma_0\| \leq b$ and $\|H(u_0)\| \leq d_0$.
0 0 0 0 0 0 0 0 + (D2) k H (x) H (y)k Kkx yk, x, y 2 W, K 2 R . Firstly, we obtain some technical results. Lemma 1. The following items are verified. (i) Let R > 0 with B(u , R +kTolk) W. If bK(R +kTolk) < 1 then, for each pair of distinct points y, z 2 B(u , R +kTolk), there exists [y, z; H] such that k[y, z; H] k . (9) 1 bK(R +kTolk) Symmetry 2022, 14, 4 10 of 26 (ii) If u , u 2 W, for j = 0, 1, . . . , N , then j j 1 0 k H(u )k (kTolk +ku u k)ku u k. (10) j j j 1 j j 1 (iii) If x , x 2 W, , for j > 1, then j j 1 k H(x )k (k H(x k +kx x k)kx x k. (11) j j 1 j j 1 j j 1 Proof. To prove the item (i), from (D1), we can write k I G [y, z; H]k kG kk H (u ) [y, z; H]k 0 0 0 0 0 k ( H (ty + (1 t)z) H (u )) dtk bK kty + (1 t)z u k dt bK kt(y u ) + (1 t)(z u )k dt 0 0 bK(R +kTolk). Then, by the Banach Lemma for inverse operators [25] the item (i) is proved. Regarding item (ii), from the Taylor expansion for the operator H and (7), we can obtain 0 0 0 H(u ) = H(u ) + H (u )(u u ) + ( H (u + t(u u )) H (u ))dt(u u ) j j 1 j 1 j j 1 j 1 j j 1 j 1 j j 1 = ( H (u ) [u Tol, u + Tol; H])(u u )+ j 1 j 1 j 1 j j 1 0 0 ( H (u + t(u u ) H (u ))dt(u u ). j 1 j j 1 j 1 j j 1 Taking norms in the last equality obtained previously and, considering (8), the proof of item (ii) is evident. Item (iii) is proved analogously to item (ii), just considering the algorithm of the iterative process predictor–corrector (7). To simplify the notation, from now on, we denote A = [u Tol, u + Tol; H], B = [x H(x ), x + H(x ); H], j j j j j j j j and the parameters a = b Kd and b = bKtol. Other parameters which will be used are: 0 0 0 L 1 M = (b + La ), where L = . 0 0 2 1 b bKR Moreover, notice that the polynomial equation p(t) = 0, where 2 2 2 2 3 3 3 p(t) = 2a (1 b ) (2 + a 5b + 3b )bKt + (4 5b )b K t 2b K t , 0 0 0 0 0 has at least a positive real root since that p(0) > 0 and p(t) ! ¥ as t ! ¥. Then, we denote by R the smallest positive root of the polynomial equation p(t) = 0. 
Finally, we denote by $[x]$ the integer part of the real number $x$.

Theorem 2. Let $H : W \subseteq \mathbb{R}^m \to \mathbb{R}^m$ be a Fréchet differentiable operator defined on a non-empty open convex domain $W$. Suppose that conditions (D1) and (D2) are satisfied and that there exists $tol > 0$ such that $M < 1$, $R < \dfrac{1 - b_0}{bK}$ and $B(u_0, R + \|\mathrm{Tol}\|) \subseteq W$. If we consider

$N_0 \geq 1 + \left[\dfrac{\log(\|\mathrm{Tol}\|/d_0)}{\log(M)}\right]$ if $\|\mathrm{Tol}\| < d_0$, and $N_0 \geq 1$ if $\|\mathrm{Tol}\| \geq d_0$,   (12)

then the predictor-corrector iterative process (7), starting at $u_0$, converges to $x^*$, a solution of $H(x) = 0$. Moreover, $u_j, x_0, x_n \in B(u_0, R)$ for $j = 1, \ldots, N_0$ and $n \geq 0$.

Proof. First, notice that it is easy to check that $R = \dfrac{Lbd_0}{1 - M}$.

Then, from item (i) of the previous lemma, since $u_0 \in B(u_0, R + \|\mathrm{Tol}\|)$, there exists $A_0^{-1}$ with $\|A_0^{-1}\| \leq \dfrac{b}{1 - bK(R + \|\mathrm{Tol}\|)} = Lb$. Then, $u_1$ is well defined and $\|u_1 - u_0\| \leq \|A_0^{-1}\|\|H(u_0)\| \leq Lbd_0 < R$, with which we get that $u_1 \in B(u_0, R)$. Now, obviously, $u_1 \pm \mathrm{Tol} \in B(u_0, R + \|\mathrm{Tol}\|)$ and, again from item (i) of the previous lemma, there exists $A_1^{-1}$ with $\|A_1^{-1}\| \leq Lb$. Then, $u_2$ is well defined and, from (10), we have

$\|H(u_1)\| \leq \dfrac{K}{2}(\|\mathrm{Tol}\| + \|u_1 - u_0\|)\|u_1 - u_0\| \leq \dfrac{K}{2}(\|\mathrm{Tol}\| + Lbd_0)\|u_1 - u_0\|$.   (13)

Moreover, from (13), we get $\|H(u_1)\| \leq \dfrac{K}{2}(\|\mathrm{Tol}\| + Lbd_0)\,Lbd_0 = M d_0$. Therefore, we obtain

$\|u_2 - u_1\| \leq \|A_1^{-1}\|\|H(u_1)\| \leq \dfrac{LbK}{2}(\|\mathrm{Tol}\| + Lbd_0)\|u_1 - u_0\| = M\|u_1 - u_0\|$,

and $\|u_2 - u_1\| < \|u_1 - u_0\|$, since $M < 1$. Consequently, it is easy to check that $u_2 \in B(u_0, R)$, since

$\|u_2 - u_0\| \leq \|u_2 - u_1\| + \|u_1 - u_0\| \leq (1 + M)\|u_1 - u_0\| < \dfrac{Lbd_0}{1 - M} = R$.

Following a recursive procedure, it is easy to check the following relationships for $j = 1, 2, \ldots, N_0$:

(a) there exists $A_{j-1}^{-1}$ such that $\|A_{j-1}^{-1}\| \leq Lb$;
(b) $\|H(u_j)\| \leq \dfrac{K}{2}(\|\mathrm{Tol}\| + Lbd_0)\|u_j - u_{j-1}\|$;
(c) $\|H(u_j)\| \leq M^j d_0$;
(d) $\|u_j - u_{j-1}\| \leq M\|u_{j-1} - u_{j-2}\| < \|u_{j-1} - u_{j-2}\|$, for $j \geq 2$;
(e) $\|u_j - u_0\| \leq (1 + M + \cdots + M^{j-1})\|u_1 - u_0\| < \dfrac{Lbd_0}{1 - M} = R$.

Now, from the algorithm of the predictor-corrector iterative process (7), we consider $x_0 = u_{N_0} \in B(u_0, R)$.
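Before continuing with the corrector phase, note that the bound (12) on the number of predictor steps is directly computable. Reading (12) with $[\cdot]$ as the integer-part function, a small sketch with made-up values of $M$, $d_0$ and $\|\mathrm{Tol}\|$ (in practice these come from (D1), (D2) and the chosen $tol$):

```python
import math

def minimal_predictor_steps(M, d0, tol_norm):
    """Smallest N_0 allowed by (12): N_0 >= 1 + [log(||Tol||/d_0)/log(M)]
    when ||Tol|| < d_0 (with 0 < M < 1), and N_0 >= 1 otherwise."""
    if tol_norm < d0:
        return 1 + math.floor(math.log(tol_norm / d0) / math.log(M))
    return 1

# With M = 0.5 and d_0 = 1: ||H(u_j)|| <= M^j d_0 by item (c), so about ten
# predictor steps push the residual below ||Tol|| = 1e-3 (0.5**10 < 1e-3).
n0 = minimal_predictor_steps(0.5, 1.0, 1e-3)
```

The point of (12) is exactly this: it guarantees $\|H(u_{N_0})\| \leq M^{N_0} d_0 \leq \|\mathrm{Tol}\|$, so the corrector's divided-difference points $x_0 \pm H(x_0)$ stay inside the ball where Lemma 1 applies.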
Then, from the hypothesis required of the parameter $N_0$ in (12), we have

$\|x_0 \pm H(x_0) - u_0\| = \|u_{N_0} \pm H(u_{N_0}) - u_0\| \leq \|u_{N_0} - u_0\| + M^{N_0} d_0 \leq \|u_{N_0} - u_0\| + \|\mathrm{Tol}\|$.

Then $x_0 \pm H(x_0) \in B(u_0, R + \|\mathrm{Tol}\|)$. So, from item (i) of Lemma 1, there exists $B_0^{-1}$ with $\|B_0^{-1}\| \leq Lb$.

On the other hand, from item (ii) of Lemma 1, we obtain

$\|H(x_0)\| = \|H(u_{N_0})\| \leq \dfrac{K}{2}(\|\mathrm{Tol}\| + \|u_{N_0} - u_{N_0-1}\|)\|u_{N_0} - u_{N_0-1}\| \leq \dfrac{K}{2}(\|\mathrm{Tol}\| + Lbd_0)\|u_{N_0} - u_{N_0-1}\|$.

Then,

$\|x_1 - x_0\| \leq \|B_0^{-1}\|\|H(x_0)\| \leq \dfrac{LbK}{2}(\|\mathrm{Tol}\| + Lbd_0)\|u_{N_0} - u_{N_0-1}\| \leq M\|u_{N_0} - u_{N_0-1}\|$.

And, as a direct consequence, we get that $\|x_1 - x_0\| < \|u_{N_0} - u_{N_0-1}\| \leq M^{N_0-1}\|u_1 - u_0\|$, and

$\|x_1 - u_0\| \leq [1 + M + \cdots + M^{N_0}]\|u_1 - u_0\| < \dfrac{Lbd_0}{1 - M} = R$.

Therefore, $x_1 \in B(u_0, R)$ and, from item (iii) of Lemma 1, we have

$\|H(x_1)\| \leq \dfrac{K}{2}(\|H(x_0)\| + \|x_1 - x_0\|)\|x_1 - x_0\| \leq \cdots \leq M^{N_0+1} d_0$.

So, we have that $\|x_1 \pm H(x_1) - u_0\| \leq \|x_1 - u_0\| + M^{N_0+1} d_0 \leq \|x_1 - u_0\| + \|\mathrm{Tol}\|$. Then $x_1 \pm H(x_1) \in B(u_0, R + \|\mathrm{Tol}\|)$. Now, from item (i) of Lemma 1, there exists $B_1^{-1}$ with $\|B_1^{-1}\| \leq Lb$. Moreover, we get

$\|x_2 - x_1\| \leq \|B_1^{-1}\|\|H(x_1)\| \leq \dfrac{LbK}{2}(\|\mathrm{Tol}\| + Lbd_0)\|x_1 - x_0\| = M\|x_1 - x_0\| < \|x_1 - x_0\|$,

and

$\|x_2 - u_0\| \leq [1 + M + \cdots + M^{N_0+1}]\|u_1 - u_0\| < \dfrac{Lbd_0}{1 - M} = R$.

Now, following an inductive procedure, it is easy to check the recurrence relations defined for $j \geq 1$ as:

(a') there exists $B_{j-1}^{-1}$ such that $\|B_{j-1}^{-1}\| \leq Lb$;
(b') $\|H(x_j)\| \leq \dfrac{K}{2}(\|\mathrm{Tol}\| + Lbd_0)\|x_j - x_{j-1}\|$;
(c') $\|H(x_j)\| \leq M^{N_0+j} d_0$;
(d') $\|x_j - x_{j-1}\| \leq M\|x_{j-1} - x_{j-2}\| < \|x_{j-1} - x_{j-2}\|$;
(e') $\|x_j - u_0\| \leq (1 + M + \cdots + M^{N_0+j-1})\|u_1 - u_0\| < \dfrac{Lbd_0}{1 - M} = R$.

Now, using $M < 1$, for $n \geq N_0$ we have

$\|x_{n+j} - x_n\| \leq \displaystyle\sum_{i=1}^{j}\|x_{n+i} - x_{n+i-1}\| \leq \sum_{i=1}^{j} M^{N_0+n+i-1}\|u_1 - u_0\| < \dfrac{M^{N_0+n}}{1 - M}\|u_1 - u_0\|$.   (14)

Hence, $\{x_n\}$ is a Cauchy sequence, which converges to $x^*$. Since $\|H(x_n)\| \leq M^{N_0+n+1} d_0$, we have $H(x^*) = 0$ by the continuity of $H$.

Next, we present a uniqueness result for the predictor-corrector iterative process (7).

Theorem 3.
Under the conditions of the previous theorem, the solution $x^*$ of the equation $H(x) = 0$ is unique in $B(u_0, R)$.

Proof. To prove the uniqueness part, suppose $y^*$ is another solution of (1) in $B(u_0, R)$. If $Q = [x^*, y^*; H]$ is invertible, then $x^* = y^*$, since $Q(x^* - y^*) = H(x^*) - H(y^*)$. But

$\|I - \Gamma_0 Q\| \leq \|\Gamma_0\|\|H'(u_0) - Q\| \leq b \int_0^1 \|H'(ty^* + (1-t)x^*) - H'(u_0)\|\,dt \leq bKR < 1$.

Therefore, by the Banach lemma for inverse operators, there exists $Q^{-1}$ and then $x^* = y^*$.

3.2. Non-Differentiable Operators

In this section, we want to obtain a semilocal convergence result for the iterative process (7) when $H$ is a non-differentiable operator. In order to obtain it, we must suppose that, for each pair of distinct points $x, y \in W$, there exists a first-order divided difference of $H$ at these points. As we consider $W$ an open convex domain of $\mathbb{R}^m$, this condition is satisfied [9,26]. Moreover, it is also necessary to impose a condition on the first-order divided difference of the operator $H$. As appears in [27,28], a Lipschitz-continuous or a Hölder-continuous condition can be considered but, in those cases, it is known [29] that the Fréchet derivative of $H$ exists in $W$. Therefore, those conditions cannot be verified if the operator $H$ is non-differentiable. Then, to establish the semilocal convergence of the iterative process given in (7) for a non-differentiable operator $H$, we suppose that the following conditions hold:

(ND1) Let $u_0 \in W$ be such that $A_0^{-1}$ exists, with $\|A_0^{-1}\| \leq b_0$ and $\|H(u_0)\| \leq d_0$.
(ND2) $\|[x, y; H] - [u, v; H]\| \leq P + K(\|x - u\| + \|y - v\|)$, with $P, K \geq 0$, for $x, y, u, v \in W$, $x \neq y$, $u \neq v$.

To simplify the notation, from now on we denote

$\widetilde{M} = b_0\big(P + K(b_0 d_0 + 2\|\mathrm{Tol}\|)\big)$ and $S = \dfrac{\widetilde{M}}{1 - b_0\big(P + 2K(R + \|\mathrm{Tol}\|)\big)}$.

Under these conditions, we start our study by obtaining a technical result, whose proof is evident from the algorithm given in (7).

Lemma 4. The following items can be easily verified.
(i) If $u_j, u_{j-1} \in W$, for $j = 1, \ldots, N_0$, then

$H(u_j) = \big([u_j, u_{j-1}; H] - A_{j-1}\big)(u_j - u_{j-1})$.   (15)

(ii) If $x_j, x_{j-1} \in W$, for $j \geq 1$, then

$H(x_j) = \big([x_j, x_{j-1}; H] - B_{j-1}\big)(x_j - x_{j-1})$.   (16)

Theorem 5. Under the conditions (ND1) and (ND2), suppose that the real equation

$t = \dfrac{b_0 d_0\big(1 - b_0(P + 2K(t + \|\mathrm{Tol}\|))\big)}{1 - b_0\big(P + 2K(t + \|\mathrm{Tol}\|)\big) - \widetilde{M}}$   (17)

has at least one positive root, the smallest positive root being denoted by $R$, and that there exists $tol > 0$ satisfying

$\widetilde{M} + b_0\big(P + 2K(R + \|\mathrm{Tol}\|)\big) < 1$,   (18)

and $B(u_0, R + \|\mathrm{Tol}\|) \subseteq W$. If we consider

$N_0 \geq 2 + \left[\dfrac{\log\big(\|\mathrm{Tol}\|/(\widetilde{M} d_0)\big)}{\log(S)}\right]$ if $\|\mathrm{Tol}\| < \dfrac{b_0 d_0 (P + b_0 d_0 K)}{1 - 2b_0 d_0 K}$, and $N_0 \geq 1$ otherwise,   (19)

then the predictor-corrector iterative process (7), starting at $u_0$, converges to $x^*$, a solution of $H(x) = 0$. Moreover, $u_j, x_0, x_n \in B(u_0, R)$, for $j = 1, \ldots, N_0$ and $n \geq 0$, and $x^*$ is the unique solution of $H(x) = 0$ in $B(u_0, R)$.

Proof. First, notice that $\|\mathrm{Tol}\| < \dfrac{b_0 d_0 (P + b_0 d_0 K)}{1 - 2b_0 d_0 K}$ if and only if $\|\mathrm{Tol}\| < \widetilde{M} d_0$. Moreover, the smallest positive real root $R$ of (17) satisfies

$R = \dfrac{b_0 d_0}{1 - S}$.   (20)

Second, we prove that $u_j$ is well defined and $u_j \in B(u_0, R)$ for $j = 0, 1, 2, \ldots, N_0$. From condition (ND1), $u_1$ is well defined and $\|u_1 - u_0\| \leq \|A_0^{-1}\|\|H(u_0)\| \leq b_0 d_0 < R$. Thus, $u_1 \in B(u_0, R)$ and $u_1 \pm \mathrm{Tol} \in B(u_0, R + \|\mathrm{Tol}\|)$. Using Lemma 4, we get

$\|H(u_1)\| = \|[u_1, u_0; H] - [u_0 - \mathrm{Tol}, u_0 + \mathrm{Tol}; H]\|\,\|u_1 - u_0\| \leq \big(P + K(\|u_1 - u_0\| + 2\|\mathrm{Tol}\|)\big)\|u_1 - u_0\| \leq b_0\big(P + K(b_0 d_0 + 2\|\mathrm{Tol}\|)\big) d_0 = \widetilde{M} d_0$.

Now,

$\|I - A_0^{-1} A_1\| \leq \|A_0^{-1}\|\|A_0 - A_1\| \leq b_0 \|[u_0 - \mathrm{Tol}, u_0 + \mathrm{Tol}; H] - [u_1 - \mathrm{Tol}, u_1 + \mathrm{Tol}; H]\| \leq b_0\big(P + 2K\|u_1 - u_0\|\big) \leq b_0(P + 2KR) < 1$.

Hence, by using the Banach lemma for inverse operators, $A_1^{-1}$ exists and

$\|A_1^{-1}\| \leq \dfrac{b_0}{1 - b_0(P + 2KR)}$.   (21)

Thus, $u_2$ is well defined. Moreover,

$\|u_2 - u_1\| \leq \|A_1^{-1}\|\|H(u_1)\| \leq \dfrac{b_0\big(P + K(b_0 d_0 + 2\|\mathrm{Tol}\|)\big)}{1 - b_0(P + 2KR)}\|u_1 - u_0\| \leq \dfrac{\widetilde{M}}{1 - b_0\big(P + 2K(R + \|\mathrm{Tol}\|)\big)}\|u_1 - u_0\| = S\|u_1 - u_0\| < \|u_1 - u_0\| < R$,

and $u_2 \in B(u_0, R)$, since

$\|u_2 - u_0\| \leq \|u_2 - u_1\| + \|u_1 - u_0\| \leq (S + 1)\|u_1 - u_0\| < \dfrac{b_0 d_0}{1 - S} = R$.
In a similar way, by using the principle of mathematical induction, we can establish the following recurrence relations for $j = 1, 2, \ldots, N_0$:

(A1) $\|A_j^{-1}\| \leq \dfrac{b_0}{1 - b_0(P + 2KR)}$;
(A2) $\|H(u_j)\| \leq \big(P + K(b_0 d_0 + 2\|\mathrm{Tol}\|)\big)\|u_j - u_{j-1}\| \leq \widetilde{M} S^{j-1} d_0$;
(A3) $\|u_{j+1} - u_j\| \leq S\|u_j - u_{j-1}\| \leq S^j\|u_1 - u_0\| < b_0 d_0 < R$;
(A4) $\|u_j - u_0\| < \dfrac{b_0 d_0}{1 - S} = R$.

To study the convergence of the corrector of (7), we consider $x_0 = u_{N_0} \in B(u_0, R)$. Using Lemma 4, we get

$\|H(u_{N_0})\| \leq \big(P + K(\|u_{N_0} - u_{N_0-1}\| + 2\|\mathrm{Tol}\|)\big)\|u_{N_0} - u_{N_0-1}\| \leq \big(P + K(b_0 d_0 + 2\|\mathrm{Tol}\|)\big) S^{N_0-1}\|u_1 - u_0\| \leq b_0\big(P + K(b_0 d_0 + 2\|\mathrm{Tol}\|)\big) S^{N_0-1} d_0 = \widetilde{M} S^{N_0-1} d_0$,   (22)

and, by the hypothesis required of the parameter $N_0$ in (19), we have

$\|x_0 \pm H(x_0) - u_0\| \leq \|x_0 - u_0\| + \|H(x_0)\| \leq \|u_{N_0} - u_0\| + \widetilde{M} S^{N_0-1} d_0 < R + \|\mathrm{Tol}\|$,

so $B_0 = [x_0 - H(x_0), x_0 + H(x_0); H]$ is well defined. Now, we consider

$\|I - A_0^{-1} B_0\| \leq \|A_0^{-1}\|\|B_0 - A_0\| \leq b_0 \|[x_0 - H(x_0), x_0 + H(x_0); H] - [u_0 - \mathrm{Tol}, u_0 + \mathrm{Tol}; H]\| \leq b_0\big(P + 2K(\|x_0 - H(x_0) - u_0\| + \|\mathrm{Tol}\|)\big) \leq b_0\big(P + 2K(R + \|\mathrm{Tol}\|)\big) < 1$.

Hence, $B_0^{-1}$ exists and

$\|B_0^{-1}\| \leq \dfrac{b_0}{1 - b_0\big(P + 2K(R + \|\mathrm{Tol}\|)\big)}$.   (23)

Using (22) and (23), we get

$\|x_1 - x_0\| \leq \|B_0^{-1}\|\|H(x_0)\| \leq \dfrac{b_0\big(P + K(b_0 d_0 + 2\|\mathrm{Tol}\|)\big)}{1 - b_0\big(P + 2K(R + \|\mathrm{Tol}\|)\big)}\|u_{N_0} - u_{N_0-1}\| \leq S\|u_{N_0} - u_{N_0-1}\| \leq S^{N_0}\|u_1 - u_0\| < \|u_1 - u_0\| < R$,

and

$\|x_1 - u_0\| \leq \|x_1 - x_0\| + \|u_{N_0} - u_{N_0-1}\| + \cdots + \|u_1 - u_0\| \leq (S^{N_0} + S^{N_0-1} + \cdots + S + 1)\|u_1 - u_0\| < \dfrac{b_0 d_0}{1 - S} = R$.

Hence, $x_1 \in B(u_0, R)$. Again, using Lemma 4 and condition (ND2), we have

$\|H(x_1)\| \leq \|[x_1, x_0; H] - [x_0 - H(x_0), x_0 + H(x_0); H]\|\,\|x_1 - x_0\| \leq \big(P + K(\|x_1 - x_0\| + 2\|H(x_0)\|)\big) S^{N_0}\|u_1 - u_0\| \leq b_0\big(P + K(b_0 d_0 + 2\widetilde{M} d_0 S^{N_0-1})\big) S^{N_0} d_0 \leq b_0\big(P + K(b_0 d_0 + 2\|\mathrm{Tol}\|)\big) d_0 S^{N_0} = \widetilde{M} d_0 S^{N_0}$,   (24)

and then

$\|x_1 \pm H(x_1) - u_0\| \leq \|x_1 - u_0\| + \|H(x_1)\| \leq \dfrac{b_0 d_0}{1 - S} + \widetilde{M} d_0 S^{N_0} \leq R + \|\mathrm{Tol}\|$.

Then $B_1$ is well defined, and therefore

$\|I - A_0^{-1} B_1\| \leq \|A_0^{-1}\|\|B_1 - A_0\| \leq b_0\big(P + 2K(\|x_1 - H(x_1) - u_0\| + \|\mathrm{Tol}\|)\big) \leq b_0\big(P + 2K(R + \|\mathrm{Tol}\|)\big) < 1$.

Hence, $B_1^{-1}$ exists and
(25) 1 b P + 2K(R +kTolk) Using (24) and (25), we get kx x k kB kk H(x )k 2 1 1 b P + K(b d + 2kTolk) 0 0 0 kx x k 1 0 1 b P + 2K(R +kTolk) N +1 = Skx x k < S ku u k < b d . 1 0 1 0 0 0 and kx u k kx x k +kx x k + ...ku u k 2 0 2 1 1 0 1 0 N +1 N 0 0 (S + S + ... + 1)ku u k 1 0 b d 0 0 = R. 1 S Hence, x 2 B(u , R). 2 0 Using mathematical induction, we can establish the following recurrence relation for j 1: (B1) kB k . 1 b P + 2K(R +kTolk) N +j 1 e 0 e (B2) k H(x )k P + K(b d + 2kTolk))kx x k Md S < Md . j 0 0 j j 1 0 0 N N +j 0 0 (B3) kx x k S kx x k S ku u k < b d < R. j+1 j j j 1 1 0 0 0 b d 0 0 (B4) kx u k < = R. j+1 0 1 S Now, using S < 1, for n N j j N +n N n+i 1 kx x k kx x k S S ku u k ku u k. (26) n+j n å n+i n+i 1 å 1 0 1 0 1 S i=1 i=1 Hence, fx g is a Cauchy sequence which converges to x . Since, k H(x )k P + K(b d + 2kTolk) kx x k, n 0 0 n n 1 and kx x k ! 0 as n ! ¥, thus H(x ) = 0 by using the continuity of H. n n 1 Theorem 6. Under conditions of the previous Theorem, the solution x of the equation H(x) = 0 is unique in B(u , R). Proof. To prove the uniqueness of x , let y be another solution of H(x) = 0 in B(u , R). If Q = [y , x ; H] is invertible , then y = x . Since, Q(y x ) = H(y ) H(x ) = 0. Symmetry 2022, 14, 4 17 of 26 1 1 But,k I A Qk k A kkQ A k b P + K(ky u k +kx u k + 2kTolk) 0 0 0 0 0 0 b (P + 2K(R +kTolk)) < 1. Hence, by the Banach Lemma for inverse operators, Q exists. Therefore, y = x . 3.3. Numerical Experiments Now, we perform a numerical experience to show the applicability of the theoreti- cal results previously obtained. So, we deal with nonlinear integral equations that are used in a great variety of applied problems in electrostatic, low frequency electromag- netic problems, electromagnetic scattering problems and propagation of acoustical and elastic waves ([30,31]). We focus on the nonlinear integral equation of Hammerstein type expressed as follows [H(x)](s) = x(s) w(s) G(s, t) M(t, x(t)) dt, s 2 [a, b]. 
where $-\infty < a < b < +\infty$, $G$ is the Green's function, $w$ and $M$ are known functions, and $x$ is the solution to be obtained.

We solve the equation $H(x) = 0$, where $H : \Omega \subseteq C[a, b] \to C[a, b]$, by transforming the problem into a nonlinear system. First, we approximate the integral by a quadrature formula with weights $q_j$ and nodes $t_j$, $j = 1, 2, \ldots, n$. The discretization of the problem at these nodes gives the nonlinear system

$$x_j = w_j + \sum_{i=1}^{n} e_{ji}\, M(t_i, x_i), \quad j = 1, 2, \ldots, n, \qquad (28)$$

where

$$e_{ji} = q_i\, G(t_j, t_i) = \begin{cases} q_i\,\dfrac{(b - t_j)(t_i - a)}{b - a}, & i \leq j,\\[2mm] q_i\,\dfrac{(b - t_i)(t_j - a)}{b - a}, & i > j. \end{cases}$$

We can formulate the system from $\mathbb{R}^n$ into $\mathbb{R}^n$ by using the following functions and matrices:

$$H(x) = x - w - E\, M(t, x) = 0, \qquad (29)$$

with $x = (x_1, x_2, \ldots, x_n)^T$, $w = (w_1, w_2, \ldots, w_n)^T$ and $E = (e_{ji})_{j,i=1}^{n}$.

To illustrate the theoretical results in both the differentiable and the non-differentiable case, we take $w_j = 1/5$ for $j = 1, \ldots, n$, $M(t, x) = \lambda x(t)^3 + \sigma |x(t)|$, with $\lambda, \sigma \in \mathbb{R}$, and $[a, b] = [0, 1]$. Specifically, we solve the nonlinear system

$$H(x) \equiv x - w - E(\lambda a_x + \sigma b_x) = 0, \quad H : \mathbb{R}^8 \to \mathbb{R}^8, \qquad (30)$$

where $x = (x_1, x_2, \ldots, x_8)^T$, $w = \left(\tfrac{1}{5}, \tfrac{1}{5}, \ldots, \tfrac{1}{5}\right)^T$, $a_x = \left(x_1^3, x_2^3, \ldots, x_8^3\right)^T$, $b_x = (|x_1|, |x_2|, \ldots, |x_8|)^T$ and $E = (e_{ij})_{i,j=1}^{8}$.

So, we are now in a position to apply the theoretical development to both the differentiable and the non-differentiable case.

3.3.1. $H$ a Differentiable Operator

We consider, in the nonlinear integral equation described above, the values $\lambda = 1$ and $\sigma = 0$, so that we have a differentiable problem. Moreover, we work in the domain $\Omega = B(0, 1) \subset \mathbb{R}^8$ with the infinity norm. In these terms, for the associated operator

$$H(x) \equiv x - w - \lambda E a_x, \qquad (31)$$

it is easy to characterize the Fréchet derivative $H'$, obtaining

$$H'(x) = I - 3\lambda E\, \mathrm{diag}\!\left(x_1^2, \ldots, x_8^2\right).$$
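The discretization (28)-(30) can be sketched numerically. The code below is an illustrative reconstruction, not the authors' implementation: the function name is ours, and the nodes and weights (midpoint rule on $[a,b]$) are an assumed choice, since the text does not fix the quadrature.

```python
import numpy as np

def hammerstein_system(m=8, lam=1.0, sig=0.0, a=0.0, b=1.0):
    """Discretized Hammerstein system (29): H(x) = x - w - E(lam*x^3 + sig*|x|).

    Midpoint nodes/weights are assumed; E = (e_ji) with e_ji = q_i * G(t_j, t_i),
    G being the Green's function of (27).
    """
    t = a + (b - a) * (np.arange(1, m + 1) - 0.5) / m   # midpoint nodes
    q = np.full(m, (b - a) / m)                         # midpoint weights
    S, T = np.meshgrid(t, t, indexing="ij")             # S[j,i] = t_j, T[j,i] = t_i
    G = np.where(T <= S, (b - S) * (T - a), (b - T) * (S - a)) / (b - a)
    E = G * q                                           # e_ji = q_i * G(t_j, t_i)
    w = np.full(m, 0.2)                                 # w_j = 1/5

    def H(x):
        return x - w - E @ (lam * x**3 + sig * np.abs(x))

    def Jac(x):
        # Frechet derivative H'(x) = I - 3*lam*E*diag(x^2), valid when sig = 0
        return np.eye(m) - 3.0 * lam * E * x**2

    return H, Jac, E
```

With uniform weights, $E$ inherits the symmetry of the Green's function, and evaluating $H$ at a constant vector close to $w$ gives a small residual, which is why $0.2$ is a good neighborhood for the solution reported below.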
Then, applying the theoretical results obtained in the previous sections, we take as starting point $u_0 = (1/3, 1/3, \ldots, 1/3)^T$ and different values of $Tol = (tol, tol, \ldots, tol)^T$. The values of the parameters that appear in the semilocal convergence study are $\beta_0 = 1.0435$, $\delta_0 = 0.1380$, $K = 0.75$ and $\alpha = 0.1127$. Other results, such as $N_0$ and the radii of the domains of existence and uniqueness of the solution, can be found in Table 3. As can be seen in this table, when $tol$ decreases, so does the semilocal convergence radius, while the value of $N_0$ remains similar.

Table 3. Radii of the semilocal convergence balls for different values of $tol$.

tol     R          N_0
0.13    0.253051   1
0.05    0.201863   1
0.01    0.185348   2
0.001   0.182101   2

Finally, we obtain the approximated solution of the nonlinear system (31) by applying Newton's method (4) and the new predictor-corrector Steffensen-type method (7). We run the corresponding algorithms in Matlab20, working in variable precision arithmetic with 100 digits, using as stopping criterion $\|x_{n+1} - x_n\| < 10^{-30}$, with the starting point and the values of $tol$ used and obtained in the semilocal convergence study. The results in Table 4 show that the behavior of the new predictor-corrector Steffensen method is as good as that of Newton's method. Rounded to six digits, the approximated solution is

$$\tilde{x} = [0.20008, 0.200378, 0.200749, 0.201001, 0.201001, 0.200749, 0.200378, 0.20008].$$

Table 4. Numerical results with starting guess $u_0 = \left(\tfrac{1}{3}, \tfrac{1}{3}, \ldots, \tfrac{1}{3}\right)^T$.

Method                 Newton        (7) tol=0.13   (7) tol=0.05   (7) tol=0.01   (7) tol=0.001
k                      5             6              6              5              5
||x_{n+1} - x_n||      7.46581e-31   1.60297e-61    1.60297e-61    7.52955e-31    7.38325e-31
||H(x_{n+1})||         7.56107e-31   1.58277e-61    1.58277e-61    7.43468e-31    7.47747e-31

3.3.2. $H$ a Non-Differentiable Operator

If, in (29), we work again in $\Omega = B(0, 1)$, considering $m = 8$, $\lambda = 1$ and $\sigma = 1/2$, we obtain the non-differentiable system of nonlinear equations

$$H(x) \equiv x - w - E\left(a_x + \tfrac{1}{2}\, b_x\right). \qquad (32)$$
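As a companion to Table 4, Newton's method on the differentiable system (31) can be sketched as follows. This is an illustration in double precision rather than the paper's 100-digit arithmetic, and the quadrature (midpoint rule) is our assumption, so the computed iterate only approximates the tabulated solution.

```python
import numpy as np

# Rebuild the differentiable system (31) (sigma = 0): H(x) = x - 1/5 - E x^3,
# with an assumed midpoint quadrature on [0, 1], and solve it by Newton's method.
m = 8
t = (np.arange(1, m + 1) - 0.5) / m
q = np.full(m, 1.0 / m)
S, T = np.meshgrid(t, t, indexing="ij")
E = np.where(T <= S, (1 - S) * T, (1 - T) * S) * q      # e_ji = q_i * G(t_j, t_i)
w = np.full(m, 0.2)

H = lambda x: x - w - E @ x**3
Jac = lambda x: np.eye(m) - 3.0 * E * x**2              # H'(x) = I - 3 E diag(x^2)

x = np.full(m, 1.0 / 3.0)                               # starting point u0
for k in range(50):
    step = np.linalg.solve(Jac(x), H(x))                # Newton correction
    x -= step
    if np.linalg.norm(step, np.inf) < 1e-14:
        break
```

With these assumed nodes the iterate is symmetric, $x_j = x_{9-j}$, and its components lie near $0.2$, in line with the rounded solution reported above; the exact digits of Table 4 require the authors' quadrature and precision settings.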
In these terms, we characterize the divided difference operator by using the following componentwise formula:

$$[x, y; H]_{ij} = \frac{H_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_m) - H_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_m)}{x_j - y_j},$$

having that

$$[u, v; H] = I - E(\lambda C + \sigma D),$$

where $C = (c_{ji})_{j,i=1}^{8}$ with $c_{ji} = 0$ if $i \neq j$ and $c_{ii} = u_i^2 + u_i v_i + v_i^2$, while $D = (d_{ji})_{j,i=1}^{8}$ with $d_{ji} = 0$ if $i \neq j$ and $d_{ii} = \dfrac{|u_i| - |v_i|}{u_i - v_i}$. Furthermore, working in the domain $\Omega = B(0, 1)$, we have the bound

$$\|[x, y; H] - [u, v; H]\| \leq P + K\big(\|x - u\| + \|y - v\|\big),$$

with $P = 2\|E\|\,|\sigma|$ and $K = 3|\lambda|\,\|E\|$.

Now, taking the starting point $u_0 = (1/3, 1/3, \ldots, 1/3)^T$ and different values of $Tol = (tol, tol, \ldots, tol)^T$, we have the following values for the parameters involved in the semilocal convergence Theorem 5: $\beta_0 = 1.1163$, $\delta_0 = 0.1588$, $K = 0.375$, $P = 0.125$ and $\alpha = 0.0742$. Other results, such as $N_0$ and the radii of the domains of existence and uniqueness of the solution, can be found in Table 5. We can corroborate a behavior similar to that of the differentiable case; that is, for smaller values of $tol$, the radius decreases.

Table 5. Radii of the semilocal convergence balls for different values of $tol$.

tol     R          N_0
0.035   0.307562   2
0.03    0.299809   2
0.02    0.286741   2
0.01    0.275946   3

Finally, we obtain the approximated solution of the nonlinear system (32) by applying the Center-Steffensen method (3) and the new predictor-corrector Steffensen method (7). We run the corresponding algorithms in Matlab20, working in variable precision arithmetic with 100 digits, using as stopping criterion $\|x_{n+1} - x_n\| < 10^{-30}$, with the starting point and the values of $tol$ obtained in the semilocal convergence study. The results in Table 6 show that the behavior of the new predictor-corrector Steffensen method improves on that of the Center-Steffensen method.
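The quantities of Theorem 5 can be evaluated numerically for this example. The sketch below, with our own function name, rearranges equation (17) as a quadratic in $t$, takes its smallest positive root $R$, and evaluates the bound (19) for $N_0$, with $M = \beta_0\big(P + K(\beta_0\delta_0 + 2\,tol)\big)$ and $S = M/\big(1 - \beta_0(P + 2K(R + tol))\big)$.

```python
import math

# Parameters reported above for the non-differentiable case
beta0, delta0, K, P = 1.1163, 0.1588, 0.375, 0.125

def theorem5_quantities(tol):
    """R, S and the lower bound for N0 in Theorem 5, for Tol = (tol, ..., tol)."""
    M = beta0 * (P + K * (beta0 * delta0 + 2 * tol))
    c = beta0 * (P + 2 * K * tol)        # beta0*(P + 2K(t + tol)) = c + s*t
    s = 2 * K * beta0
    # (17): t*(1 - c - s*t - M) = beta0*delta0*(1 - c - s*t)
    # rearranged as a2*t^2 + a1*t + a0 = 0
    a2 = s
    a1 = -(1 - c - M) - beta0 * delta0 * s
    a0 = beta0 * delta0 * (1 - c)
    R = (-a1 - math.sqrt(a1 * a1 - 4 * a2 * a0)) / (2 * a2)   # smallest positive root
    S = M / (1 - beta0 * (P + 2 * K * (R + tol)))
    thresh = beta0 * delta0 * (P + beta0 * delta0 * K) / (1 - 2 * K * beta0 * delta0)
    # (19): integer-part bound for the number of predictor steps
    N0 = 2 + int(math.log(tol / (M * delta0)) / math.log(S)) if tol < thresh else 1
    return R, S, N0
```

For $tol = 0.035$ this gives $R \approx 0.3076$ and $N_0 = 2$, and for $tol = 0.01$ it gives $R \approx 0.2759$ and $N_0 = 3$, matching the rows of Table 5.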
Rounded to six digits, the approximated solution is

$$\tilde{x} = [0.201133, 0.205354, 0.210668, 0.214297, 0.214297, 0.210668, 0.205354, 0.201133].$$

Table 6. Numerical results with starting guess $x_0 = \left(\tfrac{1}{3}, \tfrac{1}{3}, \ldots, \tfrac{1}{3}\right)^T$.

Method                 (3), a = b = 1   (7) tol=0.035   (7) tol=0.03   (7) tol=0.02   (7) tol=0.01
k                      6                5               5              5              5
||x_{n+1} - x_n||      1.09662e-61      7.7866e-31      7.7866e-31     7.31874e-31    4.74014e-31
||H(x_{n+1})||         1.02405e-61      7.27131e-31     7.27131e-31    6.83442e-31    4.42645e-31

4. Dynamical Behavior of the Predictor-Corrector Method

In this section, we compare the behavior of the predictor-corrector method (7) for the functions $H_1$ and $H_2$ used in the motivation section, for different values of $tol$ and $N_0$. In this case, obtaining the attraction basins is computationally more demanding; so, in all cases, the same tolerance and a maximum of 100 iterations are used. If the tolerance has not been reached after 100 iterations, we stop and decide that the iterative method starting at $z_0$ does not converge to any zero.

For the differentiable case, as we can see in Figures 6-8, by increasing the value of $N_0$ we can achieve an accessibility comparable to that of Newton's method. Once the accessibility has been analyzed graphically, showing that method (7) is better than the Steffensen-type methods (see Figure 2), we want to study its behavior numerically; for that purpose, we compute the percentage of points which converge. This information is collected in Table 7.

Method (7) with tol = 0.002 and N_0 = 3; tol = 0.5 and N_0 = 3; tol = 0.75 and N_0 = 3.
Figure 6. Basins of attraction to the polynomial $H_1(z) = z^3 - 1$.
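The convergence percentages of the kind reported in Tables 7 and 8 can be estimated with a sketch like the following, which applies method (7) with scalar divided differences $[x, y; H] = (H(x) - H(y))/(x - y)$ on a uniform grid. The grid resolution, the region $[-2.5, 2.5]^2$, the convergence tolerances and the cubic test function $H_1(z) = z^3 - 1$ are our assumptions for illustration, so the resulting figures only roughly track the tables.

```python
import numpy as np

def pc_convergent_fraction(H, roots, tol, N0, span=2.5, n=120, eps=1e-8, iters=100):
    """Fraction of an n-by-n grid on [-span, span]^2 from which method (7)
    converges: N0 predictor steps with A = [z - tol, z + tol; H], then
    Center-Steffensen corrector steps with B = [z - H(z), z + H(z); H].
    """
    grid = np.linspace(-span, span, n)
    hits = 0
    for re in grid:
        for im in grid:
            z = complex(re, im)
            try:
                for _ in range(N0):                      # predictor
                    A = (H(z + tol) - H(z - tol)) / (2 * tol)
                    z = z - H(z) / A
                for _ in range(iters):                   # corrector
                    Hz = H(z)
                    if abs(Hz) < eps:
                        break
                    B = (H(z + Hz) - H(z - Hz)) / (2 * Hz)
                    z = z - Hz / B
                if min(abs(z - r) for r in roots) < 1e-4:
                    hits += 1
            except (ZeroDivisionError, OverflowError):
                pass                                     # count as non-convergent
    return hits / n**2
```

Increasing `N0` enlarges the measured fraction of convergent points, which is the qualitative trend shown in Tables 7 and 8.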
Method (7) with tol = 0.002 and N_0 = 5; tol = 0.5 and N_0 = 5; tol = 0.75 and N_0 = 5.
Figure 7. Basins of attraction to the polynomial $H_1(z) = z^3 - 1$.

Table 7. Percentage of convergent points for $H_1(z) = z^3 - 1$.

Method (7)                  Percentage of convergent points
tol = 0.002 and N_0 = 3     69.34%
tol = 0.5 and N_0 = 3       69.51%
tol = 0.75 and N_0 = 3      70.04%
tol = 0.002 and N_0 = 5     93.03%
tol = 0.5 and N_0 = 5       93.27%
tol = 0.75 and N_0 = 5      94.56%
tol = 0.002 and N_0 = 10    97.13%
tol = 0.5 and N_0 = 10      98.89%
tol = 0.75 and N_0 = 10     99.36%

Method (7) with tol = 0.002 and N_0 = 10; tol = 0.5 and N_0 = 10; tol = 0.75 and N_0 = 10.
Figure 8. Basins of attraction to the polynomial $H_1(z) = z^3 - 1$.

Similarly, in the non-differentiable case, as can be seen in Figures 9-11, we verify that by increasing the value of $N_0$ we can achieve an accessibility like that presented by Newton's method in the differentiable case. Once the accessibility has been analyzed graphically, showing that method (7) is better than the Steffensen-type methods (see Figure 4), we want to study its behavior numerically; for that purpose, we compute the percentage of points which converge. This information is collected in Table 8.

Table 8. Percentage of convergent points for $H_2(z) = z(z + 2|z| - 5)$.
Method (7)                  Percentage of convergent points
tol = 0.002 and N_0 = 3     58.44%
tol = 0.5 and N_0 = 3       58.58%
tol = 0.75 and N_0 = 3      58.66%
tol = 0.002 and N_0 = 5     97.60%
tol = 0.5 and N_0 = 5       97.54%
tol = 0.75 and N_0 = 5      97.35%
tol = 0.002 and N_0 = 10    99.58%
tol = 0.5 and N_0 = 10      99.62%
tol = 0.75 and N_0 = 10     99.57%

Method (7) with tol = 0.002 and N_0 = 3; tol = 0.5 and N_0 = 3; tol = 0.75 and N_0 = 3.
Figure 9. Basins of attraction to the equation $H_2(z) = z(z + 2|z| - 5)$.

Method (7) with tol = 0.002 and N_0 = 5; tol = 0.5 and N_0 = 5; tol = 0.75 and N_0 = 5.
Figure 10. Basins of attraction to the equation $H_2(z) = z(z + 2|z| - 5)$.

Method (7) with tol = 0.002 and N_0 = 10; tol = 0.5 and N_0 = 10; tol = 0.75 and N_0 = 10.
Figure 11. Basins of attraction to the equation $H_2(z) = z(z + 2|z| - 5)$.

5. Concluding Remarks

Due to the difficulties in applying Steffensen-type iterative processes, in terms of their accessibility, we have built a predictor-corrector iterative process that, while maintaining the efficiency of Steffensen-type methods, improves their accessibility. Thus, it can be used as an efficient alternative to Newton's method when applied to systems of non-differentiable nonlinear equations.

Author Contributions: Investigation, M.A.H.-V., S.Y., Á.A.M., E.M. and S.S.; Writing—original draft, M.A.H.-V., S.Y., Á.A.M., E.M.
and S.S.; Writing—review and editing, M.A.H.-V., S.Y., Á.A.M., E.M. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding: This research was partially supported by the project PGC2018-095896-B-C21-C22 of the Spanish Ministry of Economy and Competitiveness and by the project of Generalitat Valenciana Prometeo/2016/089.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Regmi, S. Optimized Iterative Methods with Applications in Diverse Disciplines; Nova Science Publisher: New York, NY, USA, 2021.
2. Argyros, I.K.; Cho, Y.J.; Hilout, S. Numerical Methods for Equations and Its Applications; CRC Press/Taylor and Francis: Boca Raton, FL, USA, 2012.
3. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publishing Co.: Hackensack, NJ, USA, 2013.
4. Barbashov, B.M.; Nesterenko, V.V.; Chervyakov, A.M. General solutions of nonlinear equations in the geometric theory of the relativistic string. Commun. Math. Phys. 1982, 84, 471–481. [CrossRef]
5. Brugnano, L.; Casulli, V. Iterative Solution of Piecewise Linear Systems. SIAM J. Sci. Comput. 2008, 30, 463–472. [CrossRef]
6. Difonzo, F.V.; Masciopinto, C.; Vurro, M.; Berardi, M. Shooting the Numerical Solution of Moisture Flow Equation with Root Water Uptake Models: A Python Tool. Water Resour. Manag. 2021, 35, 2553–2567. [CrossRef]
7. Soheili, A.R.; Soleymani, F. Iterative methods for nonlinear systems associated with finite difference approach in stochastic differential equations. Numer. Algorithms 2016, 71, 89–102. [CrossRef]
8. Gou, F.; Liu, J.; Liu, W.; Luo, L. A finite difference method for solving nonlinear Volterra integral equation. J. Univ. Chin. Acad. Sci. 2016, 33, 329–333.
9. Grau-Sánchez, M.; Noguera, M.; Amat, S.
On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [CrossRef] 10. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub’s method. J. Complex. 2020, 56, 101423. [CrossRef] 11. Argyros, I.K. On the Secant method. Publ. Math. Debrecen 1993, 43, 223–238. 12. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1973, 21, 643–651. [CrossRef] 13. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. On a Steffensen-like method for solving nonlinear equations. Calcolo 2016, 53, 171–188. [CrossRef] 14. Abbasbandy, S.; Asady, B. Newton’s method for solving fuzzy nonlinear equations. Appl. Math. Comput. 2004, 159, 349–356. [CrossRef] 15. Chun, C. Iterative methods improving Newton’s method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568. [CrossRef] 16. Galántai, A. The theory of Newton’s method. J. Comput. Appl. Math. 2000, 124, 25–44. [CrossRef] 17. Kelley, C.T. Solving Nonlinear Equations with Newton’s Method; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2003. 18. Magreñán, Á.A.; Argyros, I.K. A Contemporary Study of Iterative Methods; Academic Press: Cambridge, MA, USA; Elsevier: Hoboken, NJ, USA, 2018. 19. Alarcón, V.; Amat, S.; Busquier, S.; López, D.J. A Steffensen’s type method in Banach spaces with applications on boundary-value problems. J. Comput. Appl. Math. 2008, 216, 243–250. [CrossRef] 20. Argyros, I.K. A new convergence theorem for Steffensen’s method on Banach spaces and applications. Southwest J. Pure Appl. Math. 1997, 1, 23–29. 21. Ezquerro, J.A.; Hernández, M.A.; Romero, N.; Velasco, A.I. On Steffensen’s method on Banach spaces. J. Comput. Appl. Math. 2013, 249, 9–23. [CrossRef] 22. Kneisl, K. Julia sets for the super-Newton method, Cauchy’s method, and Halley’s method. Chaos 2001, 11, 359–370. [CrossRef] [PubMed] 23. 
Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46. [CrossRef] 24. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media/Cambridge University Press: Cambridge, UK, 2003. 25. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. 26. Balazs, M.; Goldner, G. On existence of divided differences in linear spaces. Rev. Anal. Numer. Theor. Approx. 1973, 2, 3–6. 27. Hilout, S. Convergence analysis of a family of Steffensen-type methods for generalized equations. J. Math. Anal. Appl. 2008, 329, 753–761. [CrossRef] 28. Moccari, M.; Lotfi, T. On a two-step optimal Steffensen-type method: Relaxed local and semi-local convergence analysis and dynamical stability. J. Math. Anal. Appl. 2018, 468, 240–269. [CrossRef] 29. Hernández, M.A.; Rubio, M.J. A uniparametric family of iterative processes for solving non-differentiable equations. J. Math. Anal. Appl. 2002, 275, 821–834. [CrossRef] 30. Bruns, D.D.; Bailey, J.E. Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state. Chem. Eng. Sci. 1977, 32, 257–264. [CrossRef] 31. Wazwaz, A.M. Applications of Integral Equations; Linear and Nonlinear Integral Equations; Springer: Berlin/Heidelberg, Germany, 2011.