Least-Squares Residual Power Series Method for the Time-Fractional Differential Equations

In this study, an applicable and effective method, the least-squares residual power series method (LSRPSM), is proposed to solve time-fractional differential equations. The least-squares residual power series method combines the residual power series method with the least-squares method. The fractional derivatives are taken in the Caputo sense. Firstly, using the classic residual power series method, an analytical series solution is derived. Secondly, the concept of the fractional Wronskian is introduced and applied to validate the linear independence of the functions. Thirdly, a linear combination of the first few terms, containing unknown coefficients, is used as an approximate solution. Finally, the least-squares method is applied to obtain the unknown coefficients. The least-squares residual power series method yields approximate solutions with fewer expansion terms than the classic residual power series method. The examples, presented with both data and graphics, show that the new method converges faster than the classic residual power series method.


Introduction
Fractional calculus has been a popular topic for the past few years. Leibniz and L'Hopital were the first to propose fractional differential equations. Fractional differential equations are applied in many different fields, such as control science and engineering [1] and computer science and technology [2]. Many researchers have studied different aspects of the theory of fractional differential equations.
In recent years, the fractional residual power series method has been proposed to solve fractional differential equations; it has been used to find analytical solutions for several classes of fractional differential equations, and many scholars have devoted themselves to this field. In [7], the time-fractional foam drainage equation was worked out by the residual power series method. Wang and Chen [8] showed that the residual power series method can be applied to the time-fractional Whitham-Broer-Kaup equations. The fractional variant of the (1 + 1)-dimensional Biswas-Milovic equation [9] was solved by the residual power series method. Alquran et al. [10] solved the time-fractional Phi-4 equation by the residual power series method. A system of multipantograph delay differential equations was treated with the residual power series method by Komashynska et al. [11]. Approximate solutions of the time-fractional Sharma-Tasso-Olever equation [12] were obtained by the residual power series method. In [13], the approach was applied to find exact solutions of fractional-order time-dependent Schrödinger equations. The residual power series method has also been used for many other problems, such as the gas dynamics equation [14], the fractional initial Emden-Fowler equation [15], the generalized Berger-Fisher equation [16], and the nonlinear time-space-fractional Benney-Lin equation [17].
Recently, some existing methods have been modified with the least-squares method so that the approximate solutions achieve higher accuracy. Kumar and Koundal [18] proposed an approach in which systems of nonlinear fractional partial differential equations are solved by generalized least-squares homotopy perturbations. Approximate analytical solutions of nonlinear differential equations were obtained by the least-squares homotopy perturbation method [19]. Linear and nonlinear fractional partial differential equations were solved by the least-squares homotopy perturbation method [20].
In this paper, the least-squares method is combined with the residual power series method; the combination is called the least-squares residual power series method (LSRPSM). Compared with the classic residual power series method, a more accurate approximate solution with fewer expansion terms can be obtained by the new method. The rest of the paper is structured as follows. In Section 2, some basic definitions concerning the Caputo derivative and the fractional partial Wronskian are introduced. In Section 3, the least-squares residual power series method is proposed together with the necessary definitions. Numerical results and discussions are presented with graphics and charts in Section 4. Finally, the conclusion is drawn in Section 5.

Basic Definitions
In this section, the definition of the Caputo fractional derivative is introduced systematically. This section also presents the definition of the fractional partial Wronskian.
The Riemann-Liouville fractional integral operator of order α ≥ 0 is defined in the usual way. The Riemann-Liouville fractional integral is linear: J^α(λf + μg) = λJ^α f + μJ^α g, where λ and μ are real constants.
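For reference, the Riemann-Liouville fractional integral and the Caputo fractional derivative referred to above take their standard forms (restated here from the standard literature, not quoted from this paper's numbered equations):

```latex
J^{\alpha}f(t) = \frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}f(\tau)\,\mathrm{d}\tau,
\quad \alpha>0, \qquad J^{0}f(t)=f(t),
\qquad
J^{\alpha}\bigl[\lambda f(t)+\mu g(t)\bigr] = \lambda J^{\alpha}f(t)+\mu J^{\alpha}g(t),
```

```latex
D_{t}^{\alpha}f(t) = \frac{1}{\Gamma(m-\alpha)}\int_{0}^{t}(t-\tau)^{m-\alpha-1}f^{(m)}(\tau)\,\mathrm{d}\tau,
\quad m-1<\alpha\leq m,\; m\in\mathbb{N}.
```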

Direct Method of Least-Squares Residual Power Series Method (LSRPSM)
In this section, the general procedure of the least-squares residual power series method is proposed. Based on the classic residual power series method, the method of combining residual power series with the least-squares method is used for the time-fractional differential equations.

Classic Residual Power Series Method.
We consider the following time-fractional differential equation, subject to an initial condition, where L is a linear operator, N is a nonlinear operator, u(x, t) is the unknown function, and I denotes the initial condition.
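A common way to write such a problem, consistent with the operators named above (the exact displayed form is our assumption, since this general equation is standard in residual power series papers), is:

```latex
D_{t}^{\alpha}u(x,t) + L[u(x,t)] + N[u(x,t)] = 0, \quad 0<\alpha\leq 1,
\qquad \text{with the initial data encoded as } I(u)=0 .
```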

Complexity
Following the classic residual power series method [26, 27], the solution is written as the fractional power series (6). In order to approximate (6), the ith truncated series u_i(x, t) is defined. With the initial condition I(u) = 0, we define the ith residual function, and in order to get f_n(x), n ∈ N*, we look for the solution of the residual equations, where N* = {1, 2, 3, ...}. Here, the classic residual power series method gives the ith-order approximate solutions.
Theorem 2 (see [28]) (convergence theorem). Suppose that the series representation holds, where r is the radius of convergence.
Moreover, there exists a value ξ with 0 ≤ ξ ≤ t such that the error term r_N(x, t) has the stated form. According to Theorem 1, we can obtain the corresponding bound.
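The workhorse identity behind the residual equations above is the Caputo derivative of a power of t, namely D_t^α t^p = Γ(p+1)/Γ(p−α+1) t^{p−α} for p > 0, which lets the coefficients f_n(x) be determined one at a time. A small sketch on a toy problem of our own (D_t^α u = u with u(0) = 1, which is not one of the paper's examples):

```python
import sympy as sp

t, alpha = sp.symbols('t alpha', positive=True)

def caputo_t_power(p, a):
    """Caputo derivative of t**p of order a, for p > 0:
    D_t^a t^p = Gamma(p + 1) / Gamma(p - a + 1) * t**(p - a)."""
    return sp.gamma(p + 1) / sp.gamma(p - a + 1) * t**(p - a)

def truncated_series(i, f):
    """RPSM-style truncation u_i(t) = sum_n f[n] * t**(n*alpha) / Gamma(1 + n*alpha)."""
    return sum(f[n] * t**(n * alpha) / sp.gamma(1 + n * alpha)
               for n in range(i + 1))

# For D_t^alpha u = u with u(0) = 1, matching powers of t**alpha in the
# residual forces f_n = 1 for every n (a truncated Mittag-Leffler series).
u2 = truncated_series(2, [1, 1, 1])

# Sanity check: at alpha = 1 the truncation is the Taylor polynomial of exp(t).
print(sp.expand(u2.subs(alpha, 1)))  # -> t**2/2 + t + 1
```

The identity above is exactly what collapses each residual condition to an algebraic equation for one coefficient: applying D_t^{nα} to the truncation and setting t = 0 isolates f_n.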

Least-Squares Residual Power Series Method.
The procedure of the least-squares residual power series method is presented, and some definitions we require are proposed in this section.
Let the remainder Res for differential equation (4) be defined together with its accompanying condition, where u is an approximate solution of equation (4).
The sequence {s_i^α(x, t)}_{i∈N*} converges to the solution of equation (4).

Definition 4. We say that u is an ε-approximate residual power series solution of equation (4) if u also satisfies (18).
Definition 5. We say that u is a weak ε-approximate residual power series solution of equation (4) if u also satisfies (18).
We propose the following steps for the least-squares residual power series method.
Step 1. We use the classic residual power series method. The form of u_i(x, t) is written down, and the ith residual function is defined as follows. Then, we look for the solutions f_n(x), where N* = {1, 2, 3, ...}. Here, the classic residual power series method gives the ith-order approximate solutions, where φ_0, φ_1, ..., φ_i can be calculated by (11).
Step 2. The linear independence of the functions can be verified by the fractional Wronskian, where the elements of S_i are linearly independent in the vector space of continuous functions defined on R.
Remark 2. If we cannot find a point at which the value of W^α[φ_0, φ_1, φ_2, ..., φ_i] is nonzero, the set S_i is linearly dependent, so we must use the classic residual power series method in this case.
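The classical analogue of this test (our own sketch; the paper's fractional partial Wronskian replaces the ordinary derivatives with Caputo derivatives) looks for a point where the Wronskian determinant is nonzero, which certifies linear independence:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian_nonzero_somewhere(funcs, var, sample_points):
    """Build the Wronskian matrix (row r holds the r-th derivatives of the
    functions) and look for a sample point where its determinant is
    nonzero; finding one certifies linear independence."""
    n = len(funcs)
    W = sp.Matrix(n, n, lambda r, c: sp.diff(funcs[c], var, r))
    det = sp.simplify(W.det())
    return any(det.subs(var, p) != 0 for p in sample_points)

# exp(x/2), x*exp(x/2), x**2*exp(x/2) are linearly independent ...
print(wronskian_nonzero_somewhere(
    [sp.exp(x / 2), x * sp.exp(x / 2), x**2 * sp.exp(x / 2)], x, [0, 1]))  # True
# ... while x, 2*x, x**2 are not, and their Wronskian vanishes identically:
print(wronskian_nonzero_somewhere([x, 2 * x, x**2], x, [0, 1]))            # False
```

Note the one-sided nature of the test, matching Remark 2: a vanishing determinant at the sampled points does not by itself prove dependence, which is why the remark falls back to the classic method.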
Step 3. Assume u_i as the approximate solution of equation (4). Substituting the approximate solution u_i into (17), we obtain the remainder.
Step 4. We attach the following functional J. Here, we calculate the constants k_n so that J is minimized, and we obtain the values of s_i^α(x, t) by solving (29). From (27)-(30), we can get Theorem 3.
Theorem 3. The values of s_i^α(x, t) from (30) satisfy the following property.
Proof. Based on the way that s_i^α(x, t) is computed, the stated inequality holds. Also, from (31) we have the corresponding bound. Then, according to the convergence of the residual power series solution from (16), the ε-approximate residual power series solutions s_i^α(x, t) are also weak solutions of equation (4).
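Steps 3 and 4 amount to minimizing the integral of the squared remainder by solving ∂J/∂k_n = 0. A minimal sketch on a toy problem of our own (u′ = u with u(0) = 1 and a quadratic trial solution; the paper instead builds the trial solution from the fractional series terms φ_n):

```python
import sympy as sp

t, k1, k2 = sp.symbols('t k1 k2')

# Toy problem (ours, not the paper's): u' = u with u(0) = 1. The trial
# solution uses linearly independent terms with free coefficients k1, k2;
# the constant term 1 plays the role of k0 fixed by the initial condition.
u = 1 + k1 * t + k2 * t**2
res = sp.diff(u, t) - u                        # remainder Res(t; k1, k2)

# Functional J(k1, k2): integral of the squared remainder over the domain.
J = sp.integrate(res**2, (t, 0, 1))

# Least-squares conditions dJ/dk1 = dJ/dk2 = 0 form a linear system.
sol = sp.solve([sp.diff(J, k1), sp.diff(J, k2)], [k1, k2])
u_ls = u.subs(sol)
print(sol)  # k1 = 72/83, k2 = 70/83

# The fitted polynomial stays close to the exact solution exp(t) on [0, 1].
err = max(abs(u_ls.subs(t, tv) - sp.exp(tv)).evalf() for tv in (0, sp.S(1) / 2, 1))
print(err < 0.1)  # True
```

The design mirrors the paper's Step 4: the unknowns enter the remainder linearly, so the stationarity conditions are a small linear system rather than a nonlinear optimization.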

Illustrative Examples
In this section, some examples are presented for the least-squares residual power series method (LSRPSM). Using the new method, we compute only the first few iterations by the fractional residual power series method; the remaining terms can be ignored. Then, the least-squares method is applied, and the unknown coefficients are obtained. The approximate solutions are calculated with Maple on Windows 7 (64 bit). We analyse the approximate solutions with charts and graphics.
Example 1. Consider the following time-fractional Fornberg-Whitham equation, with the given initial condition and the corresponding exact solution for α = 1. Using the classic residual power series method, we can obtain the following equations [29]. Then, the Wronskian evaluation (40) is nonzero. Hence, the functions φ_0, φ_1, and φ_2 are linearly independent.
So, we can obtain the approximate solution, which can be written out explicitly. From (36), we can get the residual function with the initial condition. Using the given condition (43) in (41), we get k_0 = 1, so (41) can be rewritten accordingly. We then obtain Res by substituting (44) into (42), and the functional J follows as (45). Minimizing the functional J of (45) yields two algebraic equations (46), and we determine the unknown coefficients of (46) when α = 1. In Figure 1, the exact solutions and the approximate solutions obtained with the least-squares residual power series method are presented as three-dimensional graphics. We present the absolute errors between the exact solutions and the approximate solutions of the new method; the absolute error Error(x, t) is written as the difference between the exact and the approximate solution in absolute value. As shown in Table 1, we present the absolute errors for different values of x and t when α = 1, 0 ≤ t ≤ 1, −10 ≤ x ≤ 10. For each item in the table, the upper corner gives the name of the method and the lower corner the number of expansion terms used by that method. The least-squares residual power series method (LSRPSM) with u_i(x, t) for i = 2 is compared with the classic residual power series method (RPSM) with u_i(x, t) for i = 5, as shown in Table 1.
From Table 1, we can see that the absolute errors of the least-squares residual power series method between the approximate solutions and the exact solutions are within an acceptable range for the different x and t. The magnitudes of the absolute errors range from 10^-3 to 10^-7. The smaller the value of t with x fixed, the bigger the absolute errors; meanwhile, the smaller the value of x with t fixed, the smaller the absolute errors. Comparing the classical method with our new method, we find that the new method has higher accuracy.
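Error tables of this kind can be reproduced mechanically once the approximate solution is in hand. A generic sketch with placeholder functions (both hypothetical stand-ins, since the paper's closed forms live in its displayed equations):

```python
import math

# Hypothetical stand-ins: in the paper, u_exact is the closed-form solution
# of the example and u_approx is the LSRPSM truncation u_2(x, t). The
# functions below are placeholders chosen only to make the sketch runnable.
def u_exact(x, t):
    return math.exp(x / 2.0 - 2.0 * t / 3.0)

def u_approx(x, t):
    # placeholder: second-order expansion of u_exact in t
    s = 2.0 * t / 3.0
    return math.exp(x / 2.0) * (1.0 - s + s * s / 2.0)

# Error(x, t) = |u_exact - u_approx| tabulated on a grid, as in Table 1.
for x in (-10.0, 0.0, 10.0):
    for t in (0.2, 0.6, 1.0):
        err = abs(u_exact(x, t) - u_approx(x, t))
        print(f"x={x:6.1f}  t={t:.1f}  Error={err:.3e}")
```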
Using the same method, the linear independence of the functions can be verified when α = 0.35, 0.55, 0.75, 0.95. Then, we can obtain the unknown coefficients k_1 and k_2, respectively, by using the least-squares method. Figure 2 shows the influence of different α on the approximate solutions when 0 ≤ t ≤ 1, −10 ≤ x ≤ 10; Figure 2(a) presents the approximate solutions for α = 0.35, and the remaining panels present the other values of α. From Figure 2, the larger the value of α, the smoother the surface: as the parameter α increases, the graphs get closer and closer to the graph of the exact solution.
For any α ∈ (0, 1], the exact value of |Res(x, t)| is 0. The deviation between the second-order approximate solutions and the exact solutions can therefore be measured by the values of |Res(x, t)|, so we use |Res(x, t)| to represent the deviation between the approximate solution and the exact solution.
Using the same method, the linear independence of the functions can be verified when α = 0.2, 0.4. Then, the unknown coefficients k_1 and k_2 can be solved, respectively, by using the least-squares method. We compare the results of the least-squares residual power series method (LSRPSM), the residual power series method (RPSM), and the variational iteration method (VIM) [30]; the approximate solutions of VIM can be written out explicitly.
Example 2. Consider the next equation, with the given initial condition and the corresponding exact solution for α = 1. Using the classic residual power series method, we can obtain the following equations [31]. Then, the fractional Wronskian is nonzero at a suitable point. Hence, the functions φ_0, φ_1, and φ_2 are linearly independent.
So, we can obtain the approximate solution, which can be written out explicitly. From (50), we can get the residual function with the initial condition. Using the given condition (57) in (55), we get k_0 = 1, so (55) can be rewritten accordingly. We then obtain Res by substituting (58) into (56), and the functional J follows as (59). Minimizing the functional J of (59) yields two algebraic equations (60), and we determine the unknown coefficients of (60) when α = 1. In Figure 3, we present the exact solutions and the approximate solutions as three-dimensional graphics: Figure 3(a) presents the exact solution and Figure 3(b) the approximate solution. From Figure 3, we can see that Figure 3(b) is similar to Figure 3(a), so the approximate solution is accurate when the value of α approaches 1.
As shown in Table 3, we present the absolute error in (48) for different values of x and t when α = 1, 0 ≤ t ≤ 1, −2 ≤ x ≤ 2. The least-squares residual power series method (LSRPSM) and the q-homotopy analysis method (q-HAM) [32] with u_i(x, t) for i = 2 are compared with the classic residual power series method with u_i(x, t) for i = 2, as shown in Table 3. The approximate solutions of the q-HAM can be written out explicitly, with parameter values h = −1.5, ε = 0, and n = 1.
From Table 3, we can see that the magnitudes of the absolute errors range from 10^-3 to 10^-8. The absolute errors of the new method for the different values of x and t are within an acceptable range. With the value of t fixed, the absolute error is smallest when x = 0 and increases as the absolute value of x increases. Compared with the classic residual power series method and the q-homotopy analysis method, the least-squares residual power series method is more accurate.
From Figure 4, we can conclude that the larger the value of α, the smoother the surface and the closer it is to the graph of the exact solution.
Using the same method, the linear independence of the functions can be verified when α = 0.7, 0.9. The unknown coefficients k_1 and k_2 can be obtained by using the least-squares method. Table 4 shows the approximate solutions and the values of |Res(x, t)| when t = 0.01.
Table 3: Absolute errors by LSRPSM, RPSM, and q-HAM for α = 1.
Example 3. Consider the following fractional biological population equation, with the given initial condition and the corresponding exact solution for α = 1. Using the classic residual power series method, we can obtain the following equations [33]. Hence, the functions φ_0, φ_1, and φ_2 are linearly independent.
So, we can obtain the approximate solution, which can be written out explicitly. From (64), we can obtain the residual function with the initial condition. Using the given condition (71) in (69), we get k_0 = 1, so (69) can be rewritten accordingly. We then obtain Res by substituting (72) into (70), and the functional J follows as (73). Minimizing the functional J of (73) yields two algebraic equations ∂J/∂k_n = 0 (74), and we determine the unknown coefficients of (74) when α = 1. In Figure 5, we present the exact solutions and the approximate solutions as three-dimensional graphics. As shown in Table 5, we present the absolute error in (49) for different values of x and y when α = 1, t = 1, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. The least-squares residual power series method (LSRPSM) with u_i(x, t) for i = 2 is compared with the classic residual power series method and the homotopy perturbation method (HPM) [34] with u_i(x, t) for i = 2, as shown in Table 5. The approximate solutions of HPM for i = 2 can be written out explicitly. From Table 5, we can conclude that the absolute errors for the different values of x and y are within an acceptable range. With the same number of terms, we compare the residual power series method and the homotopy perturbation method with the least-squares residual power series method, and the new method is more accurate.
From Figure 6, the larger the value of α, the smoother the surface and the closer it is to the graph of the exact solution.
Using the least-squares residual power series method, the linear independence of the functions can be verified when α = 0.3, 0.6. The unknown coefficients k_1 and k_2 can be obtained by using the least-squares method. Table 6 shows the approximate solutions and the values of |Res(x, t)| at a fixed time t.

Conclusion
In this paper, we discuss the approximate solutions obtained by the least-squares residual power series method. This method is an improvement on the classic residual power series method: we combine the least-squares method with the residual power series method and obtain more accurate approximate solutions with fewer expansion terms. The approximate solutions are presented with data and graphics. The results show that the approximate solutions obtained by this method have smaller errors than those of the classic method.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Acknowledgments
This study was supported by the National Natural Science Foundation of China (grant nos. 11701446, 11601420, and