Local Polynomial Regression Solution for Differential Equations with Initial and Boundary Values

Numerical solutions of linear differential equation boundary value problems are obtained using a local polynomial estimator with kernel smoothing. To this end, a local polynomial-based method is combined with its differential form. The results computed with this technique are compared with the exact solution and with other existing methods to demonstrate its accuracy. The effectiveness of the method is verified on three illustrative examples. The presented method proves to be a very reliable alternative to some existing techniques for such realistic problems, and the numerical results show that its solutions are more accurate than those of the other methods considered.


Introduction
Boundary value problems for linear ordinary differential equations arise frequently in both theory and practice. Various solution techniques are put forward in [1][2][3][4][5][6]. Most approaches rely on the finite difference method or the shooting method, the shooting method being the most widely used. Both single and multiple shooting proceed as follows: (1) select an initial guess for the unknown initial value; (2) apply an initial value algorithm to integrate the differential equation step by step up to the far boundary; (3) compute the error between the computed boundary value and the prescribed boundary value, and readjust the initial guess until the error reaches the required accuracy. In general, this computing process is rather tedious. This paper uses local polynomial estimation to solve boundary value problems for homogeneous and nonhomogeneous linear differential equations and compares the results with the method in [4].
The linear differential equation below has a unique solution:

y''(t) + p(t) y'(t) + q(t) y(t) = f(t),   y(a) = α_0,   y(b) = α_1,  (1)

where α_0 and α_1 are constants and p(t), q(t), and f(t) are continuous functions on [a, b]. The presented method is useful for obtaining numerical approximations of differential equations, and it is also quite straightforward to write the corresponding code in any programming language. Moreover, this method does not suffer from accumulated round-off errors and does not require large amounts of computer memory. The results computed in this way have been compared with the exact solution and with existing methods to demonstrate the required accuracy. Furthermore, the method is of a general nature and can therefore also be used to solve some partial differential equations arising in various areas.

Local Polynomial Estimator for Differential Equations
In this paper, we take advantage of numerical estimation with local polynomial regression to solve the boundary value problem. The basic idea of this method was first proposed in [5]. We first introduce the mathematical ideas of local polynomial regression, which were discussed in [7][8][9][10]. Since the form of the regression function m(x) is not specified, data points far from x_0 provide little information about m(x_0), so m(x) is modeled near x_0 by the local polynomial

m(x) ≈ Σ_{j=0}^{p} β_j (x − x_0)^j,

where the parameters β_j depend on x_0 and are therefore called local parameters.

Obviously, the local parameters β_j = m^(j)(x_0)/j! fit the local model with the local data, and they are obtained by minimizing

Σ_{i=1}^{n} [Y_i − Σ_{j=0}^{p} β_j (X_i − x_0)^j]^2 K_h(X_i − x_0),  (4)

where K_h(u) = K(u/h)/h and the bandwidth h controls the size of the local area. It is more convenient to represent local polynomial regression in matrix notation. Below is the design matrix corresponding to (4), together with Y and β:

X = ( 1  (X_1 − x_0)  ⋯  (X_1 − x_0)^p
      ⋮       ⋮                 ⋮
      1  (X_n − x_0)  ⋯  (X_n − x_0)^p ),   Y = (Y_1, …, Y_n)^T,   β = (β_0, …, β_p)^T.

The weighted least squares problem (4) can then be written as

min_β (Y − Xβ)^T W (Y − Xβ),   W = diag(K_h(X_1 − x_0), …, K_h(X_n − x_0)),

so the solution vector is

β̂ = (X^T W X)^{−1} X^T W Y.

For problems (1) and (3), we choose the optimal quadratic kernel, the Epanechnikov kernel (whose optimality Fan and Yao prove in [10]):

K(u) = (3/4)(1 − u^2) for |u| ≤ 1, and K(u) = 0 otherwise.
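The weighted least squares solution above can be sketched in a few lines of code. The following Python snippet is an illustrative sketch, not the paper's code (the paper's computations were done in MATLAB); the function names, the grid, and the test curve sin(x) are assumptions chosen for demonstration. It fits a degree-p polynomial around x_0 with Epanechnikov weights and returns β̂ = (X^T W X)^{−1} X^T W Y.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def local_poly_fit(x, y, x0, p=2, h=0.5):
    """Local polynomial fit around x0: solve the weighted least squares
    problem min_beta (y - X beta)^T W (y - X beta), where column j of X
    holds (x_i - x0)^j and W = diag(K((x_i - x0)/h) / h)."""
    d = x - x0
    X = np.vander(d, N=p + 1, increasing=True)   # columns: d^0, d^1, ..., d^p
    w = epanechnikov(d / h) / h                  # kernel weights K_h(x_i - x0)
    XtW = X.T * w                                # X^T W without forming W
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta                                  # beta[j] ~ m^(j)(x0) / j!

# Usage: estimate m(x0) and m'(x0) for m(x) = sin(x) from noiseless samples.
x = np.linspace(0.0, np.pi, 200)
y = np.sin(x)
beta = local_poly_fit(x, y, x0=1.0, p=3, h=0.3)
# beta[0] approximates sin(1), beta[1] approximates cos(1)
```

Note that β̂_0 directly estimates m(x_0) and β̂_1 estimates m'(x_0), which is what makes the estimator convenient for differential equations.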

Local Polynomial Estimator
Consider the boundary value problem (1). Suppose that a = t_1 < t_2 < ⋯ < t_n = b. Approximating y(t) near a point t_0 by the local polynomial y(t) ≈ Σ_{j=0}^{p} β_j (t − t_0)^j and differentiating it term by term, substitution into (1) yields the relation

Σ_{j=0}^{p} [ j(j − 1)(t − t_0)^{j−2} + p(t) j (t − t_0)^{j−1} + q(t)(t − t_0)^j ] β_j = f(t)

(here p(t) and q(t) are the coefficients from (1), while p is the degree of the local polynomial). The corresponding minimization problem is then

min_β Σ_{i=1}^{n} [ f(t_i) − Σ_{j=0}^{p} β_j X̃_{i,j+1} ]^2 K_h(t_i − t_0),

and we can write the solution vector as

β̂ = (X̃^T W X̃)^{−1} X̃^T W Ỹ,

where W = diag(K_h(t_1 − t_0), …, K_h(t_n − t_0)) and Ỹ = (f(t_1), …, f(t_n))^T. In order to find the expression for X̃, we first determine its elements: the entry in row i and column j + 1 is

X̃_{i,j+1} = j(j − 1)(t_i − t_0)^{j−2} + p(t_i) j (t_i − t_0)^{j−1} + q(t_i)(t_i − t_0)^j.
Substituting X̃ and Ỹ into the weighted least squares solution, we obtain the value of β̂; its first component β̂_0 estimates y(t_0).
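The whole procedure can be sketched as follows. This Python code is an illustrative reimplementation under stated assumptions, not the paper's MATLAB code: it builds X̃ row by row by applying the differential operator of (1) to the monomial basis (t − t_0)^j, appends two heavily weighted extra rows for the boundary conditions (one possible way to impose y(a) = α_0 and y(b) = α_1, which the paper does not spell out), and solves the weighted least squares problem. The demo problem, y'' + y = 0 on [0, π/2] with y(0) = 0 and y(π/2) = 1, is an assumed instance consistent with Example 1's exact solution y = sin(t).

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def lpr_bvp_at(t0, t, p_fun, q_fun, f_fun, a, b, alpha0, alpha1,
               deg=7, h=1.0, bc_weight=1e6):
    """Estimate y(t0) for y'' + p(t) y' + q(t) y = f(t), y(a)=alpha0, y(b)=alpha1.

    Row i of the design matrix applies the differential operator to the
    monomial basis (t - t0)^j:
        L_j(t_i) = j(j-1) d^(j-2) + p(t_i) j d^(j-1) + q(t_i) d^j,  d = t_i - t0.
    Two heavily weighted extra rows enforce the boundary conditions.
    """
    d = t - t0
    n, m = len(t), deg + 1
    X = np.zeros((n + 2, m))
    for j in range(m):
        col = q_fun(t) * d**j
        if j >= 1:
            col = col + p_fun(t) * j * d**(j - 1)
        if j >= 2:
            col = col + j * (j - 1) * d**(j - 2)
        X[:n, j] = col
        X[n, j] = (a - t0)**j        # boundary condition row at t = a
        X[n + 1, j] = (b - t0)**j    # boundary condition row at t = b
    Y = np.concatenate([f_fun(t), [alpha0, alpha1]])
    w = np.concatenate([epanechnikov(d / h) / h, [bc_weight, bc_weight]])
    sw = np.sqrt(w)                  # weighted LS via sqrt-weighted rows
    beta, *_ = np.linalg.lstsq(X * sw[:, None], Y * sw, rcond=None)
    return beta[0]                   # beta_0 approximates y(t0)

# Demo: y'' + y = 0, y(0) = 0, y(pi/2) = 1, exact solution y = sin(t).
t = np.linspace(0.0, np.pi / 2, 41)
zero = np.zeros_like
yhat = lpr_bvp_at(0.7, t, p_fun=zero, q_fun=np.ones_like,
                  f_fun=zero, a=0.0, b=np.pi / 2, alpha0=0.0, alpha1=1.0)
```

Here yhat should be close to sin(0.7); sweeping t_0 over the grid yields the full approximate solution curve.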

Example Simulation
In this part, we verify the method on three examples: we solve specific differential problems with the local polynomial estimator, calculate the residual sum of squares, and compare with existing algorithms. All calculations were performed in MATLAB 7.0.

Initial Value Problem of Homogeneous Linear Differential Equation
Example 1. Boundary value problem of a linear differential equation with constant coefficients:

y''(t) + y(t) = 0,   y(0) = 0,   y(π/2) = 1.

The eigenvalue method can be used to obtain the exact solution of this problem, y = sin(t). Different parameters n, h, and p correspond to different residual sums of squares, given in Figure 1 and Table 1, where LPR represents local polynomial regression. In Figure 1, LPR is carried out for n = 21 and p = 5 with h = π/30, and we obtain the residual sum of squares and the percentage error. In Table 1, the two methods are compared: with step size h_s = (π/2)/100 we obtain the percentage error of the optimization algorithm, and comparing the percentage errors of the two methods shows that local polynomial regression clearly has the advantage. Better results can be obtained for n = 45 and p = 6 with h = π/60, and for n = 60 and p = 8 with h = π/80.
In Table 1, LPR represents local polynomial regression, OPA stands for optimization algorithm, Pe stands for percentage error, and h_s represents the step size used in [3].
Example 2. Boundary value problem of a homogeneous linear differential equation. The exact solution is y = e^(−t) cos(t). Different parameters n, h, and p correspond to different residual sums of squares, given in Figure 2 and Table 2, where LPR represents local polynomial regression. In Figure 2, the local polynomial estimator is carried out for n = 50 and p = 7 with h = 1/20, and we obtain the residual sum of squares R_n^2. We conclude that local polynomial regression is more effective. Better results can be obtained for n = 60 and p = 9 with h = 1/135 (see Figure 2).

For Example 3, the exact solution is y = e^t. Different parameters n, h, and p correspond to different residual sums of squares, given in Figure 3 and Table 3, where LPR again represents local polynomial regression. In Figure 3, the local polynomial estimator is carried out for n = 20 and p = 4 with h = 1/11, and the resulting residual sum of squares R_n^2 = Σ_{i=1}^{n} [y(t_i) − ŷ(t_i)]^2 is very small. From Table 3 it can be concluded that local polynomial regression performs well and the residual sum of squares R_n^2 is very small. Better results can be obtained for n = 70 and p = 10 with h = 0.0045 (Figure 3: Solution of local polynomial estimator and exact solution of Example 3). Figures 4, 5, and 6 present the absolute errors of this example for three groups of parameters: the absolute errors reach a magnitude of 10^(−4) for different parameters n and h with the same p = 5, and they decrease gradually as n increases. From Examples 1-3, we conclude that the residual sum of squares and the absolute error are smaller when the local polynomial method is used, which shows its good accuracy.
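The error measures reported in Tables 1-3 can be computed as sketched below. This Python snippet is illustrative only: the y_hat values are a synthetic stand-in for an actual LPR approximation (recomputing one would require the full estimator), so only the formulas, not the numbers, reflect the paper.

```python
import numpy as np

def residual_sum_of_squares(y_exact, y_hat):
    """R_n^2 = sum_{i=1}^{n} [y(t_i) - yhat(t_i)]^2, as used in the tables."""
    return float(np.sum((y_exact - y_hat) ** 2))

def percentage_error(y_exact, y_hat):
    """Pointwise percentage error 100 * |y - yhat| / |y|, skipping zeros of y."""
    mask = np.abs(y_exact) > 1e-12
    return 100.0 * np.abs(y_exact[mask] - y_hat[mask]) / np.abs(y_exact[mask])

# Illustration with the exact solution of Example 1, y = sin(t), and a
# synthetic approximation standing in for an LPR fit.
t = np.linspace(0.0, np.pi, 61)
y = np.sin(t)
y_hat = y + 1e-4 * np.cos(t)
rss = residual_sum_of_squares(y, y_hat)
```

With a perturbation of magnitude 10^(−4), the residual sum of squares lands around 10^(−7), matching the orders of magnitude discussed for the examples.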

Conclusions
In this paper, a local polynomial-based fitting method has been proposed for the numerical solution of differential equations with boundary values. Comparisons of the computed results with exact solutions and with existing methods show that the method is capable of solving differential equations with boundary values and of obtaining highly accurate results with minimal computational effort in both time and space. The local polynomial-based technique approximates the exact solution very well. Numerical results demonstrate that the proposed method is effective and that the local polynomial estimator is more accurate than other algorithms, for instance, the optimization algorithm. Furthermore, these methods can also be used to study the numerical solutions of boundary value problems for differential equations on the whole line, or fractional differential equations, which have been studied by the authors in [11][12][13]. To illustrate our method further, future work will focus on such fractional differential and integro-differential equations.
Figure 1: Solution of local polynomial estimator and exact solution of Example 1 (LPR with n = 21, p = 5, h = π/30).

Figure 2: Solution of local polynomial estimator and exact solution of Example 2.

Example 3.
Boundary value problem of a nonhomogeneous linear differential equation.
Therefore, we can only use the local data points around x_0. Suppose that m(x) has a (p + 1)th derivative at x_0. By the Taylor expansion, for a point x located in the neighborhood of x_0, we can use a p-th order polynomial to locally approximate m(x), modeling it as

m(x) ≈ Σ_{j=0}^{p} m^(j)(x_0)/j! (x − x_0)^j = Σ_{j=0}^{p} β_j (x − x_0)^j.

Table 1: Comparison of LPR and OPA.