Solving Operator Equation Based on Expansion Approach

To date, researchers have usually applied spectral and pseudospectral methods only to the numerical approximation of ordinary and partial differential equations, and usually with polynomial bases. The principal contribution of this paper is to develop the expansion approach based on general basis functions (with polynomial bases as a particular case) for solving general operator equations, whose particular cases include integral equations, ordinary differential equations, difference equations, partial differential equations, and fractional differential equations. In other words, this paper presents the expansion approach for solving general operator equations of the form Lu + Nu = g(x), x ∈ Γ, with respect to the boundary condition Bu = λ, where L, N, and B are linear, nonlinear, and boundary operators, respectively, related to a suitable Hilbert space, Γ is the domain of approximation, λ is an arbitrary constant, and g(x) ∈ L2(Γ) is an arbitrary function. A further contribution of this paper is to introduce a general version of the pseudospectral method based on the general interpolation problem. Finally, some experiments show the accuracy of our development, and an error analysis is presented in the L2(Γ) norm.


Introduction
Approximation theory is an important field of mathematics with both pure and applied aspects. This field tries to approximate complicated functions by simpler representations. Some important branches of this field are interpolation, extrapolation, and best approximation. Expansion series also have wide application in approximation theory. An expansion series represents a function using suitable basis functions, such as standard polynomials (nonperiodic case) or trigonometric polynomials (periodic case). Taylor series and Fourier series, in classical and nonclassical forms, are two widely applicable versions of such expansions. These series are used in the numerical solution of ordinary differential, partial differential, and integral equations. Weighted residual methods are a class of methods which use expansion series for solving ordinary differential, partial differential, and integral equations. In these methods, substituting a suitable expansion of the exact solution into the equation yields a residual, which is then minimized in a certain way. This minimization leads to specific methods such as the Galerkin, collocation, and Tau formulations. In this paper, using Fourier series and interpolation as expansions, we implement the spectral and pseudospectral methods for solving the general operator equation Lu + Nu = g(x), x ∈ Γ, with respect to the boundary condition Bu = λ. The remainder of the paper proceeds as follows. In Sections 1.1 to 1.4, we introduce the preliminaries and notations, the general interpolation problem, spectral and pseudospectral methods, and operator equations. In Section 2, we implement the spectral method for the general operator equation. Section 3 is devoted to the implementation of the pseudospectral method for the general operator equation, with error analysis. In Section 4, we present some test experiments to show the accuracy and validity of our approach. Finally, in the last section, we give a brief conclusion.
1.1. Preliminaries and Notations. First we introduce some notations which we use in the following.

Definition 1. A mapping from a linear space X into R is called a functional. Also, the set of all bounded linear functionals on the linear space X is itself a linear space, called the algebraic conjugate or dual space of X, and denoted by X* [1].

Definition 2. A projection is a linear transformation P from a vector space to itself such that P^2 = P. It leaves its image unchanged. Though abstract, this definition of projection formalizes and generalizes the idea of graphical projection [1].

Definition 3. Suppose X is a linear space and X* is its dual space; then L_1, L_2, ..., L_n ∈ X* are linearly independent if and only if c_1 L_1 + ... + c_n L_n = 0 implies c_1 = ... = c_n = 0 (see [1]).

Remark 5. Suppose u ∈ L2(Γ); then the operators L_1(u) = u (identity operator), L_2(u) = ∫ u dx (integral operator), L_3(u) = u' (derivative operator), and L_h(u(x)) = u(x − h) (shift operator), h ∈ R, are independent [1].

1.2. General Interpolation Problem.
The general interpolation problem is stated as follows. Let X be a linear space of dimension n, and let L_1, L_2, ..., L_n be given linear functionals in X*. Find a u(x) ∈ X such that

L_i(u) = w_i, i = 1, ..., n,

where the {w_i}_{i=1}^{n} are given. This form of interpolation is a generalization of classical polynomial interpolation. To make the connection, we take P_{n−1} (the polynomials of degree at most n − 1) as X and define the functionals by

L_i(u) = u(x_i), i = 1, ..., n,

where the {x_i}_{i=1}^{n} are a set of distinct points. The interpolation problem is then to find a polynomial p_{n−1}(x) taking on the preassigned values w_1, w_2, ..., w_n at the points x_1, x_2, ..., x_n. This type of interpolation is called Lagrange interpolation. Three important subjects in generalized interpolation are the existence, uniqueness, and accuracy of the corresponding interpolation problems; the first two are discussed in [1], while the third is much more difficult, and we are able to give a useful answer only for certain types of polynomial interpolation. For a review of the literature on this subject, see [1]. Throughout this paper, we consider the linear space X to be the Hilbert space L2[a, b].
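To make the point-evaluation case concrete, the following minimal Python sketch (our illustration, not part of the paper; the function name and sample data are hypothetical) constructs the Lagrange interpolant satisfying the functional conditions L_i(p) = p(x_i) = w_i:

```python
import numpy as np

def lagrange_interpolate(xs, ws):
    """Return p(t): the unique polynomial of degree < n with p(x_i) = w_i."""
    xs, ws = np.asarray(xs, float), np.asarray(ws, float)

    def p(t):
        total = 0.0
        for i, xi in enumerate(xs):
            # cardinal basis polynomial l_i with l_i(x_j) = delta_ij
            li = np.prod([(t - xj) / (xi - xj)
                          for j, xj in enumerate(xs) if j != i])
            total += ws[i] * li
        return total

    return p

# preassigned values of f(x) = x^2 at three distinct points
p = lagrange_interpolate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
assert abs(p(1.5) - 2.25) < 1e-12  # quadratic data is reproduced exactly
```

Since three point values of f(x) = x^2 determine a unique quadratic, the interpolant reproduces f exactly everywhere.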

1.3. Spectral and Pseudospectral Methods.
In the last two decades, spectral and in particular pseudospectral methods have emerged as intriguing alternatives in many situations and as superior ones in several areas [2-7]. Spectral and pseudospectral methods are known as highly accurate solvers for ordinary differential, partial differential, and integral equations. The basic idea of spectral methods is to use a general Fourier series as an approximate solution with unknown coefficients [2]. The three most widely used spectral versions are the Galerkin, collocation, and Tau methods [2]. Their utility is based on the fact that if the solution sought is smooth, usually only a few terms in an expansion of global basis functions are needed to represent it to high accuracy. It is well known that, for smooth solutions, spectral methods converge to the solution of the continuous problem faster than any finite power of 1/N, where N is the dimension of the reduced-order model [2]. Spectral methods, in the context of numerical schemes for differential equations, belong to the family of weighted residual methods, which are traditionally regarded as the foundation of many numerical methods such as finite element, spectral, finite volume, and boundary element methods. Weighted residual methods represent a particular group of approximation techniques in which the residuals are minimized in a certain way, and this leads to specific methods including the Galerkin, collocation, and Tau formulations. The basic idea of pseudospectral methods (see, e.g., [6-10]) is to interpolate an unknown function (the exact solution of our problem) at suitable points and then determine the unknown values. In other words, one uses basis functions {φ_k, k = 1, ..., N}, such as polynomials, to represent an unknown function (the approximate solution of the operator equation) via

u_N(x) = Σ_{k=1}^{N} a_k φ_k(x).

An important feature of pseudospectral methods is the fact that one is usually content with obtaining an approximation to the solution on a discrete set of grid points {x_k, k = 1, ..., N}. One of several ways to implement the pseudospectral method is via matrix operations. For instance, for the differential operator (L = d/dx), we have a useful matrix, denoted by D and named the differentiation matrix; that is, one finds a matrix D such that, at the grid points {x_k, k = 1, ..., N},

u' ≈ D u,

where u is the vector of values of u_N at the grid points. For every operator, we can likewise define a corresponding operational matrix.
Frequently, orthogonal polynomials such as Chebyshev, Jacobi, and Hermite polynomials are used as basis functions, and the grid points are then the corresponding Chebyshev, Jacobi, and Hermite points, respectively. In the Chebyshev case, the entries of the differentiation matrix are explicitly known (see, e.g., [7]). Using such an approach for solving boundary value problems involves the solution of linear systems of equations which are known to be very ill-conditioned; for example, for methods based on orthogonal polynomials, the condition number of the approximate first-order operator grows like N^2, while the condition number of the second-order operator in general scales like N^4. For a review of the literature on this method, see [6]. Preconditioning is a method for reducing the ill-conditioning of the systems obtained from pseudospectral methods [6]. The pseudospectral method has been used for the numerical solution of integral equations [11], partial differential equations [8, 10], and ordinary differential equations [9, 10].
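The conditioning claim above can be checked numerically. The sketch below (our illustration, using the standard Chebyshev-Gauss-Lobatto construction of the differentiation matrix, as in, e.g., [7]) builds D, verifies that it differentiates low-degree polynomials exactly, and observes the growth of the condition number of the first-order operator once one boundary row and column are removed (the full matrix D is singular, since constants lie in its nullspace):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x_j = cos(j*pi/N)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal: negative row sums
    return D, x

# sanity check: a degree-3 polynomial is differentiated exactly for N >= 3
D, x = cheb(8)
assert np.allclose(D @ x**3, 3 * x**2)

# conditioning of the first-order operator (one boundary row/column removed)
for N in (8, 16, 32):
    DN, _ = cheb(N)
    print(N, np.linalg.cond(DN[1:, 1:]))  # grows roughly like N^2
```

Doubling N roughly quadruples the printed condition numbers, consistent with the N^2 growth stated above.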
1.4. Operator Equations. Many important engineering problems fall into the category of operator equations with boundary operator conditions, such as integral, difference, and partial differential equations. The general form of these equations is

Lu + Nu = g(x), x ∈ Γ, (8)

with respect to the boundary condition Bu = λ, where L, N, and B are linear, nonlinear, and boundary operators, respectively, related to a suitable Hilbert space, Γ is the domain of approximation, and g(x) ∈ L2(Γ) is an arbitrary function.
The necessary and sufficient conditions for existence and uniqueness of solutions of (8) can be found in [12]. Obtaining explicit solutions of operator equation (8) is in general a difficult problem, and it depends on the properties of the operators L, N, and B. Some numerical methods have been presented for solving such operator problems, such as iterative methods, the method of [13], and so on. Simplifying the nonlinear term N in the operator equation (8) to linear operators, for easy and efficient implementation, is an important subject. Some methods exist for this simplification, such as using Taylor series and the linearization method [13]. In Section 2, we discuss this subject in detail.
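As a toy illustration of such a linearization (ours, not taken from [13]; the discretized operators are invented for the example), consider a discretized equation Lu + Nu = g with L the identity matrix and N(u) = u^3 acting componentwise. Newton's method replaces the nonlinear term at each step by its local linearization:

```python
import numpy as np

# toy sketch: after discretization, the operator equation Lu + Nu = g
# becomes the nonlinear algebraic system F(u) = A u + u**3 - g = 0
rng = np.random.default_rng(0)
n = 5
A = np.eye(n)                      # discretized linear operator L (identity here)
u_true = rng.uniform(-1, 1, n)
g = A @ u_true + u_true**3         # manufactured right-hand side

u = np.zeros(n)
for _ in range(50):
    F = A @ u + u**3 - g
    J = A + np.diag(3 * u**2)      # Jacobian: local linearization of L + N
    u -= np.linalg.solve(J, F)

assert np.allclose(u, u_true)
```

The Jacobian J plays the role of the linear operator that approximates L + N near the current iterate; this is the algebraic counterpart of the Taylor-series linearization mentioned above.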

Spectral Method for General Operator Equation
In this section, we describe the spectral method for solving the general operator equation Lu + Nu = g(x), x ∈ Γ, with respect to the boundary condition Bu = λ. For this purpose, first consider a suitable set of basis functions {φ_k(x)}_{k=0}^{N} and expand a function u(x) (the exact solution of operator equation (8)) in these basis functions as

u_N(x) = Σ_{k=0}^{N} a_k φ_k(x). (9)

From the basis property of {φ_k(x)}_{k=0}^{N}, the coefficients a_k in (9) exist and are unique. Now, we substitute the expansion (9) into operator equation (8) and its boundary conditions as follows:

L(u_N(x)) + N(u_N(x)) = g(x), B(u_N(x)) = λ. (11)

If the basis functions {φ_k(x)}_{k=0}^{N} automatically satisfy the boundary conditions, then they are called Galerkin basis functions. From (11), we must simplify the two terms L(u_N(x)) and N(u_N(x)), and we perform this simplification in two cases. Simplification of the first term, L(u_N(x)), is easy, because by the linearity of the operator L we obtain

L(u_N(x)) = Σ_{k=0}^{N} a_k L(φ_k(x)). (12)

The nonlinear term N(u_N(x)) is rather cumbersome, and several ideas exist for its simplification. We now present an idea for simplifying this nonlinear term.
To this end, based on the basis functions {φ_k(x)}_{k=1}^{N} and using the Gram-Schmidt algorithm, we must obtain an orthogonal system {ψ_k(x)}_{k=1}^{N}. Then, it is not difficult to see that

N(u_N(x)) ≈ Σ_{k=1}^{N} b_k ψ_k(x), b_k = <N(u_N), ψ_k> / <ψ_k, ψ_k>. (13)

Now, from (11) and using (13), we obtain

Σ_{k=0}^{N} a_k L(φ_k(x)) + Σ_{k=1}^{N} b_k ψ_k(x) = g(x). (15)

By applying projection operators Λ_0, ..., Λ_N defined on X to (15), we obtain a system of algebraic equations (16) for the unknown coefficients. If we suppose that the boundary conditions give us k equations, then from (16) we obtain N + k + 1 equations for the N + 1 unknown coefficients. So, to obtain the unknowns, we must eliminate k of the equations in favor of satisfying the boundary conditions, and the resulting system must be solved via a suitable iteration method such as Newton's method.
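The elimination step above can be sketched for a simple linear case (our toy example, not one of the paper's experiments): collocating u' − u = 0, u(0) = 1 on [0, 1] with a monomial basis, the boundary condition replaces one equation of the residual system, and the resulting square linear system yields an accurate approximation of the exact solution e^x:

```python
import numpy as np

N = 10                                 # polynomial degree; N+1 unknowns a_0..a_N
# N collocation points in [0, 1] (mapped Chebyshev-Lobatto points)
xs = 0.5 * (1 + np.cos(np.pi * np.arange(N) / (N - 1)))

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for j, xj in enumerate(xs):
    for k in range(N + 1):
        dk = k * xj**(k - 1) if k > 0 else 0.0
        A[j, k] = dk - xj**k           # residual of u' - u at x_j
A[N, 0] = 1.0                          # boundary condition: u(0) = a_0 = 1
b[N] = 1.0

a = np.linalg.solve(A, b)
u = lambda x: sum(ak * x**k for k, ak in enumerate(a))
print(abs(u(1.0) - np.e))              # small: u closely approximates e^x
```

Here N collocation equations plus one boundary equation give exactly N + 1 equations for the N + 1 coefficients, mirroring the elimination bookkeeping described above.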
The important point in Theorem 7 is the existence of basis functions {φ_k(x)}_{k=1}^{N} and {ψ_k(x)}_{k=1}^{N} which satisfy (20) and (21), respectively. We can answer this question through the concept of linear independence of operators. We carry this out for {φ_k(x)}_{k=1}^{N}; in a similar manner, one can obtain the necessary and sufficient conditions for the existence of {ψ_k(x)}_{k=1}^{N}. Now, we consider an arbitrary basis of the subspace X_N and construct from it new basis functions which satisfy (20) and (21).
(i) Linear Case. In the linear case, if we suppose that the projection operators are linear, then, from the linearity of the operator L, we have

Λ_i(L(I_N u(x))) = Σ_{k=0}^{N} u(x_k) Λ_i(L(φ_k(x))),

so, to obtain a matrix relation between the values Λ_i(L(I_N u(x))) and the grid values u(x_k), it is sufficient to calculate the values Λ_i(L(φ_k)) as follows:

D = [d_{i,k}], d_{i,k} = Λ_i(L(φ_k)), i, k = 0, ..., N.
The matrix above is a square matrix of dimension (N + 1) × (N + 1) and has wide application in the pseudospectral method. In the particular case when the linear operator is the derivative, this matrix is called the differentiation matrix and is very ill-conditioned.
(ii) Nonlinear Case. The nonlinear case is rather cumbersome, and different ideas can be used. The first idea is to approximate the nonlinear operator N by some linear operator F and then use the method of part (i). But this approach is very complicated; approximating a nonlinear operator by linear operators is in general difficult and has been done only for particular operators [13]. The second idea is based on approximating the nonlinear operator via Taylor series [14]. Another approach to simplifying the nonlinear operator is based on orthogonal expansions (general Fourier series) of N(I_N u(x)). For this goal, we must obtain an orthogonal system {ψ_k(x)}_{k=1}^{N} from the basis functions {φ_k(x)}_{k=1}^{N} using the Gram-Schmidt algorithm. Then, it is not difficult to see that

N(I_N u(x)) ≈ Σ_{k=1}^{N} c_k ψ_k(x),

where

c_k = <N(I_N u), ψ_k> / <ψ_k, ψ_k>.

We must note that the c_k coefficients are functions of the grid values u(x_k). Now, as a case study, let X be the space of polynomials with real coefficients, which is a Hilbert space with respect to the L2 norm, and let X_N be a subspace of X of dimension N.
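A minimal sketch of the Gram-Schmidt step (our illustration; the function names are ours) over a polynomial space with the L2 inner product on [−1, 1], which reproduces the first few (unnormalized) Legendre polynomials:

```python
import numpy as np
from numpy.polynomial import Polynomial

def inner(p, q, a=-1.0, b=1.0):
    """L2 inner product <p, q> = integral of p(x) q(x) over [a, b]."""
    r = (p * q).integ()
    return r(b) - r(a)

def gram_schmidt(basis, a=-1.0, b=1.0):
    """Classical Gram-Schmidt orthogonalization of a list of polynomials."""
    ortho = []
    for p in basis:
        for q in ortho:
            p = p - q * (inner(p, q, a, b) / inner(q, q, a, b))
        ortho.append(p)
    return ortho

monomials = [Polynomial.basis(k) for k in range(4)]  # 1, x, x^2, x^3
psi = gram_schmidt(monomials)
# psi[2] is x^2 - 1/3, proportional to the Legendre polynomial P_2
```

The resulting system {ψ_k} is orthogonal, so the expansion coefficients c_k above reduce to the quotients of inner products computed by `inner`.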
Remark 10. The best choice of points for polynomial interpolation is the roots of orthogonal polynomials, in particular Jacobi polynomials, due to their small Lebesgue constants [6].
The symbol L2_{ω^(α,β)}(Λ) denotes the space of measurable functions whose square is Lebesgue integrable over Λ with respect to the weight function ω^(α,β), with the corresponding weighted inner product and norm. Another useful space is the associated weighted Sobolev space, with its seminorm and norms. An applicable norm that appears in the error bounds for the spectral method is the weighted Sobolev norm. We now present a useful theorem (Theorem 11), in which the interpolation points are the roots of Jacobi polynomials.
In view of Theorem 11, we can obtain an error bound for any class of polynomial interpolation, such as Hermite interpolation, Birkhoff interpolation, and other classes. This means that if we have an arbitrary polynomial interpolation I^1_N(h), then we can transform the interpolation I^1_N(h) to the Lagrange interpolation I_N(h) and then use Theorem 11 to obtain an error bound. Now, by using Theorem 11, in the next theorem we obtain an upper bound for the error of our approximate solution of (8), where I^1_N(u) is an arbitrary polynomial interpolation satisfying the interpolation conditions.

The Test Experiments
Experiment 1. Let us consider the nonlinear differential equation problem of [19]. The linear and nonlinear parts of this ordinary differential equation are identified accordingly, and {u(x_k), k = 0, ..., N − 1} are the unknown coefficients. In Table 1, we see the comparison between the exact and approximate solutions obtained by the pseudospectral method for α = 0, β = 0, and N = 5.
Experiment 2. Consider the second-order difference equation problem of [20], whose exact solution is given in closed form in [20]. This equation has no nonlinear operator part. A comparison between the exact and approximate solutions is shown in Table 2 and Figure 1.
Experiment 3. Consider the Fredholm integrodifferential equation of [21], under a given initial condition and with the exact solution given in [21]. In this case, the linear part consists of the derivative and integral operators. Now, by using the pseudospectral method for α = −0.5, β = 0.7, and N = 5, we obtain the results shown in Table 3.
which is a linear operator. Using the presented method for every α, β, and N = 3, we obtain the exact solution.

Conclusion
In this paper, we have developed the expansion approach for solving operator equations of the form Lu + Nu = g(x), x ∈ Γ, with respect to the boundary condition Bu = λ. The principal contribution is the development of the expansion approach for solving the general operator equation, whose particular cases include integral equations (Experiment 3), ordinary differential equations (Experiment 1), difference equations (Experiment 2), partial differential equations (Experiment 4), and fractional differential equations (Experiment 5). An error analysis has also been presented in the L2(Γ) norm. Using the roots of the more general Jacobi polynomials (with free parameters α, β), rather than classical orthogonal polynomials such as Chebyshev and Legendre polynomials, as collocation points in the pseudospectral method is an important feature of our approach (Experiments 2 and 3).