A HIGHER-ORDER METHOD FOR NONLINEAR SINGULAR TWO-POINT BOUNDARY VALUE PROBLEMS

We present a finite difference method for a general class of nonlinear singular two-point boundary value problems. The order of convergence of the method for this general class is higher than that of previously reported methods. The method yields fourth-order convergence for the special case p(x) = w(x) = x^α, α ≥ 1.

Singular boundary value problems occur in many applications such as transport processes, thermal explosions, and electrohydrodynamics (references are given in [4]). Standard numerical methods exhibit loss of accuracy, or even lack of convergence, when applied to singular problems (for more details, see [5, 9]). Many numerical methods have been proposed for solving special cases of the nonlinear problem (1.1). In particular, for the case w(x) = p(x) = x^α, 0 < α < 1, Chawla and Katti [2] constructed a second-order three-point finite difference method. In [3], the method is extended to α ≥ 1. Later, Chawla et al. [4] constructed a fourth-order method for α ≥ 1.
For the linear problem, Abu-Zaid and El-Gebeily [1] generalized the method in [3] to w(x) = p(x). Recently, El-Gebeily and Abu-Zaid [7] relaxed this requirement and several other assumptions; however, the order of convergence of their scheme is at most 2.
We construct a higher-order method for the nonlinear problem (1.1). Unlike the previous treatments, this method is designed to work for general p and w, which are not even required to be smooth. The higher order of convergence is obtained by approximating the function g by a quadratic interpolating polynomial. This method is fourth order for the case p(x) = w(x) = x^α, α ≥ 1, and at least third order for the class of problems considered in [1]. Quadrature methods can also be used to compute the integrals associated with this method, so knowledge of the exact integrals is not necessary (except for those integrals that involve singularities).
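The role of the quadratic interpolant can be illustrated in isolation: a quadratic Lagrange interpolant through three nearby nodes reproduces a smooth function with local error O(h^3) per subinterval. The nodes and the stand-in function g below are illustrative, not the paper's actual construction.

```python
# Sketch: approximate a function g on a subinterval by its quadratic
# Lagrange interpolant through three mesh points (illustrative nodes).
import math

def lagrange_quadratic(x0, x1, x2, g0, g1, g2):
    """Return p(x), the quadratic interpolating (x0,g0), (x1,g1), (x2,g2)."""
    def p(x):
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return g0 * l0 + g1 * l1 + g2 * l2
    return p

g = math.exp                  # stand-in for the nonlinearity g
p = lagrange_quadratic(0.0, 0.1, 0.2, g(0.0), g(0.1), g(0.2))
err = abs(p(0.05) - g(0.05))  # interpolation error, O(h^3) per interval
```

The interpolant matches g exactly at the three nodes, and the off-node error scales with the cube of the mesh width, which is the mechanism behind the gain in convergence order.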
We start by constructing the finite difference method. Then we analyze the error and find the rate of convergence. Finally, we present some numerical examples.

Exact discretization of the problem.
In this section, we present the exact discretization of (1.1). We first introduce special sets of basis functions that we use for the discretization. Then we obtain a system of equations for the exact solution at the mesh points. Our working space is the space of continuous functions on the interval [0, 1]. We associate two sets of basis functions with the nodes x_1, x_2, ..., x_{N−1} of this partition: (1) the set U_1, U_2, ..., U_{N−1}, and (2) a companion set that includes U_0^+; both consist of interpolating polynomials defined on each subinterval and vanish outside I_k. We assume that

⟨U_i, wU_k⟩ ≥ 0,  i = k − 1, k, k + 1,  k = 0, ..., N − 1.  (2.4)

This assumption is satisfied by many standard problems. For example, one can show, by direct calculation, that this is the case when w(x) = p(x) = x^α. Let ᏼ_k be the projection determined by a point ξ_k ∈ I_k. By multiplying both sides of (1.1) by U_k, k = 1, ..., N − 1, and integrating over the interval [0, 1], we get system (2.7), where ⟨u, v⟩ = ∫_0^1 uv dx. Note that U_k is absolutely continuous on [0, 1] for k = 1, ..., N − 1. Therefore, if a function y ∈ C[0, 1] is such that (py′)′ is integrable, then ⟨−(py′)′, U_k⟩ = ⟨py′, U_k′⟩. If y is a solution of the boundary value problem (1.1), then our assumptions on the functions p, g, w imply that (py′)′ is integrable. Hence, integration by parts is justified in our case.
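Assumption (2.4) can be checked numerically. The paper's basis functions are built from interpolating polynomials whose exact formulas appeared in the displayed equations; the sketch below uses ordinary piecewise-linear hat functions on a uniform mesh as a stand-in, with w(x) = x^α and a midpoint rule for the inner products.

```python
# Sketch: numerically verify <U_i, w U_k> >= 0 for w(x) = x**alpha, using
# piecewise-linear hat functions on a uniform mesh as an illustrative basis.
def hat(k, nodes):
    """Hat function centered at nodes[k], zero outside (nodes[k-1], nodes[k+1])."""
    def U(x):
        if nodes[k - 1] <= x <= nodes[k]:
            return (x - nodes[k - 1]) / (nodes[k] - nodes[k - 1])
        if nodes[k] <= x <= nodes[k + 1]:
            return (nodes[k + 1] - x) / (nodes[k + 1] - nodes[k])
        return 0.0
    return U

def inner(f1, f2, w, a, b, n=2000):
    """Midpoint-rule approximation of the integral of f1*f2*w over [a, b]."""
    h = (b - a) / n
    return h * sum(f1(a + (j + 0.5) * h) * f2(a + (j + 0.5) * h)
                   * w(a + (j + 0.5) * h) for j in range(n))

alpha = 1.5
w = lambda x: x ** alpha
N = 8
nodes = [i / N for i in range(N + 1)]
Uk = [hat(k, nodes) for k in range(1, N)]   # interior basis functions
# All neighboring inner products <U_i, w U_k> should be nonnegative.
vals = [inner(Uk[i], Uk[k], w, 0.0, 1.0)
        for k in range(len(Uk)) for i in range(max(0, k - 1), min(len(Uk), k + 2))]
```

For nonnegative basis functions and nonnegative weights w, every such inner product is automatically nonnegative, which is why the assumption holds for many standard problems.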
It can easily be checked that the integrals on the left-hand side of system (2.7) take a more explicit form for k = 2, ..., N − 1, where y_i = y(x_i). Also, by introducing the projections ᏼ_k into system (2.7), we obtain a system in terms of the values g_i = g(x_i, y_i). Note that g_N = g(1, 0), while g_0 = g(0, y_0) involves the unknown value of the solution at x = 0. We will deal with this difficulty later. It is known (see [8]), however, that y_0 exists and is finite under our assumptions.
In matrix form, system (2.7) can be written as (2.12), where the coefficient matrix T and the remaining terms are given by (2.13) and (2.14). For x ∈ [0, x_1], following the same derivation as in [7], we can show the identity (2.15). In particular, at the singular point x = 0, identity (2.15) reduces to (2.16), which can also be written as (2.17), with the notation introduced in (2.18). By expanding g about (x, y_1), we get, for x ∈ (0, x_1), a Taylor representation (2.19) for some ξ ∈ (x, x_1). Now, by using (2.17) and (2.19), we get the identity (2.20).

Numerical method and error analysis.
The setup carried out in Section 2 indicates that we intend to obtain a numerical method by truncating the remainder term R in (2.12). However, one difficulty remains: how to obtain the value of g_0.
To overcome this difficulty, we notice that the basis function U_1(x) is identically one on the interval [0, x_1]. This means that we may use an approximate value ȳ_1 of y_1 as an approximate value for y_0, and thus approximate g_0 = g(0, y_0) by ḡ_0 = g(0, ȳ_1). That this approximation does not affect the overall order of the method remains to be shown.
Using this approximation of g_0 and truncating the remainder term R in (2.12), we obtain a numerical method, (3.1), that determines an approximation Ȳ of Y. The approximate value ȳ_0 of y_0 is then calculated using (3.2). For the error analysis, let E = Y − Ȳ = [e_1, e_2, ..., e_{N−1}]^t. Then, from (2.12) and (3.1), we get the error equation (3.3). Using the integral form of the mean value theorem, we write the nonlinear difference as DE with a diagonal matrix D. For the term B(Y), we write an error representation as follows: let J = [1, 0, ..., 0]^t; then the representation holds with ∆_1 = y_0 − y_1 and F = diag(J) l_0^{(1)} d_0. Hence, the error equation (3.3) can be rewritten as (3.8). Note that the entries of D, as well as d_0, are nonpositive since g_y ≤ 0.
The matrix T has the following properties: (1) T is tridiagonal and diagonally dominant; (2) the diagonal elements are positive; (3) t_{k,k+1} t_{k+1,k} > 0 for k = 1, ..., N − 2. It follows from these properties that T is irreducible, and thus T is irreducibly diagonally dominant (see [10, pages 47–55]). Moreover, since the off-diagonal elements are nonpositive, T is an M-matrix.
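These structural properties can be verified mechanically. The sketch below checks them for an illustrative tridiagonal matrix of the same type (the standard second-difference matrix, used here as a stand-in for the paper's T, whose entries are given by (2.13)).

```python
# Sketch: check the structural properties claimed for T (tridiagonal,
# diagonally dominant, positive diagonal, nonpositive off-diagonal,
# irreducible) on an illustrative matrix of the same shape.
def is_tridiagonal_m_matrix_candidate(A):
    n = len(A)
    for i in range(n):
        for j in range(n):
            if abs(i - j) > 1 and A[i][j] != 0.0:
                return False                      # not tridiagonal
            if i != j and A[i][j] > 0.0:
                return False                      # off-diagonal must be <= 0
        if A[i][i] <= 0.0:
            return False                          # diagonal must be positive
        if A[i][i] < sum(abs(A[i][j]) for j in range(n) if j != i):
            return False                          # not diagonally dominant
    # irreducibility for a tridiagonal matrix: all sub/super-diagonals nonzero
    return all(A[k][k + 1] != 0.0 and A[k + 1][k] != 0.0 for k in range(n - 1))

# Illustrative T: the standard second-difference matrix scaled by 1/h**2.
n, h = 5, 0.2
T = [[0.0] * n for _ in range(n)]
for i in range(n):
    T[i][i] = 2.0 / h ** 2
    if i > 0:
        T[i][i - 1] = -1.0 / h ** 2
    if i < n - 1:
        T[i][i + 1] = -1.0 / h ** 2
```

Together these checks are exactly the hypotheses used above: irreducible diagonal dominance plus the sign pattern makes the matrix an M-matrix, so its inverse is entrywise nonnegative.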
Since the first row of T^{−1} contains the largest element in each column, it follows from Lemma A.2 that the bound (3.13) holds for some constant c > 0. For the other term in the error equation (3.8), we obtain an estimate from (2.20), and, from (2.15) and Lemma A.1, we obtain (3.17). The last equality holds because the first term in the preceding expression is dominated by the second term.
It follows from (3.13) and (3.17) that the order of the error in scheme (3.1) is as given in Theorem 3.1 below. As for the error in computing y_0, we obtain the corresponding estimate from (2.20), (3.2), and (3.15). Thus we have proved the following theorem, in which m denotes the number of derivatives taken in the Taylor series expansion of g.

Theorem 3.1. Under the assumptions (A1), (A2), (A3), (A4), (A5), and (2.4), the finite difference scheme (3.1) and (3.2) converges uniformly to the solution of (1.1) with order of convergence at least
Remark 3.3. In the approximation (3.2), the exact integral ⟨g(·, ȳ_1), wU_0^+⟩ is computed. If this integral cannot be found in closed form, or if it is too complicated, then it has to be computed numerically. This can be done by replacing ⟨g(·, ȳ_1), wU_0^+⟩ by a quadrature approximation. This results in an extra error term of order h^2 ⟨1, wU_0^+⟩.
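Replacing the exact integral by a quadrature, as the remark describes, can be sketched with composite Simpson's rule. The integrand below, with cos as a stand-in for g(·, ȳ_1), w(x) = x^{1.5}, and U_0^+ taken as identically 1 on [0, x_1], is purely illustrative.

```python
# Sketch: replacing an exact weighted integral by composite Simpson quadrature.
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * j - 1) * h) for j in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * j * h) for j in range(1, n // 2))
    return s * h / 3

# Illustrative integrand g(x) * w(x) * U0plus(x) on [0, x1], with
# g = cos, w(x) = x**1.5, and U0plus taken as identically 1 there.
x1 = 0.1
approx = simpson(lambda x: math.cos(x) * x ** 1.5, 0.0, x1, n=200)
```

Note that w(x) = x^{1.5} is merely non-smooth, not singular, at 0, so standard quadrature still applies; genuinely singular integrals are the exception noted in the introduction, where exact evaluation remains necessary.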
Remark 3.4. For the special case p(x) = w(x) = x^α, α > 1, or if the differential equation is regular, the summation term in (3.13) is of order h^4. Also, the second term of (3.20) is of order h^4. So the method is fourth-order accurate, as should be expected, since our method and the method given in [4] are identical in this case.

Remark 3.5. In the more general situation p(x) = O(x^α) and w(x) = O(x^β) at x = 0, assumption (A5) is satisfied if and only if β − α + 2 > 0 and β > −1. In this case, the second term in Theorem 3.1 has order min{O(h^{2+2β}), O(h^2)}.
We end this section with a discussion of the question of existence and uniqueness of solutions of system (3.1).The result is best stated as a lemma.

Lemma 3.6. System (3.1) has a unique solution.
Proof. The proof uses the theory of monotone operators (see [6]). We begin by showing that the matrix T is positive definite. Let V be the linear vector space generated by the basis functions {U_i}, i = 1, ..., N − 1. Any u ∈ V is absolutely continuous and satisfies the boundary condition u(1) = 0. Now, writing u = Σ_i y_i U_i and letting Y = [y_1, y_2, ..., y_{N−1}]^t, we can easily check that ⟨TY, Y⟩ = ∫_0^1 p (u′)^2 dx ≥ 0, since p ≥ 0, with equality only when u = 0. This means that the minimum eigenvalue λ_m of T is positive, and thus T is positive definite.
Next, we show that the operator T − G_1 is strongly monotone; hence system (3.1) has a solution, and the uniqueness of this solution follows from the strong monotonicity of the operator T − G_1. Since we do not have a contraction mapping principle here, Picard iteration applied to (3.1) may not converge. We therefore find the solution of (3.1) by Newton's method, setting up the iteration scheme (3.28). Standard theory for Newton's method and our assumptions (A1), (A2), (A3), (A4), and (A5) guarantee that the iterations (3.28) converge if the initial guess is sufficiently close to the true solution of (3.1).
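A Newton solve for a nonlinear tridiagonal system of this general shape can be sketched as follows. The nonlinearity g(y) = −y^3 (so g_y ≤ 0, matching the sign condition used above) and the uniform-mesh second-difference matrix are illustrative stand-ins for the paper's system (3.1); `thomas` is a standard tridiagonal solver.

```python
# Sketch: Newton's method for a nonlinear tridiagonal system
# T y - G(y) = b, discretizing -y'' + y**3 = 1 with y(0) = y(1) = 0
# (illustrative; G(y) = -y**3 gives g_y = -3y**2 <= 0).
def thomas(sub, main, sup, d):
    """Solve a tridiagonal system via the Thomas algorithm."""
    n = len(main)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / main[0], d[0] / main[0]
    for i in range(1, n):
        m = main[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def newton(n, h, rhs, tol=1e-12, maxit=50):
    y = [0.0] * n                                 # initial guess
    for _ in range(maxit):
        # residual F(y) and tridiagonal Jacobian of T y + y**3 - rhs
        F = [(2 * y[i] - (y[i - 1] if i > 0 else 0.0)
              - (y[i + 1] if i < n - 1 else 0.0)) / h ** 2
             + y[i] ** 3 - rhs[i] for i in range(n)]
        sub = [-1.0 / h ** 2] * n
        main = [2.0 / h ** 2 + 3.0 * y[i] ** 2 for i in range(n)]
        sup = [-1.0 / h ** 2] * n
        dy = thomas(sub, main, sup, F)            # Newton correction
        y = [yi - di for yi, di in zip(y, dy)]
        if max(abs(d) for d in dy) < tol:
            break
    return y

n = 9
h = 1.0 / (n + 1)
y = newton(n, h, rhs=[1.0] * n)
```

Each Newton step costs only O(N) thanks to the tridiagonal structure, and, as the text notes, convergence is local: the zero initial guess works here because the problem is mildly nonlinear.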

Examples.
In this section, we provide two numerical examples. The first example shows that, with our scheme, we obtain higher-order convergence than the scheme in [7] for a linear problem; in the second example, we solve a nonlinear problem. In the first example, the exact solution is y(x) = cos(πx/2), and the order of convergence according to the scheme in [7] is O(h ln h). Numerical results using the new scheme are shown in Table 4.1; the order of convergence of the relative error is about 3.6. In the second example, the exact solution is again y(x) = cos(πx/2); the results are shown in Table 4.2, and the order of convergence of the relative error is about 2.7. Note that, in both examples, the order predicted by Theorem 3.1 is at least h^2(ln h)^2; our numerical method achieved higher accuracy for both examples. For the special case p(x) = w(x) = x^α, our method is identical to the method constructed in [4], and thus the order is h^4. Examples for this case are presented in [4].
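Observed orders such as 3.6 and 2.7 are typically estimated from errors on successively halved meshes via p ≈ log2(e_h / e_{h/2}). A minimal sketch, using synthetic fourth-order error data rather than the paper's tabulated values:

```python
# Sketch: estimating the observed order of convergence from relative errors
# on successively halved meshes, p ~ log2(e_h / e_{h/2}).
import math

def observed_orders(errors):
    """errors[i] is the error with mesh size h / 2**i."""
    return [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]

# Illustrative errors for an exactly fourth-order method: e(h) = C * h**4.
errors = [1.0e-3 * (0.5 ** i) ** 4 for i in range(4)]
orders = observed_orders(errors)
```

Applying `observed_orders` to the relative errors in Tables 4.1 and 4.2 is how the quoted rates of about 3.6 and 2.7 would be obtained.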

Some auxiliary lemmas.
For the first integral of (A.5), the required bound follows directly; similarly for the third integral of (A.5). For the second integral of (A.5), the bound follows from the definition of U_x^+(t). The results then follow from (A.6), (A.7), and (A.8).


Remark 3.2. If the error term ⟨1, wU_0^+⟩^2 is of lower order than the error term |y_0 − ȳ_1|, then more accuracy may be achieved by taking more terms in the expansion of g about y_1. In principle, we then obtain an error term of order ⟨1, wU_0^+⟩^{m+1}.