As an approach to approximate solutions of Fredholm integral equations of the second kind, adaptive hp-refinement is applied together with the Galerkin method and with the Sloan iteration method, the latter applied to the Galerkin solution. Linear hat functions and modified integrated Legendre polynomials are used as basis functions for the approximations. The most appropriate refinement is determined by an optimization problem given by Demkowicz (2007). During the calculations, L2-projections of the approximate solutions onto the four candidate meshes that may arise between the coarse mesh and the fine mesh are computed. Depending on the resulting error values, these procedures can be repeated consecutively, or different meshes can be used, in order to decrease the errors.
1. Introduction and Preliminaries
The aim of this study is to find approximate solutions of Fredholm integral equations of the second kind by applying adaptive refinement together with the Galerkin method and the Sloan iteration method. Our reason for applying adaptive refinement is to search for meshes on which these methods might yield better approximations to the solutions of the problems. In Section 2 we explain how to obtain a finer mesh from a given mesh, called the coarse mesh, and how to construct the basis functions used for the approximation. In Section 3 we solve the problem given by (10) with the Galerkin method and then, in order to determine an optimal mesh, solve the optimization problem given by (18) for adaptive refinement. In Section 4, as the subsequent step, we iterate the Galerkin solution by the Sloan iteration method and solve the same optimization problem for this case in order to perform adaptive refinement. Finally, in Section 5 we present some example problems. In the remainder of this section we give some basic background on the two subjects this study rests on: integral equations and adaptive refinement.
1.1. On Integral Equations
As the theory of integral equations has significant importance in mathematics, it is also closely related to various fields of science. Many problems, such as ordinary and partial differential equations and problems of mathematical physics, can be recast as integral equations. Hochstadt [1] mentioned that many existence and uniqueness results can be derived from the corresponding results for integral equations and that there is almost no area of applied mathematics and mathematical physics where integral equations do not play a role. Many studies list areas in which integral equations are used. Rahbar and Hashemizadeh [2] pointed out the high applicability of integral equations in different areas of applied mathematics, physics, and engineering, and they particularly mentioned some areas in which these equations are widely used, such as mechanics, geophysics, electricity and magnetism, kinetic theory of gases, hereditary phenomena in biology, quantum mechanics, mathematical economics, and queueing theory. Further such areas include automatic control theory, network theory, and the dynamics of nuclear reactors [3]; acoustics, optics and laser theory, potential theory, radiative transfer theory, cardiology, fluid mechanics, and statics [4]; and continuum mechanics, hereditary phenomena in physics and biology, renewal theory, radiation, optimization, optimal control systems, communication theory, population genetics, medicine, mathematical problems of radiative equilibrium, the particle transport problems of astrophysics and reactor theory, steady state heat conduction, and fracture mechanics [5].
As Pachpatte [3] expressed, the study of integral equations can be traced back to N. H. Abel, who arrived at an integral equation in 1812 starting from a problem in mechanics; in 1895 V. Volterra emphasized the significance of the theory of integral equations.
Lonseth [6] stated that Fredholm published his distinguished paper in Acta Mathematica [7] in 1903, in which he gave the first detailed account of the existence and multiplicity of solutions of the following two equations, where the kernel K(s,t) and y(s) are known functions and the values of λ (proper values) are to be determined so that a continuous solution x(s)≢0 exists:
(1) \(x(s)-\int_{0}^{1}K(s,t)\,x(t)\,dt=y(s),\quad 0\le s\le 1,\)
\(x(s)-\lambda\int_{0}^{1}K(s,t)\,x(t)\,dt=y(s),\quad 0\le s\le 1.\)
In his book, Pachpatte [3] listed a number of monographs that he regarded as excellent accounts of integral equations: Burton [8], Corduneanu [9–11], Gripenberg et al. [12], Krasnoselskii [13], Miller [14], and Tricomi [15].
1.2. On Adaptive Refinement
In 1988 Babuška [16] published his survey on advances in the p and h-p versions of the finite element method, in which he distinguished three versions of the finite element method (FEM): the h-version, the p-version, and the h-p version. The main idea of the h-version is to refine the mesh size while the degrees of the polynomials used for approximation are kept fixed (usually p=1,2); the p-version does the opposite: the mesh size is kept fixed, and the degrees of the polynomials used for approximation are increased. In the h-p version both changes are made simultaneously: the mesh size is refined and the polynomial degrees are increased. In [16], while the h-version of FEM is introduced as the standard one, the other versions are said to have been developed later, and the first theoretical papers about the p-version and the h-p version, which appeared in 1981, are given in [17] and [18], respectively.
In his book on computation with hp-adaptive finite elements [19], Demkowicz studied one- and two-dimensional elliptic and Maxwell problems, and he described the two major components of the one-dimensional version of his hp-algorithm: the fine grid solution and the optimal mesh selection. For the first component, a given (coarse) mesh is refined in both h and p to obtain a corresponding fine mesh, and the problem is solved on this fine mesh to find the fine mesh solution. For the second component, he used this fine mesh solution to determine the optimal refinement of the coarse mesh by minimizing the projection-based interpolation error through the following discrete optimization problem, where \(u\), \(\Pi_{hp}u\), \(\Pi^{opt}_{hp}u\), \(N^{opt}_{hp}\), and \(N_{hp}\) denote, respectively, the solution on the fine mesh, the interpolant of the fine grid solution on the original mesh, the interpolant of the fine grid solution on the new optimal mesh to be determined, the corresponding number of degrees of freedom on the new optimal mesh, and the number of degrees of freedom on the original mesh:
(2) \(\dfrac{\|u-\Pi_{hp}u\|^2_{H^1(0,l)}-\|u-\Pi^{opt}_{hp}u\|^2_{H^1(0,l)}}{N^{opt}_{hp}-N_{hp}}\longrightarrow\min.\)
Here the aim of the optimization problem is said to be maximizing the rate of decrease of the interpolation error. Asadzadeh and Eriksson [20] gave several references [21–25] on solving integral equations with FEM in their paper, in which they chose to work on the single layer potential problem for Laplace's equation with Neumann boundary conditions in order to be concrete. In their paper, the studies of that period on solving integral equations with adaptive FEM are given in [25–28]. Adaptive FEMs are usually used to solve partial differential equations, but in the literature these methods are also used for different types of problems in various branches of science, such as hydrodynamics [29], optimal design [30], elliptic stochastic equations [31], parabolic problems [32], parabolic systems [33], elliptic problems [34], elliptic partial differential equations [35], elliptic boundary value problems [36, 37], electrostatics [38], electromagnetic problems [39], biological flows [40], and the Laplace eigenvalue problem [41].
2. Refining a Finer Mesh from a Coarse Mesh and Construction of Basis Functions
2.1. Refining a Finer Mesh from a Coarse Mesh
Let a=t1<t2<⋯<tn=b be the node points of a given (finite element) mesh which is accepted as the coarse mesh and denote the list of these node points as follows:
(3) \(L_c=[t_1\;\;t_2\;\cdots\;t_n].\)
For each \(i\in\{1,2,\dots,n-1\}\) the interval \(I_i=[t_i,t_{i+1}]\) is called an element (or finite element) and has two parameters: the element length \(h_i=t_{i+1}-t_i\) and the element local polynomial order of approximation \(p_i\) (\(p_i\ge 2\)). By the element local polynomial order of approximation we mean the following: if an element has order of approximation p, the nonlinear basis functions on that element are the polynomials of degrees 2 up to p. Let \(D_c\) be the list of the n−1 element local polynomial orders of approximation of the coarse mesh elements:
(4) \(D_c=[p_1\;\;p_2\;\cdots\;p_{n-1}].\)
Firstly dividing each element of this mesh at its midpoint (h-refinement) and then increasing the element local polynomial order of approximation by 1 for each new element (p-refinement), we obtain a finer mesh whose lists of node points and element local polynomial orders of approximation, of lengths 2n−1 and 2n−2, respectively, are
(5) \(L_f=\left[t_1\;\;\tfrac{t_1+t_2}{2}\;\;t_2\;\cdots\;t_{n-1}\;\;\tfrac{t_{n-1}+t_n}{2}\;\;t_n\right],\)
(6) \(D_f=[p_1+1\;\;p_1+1\;\;p_2+1\;\;p_2+1\;\cdots\;p_{n-1}+1\;\;p_{n-1}+1].\)
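As a sketch, the h-then-p refinement that produces lists (5) and (6) can be written in a few lines of Python (the helper name `refine` is illustrative; the paper's own computations are in Matlab):

```python
def refine(nodes, orders):
    """h-refinement (bisect each element) followed by p-refinement (order + 1)."""
    fine_nodes = []
    for a, b in zip(nodes, nodes[1:]):
        fine_nodes += [a, (a + b) / 2.0]   # keep the left node, insert the midpoint
    fine_nodes.append(nodes[-1])           # right endpoint of the mesh
    fine_orders = [p + 1 for p in orders for _ in range(2)]  # each element split in two
    return fine_nodes, fine_orders

# Coarse mesh with node list Lc and order list Dc as in (3)-(4), here n = 4:
Lc, Dc = [0.0, 1.0, 2.0, 3.0], [2, 3, 2]
Lf, Df = refine(Lc, Dc)
print(Lf)  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]  -> length 2n-1 = 7
print(Df)  # [3, 3, 4, 4, 3, 3]                   -> length 2n-2 = 6
```

The lengths 2n−1 and 2n−2 of the two output lists match the statement above.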
2.2. Construction of Basis Functions
In this study, two kinds of basis functions are used on each element of a mesh: hat functions and bubble functions. The names can be explained as follows: the linear basis functions are called hat functions because their graphs look like a hat, and the nonlinear ones are called bubble functions because, like bubbles, they vanish at the node points. Taking the coarse mesh given by lists (3) and (4) as a sample, we explain how to construct the basis functions on it; basis functions of any mesh can be built similarly. The n hat functions belonging to the coarse mesh are as follows:
(7) \(\varphi_1(t)=\begin{cases}\dfrac{t_2-t}{t_2-t_1}, & t\in[t_1,t_2],\\ 0, & \text{otherwise},\end{cases}\)
\(\varphi_i(t)=\begin{cases}\dfrac{t-t_{i-1}}{t_i-t_{i-1}}, & t\in[t_{i-1},t_i],\\ \dfrac{t_{i+1}-t}{t_{i+1}-t_i}, & t\in[t_i,t_{i+1}],\\ 0, & \text{otherwise},\end{cases}\quad i=2,\dots,n-1,\)
\(\varphi_n(t)=\begin{cases}\dfrac{t-t_{n-1}}{t_n-t_{n-1}}, & t\in[t_{n-1},t_n],\\ 0, & \text{otherwise}.\end{cases}\)
For the construction of the bubble functions on the coarse mesh, the integrated Legendre polynomials are composed with a function ψ mapping any element \([t_i,t_{i+1}]\) onto \([-1,1]\), the domain of the integrated Legendre polynomials:
(8) \(\psi:[t_i,t_{i+1}]\to[-1,1],\qquad \psi(t)=\dfrac{2t-(t_{i+1}+t_i)}{t_{i+1}-t_i}.\)
Let \(L_p\) denote the integrated Legendre polynomial of degree p. The bubble function of degree p on the i-th element (\(i=1,2,\dots,n-1\)) used for the approximation is defined as
(9) \(\varphi_{i,p}(t)=\begin{cases}(L_p\circ\psi)(t), & t\in[t_i,t_{i+1}],\\ 0, & \text{otherwise},\end{cases}\quad i=1,\dots,n-1,\; p\ge 2.\)
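The bubble functions of (8)–(9) can be sketched as follows, using the standard identity \(L_p=(P_p-P_{p-2})/(2p-1)\) for integrated Legendre polynomials (function names are illustrative, and the exact normalization of the "modified" integrated Legendre polynomials used in the paper may differ):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_P(p, x):
    """Legendre polynomial P_p(x)."""
    return legval(x, [0.0] * p + [1.0])

def integrated_legendre(p, x):
    """Integrated Legendre polynomial L_p(x) = int_{-1}^{x} P_{p-1}(t) dt,
    via the identity L_p = (P_p - P_{p-2}) / (2p - 1), valid for p >= 2.
    It vanishes at x = -1 and x = +1, which is what makes it a bubble."""
    return (legendre_P(p, x) - legendre_P(p - 2, x)) / (2 * p - 1)

def bubble(p, ti, ti1, t):
    """Bubble function (9): L_p composed with the affine map psi of (8),
    supported on the element [ti, ti1]."""
    if not (ti <= t <= ti1):
        return 0.0
    psi = (2 * t - (ti1 + ti)) / (ti1 - ti)   # maps [ti, ti1] -> [-1, 1]
    return integrated_legendre(p, psi)

# The bubble vanishes at both element endpoints, as stated in Section 2.2:
print(bubble(3, 0.0, 1.0, 0.0), bubble(3, 0.0, 1.0, 1.0))  # both 0.0 up to rounding
```

For instance \(L_2(x)=(x^2-1)/2\), so the degree-2 bubble at the element midpoint takes the value −1/2.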
3. Galerkin Method Solution and Adaptive Refinement by Using Demkowicz’s Optimization
First we calculate the Galerkin approximate solution on the fine mesh \(L_f\) constructed in Section 2. Then, in order to decide the optimal mesh, we solve an optimization problem, given by Demkowicz [19], on each element of the fine mesh. For this we need the L2-projections of the fine mesh solution, on each element of \(L_f\), onto the corresponding element of the coarse mesh \(L_c\) and onto the four possible optimal mesh refinements of the coarse mesh element. These four possible optimal refinements are defined explicitly below.
3.1. Galerkin Method Solution on the Fine Mesh Lf
Consider the Fredholm integral equation of the second kind:
(10) \(\lambda x(t)-\int_a^b K(t,s)\,x(s)\,ds=f(t),\quad a\le t\le b,\ \lambda\ne 0,\)
and the fine mesh having lists (5) and (6). On this mesh the total number of basis functions is \(2n-1+2(p_1+p_2+\cdots+p_{n-1})\), which we denote by \(d_f\). Let
(11) \(u(t)=\sum_{j=1}^{d_f}c_j\,\phi_j(t),\quad t\in[a,b],\)
be the approximate solution we are looking for, where the \(\phi_j\) are the basis functions (hat functions and bubble functions) and the \(c_j\) are the coefficients to be determined for \(j=1,2,\dots,d_f\). Substituting (11) into (10), the residual function of the Galerkin method [42] becomes
(12) \(r(t)=\sum_{j=1}^{d_f}c_j\left(\lambda\phi_j(t)-\int_a^b K(t,s)\,\phi_j(s)\,ds\right)-f(t),\quad t\in[a,b].\)
The residual function is required to satisfy the following equalities:
(13) \(\langle r,\phi_i\rangle=0,\quad i=1,\dots,d_f.\)
Rearranging (13), for all \(i\in\{1,\dots,d_f\}\) we obtain
(14) \(\sum_{j=1}^{d_f}c_j\left(\lambda\int_a^b\phi_j(t)\,\phi_i(t)\,dt-\int_a^b\!\int_a^b K(t,s)\,\phi_j(s)\,\phi_i(t)\,ds\,dt\right)=\int_a^b f(t)\,\phi_i(t)\,dt,\)
which is a system of equations; it can be represented in matrix form using the matrices defined as follows:
(15) \(E=(E_{ij})_{i,j=1}^{d_f},\quad E_{ij}=\lambda\int_a^b\phi_j(t)\,\phi_i(t)\,dt,\)
\(K=(K_{ij})_{i,j=1}^{d_f},\quad K_{ij}=\int_a^b\!\int_a^b K(t,s)\,\phi_j(s)\,\phi_i(t)\,ds\,dt,\)
\(F=\left[\int_a^b f(t)\phi_1(t)\,dt\;\;\int_a^b f(t)\phi_2(t)\,dt\;\cdots\;\int_a^b f(t)\phi_{d_f}(t)\,dt\right]^T,\qquad C=[c_1\;\;c_2\;\cdots\;c_{d_f}]^T.\)
The system (14) can then be expressed as
(16) \((E-K)\cdot C=F.\)
Hence the coefficient matrix C is found as follows:
(17) \(C=(E-K)^{-1}\cdot F.\)
Substituting these coefficients into (11), the desired approximate solution of the Galerkin method is obtained.
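The assembly and solve of (15)–(17) can be illustrated in a deliberately simplified setting: a single element, the monomial basis {1, t, t²} in place of the hat/bubble basis, and an assumed degenerate kernel K(t,s)=ts on [0,1] with λ=1 and f chosen so that the exact solution is x(t)=t. This is a sketch of the linear algebra only, with Gauss–Legendre quadrature replacing exact integration; it is not the authors' Matlab code:

```python
import numpy as np

lam = 1.0
a, b = 0.0, 1.0
K = lambda t, s: t * s                 # assumed degenerate test kernel
f = lambda t: (2.0 / 3.0) * t          # chosen so the exact solution is x(t) = t
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2]

# Gauss-Legendre quadrature mapped from [-1, 1] to [a, b]
g, w = np.polynomial.legendre.leggauss(8)
tq = 0.5 * (b - a) * g + 0.5 * (b + a)
wq = 0.5 * (b - a) * w

d = len(basis)
E = np.zeros((d, d)); Kmat = np.zeros((d, d)); F = np.zeros(d)
for i in range(d):
    F[i] = np.sum(wq * f(tq) * basis[i](tq))                       # F_i of (15)
    for j in range(d):
        E[i, j] = lam * np.sum(wq * basis[j](tq) * basis[i](tq))   # E_ij of (15)
        # K_ij = double integral of K(t,s) phi_j(s) phi_i(t) over [a,b]^2
        Kmat[i, j] = np.sum(wq[:, None] * wq[None, :]
                            * K(tq[:, None], tq[None, :])
                            * basis[j](tq)[None, :] * basis[i](tq)[:, None])

C = np.linalg.solve(E - Kmat, F)        # system (16): (E - K) C = F
u = lambda t: sum(c * phi(t) for c, phi in zip(C, basis))
print(np.round(C, 10))                  # close to [0, 1, 0], i.e. u(t) = t
```

Because the exact solution lies in the trial space, the Galerkin solve recovers it up to roundoff; on the real hat/bubble mesh the same assembly runs over \(d_f\) basis functions.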
3.2. Adaptive Refinement by Using Demkowicz’s Optimization for Galerkin Method Solution on the Fine Mesh Lf
As explained in the abstract, for adaptive refinement we solve an optimization problem originally used by Demkowicz [19] with Sobolev spaces for solving one- and two-dimensional elliptic and Maxwell problems. In this study, instead of the Sobolev norm, the L2-norm is used, and the inner products are accordingly L2-inner products. Under this choice our optimization problem becomes
(18) \(\dfrac{\|u-\Pi_{hp}u\|^2-\|u-\Pi^{opt}_{hp}u\|^2}{N^{opt}_{hp}-N_{hp}}\longrightarrow\min.\)
To determine the optimal mesh, (18) is solved on each element of the fine mesh. For simplicity we solve the problem on an element of the form \(I_c=[a,b]\), as a representative of all coarse mesh elements \(I_i=[t_i,t_{i+1}]\) (\(i=1,2,\dots,n-1\)). In other words, we start with a mesh consisting of a single element, the interval itself. Let \(D_c=[p]\) be the list of element local polynomial order of approximation of the element \(I_c\). Refining this element in both h and p, we get a fine mesh with the following lists:
(19) \(L_f=\left[a\;\;\tfrac{a+b}{2}\;\;b\right],\qquad D_f=[p+1\;\;p+1].\)
There are also other possible refinements (just in h or p or in both but with different choice of refinements in p) which produce the following four possible optimal mesh choices:
(20) \(L_{opt}=[a\;\;b],\quad D_{opt}=[p+1],\) or
(21) \(L_{opt}=\left[a\;\;\tfrac{a+b}{2}\;\;b\right],\quad D_{opt}=[p\;\;p],\) or
(22) \(L_{opt}=\left[a\;\;\tfrac{a+b}{2}\;\;b\right],\quad D_{opt}=[p\;\;p+1],\) or
(23) \(L_{opt}=\left[a\;\;\tfrac{a+b}{2}\;\;b\right],\quad D_{opt}=[p+1\;\;p].\)
Consider the fine mesh solution u given by (11). First, the L2-projections of u onto the coarse mesh, \(\Pi_{hp}u\), and onto the optimal meshes, \(\Pi^{opt}_{hp}u\), are calculated. For these calculations we introduce some matrices and notation. Let \(d_c\) and \(d_{opt}\) be the total numbers of basis functions, and let \(\xi^c_j\) (\(j=1,2,\dots,d_c\)) and \(\xi^{opt}_j\) (\(j=1,2,\dots,d_{opt}\)) be the coefficients of the basis functions \(\phi^c_j\) and \(\phi^{opt}_j\) in the coarse and optimal mesh cases, respectively:
(24) \(\Pi_{hp}u=\sum_{j=1}^{d_c}\xi^c_j\,\phi^c_j,\)
(25) \(\Pi^{opt}_{hp}u=\sum_{j=1}^{d_{opt}}\xi^{opt}_j\,\phi^{opt}_j.\)
Let \(\xi^c=[\xi^c_1\;\xi^c_2\;\cdots\;\xi^c_{d_c}]^T\) and \(\xi^{opt}=[\xi^{opt}_1\;\xi^{opt}_2\;\cdots\;\xi^{opt}_{d_{opt}}]^T\) be the coefficient matrices corresponding to (24) and (25), respectively. We calculate the L2-projections of the Galerkin solution (11) as Larson and Bengzon explain in [43]. For calculating \(\Pi_{hp}u\) given by (24), we define the two matrices \(Z^c\) and \(M^c\) that we need:
(26) \(Z^c=(Z^c_{ij})\in\mathbb{R}^{d_c\times d_f},\quad Z^c_{ij}=\int_a^b\phi^c_i(s)\,\phi^f_j(s)\,ds,\)
\(M^c=(M^c_{ij})\in\mathbb{R}^{d_c\times d_c},\quad M^c_{ij}=\int_a^b\phi^c_i(s)\,\phi^c_j(s)\,ds.\)
We obtain the coefficient matrix ξc as follows:
(27) \(\xi^c=(M^c)^{-1}\cdot Z^c\cdot C.\)
Substituting these coefficients into (24), we obtain the L2-projection function \(\Pi_{hp}u\). For calculating \(\Pi^{opt}_{hp}u\) given by (25), we need two new matrices \(Z^{opt}\) and \(M^{opt}\) defined as
(28) \(Z^{opt}=(Z^{opt}_{ij})\in\mathbb{R}^{d_{opt}\times d_f},\quad Z^{opt}_{ij}=\int_a^b\phi^{opt}_i(s)\,\phi^f_j(s)\,ds,\)
\(M^{opt}=(M^{opt}_{ij})\in\mathbb{R}^{d_{opt}\times d_{opt}},\quad M^{opt}_{ij}=\int_a^b\phi^{opt}_i(s)\,\phi^{opt}_j(s)\,ds.\)
We obtain the coefficient matrix ξopt as follows:
(29) \(\xi^{opt}=(M^{opt})^{-1}\cdot Z^{opt}\cdot C.\)
Substituting these coefficients in (25) we obtain the L2-projection function Πhpoptu.
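The projection solves (27) and (29) are ordinary L2 normal-equation solves, mass matrix against load vector. A minimal sketch, assuming a three-node hat basis on [0,1] and projecting a given function u(t)=t² directly (rather than a fine-mesh expansion); all names are illustrative:

```python
import numpy as np

nodes = np.array([0.0, 0.5, 1.0])
u = lambda t: t**2                         # function to project (an illustration)

def hat(i, t):
    """Hat function phi_i of (7) on `nodes`, evaluated on an array t."""
    t = np.asarray(t, dtype=float)
    y = np.zeros_like(t)
    if i > 0:                              # rising part on [t_{i-1}, t_i]
        l, r = nodes[i - 1], nodes[i]
        m = (t >= l) & (t <= r)
        y[m] = (t[m] - l) / (r - l)
    if i < len(nodes) - 1:                 # falling part on [t_i, t_{i+1}]
        l, r = nodes[i], nodes[i + 1]
        m = (t >= l) & (t <= r)
        y[m] = (r - t[m]) / (r - l)
    return y

# Element-wise Gauss rule so that piecewise-polynomial integrands are exact
g, w = np.polynomial.legendre.leggauss(4)
tq, wq = [], []
for a, b in zip(nodes, nodes[1:]):
    tq += list(0.5 * (b - a) * g + 0.5 * (b + a))
    wq += list(0.5 * (b - a) * w)
tq, wq = np.array(tq), np.array(wq)

n = len(nodes)
M = np.array([[np.sum(wq * hat(i, tq) * hat(j, tq)) for j in range(n)]
              for i in range(n)])          # mass matrix, M_ij = int phi_i phi_j
bvec = np.array([np.sum(wq * u(tq) * hat(i, tq)) for i in range(n)])
xi = np.linalg.solve(M, bvec)              # analogue of the solve in (27)
print(np.round(xi, 6))                     # approx [-0.041667, 0.208333, 0.958333]
```

In (27) the load vector comes from the fine-mesh expansion, \(Z^cC\), instead of direct quadrature of u, but the solve is the same.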
Using (27) and (29), we express the quantity minimized in (18) in terms of the matrices introduced above:
(30) \(C^T\cdot\left((Z^{opt})^T\cdot\left((M^{opt})^{-1}\right)^T\cdot Z^{opt}-(Z^c)^T\cdot\left((M^c)^{-1}\right)^T\cdot Z^c\right)\cdot C\times\left(d_{opt}-d_c\right)^{-1}.\)
The value (30) is calculated for each of the four possible optimal mesh refinements of the coarse mesh element, and the case giving the minimum value replaces that element of the coarse mesh. Starting from the first element of the coarse mesh, we repeat this procedure for each coarse mesh element in turn. Joining all these replaced elements at the node points of the coarse mesh, we obtain the adaptively refined new mesh, which is the optimal mesh we are trying to achieve. During this process, to guarantee the continuity of the approximate solution at the node points of the coarse mesh, the boundary conditions at those nodes have to be fixed; in other words, the fine mesh solution u and its L2-projections are forced to take the same values at the node points of the coarse mesh.
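Evaluating expression (30) for one candidate refinement is a handful of matrix products. A sketch with hypothetical data (for orthonormal bases the mass matrices are identities, and the criterion reduces to \((\|Z^{opt}C\|^2-\|Z^cC\|^2)/(d_{opt}-d_c)\)):

```python
import numpy as np

def criterion(C, Zc, Mc, Zopt, Mopt):
    """Evaluate expression (30):
    C^T (Zopt^T (Mopt^{-1})^T Zopt - Zc^T (Mc^{-1})^T Zc) C / (d_opt - d_c)."""
    d_c, d_opt = Mc.shape[0], Mopt.shape[0]
    A_opt = Zopt.T @ np.linalg.inv(Mopt).T @ Zopt
    A_c = Zc.T @ np.linalg.inv(Mc).T @ Zc
    return float(C @ (A_opt - A_c) @ C) / (d_opt - d_c)

# Toy data with orthonormal bases (M = identity), d_c = 1, d_opt = 2, d_f = 2:
C = np.array([1.0, 1.0])
Zc = np.array([[1.0, 0.0]])            # coarse space "sees" only the first mode
Zopt = np.eye(2)                       # candidate space captures both modes
val = criterion(C, Zc, np.eye(1), Zopt, np.eye(2))
print(val)  # (2 - 1) / (2 - 1) = 1.0
```

In the actual algorithm this function would be called once per candidate (20)–(23) and the elementwise winner kept, as described above.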
We start with the L2-projection function \(\Pi_{hp}u\). Equating \(\Pi_{hp}u\) with the fine mesh solution u at the node points a and b of the sample element \(I_c=[a,b]\) (all other basis functions vanish at the node points) gives
(31) \(\Pi_{hp}u(a)=u(a)\;\Longrightarrow\;\xi^c_1\,\phi^c_1(a)=c_1\,\phi^f_1(a)\;\Longrightarrow\;\xi^c_1=c_1,\)
\(\Pi_{hp}u(b)=u(b)\;\Longrightarrow\;\xi^c_2\,\phi^c_2(b)=c_3\,\phi^f_3(b)\;\Longrightarrow\;\xi^c_2=c_3.\)
In this case it suffices to calculate the coefficients of the matrix \(\xi^c\) other than \(\xi^c_1\) and \(\xi^c_2\). Removing the first two columns of the matrices given in (26) and the first two elements of \(\xi^c\) and solving the remaining system yields the desired coefficients. Since the structure of the optimal element given in (20) is similar to that of the coarse mesh, the calculation for this case is similar to the one for obtaining \(\xi^c\); for this optimal case, \(\xi^{opt}_1=c_1\) and \(\xi^{opt}_2=c_3\). Deleting the first two columns of the matrices given in (28) and the first two elements of \(\xi^{opt}\) and solving the remaining system yields the remaining coefficients of the projection function \(\Pi^{opt}_{hp}u\).
For the remaining three optimal cases, given by (21), (22), and (23), we follow the same approach:
(32) \(\Pi^{opt}_{hp}u(a)=u(a)\;\Longrightarrow\;\xi^{opt}_1\,\phi^{opt}_1(a)=c_1\,\phi^f_1(a)\;\Longrightarrow\;\xi^{opt}_1=c_1,\)
\(\Pi^{opt}_{hp}u(b)=u(b)\;\Longrightarrow\;\xi^{opt}_3\,\phi^{opt}_3(b)=c_3\,\phi^f_3(b)\;\Longrightarrow\;\xi^{opt}_3=c_3.\)
For these cases of optimal meshes, the first and third columns of the matrices (28) and the first and third elements (\(\xi^{opt}_1\) and \(\xi^{opt}_3\)) of \(\xi^{opt}\) are deleted, and the remaining system is solved in order to get the remaining coefficients of the projection \(\Pi^{opt}_{hp}u\).
4. Sloan Iteration Solution and Adaptive Refinement by Using Demkowicz’s Optimization
The Fredholm integral equation of the second kind given with formula (10) can be reformulated as
(33) \(x=\dfrac{1}{\lambda}(f+z),\)
where \(z(t)=(Kx)(t)=\int_a^b K(t,s)\,x(s)\,ds\), \(t\in[a,b]\). Atkinson [44] defined the iterated projection solution \(\tilde{x}_n\) for a given projection method solution \(x_n\) as
(34) \(\tilde{x}_n=\dfrac{1}{\lambda}(f+Kx_n),\)
and he mentioned that, although such iterations appear in many places in the literature, Sloan [45] was the first to recognize the importance of doing one such iteration; in his honor \(\tilde{x}_n\) is often called the Sloan iterate. We express (34) more explicitly as
(35) \(\tilde{x}(t)=\dfrac{1}{\lambda}\left(\int_a^b K(t,s)\,x(s)\,ds+f(t)\right),\quad t\in[a,b].\)
Substituting the Galerkin solution on the fine mesh, given by (11), into the right hand side of (35), the iterated solution on the fine mesh is obtained:
(36) \(\tilde{x}(t)=\dfrac{1}{\lambda}\left(f(t)+\sum_{j=1}^{d_f}c_j\int_a^b K(t,s)\,\phi_j(s)\,ds\right),\quad t\in[a,b].\)
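As a sketch of (36): once the Galerkin coefficients \(c_j\) are known, the Sloan iterate needs only one more quadrature pass. The kernel K(t,s)=ts, λ=1, and f(t)=(2/3)t below are illustrative assumptions chosen so that the exact solution is x(t)=t, whose Galerkin coefficients in the monomial basis {1, t, t²} are [0, 1, 0]:

```python
import numpy as np

lam = 1.0
a, b = 0.0, 1.0
K = lambda t, s: t * s                     # assumed degenerate test kernel
f = lambda t: (2.0 / 3.0) * t              # exact solution is then x(t) = t
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2]
C = np.array([0.0, 1.0, 0.0])              # Galerkin coefficients of x(t) = t

g, w = np.polynomial.legendre.leggauss(8)
sq = 0.5 * (b - a) * g + 0.5 * (b + a)     # quadrature nodes in [a, b]
wq = 0.5 * (b - a) * w

def sloan(t):
    """Sloan iterate (36): (1/lam) * (f(t) + sum_j c_j int K(t,s) phi_j(s) ds)."""
    integrals = [np.sum(wq * K(t, sq) * phi(sq)) for phi in basis]
    return (f(t) + sum(c * I for c, I in zip(C, integrals))) / lam

print(sloan(0.5))  # ~0.5: here the iterate reproduces the exact solution
```

Note that \(\tilde{x}\) is no longer a combination of the mesh basis functions, which is why the projection matrices below are rebuilt with the kernel inside the integrals.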
We solve the optimization problem given with (18) for iterated solution as it was solved for Galerkin method solution. In this case u in (18) is taken as the iterated solution x~ given with (36).
We define the matrix \(Z^c\) and the vector \(F^c\) needed in the calculation of \(\Pi_{hp}u\) given by (24) as follows (\(M^c\) is as in (26)):
(37) \(Z^c=(Z^c_{ij})\in\mathbb{R}^{d_c\times d_f},\quad Z^c_{ij}=\int_a^b\!\int_a^b K(t,s)\,\phi^f_j(s)\,\phi^c_i(t)\,ds\,dt,\)
\(F^c=\left[\int_a^b f(t)\phi^c_1(t)\,dt\;\;\int_a^b f(t)\phi^c_2(t)\,dt\;\cdots\;\int_a^b f(t)\phi^c_{d_c}(t)\,dt\right]^T.\)
We obtain the coefficient matrix \(\xi^c\) as
(38) \(\xi^c=\dfrac{1}{\lambda}(M^c)^{-1}\cdot\left(F^c+Z^c\cdot C\right).\)
Substituting these coefficients into (24), we obtain the L2-projection function \(\Pi_{hp}u\) for the iterated solution. We need the matrix \(Z^{opt}\) and the vector \(F^{opt}\) in the calculation of \(\Pi^{opt}_{hp}u\) (\(M^{opt}\) is as in (28)):
(39) \(Z^{opt}=(Z^{opt}_{ij})\in\mathbb{R}^{d_{opt}\times d_f},\quad Z^{opt}_{ij}=\int_a^b\!\int_a^b K(t,s)\,\phi^f_j(s)\,\phi^{opt}_i(t)\,ds\,dt,\)
\(F^{opt}=\left[\int_a^b f(t)\phi^{opt}_1(t)\,dt\;\;\int_a^b f(t)\phi^{opt}_2(t)\,dt\;\cdots\;\int_a^b f(t)\phi^{opt}_{d_{opt}}(t)\,dt\right]^T.\)
We obtain the coefficient matrix \(\xi^{opt}\) as follows:
(40) \(\xi^{opt}=\dfrac{1}{\lambda}(M^{opt})^{-1}\cdot\left(F^{opt}+Z^{opt}\cdot C\right).\)
Substituting these coefficients in (25) we obtain the L2-projection function Πhpoptu for the iterated solution.
As the quantity in (18) was reformulated in matrix form for the Galerkin solution, the same is done for the iterated solution. Using (38) and (40), we express it in terms of the matrices introduced above:
(41) \(\dfrac{1}{\lambda^2}\left(\left((F^{opt})^T+C^T(Z^{opt})^T\right)\cdot\left((M^{opt})^{-1}\right)^T\cdot\left(F^{opt}+Z^{opt}C\right)-\left((F^c)^T+C^T(Z^c)^T\right)\cdot\left((M^c)^{-1}\right)^T\cdot\left(F^c+Z^cC\right)\right)\times\left(d_{opt}-d_c\right)^{-1}.\)
As explained for the Galerkin solution at the end of Section 3.2, on each element of the coarse mesh expression (41) should be calculated for each of the four possible optimal mesh refinement cases, and the case giving the minimum value replaces that element of the coarse mesh. For the continuity of the approximate solution at the node points of the coarse mesh, the boundary conditions at those nodes are fixed during the calculations in the same way as for the Galerkin solution.
5. Some Applications
Both methods are applied to some problems from [46] on Fredholm integral equations of the second kind with smooth and discontinuous kernels, and the results are reported. For each example we present error values in two kinds of tables. In the first kind, we give the error values of the consecutive solutions, where n denotes the run number; here the resulting refined mesh is used as the coarse mesh of the next run. In the second kind, we give the error values when we use N equidistant node points on the coarse mesh and p as the element local polynomial order of approximation on each element of the coarse mesh. In both kinds of tables, GL2 and SL2 denote the L2-errors, and Gmax and Smax denote the maximum absolute errors at the node points of the fine mesh (obtained from the coarse mesh), of the Galerkin solution and the iterated solution, respectively.
In this study our main and final goal is to reach better approximations by applying adaptive refinement together with the Sloan iteration to Galerkin solutions and to examine them. For this reason, in all examples presented we give two kinds of graphs with relative errors on a log-log scale: one with the relative error in the L2-norm and one with the relative error in the maximum norm on the y-axis, both against the number of degrees of freedom on the x-axis, for the Sloan iteration results. The graphs clearly show the decrease in relative error as the number of degrees of freedom increases. For brevity, the log of the number of degrees of freedom is denoted by “log(#dofs)” on the x-axis.
In order to illustrate the refinement process better we provide more details for the first example than for the latter ones: besides the error plots for the Sloan iteration we also add the corresponding graphs for the Galerkin method and show the mesh refinement for the five consecutive runs.
Example 1.
The exact solution of the problem,
(42) \(x(s)+\dfrac{1}{\pi}\int_{-\pi}^{\pi}\dfrac{0.3}{1-0.64\cos^2((s+t)/2)}\,x(t)\,dt=25-16\sin^2(s),\quad -\pi\le s\le\pi,\)
is given by \(x(t)=\dfrac{17}{2}+\dfrac{128}{17}\cos(2t)\). Let the coarse mesh be given by the lists \(L=[-\pi\;\;\pi]\), \(D=[2]\). The problem is solved five times consecutively.
As seen in each row of Table 1, the Sloan iteration causes a decrease in both the GL2 and Gmax error values in every run. We also see a general decrease in all error types; this is especially clear for the errors via Sloan iteration given in the last two columns of the table.
Table 1: Error values of (42) in Example 1.

n   | GL2                   | Gmax                  | SL2                   | Smax
I   | 3.807778247544568e+00 | 3.962758499440351e+00 | 6.142245995845852e-02 | 2.691129516577462e-02
II  | 3.681501530476621e-01 | 4.734800657784639e-01 | 1.436655818926428e-05 | 5.651797188477303e-06
III | 3.980571071622163e-02 | 7.073630464941472e-02 | 1.437092335784317e-05 | 5.601928975806914e-06
IV  | 1.231396356019893e-03 | 2.530443837131857e-03 | 5.107562613500647e-06 | 2.459469289561866e-06
V   | 1.705913430341418e-03 | 3.875831522837103e-03 | 2.009899309359689e-07 | 6.851382039485543e-08
When we look at the relative error graphs given in Figures 1 and 2, we see that the relative errors via the Sloan iteration reach smaller values than those via the Galerkin method.
Convergence graphs of relative error in L2-norm and maximum norm via Galerkin method for (42) in Example 1.
Convergence graphs of relative error in L2-norm and maximum norm via Sloan iteration for (42) in Example 1.
In Table 2 we give the errors for computations starting from four different initial coarse meshes with 20, 30, 40, and 50 equidistant node points on the interval [−π,π] and with initial element local polynomial order p=2. From this table we see that, in just one run of the procedure, we can reach much smaller error values with a lower element local polynomial order of approximation just by increasing the number of nodes. In [46] the error at the nodes obtained by the Nyström method was given as 1.1e-8. For this example only, as a sample, we give a graph including a diagram showing the mesh refinement step by step for the five consecutive runs via the Galerkin method results, and the lists of the optimal mesh obtained at the end of the five consecutive runs for both methods.
Table 2: Error values of (42) in Example 1.

N, p  | GL2                   | Gmax                  | SL2                   | Smax
20, 3 | 1.919678771148673e-05 | 3.143456151022406e-05 | 2.861019689160329e-11 | 1.624300693947589e-11
30, 3 | 8.285910426569432e-06 | 1.378103734439584e-05 | 8.932393923247606e-14 | 5.240252676230739e-14
40, 3 | 4.568972671388127e-06 | 7.635657150117936e-06 | 9.976748626959734e-14 | 6.927791673660977e-14
50, 3 | 2.892986173063029e-06 | 4.818969607356394e-06 | 9.852985133624295e-14 | 6.750155989720952e-14
In Figure 3 we see the optimal mesh selections chosen by the optimization problem (18). In the first run it chose to do only p-refinement, increasing the element local polynomial order of approximation from 2 to 3, which corresponds to choice (20). In the second run it chose to make h- and p-refinement together, choosing case (22). In the third run it chose to refine the first element as in (23) and the second as in (20). In the fourth run it chose to refine the first and third elements of level III by (22) and the middle element by (20). Finally, in the fifth run, refining the first element of level IV by (23), the second, fourth, and fifth elements of level IV by (20), and the third element of level IV by (22), the optimal mesh at the end of the five consecutive runs, given below for the Galerkin method, is obtained.
Representation of mesh refinement via Galerkin method for (42) in Example 1.
For the Galerkin method solution,
(43) L = [-3.1416e+00 -2.7489e+00 -2.3562e+00 -1.5708e+00 -7.8540e-01 0 1.5708e+00 3.1416e+00], D = [5 4 6 4 5 6 7].
For the iterated solution we obtain the final optimal mesh as
(44) L = [-3.1416e+00 -1.5708e+00 -7.8540e-01 0 1.5708e+00 2.3562e+00 3.1416e+00], D = [5 5 4 5 5 4].
Example 2.
The exact solution of the problem
(45) \(-\dfrac{x(s)}{2}-\int_0^1\dfrac{0.1}{0.01+(s-t)^2}\,x(t)\,dt=f(s),\quad 0\le s\le 1,\)
is \(x(t)=0.06-0.8t+t^2\). Let the coarse mesh be given by the lists \(L=[0\;\;1]\) and \(D=[2]\). The problem is solved four times consecutively.
We observe from the rows of Table 3 that the Sloan iteration decreases the GL2 and Gmax errors, as we saw in Example 1. Moreover, while the error values obtained in the consecutive runs via the Galerkin method do not differ much, the ones obtained by the Sloan iteration decrease faster. The graphs given in Figure 4 for Example 2 clearly show the expected decrease in the relative errors in the L2 and maximum norms.
Table 3: Error values of (45) in Example 2.

n   | GL2                   | Gmax                  | SL2                   | Smax
I   | 1.154995991009992e-05 | 4.800924769501891e-05 | 9.079011385069973e-06 | 2.077330804106659e-05
II  | 1.144321554963263e-05 | 7.856882758505712e-05 | 4.481278123307043e-06 | 1.430730253160206e-06
III | 2.260073176335741e-05 | 7.192862655699961e-05 | 3.073165554649129e-07 | 3.282068198745547e-07
IV  | 2.373416815667144e-05 | 7.277671609520753e-05 | 1.846336864089446e-08 | 4.620430288371225e-08
Convergence graphs of relative error in L2-norm and maximum norm via Sloan iteration for (45) in Example 2.
As in our first example, we used three different initial coarse meshes, with 30, 40, and 50 equidistant node points on the interval [0,1] and p=2 as the initial element local polynomial order of approximation on each element. As in Example 1, we again see in Table 4 that we reach much smaller error values with a lower element local polynomial order of approximation just by increasing the number of nodes. In [46] the error at the nodes obtained by the Nyström method was given as 6.6e-6.
Table 4: Error values of (45) in Example 2.

N, p  | GL2                   | Gmax                  | SL2                   | Smax
30, 2 | 3.827463454759924e-13 | 2.371158824843178e-12 | 3.827929692581457e-13 | 2.367273044256990e-12
40, 2 | 6.337936926296469e-14 | 3.875233467454109e-13 | 6.344645499378824e-14 | 3.914091273315989e-13
50, 2 | 1.595065414892088e-14 | 1.005862060310392e-13 | 1.598617548424624e-14 | 9.842127113302013e-14
Example 3.
The solution of the equation
(46) \(x(s)-\int_0^1 k(t,s)\,x(t)\,dt=\left(1-\dfrac{1}{\pi^2}\right)\sin(\pi s),\quad 0\le s\le 1,\)
with
(47) \(k(t,s)=\begin{cases}s(1-t), & s\le t,\\ t(1-s), & t\le s,\end{cases}\)
is x(t)=sin(πt). Let the coarse mesh be given with the lists L=[01] and D=[2]. The problem is solved seven times consecutively.
Table 5 shows that the Sloan iteration causes a decrease in the error values obtained via the Galerkin method, and the iterated errors get smaller over the seven consecutive runs. The graphs given in Figure 5 clearly show the decrease in both relative errors, in the L2 and maximum norms, as in the previous examples.
Table 5: Error values of (46) in Example 3.

n   | GL2                   | Gmax                  | SL2                   | Smax
I   | 2.323175934794088e-03 | 4.178803217825712e-03 | 3.043325952187653e-03 | 2.499839591130204e-04
II  | 2.240147347334236e-03 | 1.853030198079919e-03 | 7.768764444034127e-04 | 6.367704404164343e-05
III | 6.486006996900690e-04 | 1.540804529766398e-03 | 2.275583867630013e-04 | 2.260998286551796e-05
IV  | 2.737884048950595e-04 | 7.494545635317040e-04 | 1.076817056502688e-04 | 1.022158821584185e-05
V   | 4.309359697687976e-04 | 2.445614088301018e-03 | 8.482708369079025e-05 | 6.301343734582687e-06
VI  | 5.397336277740180e-04 | 3.256397951289958e-03 | 8.303708583183078e-05 | 5.990667999111743e-06
VII | 6.252849507196142e-04 | 4.555052326156391e-03 | 4.721973545142185e-05 | 4.470476808404733e-06
Convergence graphs of relative error in L2-norm and maximum norm via Sloan iteration for (46) in Example 3.
Using the same three initial coarse meshes as in Example 2, we finally see from Table 6 that we can obtain smaller errors with a lower element local polynomial order of approximation just by increasing the number of nodes. In [46] the error at the nodes obtained by the Nyström method was given as 1.7e-7.
Table 6: Error values of (46) in Example 3.

N, p  | GL2                   | Gmax                  | SL2                   | Smax
30, 2 | 1.248229331783191e-05 | 2.667447083548602e-06 | 1.110735027318169e-05 | 1.362431358398197e-06
40, 2 | 7.197918189051699e-06 | 6.660698562699352e-06 | 7.498309701859696e-06 | 8.492780161351021e-07
50, 2 | 4.754457518136059e-06 | 6.130650015201411e-06 | 5.508659819578954e-06 | 5.732876173780710e-07
6. Conclusions
The two methods presented aim to find and improve approximate solutions of Fredholm integral equations of the second kind and to observe the effect of adaptive refinement on these solutions for both methods. Using polynomial-type functions to approximate solutions is one of the most common approaches for many kinds of problems in mathematics; generally, polynomials of the same degree are used over the whole solution interval. The main idea behind our use of adaptive refinement is to observe what changes when this general approach is altered: adaptively refined meshes give us the chance to use polynomials of different degrees on different subintervals of the solution interval, which may yield better approximations. Comparing the maximum absolute errors at the nodes for Examples 1 and 2, which have smooth kernels, with the results in [46], we saw that our methods reach much smaller error values by increasing the number of node points, even with polynomials of low degree. For Example 3, when the absolute errors at the nodes are compared with those in [46], we see that they approach each other as more node points are used, again with polynomials of low degree. We also see from the examples that the Sloan iteration yields better approximations than the Galerkin method alone. The relative error graphs show that the Sloan iteration brings the expected decrease in the L2-error and in the maximum absolute error at the node points of the fine mesh. The results showed that error values are generally better when polynomials of degree between 2 and 6 are used, which is an advantage in decreasing the time spent solving the problems. Another advantage of our methods is that the approximate solutions are found easily using computer code written in Matlab.
It is also observed that using polynomials with higher degrees can cause oscillations in the errors. The methods can be improved not only to get better results, but also to solve other problem models by some modifications.
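The improvement that Sloan iteration brings over the plain Galerkin solution can be illustrated with a small sketch. The following Python script is not the Matlab code used in this study: it assumes a fixed uniform mesh with linear hat functions (no hp-refinement) and an illustrative separable kernel k(x, t) = xt/2 chosen so that the exact solution is known. It solves u(x) − ∫₀¹ k(x, t)u(t) dt = f(x) by the Galerkin method and then applies one Sloan iteration u_S = f + K u_G:

```python
import numpy as np

# Illustrative model problem (not one of the paper's examples):
#   u(x) - int_0^1 k(x,t) u(t) dt = f(x)  on [0, 1],
# with separable kernel k(x,t) = x*t/2 and exact solution
# u(x) = x^2, which gives f(x) = x^2 - x/8.
k = lambda x, t: 0.5 * x * t
u_exact = lambda x: x ** 2
f = lambda x: x ** 2 - x / 8.0

n = 8                                   # uniform mesh with n subintervals
nodes = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n

def hat(i, x):
    """Piecewise-linear hat function centred at nodes[i]."""
    return np.clip(1.0 - np.abs(x - nodes[i]) / h, 0.0, None)

# Composite Gauss-Legendre quadrature on [0, 1]
gx, gw = np.polynomial.legendre.leggauss(6)
q = np.concatenate([0.5 * h * gx + 0.5 * (a + b)
                    for a, b in zip(nodes[:-1], nodes[1:])])
w = np.tile(0.5 * h * gw, n)

# Galerkin system: sum_j c_j [(phi_j, phi_i) - (K phi_j, phi_i)] = (f, phi_i)
m = n + 1
phi = np.array([hat(i, q) for i in range(m)])        # phi_i at quadrature points
Kphi = np.array([[np.sum(w * k(x, q) * phi[j]) for x in q] for j in range(m)])
A = np.empty((m, m))
b = np.empty(m)
for i in range(m):
    b[i] = np.sum(w * f(q) * phi[i])
    for j in range(m):
        A[i, j] = np.sum(w * phi[i] * (phi[j] - Kphi[j]))
c = np.linalg.solve(A, b)

def u_gal(x):
    return sum(c[i] * hat(i, x) for i in range(m))

def u_sloan(x):
    # One Sloan iteration: u_S = f + K u_G (one extra kernel integration)
    return f(x) + np.sum(w * k(x, q) * u_gal(q))

xs = np.linspace(0.0, 1.0, 101)
err_gal = max(abs(u_gal(x) - u_exact(x)) for x in xs)
err_sloan = max(abs(u_sloan(x) - u_exact(x)) for x in xs)
print("Galerkin max error:", err_gal)
print("Sloan    max error:", err_sloan)
```

For this particular rank-one kernel one can check that K(u − u_G) vanishes, so the Sloan iterate is exact up to quadrature and roundoff; in general Sloan iteration gives superconvergence of the iterated solution rather than exactness.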
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors thank all the referees for their helpful and valuable comments, which improved the paper.
References
[1] H. Hochstadt, Integral Equations, Wiley-Interscience, New York, NY, USA, 1973.
[2] S. Rahbar and E. Hashemizadeh, "A computational approach to the Fredholm integral equation of the second kind," in Proceedings of the World Congress on Engineering, vol. 2, pp. 933–937, London, UK, July 2008.
[3] B. G. Pachpatte, Inequalities for Differential and Integral Equations, Academic Press, London, UK, 1998.
[4] C. T. H. Baker, The Numerical Treatment of Integral Equations, Clarendon Press, Oxford, UK, 1977.
[5] W. Wang, "A new mechanical algorithm for solving the second kind of Fredholm integral equation," Applied Mathematics and Computation, vol. 172, no. 2, pp. 946–962, 2006.
[6] A. T. Lonseth, "Approximate solutions of Fredholm-type integral equations," Bulletin of the American Mathematical Society, vol. 60, no. 4, pp. 415–430, 1954.
[7] I. Fredholm, "Sur une classe d'équations fonctionnelles," Acta Mathematica, vol. 27, no. 1, pp. 365–390, 1903.
[8] T. A. Burton, Volterra Integral and Differential Equations, Academic Press, New York, NY, USA, 1983.
[9] C. Corduneanu, Integral Equations and Stability of Feedback Systems, Academic Press, New York, NY, USA, 1973.
[10] C. Corduneanu, Principles of Differential and Integral Equations, 2nd edition, Chelsea, Bronx County, NY, USA, 1977.
[11] C. Corduneanu, Integral Equations and Applications, Cambridge University Press, Cambridge, UK, 1991.
[12] G. Gripenberg, S. O. Londen, and O. Staffans, Volterra Integral and Functional Equations, Cambridge University Press, Cambridge, UK, 1990.
[13] A. M. Krasnoselskii, Topological Methods in the Theory of Nonlinear Integral Equations, Noordhoff, Groningen, The Netherlands, 1964.
[14] R. K. Miller, Nonlinear Volterra Integral Equations, W. A. Benjamin, Menlo Park, Calif, USA, 1971.
[15] F. G. Tricomi, Integral Equations, Interscience, New York, NY, USA, 1957.
[16] I. Babuška, "Advances in the p and h-p versions of the finite element method. A survey," International Series of Numerical Mathematics, vol. 86, pp. 31–46, Birkhäuser, Basel, Switzerland, 1988.
[17] I. Babuska, B. A. Szabo, and I. N. Katz, "The p-version of the finite element method," SIAM Journal on Numerical Analysis, vol. 18, no. 3, pp. 515–545, 1981.
[18] I. Babuška and M. R. Dorr, "Error estimates for the combined h and p-versions of the finite element method," Numerische Mathematik, vol. 37, no. 2, pp. 257–277, 1981.
[19] L. Demkowicz, Computing with hp-Adaptive Finite Elements, Volume 1: One and Two Dimensional Elliptic and Maxwell Problems, Chapman & Hall/CRC, Boca Raton, Fla, USA, 2007.
[20] M. Asadzadeh and K. Eriksson, "On adaptive finite element methods for Fredholm integral equations of the second kind," SIAM Journal on Numerical Analysis, vol. 31, no. 3, pp. 831–855, 1994.
[21] K. Atkinson, A Survey of Numerical Methods for the Solution of Fredholm Integral Equations of the Second Kind, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1976.
[22] Y. Ikebe, "The Galerkin method for the numerical solution of Fredholm integral equations of the second kind," SIAM Review, vol. 14, no. 3, pp. 465–491, 1972.
[23] J. C. Nedelec, Approximation des équations intégrales en mécanique et en physique, Centre de Mathématiques, École Polytechnique, Palaiseau, France, 1977.
[24] I. H. Sloan, "A review of numerical methods for Fredholm equations of the second kind," in R. S. Anderssen, F. de Hoog, and M. Lukas, Eds., The Application and Numerical Solution of Integral Equations, Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1978.
[25] W. L. Wendland, "On some mathematical aspects of boundary element methods for elliptic problems," in J. R. Whiteman, Ed., The Mathematics of Finite Elements and Applications V, pp. 193–227, Academic Press, London, UK, 1985.
[26] I. G. Graham, R. E. Shaw, and A. Spence, "Adaptive numerical solution of integral equations with application to a problem with a boundary layer," vol. 68, pp. 75–90, 1989.
[27] E. Rank, "Adaptivity and accuracy estimation for finite element and boundary integral element methods," in I. Babuška, O. C. Zienkiewicz, J. Gago, and E. R. Oliveira, Eds., Accuracy Estimates and Adaptive Refinements in Finite Element Computations, pp. 79–94, John Wiley & Sons, New York, NY, USA, 1986.
[28] D. H. Yu, "A posteriori error estimates and adaptive approaches for some boundary element methods," in C. A. Brebbia, W. L. Wendland, and G. Kuhn, Eds., Boundary Elements IX, pp. 241–256, Springer, Berlin, Germany, 1987.
[29] M. J. Berger and P. Colella, "Local adaptive mesh refinement for shock hydrodynamics," Journal of Computational Physics, vol. 82, no. 1, pp. 64–84, 1989.
[30] J. E. Schiermeier and B. A. Szabó, "Interactive design based on the p-version of the finite element method," Finite Elements in Analysis and Design, vol. 3, pp. 93–107, 1987.
[31] S. Larsen, University of Maryland, College Park, Md, USA, 1986.
[32] K. Eriksson and C. Johnson, "Adaptive finite element methods for parabolic problems I: a linear model problem," SIAM Journal on Numerical Analysis, vol. 28, no. 1, pp. 43–77, 1991.
[33] P. K. Moore, "Comparison of adaptive methods for one-dimensional parabolic systems," Applied Numerical Mathematics, vol. 16, no. 4, pp. 471–488, 1995.
[34] L. Demkowicz, W. Rachowicz, and P. Devloo, "A fully automatic hp-adaptivity," Journal of Scientific Computing, vol. 17, no. 1–4, pp. 117–142, 2002.
[35] P. Houston and E. Süli, "A note on the design of hp-adaptive finite element methods for elliptic partial differential equations," Computer Methods in Applied Mechanics and Engineering, vol. 194, no. 2–5, pp. 229–243, 2005.
[36] L. He and A. Zhou, "Convergence and complexity of adaptive finite element methods for elliptic partial differential equations," International Journal of Numerical Analysis and Modeling, vol. 8, no. 4, pp. 615–640, 2011.
[37] R. H. W. Hoppe and M. Kieweg, "Adaptive finite element methods for mixed control-state constrained optimal control problems for elliptic boundary value problems," Computational Optimization and Applications, vol. 46, no. 3, pp. 511–533, 2010.
[38] D. Pardo, L. Demkowicz, C. Torres-Verdín, and L. Tabarovsky, "A goal-oriented hp-adaptive finite element method with electromagnetic applications. I. Electrostatics," International Journal for Numerical Methods in Engineering, vol. 65, no. 8, pp. 1269–1309, 2006.
[39] E. P. Stephan, M. Maischak, and F. Leydecker, "An hp-adaptive finite element/boundary element coupling method for electromagnetic problems," Computational Mechanics, vol. 39, no. 5, pp. 673–680, 2007.
[40] L. Botti, M. Piccinelli, B. Ene-Iordache, A. Remuzzi, and L. Antiga, "An adaptive mesh refinement solver for large-scale simulation of biological flows," International Journal for Numerical Methods in Biomedical Engineering, vol. 26, no. 1, pp. 86–100, 2010.
[41] R. H. Hoppe, H. Wu, and Z. Zhang, "Adaptive finite element methods for the Laplace eigenvalue problem," Journal of Numerical Mathematics, vol. 18, no. 4, pp. 281–302, 2010.
[42] K. E. Atkinson, The Numerical Solution of Integral Equations of the Second Kind, vol. 4 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, UK, 1997.
[43] M. B. Larson and F. Bengzon, The Finite Element Method: Theory, Implementation, and Applications, Springer, New York, NY, USA, 2010.
[44] K. E. Atkinson, "A personal perspective on the history of the numerical analysis of Fredholm integral equations of the second kind," pp. 53–72, World Scientific, Leuven, Belgium, 2008.
[45] I. H. Sloan, "Improvement by iteration for compact operator equations," Mathematics of Computation, vol. 30, no. 136, pp. 758–764, 1976.
[46] K. E. Atkinson and L. F. Shampine, "Algorithm 876: solving Fredholm integral equations of the second kind in Matlab," ACM Transactions on Mathematical Software, vol. 34, no. 4, 2008.