A gradient recovery operator based on projecting the discrete gradient onto the standard finite element space is considered. We use an oblique projection, where the test and trial spaces are different, and the bases of these two spaces form a biorthogonal system. Biorthogonality allows efficient computation of the recovery operator. We analyze the approximation properties of the gradient recovery operator. Numerical results are presented in the two-dimensional case.
1. Introduction
Gradient reconstruction is a popular technique for developing reliable a posteriori error estimators when approximating the solution of partial differential equations with adaptive finite element methods [1–6]. The main idea of gradient recovery error estimators is to postprocess the computed gradient and show that the postprocessed gradient converges to the true gradient at a better rate than the computed gradient does. There are many approaches to gradient postprocessing. The most popular techniques are local least-squares fitting or patch recovery [1, 2, 4], weighted averaging [4], local or global $L^2$-projection [4, 5, 7], and polynomial preserving recovery [8].
Recently we presented a gradient reconstruction operator based on an oblique projection [9], in which the global $L^2$-projection is replaced by an oblique projection in order to gain computational efficiency. The oblique projection operator is constructed using a biorthogonal system. In fact, for linear finite elements on simplicial meshes, this approach reproduces the well-known weighted-averaging gradient reconstruction scheme [4, 6, 10]. We proved that the error estimator based on the oblique projection is asymptotically exact on mildly unstructured meshes, using the fact that the error estimator based on the $L^2$-projection is also asymptotically exact on such meshes [4, 5, 7, 11].
In this paper, we analyze the approximation properties of the recovered gradient obtained from an oblique projection in one dimension. Since the oblique projection reproduces the weighted-average gradient recovery of linear finite elements [4, 6], this construction is quite useful for extending the weighted-average gradient recovery of linear finite elements to quadrilaterals and hexahedra.
Let $\Omega=(a_1,a_2)$ with $a_1,a_2\in\mathbb{R}$ and $a_1<a_2$. Let $\Delta:\ a_1=x_0<x_1<\cdots<x_n=a_2$ be a partition of the interval $\Omega$. We define the interior of the grid, denoted by $\operatorname{int}\Delta$, as
(1) $\operatorname{int}\Delta=\{x_i\in\Delta : 1\le i\le n-1\}.$
We also define the set of intervals in the partition $\Delta$ as $\{I_i\}_{i=0}^{n-1}$, where $I_i=[x_i,x_{i+1})$. Two sets $A_n$ and $B_n$ of indices are also defined as
(2) $A_n=\{i\in\mathbb{N} : 0\le i\le n\},\qquad B_n=A_n\setminus\{0,n\},$
respectively. A piecewise linear interpolant of a continuous function $u$ is written as $u_h\in V_h$ with
(3) $u_h(x)=\sum_{i=0}^{n}u(x_i)\phi_i(x),$
where $\phi_i$ is the standard hat function associated with the point $x_i$, $0\le i\le n$. We define a discrete space
(4) $V_h=\operatorname{span}\{\phi_0,\dots,\phi_n\}\subset H^1(\Omega).$
The linear interpolant of $u\in H^1(\Omega)$ is the continuous function defined by $u_h=\sum_{i=0}^{n}u(x_i)\phi_i$. However, the derivative of this interpolant $u_h$ is not continuous. To obtain a continuous approximation we project the derivative of the interpolant, $du_h/dx=\sum_{i=0}^{n}u(x_i)\,d\phi_i/dx$, onto the discrete space $V_h$. There are two different types of projection: an orthogonal projection and an oblique projection. The orthogonal projection $P_h$ of $du_h/dx$ onto $V_h$ is defined by finding $g_h=P_h\,du_h/dx\in V_h$ that satisfies
(5) $\int_\Omega g_h\phi_j\,dx=\int_\Omega \frac{du_h}{dx}\,\phi_j\,dx,\qquad 0\le j\le n.$
Since $g_h\in V_h$, we can represent it as an $(n+1)$-dimensional vector:
(6) $\vec g=(g_0,\dots,g_n)^T \quad\text{with}\quad g_h=\sum_{i=0}^{n}g_i\phi_i.$
Now the requirement given in (5) is equivalent to a linear system $M\vec g=\vec f$, where $M$ is the mass matrix and
(7) $\vec f=(f_0,\dots,f_n)^T \quad\text{with}\quad f_j=\int_\Omega\frac{du_h}{dx}\,\phi_j\,dx.$
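For concreteness, the system $M\vec g=\vec f$ can be assembled directly: each hat function overlaps only its neighbours, so $M$ is tridiagonal and the system can be solved in $O(n)$ operations with the Thomas algorithm. The following is a minimal sketch (our own illustration; the function name and interface are not from the paper):

```python
def l2_projected_gradient(x, u):
    """Orthogonal L2-projection of du_h/dx onto V_h, cf. (5)-(7).

    x: grid nodes x_0 < ... < x_n;  u: nodal values u(x_i).
    Assembles the tridiagonal mass matrix M (diagonal d, off-diagonal e)
    and the right-hand side f, then solves M g = f by the Thomas algorithm.
    """
    n = len(x) - 1
    h = [x[i + 1] - x[i] for i in range(n)]            # element lengths
    s = [(u[i + 1] - u[i]) / h[i] for i in range(n)]   # du_h/dx on each element
    # mass matrix of hat functions: M_ii = (h_{i-1}+h_i)/3, M_{i,i+1} = h_i/6
    d = [h[0] / 3.0] + [(h[i - 1] + h[i]) / 3.0 for i in range(1, n)] + [h[n - 1] / 3.0]
    e = [h[i] / 6.0 for i in range(n)]
    # f_j = integral of du_h/dx against phi_j (phi_j integrates to h/2 per element)
    f = ([s[0] * h[0] / 2.0]
         + [(s[i - 1] * h[i - 1] + s[i] * h[i]) / 2.0 for i in range(1, n)]
         + [s[n - 1] * h[n - 1] / 2.0])
    # Thomas algorithm (M is symmetric positive definite, no pivoting needed)
    dd, g = d[:], f[:]
    for i in range(1, n + 1):
        m = e[i - 1] / dd[i - 1]
        dd[i] -= m * e[i - 1]
        g[i] -= m * g[i - 1]
    out = [0.0] * (n + 1)
    out[n] = g[n] / dd[n]
    for i in range(n - 1, -1, -1):
        out[i] = (g[i] - e[i] * out[i + 1]) / dd[i]
    return out
```

If $du_h/dx$ already lies in $V_h$ (for example, $u$ linear, so the slope is a constant), the projection reproduces it exactly.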
Here the mass matrix $M$ is tridiagonal. The computation time can be reduced greatly if the mass matrix is diagonal, which can be achieved by using a suitable oblique projection instead of the orthogonal projection. We consider the projection
(8) $Q_h: L^2(\Omega)\longrightarrow V_h,$
which is defined as the problem of finding $g_h=Q_h\,du_h/dx\in V_h$ such that
(9) $\int_\Omega g_h\lambda_h\,dx=\int_\Omega\frac{du_h}{dx}\,\lambda_h\,dx,\qquad \lambda_h\in M_h,$
where $M_h$ is another piecewise polynomial space, not orthogonal to $V_h$, with $\dim M_h=\dim V_h$; see [12]. In fact, the projection operator $Q_h$ is well-defined due to the following stability condition: there is a constant $\beta>0$, independent of the mesh-size $h$, such that [9, 13]
(10) $\|\phi_h\|_{L^2(\Omega)}\le \beta \sup_{\mu_h\in M_h\setminus\{0\}} \frac{\int_\Omega \mu_h\phi_h\,dx}{\|\mu_h\|_{L^2(\Omega)}},\qquad \phi_h\in V_h.$
In order to make the mass matrix $M$ diagonal we define a new set of basis functions $\{\lambda_0,\dots,\lambda_n\}$ for $M_h$ that is biorthogonal to the standard hat basis functions (Figure 1) used previously. This biorthogonality relation is defined as
(11) $\int_\Omega \lambda_i\phi_j\,dx=c_j\delta_{ij},\qquad c_j\ne 0,\quad 0\le i,j\le n,$
where $\delta_{ij}$ is the Kronecker delta:
(12) $\delta_{ij}=\begin{cases}1,& i=j,\\ 0,&\text{otherwise},\end{cases}$
and $c_j$ is a positive scaling factor. The basis functions for $M_h$ are simply given by
(13) $\lambda_0(x)=\begin{cases}\dfrac{2(x-x_1)+(x-x_0)}{x_0-x_1},& x_0\le x\le x_1,\\ 0,&\text{otherwise},\end{cases}\qquad \lambda_n(x)=\begin{cases}\dfrac{2(x-x_{n-1})+(x-x_n)}{x_n-x_{n-1}},& x_{n-1}\le x\le x_n,\\ 0,&\text{otherwise},\end{cases}$
and, for $1\le i\le n-1$,
(14) $\lambda_i(x)=\begin{cases}\dfrac{2(x-x_{i-1})+(x-x_i)}{x_i-x_{i-1}},& x_{i-1}\le x\le x_i,\\ \dfrac{2(x-x_{i+1})+(x-x_i)}{x_i-x_{i+1}},& x_i\le x\le x_{i+1},\\ 0,&\text{otherwise}.\end{cases}$
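As a sanity check (our own illustration, not part of the paper), the sketch below evaluates the hat functions and the dual basis of (13)-(14) and verifies the biorthogonality relation (11) with two-point Gauss quadrature on each element, which is exact here because each product $\lambda_i\phi_j$ is piecewise quadratic:

```python
import math

def hat(x, xs, i):
    """Standard hat function phi_i on the grid xs."""
    n = len(xs) - 1
    if i > 0 and xs[i - 1] <= x <= xs[i]:
        return (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    if i < n and xs[i] <= x <= xs[i + 1]:
        return (xs[i + 1] - x) / (xs[i + 1] - xs[i])
    return 0.0

def dual(x, xs, i):
    """Biorthogonal basis function lambda_i of (13)-(14)."""
    n = len(xs) - 1
    if i > 0 and xs[i - 1] <= x <= xs[i]:
        return (2.0 * (x - xs[i - 1]) + (x - xs[i])) / (xs[i] - xs[i - 1])
    if i < n and xs[i] <= x <= xs[i + 1]:
        return (2.0 * (x - xs[i + 1]) + (x - xs[i])) / (xs[i] - xs[i + 1])
    return 0.0

def inner(xs, i, j):
    """Integral of lambda_i * phi_j, exact by 2-point Gauss per element."""
    total = 0.0
    for k in range(len(xs) - 1):
        a, b = xs[k], xs[k + 1]
        m, r = 0.5 * (a + b), 0.5 * (b - a) / math.sqrt(3.0)
        for p in (m - r, m + r):  # both quadrature points lie in the interior
            total += 0.5 * (b - a) * dual(p, xs, i) * hat(p, xs, j)
    return total
```

On the grid $\{0,0.4,0.7,1\}$, for instance, all off-diagonal inner products vanish and the scaling factors $c_i=\int_\Omega\phi_i\lambda_i\,dx$ come out positive, as required by (11).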
With the oblique projection $Q_h$ and these biorthogonal test functions, the mass matrix is diagonal. Denoting the diagonal mass matrix by $D$, the system becomes $D\vec g=\vec f$. The values $g_i$ are our estimates of the gradient of $u$ at the points $x_i$. Thus we estimate the gradient by computing $\vec g=D^{-1}\vec f$, where
(15) $g_i=\dfrac{\int_{x_0}^{x_n}(du_h/dx)\,\lambda_i\,dx}{\int_{x_0}^{x_n}\phi_i\lambda_i\,dx}.$
We want to calculate the error in this approximation and determine when $g_i$ approximates $u'(x_i)$ exactly for each $x_i\in\Delta$. As in [14, 15], we examine in particular whether $g_i$ reproduces $u'(x_i)$ exactly when $u$ is a quadratic polynomial.
Figure 1: The hat basis function (a) and the biorthogonal basis function (b) with stepsize $h=1$.
2. Superconvergence

Theorem 1.
Let u∈C0(Ω). Then one has
(16) $g_i=\dfrac{u(x_{i+1})-u(x_{i-1})}{x_{i+1}-x_{i-1}},\quad i\in B_n,\qquad g_0=\dfrac{u(x_1)-u(x_0)}{x_1-x_0},\qquad g_n=\dfrac{u(x_n)-u(x_{n-1})}{x_n-x_{n-1}}.$
Proof.
We note that
(17) $\lambda_i(x)=\begin{cases}\dfrac{2(x-x_{i-1})+(x-x_i)}{x_i-x_{i-1}},& x_{i-1}\le x\le x_i,\\ \dfrac{2(x-x_{i+1})+(x-x_i)}{x_i-x_{i+1}},& x_i\le x\le x_{i+1},\\ 0,&\text{otherwise},\end{cases}\qquad \forall i\in B_n.$
Now, we calculate gi for i∈Bn:
(18) $g_i=\dfrac{\int_{x_0}^{x_n}(du_h/dx)\,\lambda_i\,dx}{\int_{x_0}^{x_n}\phi_i\lambda_i\,dx},$
where
(19) $\int_{x_0}^{x_n}\phi_i\lambda_i\,dx=\int_{x_{i-1}}^{x_i}\frac{x-x_{i-1}}{x_i-x_{i-1}}\cdot\frac{2(x-x_{i-1})+(x-x_i)}{x_i-x_{i-1}}\,dx+\int_{x_i}^{x_{i+1}}\frac{x-x_{i+1}}{x_i-x_{i+1}}\cdot\frac{2(x-x_{i+1})+(x-x_i)}{x_i-x_{i+1}}\,dx=-\frac{1}{2}\left(x_{i-1}-x_{i+1}\right)$
and
$\int_{x_0}^{x_n}\frac{du_h}{dx}\,\lambda_i\,dx=\sum_{j=0}^{n}u(x_j)\int_{x_0}^{x_n}\frac{d\phi_j}{dx}\,\lambda_i\,dx=u(x_{i-1})\int_{x_{i-1}}^{x_i}\frac{d\phi_{i-1}}{dx}\,\lambda_i\,dx+u(x_i)\left(\int_{x_{i-1}}^{x_i}\frac{d\phi_i}{dx}\,\lambda_i\,dx+\int_{x_i}^{x_{i+1}}\frac{d\phi_i}{dx}\,\lambda_i\,dx\right)+u(x_{i+1})\int_{x_i}^{x_{i+1}}\frac{d\phi_{i+1}}{dx}\,\lambda_i\,dx=\frac{u(x_{i+1})-u(x_{i-1})}{2},$
since $\phi_j$ and $\lambda_i$ overlap only when $j\in\{i-1,i,i+1\}$.
Therefore,
(20) $g_i=\dfrac{u(x_{i+1})-u(x_{i-1})}{x_{i+1}-x_{i-1}}.$
Now we look at the endpoints. We note that
(21) $g_0=\dfrac{\int_{x_0}^{x_n}(du_h/dx)\,\lambda_0\,dx}{\int_{x_0}^{x_n}\phi_0\lambda_0\,dx},\qquad g_n=\dfrac{\int_{x_0}^{x_n}(du_h/dx)\,\lambda_n\,dx}{\int_{x_0}^{x_n}\phi_n\lambda_n\,dx}.$
Computing as before we get
(22) $g_0=\dfrac{u(x_1)-u(x_0)}{x_1-x_0},\qquad g_n=\dfrac{u(x_n)-u(x_{n-1})}{x_n-x_{n-1}}.$
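Theorem 1 makes the recovery operator trivial to implement: no linear system needs to be solved. A minimal sketch (the function name is ours):

```python
def recover_gradient(x, u):
    """Recovered gradient of Theorem 1: centered difference quotients at
    interior nodes, one-sided difference quotients at the two endpoints."""
    n = len(x) - 1
    g = [0.0] * (n + 1)
    g[0] = (u[1] - u[0]) / (x[1] - x[0])
    g[n] = (u[n] - u[n - 1]) / (x[n] - x[n - 1])
    for i in range(1, n):
        g[i] = (u[i + 1] - u[i - 1]) / (x[i + 1] - x[i - 1])
    return g
```

For a quadratic $u=ax^2+bx+c$, a short calculation gives $g_i=a(x_{i-1}+x_{i+1})+b$ at interior nodes and $g_0=a(x_0+x_1)+b$, $g_n=a(x_{n-1}+x_n)+b$.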
We have the following superconvergence result in the $L^2$-norm; it is proved as in [4, 16].
Theorem 2.
Let $h_i=x_{i+1}-x_i$ for $0\le i\le n-1$, $n>2$, $h=\max_{0\le i\le n-1}h_i$, and $\Omega_0=(x_1,x_{n-1})$. If the point distribution satisfies $|h_{i+1}-h_i|=O(h^2)$ for $0\le i\le n-2$, then one has the estimate
(23) $\left\|\frac{du}{dx}-Q_h\frac{du_h}{dx}\right\|_{L^2(\Omega_0)}\le C h^2\|u\|_{W^{3,\infty}(\Omega_0)},\qquad u\in W^{3,\infty}(\Omega_0),$
where, for $\alpha\in\{0,1,2,3\}$,
(24) $W^{3,\infty}(\Omega)=\left\{u\in L^\infty(\Omega): \frac{d^\alpha u}{dx^\alpha}\in L^\infty(\Omega)\right\},\qquad \|u\|_{W^{3,\infty}(\Omega)}=\max_{\alpha}\left\|\frac{d^\alpha u}{dx^\alpha}\right\|_{L^\infty(\Omega)}.$
If one measures the error over the whole domain $\Omega=(x_0,x_n)$, one has
(25) $\left\|\frac{du}{dx}-Q_h\frac{du_h}{dx}\right\|_{L^2(\Omega)}\le C h^{3/2}\|u\|_{W^{3,\infty}(\Omega)},\qquad u\in W^{3,\infty}(\Omega).$
Proof.
Since the gradient of a quadratic function is exactly reproduced in the interior of the domain, the first estimate (23) follows from the Bramble-Hilbert lemma. The second estimate is proved as in [4, 16].
This theorem extends easily to tensor product meshes in two or three dimensions satisfying the above mesh condition. We discuss the simple extension to the two-dimensional case in Section 3.
2.1. Application to Quadratic Functions

Corollary 3.
Let $u$ be a quadratic polynomial on $\Omega$. Then $g_i$ reproduces $u'(\tilde x_i)$ exactly for all $x_i\in\Delta$, where
(26) $\tilde x_i=\begin{cases}\dfrac{x_0+x_1}{2},& i=0,\\[1ex] \dfrac{x_{i-1}+x_{i+1}}{2},& 1\le i\le n-1,\\[1ex] \dfrac{x_{n-1}+x_n}{2},& i=n,\end{cases}\qquad g_i=\dfrac{\int_{x_0}^{x_n}(du_h/dx)\,\lambda_i\,dx}{\int_{x_0}^{x_n}\phi_i\lambda_i\,dx}.$
Proof.
Write $u(x)=ax^2+bx+c$. We use the result of Theorem 1 to get, for $i\in B_n$,
(27) $g_i=\dfrac{ax_{i-1}^2+bx_{i-1}-ax_{i+1}^2-bx_{i+1}}{x_{i-1}-x_{i+1}}=\dfrac{a(x_{i-1}^2-x_{i+1}^2)+b(x_{i-1}-x_{i+1})}{x_{i-1}-x_{i+1}}=a(x_{i-1}+x_{i+1})+b.$
On the other hand,
(28) $u'(\tilde x_i)=2a\,\dfrac{x_{i-1}+x_{i+1}}{2}+b=a(x_{i-1}+x_{i+1})+b=g_i.$
So $g_i$ reproduces $u'(\tilde x_i)$ exactly for $i\in B_n$. Now, for $i=0$ and $i=n$, we have
(29) $g_0=\dfrac{ax_1^2+bx_1-ax_0^2-bx_0}{x_1-x_0}=a(x_0+x_1)+b,\qquad g_n=\dfrac{ax_{n-1}^2+bx_{n-1}-ax_n^2-bx_n}{x_{n-1}-x_n}=a(x_{n-1}+x_n)+b.$
Since
(30) $u'(\tilde x_0)=a(x_0+x_1)+b,\qquad u'(\tilde x_n)=a(x_{n-1}+x_n)+b,$
$g_0$ and $g_n$ reproduce $u'(\tilde x_0)$ and $u'(\tilde x_n)$, respectively, exactly.
Remark 4 (uniform grid).
Let $\Delta:\ a_1=x_0<x_1<\cdots<x_n=a_2$ be a uniform grid on the interval $\Omega$, so that $x_i-x_{i-1}=h$ for all $i\in A_n\setminus\{0\}$, where the constant $h$ is called the stepsize. We note that if the grid is uniform, then $x_i=\tilde x_i$ for all $x_i\in\operatorname{int}\Delta$. So our gradient recovery operator reproduces the exact gradient of any quadratic function in the interior of a uniform grid. However, we cannot recover the gradients at the endpoints exactly, since $x_0\ne\tilde x_0$ and $x_n\ne\tilde x_n$.
Corollary 5.
Let $u(x)=ax^2+bx+c$ on $\Omega$, and let the grid be uniform with stepsize $h$. Then $|g_i-u'(x_i)|=|a|h$ for $i=0,n$ (i.e., for the endpoints of the grid).
Proof.
We start with the case $i=0$ (the left endpoint). We know from Corollary 3 that $g_0=a(x_0+x_1)+b$. Since the grid is uniform with stepsize $h$, this simplifies to $g_0=2ax_0+ah+b$. We also have $u'(x_0)=2ax_0+b$. Thus
(31) $g_0-u'(x_0)=ah.$
The case $i=n$ (the right endpoint) is proved similarly and gives $g_n-u'(x_n)=-ah$.
For a nonuniform grid (Figure 2), we cannot simplify our approximations using a single stepsize $h$, since the spacing between adjacent nodes is not constant. We made no assumption about the uniformity of the grid in Corollary 3. Thus $|g_i-u'(x_i)|$, $i\in A_n$, is in general not zero for a nonuniform grid. This is quantified in the following corollary.
Figure 2: A nonuniform grid with 8 nodes (vertical lines). The points $\tilde x_i$ (dots) are also shown.
Corollary 6.
Let $u(x)=ax^2+bx+c$. Then
(32) $g_i-u'(x_i)=a(x_{i-1}+x_{i+1}-2x_i)\quad \forall x_i\in\operatorname{int}\Delta,\qquad g_0-u'(x_0)=a(x_1-x_0),\qquad g_n-u'(x_n)=a(x_{n-1}-x_n).$
Remark 7.
For $0\le i\le n-1$ let $h_i=x_{i+1}-x_i$. Then, for $i\in B_n$, we have
(33) $g_i-u'(x_i)=a(x_{i-1}+x_{i+1}-2x_i)=a(h_i-h_{i-1}).$
We still get superapproximation of the gradient recovery when $h_i-h_{i-1}=O(h^2)$ for $i\in B_n$.
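The identity $g_i-u'(x_i)=a(x_{i-1}+x_{i+1}-2x_i)$ from Corollary 6 is easy to confirm numerically on a nonuniform grid; the helper below is our own illustration:

```python
def interior_recovery_errors(x, a, b, c):
    """Errors g_i - u'(x_i) at interior nodes for u = a x^2 + b x + c,
    with g_i the recovered gradient (difference quotient) of Theorem 1."""
    u = [a * t * t + b * t + c for t in x]
    errs = []
    for i in range(1, len(x) - 1):
        g = (u[i + 1] - u[i - 1]) / (x[i + 1] - x[i - 1])
        errs.append(g - (2.0 * a * x[i] + b))
    return errs
```

Each error equals $a(x_{i-1}+x_{i+1}-2x_i)$, the difference of the two adjacent element lengths scaled by $a$, so it is $O(h^2)$ exactly when neighbouring elements differ in length by $O(h^2)$.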
2.2. Application to Cubic Functions

Corollary 8.
Let $u(x)=ax^3+bx^2+cx+d$ on $\Omega$. Then
(34) $g_i-u'(\tilde x_i)=\dfrac{a}{4}\left(x_{i-1}-x_{i+1}\right)^2$
for all $i\in B_n$, and
(35) $g_0-u'(\tilde x_0)=\dfrac{a}{4}\left(x_0-x_1\right)^2,\qquad g_n-u'(\tilde x_n)=\dfrac{a}{4}\left(x_{n-1}-x_n\right)^2,$
where $\tilde x_i$ is defined as in Corollary 3. Similarly, for all $i\in B_n$ one has
(36) $g_i-u'(x_i)=a\left(x_{i-1}^2+x_{i-1}x_{i+1}+x_{i+1}^2-3x_i^2\right)+b\left(x_{i-1}+x_{i+1}-2x_i\right),$
$g_0-u'(x_0)=a\left(x_1^2+x_0x_1-2x_0^2\right)+b\left(x_1-x_0\right),\qquad g_n-u'(x_n)=a\left(x_{n-1}^2+x_{n-1}x_n-2x_n^2\right)+b\left(x_{n-1}-x_n\right).$
Proof.
The proof is similar to that of Corollary 3.
3. Extension to the Two-Dimensional Case
Let $\mathcal{T}_h$ be a tensor product mesh of the two-dimensional rectangular domain $\Omega$ with mesh-size $h$; the elements of $\mathcal{T}_h$ are rectangles. Here we have
(37) $\bar\Omega=\bigcup_{T\in\mathcal{T}_h}\bar T.$
Let $\mathcal{T}_h^0$ be the collection of all rectangles of $\mathcal{T}_h$ not touching the boundary of $\Omega$, and
(38) $\bar\Omega_0=\bigcup_{T\in\mathcal{T}_h^0}\bar T.$
Let $P(T)$ be the space of bilinear polynomials on $T$. Then the standard tensor product finite element space is defined as
(39) $V_h=\{u_h\in C^0(\Omega) : u_h|_T\in P(T),\ T\in\mathcal{T}_h\}.$
We now use the property of the gradient recovery operator Qh:L2(Ω)→Vh defined above by means of a biorthogonal system as in Theorem 2.
Let $u_h$ be the Lagrange interpolant of $u\in W^{3,\infty}(\Omega)$. We consider a patch as shown in Figure 3, consisting of four elements $\{T_i\}_{i=1}^4$, where the values of $u$ are denoted by $\{u_{ij}\}_{i,j=1}^3$. Let $l_1$ and $l_2$ be the lengths of elements $T_1$ and $T_2$, and $h_1$ and $h_2$ the heights of $T_1$ and $T_4$, respectively, as shown in Figure 3.
Figure 3: A patch corresponding to the node $x_p$.
Then for the rectangular mesh shown in Figure 3 we have, for the inner vertex $x_p$,
(40) $(Q_h\partial_x u_h)(x_p)=\dfrac{u_{32}-u_{12}}{l_1+l_2},\qquad (Q_h\partial_y u_h)(x_p)=\dfrac{u_{23}-u_{21}}{h_1+h_2}.$
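At an inner vertex the two-dimensional recovery therefore also reduces to difference quotients over the patch. A small sketch (our own; the association of $l_1,l_2$ with the left/right element widths and of $h_1,h_2$ with the lower/upper element heights is an assumption based on Figure 3):

```python
def patch_recovered_gradient(u, xp, yp, l1, l2, h1, h2):
    """Recovered gradient at an inner vertex (xp, yp) of a tensor product
    mesh, cf. (40): difference quotients of nodal values across the patch.
    u is a callable u(x, y); l1/l2 are the element widths to the left/right
    of the vertex, h1/h2 the element heights below/above it (labeling
    assumed, not taken from the paper)."""
    gx = (u(xp + l2, yp) - u(xp - l1, yp)) / (l1 + l2)
    gy = (u(xp, yp + h2) - u(xp, yp - h1)) / (h1 + h2)
    return gx, gy
```

On a locally symmetric patch ($l_1=l_2$, $h_1=h_2$) these quotients are exact for quadratic $u$, which is the source of the interior superconvergence.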
Thus, using Taylor's expansion as in [4, 16], we obtain
(41) $\|\partial_x u-Q_h\partial_x u_h\|_{L^2(\Omega_0)}\le Ch^2\|u\|_{W^{3,\infty}(\Omega_0)},\qquad \|\partial_y u-Q_h\partial_y u_h\|_{L^2(\Omega_0)}\le Ch^2\|u\|_{W^{3,\infty}(\Omega_0)},$
if the tensor product mesh satisfies
(42) $h_1-h_2=O(h^2),\qquad l_1-l_2=O(h^2).$
Hence, if the mesh satisfies the above condition, we have
(43) $\|\nabla u-Q_h\nabla u_h\|_{L^2(\Omega_0)}\le Ch^2\|u\|_{W^{3,\infty}(\Omega_0)}.$
If we compute the $L^2$-norm of the error over $\Omega$, we have the estimate
(44) $\|\nabla u-Q_h\nabla u_h\|_{L^2(\Omega)}\le Ch^{3/2}\|u\|_{W^{3,\infty}(\Omega)}.$
4. Numerical Examples
We consider two examples. In each example, we compute the $L^2$-norm of the error between the exact gradient $\nabla u$ and the recovered gradient $Q_h\nabla u_h$, where $u_h$ is the Lagrange interpolant of $u$ with respect to the underlying mesh $\mathcal{T}_h$. The recovery operator $Q_h$ is based on the biorthogonal system in the two-dimensional case, which is constructed by the tensor product of the one-dimensional construction. We compute the errors on the whole domain $\Omega=[0,1]\times[0,1]$ and on $\Omega_0\subset\Omega$, where $\Omega_0$ consists of the elements not touching the boundary in the coarsest mesh. We have also verified that the recovered gradient is exact, up to an absolute error of about $10^{-14}$, except on the boundary, when the exact solution is quadratic.
Example 1.
For Example 1 we choose the exact solution $u$ as
(45) $u(x,y)=x^2y^2\sin(xy)\,(x-1)(y-1).$
We compute the $L^2$-norm of the difference between the exact gradient and the recovered gradient on $\Omega$ and $\Omega_0$. The numerical results are tabulated in Table 1; note that $\Omega_0$ is fixed from the beginning. From Table 1 we can see the superconvergence of the recovered gradient: as predicted by the theory, the $L^2$-errors on $\Omega$ converge with order $3/2$, whereas the $L^2$-errors on $\Omega_0$ converge quadratically.
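The rate columns in Table 1 can be reproduced from consecutive error values, since the mesh size halves from one refinement level to the next; a small sketch (our own):

```python
import math

def observed_rates(errors):
    """Observed convergence rates between consecutive refinement levels,
    assuming the mesh size halves each level: rate = log2(e_l / e_{l+1})."""
    return [math.log2(errors[k] / errors[k + 1]) for k in range(len(errors) - 1)]
```

Applied to the $L^2(\Omega)$ errors of Table 1 this returns rates approaching $3/2$; applied to the $L^2(\Omega_0)$ errors it returns rates approaching $2$.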
Table 1: $L^2$-errors for the exact gradient and the recovered gradient, Example 1.

Level l | # elem. | $\|\nabla u-Q_h\nabla u_h\|_{L^2(\Omega)}$ | rate | $\|\nabla u-Q_h\nabla u_h\|_{L^2(\Omega_0)}$ | rate
1       |     256 | 2.04366e-03 |  -   | 5.43131e-04 |  -
2       |    1024 | 7.32136e-04 | 1.48 | 1.36444e-04 | 1.99
3       |    4096 | 2.60025e-04 | 1.49 | 3.41525e-05 | 2.00
4       |   16384 | 9.20937e-05 | 1.50 | 8.54072e-06 | 2.00
5       |   65536 | 3.25843e-05 | 1.50 | 2.13534e-06 | 2.00
Example 2.
For Example 2 we choose the exact solution $u$ as
(46) $u(x,y)=e^x\left(x^2+y^2\right)+y^2\cos(xy)+x^2\sin(xy).$
The errors in the $L^2$-norm are tabulated in Table 2, where we observe the same superconvergence as in Example 1. The numerical results support the theoretical prediction in both examples.
Table 2: $L^2$-errors for the exact gradient and the recovered gradient, Example 2.

Level l | # elem. | $\|\nabla u-Q_h\nabla u_h\|_{L^2(\Omega)}$ | rate | $\|\nabla u-Q_h\nabla u_h\|_{L^2(\Omega_0)}$ | rate
1       |     256 | 2.04366e-03 |  -   | 5.43131e-04 |  -
2       |    1024 | 7.32136e-04 | 1.48 | 1.36444e-04 | 1.99
3       |    4096 | 2.60025e-04 | 1.49 | 3.41525e-05 | 2.00
4       |   16384 | 9.20937e-05 | 1.50 | 8.54072e-06 | 2.00
5       |   65536 | 3.25843e-05 | 1.50 | 2.13534e-06 | 2.00
5. Conclusion
We have presented an analysis of the approximation properties of the gradient reconstructed using an oblique projection. The reconstruction of the gradient is numerically efficient due to the use of a biorthogonal system. Numerical results demonstrate the optimality of the approach. An interesting direction for future work is the extension to higher order finite elements.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors are grateful to the anonymous referees for their valuable suggestions to improve the quality of the earlier version of this work.
References

[1] O. C. Zienkiewicz and J. Z. Zhu, The superconvergent patch recovery and a posteriori error estimates. I: The recovery technique.
[2] O. C. Zienkiewicz and J. Z. Zhu, The superconvergent patch recovery and a posteriori error estimates. II: Error estimates and adaptivity.
[3] M. Ainsworth and J. T. Oden.
[4] J. Xu and Z. Zhang, Analysis of recovery type a posteriori error estimators for mildly structured grids.
[5] L. Chen, Superconvergence of tetrahedral linear finite elements.
[6] J. Chen and D. Wang, Three-dimensional finite element superconvergent gradient recovery on Par6 patterns.
[7] R. E. Bank and J. Xu, Asymptotically exact a posteriori error estimators. I: Grids with superconvergence.
[8] A. Naga and Z. Zhang, A posteriori error estimates based on the polynomial preserving recovery.
[9] B. P. Lamichhane, A gradient recovery operator based on an oblique projection.
[10] G. Goodsell, Pointwise superconvergence of the gradient for the linear tetrahedral element.
[11] Y. Huang and J. Xu, Superconvergence of quadratic finite elements on mildly structured grids.
[12] B. I. Wohlmuth.
[13] B. P. Lamichhane, A stabilized mixed finite element method for the biharmonic equation based on biorthogonal systems.
[14] Z. Zhang, Ultraconvergence of the patch recovery technique.
[15] Z. Zhang, Ultraconvergence of the patch recovery technique. II.
[16] B. Li and Z. Zhang, Analysis of a class of superconvergence patch recovery techniques for linear and bilinear finite elements.