We determine the weak asymptotic orders of the average errors of the Grünwald interpolation sequences based on the Tchebycheff nodes in the Wiener space. These results show that for the $L_q$-norm ($2\le q\le 4$) approximation, the $p$-average ($1\le p\le 4$) error of some Grünwald interpolation sequences is weakly equivalent to the $p$-average error of the best polynomial approximation sequence.

1. Introduction and Main Results

Let $F$ be a real separable Banach space equipped with a probability measure $\mu$ on the Borel sets of $F$. Let $H$ be another normed space such that $F$ is continuously embedded in $H$. By $\|\cdot\|$ we denote the norm in $H$. Any mapping $A\colon F\to H$ such that $f\mapsto\|f-A(f)\|$ is measurable is called an approximation operator (or simply an approximation). The $p$-average error of $A$ is defined as
$$e_p(A,H)=\left(\int_F\|f-A(f)\|^p\,\mu(df)\right)^{1/p}.\tag{1.1}$$

Since in practice the underlying function is usually known only through its (exact or noisy) values at finitely many points, the approximation operator $A(f)$ is often assumed to depend on finitely many function values of $f$ only. Many papers, such as [1–4], have studied the complexity of computing an $\varepsilon$-approximation in the average case setting. Since polynomial interpolation operators are an important approximation tool in spaces of continuous functions, and they also depend on finitely many function values of $f$ only, we want to determine the average errors of some polynomial interpolation operators with respect to the Wiener measure. We now describe the contents in detail.

Let $X$ be the space of continuous functions $f$ defined on $[0,1]$ with $f(0)=0$, equipped with the sup norm. The Wiener measure $\omega$ is uniquely determined by the following property:
$$\omega\bigl(f\in X:(f(t_1),\dots,f(t_n))\in B\bigr)=\prod_{j=1}^{n}\frac{1}{\sqrt{2\pi(t_j-t_{j-1})}}\int_B\exp\Bigl(-\sum_{j=1}^{n}\frac{(u_j-u_{j-1})^2}{2(t_j-t_{j-1})}\Bigr)du_1\cdots du_n,\tag{1.2}$$
for every $n\ge 1$ and $B\in\mathcal{B}(\mathbb{R}^n)$, where $\mathcal{B}(\mathbb{R}^n)$ denotes the class of all Borel subsets of $\mathbb{R}^n$, $0=t_0<t_1<\dots<t_n\le 1$, and $u_0=0$. Its mean is zero, and its correlation operator is given by $L_{x_1}(C_\omega L_{x_2})=\min\{x_1,x_2\}$ for $L_{x_i}(f)=f(x_i)$, that is,
$$\int_X f(x_1)f(x_2)\,\omega(df)=\min\{x_1,x_2\},\quad\forall x_1,x_2\in[0,1].\tag{1.3}$$
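The covariance identity above is easy to probe numerically. The following sketch (our own illustration, not part of the paper) simulates Brownian paths at two fixed times via independent Gaussian increments and compares the empirical covariance with $\min\{x_1,x_2\}$:

```python
import numpy as np

# Monte Carlo check of E[f(x1) f(x2)] = min{x1, x2} under the Wiener measure.
# A Brownian path with f(0) = 0 is sampled at x1 < x2 using independent
# Gaussian increments: f(x1) ~ N(0, x1), f(x2) = f(x1) + N(0, x2 - x1).
rng = np.random.default_rng(0)
x1, x2, m = 0.3, 0.7, 400_000
f_x1 = np.sqrt(x1) * rng.standard_normal(m)
f_x2 = f_x1 + np.sqrt(x2 - x1) * rng.standard_normal(m)
emp_cov = np.mean(f_x1 * f_x2)
print(emp_cov)  # close to min(x1, x2) = 0.3
```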

In this paper, we take $F=\{f\in C[-1,1]:g(t)=f(2t-1)\in X\}$, and for every measurable subset $A\subset F$ we define
$$\mu(A)=\omega\bigl(\{g:g(t)=f(2t-1),\,f\in A\}\bigr).\tag{1.4}$$

For $1\le p<\infty$, denote by $L_p$ the normed linear space of $L_p$-integrable functions $f$ on $[-1,1]$ with the finite norm
$$\|f\|_p=\left(\int_{-1}^{1}|f(x)|^p\,dx\right)^{1/p}.\tag{1.5}$$

Let
$$t_k=t_k^{(n)}=\cos\frac{2k-1}{2n}\pi,\quad k=1,\dots,n,\tag{1.6}$$
be the zeros of $T_n(x)=\cos n\theta$ ($x=\cos\theta$), the $n$th degree Tchebycheff polynomial of the first kind. The well-known Grünwald interpolation polynomial of $f$ based on $\{t_k\}_{k=1}^{n}$ is given by (see [5])
$$G_n(f,x)=\sum_{k=1}^{n}f(t_k)l_k^2(x),\tag{1.7}$$
where
$$l_k(x)=(-1)^{k+1}\frac{\sqrt{1-t_k^2}\,T_n(x)}{n(x-t_k)},\quad k=1,\dots,n.\tag{1.8}$$
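For concreteness, here is a small sketch (the helper name `grunwald` is ours, and $f$ is assumed vectorized) of the Grünwald operator $G_n$. Since $l_k(t_j)=\delta_{kj}$, the squared fundamental polynomials reproduce $f$ at every node:

```python
import numpy as np

# Sketch of G_n(f, x) = sum_k f(t_k) l_k(x)^2 at the first-kind Tchebycheff
# nodes t_k = cos((2k-1)pi/(2n)).  The sign (-1)^(k+1) of l_k is irrelevant
# after squaring, and l_k(t_k) = 1 in the limit x -> t_k.
def grunwald(f, n, x):
    x = np.asarray(x, dtype=float)
    k = np.arange(1, n + 1)
    t = np.cos((2 * k - 1) * np.pi / (2 * n))
    Tn = np.cos(n * np.arccos(x))
    d = x[..., None] - t
    lk = np.where(np.abs(d) < 1e-12, 1.0,
                  np.sqrt(1 - t ** 2) * Tn[..., None] / (n * d))
    return (f(t) * lk ** 2).sum(axis=-1)

n = 8
t = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
f = lambda x: np.sin(2.5 * x) + x ** 2
err = np.max(np.abs(grunwald(f, n, t) - f(t)))
print(err)  # tiny: G_n reproduces f at every node
```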

Theorem 1.1.

Let $G_n(f,x)$ be defined as above. Then
$$e_p(G_n,L_p)\asymp\begin{cases}\dfrac{1}{\sqrt{n}}, & 1\le p\le 4,\\[4pt] \dfrac{1}{n^{2/p}}, & p\ge 4,\end{cases}\tag{1.9}$$
where, here and in the following, $A(n)\asymp B(n)$ means that there exists a constant $C>0$, independent of $n$, such that $A(n)/C\le B(n)\le CA(n)$; the constant $C$ may be different even within the same expression.
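For $p=2$ the mean-square error can be computed without sampling: under $\mu$ the process has covariance kernel $K(u,v)=(1+\min\{u,v\})/2$, so the pointwise variance of $f(x)-G_n(f,x)$ has a closed form and $e_2(G_n,L_2)$ reduces to a quadrature. The sketch below (our own verification code, not from the paper) exhibits the predicted $n^{-1/2}$ decay:

```python
import numpy as np

def e2_first_kind(n, M=4000):
    """Exact e_2(G_n, L_2) via the Gaussian kernel K(u,v) = (1+min(u,v))/2."""
    theta = (np.arange(M) + 0.5) * np.pi / M      # midpoint rule in theta
    x = np.cos(theta)
    tk = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
    d = x[:, None] - tk
    lk = np.where(np.abs(d) < 1e-9, 1.0,
                  np.sqrt(1 - tk ** 2) * np.cos(n * theta)[:, None] / (n * d))
    L = lk ** 2
    K = lambda u, v: (1 + np.minimum(u, v)) / 2
    # pointwise variance of f(x) - G_n(f, x), then integrate over [-1, 1]
    var = K(x, x) - 2 * (L * K(x[:, None], tk[None, :])).sum(1) \
        + ((L @ K(tk[:, None], tk[None, :])) * L).sum(1)
    return np.sqrt((var * np.sin(theta)).sum() * np.pi / M)

errs = {n: e2_first_kind(n) for n in (8, 32, 128)}
print(errs)  # error shrinks by about 2 when n grows by 4, i.e. like n**-0.5
```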

By the Hölder inequality, combining Theorem 1.1 with [2], we know that for $1\le p,q\le 4$,
$$e_p(G_n,L_q)\asymp\frac{1}{\sqrt{n}}.\tag{1.10}$$

Remark 1.2.

Denote by $\mathcal{P}_n$ the set of algebraic polynomials of degree at most $n$. For $f\in F$, let $T_nf$ denote the best $L_q$-approximation polynomial to $f$ from $\mathcal{P}_n$. The $p$-average error of the best $L_q$-approximation of continuous functions by polynomials from $\mathcal{P}_n$ over the Wiener space is then given by
$$e_p(T_n,L_q)=\left(\int_F\|f-T_nf\|_q^p\,\mu(df)\right)^{1/p}.\tag{1.11}$$
By Theorem 1.1 and [6], for $2\le q\le 4$ and $1\le p\le 4$ we have
$$e_p(G_n,L_q)\asymp e_p(T_n,L_q)\asymp\frac{1}{\sqrt{n}}.\tag{1.12}$$

Remark 1.3.

Let us recall some fundamental notions of information-based complexity in the average case setting. Let $F$ be a set with a probability measure $\mu$, and let $G$ be a normed linear space with norm $\|\cdot\|$. Let $S$ be a measurable mapping from $F$ into $G$, called a solution operator. Let $N$ be a measurable mapping from $F$ into $\mathbb{R}^n$ and $\phi$ a measurable mapping from $\mathbb{R}^n$ into $G$, called an information operator and an algorithm, respectively. For $1\le p<+\infty$, the $p$-average error of the approximation $\phi\circ N$ with respect to the measure $\mu$ is defined by
$$e_p(S,N,\phi,\|\cdot\|):=\left(\int_F\|Sf-\phi(N(f))\|^p\,\mu(df)\right)^{1/p},\tag{1.13}$$
and the $p$-average radius of the information $N$ with respect to $\mu$ is defined by
$$r_p(S,N,\|\cdot\|):=\inf_{\phi}e_p(S,N,\phi,\|\cdot\|),\tag{1.14}$$
where $\phi$ ranges over the set of all algorithms. Furthermore, let $\Lambda$ denote a class of permissible information functionals, and denote by $\mathcal{N}_n^\Lambda$ the set of nonadaptive information operators $N$ of cardinality $n$ built from $\Lambda$, that is,
$$N(f)=(L_1(f),L_2(f),\dots,L_n(f)),\quad L_i\in\Lambda,\ i=1,\dots,n.\tag{1.15}$$
Let
$$r_p(n,S,\Lambda,\|\cdot\|)=\inf_{N\in\mathcal{N}_n^\Lambda}r_p(S,N,\|\cdot\|)\tag{1.16}$$
denote the $n$th minimal $p$-average radius of nonadaptive information in the class $\Lambda$.

For example, if $F$ and $\mu$ are defined as above, $S$ is the identity mapping $I$, and $\Lambda$ consists of function evaluations at fixed points, then by [2] we know that for $L_q$-norm approximation with $1\le p,q<\infty$,
$$r_p(n,I,\Lambda,L_q)\asymp\frac{1}{\sqrt{n}}.\tag{1.17}$$

It is easy to see that $G_n(f,x)$ can be viewed as the composition of a nonadaptive information operator from $\mathcal{N}_n^\Lambda$ and a linear algorithm, and that for $1\le p,q\le 4$,
$$e_p(G_n,L_q)\asymp r_p(n,I,\Lambda,L_q).\tag{1.18}$$

For comparison with the result of Theorem 1.1, we consider the following Grünwald interpolation. Let
$$x_k=x_k^{(n)}=\cos\frac{k\pi}{n+1},\quad k=1,\dots,n,\tag{1.19}$$
be the zeros of $u_n(x)=\sin(n+1)\theta/\sin\theta$ ($x=\cos\theta$), the $n$th Tchebycheff polynomial of the second kind. The Grünwald interpolation polynomial of $f$ based on $\{x_k\}_{k=1}^{n}$ is given by
$$\bar G_n(f,x)=\sum_{k=1}^{n}f(x_k)\bar l_k^2(x),\tag{1.20}$$
where
$$\bar l_k(x)=\frac{u_n(x)}{u_n'(x_k)(x-x_k)}=(-1)^{k+1}\frac{(1-x_k^2)\,u_n(x)}{(n+1)(x-x_k)},\quad k=1,\dots,n.\tag{1.21}$$

Theorem 1.4.

Let $\bar G_n(f,x)$ be defined as above. Then
$$e_2(\bar G_n,L_2)\asymp 1.\tag{1.22}$$
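The contrast with Theorem 1.1 can be seen numerically with the same exact-kernel computation as before (a verification sketch of ours, not from the paper): the mean-square error of $\bar G_n$ stays of order one instead of decaying.

```python
import numpy as np

def e2_second_kind(n, M=4000):
    """Exact e_2(Gbar_n, L_2) under mu, with kernel K(u,v) = (1+min(u,v))/2."""
    theta = (np.arange(M) + 0.5) * np.pi / M      # midpoint rule in theta
    x = np.cos(theta)
    xk = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
    un = np.sin((n + 1) * theta) / np.sin(theta)
    d = x[:, None] - xk
    lk = np.where(np.abs(d) < 1e-9, 1.0,
                  (1 - xk ** 2) * un[:, None] / ((n + 1) * d))
    L = lk ** 2
    K = lambda u, v: (1 + np.minimum(u, v)) / 2
    var = K(x, x) - 2 * (L * K(x[:, None], xk[None, :])).sum(1) \
        + ((L @ K(xk[:, None], xk[None, :])) * L).sum(1)
    return np.sqrt((var * np.sin(theta)).sum() * np.pi / M)

errs = {n: e2_second_kind(n) for n in (8, 32, 128)}
print(errs)  # stays of order 1: no convergence in the mean
```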

2. The Proof of Theorem 1.1

We consider the upper estimate first. From [7, page 107, (28)] we obtain
$$e_p^p(G_n,\|\cdot\|_p)=v_p^p\int_{-1}^{1}\left(\int_F|f(x)-G_n(f,x)|^2\,\mu(df)\right)^{p/2}dx,\tag{2.1}$$
where $v_p^p$ is the $p$th absolute moment of the standard normal distribution. It is easy to verify that
$$f(x)-G_n(f,x)=\Bigl(1-\sum_{k=1}^{n}l_k^2(x)\Bigr)f(x)+\sum_{k=1}^{n}\bigl(f(x)-f(t_k)\bigr)l_k^2(x).\tag{2.2}$$
From (2.2) and the Hölder inequality we can obtain
$$\begin{aligned}\int_F|f(x)-G_n(f,x)|^2\,\mu(df)&\le 2\Bigl(1-\sum_{k=1}^{n}l_k^2(x)\Bigr)^2\int_F f^2(x)\,\mu(df)+2\int_F\Bigl(\sum_{k=1}^{n}\bigl(f(x)-f(t_k)\bigr)l_k^2(x)\Bigr)^2\mu(df)\\&=:2I_1(x)+2I_2(x).\end{aligned}\tag{2.3}$$
By (1.3) we obtain
$$\int_F f^2(x)\,\mu(df)=\int_X g^2\Bigl(\frac{1+x}{2}\Bigr)\omega(dg)=\frac{1+x}{2}.\tag{2.4}$$
Let $x=\cos\theta$; then it is easy to verify that
$$\sum_{k=1}^{n}l_k^2(x)-1=\frac{T_n(x)}{n^2}\bigl(xT_n'(x)-nT_n(x)\bigr)=\frac{\cos n\theta\,\sin(n-1)\theta}{n\sin\theta}.\tag{2.5}$$
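This identity is easy to confirm numerically; the following snippet (an illustrative check of ours, on an arbitrary grid) compares both sides:

```python
import numpy as np

# numerical check of the identity
#   sum_k l_k(x)^2 - 1 = cos(n*theta) * sin((n-1)*theta) / (n*sin(theta))
n = 9
theta = np.linspace(0.1, np.pi - 0.1, 500)
x = np.cos(theta)
tk = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
lk = np.sqrt(1 - tk ** 2) * np.cos(n * theta)[:, None] / (n * (x[:, None] - tk))
lhs = (lk ** 2).sum(1) - 1
rhs = np.cos(n * theta) * np.sin((n - 1) * theta) / (n * np.sin(theta))
print(np.max(np.abs(lhs - rhs)))  # agreement to machine precision
```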
By (2.4), (2.5), and a simple computation we can obtain
$$\int_{-1}^{1}|I_1(x)|^{p/2}dx\le\frac{1}{n^p}\int_{0}^{\pi}\frac{|\cos n\theta\,\sin(n-1)\theta|^{p}}{|\sin\theta|^{p-1}}\,d\theta\le C\begin{cases}\dfrac{1}{n^{p}}, & 1\le p<2,\\[4pt] \dfrac{\ln n}{n^{p}}, & p=2,\\[4pt] \dfrac{1}{n^{2}}, & p>2.\end{cases}\tag{2.6}$$
By (1.3), it is easy to verify that for $k\ge j$,
$$\int_F\bigl(f(x)-f(t_k)\bigr)\bigl(f(x)-f(t_j)\bigr)\mu(df)=\begin{cases}\dfrac{t_k-x}{2}, & x<t_k,\\[4pt] 0, & t_k\le x\le t_j,\\[4pt] \dfrac{x-t_j}{2}, & x>t_j.\end{cases}\tag{2.7}$$
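The case formula follows by expanding the product with the kernel $K(u,v)=(1+\min\{u,v\})/2$; a quick sketch of ours checks the two forms against each other on random inputs:

```python
import numpy as np

# check of the case formula for the cross covariance of increments under mu,
# expanded by bilinearity with the kernel K(u, v) = (1 + min(u, v))/2
K = lambda u, v: (1 + min(u, v)) / 2

def cross_cov(x, tk, tj):
    return K(x, x) - K(x, tk) - K(x, tj) + K(tk, tj)

def case_formula(x, tk, tj):          # assumes tk <= tj (i.e., k >= j)
    if x < tk:
        return (tk - x) / 2
    if x > tj:
        return (x - tj) / 2
    return 0.0

rng = np.random.default_rng(1)
for _ in range(1000):
    tk, tj = np.sort(rng.uniform(-1, 1, 2))
    x = rng.uniform(-1, 1)
    assert abs(cross_cov(x, tk, tj) - case_formula(x, tk, tj)) < 1e-12
print("ok")
```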
Let $t_0=1$ and $t_{n+1}=-1$. From (2.7) and a simple computation we know that for $x\in[t_{m+1},t_m]$, $m=0,\dots,n$,
$$\begin{aligned}I_2(x)&=\frac12\sum_{k=1}^{n}|x-t_k|\,l_k^4(x)+\sum_{k=m+1}^{n-1}(x-t_k)l_k^2(x)\sum_{j=k+1}^{n}l_j^2(x)+\sum_{k=1}^{m}(t_k-x)l_k^2(x)\sum_{j=1}^{k-1}l_j^2(x)\\&=:J_1(x)+J_2(x)+J_3(x).\end{aligned}\tag{2.8}$$
From [8] we know that $\sum_{k=1}^{n}l_k^2(x)\le 2$; hence
$$\sum_{k=1}^{n}|l_k(x)|^{p}\le C,\quad\forall p\ge 2.\tag{2.9}$$
From (1.8) it follows that
$$|(x-t_k)l_k(x)|\le\frac{1}{n},\quad k=1,\dots,n.\tag{2.10}$$
From (2.9) and (2.10) it follows that
$$|J_1(x)|\le\frac{1}{2n}\sum_{k=1}^{n}|l_k(x)|^{3}\le\frac{C}{n}.\tag{2.11}$$
From (2.10) it follows that
$$J_2(x)\le\frac{1}{n}\sum_{j=m+1}^{n-1}|l_j(x)|\sum_{k=j+1}^{n}l_k^2(x).\tag{2.12}$$
Let $x=\cos\theta$; we have
$$\begin{aligned}\sum_{j=m+1}^{n-1}|l_j(x)|\sum_{k=j+1}^{n}l_k^2(x)&=\sum_{j=m+1}^{n-1}\left|\frac{\sin\frac{(2j-1)\pi}{2n}\cos n\theta}{n\bigl(\cos\theta-\cos\frac{(2j-1)\pi}{2n}\bigr)}\right|\sum_{k=j+1}^{n}\frac{\sin^2\frac{(2k-1)\pi}{2n}\,\cos^2 n\theta}{n^2\bigl(\cos\theta-\cos\frac{(2k-1)\pi}{2n}\bigr)^2}\\&\le\frac{1}{n^3}\sum_{j=m+1}^{n-1}\left|\frac{\sin\frac{(2j-1)\pi}{2n}}{\cos\frac{(2m-1)\pi}{2n}-\cos\frac{(2j-1)\pi}{2n}}\right|\sum_{k=j+1}^{n}\frac{\sin^2\frac{(2k-1)\pi}{2n}}{\bigl(\cos\frac{(2m-1)\pi}{2n}-\cos\frac{(2k-1)\pi}{2n}\bigr)^2}\\&=\frac{1}{8n^3}\sum_{j=m+1}^{n-1}\left|\frac{\sin\frac{(2j-1)\pi}{2n}}{\sin\frac{(j-m)\pi}{2n}\sin\frac{(j+m-1)\pi}{2n}}\right|\sum_{k=j+1}^{n}\frac{\sin^2\frac{(2k-1)\pi}{2n}}{\sin^2\frac{(k-m)\pi}{2n}\sin^2\frac{(k+m-1)\pi}{2n}}.\end{aligned}\tag{2.13}$$
By $\sin x+\sin y=2\sin\frac{x+y}{2}\cos\frac{x-y}{2}$ we know that for $0<x,y<\pi$,
$$0<\sin x\le 2\sin\frac{x+y}{2}.\tag{2.14}$$
It is easy to see that
$$\sum_{k=j+1}^{n}\frac{1}{(k-m)^2}<\sum_{k=j+1}^{n}\frac{1}{(k-m)(k-m-1)}=\frac{1}{j-m}-\frac{1}{n-m}\le\frac{1}{j-m}.\tag{2.15}$$
By $2x/\pi\le\sin x$ for $x\in(0,\pi/2]$, together with (2.13), (2.14), and (2.15), we can obtain
$$\sum_{j=m}^{n-1}|l_j(x)|\sum_{k=j+1}^{n}l_k^2(x)=|l_m(x)|\sum_{k=m+1}^{n}l_k^2(x)+\sum_{j=m+1}^{n-1}|l_j(x)|\sum_{k=j+1}^{n}l_k^2(x)\le C.\tag{2.16}$$
From (2.12) and (2.16) we can obtain
$$|J_2(x)|\le\frac{C}{n}.\tag{2.17}$$
Similarly,
$$|J_3(x)|\le\frac{C}{n}.\tag{2.18}$$
From (2.3), (2.8), (2.11), (2.17), and (2.18) we can obtain
$$\int_{-1}^{1}|I_2(x)|^{p/2}dx\le\frac{C}{n^{p/2}}.\tag{2.19}$$
By (2.1), (2.3), (2.6), and (2.19) we can obtain the upper estimate.

Now we consider the lower estimate. For $1\le p\le 4$, the lower estimate follows from [2]. For $p>4$, from (2.4) we know that
$$\int_{-1}^{1}\left(\int_F\Bigl|\Bigl(1-\sum_{k=1}^{n}l_k^2(x)\Bigr)f(x)\Bigr|^2\mu(df)\right)^{p/2}dx=\int_{-1}^{1}\Bigl(\frac{1+x}{2}\Bigr)^{p/2}\Bigl|1-\sum_{k=1}^{n}l_k^2(x)\Bigr|^{p}dx.\tag{2.20}$$
Let $x=\cos\theta$; then from (2.5) we know that
$$\sum_{k=1}^{n}l_k^2(x)-1=\frac{\cos\theta\,\sin 2n\theta}{2n\sin\theta}-\frac{\cos^2 n\theta}{n}.\tag{2.21}$$
Hence we can verify that for $\theta\in[5\pi/8n,7\pi/8n]$,
$$\Bigl|\sum_{k=1}^{n}l_k^2(x)-1\Bigr|\ge\frac{|\sin 2n\theta|}{4n\sin\theta}\ge\frac{1}{7}.\tag{2.22}$$
From (2.20), (2.22), and a simple computation we can obtain
$$\int_{-1}^{1}\left(\int_F\Bigl|\Bigl(1-\sum_{k=1}^{n}l_k^2(x)\Bigr)f(x)\Bigr|^2\mu(df)\right)^{p/2}dx\ge\frac{\cos\frac{5\pi}{8n}-\cos\frac{7\pi}{8n}}{14^{p}}\ge\frac{3}{8\cdot 14^{p}\,n^{2}}.\tag{2.23}$$
From (2.2), (2.3), and (2.19) it follows that
$$\int_{-1}^{1}\left(\int_F\Bigl|\sum_{k=1}^{n}\bigl(f(x)-f(t_k)\bigr)l_k^2(x)\Bigr|^2\mu(df)\right)^{p/2}dx\le\frac{C}{n^{p/2}}.\tag{2.24}$$
From (2.1), (2.2), (2.23), and (2.24) we can obtain the lower estimate for p>4.

3. The Proof of Theorem 1.4

Let
$$Q_n(f,x)=\Bigl(\frac{1+x}{2}f(1)+\frac{1-x}{2}f(-1)\Bigr)\frac{u_n^2(x)}{(n+1)^2}+\sum_{k=1}^{n}f(x_k)(1-x^2)(1-xx_k)\Bigl(\frac{u_n(x)}{(n+1)(x-x_k)}\Bigr)^2\tag{3.1}$$
be the quasi-Hermite-Fejér interpolation polynomial of degree at most $2n+1$ based on the extended Tchebycheff nodes of the second kind (see [8]); then by a simple computation we obtain
$$\begin{gathered}\bar G_n(f,x_k)-Q_n(f,x_k)=0,\qquad \bar G_n'(f,x_k)-Q_n'(f,x_k)=f(x_k)\frac{3x_k}{1-x_k^2},\quad k=1,\dots,n,\\ \bar G_n(f,1)-Q_n(f,1)=\sum_{k=1}^{n}f(x_k)(1+x_k)^2-f(1),\qquad \bar G_n(f,-1)-Q_n(f,-1)=\sum_{k=1}^{n}f(x_k)(1-x_k)^2-f(-1).\end{gathered}\tag{3.2}$$
Denote
$$\begin{gathered}\varphi_k(x)=(1-x^2)(1-xx_k)\Bigl(\frac{u_n(x)}{(n+1)(x-x_k)}\Bigr)^2,\quad k=1,\dots,n,\\ \varphi_0(x)=\frac{1+x}{2}\,\frac{u_n^2(x)}{(n+1)^2},\qquad \varphi_{n+1}(x)=\frac{1-x}{2}\,\frac{u_n^2(x)}{(n+1)^2},\\ \phi_k(x)=\frac{(1-x^2)(1-x_k^2)\,u_n^2(x)}{(n+1)^2(x-x_k)},\quad k=1,\dots,n.\end{gathered}\tag{3.3}$$
By (3.2) and the uniqueness of the Hermite interpolation polynomial $H_n(f,x)$ satisfying the interpolation conditions
$$H_n(f,x_k)=f(x_k),\ k=0,\dots,n+1,\qquad H_n'(f,x_k)=f'(x_k),\ k=1,\dots,n\tag{3.4}$$
(with $x_0=1$ and $x_{n+1}=-1$), we obtain
$$\begin{aligned}\bar G_n(f,x)-Q_n(f,x)&=\varphi_0(x)\Bigl[\sum_{k=1}^{n}f(x_k)(1+x_k)^2-f(1)\Bigr]+\varphi_{n+1}(x)\Bigl[\sum_{k=1}^{n}f(x_k)(1-x_k)^2-f(-1)\Bigr]+\sum_{k=1}^{n}f(x_k)\frac{3x_k}{1-x_k^2}\phi_k(x)\\&=\frac{u_n^2(x)}{(n+1)^2}\sum_{k=1}^{n}f(x_k)(1+x_k^2)+\frac{2x\,u_n^2(x)}{(n+1)^2}\sum_{k=1}^{n}f(x_k)x_k-\frac{u_n^2(x)}{2(n+1)^2}\bigl[f(1)+f(-1)\bigr]\\&\quad-\frac{x\,u_n^2(x)}{2(n+1)^2}\bigl[f(1)-f(-1)\bigr]+\sum_{k=1}^{n}f(x_k)\frac{3x_k}{1-x_k^2}\phi_k(x)\\&=:A_1(x)+A_2(x)+A_3(x)+A_4(x)+A_5(x).\end{aligned}\tag{3.5}$$
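The decomposition into $A_1,\dots,A_5$ can be verified numerically for a sample function; the sketch below (our own check, with an illustrative $f$) implements $\bar G_n$, $Q_n$, and the five terms directly:

```python
import numpy as np

# numerical check of the decomposition Gbar_n - Q_n = A_1 + ... + A_5
n = 6
xk = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
f = lambda x: np.sin(2.0 * x) + x ** 2
x = np.linspace(-0.95, 0.95, 401)
theta = np.arccos(x)
un = np.sin((n + 1) * theta) / np.sin(theta)
c = (n + 1) ** 2

lbar = (1 - xk ** 2) * un[:, None] / ((n + 1) * (x[:, None] - xk))
Gbar = (f(xk) * lbar ** 2).sum(1)
Q = ((1 + x) / 2 * f(1.0) + (1 - x) / 2 * f(-1.0)) * un ** 2 / c \
    + (f(xk) * (1 - x[:, None] ** 2) * (1 - x[:, None] * xk)
       * (un[:, None] / ((n + 1) * (x[:, None] - xk))) ** 2).sum(1)

phi = (1 - x[:, None] ** 2) * (1 - xk ** 2) * un[:, None] ** 2 \
    / (c * (x[:, None] - xk))
A1 = un ** 2 / c * (f(xk) * (1 + xk ** 2)).sum()
A2 = 2 * x * un ** 2 / c * (f(xk) * xk).sum()
A3 = -un ** 2 / (2 * c) * (f(1.0) + f(-1.0))
A4 = -x * un ** 2 / (2 * c) * (f(1.0) - f(-1.0))
A5 = (f(xk) * 3 * xk / (1 - xk ** 2) * phi).sum(1)

print(np.max(np.abs(Gbar - Q - (A1 + A2 + A3 + A4 + A5))))  # ~ 0
```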
By (3.5) and $(a+b+c+d)^2\le 4(a^2+b^2+c^2+d^2)$ we know that
$$\begin{aligned}e_2^2(\bar G_n,L_2)&=\int_F\int_{-1}^{1}|f(x)-\bar G_n(f,x)|^2\,dx\,\mu(df)\\&\le 4\int_F\int_{-1}^{1}|f(x)-Q_n(f,x)|^2\,dx\,\mu(df)+4\int_F\int_{-1}^{1}\bigl[A_1(x)+A_2(x)\bigr]^2dx\,\mu(df)\\&\quad+4\int_F\int_{-1}^{1}\bigl[A_3(x)+A_4(x)\bigr]^2dx\,\mu(df)+4\int_F\int_{-1}^{1}\bigl[A_5(x)\bigr]^2dx\,\mu(df)\\&=:4I_1+4I_2+4I_3+4I_4.\end{aligned}\tag{3.6}$$
From [8] we know that for every $f\in C[-1,1]$,
$$\int_{-1}^{1}|f(x)-Q_n(f,x)|^2dx\le\int_{-1}^{1}|f(x)-Q_n(f,x)|^2(1-x^2)^{-1/2}dx\le C\omega^2\Bigl(f,\frac{1}{n}\Bigr),\tag{3.7}$$
where $\omega(f,t)$ denotes the modulus of continuity of $f$ on $[-1,1]$ and $C$ is independent of $n$ and $f$. By (3.7) and [6] we can obtain
$$I_1\le C\int_F\omega^2\Bigl(f,\frac{1}{n}\Bigr)\mu(df)\le\frac{C\ln n}{n}.\tag{3.8}$$
Expanding $[A_1(x)+A_2(x)]^2$ and noting that the cross term vanishes ($xu_n^4(x)$ is an odd function), we obtain
$$\begin{aligned}I_2&=\int_F\int_{-1}^{1}\bigl[A_1(x)\bigr]^2dx\,\mu(df)+\int_F\int_{-1}^{1}\bigl[A_2(x)\bigr]^2dx\,\mu(df)\\&=\frac{1}{(n+1)^4}\int_{-1}^{1}u_n^4(x)\,dx\int_F\Bigl[\sum_{k=1}^{n}f(x_k)(1+x_k^2)\Bigr]^2\mu(df)+\frac{4}{(n+1)^4}\int_{-1}^{1}x^2u_n^4(x)\,dx\int_F\Bigl[\sum_{k=1}^{n}f(x_k)x_k\Bigr]^2\mu(df).\end{aligned}\tag{3.9}$$
By (1.3) we obtain
$$\begin{gathered}\int_F\Bigl[\sum_{k=1}^{n}f(x_k)x_k\Bigr]^2\mu(df)=\sum_{k=1}^{n}\sum_{j=1}^{n}x_kx_j\min\Bigl\{\frac{1+x_k}{2},\frac{1+x_j}{2}\Bigr\}\asymp n^2,\\ \int_F\Bigl[\sum_{k=1}^{n}f(x_k)(1+x_k^2)\Bigr]^2\mu(df)=\sum_{k=1}^{n}\sum_{j=1}^{n}(1+x_k^2)(1+x_j^2)\min\Bigl\{\frac{1+x_k}{2},\frac{1+x_j}{2}\Bigr\}\asymp n^2.\end{gathered}\tag{3.10}$$
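The $n^2$ growth of both quadratic forms is easy to observe directly (an illustrative check of ours):

```python
import numpy as np

# growth check: both quadratic forms above scale like n^2
def quad_forms(n):
    xk = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
    Kmin = np.minimum.outer((1 + xk) / 2, (1 + xk) / 2)
    s1 = (np.outer(xk, xk) * Kmin).sum()
    s2 = (np.outer(1 + xk ** 2, 1 + xk ** 2) * Kmin).sum()
    return s1, s2

for n in (50, 100, 200):
    s1, s2 = quad_forms(n)
    print(n, s1 / n ** 2, s2 / n ** 2)  # both ratios roughly constant in n
```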
From (3.9) and (3.10) we obtain
$$I_2\asymp\frac{1}{n^2}\int_{-1}^{1}u_n^4(x)\,dx\asymp 1.\tag{3.11}$$
Similarly to (3.11), noting that $f(-1)=g(0)=0$ $\mu$-almost everywhere, we have
$$I_3=\frac{1}{4(n+1)^4}\int_{-1}^{1}(1+x^2)\,u_n^4(x)\,dx\le\frac{C}{n^2}.\tag{3.12}$$
By (3.3) and $0\le(1-x^2)u_n^2(x)\le 1$ we obtain
$$\begin{aligned}0\le I_4&=\frac{9}{(n+1)^4}\int_F\int_{-1}^{1}(1-x^2)^2u_n^2(x)\Bigl[\sum_{k=1}^{n}\frac{f(x_k)x_k\,u_n(x)}{x-x_k}\Bigr]^2dx\,\mu(df)\\&\le\frac{9}{(n+1)^4}\int_F\int_{-1}^{1}(1-x^2)^{1/2}\Bigl[\sum_{k=1}^{n}\frac{f(x_k)x_k\,u_n(x)}{x-x_k}\Bigr]^2dx\,\mu(df)\\&=\frac{9}{(n+1)^4}\sum_{k=1}^{n}\sum_{j=1}^{n}\int_{-1}^{1}\frac{(1-x^2)^{1/2}u_n^2(x)}{(x-x_k)(x-x_j)}dx\int_F x_kx_jf(x_k)f(x_j)\,\mu(df).\end{aligned}\tag{3.13}$$
By [8] we know that
$$\int_{-1}^{1}\frac{(1-x^2)\,u_n^2(x)}{(x-x_k)(x-x_j)}\,\frac{dx}{\sqrt{1-x^2}}=\begin{cases}\dfrac{(n+1)\pi}{1-x_j^2}, & j=k,\\[4pt] 0, & j\ne k.\end{cases}\tag{3.14}$$
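With the weight $(1-x^2)^{-1/2}$ factored out, the integrand is a polynomial of degree $2n$, so an $M$-point Gauss-Chebyshev rule with $2M-1\ge 2n$ integrates it exactly; this gives a quick verification sketch (ours, for one choice of $n$):

```python
import numpy as np

# Gauss-Chebyshev check of the orthogonality relation: the integrand
# (against the weight (1-x^2)^(-1/2)) is a polynomial of degree 2n
n, M = 4, 64
xk = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))            # zeros of u_n
xi = np.cos((2 * np.arange(1, M + 1) - 1) * np.pi / (2 * M))  # quadrature nodes
un = np.sin((n + 1) * np.arccos(xi)) / np.sqrt(1 - xi ** 2)

def integral(k, j):
    g = (1 - xi ** 2) * un ** 2 / ((xi - xk[k]) * (xi - xk[j]))
    return np.pi / M * g.sum()

print(integral(1, 1), (n + 1) * np.pi / (1 - xk[1] ** 2))  # diagonal: equal
print(integral(0, 2))                                       # off-diagonal: ~ 0
```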
By (1.3), (3.13), (3.14), and $(2/\pi)x<\sin x<x$ for $0<x<\pi/2$, we obtain
$$0\le I_4\le\frac{9\pi}{2(n+1)^3}\sum_{k=1}^{n}\frac{x_k^2(1+x_k)}{1-x_k^2}\le\frac{9\pi}{(n+1)^3}\sum_{k=1}^{n}\frac{1}{\sin^2\frac{k\pi}{n+1}}\le\frac{C}{n}.\tag{3.15}$$
By (3.6), (3.8), (3.11), (3.12), and (3.15) we can obtain the upper estimate of Theorem 1.4. On the other hand, by (3.5) we can verify that
$$|f(x)-\bar G_n(f,x)|^2\ge\frac{\bigl[A_1(x)+A_2(x)\bigr]^2}{4}-|f(x)-Q_n(f,x)|^2-\bigl[A_3(x)+A_4(x)\bigr]^2-\bigl[A_5(x)\bigr]^2.\tag{3.16}$$
From (3.16), (3.8), (3.11), (3.12), and (3.15) we can obtain the lower estimate of Theorem 1.4.

References

[1] J. F. Traub, G. W. Wasilkowski, and H. Woźniakowski.
[2] K. Ritter, Approximation and optimization on the Wiener space.
[3] F. J. Hickernell and H. Woźniakowski, Integration and approximation in arbitrary dimensions.
[4] M. Kon and L. Plaskota, Information-based nonlinear approximation: an average case setting.
[5] G. Grünwald, On the theory of interpolation.
[6] Y. S. Sun and C. Y. Wang, Average error bounds of best approximation of continuous functions on the Wiener space.
[7] K. Ritter.
[8] A. K. Varma and J. Prasad, An analogue of a problem of P. Erdős and E. Feldheim on $L_p$ convergence of interpolatory processes.