Journal of Applied Mathematics, Hindawi Publishing Corporation, Volume 2012, Article ID 782960, doi:10.1155/2012/782960

Research Article

A Hybrid Gradient-Projection Algorithm for Averaged Mappings in Hilbert Spaces

Ming Tian and Min-Min Li

College of Science, Civil Aviation University of China, Tianjin 300300, China

Academic Editor: Hong-Kun Xu

Received 18 April 2012; Accepted 21 June 2012; Published 25 July 2012

Copyright © 2012 Ming Tian and Min-Min Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

It is well known that the gradient-projection algorithm (GPA) is very useful in solving constrained convex minimization problems. In this paper, we combine a general iterative method with the gradient-projection algorithm to propose a hybrid gradient-projection algorithm, and we prove that the sequence generated by this algorithm converges in norm to a minimizer of the constrained convex minimization problem which also solves a certain variational inequality.

1. Introduction

Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Consider the following constrained convex minimization problem:
\[
\operatorname*{minimize}_{x\in C} f(x), \tag{1.1}
\]
where $f:C\to\mathbb{R}$ is a real-valued convex and continuously Fréchet differentiable function. The gradient $\nabla f$ satisfies the following Lipschitz condition:
\[
\|\nabla f(x)-\nabla f(y)\|\le L\|x-y\|, \quad x,y\in C, \tag{1.2}
\]
where $L>0$. Assume that the minimization problem (1.1) is consistent, and let $S$ denote its solution set.

It is well known that the gradient-projection algorithm is very useful in dealing with constrained convex minimization problems and has been studied extensively. It has recently been applied to solve split feasibility problems. Levitin and Polyak considered the following gradient-projection algorithm:
\[
x_{n+1}:=\mathrm{Proj}_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr), \quad n\ge 0. \tag{1.3}
\]
Let $\{\lambda_n\}_{n=0}^{\infty}$ satisfy
\[
0<\liminf_{n\to\infty}\lambda_n\le\limsup_{n\to\infty}\lambda_n<\frac{2}{L}. \tag{1.4}
\]
It is proved that the sequence $\{x_n\}$ generated by (1.3) converges weakly to a minimizer of (1.1).
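The iteration (1.3) is easy to illustrate numerically. The following minimal sketch is not from the paper; the box constraint, the quadratic objective, and all parameter values are illustrative assumptions. It runs the gradient-projection iteration for $f(x)=\tfrac12\|x-p\|^2$ over the box $C=[0,1]^2$, where the projection is coordinatewise clipping, $\nabla f(x)=x-p$, and $L=1$, so any fixed step $\lambda_n\in(0,2/L)$ is admissible:

```python
# Illustrative sketch (not from the paper): the gradient-projection
# algorithm (1.3) for a concrete problem: minimize the convex quadratic
# f(x) = 0.5*||x - p||^2 over the box C = [0, 1]^2, with p outside C.
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Projection of x onto the box [lo, hi]^d (coordinatewise clipping)."""
    return np.clip(x, lo, hi)

def gpa(grad, proj, x0, lam=1.0, iters=200):
    """Gradient-projection iteration x_{n+1} = Proj_C(x_n - lam * grad(x_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = proj(x - lam * grad(x))
    return x

p = np.array([2.0, -0.5])                     # target point outside the box
x_star = gpa(lambda x: x - p, proj_box, x0=np.array([0.5, 0.5]))
# The unique minimizer over C is the projection of p onto C, namely (1, 0).
```

Since the objective is the squared distance to $p$, the iterates settle at the projection of $p$ onto $C$, which is $(1,0)$ here.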

Xu [11] proved that, under certain appropriate conditions on $\{\alpha_n\}$ and $\{\lambda_n\}$, the sequence $\{x_n\}$ defined by the following relaxed gradient-projection algorithm:
\[
x_{n+1}=(1-\alpha_n)x_n+\alpha_n\,\mathrm{Proj}_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr), \quad n\ge 0, \tag{1.5}
\]
converges weakly to a minimizer of (1.1).

Since the Lipschitz continuity of the gradient of $f$ implies that $\nabla f$ is inverse strongly monotone (ism) [12, 13], its complement is an averaged mapping. Recall that a mapping $T$ is nonexpansive if and only if it is Lipschitz with Lipschitz constant at most one; that a mapping is averaged if and only if it can be expressed as a proper convex combination of the identity mapping and a nonexpansive mapping; and that a mapping $T$ is said to be $\nu$-inverse strongly monotone if and only if
\[
\langle x-y,\ Tx-Ty\rangle\ge\nu\|Tx-Ty\|^2 \quad\text{for all } x,y\in H,
\]
where $\nu>0$. Recall also that the composite of finitely many averaged mappings is averaged; that is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1\circ\cdots\circ T_N$. In particular, an averaged mapping is nonexpansive. As a result, the GPA can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping.
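The ism property just recalled can be checked numerically for a concrete smooth convex function. The sketch below is an illustration, not part of the paper: it generates a random convex quadratic $f(x)=\tfrac12 x^{T}Qx$, whose gradient $\nabla f(x)=Qx$ is $L$-Lipschitz with $L=\lambda_{\max}(Q)$, and verifies the $(1/L)$-ism inequality $\langle x-y,\nabla f(x)-\nabla f(y)\rangle\ge(1/L)\|\nabla f(x)-\nabla f(y)\|^2$ on random pairs:

```python
# Numerical illustration (assumption: a randomly generated convex quadratic):
# if grad f is L-Lipschitz for convex f, then grad f is (1/L)-inverse
# strongly monotone, i.e.
#   <x - y, grad f(x) - grad f(y)> >= (1/L) * ||grad f(x) - grad f(y)||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = A.T @ A                       # symmetric positive semidefinite Hessian
L = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of grad f(x) = Q x

ok = True
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    gx, gy = Q @ x, Q @ y
    lhs = np.dot(x - y, gx - gy)
    rhs = np.dot(gx - gy, gx - gy) / L
    ok = ok and (lhs >= rhs - 1e-9)
```

For a quadratic this is exactly the spectral fact $\lambda\ge\lambda^2/L$ for every eigenvalue $\lambda\in[0,L]$ of $Q$, which is the finite-dimensional shadow of the Baillon-Haddad-type statement used above.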

Generally speaking, in infinite-dimensional Hilbert spaces, the GPA has only weak convergence. Xu [11] provided a modification of the GPA so that strong convergence is guaranteed. He considered the following hybrid gradient-projection algorithm:
\[
x_{n+1}=\theta_n h(x_n)+(1-\theta_n)\,\mathrm{Proj}_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr). \tag{1.6}
\]

It is proved that if the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfy appropriate conditions, then the sequence $\{x_n\}$ generated by (1.6) converges in norm to a minimizer of (1.1) which solves the variational inequality
\[
x^*\in S, \quad \langle(I-h)x^*,\ x-x^*\rangle\ge 0, \quad x\in S. \tag{1.7}
\]

On the other hand, Tian [16] introduced the following general iterative algorithm for solving a variational inequality:
\[
x_{n+1}=\alpha_n\gamma f(x_n)+(I-\mu\alpha_n F)Tx_n, \quad n\ge 0, \tag{1.8}
\]
where $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with $\kappa>0$, $\eta>0$, and $f$ is a contraction with coefficient $0<\alpha<1$. He proved that if $\{\alpha_n\}$ satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.8) converges strongly to the unique solution of the variational inequality
\[
\langle(\mu F-\gamma f)\tilde{x},\ \tilde{x}-z\rangle\le 0, \quad z\in\mathrm{Fix}(T). \tag{1.9}
\]

In this paper, motivated and inspired by the research work in this direction, we combine the iterative method (1.8) with the gradient-projection algorithm (1.3) and consider the following hybrid gradient-projection algorithm:
\[
x_{n+1}=\theta_n\gamma h(x_n)+(I-\mu\theta_n F)\,\mathrm{Proj}_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr), \quad n\ge 0. \tag{1.10}
\]

We will prove that if the sequences of parameters $\{\theta_n\}$ and $\{\lambda_n\}$ satisfy appropriate conditions, then the sequence $\{x_n\}$ generated by (1.10) converges in norm to a minimizer of (1.1) which solves the variational inequality
\[
x^*\in S, \quad \langle(\mu F-\gamma h)x^*,\ x-x^*\rangle\ge 0, \quad x\in S, \tag{1.11}
\]
where $S$ is the solution set of the minimization problem (1.1).

2. Preliminaries

This section collects some lemmas which will be used in the proofs for the main results in the next section. Some of them are known; others are not hard to derive.

Throughout this paper, we write $x_n\rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$, and $x_n\to x$ to indicate that $\{x_n\}$ converges strongly to $x$. The set $\omega_w(x_n):=\{x:\exists\,x_{n_j}\rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}_{n=1}^{\infty}$.

Lemma 2.1 (see [17]).

Assume that $\{a_n\}_{n=0}^{\infty}$ is a sequence of nonnegative real numbers such that
\[
a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\delta_n+\beta_n, \quad n\ge 0, \tag{2.1}
\]
where $\{\gamma_n\}_{n=0}^{\infty}$ and $\{\beta_n\}_{n=0}^{\infty}$ are sequences in $[0,1]$ and $\{\delta_n\}_{n=0}^{\infty}$ is a sequence in $\mathbb{R}$ such that

(i) $\sum_{n=0}^{\infty}\gamma_n=\infty$;

(ii) either $\limsup_{n\to\infty}\delta_n\le 0$ or $\sum_{n=0}^{\infty}\gamma_n|\delta_n|<\infty$;

(iii) $\sum_{n=0}^{\infty}\beta_n<\infty$.

Then $\lim_{n\to\infty}a_n=0$.

Lemma 2.2 (see [18]).

Let $C$ be a closed convex subset of a Hilbert space $H$, and let $T:C\to C$ be a nonexpansive mapping with $\mathrm{Fix}\,T\ne\emptyset$. If $\{x_n\}_{n=1}^{\infty}$ is a sequence in $C$ converging weakly to $x$ and if $\{(I-T)x_n\}_{n=1}^{\infty}$ converges strongly to $y$, then $(I-T)x=y$.

Lemma 2.3.

Let $H$ be a Hilbert space and $C$ a nonempty closed convex subset of $H$. Let $h:C\to C$ be a contraction with coefficient $0<\rho<1$, and let $F:C\to C$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with $\kappa,\eta>0$. Then, for $0<\gamma<\mu\eta/\rho$,
\[
\langle x-y,\ (\mu F-\gamma h)x-(\mu F-\gamma h)y\rangle\ge(\mu\eta-\gamma\rho)\|x-y\|^2, \quad x,y\in C. \tag{2.2}
\]
That is, $\mu F-\gamma h$ is strongly monotone with coefficient $\mu\eta-\gamma\rho$.

Lemma 2.4.

Let $C$ be a closed convex subset of a real Hilbert space $H$. Given $x\in H$ and $y\in C$, then $y=P_Cx$ if and only if there holds the inequality
\[
\langle x-y,\ y-z\rangle\ge 0, \quad z\in C. \tag{2.3}
\]

3. Main Results

Let $H$ be a real Hilbert space, and let $C$ be a nonempty closed convex subset of $H$ such that $C\pm C\subset C$. Assume that the minimization problem (1.1) is consistent, and let $S$ denote its solution set. Assume that the gradient $\nabla f$ satisfies the Lipschitz condition (1.2). Since $S$ is a closed convex subset, the nearest point projection from $H$ onto $S$ is well defined. Recall also that a contraction on $C$ is a self-mapping $h$ of $C$ such that $\|h(x)-h(y)\|\le\rho\|x-y\|$ for all $x,y\in C$, where $\rho\in[0,1)$ is a constant. Let $F$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator on $C$ with $\kappa,\eta>0$. Denote by $\Pi$ the collection of all contractions on $C$, namely,
\[
\Pi=\{h : h \text{ is a contraction on } C\}. \tag{3.1}
\]
Now, given $h\in\Pi$ with coefficient $0<\rho<1$ and $s\in(0,1)$, let $0<\mu<2\eta/\kappa^2$ and $0<\gamma<\mu\bigl(\eta-\mu\kappa^2/2\bigr)/\rho=\tau/\rho$. Assume that $\lambda_s$ is continuous with respect to $s$ and, in addition, $\lambda_s\in[a,b]\subset(0,2/L)$. Consider a mapping $X_s$ on $C$ defined by
\[
X_s(x)=s\gamma h(x)+(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x), \quad x\in C. \tag{3.2}
\]
It is easy to see that $X_s$ is a contraction. Setting $V_s:=\mathrm{Proj}_C(I-\lambda_s\nabla f)$, it is obvious that $V_s$ is a nonexpansive mapping. We can rewrite $X_s(x)$ as
\[
X_s(x)=s\gamma h(x)+(I-s\mu F)V_s(x). \tag{3.3}
\]
First observe that, for $s\in(0,1)$,
\[
\begin{aligned}
\|(I-s\mu F)V_s(x)-(I-s\mu F)V_s(y)\|^2
&=\|V_s(x)-V_s(y)-s\mu(FV_s(x)-FV_s(y))\|^2\\
&=\|V_s(x)-V_s(y)\|^2-2s\mu\langle V_s(x)-V_s(y),\ FV_s(x)-FV_s(y)\rangle\\
&\quad+s^2\mu^2\|FV_s(x)-FV_s(y)\|^2\\
&\le\|x-y\|^2-2s\mu\eta\|V_s(x)-V_s(y)\|^2+s^2\mu^2\kappa^2\|V_s(x)-V_s(y)\|^2\\
&\le\bigl(1-s\mu(2\eta-s\mu\kappa^2)\bigr)\|x-y\|^2\\
&\le\Bigl(1-\frac{s\mu(2\eta-s\mu\kappa^2)}{2}\Bigr)^2\|x-y\|^2\\
&\le\Bigl(1-s\mu\Bigl(\eta-\frac{\mu\kappa^2}{2}\Bigr)\Bigr)^2\|x-y\|^2\\
&=(1-s\tau)^2\|x-y\|^2. \tag{3.4}
\end{aligned}
\]
Indeed, we have
\[
\begin{aligned}
\|X_s(x)-X_s(y)\|&=\|s\gamma h(x)+(I-s\mu F)V_s(x)-s\gamma h(y)-(I-s\mu F)V_s(y)\|\\
&\le s\gamma\|h(x)-h(y)\|+\|(I-s\mu F)V_s(x)-(I-s\mu F)V_s(y)\|\\
&\le s\gamma\rho\|x-y\|+(1-s\tau)\|x-y\|\\
&=\bigl(1-s(\tau-\gamma\rho)\bigr)\|x-y\|. \tag{3.5}
\end{aligned}
\]
Hence, $X_s$ has a unique fixed point, denoted by $x_s$, which uniquely solves the fixed-point equation
\[
x_s=s\gamma h(x_s)+(I-s\mu F)V_s(x_s). \tag{3.6}
\]
The next proposition summarizes the properties of $\{x_s\}$.
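The contraction estimate (3.5) can be sanity-checked numerically. In the sketch below, all choices are illustrative assumptions, not from the paper: $H=\mathbb{R}^2$, $C=[-1,1]^2$, $F=I$ (so $\kappa=\eta=1$), $h(x)=\rho x$, $f(x)=\tfrac12\|x\|^2$ (so $L=1$), and concrete values of $\mu$, $\gamma$, $\lambda_s$, and $s$ within the ranges required above. The bound $\|X_s(x)-X_s(y)\|\le(1-s(\tau-\gamma\rho))\|x-y\|$ is then tested on random pairs:

```python
# Numerical sanity check (illustrative parameters, not from the paper) of
# the contraction estimate (3.5) for
#   X_s(x) = s*gamma*h(x) + (I - s*mu*F) Proj_C(I - lambda_s * grad f)(x).
import numpy as np

rho, mu, kappa, eta = 0.5, 1.0, 1.0, 1.0
tau = mu * (eta - mu * kappa**2 / 2)       # tau = mu*(eta - mu*kappa^2/2) = 0.5
gamma = 0.5                                 # 0 < gamma < tau/rho = 1
lam, s = 0.5, 0.1                           # lambda_s in (0, 2/L), s in (0, 1)

proj_C = lambda x: np.clip(x, -1.0, 1.0)    # C = [-1, 1]^2
grad_f = lambda x: x                        # gradient of f(x) = 0.5*||x||^2
h = lambda x: rho * x                       # rho-contraction
F = lambda x: x                             # F = I (kappa = eta = 1)

def X_s(x):
    v = proj_C(x - lam * grad_f(x))         # V_s(x) = Proj_C(I - lam*grad f)(x)
    return s * gamma * h(x) + v - s * mu * F(v)

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    lhs = np.linalg.norm(X_s(x) - X_s(y))
    rhs = (1 - s * (tau - gamma * rho)) * np.linalg.norm(x, 2 * 0 + 1) * 0 + (1 - s * (tau - gamma * rho)) * np.linalg.norm(x - y)
    ok = ok and (lhs <= rhs + 1e-9)
```

With these parameters $\tau-\gamma\rho=0.25>0$, so $X_s$ is a strict contraction, which is exactly what guarantees the unique fixed point $x_s$ in (3.6).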

Proposition 3.1.

Let $x_s$ be defined by (3.6).

(i) $\{x_s\}$ is bounded for $s\in(0,1/\tau)$;

(ii) $\lim_{s\to 0}\|x_s-\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)\|=0$;

(iii) $x_s$ defines a continuous curve from $(0,1/\tau)$ into $H$.

Proof.

(i) Take $\bar{x}\in S$; then we have
\[
\begin{aligned}
\|x_s-\bar{x}\|&=\|s\gamma h(x_s)+(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)-\bar{x}\|\\
&=\|(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)-(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(\bar{x})\\
&\quad+s\bigl(\gamma h(x_s)-\mu F\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(\bar{x})\bigr)\|\\
&\le(1-s\tau)\|x_s-\bar{x}\|+s\|\gamma h(x_s)-\mu F(\bar{x})\|\\
&\le(1-s\tau)\|x_s-\bar{x}\|+s\gamma\rho\|x_s-\bar{x}\|+s\|\gamma h(\bar{x})-\mu F(\bar{x})\|. \tag{3.7}
\end{aligned}
\]
It follows that
\[
\|x_s-\bar{x}\|\le\frac{\|\gamma h(\bar{x})-\mu F(\bar{x})\|}{\tau-\gamma\rho}. \tag{3.8}
\]
Hence, $\{x_s\}$ is bounded.

(ii) By the definition of $\{x_s\}$, we have
\[
\begin{aligned}
\|x_s-\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)\|
&=\|s\gamma h(x_s)+(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)-\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)\|\\
&=s\|\gamma h(x_s)-\mu F\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)\|\to 0, \tag{3.9}
\end{aligned}
\]
since $\{x_s\}$ is bounded, and so are $\{h(x_s)\}$ and $\{F\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)\}$.

(iii) Take $s,s_0\in(0,1/\tau)$; then we have
\[
\begin{aligned}
\|x_s-x_{s_0}\|&=\|s\gamma h(x_s)+(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)-s_0\gamma h(x_{s_0})-(I-s_0\mu F)\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_{s_0})\|\\
&\le\|(s-s_0)\gamma h(x_s)\|+s_0\gamma\|h(x_s)-h(x_{s_0})\|\\
&\quad+\|(I-s_0\mu F)\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_s)-(I-s_0\mu F)\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_{s_0})\|\\
&\quad+\|(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)-(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_s)\|\\
&\quad+\|(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_s)-(I-s_0\mu F)\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_s)\|\\
&\le|s-s_0|\gamma\|h(x_s)\|+s_0\gamma\rho\|x_s-x_{s_0}\|+(1-s_0\tau)\|x_s-x_{s_0}\|\\
&\quad+|\lambda_s-\lambda_{s_0}|\,\|\nabla f(x_s)\|+|s-s_0|\mu\|F\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_s)\|\\
&=\bigl(\gamma\|h(x_s)\|+\mu\|F\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_s)\|\bigr)|s-s_0|\\
&\quad+s_0\gamma\rho\|x_s-x_{s_0}\|+(1-s_0\tau)\|x_s-x_{s_0}\|+|\lambda_s-\lambda_{s_0}|\,\|\nabla f(x_s)\|. \tag{3.10}
\end{aligned}
\]
Therefore,
\[
\|x_s-x_{s_0}\|\le\frac{\gamma\|h(x_s)\|+\mu\|F\,\mathrm{Proj}_C(I-\lambda_{s_0}\nabla f)(x_s)\|}{s_0(\tau-\gamma\rho)}|s-s_0|+\frac{\|\nabla f(x_s)\|}{s_0(\tau-\gamma\rho)}|\lambda_s-\lambda_{s_0}|. \tag{3.11}
\]
Therefore, $x_s\to x_{s_0}$ as $s\to s_0$. This means that $x_s$ is continuous.

Our main result below shows that $\{x_s\}$ converges in norm to a minimizer of (1.1) which solves some variational inequality.

Theorem 3.2.

Assume that $\{x_s\}$ is defined by (3.6); then $x_s$ converges in norm as $s\to 0$ to a minimizer of (1.1) which solves the variational inequality
\[
\langle(\mu F-\gamma h)x^*,\ \tilde{x}-x^*\rangle\ge 0, \quad \tilde{x}\in S. \tag{3.12}
\]
Equivalently, we have $\mathrm{Proj}_S\bigl(I-(\mu F-\gamma h)\bigr)x^*=x^*$.

Proof.

We first show the uniqueness of a solution of the variational inequality (3.12). By Lemma 2.3, $\mu F-\gamma h$ is strongly monotone, so the variational inequality (3.12) has only one solution. Let $x^*\in S$ denote the unique solution of (3.12).

To prove that $x_s\to x^*$ as $s\to 0$, we write, for a given $\tilde{x}\in S$,
\[
\begin{aligned}
x_s-\tilde{x}&=s\gamma h(x_s)+(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)-\tilde{x}\\
&=s(\gamma h(x_s)-\mu F\tilde{x})+(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)-(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(\tilde{x}). \tag{3.13}
\end{aligned}
\]
It follows that
\[
\begin{aligned}
\|x_s-\tilde{x}\|^2&=s\langle\gamma h(x_s)-\mu F\tilde{x},\ x_s-\tilde{x}\rangle\\
&\quad+\langle(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)-(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(\tilde{x}),\ x_s-\tilde{x}\rangle\\
&\le(1-s\tau)\|x_s-\tilde{x}\|^2+s\langle\gamma h(x_s)-\mu F\tilde{x},\ x_s-\tilde{x}\rangle. \tag{3.14}
\end{aligned}
\]
Hence,
\[
\|x_s-\tilde{x}\|^2\le\frac{1}{\tau}\langle\gamma h(x_s)-\mu F\tilde{x},\ x_s-\tilde{x}\rangle
\le\frac{1}{\tau}\Bigl\{\gamma\rho\|x_s-\tilde{x}\|^2+\langle\gamma h(\tilde{x})-\mu F\tilde{x},\ x_s-\tilde{x}\rangle\Bigr\}, \tag{3.15}
\]
so that
\[
\|x_s-\tilde{x}\|^2\le\frac{1}{\tau-\gamma\rho}\langle\gamma h(\tilde{x})-\mu F\tilde{x},\ x_s-\tilde{x}\rangle. \tag{3.16}
\]
Since $\{x_s\}$ is bounded as $s\to 0$, we see that if $\{s_n\}$ is a sequence in $(0,1)$ such that $s_n\to 0$ and $x_{s_n}\rightharpoonup\bar{x}$, then by (3.16), $x_{s_n}\to\bar{x}$. We may further assume that $\lambda_{s_n}\to\lambda\in[a,b]\subset(0,2/L)$. Notice that $\mathrm{Proj}_C(I-\lambda\nabla f)$ is nonexpansive. It turns out that
\[
\begin{aligned}
\|x_{s_n}-\mathrm{Proj}_C(I-\lambda\nabla f)x_{s_n}\|
&\le\|x_{s_n}-\mathrm{Proj}_C(I-\lambda_{s_n}\nabla f)x_{s_n}\|+\|\mathrm{Proj}_C(I-\lambda_{s_n}\nabla f)x_{s_n}-\mathrm{Proj}_C(I-\lambda\nabla f)x_{s_n}\|\\
&\le\|x_{s_n}-\mathrm{Proj}_C(I-\lambda_{s_n}\nabla f)x_{s_n}\|+\|(\lambda-\lambda_{s_n})\nabla f(x_{s_n})\|\\
&=\|x_{s_n}-\mathrm{Proj}_C(I-\lambda_{s_n}\nabla f)x_{s_n}\|+|\lambda-\lambda_{s_n}|\,\|\nabla f(x_{s_n})\|. \tag{3.17}
\end{aligned}
\]
From the boundedness of $\{x_s\}$ and $\lim_{s\to 0}\|\mathrm{Proj}_C(I-\lambda_s\nabla f)x_s-x_s\|=0$, we conclude that
\[
\lim_{n\to\infty}\|x_{s_n}-\mathrm{Proj}_C(I-\lambda\nabla f)x_{s_n}\|=0. \tag{3.18}
\]
Since $x_{s_n}\rightharpoonup\bar{x}$, by Lemma 2.2 we obtain
\[
\bar{x}=\mathrm{Proj}_C(I-\lambda\nabla f)\bar{x}. \tag{3.19}
\]
This shows that $\bar{x}\in S$.

We next prove that $\bar{x}$ is a solution of the variational inequality (3.12). Since
\[
x_s=s\gamma h(x_s)+(I-s\mu F)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s), \tag{3.20}
\]
we can derive that
\[
(\mu F-\gamma h)(x_s)=-\frac{1}{s}\bigl(I-\mathrm{Proj}_C(I-\lambda_s\nabla f)\bigr)(x_s)+\mu\bigl(F(x_s)-F\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s)\bigr). \tag{3.21}
\]
Therefore, for $\tilde{x}\in S$,
\[
\begin{aligned}
\langle(\mu F-\gamma h)(x_s),\ x_s-\tilde{x}\rangle
&=-\frac{1}{s}\bigl\langle\bigl(I-\mathrm{Proj}_C(I-\lambda_s\nabla f)\bigr)(x_s)-\bigl(I-\mathrm{Proj}_C(I-\lambda_s\nabla f)\bigr)(\tilde{x}),\ x_s-\tilde{x}\bigr\rangle\\
&\quad+\mu\langle F(x_s)-F\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s),\ x_s-\tilde{x}\rangle\\
&\le\mu\langle F(x_s)-F\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(x_s),\ x_s-\tilde{x}\rangle. \tag{3.22}
\end{aligned}
\]
Since $\mathrm{Proj}_C(I-\lambda_s\nabla f)$ is nonexpansive, $I-\mathrm{Proj}_C(I-\lambda_s\nabla f)$ is monotone, that is,
\[
\bigl\langle\bigl(I-\mathrm{Proj}_C(I-\lambda_s\nabla f)\bigr)(x_s)-\bigl(I-\mathrm{Proj}_C(I-\lambda_s\nabla f)\bigr)(\tilde{x}),\ x_s-\tilde{x}\bigr\rangle\ge 0. \tag{3.23}
\]
Taking the limit through $s=s_n\to 0$ ensures that $\bar{x}$ is a solution of (3.12), that is,
\[
\langle(\mu F-\gamma h)(\bar{x}),\ \bar{x}-\tilde{x}\rangle\le 0. \tag{3.24}
\]
Hence $\bar{x}=x^*$ by uniqueness. Therefore, $x_s\to x^*$ as $s\to 0$. The variational inequality (3.12) can be written as
\[
\langle(I-\mu F+\gamma h)x^*-x^*,\ \tilde{x}-x^*\rangle\le 0, \quad \tilde{x}\in S. \tag{3.25}
\]
So, by Lemma 2.4, it is equivalent to the fixed-point equation
\[
P_S(I-\mu F+\gamma h)x^*=x^*. \tag{3.26}
\]

Taking $F=A$ and $\mu=1$ in Theorem 3.2, we get the following corollary.

Corollary 3.3.

We have that $\{x_s\}$ converges in norm as $s\to 0$ to a minimizer of (1.1) which solves the variational inequality
\[
\langle(A-\gamma h)x^*,\ \tilde{x}-x^*\rangle\ge 0, \quad \tilde{x}\in S. \tag{3.27}
\]
Equivalently, we have $\mathrm{Proj}_S\bigl(I-(A-\gamma h)\bigr)x^*=x^*$.

Taking $F=I$, $\mu=1$, and $\gamma=1$ in Theorem 3.2, we get the following corollary.

Corollary 3.4.

Let $z_s\in H$ be the unique fixed point of the contraction $z\mapsto sh(z)+(1-s)\,\mathrm{Proj}_C(I-\lambda_s\nabla f)(z)$. Then $\{z_s\}$ converges in norm as $s\to 0$ to the unique solution of the variational inequality
\[
\langle(I-h)x^*,\ \tilde{x}-x^*\rangle\ge 0, \quad \tilde{x}\in S. \tag{3.28}
\]

Finally, we consider the following hybrid gradient-projection algorithm:
\[
\begin{cases}
x_0\in C \text{ chosen arbitrarily},\\
x_{n+1}=\theta_n\gamma h(x_n)+(I-\mu\theta_n F)\,\mathrm{Proj}_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr), \quad n\ge 0.
\end{cases} \tag{3.29}
\]
Assume that the sequence $\{\lambda_n\}_{n=0}^{\infty}$ satisfies condition (1.4) and, in addition, that the following conditions are satisfied for $\{\lambda_n\}_{n=0}^{\infty}$ and $\{\theta_n\}_{n=0}^{\infty}\subset[0,1]$:

(i) $\theta_n\to 0$;

(ii) $\sum_{n=0}^{\infty}\theta_n=\infty$;

(iii) $\sum_{n=0}^{\infty}|\theta_{n+1}-\theta_n|<\infty$;

(iv) $\sum_{n=0}^{\infty}|\lambda_{n+1}-\lambda_n|<\infty$.
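With concrete parameter choices satisfying these conditions (all of the following are assumptions made for the demonstration: $\theta_n=1/(n+1)$, which satisfies (i)-(iii), and a constant $\lambda_n$, which satisfies (1.4) and (iv)), algorithm (3.29) can be run on a toy problem. The operators $F=I$, $h(x)=\tfrac12 x$, and the constants $\mu$, $\gamma$ below are likewise illustrative:

```python
# Illustrative run of the hybrid algorithm (3.29) on a toy problem:
# minimize f(x) = 0.5*||x - p||^2 over C = [0,1]^2 with p = (2, -0.5),
# theta_n = 1/(n+1), lambda_n = 1 constant in (0, 2/L) with L = 1,
# F = I, mu = 1, gamma = 0.5, h(x) = 0.5*x (a contraction, rho = 0.5).
import numpy as np

p = np.array([2.0, -0.5])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
grad_f = lambda x: x - p                    # Lipschitz with L = 1
h = lambda x: 0.5 * x                       # contraction, rho = 0.5
mu, gamma = 1.0, 0.5

x = np.array([0.5, 0.5])
for n in range(2000):
    theta = 1.0 / (n + 1)                   # satisfies (i)-(iii)
    v = proj_C(x - 1.0 * grad_f(x))         # gradient-projection step
    x = theta * gamma * h(x) + v - theta * mu * v   # (I - mu*theta*F)v, F = I
# The unique minimizer over C is Proj_C(p) = (1, 0), so x_n approaches it.
```

Here the solution set $S$ is the singleton $\{(1,0)\}$, so the strong limit promised by the theorem below is forced regardless of $h$; the contraction term only perturbs the early iterates.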

Theorem 3.5.

Assume that the minimization problem (1.1) is consistent and that the gradient $\nabla f$ satisfies the Lipschitz condition (1.2). Let $\{x_n\}$ be generated by algorithm (3.29) with the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfying the above conditions. Then the sequence $\{x_n\}$ converges in norm to the point $x^*$ obtained in Theorem 3.2.

Proof.

(1) The sequence $\{x_n\}_{n=0}^{\infty}$ is bounded. Setting
\[
V_n:=\mathrm{Proj}_C(I-\lambda_n\nabla f), \tag{3.30}
\]
we have, for $\bar{x}\in S$,
\[
\begin{aligned}
\|x_{n+1}-\bar{x}\|&=\|\theta_n\gamma h(x_n)+(I-\mu\theta_n F)V_nx_n-\bar{x}\|\\
&=\|\theta_n(\gamma h(x_n)-\mu F(\bar{x}))+(I-\mu\theta_nF)V_nx_n-(I-\mu\theta_nF)V_n\bar{x}\|\\
&\le(1-\theta_n\tau)\|x_n-\bar{x}\|+\theta_n\gamma\rho\|x_n-\bar{x}\|+\theta_n\|\gamma h(\bar{x})-\mu F(\bar{x})\|\\
&=\bigl(1-\theta_n(\tau-\gamma\rho)\bigr)\|x_n-\bar{x}\|+\theta_n\|\gamma h(\bar{x})-\mu F(\bar{x})\|\\
&\le\max\Bigl\{\|x_n-\bar{x}\|,\ \frac{1}{\tau-\gamma\rho}\|\gamma h(\bar{x})-\mu F(\bar{x})\|\Bigr\}, \quad n\ge 0. \tag{3.31}
\end{aligned}
\]
By induction,
\[
\|x_n-\bar{x}\|\le\max\Bigl\{\|x_0-\bar{x}\|,\ \frac{\|\gamma h(\bar{x})-\mu F(\bar{x})\|}{\tau-\gamma\rho}\Bigr\}. \tag{3.32}
\]
In particular, $\{x_n\}_{n=0}^{\infty}$ is bounded.

(2) We prove that $\|x_{n+1}-x_n\|\to 0$ as $n\to\infty$. Let $M$ be a constant such that
\[
M>\max\Bigl\{\sup_{n\ge 0}\gamma\|h(x_n)\|,\ \sup_{k,n\ge 0}\mu\|FV_kx_n\|,\ \sup_{n\ge 0}\|\nabla f(x_n)\|\Bigr\}. \tag{3.33}
\]
We compute
\[
\begin{aligned}
\|x_{n+1}-x_n\|&=\|\theta_n\gamma h(x_n)+(I-\mu\theta_nF)V_nx_n-\theta_{n-1}\gamma h(x_{n-1})-(I-\mu\theta_{n-1}F)V_{n-1}x_{n-1}\|\\
&=\|\theta_n\gamma(h(x_n)-h(x_{n-1}))+\gamma(\theta_n-\theta_{n-1})h(x_{n-1})+(I-\mu\theta_nF)V_nx_n-(I-\mu\theta_nF)V_nx_{n-1}\\
&\quad+(I-\mu\theta_nF)V_nx_{n-1}-(I-\mu\theta_nF)V_{n-1}x_{n-1}+(I-\mu\theta_nF)V_{n-1}x_{n-1}-(I-\mu\theta_{n-1}F)V_{n-1}x_{n-1}\|\\
&\le\theta_n\gamma\rho\|x_n-x_{n-1}\|+\gamma|\theta_n-\theta_{n-1}|\,\|h(x_{n-1})\|+(1-\theta_n\tau)\|x_n-x_{n-1}\|\\
&\quad+\|V_nx_{n-1}-V_{n-1}x_{n-1}\|+\mu|\theta_n-\theta_{n-1}|\,\|FV_{n-1}x_{n-1}\|\\
&\le\bigl(1-\theta_n(\tau-\gamma\rho)\bigr)\|x_n-x_{n-1}\|+2M|\theta_n-\theta_{n-1}|+\|V_nx_{n-1}-V_{n-1}x_{n-1}\|, \tag{3.34}
\end{aligned}
\]
\[
\begin{aligned}
\|V_nx_{n-1}-V_{n-1}x_{n-1}\|&=\|\mathrm{Proj}_C(I-\lambda_n\nabla f)x_{n-1}-\mathrm{Proj}_C(I-\lambda_{n-1}\nabla f)x_{n-1}\|\\
&\le\|(I-\lambda_n\nabla f)x_{n-1}-(I-\lambda_{n-1}\nabla f)x_{n-1}\|\\
&=|\lambda_n-\lambda_{n-1}|\,\|\nabla f(x_{n-1})\|\le M|\lambda_n-\lambda_{n-1}|. \tag{3.35}
\end{aligned}
\]
Combining (3.34) and (3.35), we obtain
\[
\|x_{n+1}-x_n\|\le\bigl(1-(\tau-\gamma\rho)\theta_n\bigr)\|x_n-x_{n-1}\|+2M\bigl(|\theta_n-\theta_{n-1}|+|\lambda_n-\lambda_{n-1}|\bigr). \tag{3.36}
\]
Applying Lemma 2.1 to (3.36), we conclude that $\|x_{n+1}-x_n\|\to 0$ as $n\to\infty$.

(3) We prove that $\omega_w(x_n)\subset S$. Let $\hat{x}\in\omega_w(x_n)$, and assume that $x_{n_j}\rightharpoonup\hat{x}$ for some subsequence $\{x_{n_j}\}_{j=1}^{\infty}$ of $\{x_n\}_{n=0}^{\infty}$. We may further assume that $\lambda_{n_j}\to\lambda\in(0,2/L)$ due to condition (1.4). Set $V:=\mathrm{Proj}_C(I-\lambda\nabla f)$. Notice that $V$ is nonexpansive and $\mathrm{Fix}\,V=S$. It turns out that
\[
\begin{aligned}
\|x_{n_j}-Vx_{n_j}\|&\le\|x_{n_j}-V_{n_j}x_{n_j}\|+\|V_{n_j}x_{n_j}-Vx_{n_j}\|\\
&\le\|x_{n_j}-x_{n_j+1}\|+\|x_{n_j+1}-V_{n_j}x_{n_j}\|+\|V_{n_j}x_{n_j}-Vx_{n_j}\|\\
&\le\|x_{n_j}-x_{n_j+1}\|+\theta_{n_j}\|\gamma h(x_{n_j})-\mu FV_{n_j}x_{n_j}\|+\|\mathrm{Proj}_C(I-\lambda_{n_j}\nabla f)x_{n_j}-\mathrm{Proj}_C(I-\lambda\nabla f)x_{n_j}\|\\
&\le\|x_{n_j}-x_{n_j+1}\|+\theta_{n_j}\|\gamma h(x_{n_j})-\mu FV_{n_j}x_{n_j}\|+|\lambda-\lambda_{n_j}|\,\|\nabla f(x_{n_j})\|\\
&\le\|x_{n_j}-x_{n_j+1}\|+2M\bigl(\theta_{n_j}+|\lambda-\lambda_{n_j}|\bigr)\to 0 \quad\text{as } j\to\infty. \tag{3.37}
\end{aligned}
\]
So Lemma 2.2 guarantees that $\omega_w(x_n)\subset\mathrm{Fix}\,V=S$.

(4) We prove that $x_n\to x^*$ as $n\to\infty$, where $x^*$ is the unique solution of the VI (3.12). First observe that there is some $\hat{x}\in\omega_w(x_n)\subset S$ such that
\[
\limsup_{n\to\infty}\langle(\gamma h-\mu F)x^*,\ x_n-x^*\rangle=\langle(\gamma h-\mu F)x^*,\ \hat{x}-x^*\rangle\le 0, \tag{3.38}
\]
where the last inequality follows from (3.12).

We now compute
\[
\begin{aligned}
\|x_{n+1}-x^*\|^2&=\|\theta_n\gamma h(x_n)+(I-\mu\theta_nF)\,\mathrm{Proj}_C(I-\lambda_n\nabla f)(x_n)-x^*\|^2\\
&=\|\theta_n\gamma(h(x_n)-h(x^*))+(I-\mu\theta_nF)V_n(x_n)-(I-\mu\theta_nF)V_nx^*+\theta_n(\gamma h(x^*)-\mu Fx^*)\|^2\\
&\le\|\theta_n\gamma(h(x_n)-h(x^*))+(I-\mu\theta_nF)V_n(x_n)-(I-\mu\theta_nF)V_nx^*\|^2\\
&\quad+2\theta_n\langle(\gamma h-\mu F)x^*,\ x_{n+1}-x^*\rangle\\
&\le\bigl(\theta_n\gamma\rho\|x_n-x^*\|+(1-\theta_n\tau)\|x_n-x^*\|\bigr)^2+2\theta_n\langle(\gamma h-\mu F)x^*,\ x_{n+1}-x^*\rangle\\
&=\bigl(\theta_n^2\gamma^2\rho^2+(1-\theta_n\tau)^2+2\theta_n\gamma\rho(1-\theta_n\tau)\bigr)\|x_n-x^*\|^2+2\theta_n\langle(\gamma h-\mu F)x^*,\ x_{n+1}-x^*\rangle\\
&\le\bigl(\theta_n\gamma^2\rho^2+1-2\theta_n\tau+\theta_n\tau^2+2\theta_n\gamma\rho\bigr)\|x_n-x^*\|^2+2\theta_n\langle(\gamma h-\mu F)x^*,\ x_{n+1}-x^*\rangle\\
&=\bigl(1-\theta_n(2\tau-\gamma^2\rho^2-\tau^2-2\gamma\rho)\bigr)\|x_n-x^*\|^2+2\theta_n\langle(\gamma h-\mu F)x^*,\ x_{n+1}-x^*\rangle. \tag{3.39}
\end{aligned}
\]
Applying Lemma 2.1 to (3.39), together with (3.38), we get $\|x_n-x^*\|\to 0$ as $n\to\infty$.

Corollary 3.6 (see [11]).

Let $\{x_n\}$ be generated by the following algorithm:
\[
x_{n+1}=\theta_nh(x_n)+(1-\theta_n)\,\mathrm{Proj}_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr), \quad n\ge 0. \tag{3.40}
\]
Assume that the sequence $\{\lambda_n\}_{n=0}^{\infty}$ satisfies conditions (1.4) and (iv) and that $\{\theta_n\}\subset[0,1]$ satisfies conditions (i)-(iii). Then $\{x_n\}$ converges in norm to the $x^*$ obtained in Corollary 3.4.

Corollary 3.7.

Let $\{x_n\}$ be generated by the following algorithm:
\[
x_{n+1}=\theta_n\gamma h(x_n)+(I-\theta_nA)\,\mathrm{Proj}_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr), \quad n\ge 0. \tag{3.41}
\]
Assume that the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfy the conditions of Theorem 3.5. Then $\{x_n\}$ converges in norm to the $x^*$ obtained in Corollary 3.3.

Acknowledgments

Ming Tian is supported in part by the Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China, No. ZXH2012K001) and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).

References

1. E. S. Levitin and B. T. Poljak, "Minimization methods in the presence of constraints," Žurnal Vyčislitel'noĭ Matematiki i Matematičeskoĭ Fiziki, vol. 6, pp. 787–823, 1966.
2. P. H. Calamai and J. J. Moré, "Projected gradient methods for linearly constrained problems," Mathematical Programming, vol. 39, no. 1, pp. 93–116, 1987.
3. B. T. Polyak, Introduction to Optimization, Translations Series in Mathematics and Engineering, Optimization Software, New York, NY, USA, 1987.
4. M. Su and H.-K. Xu, "Remarks on the gradient-projection algorithm," Journal of Nonlinear Analysis and Optimization, vol. 1, pp. 35–43, 2010.
5. Y. Yao and H.-K. Xu, "Iterative methods for finding minimum-norm fixed points of nonexpansive mappings with applications," Optimization, vol. 60, no. 6, pp. 645–658, 2011.
6. Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
7. C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
8. J. S. Jung, "Strong convergence of composite iterative methods for equilibrium problems and fixed point problems," Applied Mathematics and Computation, vol. 213, no. 2, pp. 498–505, 2009.
9. G. Lopez, V. Martin, and H.-K. Xu, "Iterative algorithms for the multiple-sets split feasibility problem," in Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, Y. Censor, M. Jiang, and G. Wang, Eds., pp. 243–279, Medical Physics, Madison, Wis, USA, 2009.
10. P. Kumam, "A hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping," Nonlinear Analysis: Hybrid Systems, vol. 2, no. 4, pp. 1245–1255, 2008.
11. H.-K. Xu, "Averaged mappings and the gradient-projection algorithm," Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 360–378, 2011.
12. P. Kumam, "A new hybrid iterative method for solution of equilibrium problems and fixed point problems for an inverse strongly monotone operator and a nonexpansive mapping," Journal of Applied Mathematics and Computing, vol. 29, no. 1-2, pp. 263–280, 2009.
13. H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland, Amsterdam, The Netherlands, 1973.
14. P. L. Combettes, "Solving monotone inclusions via compositions of nonexpansive averaged operators," Optimization, vol. 53, no. 5-6, pp. 475–504, 2004.
15. Y. Yao, Y.-C. Liou, and R. Chen, "A general iterative method for an infinite family of nonexpansive mappings," Nonlinear Analysis, vol. 69, no. 5-6, pp. 1644–1654, 2008.
16. M. Tian, "A general iterative algorithm for nonexpansive mappings in Hilbert spaces," Nonlinear Analysis, vol. 73, no. 3, pp. 689–694, 2010.
17. H.-K. Xu, "Iterative algorithms for nonlinear operators," Journal of the London Mathematical Society, Second Series, vol. 66, no. 1, pp. 240–256, 2002.
18. K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 1990.