Splitting methods have recently received much attention due to the fact that many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are
mathematically modeled as a nonlinear operator equation whose operator is decomposed as the sum of two (possibly simpler) nonlinear operators. Most of the investigation of splitting methods has, however, been carried out in the framework of Hilbert spaces. In this paper, we consider these methods in the setting of Banach spaces. We shall introduce two iterative forward-backward splitting methods with relaxations and errors to find zeros of the sum of two accretive operators in Banach spaces. We shall prove the weak and strong convergence of these methods under mild conditions. We also discuss applications of these methods to variational inequalities, the split feasibility problem, and a constrained convex minimization problem.
1. Introduction
Splitting methods have recently received much attention due to the fact that many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation and this operator is decomposed as the sum of two (possibly simpler) nonlinear operators. Splitting methods for linear equations were introduced by Peaceman and Rachford [1] and Douglas and Rachford [2]. Extensions to nonlinear equations in Hilbert spaces were carried out by Kellogg [3] and Lions and Mercier [4] (see also [5–7]). The central problem is to iteratively find a zero of the sum of two monotone operators A and B in a Hilbert space ℋ, namely, a solution to the inclusion problem
(1.1)0∈(A+B)x.
Many problems can be formulated as a problem of form (1.1). For instance, a stationary solution to the initial value problem of the evolution equation
(1.2)∂u∂t+Fu∋0,u(0)=u0
can be recast as (1.1) when the governing maximal monotone operator F is of the form F=A+B [4]. In optimization, one often needs [8] to solve a minimization problem of the form
(1.3)minx∈Hf(x)+g(Tx),
where f,g are proper lower semicontinuous convex functions from ℋ to the extended real line (-∞,∞], and T is a bounded linear operator on ℋ. As a matter of fact, (1.3) is equivalent to (1.1) (assuming that f and g∘T have a common point of continuity) with A:=∂f and B:=T*∘∂g∘T. Here T* is the adjoint of T and ∂f is the subdifferential operator of f in the sense of convex analysis. It is known [8, 9] that the minimization problem (1.3) is widely used in image recovery, signal processing, and machine learning.
A splitting method for (1.1) means an iterative method in which each iteration involves only the individual operators A and B, but not the sum A+B. To solve (1.1), Lions and Mercier [4] introduced the nonlinear Peaceman-Rachford and Douglas-Rachford splitting iterative algorithms which generate a sequence {υn} by the recursion
(1.4)υn+1=(2JλA-I)(2JλB-I)υn
and respectively, a sequence {υn} by the recursion
(1.5)υn+1=JλA(2JλB-I)υn+(I-JλB)υn.
Here we use JλT to denote the resolvent of a monotone operator T; that is, JλT=(I+λT)-1.
The nonlinear Peaceman-Rachford algorithm (1.4) fails, in general, to converge (even in the weak topology in the infinite-dimensional setting). This is due to the fact that the generating operator (2JλA-I)(2JλB-I) of the algorithm (1.4) is merely nonexpansive. However, the mean averages of {υn} can be weakly convergent [5]. The nonlinear Douglas-Rachford algorithm (1.5) always converges in the weak topology to a point υ, and u=JλBυ is a solution to (1.1), since the generating operator JλA(2JλB-I)+(I-JλB) of this algorithm is firmly nonexpansive; namely, the operator is of the form (I+T)/2, where T is nonexpansive.
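For illustration, the Douglas-Rachford recursion (1.5) can be run numerically. The following toy sketch (our own example, not from the paper) works on the real line, a one-dimensional Hilbert space, with the affine monotone operators A(x)=x-a and B(x)=x-b; their sum vanishes exactly at x*=(a+b)/2, and the resolvent of such an affine operator is available in closed form.

```python
# Toy sketch of the Douglas-Rachford recursion (1.5) on the real line
# (illustrative example, not from the paper). For T(x) = x - c the
# resolvent is J_lam(x) = (I + lam*T)^(-1)(x) = (x + lam*c)/(1 + lam).

def resolvent_affine(x, c, lam):
    """Resolvent (I + lam*T)^(-1) of the affine operator T(x) = x - c."""
    return (x + lam * c) / (1.0 + lam)

def douglas_rachford(a, b, lam=1.0, v0=10.0, n_iter=60):
    v = v0
    for _ in range(n_iter):
        jb = resolvent_affine(v, b, lam)                     # J_{lam B} v_n
        v = resolvent_affine(2 * jb - v, a, lam) + (v - jb)  # recursion (1.5)
    return resolvent_affine(v, b, lam)   # the solution is u = J_{lam B} v

print(abs(douglas_rachford(0.0, 4.0) - 2.0) < 1e-6)   # zero of A+B is (0+4)/2 = 2
```

Note that the recursion is expressed entirely through the two resolvents, never through A+B, which is the defining feature of a splitting method.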
There is, however, little work in the existing literature on splitting methods for nonlinear operator equations in the setting of Banach spaces (though there was some work on finding a common zero of a finite family of accretive operators [10–12]).
The main difficulties are due to the fact that the inner product structure of a Hilbert space is no longer available in a Banach space. We shall in this paper use the technique of duality maps to carry out some initial investigations of splitting methods for accretive operators in Banach spaces. Namely, we will study splitting iterative methods for solving the inclusion problem (1.1), where A and B are accretive operators in a Banach space X.
We will consider the case where A is a single-valued accretive operator and B is a possibly multivalued m-accretive operator in a Banach space X, and assume that the inclusion (1.1) has a solution. We introduce the following two iterative methods, which we call the Mann-type and, respectively, the Halpern-type forward-backward methods with errors, and which generate a sequence {xn} by the recursions
(1.6)xn+1=(1-αn)xn+αn(JrnB(xn-rn(Axn+an))+bn),
(1.7)xn+1=αnu+(1-αn)(JrnB(xn-rn(Axn+an))+bn),
where JrB is the resolvent of the operator B of order r (i.e., JrB=(I+rB)-1), and {αn} is a sequence in (0,1]. We will prove weak convergence of (1.6) and strong convergence of (1.7) to a solution to (1.1) in some class of Banach spaces which will be made clear in Section 3.
The paper is organized as follows. In the next section we introduce the class of Banach spaces in which we shall study our splitting methods for solving (1.1). We also introduce the concept of accretive and m-accretive operators in a Banach space. In Section 3, we discuss the splitting algorithms (1.6) and (1.7) and prove their weak and strong convergence, respectively. In Section 4, we discuss applications of both algorithms (1.6) and (1.7) to variational inequalities, fixed points of pseudocontractions, convexly constrained minimization problems, the split feasibility problem, and linear inverse problems.
2. Preliminaries
Throughout the paper, X is a real Banach space with norm ∥·∥, distance d, and dual space X*. The symbol 〈x*,x〉 denotes the pairing between X* and X, that is, 〈x*,x〉=x*(x), the value of x* at x. C will denote a nonempty closed convex subset of X, unless otherwise stated, and ℬr the closed ball with center zero and radius r. The expressions xn→x and xn⇀x denote the strong and weak convergence of the sequence {xn}, respectively, and ωw(xn) stands for the set of weak limit points of the sequence {xn}.
The modulus of convexity of X is the function δX(ε):(0,2]→[0,1] defined by
(2.1)δX(ε)=inf{1-‖x+y‖2:‖x‖=‖y‖=1,‖x-y‖≥ε}.
Recall that X is said to be uniformly convex if δX(ε)>0 for any ε∈(0,2]. Let p>1. We say that X is p-uniformly convex if there exists a constant cp>0 so that δX(ε)≥cpεp for any ε∈(0,2].
The modulus of smoothness of X is the function ρX(τ):ℝ+→ℝ+ defined by
(2.2)ρX(τ)=sup{‖x+τy‖+‖x-τy‖2-1:‖x‖=‖y‖=1}.
Recall that X is called uniformly smooth if limτ→0ρX(τ)/τ=0. Let 1<q≤2. We say that X is q-uniformly smooth if there is a cq>0 so that ρX(τ)≤cqτq for τ>0. It is known that X is p-uniformly convex if and only if X* is q-uniformly smooth, where 1/p+1/q=1. For instance, Lp spaces are 2-uniformly convex and p-uniformly smooth if 1<p≤2, and p-uniformly convex and 2-uniformly smooth if p≥2.
The norm of X is said to be Fréchet differentiable if, for each x∈X,
(2.3)limλ→0‖x+λy‖-‖x‖λ
exists and is attained uniformly for all y such that ∥y∥=1. It can be proved that X is uniformly smooth if and only if the limit (2.3) exists and is attained uniformly for all (x,y) such that ∥x∥=∥y∥=1. In particular, a uniformly smooth Banach space has a Fréchet differentiable norm.
The subdifferential of a proper convex function f:X→(-∞,+∞] is the set-valued operator ∂f:X→2X* defined as
(2.4)∂f(x)={x*∈X*:〈x*,y-x〉+f(x)≤f(y)}.
If f is proper, convex, and lower semicontinuous, the subdifferential ∂f(x)≠∅ for any x∈int𝒟(f), the interior of the domain of f. The generalized duality mapping 𝒥p:X→2X* is defined by
(2.5)Jp(x)={j(x)∈X*:〈j(x),x〉=‖x‖p,‖j(x)‖=‖x‖p-1}.
If p=2, the corresponding duality mapping is called the normalized duality mapping and denoted by 𝒥. It can be proved that, for any x∈X,
(2.6)Jp(x)=∂(1p‖x‖p).
Thus we have the following subdifferential inequality, for any x,y∈X:
(2.7)‖x+y‖p≤‖x‖p+p〈y,j(x+y)〉,j(x+y)∈Jp(x+y).
In particular, we have, for x,y∈X,
(2.8)‖x+y‖2≤‖x‖2+2〈y,j(x+y)〉,j(x+y)∈J(x+y).
Some properties of the duality mappings are collected as follows.
Proposition 2.1 (see Cioranescu [13]).
Let 1<p<∞.
The Banach space X is smooth if and only if the duality mapping 𝒥p is single valued.
The Banach space X is uniformly smooth if and only if the duality mapping 𝒥p is single-valued and norm-to-norm uniformly continuous on bounded sets of X.
Among the estimates satisfied by p-uniformly convex and p-uniformly smooth spaces, the following ones will come in handy.
Lemma 2.2 (see Xu [14]).
Let 1<p<∞,q∈(1,2],r>0 be given.
If X is uniformly convex, then there exists a continuous, strictly increasing and convex function φ:ℝ+→ℝ+ with φ(0)=0 such that
(2.9)‖λx+(1-λ)y‖p≤λ‖x‖p+(1-λ)‖y‖p-Wp(λ)φ(‖x-y‖),x,y∈Br,0≤λ≤1,
where Wp(λ)=λp(1-λ)+(1-λ)λp.
If X is q-uniformly smooth, then there exists a constant κq>0 such that
(2.10)‖x+y‖q≤‖x‖q+q〈Jq(x),y〉+κq‖y‖q,x,y∈X.
The best constant κq satisfying (2.10) will be called the q-uniform smoothness coefficient of X. For instance [14], for 2≤p<∞, Lp is 2-uniformly smooth with κ2=p-1, and for 1<p≤2, Lp is p-uniformly smooth with κp=(1+tp^(p-1))(1+tp)^(1-p), where tp is the unique solution to the equation
(2.11)(p-2)t^(p-1)+(p-1)t^(p-2)-1=0,0<t<1.
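For a concrete feel of these constants, the scalar equation above can be solved numerically. The sketch below is our own illustration; it reads the flattened exponents in (2.11) as (p-2)t^(p-1)+(p-1)t^(p-2)-1=0 on (0,1), an assumption about the intended typography, finds tp by bisection, and evaluates κp.

```python
# Numerical sketch (our own illustration) of the constants around (2.11):
# we read the equation as (p-2)*t**(p-1) + (p-1)*t**(p-2) - 1 = 0 on (0,1)
# (an assumption about the flattened exponents), solve for t_p by bisection,
# and evaluate kappa_p = (1 + t_p**(p-1)) * (1 + t_p)**(1-p).

def t_p(p, tol=1e-12):
    """Root of (p-2)t^(p-1) + (p-1)t^(p-2) - 1 on (0,1) for 1 < p < 2."""
    f = lambda t: (p - 2) * t ** (p - 1) + (p - 1) * t ** (p - 2) - 1
    lo, hi = 1e-9, 1.0 - 1e-9   # f(lo) > 0 and f(hi) < 0 when 1 < p < 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def kappa_p(p):
    t = t_p(p)
    return (1 + t ** (p - 1)) * (1 + t) ** (1 - p)

# For p = 3/2 the root can be found by hand: t_{3/2} = (sqrt(2)-1)^2.
print(abs(t_p(1.5) - (2 ** 0.5 - 1) ** 2) < 1e-9)
```

The root exists because the left-hand side tends to +∞ as t→0+ and equals 2p-4<0 at t=1 when 1<p<2.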
In a Banach space X with the Fréchet differentiable norm, there exists a function h:[0,∞)→[0,∞) such that limt→0h(t)/t=0 and, for all x,u∈X,
(2.12)(1/2)‖x‖2+〈u,J(x)〉≤(1/2)‖x+u‖2≤(1/2)‖x‖2+〈u,J(x)〉+h(‖u‖).
Recall that T:C→C is a nonexpansive mapping if ∥Tx-Ty∥≤∥x-y∥, for all x,y∈C. From now on, Fix(T) denotes the fixed point set of T. The following lemma claims that the demiclosedness principle for nonexpansive mappings holds in uniformly convex Banach spaces.
Lemma 2.3 (see Browder [15]).
Let C be a nonempty closed convex subset of a uniformly convex space X and T a nonexpansive mapping with Fix(T)≠∅. If {xn} is a sequence in C such that xn⇀x and (I-T)xn→y, then (I-T)x=y. In particular, if y=0, then x∈Fix(T).
A set-valued operator A:X→2X, with domain 𝒟(A) and range ℛ(A), is said to be accretive if, for all t>0 and every x,y∈𝒟(A),
(2.13)‖x-y‖≤‖x-y+t(u-υ)‖,u∈Ax,υ∈Ay.
It follows from Lemma 1.1 of Kato [16] that A is accretive if and only if, for each x,y∈𝒟(A), there exists j(x-y)∈J(x-y) such that
(2.14)〈u-υ,j(x-y)〉≥0,u∈Ax,υ∈Ay.
An accretive operator A is said to be m-accretive if the range ℛ(I+λA)=X for some λ>0. It can be shown that an accretive operator A is m-accretive if and only if ℛ(I+λA)=X for all λ>0.
Given α>0 and q∈(1,∞), we say that an accretive operator A is α-inverse strongly accretive (α-isa) of order q if, for each x,y∈𝒟(A), there exists jq(x-y)∈𝒥q(x-y) such that
(2.15)〈u-υ,jq(x-y)〉≥α‖u-υ‖q,u∈Ax,υ∈Ay.
When q=2, we simply say α-isa instead of α-isa of order 2; that is, A is α-isa if, for each x,y∈𝒟(A), there exists j(x-y)∈𝒥(x-y) such that
(2.16)〈u-υ,j(x-y)〉≥α‖u-υ‖2,u∈Ax,υ∈Ay.
Given a subset K of C and a mapping T:C→K, recall that T is a retraction of C onto K if Tx=x for all x∈K. We say that T is sunny if, for each x∈C and t≥0, we have
(2.17)T(tx+(1-t)Tx)=Tx,
whenever tx+(1-t)Tx∈C.
The first result regarding the existence of sunny nonexpansive retractions onto the fixed point set of a nonexpansive mapping is due to Bruck.
Theorem 2.4 (see Bruck [17]).
If X is strictly convex and uniformly smooth and if T:C→C is a nonexpansive mapping having a nonempty fixed point set Fix(T), then there exists a sunny nonexpansive retraction of C onto Fix(T).
The following technical lemma regarding convergence of real sequences will be used when we discuss convergence of algorithms (1.6) and (1.7) in the next section.
Lemma 2.5 (see [18, 19]).
Let {an}, {cn}⊂ℝ+, {αn}⊂(0,1), and {bn}⊂ℝ be sequences such that
(2.18)an+1≤(1-αn)an+bn+cn,∀n≥0.
Assume ∑n=0∞cn<∞. Then the following results hold:
(i) If bn≤αnM where M≥0, then {an} is a bounded sequence.
(ii) If ∑n=0∞αn=∞ and limsupn→∞bn/αn≤0, then limn→∞an=0.
3. Splitting Methods for Accretive Operators
In this section we assume that X is a real Banach space and C is a nonempty closed subset of X. We also assume that A is a single-valued and α-isa operator for some α>0 and B is an m-accretive operator in X, with 𝒟(A)⊃C and 𝒟(B)⊃C. Moreover, we always use Jr to denote the resolvent of B of order r>0; that is,
(3.1)Jr≡JrB=(I+rB)-1.
It is known that the m-accretiveness of B implies that Jr is single valued, defined on the entire X, and firmly nonexpansive; that is,
(3.2)‖Jrx-Jry‖≤‖s(x-y)+(1-s)(Jrx-Jry)‖,x,y∈X,0≤s≤1.
Below we fix the following notation:
(3.3)Tr:=Jr(I-rA)=(I+rB)-1(I-rA).
Lemma 3.1.
For r>0, Fix(Tr)=(A+B)-1(0).
Proof.
From the definition of Tr, it follows that
(3.4)x=Trx⟺x=(I+rB)-1(x-rAx)⟺x-rAx∈x+rBx⟺0∈Ax+Bx.
This lemma shows that, in order to solve the inclusion problem (1.1), it suffices to find a fixed point of Tr. Since Tr is already “split,” an iterative algorithm for Tr corresponds to a splitting algorithm for (1.1). However, to guarantee convergence (weak or strong) of an iterative algorithm for Tr, we need good metric properties of Tr such as nonexpansivity. To this end, we need geometric conditions on the underlying space X (see Lemma 3.3).
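Lemma 3.1 can be checked numerically in the simplest Hilbert space X=ℝ. In the toy instance below (our own illustration, not from the paper), A(x)=x-3 is single valued and 1-isa, and B=∂|·|, whose resolvent (I+rB)^(-1) is the familiar soft-thresholding map; the zero of A+B solves x-3+sign(x)∋0, that is, x*=2.

```python
# Numerical check of Lemma 3.1 in the Hilbert space R (a toy instance):
# A(x) = x - 3 is 1-isa, B is the subdifferential of |.|, whose resolvent
# (I + rB)^(-1) is soft-thresholding at level r. The zero of A + B is x* = 2.

def soft_threshold(x, r):
    """Resolvent J_r = (I + rB)^(-1) for B = subdifferential of |.|."""
    return max(abs(x) - r, 0.0) * (1.0 if x >= 0 else -1.0)

def T(x, r):
    """T_r = J_r(I - rA) as in (3.3), with A(x) = x - 3."""
    return soft_threshold(x - r * (x - 3.0), r)

# x* = 2 is a fixed point of T_r for every r > 0 tested ...
print(all(abs(T(2.0, r) - 2.0) < 1e-9 for r in (0.1, 0.5, 1.0)))
# ... and Picard iteration of T_r (with r small enough) recovers it:
x = 10.0
for _ in range(200):
    x = T(x, 0.5)
print(abs(x - 2.0) < 1e-8)
```

Here r=0.5 lies below the nonexpansivity threshold of Remark 3.4 (with q=2, α=1, κ2=1 in the Hilbert case), so the Picard iterates converge.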
Lemma 3.2.
Given 0<s≤r and x∈X, there holds the relation
(3.5)‖x-Tsx‖≤2‖x-Trx‖.
Proof.
Note that ((x-Trx)/r)-Ax∈B(Trx). By the accretivity of B, we have js,r∈𝒥(Tsx-Trx) such that
(3.6)〈(x-Tsx)/s-(x-Trx)/r,js,r〉≥0.
It turns out that
(3.7)‖Tsx-Trx‖2≤((r-s)/r)〈x-Trx,js,r〉≤|1-s/r|‖x-Trx‖‖Tsx-Trx‖.
This along with the triangle inequality yields that
(3.8)‖x-Tsx‖≤‖x-Trx‖+‖Trx-Tsx‖≤‖x-Trx‖+|1-s/r|‖x-Trx‖≤2‖x-Trx‖.
We notice that, though the resolvent of an accretive operator is always firmly nonexpansive in a general Banach space, firm nonexpansiveness is insufficient to establish the bounds required to prove convergence of iterative algorithms for solving nonlinear equations governed by accretive operators. To overcome this difficulty, we need to impose additional properties on the underlying Banach space X. Lemma 3.3 below establishes an estimate sharper than nonexpansiveness of the mapping Tr, which is useful for proving the weak and strong convergence of algorithms (1.6) and (1.7).
Lemma 3.3.
Let X be a uniformly convex and q-uniformly smooth Banach space for some q∈(1,2]. Assume that A is a single-valued α-isa of order q in X. Then, given s>0, there exists a continuous, strictly increasing and convex function ϕq:ℝ+→ℝ+ with ϕq(0)=0 such that, for all x,y∈ℬs,
(3.9)‖Trx-Try‖q≤‖x-y‖q-r(αq-r^(q-1)κq)‖Ax-Ay‖q-ϕq(‖(I-Jr)(I-rA)x-(I-Jr)(I-rA)y‖),
where κq is the q-uniform smoothness coefficient of X (see Lemma 2.2).
Proof.
Put x^=x-rAx and y^=y-rAy. Since (x^-Jrx^)/r∈B(Jrx^), it follows from the accretiveness of B that
(3.10)‖Jrx^-Jry^‖≤‖(Jrx^-Jry^)+(r/2)((x^-Jrx^)/r-(y^-Jry^)/r)‖=‖(1/2)(x^-y^)+(1/2)(Jrx^-Jry^)‖.
Since x,y∈ℬs, by the accretivity of A it is easy to show that there exists t>0 such that x^-y^∈ℬt; hence Jrx^-Jry^∈ℬt, since Jr is nonexpansive. Now since X is uniformly convex, we can use Lemma 2.2 to find a continuous, strictly increasing and convex function φ:ℝ+→ℝ+, with φ(0)=0, satisfying
(3.11)‖(1/2)(x^-y^)+(1/2)(Jrx^-Jry^)‖q≤(1/2)‖x^-y^‖q+(1/2)‖Jrx^-Jry^‖q-Wq(1/2)φ(‖(I-Jr)x^-(I-Jr)y^‖)≤‖x^-y^‖q-(1/2^q)φ(‖(I-Jr)x^-(I-Jr)y^‖),
where the last inequality follows from the nonexpansivity of the resolvent Jr. Letting ϕq=φ/2^q and combining (3.10) and (3.11) yield
(3.12)‖Trx-Try‖q≤‖x^-y^‖q-ϕq(‖(I-Jr)x^-(I-Jr)y^‖).
On the other hand, since X is also q-uniformly smooth and A is α-isa of order q, we derive that
(3.13)‖x^-y^‖q=‖(x-y)-r(Ax-Ay)‖q≤‖x-y‖q+κqr^q‖Ax-Ay‖q-qr〈Ax-Ay,Jq(x-y)〉≤‖x-y‖q-r(αq-r^(q-1)κq)‖Ax-Ay‖q.
Finally the required inequality (3.9) follows from (3.12) and (3.13).
Remark 3.4.
Note that from Lemma 3.3 one deduces that, under the same conditions, if r≤(αq/κq)^(1/(q-1)), then the mapping Tr is nonexpansive.
3.1. Weak Convergence
Mann's iterative method [20] is a widely used method for finding fixed points of nonexpansive mappings [21]. We have shown that, under certain conditions, a splitting method for solving (1.1) reduces to a method for finding a fixed point of a nonexpansive mapping. The purpose of this subsection is therefore to introduce a Mann-type forward-backward method with errors in a uniformly convex and q-uniformly smooth Banach space and to prove its weak convergence. (See [22] for a similar treatment of the proximal point algorithm [23, 24] for finding zeros of monotone operators in the Hilbert space setting.) To this end we need a lemma about the uniqueness of weak cluster points of a sequence, whose proof, included here, follows the ideas presented in [21, 25].
Lemma 3.5.
Let C be a closed convex subset of a uniformly convex Banach space X with a Fréchet differentiable norm, and let {Tn} be a sequence of nonexpansive self-mappings on C with a nonempty common fixed point set F. If x0∈C and xn+1:=Tnxn+en, where ∑n=1∞∥en∥<∞, then 〈z1-z2,J(y1-y2)〉=0 for all y1,y2∈F and all z1,z2 weak limit points of {xn}.
Proof.
We first claim that the sequence {xn} is bounded. As a matter of fact, for each fixed p∈F and any n∈ℕ,
(3.14)‖xn+1-p‖=‖Tnxn-Tnp+en‖≤‖xn-p‖+‖en‖.
As ∑n=1∞∥en∥<∞, we can apply Lemma 2.5 to find that limn→∞∥xn-p∥ exists. In particular, {xn} is bounded.
Let us next prove that, for every y1,y2∈F and 0<t<1, the limit
(3.15)limn→∞‖txn+(1-t)y1-y2‖
exists. To see this, we set Sn,m=Tn+m-1Tn+m-2⋯Tn, which is nonexpansive. It is easy to see that we can rewrite {xn} in the manner
(3.16)xn+m=Sn,mxn+cn,m,n,m≥1,
where
(3.17)cn,m=Tn+m-1(Tn+m-2(⋯Tn+1(Tnxn+en)+en+1⋯)+en+m-2)+en+m-1-Sn,mxn.
By nonexpansivity, we have that
(3.18)‖cn,m‖≤∑k=nn+m-1‖ek‖,
and the summability of {en} implies that
(3.19)limn,m→∞‖cn,m‖=0.
Set
(3.20)an=‖txn+(1-t)y1-y2‖,dn,m=‖Sn,m(txn+(1-t)y1)-(tSn,mxn+(1-t)y1)‖.
Let K be a closed bounded convex subset of X containing {xn} and {y1,y2}. A result of Bruck [26] assures the existence of a strictly increasing continuous function g:[0,∞)→[0,∞) with g(0)=0 such that
(3.21)g(‖U(tx+(1-t)y)-(tUx+(1-t)Uy)‖)≤‖x-y‖-‖Ux-Uy‖
for all U:K→X nonexpansive, x,y∈K and 0≤t≤1. Applying (3.21) to each Sn,m, we obtain
(3.22)g(dn,m)≤‖xn-y1‖-‖Sn,mxn-Sn,my1‖=‖xn-y1‖-‖xn+m-y1-cn,m‖≤‖xn-y1‖-‖xn+m-y1‖+‖cn,m‖.
Now since limn→∞∥xn-y1∥ exists, (3.19) and (3.22) together imply that
(3.23)limn,m→∞dn,m=0.
Furthermore, we have
(3.24)an+m≤an+dn,m+‖cn,m‖.
After taking first limsupm→∞ and then liminfn→∞ in (3.24) and using (3.19) and (3.23), we get
(3.25)limsupm→∞am≤liminfn→∞an+limn,m→∞(dn,m+‖cn,m‖)=liminfn→∞an.
Hence the limit (3.15) exists.
If we replace now x and u in (2.12) with y1-y2 and t(xn-y1), respectively, we arrive at
(3.26)(1/2)‖y1-y2‖2+t〈xn-y1,J(y1-y2)〉≤(1/2)‖txn+(1-t)y1-y2‖2≤(1/2)‖y1-y2‖2+t〈xn-y1,J(y1-y2)〉+h(t‖xn-y1‖).
Since the limn→∞∥xn-y1∥ exists, we deduce that
(3.27)(1/2)‖y1-y2‖2+tlimsupn→∞〈xn-y1,J(y1-y2)〉≤limn→∞(1/2)‖txn+(1-t)y1-y2‖2≤(1/2)‖y1-y2‖2+tliminfn→∞〈xn-y1,J(y1-y2)〉+o(t),
where limt→0o(t)/t=0. Consequently, we deduce that
(3.28)limsupn→∞〈xn-y1,J(y1-y2)〉≤liminfn→∞〈xn-y1,J(y1-y2)〉+o(t)/t.
Letting t tend to 0, we conclude that limn→∞〈xn-y1,𝒥(y1-y2)〉 exists. Therefore, for any two weak limit points z1 and z2 of {xn}, 〈z1-y1,𝒥(y1-y2)〉=〈z2-y1,𝒥(y1-y2)〉; that is, 〈z1-z2,𝒥(y1-y2)〉=0.
Theorem 3.6.
Let X be a uniformly convex and q-uniformly smooth Banach space. Let A:X→X be an α-isa of order q and B:X→2X an m-accretive operator. Assume that S=(A+B)-1(0)≠∅. We define a sequence {xn} by the perturbed iterative scheme
(3.29)xn+1=(1-αn)xn+αn(Jrn(xn-rn(Axn+an))+bn),
where Jrn=(I+rnB)-1, {an},{bn}⊂X,{αn}⊂(0,1], and {rn}⊂(0,+∞). Assume that
∑n=1∞∥an∥<∞ and ∑n=1∞∥bn∥<∞;
0<liminfn→∞αn;
0<liminfn→∞rn≤limsupn→∞rn<(αq/κq)^(1/(q-1)).
Then {xn} converges weakly to some x∈S.
Proof.
Write Tn=(I+rnB)-1(I-rnA). Notice that we can write
(3.30)Jrn(xn-rn(Axn+an))+bn=Tnxn+en,
where en=Jrn(xn-rn(Axn+an))+bn-Tnxn. Then the iterative formula (3.29) turns into the form
(3.31)xn+1=(1-αn)xn+αn(Tnxn+en).
Thus, by nonexpansivity of Jrn,
(3.32)‖en‖≤‖Jrn(xn-rn(Axn+an))-Tnxn‖+‖bn‖≤rn‖an‖+‖bn‖.
Therefore, condition (i) implies
(3.33)∑n=1∞‖en‖<∞.
Take z∈S to deduce that, as S=Fix(Tn) and Tn is nonexpansive,
(3.34)‖xn+1-z‖≤(1-αn)‖xn-z‖+αn‖Tnxn-Tnz+en‖≤‖xn-z‖+αn‖en‖.
Due to (3.33), Lemma 2.5 is applicable and we get that limn→∞∥xn-z∥ exists; in particular, {xn} is bounded. Let M>0 be such that ∥xn∥<M for all n∈ℕ, and let s=q(M+∥z∥)^(q-1). By (2.7) and Lemma 3.3, we have
(3.35)‖xn+1-z‖q≤‖(1-αn)(xn-z)+αn(Tnxn-z)+αnen‖q≤‖(1-αn)(xn-z)+αn(Tnxn-z)‖q+qαn〈en,Jq(xn+1-z)〉≤(1-αn)‖xn-z‖q+αn‖Tnxn-Tnz‖q+qαn‖en‖‖xn+1-z‖q-1≤‖xn-z‖q-αnrn(αq-rn^(q-1)κq)‖Axn-Az‖q-αnϕq(‖xn-rnAxn-Tnxn+rnAz‖)+s‖en‖.
From (3.35), assumptions (ii) and (iii), and (3.33), it turns out that
(3.36)limn→∞‖Axn-Az‖q+‖xn-rnAxn-Tnxn+rnAz‖=0.
Consequently,
(3.37)limn→∞‖Tnxn-xn‖=0.
Since liminfn→∞rn>0, there exists ε>0 such that rn≥ε for all n≥0. Then, by Lemma 3.2,
(3.38)limn→∞‖Tεxn-xn‖≤2limn→∞‖Tnxn-xn‖=0.
By Lemmas 3.3 and 3.1, Tε is nonexpansive and Fix(Tε)=S≠∅. We can therefore make use of Lemma 2.3 to assure that
(3.39)ωw(xn)⊂S.
Finally we set Un=(1-αn)I+αnTn and rewrite scheme (3.31) as
(3.40)xn+1=Unxn+en′,
where the sequence {en′} satisfies ∑n=1∞∥en′∥<∞. Since {Un} is a sequence of nonexpansive mappings with S as its nonempty common fixed point set, and since the space X is uniformly convex with a Fréchet differentiable norm, we can apply Lemma 3.5 together with (3.39) to assert that the sequence {xn} has exactly one weak limit point; it is therefore weakly convergent.
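As a sanity check of Theorem 3.6 in the scalar Hilbert space ℝ (a toy instance of ours, not from the paper), the scheme (3.29) can be run with A(x)=x-3 (which is 1-isa), B=∂|·| (whose resolvent is soft-thresholding), constant αn and rn, and summable error sequences; here (A+B)^(-1)(0)={2}.

```python
# Toy run of the Mann-type forward-backward scheme (3.29) in R (our own
# illustrative instance): A(x) = x - 3 is 1-isa, B = subdifferential of |.|
# with soft-thresholding resolvent, and (A + B)^(-1)(0) = {2}. The error
# sequences a_n, b_n are summable, as condition (i) requires.

def soft_threshold(x, r):
    return max(abs(x) - r, 0.0) * (1.0 if x >= 0 else -1.0)

def mann_forward_backward(x0, n_iter=300, alpha=0.5, r=0.5):
    x = x0
    for n in range(n_iter):
        a_n = b_n = 0.5 ** (n + 1)          # summable perturbations
        y = soft_threshold(x - r * ((x - 3.0) + a_n), r) + b_n
        x = (1 - alpha) * x + alpha * y     # relaxation step of (3.29)
    return x

print(abs(mann_forward_backward(-5.0) - 2.0) < 1e-6)
```

Despite the perturbations a_n, b_n, the iterates settle at the zero x*=2, illustrating the robustness to summable errors built into the scheme.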
3.2. Strong Convergence
Halpern's method [27] is another iterative method for finding a fixed point of nonexpansive mappings. This method has been extensively studied in the literature [28–30] (see also the recent survey [31]). In this section we aim to introduce and prove the strong convergence of a Halpern-type forward-backward method with errors in uniformly convex and q-uniformly smooth Banach spaces. This result turns out to be new even in the setting of Hilbert spaces.
Theorem 3.7.
Let X be a uniformly convex and q-uniformly smooth Banach space. Let A:X→X be an α-isa of order q and B:X→2X an m-accretive operator. Assume that S=(A+B)-1(0)≠∅. We define a sequence {xn} by the iterative scheme
(3.41)xn+1=αnu+(1-αn)(Jrn(xn-rn(Axn+an))+bn),
where u∈X, Jrn=(I+rnB)-1, {an},{bn}⊂X, {αn}⊂(0,1], and {rn}⊂(0,+∞). Assume the following conditions are satisfied:
∑n=1∞∥an∥<∞ and ∑n=1∞∥bn∥<∞;
limn→∞αn=0 and ∑n=1∞αn=∞;
0<liminfn→∞rn≤limsupn→∞rn<(αq/κq)^(1/(q-1)).
Then {xn} converges in norm to z=Q(u), where Q is the sunny nonexpansive retraction of X onto S.
Proof.
Let z=Q(u), where Q is the sunny nonexpansive retraction of X onto S whose existence is ensured by Theorem 2.4. Let (yn) be a sequence generated by
(3.42)yn+1=αnu+(1-αn)Tnyn,
where we abbreviate Tn:=Jrn(I-rnA). Hence to show the desired result, it suffices to prove that yn→z. Indeed, since Jrn and I-rnA are both nonexpansive, it follows that
(3.43)‖xn+1-yn+1‖≤(1-αn)‖Jrn(xn-rn(Axn+an))+bn-Jrn(yn-rnAyn)‖≤(1-αn)‖(I-rnA)xn-(I-rnA)yn-rnan‖+‖bn‖≤(1-αn)‖xn-yn‖+L(‖an‖+‖bn‖),
where L:=max(1,(αq/κq)^(1/(q-1))). According to conditions (i) and (ii), we can apply Lemma 2.5(ii) to conclude that ∥xn-yn∥→0 as n→∞.
We next show yn→z. Indeed, since S=Fix(Tn) and Tn is nonexpansive, we have
(3.44)‖yn+1-z‖≤αn‖u-z‖+(1-αn)‖Tnyn-Tnz‖≤αn‖u-z‖+(1-αn)‖yn-z‖.
Hence, we can apply Lemma 2.5(i) to claim that {yn} is bounded.
Using the inequality (2.7) with p=q, we derive that
(3.45)‖yn+1-z‖q=‖αn(u-z)+(1-αn)Tnyn-z‖q≤(1-αn)q‖Tnyn-z‖q+qαn〈u-z,Jq(yn+1-z)〉.
By conditions (ii) and (iii), there exists δ>0 such that
(3.46)1-αn≥δ,(1-αn)rn(αq-rn^(q-1)κq)≥δ,
for all sufficiently large n; we may assume these inequalities hold for all n∈ℕ. Hence, by Lemma 3.3 we get from (3.45) that
(3.47)‖yn+1-z‖q≤(1-αn)‖yn-z‖q-δϕq(‖yn-rnAyn-Tnyn+rnAz‖)-δ‖Ayn-Az‖q+qαn〈u-z,Jq(yn+1-z)〉.
Let us define sn=∥yn-z∥q for all n≥0. Depending on the asymptotic behavior of the sequence {sn} we distinguish two cases.
Case 1.
Suppose that there exists N∈ℕ such that the sequence {sn}n≥N is nonincreasing; thus, limn→∞sn exists. Since αn→0, it follows immediately from (3.47) that
(3.48)limn→∞‖Ayn-Az‖q+‖yn-rnAyn-Tnyn+rnAz‖=0.
Consequently,
(3.49)limn→∞‖Tnyn-yn‖=0.
By condition (iii), there exists ε>0 such that rn≥ε for all n≥0. Then, by Lemma 3.2, we get
(3.50)limn→∞‖Tεyn-yn‖≤2limn→∞‖Tnyn-yn‖=0.
The demiclosedness principle (i.e., Lemma 2.3) implies that
(3.51)ωw(yn)⊂S.
Note that from inequality (3.47) we deduce that
(3.52)sn+1≤(1-αn)sn+qαn〈u-z,Jq(yn+1-z)〉.
Next we prove that
(3.53)limsupn→∞〈u-z,Jq(yn-z)〉≤0.
Equivalently (noting that Jq(x)=‖x‖q-2J(x) and that {yn} is bounded), we need to prove that
(3.54)limsupn→∞〈u-z,J(yn-z)〉≤0.
To this end, let zt satisfy zt=tu+Tεzt. By Reich's theorem [32], we get zt→QSu=z as t→0. Using subdifferential inequality, we deduce that
(3.55)‖zt-yn‖2=‖t(u-yn)+(1-t)(Tεzt-yn)‖2≤(1-t)2‖Tεzt-yn‖2+2t〈u-yn,J(zt-yn)〉≤(1-t)2(‖Tεzt-Tεyn‖+‖Tεyn-yn‖)2+2t‖zt-yn‖2+2t〈u-zt,J(zt-yn)〉≤(1+t2)‖zt-yn‖2+M‖Tεyn-yn‖+2t〈u-zt,J(zt-yn)〉,
where M>0 is a constant such that
(3.56)M>max{‖zt-yn‖2,2‖zt-yn‖+‖Tεyn-yn‖},t∈(0,1),n∈N.
Then it follows from (3.55) that
(3.57)〈u-zt,J(yn-zt)〉≤(M/2)t+(M/2t)‖Tεyn-yn‖.
Then, letting t→0 and noting the fact that the duality map 𝒥 is norm-to-norm uniformly continuous on bounded sets, we get (3.54) as desired. Due to (3.53), we can apply Lemma 2.5(ii) to (3.52) to conclude that sn→0; that is, yn→z.
Case 2.
Suppose that there exists n1∈ℕ such that sn1≤sn1+1. Let us define
(3.59)In:={n1≤k≤n:sk≤sk+1},n≥n1.
Obviously In≠∅ since n1∈In for any n≥n1. Set
(3.60)τ(n)=maxIn.
Note that the sequence {τ(n)} is nondecreasing and limn→∞τ(n)=∞. Moreover, τ(n)≤n and
(3.61)sτ(n)≤sτ(n)+1,(3.62)sn≤sτ(n)+1,
for any n≥n1 (see Lemma 3.1 of Maingé [33] for more details). From inequality (3.47) we get
(3.63)sτ(n)+1≤(1-ατ(n))sτ(n)-δϕ(‖yτ(n)-rτ(n)Ayτ(n)-Tτ(n)yτ(n)+rτ(n)Az‖)-δ‖Ayτ(n)-Az‖q+qατ(n)〈u-z,Jq(yτ(n)+1-z)〉.
It turns out that
(3.64)limn→∞‖Ayτ(n)-Az‖=0,limn→∞‖yτ(n)-rτ(n)Ayτ(n)-Tτ(n)yτ(n)+rτ(n)Az‖=0.
Consequently,
(3.65)limn→∞‖Tτ(n)yτ(n)-yτ(n)‖=0.
Now repeating the argument of the proof of (3.53) in Case 1, we can get
(3.66)limsupn→∞〈u-z,Jq(yτ(n)-z)〉≤0.
By the asymptotic regularity of {yτ(n)} and (3.65), we deduce that
(3.67)limn→∞‖yτ(n)+1-yτ(n)‖=0.
This implies that
(3.68)limsupn→∞〈u-z,J(yτ(n)+1-z)〉≤0.
On the other hand, it follows from (3.47) that
(3.69)sτ(n)+1-sτ(n)+ατ(n)sτ(n)≤qατ(n)〈u-z,Jq(yτ(n)+1-z)〉.
Taking the limsupn→∞ in (3.69) and using (3.61) and (3.68), we deduce that limsupn→∞sτ(n)≤0; hence limn→∞sτ(n)=0. That is, ∥yτ(n)-z∥→0. Using the triangle inequality,
(3.70)‖yτ(n)+1-z‖≤‖yτ(n)+1-yτ(n)‖+‖yτ(n)-z‖,
we also get that limn→∞sτ(n)+1=0, which together with (3.62) guarantees that ∥yn-z∥→0.
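The strong convergence asserted by Theorem 3.7 can be illustrated on a scalar toy instance (our own example, not from the paper): A(x)=x-3, B=∂|·|, with αn=1/(n+2), which satisfies αn→0 and ∑αn=∞, and with the error terms dropped for simplicity. Since S={2} is a singleton here, the iterates converge to 2 for every anchor u.

```python
# Toy run of the Halpern-type scheme (3.41) in R (our own instance):
# A(x) = x - 3, B = subdifferential of |.|, S = (A + B)^(-1)(0) = {2}.
# We take alpha_n = 1/(n+2) and set a_n = b_n = 0 for simplicity.

def soft_threshold(x, r):
    return max(abs(x) - r, 0.0) * (1.0 if x >= 0 else -1.0)

def halpern_forward_backward(u, x0, n_iter=2000, r=0.5):
    x = x0
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)                      # alpha_n -> 0, sum = inf
        y = soft_threshold(x - r * (x - 3.0), r)   # forward-backward step
        x = alpha * u + (1 - alpha) * y            # anchored step of (3.41)
    return x

# S is a singleton, so the limit is 2 whatever the anchor u is
# (Halpern iterations converge slowly, hence the loose tolerance):
print(abs(halpern_forward_backward(u=5.0, x0=0.0) - 2.0) < 1e-2)
```

In contrast to the Mann-type scheme, the anchor term αn·u forces norm convergence to the sunny nonexpansive retraction of u onto S, at the price of a slower (roughly O(1/n) in this example) rate.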
4. Applications
The two forward-backward methods previously studied, (3.29) and (3.41), find applications in other related problems such as variational inequalities, the convex feasibility problem, fixed point problems, and optimization problems.
Throughout this section, let C be a nonempty closed and convex subset of a Hilbert space ℋ. Note that in this case the concept of monotonicity coincides with the concept of accretivity.
Regarding our central problem of finding a zero of the sum of two accretive operators in a Hilbert space ℋ, as a direct consequence of Theorem 3.6, we first obtain the following result due to Combettes [34].
Corollary 4.1.
Let A:ℋ→ℋ be monotone and B:ℋ→2ℋ maximal monotone. Assume that κA is firmly nonexpansive for some κ>0 and that
liminfn→∞αn>0,
0<liminfn→∞λn≤limsupn→∞λn<2κ,
∑n=1∞∥an∥<∞ and ∑n=1∞∥bn∥<∞,
S:=(A+B)-1(0)≠∅.
Then the sequence {xn} generated by the algorithm
(4.1)xn+1=(1-αn)xn+αn(Jλn(xn-λn(Axn+an))+bn)
converges weakly to a point in S.
Proof.
It suffices to show that κA is firmly nonexpansive if and only if A is κ-inverse strongly monotone. This however follows from the following straightforward observation:
(4.2)〈κAx-κAy,x-y〉≥‖κAx-κAy‖2⟺〈Ax-Ay,x-y〉≥κ‖Ax-Ay‖2,
for all x,y∈ℋ.
4.1. Variational Inequality Problems
A monotone variational inequality problem (VIP) is formulated as the problem of finding a point x∈C with the property:
(4.3)〈Ax,z-x〉≥0,∀z∈C,
where A:C→ℋ is a nonlinear monotone operator. We shall denote by S the solution set of (4.3) and assume S≠∅.
One method for solving VIP (4.3) is the projection algorithm which generates, starting with an arbitrary initial point x0∈ℋ, a sequence {xn} satisfying
(4.4)xn+1=PC(xn-rAxn),
where r is properly chosen as a stepsize. If in addition A is κ-inverse strongly monotone (ism), then the iteration (4.4) with 0<r<2κ converges weakly to a point in S whenever such a point exists.
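A minimal numerical instance of the projection algorithm (4.4) (our own toy example, not from the paper): take C=[0,1]⊂ℝ and the 1-ism operator A(x)=x-2; the unique solution of VIP (4.3) is the boundary point x*=1.

```python
# Minimal instance of the projection algorithm (4.4) (a toy example):
# C = [0, 1] in the Hilbert space R and the 1-ism operator A(x) = x - 2.
# The unique solution of VIP (4.3) is x* = 1, attained on the boundary.

def project_interval(x, lo=0.0, hi=1.0):
    return min(max(x, lo), hi)      # P_C for the interval C = [lo, hi]

def projection_algorithm(x0, r=1.0, n_iter=100):
    x = x0
    for _ in range(n_iter):
        x = project_interval(x - r * (x - 2.0))  # x_{n+1} = P_C(x_n - rAx_n)
    return x

print(projection_algorithm(0.0))   # -> 1.0
```

Note that r=1 lies in the admissible range (0, 2κ)=(0, 2) for this 1-ism operator.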
By [35, Theorem 3], VIP (4.3) is equivalent to finding a point x so that
(4.5)0∈(A+B)x,
where B is the normal cone operator of C. In other words, VIPs are a special case of the problem of finding zeros of the sum of two monotone operators. Note that the resolvent of the normal cone operator is nothing but the projection operator and that if A is κ-ism, then the solution set S is closed and convex [36]. As an application of the previous sections, we get the following results.
Corollary 4.2.
Let A:C→ℋ be κ-ism for some κ>0, and let the following conditions be satisfied:
liminfn→∞αn>0,
0<liminfn→∞λn≤limsupn→∞λn<2κ.
Then the sequence {xn} generated by the relaxed projection algorithm
(4.6)xn+1=(1-αn)xn+αnPC(xn-λnAxn)
converges weakly to a point in S.
Corollary 4.3.
Let A:C→ℋ be κ-ism and let the following conditions be satisfied:
limn→∞αn=0,∑n=1∞αn=∞;
0<liminfn→∞λn≤limsupn→∞λn<2κ.
Then, for any given u∈C, the sequence {xn} generated by
(4.7)xn+1=αnu+(1-αn)PC(xn-λnAxn),
converges strongly to P𝒮u.
Remark 4.4.
Corollary 4.3 improves Iiduka-Takahashi's result [37, Corollary 3.2], where apart from hypotheses (i)-(ii), the conditions ∑n=1∞|αn-αn+1|<∞ and ∑n=1∞|λn-λn+1|<∞ are required.
4.2. Fixed Points of Strict Pseudocontractions
An operator T:C→C is said to be a strict κ-pseudocontraction if there exists a constant κ∈[0,1) such that
(4.8)‖Tx-Ty‖2≤‖x-y‖2+κ‖(I-T)x-(I-T)y‖2
for all x,y∈C. It is known that if T is strictly κ-pseudocontractive, then A=I-T is (1-κ)/2-ism (see [38]). To solve the problem of approximating fixed points for such operators, an iterative scheme is provided in the following result.
Corollary 4.5.
Let T:C→C be strictly κ-pseudocontractive with a nonempty fixed point set Fix(T). Suppose that
limn→∞αn=0 and ∑n=1∞αn=∞,
0<liminfn→∞λn≤limsupn→∞λn<1-κ.
Then, for any given u∈C, the sequence {xn} generated by the algorithm
(4.9)xn+1=αnu+(1-αn)((1-λn)xn+λnTxn)
converges strongly to the point PFix(T)u.
Proof.
Set A=I-T. Hence A is (1-κ)/2-ism. Moreover we rewrite the above iteration as
(4.10)xn+1=αnu+(1-αn)(xn-λnAxn).
Then, by setting B the operator constantly zero, Corollary 4.3 yields the result as desired.
4.3. Convexly Constrained Minimization Problem
Consider the optimization problem
(4.11)minx∈Cf(x),
where f:ℋ→ℝ is a convex and differentiable function. Assume (4.11) is consistent, and let Ω denote its set of solutions.
The gradient projection algorithm (GPA) generates a sequence {xn} via the iterative procedure:
(4.12)xn+1=PC(xn-r∇f(xn)),
where ∇f stands for the gradient of f. If in addition ∇f is (1/κ)-Lipschitz continuous; that is, for any x,y∈ℋ,
(4.13)‖∇f(x)-∇f(y)‖≤(1κ)‖x-y‖,
then the GPA with 0<r<2κ converges weakly to a minimizer of f in C (see, e.g., [39, Corollary 4.1]).
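The GPA (4.12) can be sketched on a two-dimensional toy problem (our own illustration, not from the paper): minimize f(x,y)=(x-2)²+(y+1)² over the box C=[0,1]². Here ∇f is 2-Lipschitz, hence κ=1/2 and any stepsize 0<r<2κ=1 is admissible; the constrained minimizer is (1,0).

```python
# Toy instance of the GPA (4.12): minimize f(x, y) = (x-2)^2 + (y+1)^2
# over the box C = [0,1]^2. The gradient is 2-Lipschitz, hence (1/2)-ism,
# so any stepsize 0 < r < 1 works; the constrained minimizer is (1, 0).

def project_box(x, y):
    clamp = lambda t: min(max(t, 0.0), 1.0)
    return clamp(x), clamp(y)           # P_C for C = [0,1]^2

def gpa(x, y, r=0.4, n_iter=200):
    for _ in range(n_iter):
        gx, gy = 2 * (x - 2.0), 2 * (y + 1.0)          # gradient of f
        x, y = project_box(x - r * gx, y - r * gy)     # iteration (4.12)
    return x, y

print(gpa(0.5, 0.5))   # -> (1.0, 0.0)
```

The unconstrained minimizer (2,-1) lies outside C, so the projection is active in both coordinates at the limit.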
The minimization problem (4.11) is equivalent to VIP [40, Lemma 5.13]:
(4.14)〈∇f(x),z-x〉≥0,z∈C.
It is also known [41, Corollary 10] that if ∇f is (1/κ)-Lipschitz continuous, then it is also κ-ism. Thus, we can apply the previous results to (4.11) by taking A=∇f.
Corollary 4.6.
Assume that f:ℋ→ℝ is convex and differentiable with (1/κ)-Lipschitz continuous gradient ∇f. Assume also that
liminfn→∞αn>0,
0<liminfn→∞λn≤limsupn→∞λn<2κ.
Then the sequence {xn} generated by the algorithm
(4.15)xn+1=(1-αn)xn+αnPC(xn-λn∇f(xn))
converges weakly to x∈Ω.
Corollary 4.7.
Assume that f:ℋ→ℝ is convex and differentiable with (1/κ)-Lipschitz continuous gradient ∇f. Assume also that
limn→∞αn=0 and ∑n=1∞αn=∞;
0<liminfn→∞λn≤limsupn→∞λn<2κ.
Then for any given u∈C, the sequence {xn} generated by the algorithm
(4.16)xn+1=αnu+(1-αn)PC(xn-λn∇f(xn))
converges strongly to PΩu.
4.4. Split Feasibility Problem
The split feasibility problem (SFP) [42] consists of finding a point x^ satisfying the property:
(4.17)x^∈C,Ax^∈Q,
where C and Q are, respectively, closed convex subsets of Hilbert spaces ℋ and K and A:ℋ→K is a bounded linear operator. The SFP (4.17) has attracted much attention due to its applications in signal processing [42]. Various algorithms have, therefore, been derived to solve the SFP (4.17) (see [39, 43, 44] and references therein). In particular, Byrne [43] introduced the so-called CQ algorithm:
(4.18)xn+1=PC(xn-λA*(I-PQ)Axn),
where 0<λ<2ν with ν=1/∥A∥2.
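The CQ iteration (4.18) can be sketched numerically as follows; the matrix A, the sets C and Q, and the step size are illustrative choices, not data from the paper. Here C is the nonnegative orthant and Q the closed unit ball, both of which admit explicit projections.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, lam, n_iter):
    """CQ iteration x_{n+1} = P_C(x_n - lam * A^T (I - P_Q) A x_n)."""
    x = x0
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - lam * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy SFP: find x in C (nonnegative orthant) with A x in Q (unit ball).
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: y / max(1.0, np.linalg.norm(y))  # projection onto unit ball
nu = 1.0 / np.linalg.norm(A, 2) ** 2                # nu = 1 / ||A||^2
x = cq_algorithm(A, proj_C, proj_Q, x0=np.array([3.0, 3.0]),
                 lam=1.5 * nu, n_iter=500)
# x should lie in C with A x in Q (up to a small tolerance)
print(x, np.linalg.norm(A @ x))
```

The step size lam = 1.5ν respects the condition 0 < λ < 2ν; in this two-dimensional example the iterates settle on a solution of the toy SFP.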
To solve the SFP (4.17), it is very useful to investigate the following convexly constrained minimization problem (CCMP):
(4.19)minx∈Cf(x),
where
(4.20)f(x):=12‖(I-PQ)Ax‖2.
Generally speaking, the SFP (4.17) and CCMP (4.19) are not fully equivalent: every solution to the SFP (4.17) is evidently a minimizer of the CCMP (4.19); however, a solution to the CCMP (4.19) does not necessarily satisfy the SFP (4.17). Further, if the solution set of the SFP (4.17) is nonempty, then it follows from [45, Lemma 4.2] that
(4.21)C∩(∇f)-1(0)≠∅,
where f is defined by (4.20). As shown by Xu [46], the CQ algorithm need not converge strongly in infinite-dimensional spaces. We now consider an iteration process with strong convergence for solving the SFP (4.17).
Corollary 4.8.
Assume that the SFP (4.17) is consistent, and let S be its nonempty solution set. Assume also that
(i) limn→∞αn=0 and ∑n=1∞αn=∞;
(ii) 0<liminfn→∞λn≤limsupn→∞λn<2ν.
Then for any given u∈C, the sequence {xn} generated by the algorithm
(4.22)xn+1=αnu+(1-αn)PC[xn-λnA*(I-PQ)Axn]
converges strongly to the solution PSu of the SFP (4.17).
Proof.
Let f be defined by (4.20). According to [39, page 113], we have
(4.23)∇f=A*(I-PQ)A,
which is (1/ν)-Lipschitz continuous with ν=1/∥A∥2. Thus Corollary 4.7 applies, and the result follows immediately.
Remark 4.9.
Corollary 4.8 improves and recovers the result of [44, Corollary 3.7], which requires, in addition to condition (i), the condition ∑n=1∞|αn+1-αn|<∞ and only the special case of condition (ii) in which λn≡λ for all n∈ℕ.
4.5. Convexly Constrained Linear Inverse Problem
The constrained linear system
(4.24)Ax=b,x∈C,
where A:ℋ→K is a bounded linear operator and b∈K, is called the convexly constrained linear inverse problem (cf. [47]). A classical way to deal with this problem is the well-known projected Landweber method (see [40]):
(4.25)xn+1=PC[xn-λA*(Axn-b)],
where 0<λ<2ν with ν=1/∥A∥2. A counterexample in [8, Remark 5.12] shows that, in general, the projected Landweber iteration converges only weakly in infinite-dimensional spaces. To get strong convergence, Eicke [47] introduced the so-called damped projection method. In what follows, we present another strongly convergent algorithm for solving (4.24).
Corollary 4.10.
Assume that (4.24) is consistent. Assume also that
(i) limn→∞αn=0 and ∑n=1∞αn=∞;
(ii) 0<liminfn→∞λn≤limsupn→∞λn<2ν.
Then, for any given u∈ℋ, the sequence {xn} generated by the algorithm
(4.26)xn+1=αnu+(1-αn)PC[xn-λnA*(Axn-b)]
converges strongly to a solution of problem (4.24) whenever a solution exists.
Proof.
This is an immediate consequence of Corollary 4.8 by taking Q={b}.
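The following sketch instantiates iteration (4.26) on a small, well-posed instance; the matrix A, the vector b, the anchor u = 0, and the parameter sequences are all illustrative choices rather than data from the paper. Since the system here has the unique nonnegative solution x* = (2, 1), the iterates should converge to it.

```python
import numpy as np

def halpern_projected_landweber(A, b, proj_C, u, x0, alphas, lambdas, n_iter):
    """Iterate x_{n+1} = a_n u + (1 - a_n) P_C[x_n - l_n A^T (A x_n - b)]."""
    x = x0
    for n in range(n_iter):
        a, l = alphas(n), lambdas(n)
        x = a * u + (1 - a) * proj_C(x - l * (A.T @ (A @ x - b)))
    return x

# Toy constrained system: A x = b with C the nonnegative orthant.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
b = np.array([3.0, 1.0])               # exact solution x* = (2, 1), which lies in C
nu = 1.0 / np.linalg.norm(A, 2) ** 2   # nu = 1 / ||A||^2
x = halpern_projected_landweber(A, b, proj_C=lambda x: np.maximum(x, 0.0),
                                u=np.zeros(2), x0=np.zeros(2),
                                alphas=lambda n: 1.0 / (n + 2),  # a_n -> 0, sum = inf
                                lambdas=lambda n: 1.5 * nu,      # 0 < l_n < 2*nu
                                n_iter=5000)
print(x)  # approximately (2, 1)
```

The Halpern terms αnu force strong convergence; with αn ≡ 0 the iteration reduces to the plain projected Landweber method (4.25).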
Acknowledgments
The work of G. López, V. Martín-Márquez, and H.-K. Xu was supported by Grant MTM2009-10696-C02-01. This work was carried out while F. Wang was visiting Universidad de Sevilla under the support of this grant. He was also supported by the Basic and Frontier Project of Henan 122300410268 and the Peiyu Project of Luoyang Normal University 2011-PYJJ-002. The work of G. López and V. Martín-Márquez was also supported by the Plan Andaluz de Investigación de la Junta de Andalucía FQM-127 and Grant P08-FQM-03543. The work of H.-K. Xu was also supported in part by NSC 100-2115-M-110-003-MY2 (Taiwan). He extended his appreciation to the Deanship of Scientific Research at King Saud University for funding the work through a visiting professorship program (VPP).
References
[1] D. H. Peaceman and H. H. Rachford, "The numerical solution of parabolic and elliptic differential equations," vol. 3, pp. 28–41, 1955.
[2] J. Douglas and H. H. Rachford, "On the numerical solution of heat conduction problems in two and three space variables," vol. 82, pp. 421–439, 1956.
[3] R. B. Kellogg, "Nonlinear alternating direction algorithm," vol. 23, pp. 23–28, 1969.
[4] P. L. Lions and B. Mercier, "Splitting algorithms for the sum of two nonlinear operators," vol. 16, no. 6, pp. 964–979, 1979. doi:10.1137/0716071.
[5] G. B. Passty, "Ergodic convergence to a zero of the sum of monotone operators in Hilbert space," vol. 72, no. 2, pp. 383–390, 1979. doi:10.1016/0022-247X(79)90234-8.
[6] P. Tseng, "Applications of a splitting algorithm to decomposition in convex programming and variational inequalities," vol. 29, no. 1, pp. 119–138, 1991. doi:10.1137/0329006.
[7] G. H.-G. Chen and R. T. Rockafellar, "Convergence rates in forward-backward splitting," vol. 7, no. 2, pp. 421–444, 1997. doi:10.1137/S1052623495290179.
[8] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," vol. 4, no. 4, pp. 1168–1200, 2005. doi:10.1137/050626090.
[9] S. Sra, S. Nowozin, and S. J. Wright, Optimization for Machine Learning, 2011.
[10] K. Aoyama, H. Iiduka, and W. Takahashi, "Weak convergence of an iterative sequence for accretive operators in Banach spaces," Article ID 35390, 2006.
[11] H. Zegeye and N. Shahzad, "Strong convergence theorems for a common zero of a finite family of m-accretive mappings," vol. 66, no. 5, pp. 1161–1169, 2007. doi:10.1016/j.na.2006.01.022.
[12] H. Zegeye and N. Shahzad, "Strong convergence theorems for a common zero point of a finite family of α-inverse strongly accretive mappings," vol. 9, no. 1, pp. 95–104, 2008.
[13] I. Cioranescu, Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, Kluwer Academic Publishers, 1990. doi:10.1007/978-94-009-2121-4.
[14] H. K. Xu, "Inequalities in Banach spaces with applications," vol. 16, no. 12, pp. 1127–1138, 1991. doi:10.1016/0362-546X(91)90200-K.
[15] F. E. Browder, "Nonexpansive nonlinear operators in a Banach space," vol. 54, pp. 1041–1044, 1965.
[16] T. Kato, "Nonlinear semigroups and evolution equations," vol. 19, pp. 508–520, 1967.
[17] R. E. Bruck, "Nonexpansive projections on subsets of Banach spaces," vol. 47, pp. 341–355, 1973.
[18] P. E. Maingé, "Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces," vol. 325, no. 1, pp. 469–479, 2007. doi:10.1016/j.jmaa.2005.12.066.
[19] H. K. Xu, "Iterative algorithms for nonlinear operators," vol. 66, no. 1, pp. 240–256, 2002. doi:10.1112/S0024610702003332.
[20] W. R. Mann, "Mean value methods in iteration," vol. 4, pp. 506–510, 1953.
[21] S. Reich, "Weak convergence theorems for nonexpansive mappings in Banach spaces," vol. 67, no. 2, pp. 274–276, 1979. doi:10.1016/0022-247X(79)90024-6.
[22] G. Marino and H. K. Xu, "Convergence of generalized proximal point algorithms," vol. 3, no. 4, pp. 791–808, 2004. doi:10.3934/cpaa.2004.3.791.
[23] R. T. Rockafellar, "Monotone operators and the proximal point algorithm," vol. 14, no. 5, pp. 877–898, 1976.
[24] S. Kamimura and W. Takahashi, "Approximating solutions of maximal monotone operators in Hilbert spaces," vol. 106, no. 2, pp. 226–240, 2000. doi:10.1006/jath.2000.3493.
[25] K. K. Tan and H. K. Xu, "Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process," vol. 178, no. 2, pp. 301–308, 1993. doi:10.1006/jmaa.1993.1309.
[26] R. E. Bruck, "A simple proof of the mean ergodic theorem for nonlinear contractions in Banach spaces," vol. 32, no. 2-3, pp. 107–116, 1979. doi:10.1007/BF02764907.
[27] B. R. Halpern, "Fixed points of nonexpanding maps," vol. 73, pp. 957–961, 1967.
[28] P. L. Lions, "Approximation de points fixes de contractions," vol. 284, no. 21, pp. A1357–A1359, 1977.
[29] R. Wittmann, "Approximation of fixed points of nonexpansive mappings," vol. 58, no. 5, pp. 486–491, 1992. doi:10.1007/BF01190119.
[30] H. K. Xu, "Viscosity approximation methods for nonexpansive mappings," vol. 298, no. 1, pp. 279–291, 2004. doi:10.1016/j.jmaa.2004.04.059.
[31] G. López, V. Martín, and H. K. Xu, "Halpern's iteration for nonexpansive mappings," vol. 513, pp. 211–230, 2010. doi:10.1090/conm/513/10085.
[32] S. Reich, "Strong convergence theorems for resolvents of accretive operators in Banach spaces," vol. 75, no. 1, pp. 287–292, 1980. doi:10.1016/0022-247X(80)90323-6.
[33] P. E. Maingé, "Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization," vol. 16, no. 7-8, pp. 899–912, 2008. doi:10.1007/s11228-008-0102-z.
[34] P. L. Combettes, "Solving monotone inclusions via compositions of nonexpansive averaged operators," vol. 53, no. 5-6, pp. 475–504, 2004. doi:10.1080/02331930412331327157.
[35] R. T. Rockafellar, "On the maximality of sums of nonlinear monotone operators," vol. 149, pp. 75–88, 1970.
[36] V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, Noordhoff, 1976.
[37] H. Iiduka and W. Takahashi, "Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings," vol. 61, no. 3, pp. 341–350, 2005. doi:10.1016/j.na.2003.07.023.
[38] F. E. Browder and W. V. Petryshyn, "Construction of fixed points of nonlinear mappings in Hilbert space," vol. 20, pp. 197–228, 1967.
[39] C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," vol. 20, no. 1, pp. 103–120, 2004. doi:10.1088/0266-5611/20/1/006.
[40] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 1996. doi:10.1007/978-94-009-1740-8.
[41] J. B. Baillon and G. Haddad, "Quelques propriétés des opérateurs angle-bornés et cycliquement monotones," vol. 26, no. 2, pp. 137–150, 1977.
[42] Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," vol. 8, no. 2–4, pp. 221–239, 1994. doi:10.1007/BF02142692.
[43] C. Byrne, "Iterative oblique projection onto convex sets and the split feasibility problem," vol. 18, no. 2, pp. 441–453, 2002. doi:10.1088/0266-5611/18/2/310.
[44] H. K. Xu, "A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem," vol. 22, no. 6, pp. 2021–2034, 2006. doi:10.1088/0266-5611/22/6/007.
[45] F. Wang and H. K. Xu, "Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem," vol. 2010, Article ID 102085, 2010. doi:10.1155/2010/102085.
[46] H. K. Xu, "Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces," vol. 26, no. 10, Article ID 105018, 2010. doi:10.1088/0266-5611/26/10/105018.
[47] B. Eicke, "Iteration methods for convexly constrained ill-posed problems in Hilbert space," vol. 13, no. 5-6, pp. 413–429, 1992. doi:10.1080/01630569208816489.