We study modified extragradient methods for finding a common element of the solution set Γ of a split feasibility problem (SFP) and the fixed point set Fix(S) of a strictly pseudocontractive mapping S in the setting of infinite-dimensional Hilbert spaces. We first propose an extragradient algorithm for finding an element of Fix(S)∩Γ, where S is strictly pseudocontractive, and prove that the sequences generated by the proposed algorithm converge weakly to an element of Fix(S)∩Γ. We then propose another extragradient-like algorithm for finding an element of Fix(S)∩Γ, where S:C→C is nonexpansive, and show that the sequences generated by this algorithm converge strongly to an element of Fix(S)∩Γ.
1. Introduction
Let ℋ be a real Hilbert space with inner product 〈·,·〉 and norm ∥·∥. Let C be a nonempty closed convex subset of ℋ and let PC be the metric projection from ℋ onto C. Let S:C→C be a self-mapping on C. We denote by Fix(S) the set of fixed points of S and by R the set of all real numbers.
A mapping A:C→ℋ is called α-inverse strongly monotone, if there exists a constant α>0 such that
(1.1)〈Ax-Ay,x-y〉≥α∥x-y∥2,∀x,y∈C.
For a given mapping A:C→ℋ, we consider the following variational inequality (VI) of finding x*∈C such that
(1.2)〈Ax*,x-x*〉≥0,∀x∈C.
The solution set of the VI (1.2) is denoted by VI(C,A). The variational inequality was first discussed by Lions [1] and is now well known. Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free boundary, moving boundary, and equilibrium problems; see, for example, [2–4].
A mapping S:C→C is called k-strictly pseudocontractive if there exists a constant k∈[0,1) such that
(1.3)∥Sx-Sy∥2≤∥x-y∥2+k∥(I-S)x-(I-S)y∥2,∀x,y∈C;
see [5]. We denote by Fix(S) the fixed point set of S; that is, Fix(S)={x∈C:Sx=x}. In particular, if k=0, then S is called a nonexpansive mapping. In 2003, for finding an element of Fix(S)∩VI(C,A) when C⊂ℋ is nonempty, closed and convex, S:C→C is nonexpansive and A:C→ℋ is α-inverse strongly monotone, Takahashi and Toyoda [6] introduced the following Mann’s type iterative algorithm:
(1.4)xn+1=αnxn+(1-αn)SPC(xn-λnAxn),∀n≥0,
where x0∈C is chosen arbitrarily, {αn} is a sequence in (0,1), and {λn} is a sequence in (0,2α). They showed that if Fix(S)∩VI(C,A)≠∅, then the sequence {xn} converges weakly to some z∈Fix(S)∩VI(C,A). Further, motivated by the idea of Korpelevich’s extragradient method [7], Nadezhkina and Takahashi [8] introduced an iterative algorithm for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality problem for a monotone, Lipschitz continuous mapping in a real Hilbert space, and obtained a weak convergence theorem for the two sequences generated by the proposed algorithm. The extragradient method itself was first introduced by Korpelevich [7] in 1976, who applied it to a saddle point problem and proved the convergence of the iterates to a solution of that problem. Very recently, Jung [9] introduced a new composite iterative scheme by the viscosity approximation method and proved the strong convergence of the proposed scheme to a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality for an inverse-strongly monotone mapping in a Hilbert space.
On the other hand, let C and Q be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces ℋ1 and ℋ2, respectively. The split feasibility problem (SFP) is to find a point x* with the following property:
(1.5)x*∈C,Ax*∈Q,
where A∈B(ℋ1,ℋ2) and B(ℋ1,ℋ2) denotes the family of all bounded linear operators from ℋ1 to ℋ2.
In 1994, the SFP was first introduced by Censor and Elfving [10], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise in phase retrieval and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, [11] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy (IMRT) [12–14]. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, for example, [11, 13, 15–19] and the references therein (see also [20] for relevant projection methods for solving image recovery problems). A special case of the SFP is the following convex constrained linear inverse problem [21] of finding an element x such that
(1.6)x∈C,Ax=b.
It has been extensively investigated in the literature using the projected Landweber iterative method [22]. Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set Q. Therefore, whether various versions of the projected Landweber iterative method [23] can be extended to solve the SFP remains an interesting open topic. The original algorithm given in [10] involves the computation of the inverse A-1 (assuming its existence) and thus did not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [11, 15], which is a gradient-projection method (GPM) in convex minimization; it is also a special case of the proximal forward-backward splitting method [24]. The CQ algorithm involves only the computation of the projections PC and PQ onto the sets C and Q, respectively, and is therefore implementable in the case where PC and PQ have closed-form expressions; for example, when C and Q are closed balls or half-spaces. However, it remains a challenge to implement the CQ algorithm in the case where the projections PC and/or PQ fail to have closed-form expressions, even though the (weak) convergence of the algorithm can still be proved theoretically.
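For a concrete sense of how the CQ algorithm operates when both projections have closed forms, the following Python/NumPy sketch runs it on a hypothetical small instance in which C and Q are boxes, so P_C and P_Q are componentwise clipping; the iteration x_{n+1}=PC(xn-γA*(I-PQ)Axn) with γ∈(0,2/∥A∥2) is the standard CQ scheme, but the instance itself is purely an illustrative assumption:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=500):
    """CQ iteration x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)."""
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        # gradient of f(x) = (1/2) ||(I - P_Q) A x||^2
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

A = np.array([[2.0, 1.0], [0.0, 1.0]])
proj_C = lambda x: np.clip(x, -1.0, 1.0)       # C = [-1, 1]^2
proj_Q = lambda y: np.clip(y, 0.0, 2.0)        # Q = [0, 2]^2
gamma = 1.0 / np.linalg.norm(A, 2) ** 2        # step size in (0, 2/||A||^2)
x = cq_algorithm(A, proj_C, proj_Q, np.array([5.0, -5.0]), gamma)
```

After the run, x lies in C and Ax lies (up to numerical tolerance) in Q, that is, x is an approximate solution of this toy SFP.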
In 2010, Xu [25] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann’s algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He derived a weak convergence result, which shows that for suitable choices of iterative parameters (including the regularization), the sequence of iterative solutions can converge weakly to an exact solution of the SFP.
Very recently, Ceng et al. [26] introduced and studied an extragradient method with regularization for finding a common element of the solution set Γ of the SFP and the set Fix(S) of fixed points of a nonexpansive mapping S in the setting of infinite-dimensional Hilbert spaces. By combining the regularization method and extragradient method due to Nadezhkina and Takahashi [8], the authors proposed an iterative algorithm for finding an element of Fix(S)∩Γ. The authors proved that the sequences generated by the proposed algorithm converge weakly to an element z∈Fix(S)∩Γ.
The purpose of this paper is to investigate modified extragradient methods for finding a common element of the solution set Γ of the SFP and the fixed point set Fix(S) of a strictly pseudocontractive mapping S in the setting of infinite-dimensional Hilbert spaces. Assume that Fix(S)∩Γ≠∅. By combining the regularization method and Nadezhkina and Takahashi’s extragradient method [8], we propose an extragradient algorithm for finding an element of Fix(S)∩Γ. It is proven that the sequences generated by the proposed algorithm converge weakly to an element of Fix(S)∩Γ. This result represents the supplementation, improvement, and extension of the corresponding results in [25, 26]; for example, [25, Theorem 5.7] and [26, Theorem 3.1]. On the other hand, by combining the regularization method and Jung’s composite viscosity approximation method [9], we also propose another extragradient-like algorithm for finding an element of Fix(S)∩Γ where S:C→C is nonexpansive. It is shown that the sequences generated by the proposed algorithm converge strongly to an element of Fix(S)∩Γ. Such a result substantially develops and improves the corresponding results in [9, 25, 26]; for example, [25, Theorem 5.7], [26, Theorem 3.1], and [9, Theorem 3.1]. It is worth pointing out that our results are new and novel in the Hilbert spaces setting. Essentially new approaches for finding the fixed points of strictly pseudocontractive mappings (including nonexpansive mappings) and solutions of the SFP are provided.
2. Preliminaries
Let ℋ be a real Hilbert space whose inner product and norm are denoted by 〈·,·〉 and ∥·∥, respectively. Let K be a nonempty closed convex subset of ℋ. We write xn⇀x to indicate that the sequence {xn} converges weakly to x and xn→x to indicate that the sequence {xn} converges strongly to x. Moreover, we use ωw(xn) to denote the weak ω-limit set of the sequence {xn}, that is,
(2.1)ωw(xn):={x:xni⇀x for some subsequence {xni} of {xn}}.
Recall that the metric (or nearest point) projection from ℋ onto K is the mapping PK:ℋ→K which assigns to each point x∈ℋ the unique point PKx∈K satisfying the property
(2.2)∥x-PKx∥=infy∈K∥x-y∥=:d(x,K).
Some important properties of projections are gathered in the following proposition.
Proposition 2.1.
For given x∈ℋ and z∈K,
z=PKx⇔〈x-z,y-z〉≤0, for all y∈K;
z=PKx⇔∥x-z∥2≤∥x-y∥2-∥y-z∥2, for all y∈K;
〈PKx-PKy,x-y〉≥∥PKx-PKy∥2, for all y∈ℋ, which hence implies that PK is nonexpansive and monotone.
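The characterizations in Proposition 2.1 are easy to test numerically whenever PK has a closed form. The sketch below (Python/NumPy; the choice of K as the closed unit ball is an illustrative assumption) samples points of K and checks property (i) together with the nonexpansiveness implied by (iii):

```python
import numpy as np

rng = np.random.default_rng(0)

# Projection onto the closed unit ball K = {x : ||x|| <= 1} (closed form).
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = rng.normal(size=3) * 5.0                      # a point outside K
z = proj_ball(x)
for _ in range(1000):
    y = proj_ball(rng.normal(size=3) * 2.0)       # a sampled point of K
    assert np.dot(x - z, y - z) <= 1e-12          # property (i)
    # property (iii) implies ||P_K x - P_K y|| <= ||x - y||
    assert np.linalg.norm(z - y) <= np.linalg.norm(x - y) + 1e-12
```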
Definition 2.2.
A mapping T:ℋ→ℋ is said to be
nonexpansive if
(2.3)∥Tx-Ty∥≤∥x-y∥,∀x,y∈ℋ;
firmly nonexpansive if 2T-I is nonexpansive, or equivalently,
(2.4)〈x-y,Tx-Ty〉≥∥Tx-Ty∥2,∀x,y∈ℋ;
alternatively, T is firmly nonexpansive if and only if T can be expressed as
(2.5)T=(1/2)(I+S),
where S:ℋ→ℋ is nonexpansive. In particular, projections are firmly nonexpansive.
Definition 2.3.
Let T be a nonlinear operator with domain D(T)⊆ℋ and range R(T)⊆ℋ, and let β>0 and ν>0 be given constants. The operator T is called:
monotone if
(2.6)〈x-y,Tx-Ty〉≥0,∀x,y∈D(T).
β-strongly monotone if
(2.7)〈x-y,Tx-Ty〉≥β∥x-y∥2,∀x,y∈D(T).
ν-inverse strongly monotone (ν-ism) if
(2.8)〈x-y,Tx-Ty〉≥ν∥Tx-Ty∥2,∀x,y∈D(T).
It can be easily seen that if T is nonexpansive, then I-T is monotone. It is also easy to see that a projection PK is 1-ism.
Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely to solve practical problems in various fields, for instance, in traffic assignment problems; see, for example, [27, 28].
Definition 2.4.
A mapping T:ℋ→ℋ is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,
(2.9)T≡(1-α)I+αS,
where α∈(0,1) and S:ℋ→ℋ is nonexpansive. More precisely, when the last equality holds, we say that T is α-averaged. Thus firmly nonexpansive mappings (in particular, projections) are (1/2)-averaged maps.
Proposition 2.5 (see [15]).
Let T:ℋ→ℋ be a given mapping.
T is nonexpansive if and only if the complement I-T is (1/2)-ism.
If T is ν-ism, then for γ>0,γT is (ν/γ)-ism.
T is averaged if and only if the complement I-T is ν-ism for some ν>1/2. Indeed, for α∈(0,1), T is α-averaged if and only if I-T is (1/(2α))-ism.
Proposition 2.6 (see [15, 29]).
Let S,T,V:ℋ→ℋ be given operators.
If T=(1-α)S+αV for some α∈(0,1), S is averaged and V is nonexpansive, then T is averaged.
T is firmly nonexpansive if and only if the complement I-T is firmly nonexpansive.
If T=(1-α)S+αV for some α∈(0,1), S is firmly nonexpansive and V is nonexpansive, then T is averaged.
The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {Ti}i=1N is averaged, then so is the composite T1∘T2∘⋯∘TN. In particular, if T1 is α1-averaged and T2 is α2-averaged, where α1,α2∈(0,1), then the composite T1∘T2 is α-averaged, where α=α1+α2-α1α2.
If the mappings {Ti}i=1N are averaged and have a common fixed point, then
(2.10)⋂i=1NFix(Ti)=Fix(T1⋯TN).
The notation Fix(T) denotes the set of all fixed points of the mapping T, that is, Fix(T)={x∈ℋ:Tx=x}.
On the other hand, it is clear that, in a real Hilbert space ℋ, S:C→C is k-strictly pseudocontractive if and only if there holds the following inequality:
(2.11)〈Sx-Sy,x-y〉≤∥x-y∥2-((1-k)/2)∥(I-S)x-(I-S)y∥2,∀x,y∈C.
This immediately implies that if S is a k-strictly pseudocontractive mapping, then I-S is ((1-k)/2)-inverse strongly monotone; for further detail, we refer to [5] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings. The so-called demiclosedness principle for strict pseudocontractive mappings in the following lemma will often be used.
Lemma 2.7 (see [5, Proposition 2.1]).
Let C be a nonempty closed convex subset of a real Hilbert space ℋ and S:C→C be a mapping.
If S is a k-strict pseudocontractive mapping, then S satisfies the Lipschitz condition
(2.12)∥Sx-Sy∥≤((1+k)/(1-k))∥x-y∥,∀x,y∈C.
If S is a k-strict pseudocontractive mapping, then the mapping I-S is semiclosed at 0; that is, if {xn} is a sequence in C such that xn⇀x~ and (I-S)xn→0, then (I-S)x~=0.
If S is a k-(quasi-)strict pseudocontraction, then the fixed point set Fix(S) of S is closed and convex, so that the projection PFix(S) is well defined.
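As a concrete illustration of definition (1.3) and of the fact noted above that I-S is ((1-k)/2)-inverse strongly monotone, consider S(x)=-2x (an illustrative choice, not taken from the cited works): it is k-strictly pseudocontractive with k=1/3 but not nonexpansive. The sketch below checks both inequalities numerically:

```python
import numpy as np

rng = np.random.default_rng(3)

# S(x) = -2x is 1/3-strictly pseudocontractive but not nonexpansive.
S = lambda x: -2.0 * x
k = 1.0 / 3.0
for _ in range(500):
    x, y = rng.normal(size=3), rng.normal(size=3)
    d = x - y
    g = (x - S(x)) - (y - S(y))     # (I - S)x - (I - S)y = 3d here
    # definition (1.3): ||Sx - Sy||^2 <= ||x - y||^2 + k ||(I-S)x - (I-S)y||^2
    assert np.linalg.norm(S(x) - S(y)) ** 2 <= np.linalg.norm(d) ** 2 + k * np.linalg.norm(g) ** 2 + 1e-9
    # I - S is ((1 - k)/2)-inverse strongly monotone
    assert np.dot(g, d) >= (1 - k) / 2 * np.linalg.norm(g) ** 2 - 1e-9
```

For this particular S both inequalities hold with equality, so k=1/3 is the sharp strict-pseudocontraction coefficient.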
The following elementary result on real sequences is quite well known.
Lemma 2.8 (see [30, page 80]).
Let {an}n=1∞,{bn}n=1∞ and {σn}n=1∞ be sequences of nonnegative real numbers satisfying the inequality
(2.13)an+1≤(1+σn)an+bn,∀n≥1.
If ∑n=1∞σn<∞ and ∑n=1∞bn<∞, then limn→∞an exists. If, in addition, {an}n=1∞ has a subsequence which converges to zero, then limn→∞an=0.
Corollary 2.9 (see [31, page 303]).
Let {an}n=0∞ and {bn}n=0∞ be two sequences of nonnegative real numbers satisfying the inequality
(2.14)an+1≤an+bn,∀n≥0.
If ∑n=0∞bn converges, then limn→∞an exists.
It is easy to see that the following lemma holds.
Lemma 2.10 (see [32]).
Let ℋ be a real Hilbert space. Then, for all x,y∈ℋ and λ∈[0,1],
(2.15)∥λx+(1-λ)y∥2=λ∥x∥2+(1-λ)∥y∥2-λ(1-λ)∥x-y∥2.
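The identity (2.15) is exact, and a minimal numerical confirmation (Python/NumPy, random vectors in R^4) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical confirmation of identity (2.15) for random vectors.
x, y = rng.normal(size=4), rng.normal(size=4)
for lam in np.linspace(0.0, 1.0, 11):
    lhs = np.linalg.norm(lam * x + (1 - lam) * y) ** 2
    rhs = (lam * np.linalg.norm(x) ** 2 + (1 - lam) * np.linalg.norm(y) ** 2
           - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)
    assert abs(lhs - rhs) < 1e-9
```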
The following lemma plays a key role in proving weak convergence of the sequences generated by our algorithm.
Lemma 2.11 (see [33]).
Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let S:C→C be a k-strictly pseudocontractive mapping. Let γ and δ be two nonnegative real numbers such that (γ+δ)k≤γ. Then
(2.16)∥γ(x-y)+δ(Sx-Sy)∥≤(γ+δ)∥x-y∥,∀x,y∈C.
The following result is useful when we prove the weak convergence of a sequence.
Lemma 2.12 (see [25, Proposition 2.6]).
Let K be a nonempty closed convex subset of a real Hilbert space ℋ. Let {xn} be a bounded sequence which satisfies the following properties:
every weak limit point of {xn} lies in K;
limn→∞∥xn-x∥ exists for every x∈K.
Then {xn} converges weakly to a point in K.
Let K be a nonempty closed convex subset of a real Hilbert space ℋ and let F:K→ℋ be a monotone mapping. The variational inequality (VI) is to find x∈K such that
(2.17)〈Fx,y-x〉≥0,∀y∈K.
The solution set of the VI (2.17) is denoted by VI(K,F). It is well known that
(2.18)x∈VI(K,F)⟺x=PK(x-λFx),∀λ>0.
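The equivalence (2.18) can be sanity-checked on a toy instance in which both the projection and the VI solution are explicit (the box K and affine F below are illustrative assumptions; for F(x)=x-c the VI solution is exactly P_K(c)):

```python
import numpy as np

# Toy check of (2.18): K = [0,1]^2 and F(x) = x - c (monotone).
proj_K = lambda x: np.clip(x, 0.0, 1.0)
c = np.array([1.5, -0.25])
F = lambda x: x - c
x_star = proj_K(c)                    # solves VI(K, F)
for lam in (0.1, 0.5, 2.0):
    # fixed point equation x = P_K(x - lam * F x) holds for every lam > 0
    assert np.allclose(proj_K(x_star - lam * F(x_star)), x_star)
```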
A set-valued mapping T:ℋ→2ℋ is called monotone if for all x,y∈ℋ,f∈Tx and g∈Ty imply
(2.19)〈x-y,f-g〉≥0.
A monotone mapping T:ℋ→2ℋ is called maximal if its graph G(T) is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping T is maximal if and only if, for (x,f)∈ℋ×ℋ,〈x-y,f-g〉≥0 for every (y,g)∈G(T) implies f∈Tx. Let F:K→ℋ be a monotone and L-Lipschitz continuous mapping and let NKv be the normal cone to K at v∈K, that is,
(2.20)NKv={w∈ℋ:〈v-y,w〉≥0,∀y∈K}.
Define
(2.21)Tv={Fv+NKv,ifv∈K,∅,ifv∉K.
Then, T is maximal monotone and 0∈Tv if and only if v∈VI(K,F); see [34] for more details.
3. Some Modified Extragradient Methods
Throughout the paper, we assume that the SFP is consistent; that is, the solution set Γ of the SFP is nonempty. Let f:ℋ1→R be a continuously differentiable function. The minimization problem
(3.1)minx∈Cf(x):=(1/2)∥Ax-PQAx∥2
is ill posed. Therefore, Xu [25] considered the following Tikhonov regularized problem:
(3.2)minx∈Cfα(x):=(1/2)∥Ax-PQAx∥2+(1/2)α∥x∥2,
where α>0 is the regularization parameter.
We observe that the gradient
(3.3)∇fα=∇f+αI=A*(I-PQ)A+αI
is (α+∥A∥2)-Lipschitz continuous and α-strongly monotone.
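Both properties of ∇fα just stated can be checked numerically on a random instance (Python/NumPy; the box Q below is an illustrative assumption giving P_Q a closed form):

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.normal(size=(3, 4))
alpha = 0.1
proj_Q = lambda y: np.clip(y, 0.0, 1.0)        # illustrative Q with closed-form P_Q

def grad_f_alpha(x):
    # grad f_alpha(x) = A^T (I - P_Q) A x + alpha x, as in (3.3)
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax)) + alpha * x

L = alpha + np.linalg.norm(A, 2) ** 2          # claimed Lipschitz constant
for _ in range(200):
    x, y = rng.normal(size=4), rng.normal(size=4)
    g = grad_f_alpha(x) - grad_f_alpha(y)
    assert np.linalg.norm(g) <= L * np.linalg.norm(x - y) + 1e-9          # Lipschitz
    assert np.dot(g, x - y) >= alpha * np.linalg.norm(x - y) ** 2 - 1e-9  # strong monotonicity
```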
We can use fixed point algorithms to solve the SFP on the basis of the following observation.
Let λ>0 and assume that x*∈Γ. Then Ax*∈Q, which implies that (I-PQ)Ax*=0, and thus, λA*(I-PQ)Ax*=0. Hence, we have the fixed point equation (I-λA*(I-PQ)A)x*=x*. Requiring that x*∈C, we consider the fixed point equation
(3.4)PC(I-λ∇f)x*=PC(I-λA*(I-PQ)A)x*=x*.
It is proven in [25, Proposition 3.2] that the solutions of the fixed point equation (3.4) are exactly the solutions of the SFP; namely, for given x*∈ℋ1,x* solves the SFP if and only if x* solves the fixed point equation (3.4).
Proposition 3.1 (see [26, Proposition 3.1]).
Given x*∈ℋ1, the following statements are equivalent:
x* solves the SFP;
x* solves the fixed point equation (3.4);
x* solves the variational inequality problem (VIP) of finding x*∈C such that
(3.5)〈∇f(x*),x-x*〉≥0,∀x∈C.
Remark 3.2.
It is clear from Proposition 3.1 that
(3.6)Γ=Fix(PC(I-λ∇f))=VI(C,∇f)
for all λ>0, where Fix(PC(I-λ∇f)) and VI(C,∇f) denote the set of fixed points of PC(I-λ∇f) and the solution set of the VIP (3.5), respectively.
We are now in a position to propose a modified extragradient method for solving the SFP and the fixed point problem of a k-strictly pseudocontractive mapping S:C→C and prove that the sequences generated by the proposed method converge weakly to an element of Fix(S)∩Γ.
Theorem 3.3.
Let S:C→C be a k-strictly pseudocontractive mapping such that Fix(S)∩Γ≠∅. Let {xn} and {yn} be the sequences in C generated by the following modified extragradient algorithm:
(3.7)x0=x∈C chosen arbitrarily,yn=PC(I-λn∇fαn)xn,xn+1=βnxn+γnPC(xn-λn∇fαn(yn))+δnSPC(xn-λn∇fαn(yn)),∀n≥0,
where {αn}⊂(0,∞),{λn}⊂(0,1/∥A∥2) and {βn},{γn},{δn}⊂[0,1] such that
∑n=0∞αn<∞;
0<liminfn→∞λn≤limsupn→∞λn<1/∥A∥2;
βn+γn+δn=1 and (γn+δn)k≤γn for all n≥0;
0<liminfn→∞βn≤limsupn→∞βn<1 and liminfn→∞δn>0.
Then, both the sequences {xn} and {yn} converge weakly to an element x^∈Fix(S)∩Γ.
Proof.
First, taking into account 0<liminfn→∞λn≤limsupn→∞λn<1/∥A∥2, without loss of generality, we may assume that {λn}⊂[a,b] for some a,b∈(0,1/∥A∥2).
We observe that PC(I-λ∇fα) is ζ-averaged for each λ∈(0,2/(α+∥A∥2)), where
(3.8)ζ=(2+λ(α+∥A∥2))/4∈(0,1).
See, for example, [35]. It follows that PC(I-λ∇fα) and PC(I-λn∇fαn) are nonexpansive for all n≥0.
Next, we show that the sequence {xn} is bounded. Indeed, take a fixed p∈Fix(S)∩Γ arbitrarily. Then, we get Sp=p and PC(I-λ∇f)p=p for λ∈(0,2/∥A∥2). For simplicity, we write vn=PC(xn-λn∇fαn(yn)) for all n≥0. Then we get xn+1=βnxn+γnvn+δnSvn for all n≥0. From (3.7), it follows that
(3.9)∥yn-p∥=∥PC(I-λn∇fαn)xn-PC(I-λn∇f)p∥≤∥PC(I-λn∇fαn)xn-PC(I-λn∇fαn)p∥+∥PC(I-λn∇fαn)p-PC(I-λn∇f)p∥≤∥xn-p∥+∥PC(I-λn∇fαn)p-PC(I-λn∇f)p∥≤∥xn-p∥+∥(I-λn∇fαn)p-(I-λn∇f)p∥=∥xn-p∥+λnαn∥p∥.
Also, by Proposition 2.1(ii), we have
(3.10)∥vn-p∥2≤∥xn-λn∇fαn(yn)-p∥2-∥xn-λn∇fαn(yn)-vn∥2=∥xn-p∥2-∥xn-vn∥2+2λn〈∇fαn(yn),p-vn〉=∥xn-p∥2-∥xn-vn∥2+2λn(〈∇fαn(yn)-∇fαn(p),p-yn〉+〈∇fαn(p),p-yn〉+〈∇fαn(yn),yn-vn〉)≤∥xn-p∥2-∥xn-vn∥2+2λn(〈∇fαn(p),p-yn〉+〈∇fαn(yn),yn-vn〉)=∥xn-p∥2-∥xn-vn∥2+2λn[〈(αnI+∇f)p,p-yn〉+〈∇fαn(yn),yn-vn〉]≤∥xn-p∥2-∥xn-vn∥2+2λn[αn〈p,p-yn〉+〈∇fαn(yn),yn-vn〉]=∥xn-p∥2-∥xn-yn∥2-2〈xn-yn,yn-vn〉-∥yn-vn∥2+2λn[αn〈p,p-yn〉+〈∇fαn(yn),yn-vn〉]=∥xn-p∥2-∥xn-yn∥2-∥yn-vn∥2+2〈xn-λn∇fαn(yn)-yn,vn-yn〉+2λnαn〈p,p-yn〉.
Further, by Proposition 2.1(i), we have
(3.11)〈xn-λn∇fαn(yn)-yn,vn-yn〉=〈xn-λn∇fαn(xn)-yn,vn-yn〉+〈λn∇fαn(xn)-λn∇fαn(yn),vn-yn〉≤〈λn∇fαn(xn)-λn∇fαn(yn),vn-yn〉≤λn∥∇fαn(xn)-∇fαn(yn)∥∥vn-yn∥≤λn(αn+∥A∥2)∥xn-yn∥∥vn-yn∥.
So, we obtain
(3.12)∥vn-p∥2≤∥xn-p∥2-∥xn-yn∥2-∥yn-vn∥2+2〈xn-λn∇fαn(yn)-yn,vn-yn〉+2λnαn〈p,p-yn〉≤∥xn-p∥2-∥xn-yn∥2-∥yn-vn∥2+2λn(αn+∥A∥2)∥xn-yn∥∥vn-yn∥+2λnαn∥p∥∥p-yn∥≤∥xn-p∥2-∥xn-yn∥2-∥yn-vn∥2+λn2(αn+∥A∥2)2∥xn-yn∥2+∥yn-vn∥2+2λnαn∥p∥∥p-yn∥=∥xn-p∥2+2λnαn∥p∥∥p-yn∥+(λn2(αn+∥A∥2)2-1)∥xn-yn∥2≤∥xn-p∥2+2λnαn∥p∥∥p-yn∥.
Since (γn+δn)k≤γn, utilizing Lemmas 2.10 and 2.11, from (3.9) and the last inequality, we conclude that
(3.13)∥xn+1-p∥2=∥βnxn+γnvn+δnSvn-p∥2=∥βn(xn-p)+(γn+δn)(1/(γn+δn))[γn(vn-p)+δn(Svn-p)]∥2=βn∥xn-p∥2+(γn+δn)∥(1/(γn+δn))[γn(vn-p)+δn(Svn-p)]∥2-βn(γn+δn)∥(1/(γn+δn))[γn(vn-xn)+δn(Svn-xn)]∥2≤βn∥xn-p∥2+(1-βn)∥vn-p∥2-(βn/(1-βn))∥xn+1-xn∥2≤βn∥xn-p∥2+(1-βn)[∥xn-p∥2+2λnαn∥p∥∥p-yn∥+(λn2(αn+∥A∥2)2-1)∥xn-yn∥2]-(βn/(1-βn))∥xn+1-xn∥2≤∥xn-p∥2+2λnαn∥p∥∥p-yn∥+(1-βn)(λn2(αn+∥A∥2)2-1)∥xn-yn∥2-(βn/(1-βn))∥xn+1-xn∥2≤∥xn-p∥2+αn(λn2∥p∥2+∥p-yn∥2)+(1-βn)(λn2(αn+∥A∥2)2-1)∥xn-yn∥2-(βn/(1-βn))∥xn+1-xn∥2≤∥xn-p∥2+αn[λn2∥p∥2+(∥xn-p∥+λnαn∥p∥)2]+(1-βn)(λn2(αn+∥A∥2)2-1)∥xn-yn∥2-(βn/(1-βn))∥xn+1-xn∥2≤∥xn-p∥2+αn[λn2∥p∥2+2∥xn-p∥2+2λn2αn2∥p∥2]+(1-βn)(λn2(αn+∥A∥2)2-1)∥xn-yn∥2-(βn/(1-βn))∥xn+1-xn∥2=(1+2αn)∥xn-p∥2+αnλn2∥p∥2(1+2αn2)+(1-βn)(λn2(αn+∥A∥2)2-1)∥xn-yn∥2-(βn/(1-βn))∥xn+1-xn∥2≤(1+2αn)∥xn-p∥2+αnλn2∥p∥2(1+2αn2)=(1+σn)∥xn-p∥2+bn,
where σn=2αn and bn=αnλn2∥p∥2(1+2αn2). Since ∑n=0∞αn<∞ and {λn}⊂[a,b] for some a,b∈(0,1/∥A∥2), we conclude that ∑n=0∞σn<∞ and ∑n=0∞bn<∞. Therefore, by Lemma 2.8, we deduce that
(3.14)limn→∞∥xn-p∥ exists for each p∈Fix(S)∩Γ,
and the sequence {xn} is bounded and so are {yn} and {vn}. From the last relations, we also obtain
(3.15)(1-βn)(1-λn2(αn+∥A∥2)2)∥xn-yn∥2+(βn/(1-βn))∥xn+1-xn∥2≤(1+2αn)∥xn-p∥2-∥xn+1-p∥2+αnλn2∥p∥2(1+2αn2).
Since {λn}⊂[a,b] for some a,b∈(0,1/∥A∥2),0<liminfn→∞βn≤limsupn→∞βn<1 and limn→∞αn=0, we have
(3.16)limn→∞∥xn-yn∥=limn→∞∥xn+1-xn∥=0.
Furthermore, we obtain
(3.17)∥yn-vn∥=∥PC(xn-λn∇fαn(xn))-PC(xn-λn∇fαn(yn))∥≤∥(xn-λn∇fαn(xn))-(xn-λn∇fαn(yn))∥=λn∥∇fαn(xn)-∇fαn(yn)∥≤λn(αn+∥A∥2)∥xn-yn∥.
This together with (3.16) implies that
(3.18)limn→∞∥yn-vn∥=0.
Note that
(3.19)∥vn-xn∥≤∥vn-yn∥+∥yn-xn∥,∥δn(Svn-xn)∥=∥xn+1-xn-γn(vn-xn)∥≤∥xn+1-xn∥+γn∥vn-xn∥≤∥xn+1-xn∥+∥vn-xn∥.
This together with (3.16), (3.18), and liminfn→∞δn>0 implies that
(3.20)limn→∞∥vn-xn∥=limn→∞∥Svn-xn∥=0.
So, we derive
(3.21)limn→∞∥Svn-vn∥=0.
Since ∇f=A*(I-PQ)A is Lipschitz continuous, from (3.18), we have
(3.22)limn→∞∥∇f(yn)-∇f(vn)∥=0.
As {xn} is bounded, there is a subsequence {xni} of {xn} that converges weakly to some x^. We claim that x^∈Fix(S)∩Γ. First, we show that x^∈Γ. Since ∥xn-vn∥→0 and ∥yn-vn∥→0, we also have vni⇀x^ and yni⇀x^. Let
(3.23)Tv={∇f(v)+NCv,ifv∈C,∅,ifv∉C,
where NCv={w∈ℋ1:〈v-u,w〉≥0, for all u∈C}. Then, T is maximal monotone and 0∈Tv if and only if v∈VI(C,∇f); see [34] for more details. Let (v,w)∈G(T). Then, we have
(3.24)w∈Tv=∇f(v)+NCv
and hence,
(3.25)w-∇f(v)∈NCv.
So, we have
(3.26)〈v-u,w-∇f(v)〉≥0,∀u∈C.
On the other hand, from
(3.27)yn=PC(xn-λn∇fαn(xn)),v∈C,
we have
(3.28)〈xn-λn∇fαn(xn)-yn,yn-v〉≥0,
and hence,
(3.29)〈v-yn,(yn-xn)/λn+∇fαn(xn)〉≥0.
Therefore, from
(3.30)w-∇f(v)∈NCv,yni∈C,
we have
(3.31)〈v-yni,w〉≥〈v-yni,∇f(v)〉≥〈v-yni,∇f(v)〉-〈v-yni,(yni-xni)/λni+∇fαni(xni)〉=〈v-yni,∇f(v)〉-〈v-yni,(yni-xni)/λni+∇f(xni)〉-αni〈v-yni,xni〉=〈v-yni,∇f(v)-∇f(yni)〉+〈v-yni,∇f(yni)-∇f(xni)〉-〈v-yni,(yni-xni)/λni〉-αni〈v-yni,xni〉≥〈v-yni,∇f(yni)-∇f(xni)〉-〈v-yni,(yni-xni)/λni〉-αni〈v-yni,xni〉.
Hence, letting i→∞ in (3.31), we obtain
(3.32)〈v-x^,w〉≥0.
Since T is maximal monotone, we have x^∈T-10, and hence, x^∈VI(C,∇f). Thus it is clear that x^∈Γ.
We show that x^∈Fix(S). Indeed, since vni⇀x^ and ∥vni-Svni∥→0 by (3.21), Lemma 2.7(ii) gives x^∈Fix(S). Therefore, we have x^∈Fix(S)∩Γ. This shows that ωw(xn)⊂Fix(S)∩Γ, where ωw(xn) is the weak ω-limit set defined in (2.1).
Since the limit limn→∞∥xn-p∥ exists for every p∈Fix(S)∩Γ, by Lemma 2.12, we know that
(3.34)xn⇀x^∈Fix(S)∩Γ.
Further, from ∥xn-yn∥→0, it follows that yn⇀x^. This shows that both sequences {xn} and {yn} converge weakly to x^∈Fix(S)∩Γ.
Remark 3.4.
It is worth emphasizing that the modified extragradient algorithm in Theorem 3.3 is essentially a predictor-corrector algorithm. Indeed, the first iterative step yn=PC(I-λn∇fαn)xn is the predictor step, and the second iterative step xn+1=βnxn+γnPC(xn-λn∇fαn(yn))+δnSPC(xn-λn∇fαn(yn)) is the corrector step. In addition, Theorem 3.3 extends the extragradient method due to Nadezhkina and Takahashi [8, Theorem 3.1].
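To make the predictor-corrector structure concrete, the following Python/NumPy sketch runs algorithm (3.7) on a toy two-dimensional instance. All problem data here are illustrative assumptions, not from the cited works: ℋ1=R², A=I (so ∥A∥=1 and ∇f=(I-PQ)), C=[-1,1]², Q=[0.2,0.8]² (hence Γ=Q), and S is the projection onto the diagonal {x:x1=x2}, which is nonexpansive (k=0), so γn=0 satisfies (γn+δn)k≤γn:

```python
import numpy as np

proj_C = lambda x: np.clip(x, -1.0, 1.0)
proj_Q = lambda y: np.clip(y, 0.2, 0.8)
S = lambda x: np.full(2, x.mean())               # projection onto the diagonal

def grad_f_alpha(x, alpha):
    # grad f_alpha = A*(I - P_Q)A + alpha*I, with A = I in this toy instance
    return (x - proj_Q(x)) + alpha * x

x = np.array([0.9, -0.9])
for n in range(2000):
    alpha_n = 1.0 / (n + 1) ** 2                 # summable, as condition (i) requires
    lam_n = 0.5                                  # in (0, 1/||A||^2)
    beta_n, gamma_n, delta_n = 0.5, 0.0, 0.5     # sum to 1; (gamma+delta)k <= gamma since k = 0
    y = proj_C(x - lam_n * grad_f_alpha(x, alpha_n))     # predictor step
    v = proj_C(x - lam_n * grad_f_alpha(y, alpha_n))     # extragradient projection
    x = beta_n * x + gamma_n * v + delta_n * S(v)        # corrector step
```

In this finite-dimensional toy instance weak and strong convergence coincide, and the iterates settle on a diagonal point of Q, that is, a point of Fix(S)∩Γ.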
Corollary 3.5.
Let S:C→C be a nonexpansive mapping such that Fix(S)∩Γ≠∅. Let {xn} and {yn} be the sequences in C generated by the following extragradient algorithm:
(3.35)x0=x∈C chosen arbitrarily,yn=PC(I-λn∇fαn)xn,xn+1=βnxn+(1-βn)SPC(xn-λn∇fαn(yn)),∀n≥0,
where {αn}⊂(0,∞),{λn}⊂(0,1/∥A∥2), and {βn}⊂[0,1] such that
∑n=0∞αn<∞;
0<liminfn→∞λn≤limsupn→∞λn<1/∥A∥2;
0<liminfn→∞βn≤limsupn→∞βn<1.
Then, both the sequences {xn} and {yn} converge weakly to an element x^∈Fix(S)∩Γ.
Proof.
In Theorem 3.3, putting γn=0 for every n≥0, we obtain that βn+δn=βn+γn+δn=1 and
(3.36)x0=x∈C chosen arbitrarily,yn=PC(I-λn∇fαn)xn,xn+1=βnxn+γnPC(xn-λn∇fαn(yn))+δnSPC(xn-λn∇fαn(yn))=βnxn+δnSPC(xn-λn∇fαn(yn)),∀n≥0.
Since S:C→C is a nonexpansive mapping, S:C→C must be a k-strictly pseudocontractive mapping with coefficient k=0. It is clear that (γn+δn)k≤γn for every n≥0 and liminfn→∞δn=1-limsupn→∞βn>0. In this case, all conditions in Theorem 3.3 are satisfied. Therefore, by Theorem 3.3, we derive the desired result.
Remark 3.6.
Corollary 3.5 essentially coincides with [26, Theorem 3.1]; hence our Theorem 3.3 includes [26, Theorem 3.1] as a special case. Utilizing [8, Theorem 3.1], Ceng et al. gave the following weak convergence result [26, Theorem 3.2].
Let S:C→C be a nonexpansive mapping such that Fix(S)∩Γ≠∅. Let {xn} and {yn} be the sequences in C generated by the following Nadezhkina and Takahashi extragradient algorithm:
(3.37)x0=x∈C chosen arbitrarily,yn=PC(I-λn∇f)xn,xn+1=βnxn+(1-βn)SPC(xn-λn∇f(yn)),∀n≥0,
where {λn}⊂[a,b] for some a,b∈(0,1/∥A∥2) and {βn}⊂[c,d] for some c,d∈(0,1). Then, both the sequences {xn} and {yn} converge weakly to an element x^∈Fix(S)∩Γ.
Clearly, [26, Theorem 3.2] is a weak convergence result corresponding to the choice αn=0 for all n≥0, whereas Corollary 3.5 is a weak convergence result for a sequence of regularization parameters {αn}⊂(0,∞).
Remark 3.7.
Theorem 3.3 improves, extends, and develops [25, Theorem 5.7] and [26, Theorem 3.1] in the following aspects.
The corresponding iterative algorithms in [25, Theorem 5.7] and [26, Theorem 3.1] are extended for developing our modified extragradient algorithm with regularization in Theorem 3.3.
The technique of proving weak convergence in Theorem 3.3 is different from those in [25, Theorem 5.7] and [26, Theorem 3.1] because our technique depends on the properties of maximal monotone mappings and strictly pseudocontractive mappings (e.g., Lemma 2.11) and the demiclosedness principle for strictly pseudocontractive mappings (e.g., Lemma 2.7) in Hilbert spaces.
The problem of finding an element of Fix(S)∩Γ with S:C→C being strictly pseudocontractive is more general than the problem of finding a solution of the SFP in [25, Theorem 5.7] and the problem of finding an element of Fix(S)∩Γ with S:C→C being nonexpansive in [26, Theorem 3.1].
The second iterative step xn+1=βnxn+γnPC(xn-λn∇fαn(yn))+δnSPC(xn-λn∇fαn(yn)) in our algorithm reduces to the second iterative step xn+1=βnxn+(1-βn)SPC(xn-λn∇fαn(yn)) in the algorithm of [26, Theorem 3.1] whenever γn=0 for all n≥0.
Utilizing Theorem 3.3, we have the following two new results in the setting of real Hilbert spaces.
Corollary 3.8.
Let S:ℋ1→ℋ1 be a k-strictly pseudocontractive mapping such that Fix(S)∩(∇f)-10≠∅. Let {xn} and {yn} be two sequences generated by
(3.38)x0=x∈ℋ1 chosen arbitrarily,yn=(I-λn∇fαn)xn,xn+1=βnxn+γn(xn-λn∇fαn(yn))+δnS(xn-λn∇fαn(yn)),∀n≥0,
where {αn}⊂(0,∞),{λn}⊂(0,1/∥A∥2) and {βn},{γn},{δn}⊂[0,1] such that
∑n=0∞αn<∞;
0<liminfn→∞λn≤limsupn→∞λn<1/∥A∥2;
βn+γn+δn=1 and (γn+δn)k≤γn for all n≥0;
0<liminfn→∞βn≤limsupn→∞βn<1 and liminfn→∞δn>0.
Then, both the sequences {xn} and {yn} converge weakly to an element x^∈Fix(S)∩(∇f)-10.
Proof.
In Theorem 3.3, putting C=ℋ1, we have
(3.39)(∇f)-10=VI(ℋ1,∇f)=Γ
and Pℋ1=I, the identity mapping. By Theorem 3.3, we obtain the desired result.
Remark 3.9.
In Corollary 3.8, putting γn=0 for every n≥0 and letting S:ℋ1→ℋ1 be a nonexpansive mapping, Corollary 3.8 essentially reduces to [26, Corollary 3.2]. Hence, Corollary 3.8 includes [26, Corollary 3.2] as a special case.
Corollary 3.10.
Let B:ℋ1→2ℋ1 be a maximal monotone mapping such that B-10∩(∇f)-10≠∅. Let JrB be the resolvent of B for each r>0. Let {xn} and {yn} be the sequences generated by
(3.40)x0=x∈ℋ1 chosen arbitrarily,yn=(I-λn∇fαn)xn,xn+1=βnxn+γn(xn-λn∇fαn(yn))+δnJrB(xn-λn∇fαn(yn)),∀n≥0,
where {αn}⊂(0,∞),{λn}⊂(0,1/∥A∥2) and {βn},{γn},{δn}⊂[0,1] such that
∑n=0∞αn<∞;
0<liminfn→∞λn≤limsupn→∞λn<1/∥A∥2;
βn+γn+δn=1 and (γn+δn)k≤γn for all n≥0;
0<liminfn→∞βn≤limsupn→∞βn<1 and liminfn→∞δn>0.
Then, both the sequences {xn} and {yn} converge weakly to an element x^∈B-10∩(∇f)-10.
Proof.
In Theorem 3.3, putting C=ℋ1 and S=JrB, the resolvent of B, we know that Pℋ1=I, the identity mapping, and S is nonexpansive. In this case, we get Fix(S)=Fix(JrB)=B-10 and
(3.41)(∇f)-10=VI(ℋ1,∇f)=Γ.
By Theorem 3.3, we obtain the desired result.
Remark 3.11.
In Corollary 3.10, putting γn=0 for every n≥0, Corollary 3.10 essentially reduces to [26, Corollary 3.3]. Hence, Corollary 3.10 includes [26, Corollary 3.3] as a special case.
On the other hand, by combining the regularization method and Jung’s composite viscosity approximation method [9], we introduce another new composite iterative scheme for finding an element of Fix(S)∩Γ, where S:C→C is nonexpansive, and prove strong convergence of this scheme. To attain this object, we need to use the following lemmas.
Lemma 3.12 (see [36]).
Let {an} be a sequence of nonnegative real numbers satisfying the property
(3.42)an+1≤(1-sn)an+sntn+rn,∀n≥0,
where {sn}⊂(0,1] and {tn} are such that
∑n=0∞sn=∞;
either limsupn→∞tn≤0 or ∑n=0∞|sntn|<∞;
∑n=0∞rn<∞, where rn≥0 for all n≥0.
Then, limn→∞an=0.
Lemma 3.13.
In a real Hilbert space ℋ, there holds the following inequality:
(3.43)∥x+y∥2≤∥x∥2+2〈y,x+y〉,∀x,y∈ℋ.
Theorem 3.14.
Let Q:C→C be a contractive mapping with coefficient ρ∈[0,1) and S:C→C be a nonexpansive mapping such that Fix(S)∩Γ≠∅. Assume that 0<λ<2/∥A∥2, and let {xn} and {yn} be the sequences in C generated by the following composite extragradient-like algorithm:
(3.44)x0=x∈C chosen arbitrarily,yn=βnQxn+(1-βn)SPC(xn-λ∇fαn(xn)),xn+1=(1-γn)yn+γnSPC(yn-λ∇fαn(yn)),∀n≥0,
where the sequences of parameters {αn}⊂(0,∞) and {βn},{γn}⊂[0,1] satisfy the following conditions:
∑n=0∞αn<∞;
limn→∞βn=0,∑n=0∞βn=∞ and ∑n=0∞|βn+1-βn|<∞;
limsupn→∞γn<1 and ∑n=0∞|γn+1-γn|<∞.
Then, both the sequences {xn} and {yn} converge strongly to q∈Fix(S)∩Γ, which is a unique solution of the following variational inequality:
(3.45)〈(I-Q)q,q-p〉≤0,∀p∈Fix(S)∩Γ.
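Before the proof, here is a toy numerical run of the composite extragradient-like algorithm (3.44) in Python/NumPy. The instance is purely an illustrative assumption: A=I, C=[-1,1]², Q=[0.2,0.8]², and S the (nonexpansive) diagonal projection, as in the sketch given for Theorem 3.3; the contractive mapping of the theorem (called Q there) is renamed `contr` to avoid a clash with the set Q, and is taken to be contr(x)=0.3x, so ρ=0.3:

```python
import numpy as np

proj_C = lambda x: np.clip(x, -1.0, 1.0)
proj_Qset = lambda y: np.clip(y, 0.2, 0.8)
S = lambda x: np.full(2, x.mean())               # projection onto the diagonal
contr = lambda x: 0.3 * x                        # contraction with rho = 0.3

def grad_f_alpha(x, alpha):
    # grad f_alpha = (I - P_Q) + alpha*I, since A = I in this toy instance
    return (x - proj_Qset(x)) + alpha * x

lam = 0.5                                        # fixed step in (0, 2/||A||^2)
x = np.array([0.9, -0.9])
for n in range(10000):
    alpha_n = 1.0 / (n + 1) ** 2                 # summable (condition (i))
    beta_n = 1.0 / (n + 1)                       # beta_n -> 0, sum beta_n = infinity
    gamma_n = 0.5                                # limsup gamma_n < 1
    y = beta_n * contr(x) + (1 - beta_n) * S(proj_C(x - lam * grad_f_alpha(x, alpha_n)))
    x = (1 - gamma_n) * y + gamma_n * S(proj_C(y - lam * grad_f_alpha(y, alpha_n)))
```

For this particular contraction, the limit q characterized by (3.45) is the minimum-norm element of Fix(S)∩Γ, which here is the diagonal point (0.2, 0.2); the iterates approach it, although only at the slow rate dictated by βn=1/(n+1).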
Proof.
Repeating the same argument as in the proof of Theorem 3.3, we obtain that for each λ∈(0,2/(α+∥A∥2)),PC(I-λ∇fα) is ζ-averaged with
(3.46)ζ=1/2+λ(α+∥A∥2)/2-(1/2)·(λ(α+∥A∥2)/2)=(2+λ(α+∥A∥2))/4∈(0,1).
This shows that PC(I-λ∇fα) is nonexpansive. Furthermore, for λ∈(0,2/∥A∥2), since αn→0 and hence limn→∞(2/(αn+∥A∥2))=2/∥A∥2, we may assume that
(3.47)0<λ<2αn+∥A∥2,∀n≥0.
Consequently, it follows that for each integer n≥0, PC(I-λ∇fαn) is ζn-averaged with
(3.48)ζn=1/2+λ(αn+∥A∥2)/2-(1/2)·(λ(αn+∥A∥2)/2)=(2+λ(αn+∥A∥2))/4∈(0,1).
This immediately implies that PC(I-λ∇fαn) is nonexpansive for all n≥0. Next, we divide the remainder of the proof into several steps.
Step 1. {xn} is bounded.
Indeed, put un=PC(xn-λ∇fαn(xn)) and vn=PC(yn-λ∇fαn(yn)) for every n≥0. Take a fixed p∈Fix(S)∩Γ arbitrarily. Then, we get Sp=p and PC(I-λ∇f)p=p for λ∈(0,2/∥A∥2). Hence, we have
(3.49)∥un-p∥=∥PC(I-λ∇fαn)xn-PC(I-λ∇f)p∥≤∥PC(I-λ∇fαn)xn-PC(I-λ∇fαn)p∥+∥PC(I-λ∇fαn)p-PC(I-λ∇f)p∥≤∥xn-p∥+∥PC(I-λ∇fαn)p-PC(I-λ∇f)p∥≤∥xn-p∥+λαn∥p∥.
Similarly we get ∥vn-p∥≤∥yn-p∥+λαn∥p∥. Thus, from (3.44), we have
(3.50)∥yn-p∥=∥βn(Qxn-p)+(1-βn)(Sun-p)∥≤βn∥Qxn-p∥+(1-βn)∥un-p∥≤βn∥Qxn-Qp∥+βn∥Qp-p∥+(1-βn)(∥xn-p∥+λαn∥p∥)≤βnρ∥xn-p∥+βn∥Qp-p∥+(1-βn)(∥xn-p∥+λαn∥p∥)=(1-βn(1-ρ))∥xn-p∥+βn∥Qp-p∥+(1-βn)λαn∥p∥=(1-βn(1-ρ))∥xn-p∥+βn(1-ρ)(∥Qp-p∥/(1-ρ))+(1-βn)λαn∥p∥≤max{∥xn-p∥,∥Qp-p∥/(1-ρ)}+λαn∥p∥,
and hence,
(3.51)∥xn+1-p∥=∥(1-γn)(yn-p)+γn(Svn-p)∥≤(1-γn)∥yn-p∥+γn∥vn-p∥≤(1-γn)∥yn-p∥+γn(∥yn-p∥+λαn∥p∥)≤∥yn-p∥+λαn∥p∥≤max{∥xn-p∥,∥Qp-p∥/(1-ρ)}+λαn∥p∥+λαn∥p∥=max{∥xn-p∥,∥Qp-p∥/(1-ρ)}+2λαn∥p∥.
By induction, we get
(3.52)∥xn+1-p∥≤max{∥x0-p∥,∥Qp-p∥/(1-ρ)}+2λ∥p∥∑i=0nαi,∀n≥0.
This implies that {xn} is bounded and so are {yn},{un},{vn}. It is clear that both {Sun} and {Svn} are also bounded. By condition (ii), we also obtain
(3.53)∥yn-Sun∥=βn∥Qxn-Sun∥→0 (n→∞).
Step 2. limn→∞∥xn+1-xn∥=0.
Indeed, from (3.44), we have
(3.54)yn=βnQxn+(1-βn)Sun,yn-1=βn-1Qxn-1+(1-βn-1)Sun-1,∀n≥1.
Simple calculations show that
(3.55)yn-yn-1=(1-βn)(Sun-Sun-1)+(βn-βn-1)(Qxn-1-Sun-1)+βn(Qxn-Qxn-1).
Since
(3.56)∥un-un-1∥≤∥PC(I-λ∇fαn)xn-PC(I-λ∇fαn)xn-1∥+∥PC(I-λ∇fαn)xn-1-PC(I-λ∇fαn-1)xn-1∥≤∥xn-xn-1∥+∥(I-λ∇fαn)xn-1-(I-λ∇fαn-1)xn-1∥=∥xn-xn-1∥+∥λ∇fαn(xn-1)-λ∇fαn-1(xn-1)∥=∥xn-xn-1∥+λ|αn-αn-1|∥xn-1∥
for every n≥1, we have
(3.57)∥yn-yn-1∥≤(1-βn)∥Sun-Sun-1∥+|βn-βn-1|∥Qxn-1-Sun-1∥+βn∥Qxn-Qxn-1∥≤(1-βn)∥un-un-1∥+|βn-βn-1|∥Qxn-1-Sun-1∥+βnρ∥xn-xn-1∥≤(1-βn)[∥xn-xn-1∥+λ|αn-αn-1|∥xn-1∥]+|βn-βn-1|∥Qxn-1-Sun-1∥+βnρ∥xn-xn-1∥≤(1-βn(1-ρ))∥xn-xn-1∥+λ|αn-αn-1|∥xn-1∥+|βn-βn-1|∥Qxn-1-Sun-1∥≤(1-βn(1-ρ))∥xn-xn-1∥+M1[|αn-αn-1|+|βn-βn-1|]
for every n≥1, where M1=sup{λ∥xn-1∥+∥Qxn-1-Sun-1∥:n≥1}.
On the other hand, from (3.44), we have
(3.58)xn+1=(1-γn)yn+γnSvn,xn=(1-γn-1)yn-1+γn-1Svn-1.
Also, simple calculations show that
(3.59)xn+1-xn=(1-γn)(yn-yn-1)+γn(Svn-Svn-1)+(γn-γn-1)(Svn-1-yn-1).
Since
(3.60)∥vn-vn-1∥≤∥PC(I-λ∇fαn)yn-PC(I-λ∇fαn)yn-1∥+∥PC(I-λ∇fαn)yn-1-PC(I-λ∇fαn-1)yn-1∥≤∥yn-yn-1∥+∥(I-λ∇fαn)yn-1-(I-λ∇fαn-1)yn-1∥=∥yn-yn-1∥+∥λ∇fαn(yn-1)-λ∇fαn-1(yn-1)∥=∥yn-yn-1∥+λ|αn-αn-1|∥yn-1∥
for every n≥1, it follows from (3.57) that
(3.61)∥xn+1-xn∥≤(1-γn)∥yn-yn-1∥+γn∥Svn-Svn-1∥+|γn-γn-1|∥Svn-1-yn-1∥≤(1-γn)∥yn-yn-1∥+γn∥vn-vn-1∥+|γn-γn-1|∥Svn-1-yn-1∥≤(1-γn)∥yn-yn-1∥+γn[∥yn-yn-1∥+λ|αn-αn-1|∥yn-1∥]+|γn-γn-1|∥Svn-1-yn-1∥≤∥yn-yn-1∥+λ|αn-αn-1|∥yn-1∥+|γn-γn-1|∥Svn-1-yn-1∥≤∥yn-yn-1∥+M2[|αn-αn-1|+|γn-γn-1|]≤(1-βn(1-ρ))∥xn-xn-1∥+M1[|αn-αn-1|+|βn-βn-1|]+M2[|αn-αn-1|+|γn-γn-1|]=(1-βn(1-ρ))∥xn-xn-1∥+(M1+M2)|αn-αn-1|+M1|βn-βn-1|+M2|γn-γn-1|
for every n≥1, where M2=sup{λ∥yn-1∥+∥Svn-1-yn-1∥:n≥1}. From conditions (i), (ii), (iii), it is easy to see that
(3.62)limn→∞βn(1-ρ)=0,∑n=0∞βn(1-ρ)=∞,∑n=0∞[(M1+M2)|αn-αn-1|+M1|βn-βn-1|+M2|γn-γn-1|]<∞.
Applying Lemma 3.12 to (3.61), we have
(3.63)limn→∞∥xn+1-xn∥=0.
From (3.57), we also have that ∥yn+1-yn∥→0 as n→∞.
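For the reader's convenience, the convergence lemma invoked here (and again in Step 6) is presumably the standard result on real recursions (cf. Xu's lemma on iterative algorithms for nonlinear operators); we record the form in which it is being used:

```latex
\text{If } a_{n+1}\le (1-s_n)\,a_n + s_n t_n + r_n \ (n\ge 0)
\text{ with } \{s_n\}\subset[0,1],\ \sum_{n=0}^{\infty}s_n=\infty,\
\limsup_{n\to\infty}t_n\le 0,\ \sum_{n=0}^{\infty}r_n<\infty,
\text{ then } \lim_{n\to\infty}a_n=0.
```

In (3.61) it is applied with sn=βn(1-ρ), tn≡0, and rn the summable perturbation collected in (3.62), whereas Step 6 uses all three sequences.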
Step 3. limn→∞∥xn-yn∥=limn→∞∥xn-Sun∥=0.
Indeed, it follows that
(3.64)∥xn+1-yn∥=γn∥Svn-yn∥≤γn(∥Svn-Sun∥+∥Sun-yn∥)≤γn(∥vn-un∥+∥Sun-yn∥)=γn(∥PC(I-λ∇fαn)yn-PC(I-λ∇fαn)xn∥+∥Sun-yn∥)≤γn(∥yn-xn∥+∥Sun-yn∥)≤γn(∥yn-xn+1∥+∥xn+1-xn∥+∥Sun-yn∥),
which implies that
(3.65)(1-γn)∥yn-xn+1∥≤γn(∥xn+1-xn∥+∥Sun-yn∥)≤∥xn+1-xn∥+∥Sun-yn∥.
Obviously, utilizing (3.53), ∥xn+1-xn∥→0 and limsupn→∞γn<1, we have ∥xn+1-yn∥→0 as n→∞. This implies that
(3.66)∥xn-yn∥≤∥xn-xn+1∥+∥xn+1-yn∥→0asn→∞.
From (3.53) and (3.66), we also have
(3.67)∥xn-Sun∥≤∥xn-yn∥+∥yn-Sun∥→0asn→∞.
Step 4. limn→∞∥xn-un∥=0.
Indeed, take a fixed p∈Fix(S)∩Γ arbitrarily. Then, by the convexity of ∥·∥2, we have
(3.68)∥yn-p∥2=∥βn(Qxn-p)+(1-βn)(Sun-p)∥2≤βn∥Qxn-p∥2+(1-βn)∥Sun-p∥2≤βn∥Qxn-p∥2+(1-βn)∥un-p∥2≤βn∥Qxn-p∥2+(1-βn)∥(I-λ∇fαn)xn-(I-λ∇f)p∥2=βn∥Qxn-p∥2+(1-βn)∥(I-λ∇f)xn-(I-λ∇f)p-λαnxn∥2≤βn∥Qxn-p∥2+(1-βn)[∥(I-λ∇f)xn-(I-λ∇f)p∥2-2λαn〈xn,(I-λ∇fαn)xn-(I-λ∇f)p〉]≤βn∥Qxn-p∥2+(1-βn)[∥xn-p∥2+λ(λ-2/∥A∥2)∥∇f(xn)-∇f(p)∥2+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥]≤βn∥Qxn-p∥2+(1-βn)[∥xn-p∥2+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥].
So we obtain
(3.69)(1-βn)λ(2/∥A∥2-λ)∥∇f(xn)-∇f(p)∥2≤βn∥Qxn-p∥2+(1-βn)∥xn-p∥2-∥yn-p∥2+(1-βn)2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥≤βn∥Qxn-p∥2+(∥xn-p∥+∥yn-p∥)(∥xn-p∥-∥yn-p∥)+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥≤βn∥Qxn-p∥2+(∥xn-p∥+∥yn-p∥)∥xn-yn∥+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥.
Since αn→0,βn→0,∥xn-yn∥→0, and 0<λ<2/∥A∥2, from the boundedness of {xn} and {yn}, it follows that limn→∞∥∇f(xn)-∇f(p)∥=0, and hence
(3.70)limn→∞∥∇fαn(xn)-∇f(p)∥=0.
Moreover, from the firm nonexpansiveness of PC, we obtain
(3.71)∥un-p∥2=∥PC(I-λ∇fαn)xn-PC(I-λ∇f)p∥2≤〈(I-λ∇fαn)xn-(I-λ∇f)p,un-p〉=(1/2){∥(I-λ∇fαn)xn-(I-λ∇f)p∥2+∥un-p∥2-∥(I-λ∇fαn)xn-(I-λ∇f)p-(un-p)∥2}≤(1/2){∥xn-p∥2+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥+∥un-p∥2-∥xn-un∥2+2λ〈xn-un,∇fαn(xn)-∇f(p)〉-λ2∥∇fαn(xn)-∇f(p)∥2},
and so
(3.72)∥un-p∥2≤∥xn-p∥2-∥xn-un∥2+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥+2λ〈xn-un,∇fαn(xn)-∇f(p)〉-λ2∥∇fαn(xn)-∇f(p)∥2.
Thus, we have
(3.73)∥yn-p∥2≤βn∥Qxn-p∥2+(1-βn)∥un-p∥2≤βn∥Qxn-p∥2+∥xn-p∥2-(1-βn)∥xn-un∥2+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥+2(1-βn)λ〈xn-un,∇fαn(xn)-∇f(p)〉-(1-βn)λ2∥∇fαn(xn)-∇f(p)∥2,
which implies that
(3.74)(1-βn)∥xn-un∥2≤βn∥xn-p∥2+(∥xn-p∥+∥yn-p∥)(∥xn-p∥-∥yn-p∥)+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥+2(1-βn)λ〈xn-un,∇fαn(xn)-∇f(p)〉-(1-βn)λ2∥∇fαn(xn)-∇f(p)∥2≤βn∥xn-p∥2+(∥xn-p∥+∥yn-p∥)(∥xn-yn∥)+2λαn∥xn∥∥(I-λ∇fαn)xn-(I-λ∇f)p∥+2(1-βn)λ〈xn-un,∇fαn(xn)-∇f(p)〉-(1-βn)λ2∥∇fαn(xn)-∇f(p)∥2.
Since αn→0,βn→0,∥xn-yn∥→0, and ∥∇fαn(xn)-∇f(p)∥→0, from the boundedness of {xn},{yn}, and {un}, it follows that limn→∞∥xn-un∥=0 and hence
(3.75)limn→∞∥yn-un∥=0.
Step 5. limsupn→∞〈Qq-q,yn-q〉≤0 for q∈Fix(S)∩Γ, where q is the unique solution of the variational inequality
(3.76)〈(I-Q)q,q-p〉≤0,∀p∈Fix(S)∩Γ.
Indeed, we choose a subsequence {uni} of {un} such that
(3.77)limsupn→∞〈Qq-q,Sun-q〉=limi→∞〈Qq-q,Suni-q〉.
Since {uni} is bounded, there exists a subsequence {unij} of {uni} which converges weakly to u-. Without loss of generality we may assume that uni⇀u-. Then we can obtain u-∈Fix(S)∩Γ. Let us first show that u-∈Γ. Define
(3.78)Tv={∇f(v)+NCv, if v∈C, ∅, if v∉C,
where NCv={w∈ℋ1:〈v-u,w〉≥0, for all u∈C}. Then, T is maximal monotone and 0∈Tv if and only if v∈VI(C,∇f); see [34] for more details. Let (v,w)∈G(T). Then, we have
(3.79)w∈Tv=∇f(v)+NCv
and hence,
(3.80)w-∇f(v)∈NCv.
So, we have
(3.81)〈v-u,w-∇f(v)〉≥0,∀u∈C.
On the other hand, from
(3.82)un=PC(xn-λ∇fαn(xn)),v∈C,
we have
(3.83)〈xn-λ∇fαn(xn)-un,un-v〉≥0,
and hence,
(3.84)〈v-un,(un-xn)/λ+∇fαn(xn)〉≥0.
Therefore, from
(3.85)w-∇f(v)∈NCv,uni∈C,
we have
(3.86)〈v-uni,w〉≥〈v-uni,∇f(v)〉≥〈v-uni,∇f(v)〉-〈v-uni,(uni-xni)/λ+∇fαni(xni)〉=〈v-uni,∇f(v)〉-〈v-uni,(uni-xni)/λ+∇f(xni)〉-αni〈v-uni,xni〉=〈v-uni,∇f(v)-∇f(uni)〉+〈v-uni,∇f(uni)-∇f(xni)〉-〈v-uni,(uni-xni)/λ〉-αni〈v-uni,xni〉≥〈v-uni,∇f(uni)-∇f(xni)〉-〈v-uni,(uni-xni)/λ〉-αni〈v-uni,xni〉.
Hence, letting i→∞ and using ∥uni-xni∥→0 (by Step 4), αni→0, the Lipschitz continuity of ∇f, and the boundedness of {xni} and {uni}, we obtain
(3.87)〈v-u-,w〉≥0.
Since T is maximal monotone, we have u-∈T-10, and hence, u-∈VI(C,∇f). Thus it is clear that u-∈Γ.
On the other hand, by Steps 3 and 4, ∥un-Sun∥≤∥un-xn∥+∥xn-Sun∥→0. So, by Lemma 2.7(ii), we derive u-∈Fix(S) and hence u-∈Fix(S)∩Γ. Then from (3.76) and (3.77), we have
(3.88)limsupn→∞〈Qq-q,Sun-q〉=limi→∞〈Qq-q,Suni-q〉=〈Qq-q,u--q〉=〈(I-Q)q,q-u-〉≤0.
Thus, from (3.53), we obtain
(3.89)limsupn→∞〈Qq-q,yn-q〉≤limsupn→∞〈Qq-q,yn-Sun〉+limsupn→∞〈Qq-q,Sun-q〉≤limsupn→∞∥Qq-q∥∥yn-Sun∥+limsupn→∞〈Qq-q,Sun-q〉≤0.
Step 6. limn→∞∥xn-q∥=0 for q∈Fix(S)∩Γ, where q is the unique solution of the variational inequality
(3.90)〈(I-Q)q,q-p〉≤0,∀p∈Fix(S)∩Γ.
Indeed, utilizing (3.49), (3.51), and Lemma 3.13, we have
(3.91)∥xn+1-q∥2≤(∥yn-q∥+λαn∥q∥)2=∥yn-q∥2+λαn∥q∥(2∥yn-q∥+λαn∥q∥)≤∥yn-q∥2+M3αn=∥βn(Qxn-q)+(1-βn)(Sun-q)∥2+M3αn≤(1-βn)2∥Sun-q∥2+2βn〈Qxn-q,yn-q〉+M3αn≤(1-βn)2∥un-q∥2+2βn〈Qxn-Qq,yn-q〉+2βn〈Qq-q,yn-q〉+M3αn≤(1-βn)2(∥xn-q∥+λαn∥q∥)2+2βnρ∥xn-q∥∥yn-q∥+2βn〈Qq-q,yn-q〉+M3αn≤(1-βn)2(∥xn-q∥2+M3αn)+2βnρ∥xn-q∥(∥yn-xn∥+∥xn-q∥)+2βn〈Qq-q,yn-q〉+M3αn≤(1-βn)2∥xn-q∥2+2βnρ∥xn-q∥(∥yn-xn∥+∥xn-q∥)+2βn〈Qq-q,yn-q〉+2M3αn=[1-2βn(1-ρ)]∥xn-q∥2+βn[2ρ∥xn-q∥∥yn-xn∥+βn∥xn-q∥2+2〈Qq-q,yn-q〉]+2M3αn
for every n≥0, where M3=sup{λ∥q∥[2(∥xn-q∥+∥yn-q∥)+λαn∥q∥]:n≥0}. Now, put an=∥xn-q∥2, sn=2βn(1-ρ), tn=(1/(2(1-ρ)))[2ρ∥xn-q∥∥yn-xn∥+βn∥xn-q∥2+2〈Qq-q,yn-q〉], and rn=2M3αn. Then (3.91) can be rewritten as
(3.92)an+1≤(1-sn)an+sntn+rn.
It is easy to see that ∑n=0∞sn=∞,limsupn→∞tn≤0 and ∑n=0∞rn<∞. Thus by Lemma 3.12, we obtain xn→q. This completes the proof.
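To illustrate the scheme concretely, the following sketch runs algorithm (3.44) on a toy split feasibility instance in the plane. Everything specific here (C=[0,1]², the matrix A, taking the image-space constraint set to be the singleton {b} so that ∇f(x)=Aᵀ(Ax-b), the choices of S, Q, and the parameter sequences) is an illustrative assumption on our part, not part of the theorem.

```python
import numpy as np

# Toy SFP instance: C = [0,1]^2 and image-space set {b} with b = A @ c_star,
# so c_star solves the SFP and Fix(S) ∩ Γ is nonempty (our assumed setup).
A = np.array([[2.0, 0.0], [1.0, 1.0]])
c_star = np.array([0.3, 0.7])
b = A @ c_star

P_C = lambda x: np.clip(x, 0.0, 1.0)      # metric projection onto C = [0,1]^2

def grad_f_alpha(x, alpha):
    # ∇f_α = ∇f + αI with ∇f(x) = Aᵀ(Ax − b) for the singleton image-space set
    return A.T @ (A @ x - b) + alpha * x

S = lambda x: x                           # identity: trivially nonexpansive
Q = lambda x: 0.5 * x                     # a ρ-contraction with ρ = 1/2

lam = 1.0 / np.linalg.norm(A, 2) ** 2     # λ ∈ (0, 2/||A||²)
x = np.array([0.9, 0.1])
for n in range(2000):
    alpha_n = 1.0 / (n + 1) ** 2          # summable, as required
    beta_n = 1.0 / (n + 2)                # β_n → 0, ∑β_n = ∞, ∑|β_{n+1}−β_n| < ∞
    gamma_n = 0.5                         # limsup γ_n < 1
    y = beta_n * Q(x) + (1 - beta_n) * S(P_C(x - lam * grad_f_alpha(x, alpha_n)))
    x = (1 - gamma_n) * y + gamma_n * S(P_C(y - lam * grad_f_alpha(y, alpha_n)))
# x is now close to c_star, the unique element of Fix(S) ∩ Γ in this instance
```

Since A is invertible here, Fix(S)∩Γ reduces to the single point c_star, so the strong limit q of Theorem 3.14 is c_star regardless of the particular contraction Q.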
Corollary 3.15.
Let Q:ℋ1→ℋ1 be a contractive mapping with coefficient ρ∈[0,1) and S:ℋ1→ℋ1 be a nonexpansive mapping such that Fix(S)∩(∇f)-10≠∅. Assume that 0<λ<2/∥A∥2, and let {xn} and {yn} be the sequences in ℋ1 generated by
(3.93)x0=x∈ℋ1 chosen arbitrarily, yn=βnQxn+(1-βn)S(xn-λ∇fαn(xn)), xn+1=(1-γn)yn+γnS(yn-λ∇fαn(yn)), ∀n≥0,
where the sequences of parameters {αn}⊂(0,∞) and {βn},{γn}⊂[0,1] satisfy the following conditions:
(i) ∑n=0∞αn<∞;
(ii) limn→∞βn=0, ∑n=0∞βn=∞, and ∑n=0∞|βn+1-βn|<∞;
(iii) limsupn→∞γn<1 and ∑n=0∞|γn+1-γn|<∞.
Then, both the sequences {xn} and {yn} converge strongly to q∈Fix(S)∩(∇f)-10, which is the unique solution of the following variational inequality:
(3.94)〈(I-Q)q,q-p〉≤0,∀p∈Fix(S)∩(∇f)-10.
Proof.
In Theorem 3.14, putting C=ℋ1, we deduce that Pℋ1=I, the identity mapping, Γ=VI(ℋ1,∇f)=(∇f)-10, and
(3.95)x0=x∈C(=ℋ1) chosen arbitrarily, yn=βnQxn+(1-βn)SPC(xn-λ∇fαn(xn))=βnQxn+(1-βn)S(xn-λ∇fαn(xn)), xn+1=(1-γn)yn+γnSPC(yn-λ∇fαn(yn))=(1-γn)yn+γnS(yn-λ∇fαn(yn)),
for every n≥0. Then, by Theorem 3.14, we obtain the desired result.
Remark 3.16.
Theorem 3.14 improves and develops [25, Theorem 5.7], [26, Theorem 3.1], and [9, Theorem 3.1] in the following aspects.
The iterative algorithm of [9, Theorem 3.1] is extended to the composite extragradient-like algorithm with regularization in Theorem 3.14.
The technique of proving strong convergence in Theorem 3.14 is very different from those in [25, Theorem 5.7], [26, Theorem 3.1], and [9, Theorem 3.1] because our technique depends on Lemmas 3.12 and 3.13.
Compared with [25, Theorem 5.7] and [26, Theorem 3.1], which are weak convergence results, Theorem 3.14 establishes strong convergence and is therefore a genuine strengthening.
In [9, Theorem 3.1], Jung actually introduced the following composite iterative algorithm:
(3.96)x0=x∈C chosen arbitrarily, yn=βnQxn+(1-βn)SPC(xn-λnAxn), xn+1=(1-γn)yn+γnSPC(yn-λnAyn), ∀n≥0,
where A is inverse-strongly monotone and S is nonexpansive. By replacing λnA with λ∇fαn, we obtain the composite extragradient-like algorithm in Theorem 3.14. Consequently, our algorithm is quite different from Jung's algorithm.
Furthermore, utilizing Jung [9, Theorem 3.1], we can immediately obtain the following strong convergence result.
Theorem 3.17.
Let Q:C→C be a contractive mapping with coefficient ρ∈[0,1) and S:C→C be a nonexpansive mapping such that Fix(S)∩Γ≠∅. Let {xn} and {yn} be the sequences in C generated by the following composite extragradient-like algorithm:
(3.97)x0=x∈C chosen arbitrarily, yn=βnQxn+(1-βn)SPC(xn-λn∇f(xn)), xn+1=(1-γn)yn+γnSPC(yn-λn∇f(yn)), ∀n≥0,
where the sequences of parameters {λn}⊂(0,2/∥A∥2) and {βn},{γn}⊂[0,1] satisfy the following conditions:
(i) limn→∞βn=0, ∑n=0∞βn=∞, and ∑n=0∞|βn+1-βn|<∞;
(ii) 0<liminfn→∞λn≤limsupn→∞λn<2/∥A∥2 and ∑n=0∞|λn+1-λn|<∞;
(iii) limsupn→∞γn<1 and ∑n=0∞|γn+1-γn|<∞.
Then, both the sequences {xn} and {yn} converge strongly to q∈Fix(S)∩Γ, which is the unique solution of the following variational inequality:
(3.98)〈(I-Q)q,q-p〉≤0,∀p∈Fix(S)∩Γ.
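The non-regularized iteration (3.97) can be sketched in the same spirit as before. Again, the concrete instance (the matrix A, the singleton image-space set {b} giving ∇f(x)=Aᵀ(Ax-b), the constant contraction Q, and the stepsize sequence λn) is assumed purely for illustration and is not prescribed by the theorem.

```python
import numpy as np

# Assumed toy instance: C = [0,1]^2, image-space set {b} with b = A @ x_star,
# so x_star in C solves the SFP and Fix(S) ∩ Γ = {x_star}.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
x_star = np.array([0.4, 0.6])
b = A @ x_star

P_C = lambda x: np.clip(x, 0.0, 1.0)
grad_f = lambda x: A.T @ (A @ x - b)     # ∇f for the singleton image-space set
S = lambda x: x                          # nonexpansive
Q = lambda x: np.full(2, 0.25)           # constant map: a contraction with ρ = 0

L = np.linalg.norm(A, 2) ** 2            # ||A||², the Lipschitz constant of ∇f
x = np.zeros(2)
for n in range(3000):
    beta_n, gamma_n = 1.0 / (n + 2), 0.5
    # λ_n stays in (0, 2/||A||²), has positive liminf, and |λ_{n+1}−λ_n| is summable
    lam_n = (1.0 + 0.5 / (n + 1) ** 2) / L
    y = beta_n * Q(x) + (1 - beta_n) * S(P_C(x - lam_n * grad_f(x)))
    x = (1 - gamma_n) * y + gamma_n * S(P_C(y - lam_n * grad_f(y)))
# x approaches x_star, the unique element of Fix(S) ∩ Γ in this instance
```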
Remark 3.18.
It is not hard to see that ∇f is (1/∥A∥2)-ism. Thus, Theorem 3.17 is an immediate consequence of Jung [9, Theorem 3.1].
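Indeed, writing ∇f=A*(I-PQ)A, which is our reading of the SFP objective f(x)=(1/2)∥Ax-PQAx∥2 used throughout (here PQ denotes the projection onto the image-space set of the SFP, not the contraction Q above), the ism property follows from the firm nonexpansiveness of I-PQ:

```latex
\langle \nabla f(x)-\nabla f(y),\, x-y\rangle
  = \langle (I-P_Q)Ax-(I-P_Q)Ay,\ Ax-Ay\rangle
  \ge \|(I-P_Q)Ax-(I-P_Q)Ay\|^2
  \ge \frac{1}{\|A\|^2}\,\big\|A^*\big[(I-P_Q)Ax-(I-P_Q)Ay\big]\big\|^2
  = \frac{1}{\|A\|^2}\,\|\nabla f(x)-\nabla f(y)\|^2 ,
```

where the last inequality uses ∥A*z∥≤∥A∥∥z∥.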
Acknowledgments
In this research, the first and second authors were partially supported by the National Science Foundation of China (11071169), the Innovation Program of Shanghai Municipal Education Commission (09ZZ133), and the Leading Academic Discipline Project of Shanghai Normal University (DZL707). The third author was partially supported by Grant NSC 101-2115-M-037-001.
References
[1] J. L. Lions, Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires, Dunod, Paris, France, 1969.
[2] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, New York, NY, USA, 1984.
[3] J. T. Oden, Qualitative Methods in Nonlinear Mechanics, Prentice-Hall, Englewood Cliffs, NJ, USA, 1986.
[4] E. Zeidler, Nonlinear Functional Analysis and Its Applications, Springer, New York, NY, USA, 1985.
[5] G. Marino and H. K. Xu, Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces, Journal of Mathematical Analysis and Applications, vol. 329, no. 1, pp. 336–346, 2007.
[6] W. Takahashi and M. Toyoda, Weak convergence theorems for nonexpansive mappings and monotone mappings, Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 417–428, 2003.
[7] G. M. Korpelevich, An extragradient method for finding saddle points and for other problems, Matecon, vol. 12, pp. 747–756, 1976.
[8] N. Nadezhkina and W. Takahashi, Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings, Journal of Optimization Theory and Applications, vol. 128, no. 1, pp. 191–201, 2006.
[9] J. S. Jung, A new iteration method for nonexpansive mappings and monotone mappings in Hilbert spaces, vol. 2010, Article ID 251761, 16 pages, 2010.
[10] Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numerical Algorithms, vol. 8, no. 2, pp. 221–239, 1994.
[11] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
[12] Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Physics in Medicine and Biology, vol. 51, no. 10, pp. 2353–2365, 2006.
[13] Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, The multiple-sets split feasibility problem and its applications for inverse problems, Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
[14] Y. Censor, A. Motova, and A. Segal, Perturbed projections and subgradient projections for the multiple-sets split feasibility problem, Journal of Mathematical Analysis and Applications, vol. 327, no. 2, pp. 1244–1256, 2007.
[15] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
[16] B. Qu and N. Xiu, A note on the CQ algorithm for the split feasibility problem, Inverse Problems, vol. 21, no. 5, pp. 1655–1665, 2005.
[17] H. K. Xu, A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem, Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
[18] Q. Yang, The relaxed CQ algorithm solving the split feasibility problem, Inverse Problems, vol. 20, no. 4, pp. 1261–1266, 2004.
[19] J. Zhao and Q. Yang, Several solution methods for the split feasibility problem, Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
[20] M. I. Sezan and H. Stark, Applications of convex projection theory to image recovery in tomography and related areas, in: H. Stark (Ed.), Academic Press, Orlando, Fla, USA, pp. 415–462, 1987.
[21] B. Eicke, Iteration methods for convexly constrained ill-posed problems in Hilbert spaces, Numerical Functional Analysis and Optimization, vol. 13, pp. 413–429, 1992.
[22] L. Landweber, An iterative formula for Fredholm integral equations of the first kind, American Journal of Mathematics, vol. 73, pp. 615–624, 1951.
[23] L. C. Potter and K. S. Arun, Dual approach to linear inverse problems with convex constraints, SIAM Journal on Control and Optimization, vol. 31, no. 4, pp. 1080–1092, 1993.
[24] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Modeling & Simulation, vol. 4, no. 4, pp. 1168–1200, 2005.
[25] H. K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Problems, vol. 26, no. 10, Article ID 105018, 2010.
[26] L. C. Ceng, Q. H. Ansari, and J. C. Yao, An extragradient method for solving split feasibility and fixed point problems, Computers & Mathematics with Applications, vol. 64, pp. 633–642, 2012.
[27] D. P. Bertsekas and E. M. Gafni, Projection methods for variational inequalities with applications to the traffic assignment problem, Mathematical Programming Study, vol. 17, pp. 139–159, 1982.
[28] D. Han and H. K. Lo, Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities, European Journal of Operational Research, vol. 159, no. 3, pp. 529–544, 2004.
[29] P. L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization, vol. 53, no. 5-6, pp. 475–504, 2004.
[30] M. O. Osilike, S. C. Aniagbosor, and B. G. Akuchu, Fixed points of asymptotically demicontractive mappings in arbitrary Banach space, vol. 12, pp. 77–88, 2002.
[31] K. K. Tan and H. K. Xu, Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process, Journal of Mathematical Analysis and Applications, vol. 178, no. 2, pp. 301–308, 1993.
[32] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28, Cambridge University Press, 1990.
[33] Y. Yao, Y. C. Liou, and S. M. Kang, Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method, Computers & Mathematics with Applications, vol. 59, no. 11, pp. 3472–3480, 2010.
[34] R. T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
[35] L. C. Ceng, A. Petruşel, and J. C. Yao, Relaxed extragradient methods with regularization for general system of variational inequalities with constraints of split feasibility and fixed point problems, Abstract and Applied Analysis, in press.
[36] H. K. Xu, Iterative algorithms for nonlinear operators, Journal of the London Mathematical Society, vol. 66, no. 2, pp. 240–256, 2002.