We introduce an iterative method for approximating a common solution of the split variational inclusion problem and the fixed point problem for nonexpansive semigroups in Hilbert spaces, with a rule for selecting the stepsizes that requires no prior information about the operator norm. We prove that the sequences generated by the proposed algorithm converge strongly to a common element of the set of solutions of a split variational inclusion and the set of common fixed points of a one-parameter nonexpansive semigroup. Moreover, numerical results demonstrate the performance and convergence of our algorithm, which refines and improves previously known results of several other researchers.
1. Introduction
Recently, Moudafi [1] proposed the following split monotone variational inclusion problem (SMVIP): find a point x∗∈H1 such that (1) 0∈f1(x∗)+B1(x∗) and y∗=Ax∗∈H2 solves 0∈f2(y∗)+B2(y∗), where H1 and H2 are two real Hilbert spaces with inner product 〈·,·〉 and induced norm ‖·‖, A:H1→H2 is a bounded linear operator, f1:H1→H1 and f2:H2→H2 are given single-valued mappings, and B1:H1→2^{H1} and B2:H2→2^{H2} are multivalued maximal monotone mappings.
Moudafi [1] shows that SMVIP (1) includes, as special cases, the split variational inequality problem, the split common fixed point problem, split zero problem, and split feasibility problem [1–7] which have already been studied and used in practice as a model in intensity-modulated radiation therapy treatment planning (see [5, 6]). This formalism is also at the core of modeling of many inverse problems arising for phase retrieval and other real-world problems, for instance, in sensor networks in computerized tomography and data compression [8, 9].
If f1≡0 and f2≡0, then SMVIP (1) reduces to the following split variational inclusion problem (SVIP): find a point x∗∈H1 such that (2) 0∈B1(x∗), (3) y∗=Ax∗∈H2 solves 0∈B2(y∗).
Problem (2) is the classical variational inclusion problem, and we denote its solution set by SOLVIP(B1). SVIP ((2)-(3)) couples a pair of variational inclusion problems: a solution x∗ of the variational inclusion (2) in H1 must be such that its image y∗=Ax∗ under a given bounded linear operator A solves the variational inclusion (3) in the second space H2; we denote the solution set of (3) by SOLVIP(B2). The solution set of SVIP ((2)-(3)) is denoted by U={x∗∈H1:x∗∈SOLVIP(B1) and Ax∗∈SOLVIP(B2)}.
Many works have been devoted to the split variational inclusion problem ((2)-(3)). In 2012, Byrne et al. [4] established the weak and strong convergence of the following iterative method for SVIP ((2)-(3)): for x0∈H1, compute the iterative sequence {xn} generated by the scheme (4) xn+1=JλB1(xn+γA∗(JλB2-I)Axn), where A∗ is the adjoint of A, L is the spectral radius of the operator A∗A, γ∈(0,2/L), and λ>0.
In 2014, Kazmi and Rizvi [10] established the strong convergence of the following iterative method: (5) un=JλB1(xn+γA∗(JλB2-I)Axn), xn+1=an f(xn)+(1-an)S(un), where λ>0, A∗ is the adjoint of A, L is the spectral radius of the operator A∗A, and γ∈(0,1/L). They proved that the sequence {xn} generated by (5) converges strongly to a common element of the fixed point set of the nonexpansive mapping S and the solution set of SVIP ((2)-(3)).
In 2015, Sitthithakerngkiet et al. [11] proposed the hybrid steepest descent method (6) un=JλB1(xn+γA∗(JλB2-I)Axn), xn+1=an ξ f(xn)+(I-an D)Sn(un), where {Sn} is a sequence of nonexpansive mappings, D is a strongly positive bounded linear operator, A∗ is the adjoint of A, L is the spectral radius of the operator A∗A, γ∈(0,1/L), and λ>0. They showed that the sequence {xn} converges strongly to a point z, where z=PΩ(I-D+ξf)(z) is the unique solution of the variational inequality (7) 〈(D-ξf)z,z-x〉≤0, x∈Ω.
Note that, in algorithms (4), (5), and (6) above, the choice of the stepsize γ depends on the operator (matrix) norm ‖A‖ (or on the largest eigenvalue of A∗A). Thus, in order to implement these algorithms, one first has to compute (or at least estimate) the operator norm of A, which is not easy in practice.
To overcome this difficulty, López et al. [12] and Zhao and Yang [13] presented useful methods for choosing stepsizes that do not require prior knowledge of the operator norm, for solving split feasibility problems and multiple-set split feasibility problems, respectively.
Motivated by the above results, we introduce a new choice of the stepsize sequence {τn}: (8) τn=σn‖(JλB2-I)Axn‖²/‖A∗(JλB2-I)Axn‖², where σn∈[a,b]⊂(0,1). The advantage of the choice (8) lies in the fact that no prior information about the operator norm of A is required, and convergence is still guaranteed.
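As a concrete illustration, the stepsize rule (8) takes only a few lines to implement. The following Python/NumPy sketch is our own illustration (not code from the paper), and it assumes B2 is given as a positive matrix so that its resolvent reduces to a linear solve:

```python
import numpy as np

def resolvent(B, lam, x):
    """J_lam^B x = (I + lam*B)^{-1} x for a matrix monotone operator B."""
    return np.linalg.solve(np.eye(B.shape[0]) + lam * B, x)

def stepsize(A, B2, lam, x, sigma):
    """Stepsize rule (8): tau = sigma * ||(J-I)Ax||^2 / ||A*(J-I)Ax||^2.

    No estimate of ||A|| is needed; the caller must first check that
    A*(J-I)Ax != 0, as in Algorithm 8 below.
    """
    Ax = A @ x
    r = resolvent(B2, lam, Ax) - Ax   # (J_lam^{B2} - I) A x
    g = A.T @ r                       # A^* (J_lam^{B2} - I) A x
    return sigma * np.dot(r, r) / np.dot(g, g)
```

For instance, when A is the identity one gets g = r and hence τn = σn, consistent with the lower bound 1/‖A‖² on the quotient in Remark 10.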
Following the work of Moudafi [1], Kazmi and Rizvi [10], and Byrne et al. [4], we introduce and study an iterative method for approximating a common solution of the split variational inclusion problem and the fixed point problem for nonexpansive semigroups in Hilbert spaces, with a stepsize rule that needs no prior information about the operator norm. We also prove that the sequences generated by the proposed algorithm converge strongly to a common element of the set of solutions of a split variational inclusion and the set of common fixed points of a one-parameter nonexpansive semigroup. Numerical results show that our algorithm is more suitable for SVIP ((2)-(3)) than algorithms (4) and (6).
2. Preliminaries
Throughout this paper, let C be a nonempty closed convex subset of H1. Let T:H1→H1 be a mapping. A point x∈H1 is said to be a fixed point of T provided x=Tx; we use F(T) to denote the fixed point set of T. We write xn⇀x to indicate that the sequence {xn} converges weakly to x and xn→x to indicate that {xn} converges strongly to x. We use ωw(xk)={x:∃xkj⇀x} to denote the weak ω-limit set of {xk}. For any x∈H1, there exists a unique nearest point in C, denoted by PCx, such that (9) ‖x-PCx‖≤‖x-y‖, ∀y∈C.
Before proceeding further, we need to introduce a few concepts.
A mapping T:H1→H1 is called a contraction if there exists a constant α∈(0,1) such that (10) ‖Tx-Ty‖≤α‖x-y‖, ∀x,y∈H1.
If (10) holds with α=1, then T is called nonexpansive.
A mapping T:H1→H1 is said to be firmly nonexpansive if (11) 〈Tx-Ty,x-y〉≥‖Tx-Ty‖², ∀x,y∈H1.
A one-parameter family Γ={T(t):0≤t<∞} of mappings from H1 into itself is said to be a nonexpansive semigroup if it satisfies the following conditions:
T(0)x=x, for all x∈H1.
T(s+t)=T(s)T(t), for all s,t≥0.
For each x∈H1, the mapping t↦T(t)x is continuous.
‖T(t)x-T(t)y‖≤‖x-y‖, for all x,y∈H1 and t≥0.
We denote by F(Γ) the set of all common fixed points of Γ; that is, F(Γ):={x∈H1:T(t)x=x,∀t>0}. It is well known that F(Γ) is closed and convex [14].
Now, we give an example of a nonexpansive semigroup.
Example 1.
Let H1=R and Γ={T(t):0≤t<∞}, where T(t)x=10^{-t}x. Then, Γ={T(t):0≤t<∞} is a nonexpansive semigroup. In fact,
T(0)x=10^{-0}x=x, for all x∈H1,
T(s+t)x=10^{-(s+t)}x=10^{-s}(10^{-t}x)=T(s)T(t)x, for all s,t≥0,
for each x∈H1, the mapping t↦T(t)x=10^{-t}x is continuous,
|T(t)x-T(t)y|=|10^{-t}x-10^{-t}y|=10^{-t}|x-y|≤|x-y|, for all x,y∈H1 and t≥0.
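The four axioms can also be checked numerically at sample points; the small Python snippet below is our own illustration of Example 1:

```python
# T(t)x = 10**(-t) * x from Example 1; check the semigroup axioms numerically.
def T(t, x):
    return 10.0 ** (-t) * x

x, y, s, t = 3.0, -1.5, 0.4, 1.1
assert T(0.0, x) == x                             # (i)   T(0) = I
assert abs(T(s + t, x) - T(s, T(t, x))) < 1e-12   # (ii)  T(s+t) = T(s)T(t)
assert abs(T(t, x) - T(t, y)) <= abs(x - y)       # (iv)  nonexpansiveness
```

Continuity of t↦T(t)x (axiom (iii)) is clear since t↦10^{-t} is continuous.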
A set-valued mapping Q:H1→2H1 is called monotone if, for all x,y∈H1, f∈Qx and g∈Qy imply 〈x-y,f-g〉≥0.
A monotone mapping Q:H1→2H1 is called maximal if the graph G(Q) of Q is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping Q is maximal if and only if, for (x,f)∈H1×H1, 〈x-y,f-g〉≥0 for every (y,g)∈G(Q) implies f∈Qx.
Let M:H1→2H1 be a multivalued maximal monotone mapping. Then, the resolvent mapping JλM:H1→H1 associated with M is defined by (12) JλM(x)=(I+λM)^{-1}(x), ∀x∈H1, for some λ>0, where I stands for the identity operator on H1. We note that, for all λ>0, the resolvent operator JλM is single-valued, nonexpansive, and firmly nonexpansive.
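For a concrete (hypothetical) instance, take M to be multiplication by a positive diagonal matrix; then JλM amounts to a linear solve, and the firm nonexpansiveness inequality (11) can be checked numerically. The sketch below is our own illustration, not code from the paper:

```python
import numpy as np

def resolvent(M, lam, x):
    """J_lam^M x = (I + lam*M)^{-1} x for a matrix monotone operator M."""
    return np.linalg.solve(np.eye(M.shape[0]) + lam * M, x)

# M positive (hence maximal monotone); check firm nonexpansiveness (11):
#   <Jx - Jy, x - y> >= ||Jx - Jy||^2.
M = np.diag([8.0, 2.0])
lam = 0.5
x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])
Jx, Jy = resolvent(M, lam, x), resolvent(M, lam, y)
assert np.dot(Jx - Jy, x - y) >= np.dot(Jx - Jy, Jx - Jy)
```

Here JλM is simply the diagonal map x↦(x1/(1+λm1), x2/(1+λm2)), which makes the single-valuedness and nonexpansiveness visible directly.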
The following principles play an important role in our argument.
A mapping T:H1→H1 is called demiclosed at the origin if, for each sequence {xn} that converges weakly to x such that {Txn} converges strongly to 0, we have Tx=0.
T is said to be semicompact if, for any bounded sequence {xn}⊂H1 with limn→∞‖xn-Txn‖=0, there exists a subsequence {xni}⊂{xn} such that {xni} converges strongly to some point x∗∈H1.
To establish our results, we need the following technical lemmas.
Lemma 2 (see [<xref ref-type="bibr" rid="B17">15</xref>]).
If x,y,z∈H, then
‖x+y‖²≤‖x‖²+2〈y,x+y〉;
for any λ∈[0,1], (13) ‖λx+(1-λ)y‖²=λ‖x‖²+(1-λ)‖y‖²-λ(1-λ)‖x-y‖²;
for a,b,c∈[0,1] with a+b+c=1, (14) ‖ax+by+cz‖²=a‖x‖²+b‖y‖²+c‖z‖²-ab‖x-y‖²-ac‖x-z‖²-bc‖y-z‖².
Lemma 3 (see [<xref ref-type="bibr" rid="B1">1</xref>]).
SVIP ((2)-(3)) is equivalent to finding x∗∈H1 with y∗=Ax∗∈H2 such that (15) x∗=JλB1(x∗) and y∗=JλB2(y∗), for some λ>0.
Lemma 4 (see [<xref ref-type="bibr" rid="B14">16</xref>]).
Let C be a nonempty bounded closed convex subset of a real Hilbert space H and let Γ={T(s):s≥0} be a nonexpansive semigroup on C. Then, for all h≥0, (16) limt→∞ supx∈C ‖(1/t)∫0^t T(s)x ds - T(h)((1/t)∫0^t T(s)x ds)‖=0.
Lemma 5 (see [<xref ref-type="bibr" rid="B14">16</xref>]).
Let C be a nonempty bounded closed convex subset of a real Hilbert space H, let {xn} be a sequence, and let Γ={T(s):s≥0} be a nonexpansive semigroup on C. If the following conditions are satisfied:
xn⇀z,
limsup_{s→∞} limsup_{n→∞} ‖T(s)xn-xn‖=0,
then z∈F(Γ).
Lemma 6 (see [<xref ref-type="bibr" rid="B11">17</xref>]).
Let H be a Hilbert space and let {un} be a sequence in H such that there exists a nonempty set W⊂H satisfying the following:
For every w∈W, limn→∞un-w exists.
Each weak cluster point of the sequence {un} is in W.
Then, there exists w∗∈W such that {un} weakly converges to w∗.
Lemma 7.
Let Γ={T(s):s≥0} be a nonexpansive semigroup from H1 into itself; then, for p∈F(Γ), x∈H1, and t>0, (17) ‖(1/t)∫0^t T(s)x ds - x‖² ≤ 2〈x-p, x-(1/t)∫0^t T(s)x ds〉.
Proof.
For p∈F(Γ), x∈H1, and t>0, it follows from the definition of a nonexpansive semigroup that (18) ‖(1/t)∫0^t T(s)x ds - p‖² ≤ ‖x-p‖², which implies that (19) 〈(1/t)∫0^t T(s)x ds - p, (1/t)∫0^t T(s)x ds - p〉 ≤ 〈x-p, x-p〉 = 〈x-p, x-(1/t)∫0^t T(s)x ds〉 + 〈x-p, (1/t)∫0^t T(s)x ds - p〉. Furthermore, (20) 〈(1/t)∫0^t T(s)x ds - p, (1/t)∫0^t T(s)x ds - x〉 ≤ 〈x-p, x-(1/t)∫0^t T(s)x ds〉. Then, (21) 〈(1/t)∫0^t T(s)x ds - x, (1/t)∫0^t T(s)x ds - x〉 + 〈x-p, (1/t)∫0^t T(s)x ds - x〉 ≤ 〈x-p, x-(1/t)∫0^t T(s)x ds〉. Thus, (22) 〈(1/t)∫0^t T(s)x ds - x, (1/t)∫0^t T(s)x ds - x〉 ≤ 2〈x-p, x-(1/t)∫0^t T(s)x ds〉. This completes the proof of Lemma 7.
3. Main Results
In this section, we first describe our algorithm and then reveal the convergence analysis of the algorithm.
Now, we propose our algorithm.
Algorithm 8.
Let x0∈H1 be arbitrary. Assume that xn has been constructed and A∗(JλB2-I)Axn≠0; then, compute the (n+1)th iterate via the following formula: (23) un=JλB1(xn+τnA∗(JλB2-I)Axn), xn+1=(1-an)un+an(1/tn)∫0^{tn} T(s)un ds, where the stepsize τn is chosen by (8). If A∗(JλB2-I)Axn=0, then xn+1=xn is a solution of ((2)-(3)) and the iterative process stops. Otherwise, we set n:=n+1 and return to (23).
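To make the scheme concrete, here is a minimal Python sketch of Algorithm 8 for the matrix setting of Section 4. It is our own illustration: the Cesàro mean (1/tn)∫0^{tn}T(s)un ds is approximated by a composite trapezoidal rule, and an, σn, tn are held constant at illustrative values.

```python
import numpy as np

def resolvent(B, lam, x):
    """J_lam^B x = (I + lam*B)^{-1} x for a matrix monotone operator B."""
    return np.linalg.solve(np.eye(B.shape[0]) + lam * B, x)

def cesaro_mean(T, t, u, m=200):
    """(1/t) * integral_0^t T(s)u ds via the composite trapezoidal rule."""
    s = np.linspace(0.0, t, m + 1)
    w = np.full(m + 1, t / m)
    w[0] = w[-1] = t / (2 * m)
    return sum(wi * T(si, u) for wi, si in zip(w, s)) / t

def algorithm8(x0, A, B1, B2, T, lam=0.5, a=0.5, sigma=0.8,
               t_n=1.0, tol=1e-10, max_iter=1000):
    """Sketch of iteration (23) with stepsize (8)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Ax = A @ x
        r = resolvent(B2, lam, Ax) - Ax            # (J_lam^{B2} - I) A x_n
        g = A.T @ r                                # A^* (J_lam^{B2} - I) A x_n
        if np.linalg.norm(g) <= tol:               # stopping criterion
            break
        tau = sigma * np.dot(r, r) / np.dot(g, g)  # stepsize (8)
        u = resolvent(B1, lam, x + tau * g)
        x = (1 - a) * u + a * cesaro_mean(T, t_n, u)
    return x
```

In a quick test with B1=diag(8,2), B2=diag(3,6), A=I, and the semigroup T(t)x=10^{-t}x of Example 1, the iterates converge rapidly to the zero solution.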
Remark 9.
Notice that in (8) the choice of the stepsize τn is independent of the operator norm ‖A‖.
Remark 10.
The stepsize sequence {τn} is bounded. Indeed, it follows from the condition on {τn} that (24) inf_{A∗(JλB2-I)Axn≠0} (‖(JλB2-I)Axn‖²/‖A∗(JλB2-I)Axn‖² - τn) > 0. On the other hand, since (25) ‖A∗(JλB2-I)Axn‖ ≤ ‖A‖‖(JλB2-I)Axn‖, we obtain (26) ‖(JλB2-I)Axn‖²/‖A∗(JλB2-I)Axn‖² ≥ 1/‖A‖². Thus, sup_{A∗(JλB2-I)Axn≠0} τn < ∞ and {τn} is bounded.
Next, we will discuss the convergence analysis of algorithm (23) for approximating a common solution of SVIP ((2)-(3)) and fixed point problem for nonexpansive semigroups.
Theorem 11.
Let H1 and H2 be two real Hilbert spaces and let A:H1→H2 be a bounded linear operator. Assume that B1:H1→2H1 and B2:H2→2H2 are maximal monotone mappings and that Γ:={T(s):0≤s<∞} is a one-parameter nonexpansive semigroup on H1 such that Ω=F(Γ)∩U≠∅. Let the sequence {xn} be generated by (23). If 0<liminfn→∞an<limsupn→∞an<1 and {tn}⊂(0,∞), then {xn} converges strongly to some q∈Ω.
Proof.
Taking p∈Ω, we have p=JλB1p, Ap=JλB2(Ap), and T(s)p=p. From (23) and Lemma 3, one has (27)un-p2=JλB1xn+τnA∗JλB2-IAxn-p2=JλB1xn+τnA∗JλB2-IAxn-JλB1p2≤xn+τnA∗JλB2-IAxn-p2=xn-p2+τn2A∗JλB2-IAxn2+2τnxn-p,A∗JλB2-IAxn.Notice that (28)2τnxn-p,A∗JλB2-IAxn=2τnAxn-p,JλB2-IAxn=2τnAxn-Ap+JλB2-IAxn-JλB2-IAxn,JλB2-IAxn=2τnJλB2Axn-p,JλB2-IAxn-2τnJλB2-IAxn2=τnJλB2Axn-Ap2+JλB2-IAxn2-Axn-Ap2-2τnJλB2-IAxn2≤τnJλB2-IAxn2-2τnJλB2-IAxn2=-τnJλB2-IAxn2.It follows from (28), (27), and (8) that (29)un-p2≤xn-p2+τnτnA∗JλB2-IAxn2-JλB2-IAxn2≤xn-p2.Furthermore, (23), (29), and Lemma 2 lead to(30)xn+1-p2=1-anun+an1tn∫0tnTsunds-p2=1-anun-p+an1tn∫0tnTsunds-p2=1-anun-p+an1tn∫0tnTsunds-1tn∫0tnTspds2=1-anun-p2+an1tn∫0tnTsunds-1tn∫0tnTspds2-an1-an1tn∫0tnTsunds-un2≤1-anun-p2+anun-p2-an1-an1tn∫0tnTsunds-un2=un-p2-an1-an1tn∫0tnTsunds-un2≤xn-p2-an1-an1tn∫0tnTsunds-un2≤xn-p2≤⋯≤x0-p2.Hence, {xn} is bounded and we also obtain that {un} is bounded.
On the other hand, from Lemma 7, we obtain (31)xn+1-p2=1-anun+an1tn∫0tnTsunds-p2=un-p+an1tn∫0tnTsunds-un2=un-p2+an21tn∫0tnTsunds-un2+2anun-p,1tn∫0tnTsunds-un≤un-p2+an21tn∫0tnTsunds-un2-an1tn∫0tnTsunds-un2≤xn-p2+anan-11tn∫0tnTsunds-un2,which means that (32)an1-an1tn∫0tnTsunds-un2≤xn-p2-xn+1-p2.From (30), we have that {xn-p} is nonincreasing; hence {xn-p} is convergent. Obviously,(33)limn→∞xn-p2-xn+1-p2=0.This, together with (32) and the condition on an, yields (34)limn→∞1tn∫0tnTsunds-un=0.Observe that (35)un-Tuun≤un-1tn∫0tnTsunds+1tn∫0tnTsunds-Tu1tn∫0tnTsunds+Tu1tn∫0tnTsunds-Tuun≤2un-1tn∫0tnTsunds+1tn∫0tnTsunds-Tu1tn∫0tnTsunds.It follows from (34) and Lemma 4 that (36)limn→∞un-Tuun=0.
From (29) and (30), we deduce (37)xn+1-p2=1-anun+an1tn∫0tnTsunds-p2=1-anun-p2+anun-p2-an1-an1tn∫0tnTsunds-un2=un-p2-an1-an1tn∫0tnTsunds-un2≤un-p2≤xn-p2+τnτnA∗JλB2-IAxn2-JλB2-IAxn2=xn-p2+σnσn-1JλB2-IAxn4A∗JλB2-IAxn2.It yields that (38)σn1-σnJλB2-IAxn4A∗JλB2-IAxn2≤xn-p2-xn+1-p2.Thus, (39)limn→∞JλB2-IAxn2A∗JλB2-IAxn=0.Since(40)A∗JλB2-IAxn≤AJλB2-IAxn,it is easy to show that (41)JλB2-IAxn≤AJλB2-IAxn2A∗JλB2-IAxn.Consequently, we obtain(42)limn→∞JλB2-IAxn=0.Furthermore,(43)limn→∞A∗JλB2-IAxn=0.
From (23), we have (44)un-p2=JλB1xn+τnA∗JλB2-IAxn-p2=JλB1xn+τnA∗JλB2-IAxn-JλB1p2≤un-p,xn+τnA∗JλB2-IAxn-p=12un-p2+xn+τnA∗JλB2-IAxn-p2-un-p-xn+τnA∗JλB2-IAxn-p2=12un-p2+xn+τnA∗JλB2-IAxn-p2-un-xn-τnA∗JλB2-IAxn2=12un-p2+xn-p2+τn2A∗JλB2-IAxn2+2τnxn-p,A∗JλB2-IAxn-un-xn2+2τnun-xn,A∗JλB2-IAxn-τn2A∗JλB2-IAxn2.Hence, (45)un-p2≤xn-p2+2τnxn-p,A∗JλB2-IAxn-un-xn2+2τnun-xn,A∗JλB2-IAxn,which implies that (46)un-xn2≤xn-p2-un-p2+2τnxn-pA∗JλB2-IAxn+2τnun-xnA∗JλB2-IAxn.Moreover, from (30), we have (47)un-xn2≤xn-p2-xn+1-p2+2τnxn-pA∗JλB2-IAxn+2τnun-xnA∗JλB2-IAxn.Therefore, (43) yields that (48)limn→∞un-xn=0.We compute (49)xn+1-un=1-anun+an1tn∫0tnTsunds-un=an1tn∫0tnTsunds-un. Equation (34) implies that (50)limn→∞xn+1-un=0.Consequently,(51)limn→∞xn+1-xn≤limn→∞xn+1-un+un-xn=0.Furthermore, (52)limn→∞un+1-un≤limn→∞un+1-xn+1+xn+1-un=0.
Since {xn} and {un} are bounded, we consider a weak cluster point q of {xn}. Without loss of generality, we may assume that a subsequence {xnj} of {xn} converges weakly to q. From (48), the corresponding subsequence {unj} of {un} also converges weakly to q. Furthermore, unj=JλB1(xnj+τnjA∗(JλB2-I)Axnj) can be rewritten as (53) (xnj-unj+τnjA∗(JλB2-I)Axnj)/λ∈B1(unj). Taking the limit j→∞ in (53) and taking into account (43) and (48), together with the fact that the graph of a maximal monotone operator is weakly-strongly closed, we have 0∈B1(q); that is, q∈SOLVIP(B1). Furthermore, since A is bounded and linear, {Axnj} converges weakly to Aq. Applying (43) and the fact that the resolvent JλB2 is nonexpansive, together with Lemma 3, we have 0∈B2(Aq); that is, Aq∈SOLVIP(B2). Thus, q∈U.
Next, we prove that q∈F(Γ). For the sake of contradiction, suppose that T(u)q≠q. It follows from Opial's condition and (36) that (54)limj→∞infunj-q<limj→∞infunj-Tuq≤limj→∞infunj-Tuunj+Tuunj-Tuq≤limj→∞infunj-Tuunj+unj-q≤limj→∞infunj-q.This is a contradiction, which shows that q∈F(Γ). Thus, q∈Ω. Furthermore, from Lemma 6, we deduce that xn⇀q and un⇀q with q∈Ω. Moreover, by (36) and the semicompactness of T(u), {un} admits a subsequence converging strongly to q; hence {xn} and {un} converge strongly to q. This completes the proof of Theorem 11.
Theorem 12.
Let H1 and H2 be two real Hilbert spaces and let A:H1→H2 be a bounded linear operator. Assume that B1:H1→2H1 and B2:H2→2H2 are maximal monotone mappings and let T:H1→H1 be a nonexpansive mapping such that Ω=F(T)∩U≠∅. Let the sequences {xn} be generated by (23). If 0<liminfn→∞an<limsupn→∞an<1, then, {xn} converges strongly to q∈Ω.
Proof.
The argument of Theorem 11 carries over verbatim when the Cesàro mean (1/tn)∫0^{tn}T(s)un ds in (23) is replaced by T(un); therefore, the desired conclusion follows immediately from the proof of Theorem 11. This completes the proof.
4. Numerical Examples
We now present numerical examples to demonstrate the performance and convergence of our result. In the experiments, our stopping criterion is ‖A∗(JλB2-I)Axn‖≤10^{-10}; for the algorithms of [4, 11], the stopping criterion is ‖xn+1-xn‖≤10^{-10}. ST denotes the initial point, IT the number of iterations, and SOL a solution of the test problem.
Example 13 (see [<xref ref-type="bibr" rid="B13">11</xref>]).
Let H1=H2=R², and let two operators of matrix multiplication B1:R²→R² and B2:R²→R² be defined by B1(x)=T1x and B2(x)=T2x, where T1=(8 0; 0 2) and T2=(3 0; 0 6). Observe that T1 and T2 are positive linear operators; hence, they are maximal monotone. So, we can define the resolvent mappings JλB1=(I+λB1)^{-1} and JλB2=(I+λB2)^{-1} on R² associated with B1 and B2, where λ>0. Let A∈R^{2×2} be a nonsingular matrix with random elements and let A∗ be the adjoint of A. Assume that λ=0.5, an=1/2, σn=0.8+1/10^n, and Γ={T(t):0≤t<∞}, where T(t)x=10^{-t}x for x∈R².
Example 14.
Let two operators of matrix multiplication B1:R^I→R^I and B2:R^J→R^J be defined by B1(x)=T1x and B2(y)=T2y, generated by the code (55) L=30*rand(I,1); B1=diag(L); H=20*rand(J,1); B2=diag(H). Observe that T1 and T2 are positive linear operators; hence, they are maximal monotone. So, we can define the resolvent mappings JλB1=(I+λB1)^{-1} on R^I and JλB2=(I+λB2)^{-1} on R^J associated with B1 and B2, where λ>0. Let A=(aij)J×I be a matrix operator generated by the code (56) A=10*rand(J,I)+5. We choose the initial point x1=10*rand(I,1)+20.
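The MATLAB-style data generation above translates directly to Python. The following self-contained sketch is our own illustration: it uses numpy.random in place of rand, reuses the semigroup T(s)x=10^{-s}x of Example 13 so that the Cesàro mean has the closed form (1/t)∫0^t 10^{-s}x ds = ((1-10^{-t})/(t ln 10))x, and runs the proposed iteration on one small instance.

```python
import numpy as np

def resolvent(B, lam, x):
    """J_lam^B x = (I + lam*B)^{-1} x for a matrix monotone operator B."""
    return np.linalg.solve(np.eye(B.shape[0]) + lam * B, x)

rng = np.random.default_rng(1)
I_dim, J_dim = 10, 20
B1 = np.diag(30 * rng.random(I_dim))      # L = 30*rand(I,1); B1 = diag(L)
B2 = np.diag(20 * rng.random(J_dim))      # H = 20*rand(J,1); B2 = diag(H)
A = 10 * rng.random((J_dim, I_dim)) + 5   # A = 10*rand(J,I) + 5
x = 10 * rng.random(I_dim) + 20           # x1 = 10*rand(I,1) + 20

lam, a, sigma, tn = 0.5, 0.5, 0.8, 1.0
c = (1 - 10.0 ** (-tn)) / (tn * np.log(10.0))  # Cesaro-mean factor for T(s)u = 10**(-s)*u
for n in range(500):
    Ax = A @ x
    r = resolvent(B2, lam, Ax) - Ax
    g = A.T @ r
    if np.linalg.norm(g) <= 1e-10:        # stopping criterion
        break
    tau = sigma * np.dot(r, r) / np.dot(g, g)
    u = resolvent(B1, lam, x + tau * g)
    x = (1 - a) * u + a * c * u

print(np.linalg.norm(x))   # here x* = 0 is the unique common solution
```

Since B1 and B2 are positive diagonal matrices, x∗=0 is the unique solution of SVIP ((2)-(3)) for this instance, and the iterates shrink geometrically toward it.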
Figure 1 shows that the sequence (xn,yn) converges to (0,0), with the progress per step slowing as the iterations proceed. From Table 1, we see that our method is as competitive as the methods of [4, 11], and the sequence {xn} is closer to the same point (0,0). From Table 2, as the problem size increases, our method performs better than the algorithms of [4, 11], which indicates that our method is promising for solving large-scale problems. Thus, we observe that our algorithm is more suitable for SVIP ((2)-(3)) than the algorithms of [4, 11].
Numerical comparison between our algorithm and the algorithms of [4, 11] for Example 13.

ST            | Our algorithm                 | The algorithm of [11]         | The algorithm of [4]
              | IT | SOL                      | IT | SOL                      | IT | SOL
(1,1)⊤        | 12 | 10^{-12}(0.0019,0.9722)⊤ | 21 | 10^{-10}(0.1953,0.2653)⊤ | 14 | 10^{-10}(0.0003,0.5509)⊤
(10,10)⊤      | 12 | 10^{-11}(0.0019,0.9722)⊤ | 20 | 10^{-10}(0.1554,0.1697)⊤ | 15 | 10^{-10}(0.0003,0.8953)⊤
(100,100)⊤    | 13 | 10^{-11}(0.0009,0.7899)⊤ | 21 | 10^{-10}(0.1787,0.1891)⊤ | 17 | 10^{-10}(0.0000,0.2364)⊤
(1000,1000)⊤  | 14 | 10^{-11}(0.0004,0.6418)⊤ | 22 | 10^{-10}(0.3249,0.3865)⊤ | 18 | 10^{-10}(0.0000,0.3842)⊤
Numerical comparison between our algorithm and the algorithms of [4, 11] for Example 14.

I   | J   | Our algorithm     | The algorithm of [11] | The algorithm of [4]
    |     | IT | ‖xn‖         | IT   | ‖xn‖           | IT  | ‖xn‖
10  | 20  | 13 | 5.4634e-017  | 435  | 1.0608e-011    | 104 | NaN
30  | 30  | 45 | 2.0376e-015  | 1449 | 1.5019e-011    | 84  | NaN
50  | 100 | 33 | 4.5147e-016  | 195  | NaN            | 71  | NaN
100 | 200 | 31 | 6.2001e-017  | 134  | NaN            | 64  | NaN
Behavior of xn at the initial point (100,100)⊤ with λ=0.5 for Example 13.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors thank Professor Yiju Wang for his careful reading of the paper. This project is supported by the Natural Science Foundation of China (Grant nos. 11401438, 11171180, 11171193, and 11126233), the Natural Science Foundation of Shandong Province (Grant no. ZR2013FL032), and Project of Shandong Province Higher Educational Science and Technology Program (Grant no. J14LI52).
References
[1] Moudafi, A., Split monotone variational inclusions.
[2] Censor, Y., Gibali, A., Reich, S., Algorithms for the split variational inequality problem.
[3] Moudafi, A., The split common fixed-point problem for demicontractive mappings.
[4] Byrne, C., Censor, Y., Gibali, A., Reich, S., The split common null point problem.
[5] Censor, Y., Bortfeld, T., Martin, B., Trofimov, A., A unified approach for inversion problems in intensity-modulated radiation therapy.
[6] Qu, B., Xiu, N., A note on the CQ algorithm for the split feasibility problem.
[7] Che, H., Li, M., A simultaneous iterative method for split equality problems of two finite families of strictly pseudononspreading mappings without prior knowledge of operator norms.
[8] Byrne, C., Iterative oblique projection onto convex sets and the split feasibility problem.
[9] Combettes, P. L., The convex feasibility problem in image recovery.
[10] Kazmi, K. R., Rizvi, S. H., An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping.
[11] Sitthithakerngkiet, K., Deepho, J., Kumam, P., A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems.
[12] López, G., Martín-Márquez, V., Wang, F., Xu, H.-K., Solving the split feasibility problem without prior knowledge of matrix norms.
[13] Zhao, J., Yang, Q., A simple projection method for solving the multiple-sets split feasibility problem.
[14] Browder, F. E., Convergence of approximants to fixed points of nonexpansive nonlinear mappings in Banach spaces.
[15] Chang, S.-S., Some problems and results in the study of nonlinear analysis.
[16] Tan, K., Xu, H. K., The nonlinear ergodic theorem for asymptotically nonexpansive mappings in Banach spaces.
[17] Moudafi, A., Al-Shemas, E., Simultaneous iterative methods for split equality problem.