International Journal of Mathematics and Mathematical Sciences, vol. 2021, Article ID 9980309. DOI: 10.1155/2021/9980309. Hindawi.

Research Article

Convergence Theorems for the Variational Inequality Problems and Split Feasibility Problems in Hilbert Spaces

Panisa Lohawech (1), Anchalee Kaewcharoen (1), and Ali Farajzadeh (2)

Academic Editor: Sergejs Solovjovs

(1) Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
(2) Department of Mathematics, Faculty of Science, Razi University, Kermanshah 67146, Iran

Received 20 March 2021; Revised 8 May 2021; Accepted 17 May 2021; Published 3 June 2021

Copyright © 2021 Panisa Lohawech et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, we establish an iterative algorithm by combining Yamada's hybrid steepest descent method and Wang's algorithm for finding the common solutions of variational inequality problems and split feasibility problems. The strong convergence of the sequence generated by our suggested iterative algorithm to such a common solution is proved in the setting of Hilbert spaces under suitable assumptions on the parameters. Moreover, we propose iterative algorithms for finding the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we give numerical examples illustrating our algorithms.

Funding: Naresuan University (grant no. R2564E049).
1. Introduction

In 2005, Censor et al. [1] introduced the multiple-sets split feasibility problem (MSSFP), which is formulated as follows:
(1) find $x \in \bigcap_{i=1}^{N} C_i$ such that $Ax \in \bigcap_{j=1}^{M} Q_j$,
where $C_i$ ($i = 1, 2, \ldots, N$) and $Q_j$ ($j = 1, 2, \ldots, M$) are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A: H_1 \to H_2$ is a bounded linear mapping. Denote by $\Omega$ the set of solutions of MSSFP (1). Many iterative algorithms have been developed to solve the MSSFP (see [2, 3]). Moreover, the MSSFP arises in many real-world fields, such as the inverse problem of intensity-modulated radiation therapy, image reconstruction, and signal processing (see [1, 4, 5] and the references therein).

When $N = M = 1$, the MSSFP is known as the split feasibility problem (SFP); it was first introduced by Censor and Elfving [5] and is formulated as follows:
(2) find $x \in C$ such that $Ax \in Q$.

Denote by $\Gamma$ the set of solutions of SFP (2).

Assume that the SFP is consistent (i.e., (2) has a solution). It is well known that $x \in C$ solves (2) if and only if it solves the fixed point equation
(3) $x = Tx$, $T = P_C(I - \gamma A^*(I - P_Q)A)$, $x \in C$,
where $\gamma$ is a positive constant, $A^*$ is the adjoint operator of $A$, and $P_C$ and $P_Q$ are the metric projections of $H_1$ and $H_2$ onto $C$ and $Q$, respectively (for more details, see [6]).
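In finite dimensions, characterization (3) can be tried out directly: Picard iteration of the averaged map $T$ converges to a solution of the SFP when one exists. The following is a minimal sketch, assuming a box $C$, a unit-ball $Q$, and a concrete $2 \times 2$ matrix $A$ (all illustrative choices, not data from the paper):

```python
import numpy as np

def project_box(x, lo, hi):
    # Metric projection onto the box C = [lo, hi]^n.
    return np.clip(x, lo, hi)

def project_ball(y, center, radius):
    # Metric projection onto the closed ball Q = B(center, radius).
    d = y - center
    nrm = np.linalg.norm(d)
    return y if nrm <= radius else center + radius * d / nrm

def sfp_fixed_point(A, P_C, P_Q, x0, gamma, iters=500):
    # Picard iteration of T = P_C(I - gamma * A^T (I - P_Q) A), cf. (3).
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x = P_C(x - gamma * A.T @ (Ax - P_Q(Ax)))
    return x

A = np.array([[1.0, 1.0], [0.0, 2.0]])
P_C = lambda x: project_box(x, -1.0, 1.0)
P_Q = lambda y: project_ball(y, np.zeros(2), 1.0)
gamma = 0.15  # chosen so that 0 < gamma < 1/||A||^2 (here ||A||^2 ≈ 5.24)
x_star = sfp_fixed_point(A, P_C, P_Q, np.array([2.0, 1.0]), gamma)
```

After the iteration, `x_star` lies in $C$ (the last projection enforces this exactly) and $Ax^*$ lies in $Q$ up to a small residual.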

The variational inequality problem (VIP), introduced by Stampacchia [7], is the problem of finding a point
(4) $x^* \in C$ such that $\langle Fx^*, x - x^* \rangle \ge 0$ for all $x \in C$,
where $C$ is a nonempty closed convex subset of a Hilbert space $H$ and $F: C \to H$ is a mapping. The ideas of the VIP are applied in many fields, including mechanics, nonlinear programming, game theory, and economic equilibrium (see [8–12]).

In [13], we see that $x^* \in C$ solves (4) if and only if it solves the fixed point equation
(5) $x^* = Sx^*$, $S = P_C(I - \mu F)$, $x^* \in C$.

Moreover, it is well known that if $F$ is $k$-Lipschitz continuous and $\eta$-strongly monotone, then VIP (4) has a unique solution (see, e.g., [14]).

The SFP and the VIP include several problems as special cases (see [15, 16]); indeed, the convex linear inverse problem and the split equality problem are special cases of the SFP, while the zero point problem and the minimization problem are special cases of the VIP. Jung [17] studied the common solution of the variational inequality problem and the split feasibility problem: find a point
(6) $x^* \in \Gamma$ such that $\langle Fx^*, x - x^* \rangle \ge 0$ for all $x \in \Gamma$,
where $\Gamma$ is the solution set of SFP (2) and $F: H_1 \to H_1$ is an $\eta$-strongly monotone and $k$-Lipschitz continuous mapping. After that, for solving problem (6), Buong [2] considered the following algorithms, which were proposed in [14, 18], respectively:
(7) $x_{n+1} = (I - t_n \mu F) T x_n$, $n \ge 0$,
(8) $x_{n+1} = \alpha_n x_n + (1 - \alpha_n)(I - t_n \mu F) T x_n$, $n \ge 0$,
where $T = P_C(I - \gamma A^*(I - P_Q)A)$, under the following conditions:

(C1) $t_n \in (0, 1)$, $t_n \to 0$ as $n \to \infty$, and $\sum_{n=1}^{\infty} t_n = \infty$.

(C2) $0 < \liminf_{n \to \infty} \alpha_n \le \limsup_{n \to \infty} \alpha_n < 1$.

Moreover, Buong [2] considered the sequence $\{x_n\}$ generated by the following algorithm, which converges weakly to a solution of MSSFP (1):
(9) $x_{n+1} = P_1(I - \gamma A^*(I - P_2)A) x_n$,
where $P_1 = P_{C_1} \cdots P_{C_N}$ and $P_2 = P_{Q_1} \cdots P_{Q_M}$, or $P_1 = \sum_{i=1}^{N} \alpha_i P_{C_i}$ and $P_2 = \sum_{j=1}^{M} \beta_j P_{Q_j}$, in which $\alpha_i$ and $\beta_j$, for $1 \le i \le N$ and $1 \le j \le M$, are positive real numbers such that $\sum_{i=1}^{N} \alpha_i = \sum_{j=1}^{M} \beta_j = 1$.

Motivated by the aforementioned works, we establish an iterative algorithm combining algorithms (7) and (8) for finding a solution of problem (6), and we prove the strong convergence of the sequence generated by this algorithm to the solution of problem (6) in the setting of Hilbert spaces. Moreover, we propose iterative algorithms for finding the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we give numerical examples illustrating our algorithms.

2. Preliminaries

In order to establish our results, we recall the following definitions and preliminary facts that will be used in the sequel. Throughout this section, let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$.

Definition 1.

A mapping $T: H \to H$ is called

(i) $k$-Lipschitz continuous if $\|Tx - Ty\| \le k\|x - y\|$ for all $x, y \in H$, where $k$ is a positive number;

(ii) nonexpansive if (i) holds with $k = 1$;

(iii) $\eta$-strongly monotone if $\eta\|x - y\|^2 \le \langle Tx - Ty, x - y \rangle$ for all $x, y \in H$, where $\eta$ is a positive number;

(iv) firmly nonexpansive if $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle$ for all $x, y \in H$;

(v) $\alpha$-averaged if $T = (1 - \alpha)I + \alpha N$ for some fixed $\alpha \in (0, 1)$ and a nonexpansive mapping $N$.

From [13], we know that the metric projection $P_C: H \to C$ is firmly nonexpansive and $1/2$-averaged.
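As a quick numerical sanity check of this fact, the sketch below samples random pairs and verifies the firm nonexpansiveness inequality $\|P_Cx - P_Cy\|^2 \le \langle P_Cx - P_Cy, x - y \rangle$ for the projection onto a box (an illustrative choice of $C$):

```python
import numpy as np

rng = np.random.default_rng(1)
P = lambda x: np.clip(x, -1.0, 1.0)  # metric projection onto C = [-1, 1]^3

ok = True
for _ in range(1000):
    x, y = 3 * rng.normal(size=3), 3 * rng.normal(size=3)
    # Firm nonexpansiveness: ||Px - Py||^2 <= <Px - Py, x - y>.
    lhs = np.linalg.norm(P(x) - P(y)) ** 2
    rhs = (P(x) - P(y)) @ (x - y)
    ok = ok and bool(lhs <= rhs + 1e-10)
```

Every sampled pair satisfies the inequality, as the theory predicts for any metric projection.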

We collect some basic properties of averaged mappings in the following results.

Lemma 1 (see [16]).

We have

(i) The composite of finitely many averaged mappings is averaged. In particular, if $T_i$ is $\alpha_i$-averaged, where $\alpha_i \in (0, 1)$ for $i = 1, 2$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.

(ii) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then

(10) $\mathrm{Fix}(T_1 T_2 \cdots T_N) = \bigcap_{i=1}^{N} \mathrm{Fix}(T_i)$.

Proposition 1 (see [19]).

Let $D$ be a nonempty subset of $H$, let $m \ge 2$ be an integer, and let $\phi: (0, 1)^m \to (0, 1)$ be defined by
(11) $\phi(\alpha_1, \ldots, \alpha_m) = \dfrac{1}{1 + \dfrac{1}{\sum_{i=1}^{m} \alpha_i/(1 - \alpha_i)}}$.

For every $i \in \{1, \ldots, m\}$, let $\alpha_i \in (0, 1)$ and let $T_i: D \to D$ be $\alpha_i$-averaged. Then, $T = T_1 \cdots T_m$ is $\alpha$-averaged, where $\alpha = \phi(\alpha_1, \ldots, \alpha_m)$.

The following properties of the nonexpansive mappings are very convenient and helpful to use.

Lemma 2 (see [20]).

Assume that $H_1$ and $H_2$ are Hilbert spaces. Let $A: H_1 \to H_2$ be a bounded linear mapping such that $A \ne 0$, and let $T: H_2 \to H_2$ be a nonexpansive mapping. Then, for $0 < \gamma < 1/\|A\|^2$, the mapping $I - \gamma A^*(I - T)A$ is $\gamma\|A\|^2$-averaged.

Proposition 2 (see [19]).

Let $C$ be a nonempty subset of $H$, and let $\{T_i\}_{i \in I}$ be a finite family of nonexpansive mappings from $C$ to $H$. Assume that $\{\tilde{\alpha}_i\}_{i \in I} \subset (0, 1)$ and $\{\delta_i\}_{i \in I} \subset (0, 1)$ with $\sum_{i \in I} \delta_i = 1$. Suppose that, for every $i \in I$, $T_i$ is $\tilde{\alpha}_i$-averaged; then, $T = \sum_{i \in I} \delta_i T_i$ is $\alpha$-averaged, where $\alpha = \sum_{i \in I} \delta_i \tilde{\alpha}_i$.

The following results play a crucial role in the next section.

Lemma 3 (see [14]).

Let $t$ be a real number in $(0, 1)$. Let $F: H \to H$ be an $\eta$-strongly monotone and $k$-Lipschitz continuous mapping. For each fixed $\mu \in (0, 2\eta/k^2)$, the mapping $I - t\mu F$ is contractive with constant $1 - t\tau$, i.e.,
(12) $\|(I - t\mu F)x - (I - t\mu F)y\| \le (1 - t\tau)\|x - y\|$,
where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)} \in (0, 1)$.
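Lemma 3 is easy to check numerically for a concrete mapping. Below, $F = (1 - a)I$ (the mapping used later in Example 1), for which $\eta = k = 1 - a$; the values $a = 0.5$, $\mu = 1 \in (0, 2\eta/k^2)$, and $t = 0.3$ are illustrative admissible choices:

```python
import numpy as np

a = 0.5
eta = k = 1.0 - a        # F = (1 - a) I is eta-strongly monotone, k-Lipschitz
mu, t = 1.0, 0.3         # mu in (0, 2*eta/k**2) = (0, 4); t in (0, 1)
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * eta - mu * k**2))  # constant in (12)

G = lambda x: x - t * mu * (1.0 - a) * x  # G = I - t*mu*F

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # Contraction bound (12): ||Gx - Gy|| <= (1 - t*tau) ||x - y||.
    lhs = np.linalg.norm(G(x) - G(y))
    rhs = (1.0 - t * tau) * np.linalg.norm(x - y)
    ok = ok and bool(lhs <= rhs + 1e-12)
```

Here $\tau = 1 - \sqrt{1 - 1 \cdot (1 - 0.25)} = 0.5$, and since $G = 0.85\,I$ the bound (12) holds with equality for this linear $F$.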

Theorem 1 (see [21]).

Let $F$ be a $k$-Lipschitz continuous and $\eta$-strongly monotone self-mapping of $H$. Assume that $\{T_i\}_{i=1}^{N}$ is a finite family of nonexpansive mappings from $H$ to $H$ such that $C = \bigcap_{i=1}^{N} \mathrm{Fix}(T_i) \ne \emptyset$. Then, the sequence $\{x_n\}$ defined by the following algorithm converges strongly to the unique solution $x^*$ of the variational inequality (4):
(13) $x_{n+1} = (1 - \beta_n^0) x_n + \beta_n^0 (I - t_n \mu F) T_N^n T_{N-1}^n \cdots T_1^n x_n$, $n \ge 0$,
where $\mu \in (0, 2\eta/k^2)$ and $T_i^n := (1 - \beta_n^i) I + \beta_n^i T_i$ for $i = 1, \ldots, N$, under the following conditions:

(i) $t_n \in (0, 1)$, $t_n \to 0$ as $n \to \infty$, and $\sum_{n=0}^{\infty} t_n = \infty$;

(ii) $\beta_n^i \in [\alpha, \beta]$ for some $\alpha, \beta \in (0, 1)$, and $\beta_{n+1}^i - \beta_n^i \to 0$ as $n \to \infty$, $i = 0, \ldots, N$.

Theorem 2 (see [22]).

Let $F$, $C$, $\mu$, $\{\beta_n^i\}_{i=1}^{N}$, $\{t_n\}$, and $\{T_i\}_{i=1}^{N}$ be as in Theorem 1. Then, the sequence $\{x_n\}$ defined by the algorithm
(14) $x_{n+1} = (I - t_n \mu F) T_N^n T_{N-1}^n \cdots T_1^n x_n$, $n \ge 1$,
converges strongly to the unique solution $x^*$ of variational inequality (4).

3. Main Results

In this section, we consider the following iterative algorithm, obtained by combining Yamada's hybrid steepest descent method [14] and Wang's algorithm [18], for solving problem (6):
(15) $y_n = (1 - \alpha_n) x_n + \alpha_n (I - t_n \mu F) T x_n$, $x_{n+1} = (I - t_n \mu F) T y_n$, $n \ge 1$,
where $T = P_C(I - \gamma A^*(I - P_Q)A)$. If we set $\alpha_n = 0$ for all $n$, then (15) reduces to algorithm (7) studied by Buong [2]. On the other hand, in the Numerical Example section we present an example illustrating that the two-step method (15) is more efficient than the one-step method (8) studied by Buong [2]: the sequence generated by (15) requires fewer iterations and converges faster than the sequence generated by (8).
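A direct implementation may help fix ideas. In the sketch below, the sets, the matrix, the mapping $F = 0.5I$, and the parameter sequences $t_n = 1/(n+1)$ and $\alpha_n = 0.5$ are all illustrative assumptions, chosen to satisfy (C1), (C2), $\gamma < 1/\|A\|^2$, and $\mu \in (0, 2\eta/k^2)$:

```python
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_ball(y, c, r):
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= r else c + r * d / n

def algorithm_15(A, P_C, P_Q, F, x0, gamma, mu, t, alpha, tol=1e-4, max_iter=10**6):
    # Two-step method (15):
    #   y_n     = (1 - alpha_n) x_n + alpha_n (I - t_n mu F) T x_n
    #   x_{n+1} = (I - t_n mu F) T y_n,  with T = P_C(I - gamma A^T (I - P_Q) A).
    def T(x):
        Ax = A @ x
        return P_C(x - gamma * A.T @ (Ax - P_Q(Ax)))
    x = x0.astype(float)
    for n in range(1, max_iter + 1):
        tn, an = t(n), alpha(n)
        Tx = T(x)
        yn = (1 - an) * x + an * (Tx - tn * mu * F(Tx))
        Ty = T(yn)
        x_new = Ty - tn * mu * F(Ty)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

A = np.array([[1.0, 1.0], [0.0, 2.0]])
x_star, n_iter = algorithm_15(
    A,
    lambda x: project_box(x, -1.0, 1.0),          # C = [-1, 1]^2
    lambda y: project_ball(y, np.zeros(2), 1.0),  # Q = unit ball
    lambda x: 0.5 * x,                            # F = 0.5 I (eta = k = 0.5)
    np.array([2.0, 1.0]),
    gamma=0.15,                 # 0 < gamma < 1/||A||^2 (||A||^2 ≈ 5.24 here)
    mu=1.0,                     # mu in (0, 2*eta/k^2) = (0, 4)
    t=lambda n: 1.0 / (n + 1),  # satisfies (C1)
    alpha=lambda n: 0.5,        # satisfies (C2)
)
```

For this $F$ the solution of (6) is the minimum-norm point of $\Gamma$, namely the origin, and the iterate approaches it as the stopping tolerance tightens.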

Throughout our results, unless otherwise stated, we assume that $H_1$ and $H_2$ are two real Hilbert spaces and $A: H_1 \to H_2$ is a bounded linear mapping. Let $F$ be an $\eta$-strongly monotone and $k$-Lipschitz continuous mapping on $H_1$ with positive constants $\eta$ and $k$. Assume that $\mu \in (0, 2\eta/k^2)$ is a fixed number.

Theorem 3.

Let $C$ and $Q$ be two closed convex subsets of $H_1$ and $H_2$, respectively. Then, as $n \to \infty$, the sequence $\{x_n\}$ defined by (15), where the sequences $\{t_n\}$ and $\{\alpha_n\}$ satisfy conditions (C1) and (C2), respectively, converges strongly to the solution of (6).

Proof.

From Lemma 2, we have that $I - \gamma A^*(I - P_Q)A$ is $\gamma\|A\|^2$-averaged. Since $T = P_C(I - \gamma A^*(I - P_Q)A)$, by Lemma 1 (i), we get that $T$ is $\lambda$-averaged, where $\lambda = (1 + \gamma\|A\|^2)/2$. Moreover, $z \in \Gamma$ if and only if $z \in \mathrm{Fix}(T)$. It follows from Definition 1 (v) that $T = (1 - \lambda)I + \lambda S$, where $S$ is nonexpansive. Then, iterative algorithm (15) can be rewritten as
(16) $x_{n+1} = (I - t_n \mu F) T \tilde{T} x_n$,
where $\tilde{T} = (1 - \alpha_n)I + \alpha_n (I - t_n \mu F) T$ and $T = (1 - \lambda)I + \lambda S$. Since $(1 - \lambda)I + \lambda S$ and $I - t_n \mu F$ are nonexpansive, $(I - t_n \mu F) T$ is also nonexpansive. Therefore, the strong convergence of (15) to the element $x^*$ in the solution set of (6) follows from Theorem 2.

In [23], Miao and Li showed weak convergence of the sequence $\{x_n\}$ to an element of $\mathrm{Fix}(T)$, where $\{x_n\}$ is generated by the following algorithm:
(17) $y_n = (1 - \beta_n) x_n + \beta_n (I - t_n \mu F) T x_n$, $x_{n+1} = (1 - \alpha_n) x_n + \alpha_n (I - t_n \mu F) T y_n$, $n \ge 1$,
where $\{t_n\}$ satisfies condition (C3): $\sum_{n=1}^{\infty} t_n < +\infty$. Next, we show the strong convergence of (17) when $\{t_n\}$ satisfies condition (C1).

Theorem 4.

Let $C$ and $Q$ be two closed convex subsets of $H_1$ and $H_2$, respectively. Then, as $n \to \infty$, the sequence $\{x_n\}$ defined by (17), where the sequence $\{t_n\}$ satisfies condition (C1) and $\{\beta_n\}$ and $\{\alpha_n\}$ satisfy condition (C2), converges strongly to the solution of (6).

Proof.

As in the proof of Theorem 3, one can rewrite iterative algorithm (17) as
(18) $x_{n+1} = (1 - \alpha_n) x_n + \alpha_n (I - t_n \mu F) T \tilde{T} x_n$,
where $\tilde{T} = (1 - \beta_n)I + \beta_n (I - t_n \mu F) T$ and $T = (1 - \lambda)I + \lambda S$. Since $(I - t_n \mu F) T$ is nonexpansive, the strong convergence of (17) to the element $x^*$ in the solution set of (6) follows from Theorem 1.

Moreover, we obtain the following results for finding the common solution of the variational inequality problem and the multiple-sets split feasibility problem, i.e., find a point
(19) $x^* \in \Omega$ such that $\langle Fx^*, x - x^* \rangle \ge 0$ for all $x \in \Omega$,
where $\Omega$ is the solution set of (1) and $F: H_1 \to H_1$ is an $\eta$-strongly monotone and $k$-Lipschitz continuous mapping. This problem has been studied in [17].

Theorem 5.

Let $\{C_i\}_{i=1}^{N}$ and $\{Q_j\}_{j=1}^{M}$ be two finite families of closed convex subsets of $H_1$ and $H_2$, respectively. Assume that $\gamma \in (0, 1/\|A\|^2)$, the sequences $\{t_n\}$ and $\{\alpha_n\}$ satisfy conditions (C1) and (C2), respectively, and the parameters $\delta_i$ and $\zeta_j$ satisfy the following conditions:

(a) $\delta_i > 0$ for $1 \le i \le N$ with $\sum_{i=1}^{N} \delta_i = 1$;

(b) $\zeta_j > 0$ for $1 \le j \le M$ with $\sum_{j=1}^{M} \zeta_j = 1$.

Then, as $n \to \infty$, the sequence $\{x_n\}$, defined by
(20) $y_n = (1 - \alpha_n) x_n + \alpha_n (I - t_n \mu F) P_1(I - \gamma A^*(I - P_2)A) x_n$, $x_{n+1} = (I - t_n \mu F) P_1(I - \gamma A^*(I - P_2)A) y_n$, $n \ge 1$,
with one of the following cases:

(A1) $P_1 = P_{C_1} \cdots P_{C_N}$ and $P_2 = P_{Q_1} \cdots P_{Q_M}$

(A2) $P_1 = \sum_{i=1}^{N} \delta_i P_{C_i}$ and $P_2 = \sum_{j=1}^{M} \zeta_j P_{Q_j}$

(A3) $P_1 = P_{C_1} \cdots P_{C_N}$ and $P_2 = \sum_{j=1}^{M} \zeta_j P_{Q_j}$

(A4) $P_1 = \sum_{i=1}^{N} \delta_i P_{C_i}$ and $P_2 = P_{Q_1} \cdots P_{Q_M}$,

converges strongly to the element $x^*$ in the solution set of (19).

Proof.

Let $T = P_1(I - \gamma A^*(I - P_2)A)$. We will show that $T$ is averaged.

In case (A1), $P_1 = P_{C_1} \cdots P_{C_N}$ and $P_2 = P_{Q_1} \cdots P_{Q_M}$. Since $P_{C_i}$ is $1/2$-averaged for all $i = 1, \ldots, N$, by Proposition 1, we get that $P_1$ is $\lambda_1$-averaged, where $\lambda_1 = N/(N+1)$. Similarly, $P_2$ is also averaged and hence nonexpansive. By Lemma 2, $I - \gamma A^*(I - P_2)A$ is $\lambda_2$-averaged, where $\lambda_2 = \gamma\|A\|^2$. It follows from Lemma 1 (i) that $T$ is $\lambda$-averaged with $\lambda = N/(N+1) + \gamma\|A\|^2 - (N/(N+1))\gamma\|A\|^2$.

In case (A2), $P_1 = \sum_{i=1}^{N} \delta_i P_{C_i}$ and $P_2 = \sum_{j=1}^{M} \zeta_j P_{Q_j}$; by Proposition 2 and condition (a), $P_1$ is $1/2$-averaged. From condition (b), taking into account that $P_{Q_j}$ is nonexpansive for all $j = 1, \ldots, M$, we have that $P_2$ is also nonexpansive. It follows from Lemma 2 that $I - \gamma A^*(I - P_2)A$ is $\gamma\|A\|^2$-averaged. Thus, $T$ is $\lambda$-averaged with $\lambda = (1 + \gamma\|A\|^2)/2$.

Cases (A3) and (A4) are similar. This implies that $T = (1 - \lambda)I + \lambda S$, where $S$ is nonexpansive. Moreover, by Lemma 1, we get that
(21) $\mathrm{Fix}(T) = \mathrm{Fix}(P_1) \cap \mathrm{Fix}(I - \gamma A^*(I - P_2)A) = \mathrm{Fix}(P_1) \cap A^{-1}(\mathrm{Fix}(P_2)) = \bigcap_{i=1}^{N} C_i \cap A^{-1}\Big(\bigcap_{j=1}^{M} Q_j\Big) = \Omega$.

Then, iterative algorithm (20) can be rewritten as
(22) $x_{n+1} = (I - t_n \mu F) T \tilde{T} x_n$,
where $\tilde{T} = (1 - \alpha_n)I + \alpha_n (I - t_n \mu F) T$ and $T = (1 - \lambda)I + \lambda S$. Since $(1 - \lambda)I + \lambda S$ and $I - t_n \mu F$ are nonexpansive, $(I - t_n \mu F) T$ is nonexpansive. Thus, the strong convergence of (20) to the element $x^*$ in the solution set of (19) follows from Theorem 2.
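The operators $P_1$ and $P_2$ in cases (A1)–(A4) are easy to assemble from the individual projections. The sketch below builds the product form and the convex-combination form for halfspaces of the type used in the test problem of Section 4 (the specific halfspace data and weights are illustrative assumptions):

```python
import numpy as np

def project_halfspace(x, a, b):
    # Metric projection onto the halfspace {x : <a, x> <= b}.
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def compose(projs):
    # Product operator P = P_1 P_2 ... P_N, as in cases (A1)/(A3):
    # apply P_N first and P_1 last.
    def P(x):
        for p in reversed(projs):
            x = p(x)
        return x
    return P

def convex_combination(projs, weights):
    # Convex combination P = sum_i delta_i P_i, as in cases (A2)/(A4).
    w = np.asarray(weights, dtype=float)
    assert np.all(w > 0) and abs(w.sum() - 1.0) < 1e-9
    def P(x):
        return sum(wi * p(x) for wi, p in zip(w, projs))
    return P

# Halfspaces C_i = {x : x_1/i + x_2 <= 0}, i = 1..5 (illustrative data).
projs = [lambda x, a=np.array([1.0 / i, 1.0]): project_halfspace(x, a, 0.0)
         for i in range(1, 6)]
P1_prod = compose(projs)                       # case (A1) form of P1
P1_avg = convex_combination(projs, [0.2] * 5)  # case (A2) form of P1

# Any point in every C_i is a common fixed point of both operators.
z = np.array([-1.0, -1.0])
```

Note that the product form only guarantees its output lies in the set of the last projection applied, whereas both forms fix every point of the intersection, which is what (21) exploits.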

Theorem 6.

Let $\{C_i\}_{i=1}^{N}$, $\{Q_j\}_{j=1}^{M}$, $\gamma$, $\{t_n\}$, $\{\alpha_n\}$, $\delta_i$, and $\zeta_j$ be as in Theorem 5, and let $\{\beta_n\}$ satisfy condition (C2). Then, as $n \to \infty$, the sequence $\{x_n\}$, defined by
(23) $y_n = (1 - \alpha_n) x_n + \alpha_n (I - t_n \mu F) P_1(I - \gamma A^*(I - P_2)A) x_n$, $x_{n+1} = (1 - \beta_n) x_n + \beta_n (I - t_n \mu F) P_1(I - \gamma A^*(I - P_2)A) y_n$, $n \ge 1$,
with one of the cases (A1)–(A4), converges strongly to an element in the solution set of (19).

Proof.

As in the proof of Theorem 5, one can rewrite iterative algorithm (23) as
(24) $x_{n+1} = (1 - \beta_n) x_n + \beta_n (I - t_n \mu F) T \tilde{T} x_n$,
where $\tilde{T} = (1 - \alpha_n)I + \alpha_n (I - t_n \mu F) T$ and $T = (1 - \lambda)I + \lambda S$. Since $(I - t_n \mu F) T$ is nonexpansive, the strong convergence of (23) to the element $x^*$ in the solution set of (19) follows from Theorem 1.

4. Numerical Example

In this section, we present a numerical example comparing algorithm (8), given by Buong [2], and algorithm (15) (new method) on the following test problem from [2]: find an element $x^* \in \Omega$ such that
(25) $\varphi(x^*) = \min_{x \in \Omega} \varphi(x)$, $\Omega = C \cap A^{-1}(Q)$,
where $\varphi$ is a convex function having a strongly monotone and Lipschitz continuous derivative $\varphi'$ on the Euclidean space $E^n$, $C = \bigcap_{i=1}^{N} C_i$ and $Q = \bigcap_{j=1}^{M} Q_j$, where
(26) $C_i = \{x \in E^n : \sum_{k=1}^{n} a_k^i x_k \le b_i\}$, $a_k^i, b_i \in (-\infty, +\infty)$, for $1 \le k \le n$ and $1 \le i \le N$,
(27) $Q_j = \{y \in E^m : \sum_{l=1}^{m} (y_l - a_l^j)^2 \le R_j^2\}$, $R_j > 0$, $a_l^j \in (-\infty, +\infty)$, for $1 \le l \le m$ and $1 \le j \le M$,
and $A$ is an $n \times m$ matrix.

Example 1.

We consider test problem (25) with $N = M = 1$, $n = m = 2$, $\varphi(x) = (1 - a)\|x\|^2/2$ for some fixed $a \in (0, 1)$, and
(28) $A = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}$.

So, $F := \varphi' = (1 - a)I$ is a $k$-Lipschitz continuous and $\eta$-strongly monotone mapping with $k = \eta = 1 - a$. For each algorithm, we set $a^i = (1/i, 1)$, $b_i = 0$, for all $i = 1, \ldots, N$, and $a^j = (1/j, 0)$, $R_j = 1$, for all $j = 1, \ldots, M$. Taking $a = 0.5$ and $\gamma = 0.3$, the stopping criterion is $E_n = \|x_{n+1} - x_n\| < \varepsilon$, where $\varepsilon = 10^{-4}$ and $10^{-6}$. The numerical results are listed in Table 1 for different initial points $x_1$, where $n$ is the number of iterations and $s$ is the CPU time in seconds. In Figures 1 and 2, we present graphs illustrating the number of iterations for both methods using the stopping criterion defined above with the different initial points shown in Table 1.

Table 1: Computational results for Example 1 with different methods.

                                  ε = 10^{-4}            ε = 10^{-6}
Initial point   Method            n         s            n          s
(2,1)^T         Buong method      29461     0.364595     2946204    31.362283
                New method        11784     0.241371     1178481    23.411679
(1,3)^T         Buong method      30632     0.565431     3063343    33.468210
                New method        12252     0.324808     1225336    25.570356

Figure 1: The convergence behavior of $E_n$ with the initial point $(2, 1)^T$.

Figure 2: The convergence behavior of $E_n$ with the initial point $(1, 3)^T$.

Remark 1.

From the numerical results in Table 1 and Figures 1 and 2, we see that algorithm (15) (new method) requires fewer iterations and converges faster than algorithm (8) (Buong method).
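This comparison can be re-created in a few lines. The sketch below is not the authors' code: it uses the Example 1 data but takes $\gamma = 0.15$ instead of $0.3$, so that the bound $\gamma < 1/\|A\|^2 \approx 0.19$ required by Lemma 2 holds, along with the illustrative choices $\mu = 1$, $t_n = 1/(n+1)$, and $\alpha_n = 0.5$. The absolute iteration counts therefore differ from Table 1, but the two-step method (15) still stops in fewer iterations than the one-step method (8):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 2.0]])
a_vec, b = np.array([1.0, 1.0]), 0.0   # C = {x : x1 + x2 <= 0} (i = 1)
center, R = np.array([1.0, 0.0]), 1.0  # Q = closed ball B((1,0), 1) (j = 1)
aa, gamma, mu = 0.5, 0.15, 1.0
F = lambda x: (1 - aa) * x             # F = phi' = (1 - a) I

def P_C(x):
    v = a_vec @ x - b
    return x if v <= 0 else x - (v / (a_vec @ a_vec)) * a_vec

def P_Q(y):
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= R else center + R * d / n

def T(x):
    Ax = A @ x
    return P_C(x - gamma * A.T @ (Ax - P_Q(Ax)))

def run(two_step, x, eps=1e-4, alpha=0.5, max_iter=10**5):
    for n in range(1, max_iter + 1):
        t = 1.0 / (n + 1)
        Tx = T(x)
        if two_step:                         # algorithm (15)
            y = (1 - alpha) * x + alpha * (Tx - t * mu * F(Tx))
            Ty = T(y)
            x_new = Ty - t * mu * F(Ty)
        else:                                # algorithm (8)
            x_new = alpha * x + (1 - alpha) * (Tx - t * mu * F(Tx))
        if np.linalg.norm(x_new - x) < eps:  # stop when E_n < epsilon
            return x_new, n
        x = x_new
    return x, None

x1 = np.array([2.0, 1.0])
_, n_two = run(True, x1)
_, n_one = run(False, x1)
```

The two-step scheme applies the steepest-descent step twice per iteration, which shrinks the iterate faster and crosses the stopping threshold earlier, matching the qualitative finding of Remark 1.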

Example 2.

In this example, we consider algorithm (23) for solving test problem (25) with $N = 5$ and $M = 4$. Let $\{C_i\}_{i=1}^{N}$, $\{Q_j\}_{j=1}^{M}$, $\varphi$, $a$, and $A$ be as in Example 1. In the numerical experiment, we use the stopping criterion $E_n < 10^{-4}$. The numerical results are listed in Table 2 for the different cases of $P_1$ and $P_2$. In Figures 3 and 4, we present graphs illustrating the number of iterations for all cases of $P_1$ and $P_2$, using the stopping criterion above, with the different initial points shown in Table 2. Moreover, Table 3 shows the effect of different choices of $\gamma$.

Table 2: Computational results for Example 2 with different methods.

Initial point        (A1)        (A2)        (A3)        (A4)
(2,1)^T    n         28577       24264       28577       24264
           s         1.491225    1.355074    1.534414    1.282528
(1,3)^T    n         33407       31438       33407       31438
           s         1.746868    1.693069    1.816897    1.690618

Figure 3: The convergence behavior of $E_n$ with the initial point $(2, 1)^T$.

Figure 4: The convergence behavior of $E_n$ with the initial point $(1, 3)^T$.

Table 3: Computational results for Example 2 with different $\gamma$.

                     γ = 0.1     γ = 0.2     γ = 0.3
(2,1)^T    n         9675        19200       28577
           s         0.669508    1.245136    1.666702
(1,3)^T    n         11311       22447       33407
           s         0.764536    1.372600    1.958486
Remark 2.

We observe from Table 2 that algorithm (23) converges fastest when $P_1$ and $P_2$ satisfy (A4) and slowest when they satisfy (A3). Moreover, Table 3 shows that fewer iteration steps and less CPU time are required when $\gamma$ is chosen small and close to zero.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The first author is grateful to the Science Achievement Scholarship of Thailand. The authors would like to thank the Department of Mathematics, Faculty of Science, Naresuan University (grant no. R2564E049), for the support.

References

[1] Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, "The multiple-sets split feasibility problem and its applications for inverse problems," Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
[2] N. Buong, "Iterative algorithms for the multiple-sets split feasibility problem in Hilbert spaces," Numerical Algorithms, vol. 76, no. 3, pp. 783–798, 2017.
[3] J. Zhao, D. Hou, and H. Zong, "Several iterative algorithms for solving the multiple-set split common fixed-point problem of averaged operators," Journal of Nonlinear Functional Analysis, vol. 2019, Article ID 39, 2019.
[4] C. Byrne, "Iterative oblique projection onto convex sets and the split feasibility problem," Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
[5] Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2, pp. 221–239, 1994.
[6] H. K. Xu, "Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces," Inverse Problems, vol. 26, no. 10, Article ID 105018, 2010.
[7] G. Stampacchia, "Formes bilinéaires coercitives sur les ensembles convexes," Comptes Rendus de l'Académie des Sciences Paris, vol. 258, pp. 4413–4416, 1964.
[8] L. C. Ceng, Q. H. Ansari, and J. C. Yao, "Mann-type steepest-descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces," Numerical Functional Analysis and Optimization, vol. 29, no. 9-10, pp. 987–1033, 2008.
[9] L. C. Ceng, M. Teboulle, and J. C. Yao, "Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed-point problems," Journal of Optimization Theory and Applications, vol. 146, no. 1, pp. 19–31, 2010.
[10] M. Fukushima, "A relaxed projection method for variational inequalities," Mathematical Programming, vol. 35, no. 1, pp. 58–70, 1986.
[11] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, SIAM, Philadelphia, PA, USA, 2000.
[12] H. Yang and M. G. H. Bell, "Traffic restraint, road pricing and network equilibrium," Transportation Research Part B: Methodological, vol. 31, no. 4, pp. 303–314, 1997.
[13] A. Cegielski, Iterative Methods for Fixed Point Problems in Hilbert Spaces, Springer, Berlin, Germany, 2012.
[14] I. Yamada, "The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings," in D. Butnariu, Y. Censor, and S. Reich (eds.), Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, New York, NY, USA, 2001.
[15] Y. Luo, "An inertial splitting algorithm for solving inclusion problems and its applications to compressed sensing," Journal of Applied and Numerical Optimization, vol. 2, pp. 279–295, 2020.
[16] H. K. Xu, "Averaged mappings and the gradient-projection algorithm," Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 360–378, 2011.
[17] J. S. Jung, "Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem," Journal of Nonlinear Sciences and Applications, vol. 9, no. 6, pp. 4214–4225, 2016.
[18] L. Wang, "An iterative method for nonexpansive mappings in Hilbert spaces," Fixed Point Theory and Applications, vol. 2007, Article ID 28619, 2007.
[19] P. L. Combettes and I. Yamada, "Compositions and convex combinations of averaged nonexpansive operators," Journal of Mathematical Analysis and Applications, vol. 425, no. 1, pp. 55–70, 2015.
[20] W. Takahashi, H. K. Xu, and J. C. Yao, "Iterative methods for generalized split feasibility problems in Hilbert spaces," Set-Valued and Variational Analysis, vol. 23, no. 2, pp. 205–221, 2015.
[21] N. Buong and L. T. Duong, "An explicit iterative algorithm for a class of variational inequalities," Journal of Optimization Theory and Applications, vol. 151, no. 3, pp. 513–524, 2011.
[22] H. Zhou and P. Wang, "A simpler explicit iterative algorithm for a class of variational inequalities in Hilbert spaces," Journal of Optimization Theory and Applications, vol. 161, no. 3, pp. 716–727, 2014.
[23] Y. Miao and J. Li, "Weak and strong convergence of an iterative method for nonexpansive mappings in Hilbert spaces," Applicable Analysis and Discrete Mathematics, vol. 2, no. 2, pp. 197–204, 2008.