An Iterative Method for Solving Split Monotone Variational Inclusion Problems and Finite Family of Variational Inequality Problems in Hilbert Spaces

The purpose of this paper is to study the convergence analysis of an intermixed algorithm for finding a common element of the set of solutions of the split monotone variational inclusion problem (SMVIP) and the set of solutions of a finite family of variational inequality problems. Under suitable assumptions, a strong convergence theorem is proved in the framework of a real Hilbert space. In addition, by using our result, we obtain some additional results involving split convex minimization problems (SCMPs) and split feasibility problems (SFPs). Finally, we give some numerical examples supporting our main theorem.


Introduction
Let H_1 and H_2 be real Hilbert spaces whose inner products and norms are denoted by ⟨·, ·⟩ and ‖·‖, respectively, and let C and Q be nonempty closed convex subsets of H_1 and H_2, respectively. For a mapping S: C → C, we denote by F(S) the set of fixed points of S, that is, F(S) = {x ∈ C : Sx = x}. Let A: C → H_1 be a nonlinear mapping. The variational inequality problem (VIP) is to find x* ∈ C such that

⟨Ax*, y − x*⟩ ≥ 0, ∀y ∈ C, (1)

and the solution set of problem (1) is denoted by VI(C, A). It is known that the variational inequality, as a strong and versatile tool, has been investigated for an extensive class of optimization problems in economics and equilibrium problems arising in physics and many other branches of pure and applied sciences.

Recall that a mapping A: C → H_1 is said to be α-inverse strongly monotone if there exists α > 0 such that

⟨Ax − Ay, x − y⟩ ≥ α‖Ax − Ay‖², ∀x, y ∈ C.

A multivalued mapping M: H_1 → 2^{H_1} is called monotone if, for all x, y ∈ H_1, ⟨x − y, u − v⟩ ≥ 0 for any u ∈ Mx and v ∈ My. A monotone mapping M: H_1 → 2^{H_1} is maximal if the graph G(M) of M is not properly contained in the graph of any other monotone mapping. It is well known that M is maximal if and only if, for (x, u) ∈ H_1 × H_1, ⟨x − y, u − v⟩ ≥ 0 for all (y, v) ∈ G(M) implies u ∈ Mx. Let M: H_1 → 2^{H_1} be a multivalued maximal monotone mapping. The resolvent mapping J_λ^M: H_1 → H_1 associated with M is defined by

J_λ^M(x) = (I + λM)^{−1}(x), ∀x ∈ H_1,

where I stands for the identity operator on H_1. We note that for all λ > 0, the resolvent J_λ^M is single-valued, nonexpansive, and firmly nonexpansive.
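As a concrete illustration of these properties, the following sketch (a toy example of ours, not from the paper) takes the maximal monotone operator M(x) = βx on ℝⁿ, whose resolvent J_λ^M = (I + λM)^{−1} has the closed form x ↦ x/(1 + λβ), and checks nonexpansiveness and firm nonexpansiveness numerically:

```python
import numpy as np

# Resolvent of the maximal monotone operator M(x) = beta*x on R^n:
# J_lambda^M = (I + lambda*M)^{-1}, which here is x -> x / (1 + lambda*beta).
def resolvent(x, lam, beta):
    return x / (1.0 + lam * beta)

rng = np.random.default_rng(0)
lam, beta = 0.5, 2.0
x, y = rng.normal(size=3), rng.normal(size=3)
Jx, Jy = resolvent(x, lam, beta), resolvent(y, lam, beta)

# Nonexpansiveness: ||Jx - Jy|| <= ||x - y||.
assert np.linalg.norm(Jx - Jy) <= np.linalg.norm(x - y) + 1e-12
# Firm nonexpansiveness: ||Jx - Jy||^2 <= <Jx - Jy, x - y>.
assert np.linalg.norm(Jx - Jy) ** 2 <= np.dot(Jx - Jy, x - y) + 1e-12
```
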
where c ∈ (0, 1/L) with L being the spectral radius of the operator T*T.
He obtained the following weak convergence theorem for algorithm (6).
Theorem 1 (see [1]). Let H_1, H_2 be real Hilbert spaces. Let T: H_1 → H_2 be a bounded linear operator with adjoint T*. For i = 1, 2, let A_i: H_i → H_i be α_i-inverse strongly monotone with α = min{α_1, α_2}, and let M_i: H_i → 2^{H_i} be maximal monotone operators. Then, the sequence generated by (6) converges weakly to an element x* ∈ Ω provided that Ω ≠ ∅, λ ∈ (0, 2α), and c ∈ (0, 1/L) with L being the spectral radius of the operator T*T.
On the other hand, Yao et al. [20] presented an intermixed algorithm for two strict pseudo-contractions in real Hilbert spaces. They also showed that the suggested algorithm converges strongly to the fixed points of the two strict pseudo-contractions, independently. As a special case, it can find the common fixed points of two strict pseudo-contractions in Hilbert spaces. (Recall that a mapping S: C → C is said to be κ-strictly pseudo-contractive if there exists a constant κ ∈ [0, 1) such that ‖Sx − Sy‖² ≤ ‖x − y‖² + κ‖(I − S)x − (I − S)y‖², ∀x, y ∈ C.)

Algorithm 2. For arbitrarily given x_0, y_0 ∈ C, let the sequences {x_n} and {y_n} be generated iteratively by

x_{n+1} = (1 − β_n)x_n + β_n P_C[α_n f(y_n) + (1 − k − α_n)x_n + kTx_n], n ≥ 0,
y_{n+1} = (1 − β_n)y_n + β_n P_C[α_n f(x_n) + (1 − k − α_n)y_n + kSy_n], n ≥ 0,

where {α_n} and {β_n} are two sequences of real numbers in (0, 1), and T, S: C → C are k-strictly pseudo-contractive mappings. Under some control conditions, they proved that {x_n} converges strongly to P_{F(T)}f(y*) and {y_n} converges strongly to P_{F(S)}f(x*), respectively, where x* ∈ F(T), y* ∈ F(S), and P_{F(T)} and P_{F(S)} are the metric projections of H onto F(T) and F(S), respectively. After that, many authors have developed and used this algorithm to solve fixed-point problems for many nonlinear operators in real Hilbert spaces (see, for example, [21][22][23][24][25][26][27]).

Question: can we prove a strong convergence theorem for two sequences applied to split monotone variational inclusion problems and fixed-point problems of nonlinear mappings in real Hilbert spaces?

The purpose of this paper is to modify the intermixed algorithm to answer the question above and to prove a strong convergence theorem for two sequences for finding a common element of the set of solutions of (SMVI) (4) and (5) and the set of solutions of a finite family of variational inequality problems in real Hilbert spaces.
Furthermore, by applying our main result, we obtain some additional results involving split convex minimization problems (SCMPs) and split feasibility problems (SFPs). Finally, we give some numerical examples supporting our main theorem.
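To make the intermixed coupling concrete, the following toy sketch (our illustration; the mappings T = S, the contraction f, the set C, and all parameters are hypothetical choices, not the paper's data) runs an Algorithm-2-style iteration with T = S the nonexpansive map z ↦ z/2 (a 0-strict pseudo-contraction with unique fixed point 0); both coupled sequences then tend to 0 = P_{F(T)}f(0):

```python
import numpy as np

# Toy intermixed iteration: T = S nonexpansive with F(T) = {0},
# f a 0.3-contraction, C a large closed ball, k = 1/2, beta_n = 1/2,
# alpha_n = 1/(n+3) in (0, 1-k).
def proj_C(z, r=10.0):                 # metric projection onto ball of radius r
    n = np.linalg.norm(z)
    return z if n <= r else r * z / n

f = lambda z: 0.3 * z                  # contraction
T = lambda z: 0.5 * z                  # nonexpansive, fixed point 0
k, beta = 0.5, 0.5

x = np.array([8.0, -8.0]); y = np.array([-8.0, 8.0])
for n in range(500):
    a = 1.0 / (n + 3)
    x_new = (1 - beta) * x + beta * proj_C(a * f(y) + (1 - k - a) * x + k * T(x))
    y_new = (1 - beta) * y + beta * proj_C(a * f(x) + (1 - k - a) * y + k * T(y))
    x, y = x_new, y_new

# Both sequences approach the common fixed point 0.
assert np.linalg.norm(x) < 1e-8 and np.linalg.norm(y) < 1e-8
```
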

Preliminaries
Let H be a real Hilbert space and C be a nonempty closed convex subset of H. We denote the strong convergence and the weak convergence of {x_n} to x by "x_n → x as n → ∞" and "x_n ⇀ x as n → ∞," respectively.

Definition 1. Let H be a real Hilbert space and C be a closed convex subset of H, and let S: C → C be a mapping. Then, S is said to be (i) nonexpansive if ‖Sx − Sy‖ ≤ ‖x − y‖ for all x, y ∈ C, and (ii) a ρ-contraction if there exists ρ ∈ [0, 1) such that ‖Sx − Sy‖ ≤ ρ‖x − y‖ for all x, y ∈ C.

It is well known that if S is α-inverse strongly monotone, then it is 1/α-Lipschitz continuous, and every nonexpansive mapping S is 1-Lipschitz continuous. We note that if S: H → H is a nonexpansive mapping, then it satisfies the following inequality (see Theorem 3 in [28] and Theorem 1 in [29]):

⟨(x − Sx) − (y − Sy), Sy − Sx⟩ ≤ (1/2)‖(x − Sx) − (y − Sy)‖², ∀x, y ∈ H.

Particularly, for every x ∈ H and y ∈ F(S), we have

⟨x − Sx, y − Sx⟩ ≤ (1/2)‖x − Sx‖².

For every x ∈ H, there is a unique nearest point P_C x in C such that ‖x − P_C x‖ ≤ ‖x − y‖ for all y ∈ C. Such an operator P_C is called the metric projection of H onto C.
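The nearest-point and firm-nonexpansiveness properties of P_C can be checked numerically; the sketch below (our example, not from the paper) uses the closed unit ball, where the projection has a simple closed form:

```python
import numpy as np

# Metric projection onto the closed unit ball C = {x : ||x|| <= 1}.
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(1)
x, y = 3 * rng.normal(size=4), 3 * rng.normal(size=4)
Px, Py = proj_ball(x), proj_ball(y)

# Firm nonexpansiveness: ||Px - Py||^2 <= <Px - Py, x - y>.
assert np.linalg.norm(Px - Py) ** 2 <= np.dot(Px - Py, x - y) + 1e-12

# Nearest-point property: Px is no farther from x than any sampled point of C.
for _ in range(100):
    c = proj_ball(rng.normal(size=4))      # some point of C
    assert np.linalg.norm(x - Px) <= np.linalg.norm(x - c) + 1e-12
```
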
Lemma 1 (see [30]). For a given z ∈ H and u ∈ C, u = P_C z if and only if ⟨u − z, v − u⟩ ≥ 0 for all v ∈ C. Furthermore, P_C is a firmly nonexpansive mapping of H onto C and satisfies ‖P_C x − P_C y‖² ≤ ⟨P_C x − P_C y, x − y⟩ for all x, y ∈ H. Moreover, we also have the following lemma.
Lemma 2 (see [31]). Let H be a real Hilbert space, let C be a nonempty closed convex subset of H, and let A be a mapping of C into H. Let u ∈ C. Then, for λ > 0, u ∈ VI(C, A) if and only if u = P_C(u − λAu), where P_C is the metric projection of H onto C.

Lemma 3.
Let C be a nonempty closed and convex subset of a real Hilbert space H. For every i = 1, 2, . . . , N, let A_i: C → H be α_i-inverse strongly monotone with α = min_{1≤i≤N} α_i, where 0 < a_i < 1 for all i = 1, 2, . . . , N and Σ_{i=1}^N a_i = 1. Then Σ_{i=1}^N a_i A_i is α-inverse strongly monotone. Moreover, I − λΣ_{i=1}^N a_i A_i is a nonexpansive mapping for all λ ∈ (0, 2α).

Proof. By Lemma 4.3 of [32], we have that
Let λ ∈ (0, 2α) and let x, y ∈ C. By the same argument as in the proof of Lemma 8 in [16], we have that I − λΣ_{i=1}^N a_i A_i is nonexpansive.
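A quick numerical check of this lemma (a sketch with hypothetical operators of our choosing): each A_i(x) = c_i x with c_i > 0 is (1/c_i)-inverse strongly monotone, so α = min_i(1/c_i), and I − λΣ_{i=1}^N a_i A_i should be nonexpansive for every λ ∈ (0, 2α):

```python
import numpy as np

# A_i(x) = c_i * x is (1/c_i)-inverse strongly monotone; with convex
# weights a_i, check that I - lam * sum(a_i A_i) is nonexpansive for
# every lam in (0, 2*alpha), where alpha = min_i (1/c_i).
c = np.array([1.0, 2.0, 4.0])          # c_i > 0
a = np.array([0.2, 0.3, 0.5])          # convex weights, sum = 1
alpha = (1.0 / c).min()                # = 0.25 here

rng = np.random.default_rng(2)
for lam in np.linspace(1e-3, 2 * alpha - 1e-3, 20):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Tx = x - lam * np.sum(a * c) * x   # (I - lam * sum a_i A_i) x
    Ty = y - lam * np.sum(a * c) * y
    assert np.linalg.norm(Tx - Ty) <= np.linalg.norm(x - y) + 1e-12
```
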
Next, we give an example to support Lemma 7.
□

Lemma 8 (see [34]). Let {s_n} be a sequence of nonnegative real numbers satisfying s_{n+1} ≤ (1 − α_n)s_n + δ_n for all n ≥ 0, where {α_n} is a sequence in (0, 1) and {δ_n} is a sequence in ℝ such that (i) Σ_{n=1}^∞ α_n = ∞ and (ii) lim sup_{n→∞} δ_n/α_n ≤ 0 or Σ_{n=1}^∞ |δ_n| < ∞. Then lim_{n→∞} s_n = 0.
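This lemma can be illustrated numerically (our sketch): with α_n = 1/(n + 1), so that Σα_n = ∞, and δ_n = α_n/(n + 1), so that δ_n/α_n → 0, the recursion s_{n+1} = (1 − α_n)s_n + δ_n drives s_n to 0:

```python
# Illustration of the lemma: alpha_n = 1/(n+1) (divergent sum) and
# delta_n = alpha_n/(n+1) (so delta_n/alpha_n -> 0) force s_n -> 0.
s = 5.0
for n in range(1, 200_000):
    alpha = 1.0 / (n + 1)
    delta = alpha / (n + 1)
    s = (1 - alpha) * s + delta

assert 0.0 <= s < 1e-3      # s_n has decayed essentially to 0
```
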

Main Results
In this section, we introduce an iterative algorithm for two sequences which depend on each other by using the intermixed method. Then, we prove a strong convergence theorem for solving two split monotone variational inclusion problems and a finite family of variational inequality problems.

Theorem 2.
Let H_1 and H_2 be Hilbert spaces, and let C be a nonempty closed convex subset of H_1. Let T: H_1 → H_2 be a bounded linear operator, and let f, g: H_1 → H_1 be ρ_f- and ρ_g-contraction mappings with ρ = max{ρ_f, ρ_g}. Let {x_n} and {y_n} be sequences generated by

x_{n+1} = δ_n x_n + σ_n P_C(I − μ_n^x Σ_{i=1}^N a_i^x B_i^x)x_n + η_n[α_n f(y_n) + (1 − α_n)G_x x_n],
y_{n+1} = δ_n y_n + σ_n P_C(I − μ_n^y Σ_{i=1}^N a_i^y B_i^y)y_n + η_n[α_n g(x_n) + (1 − α_n)G_y y_n],

for all n ≥ 1, where {δ_n}, {σ_n}, {η_n}, {α_n} ⊂ [0, 1] with δ_n + σ_n + η_n = 1, {a_1^x, a_2^x, . . . , a_N^x}, {a_1^y, a_2^y, . . . , a_N^y} ⊂ (0, 1), and {μ_n^x}, {μ_n^y} ⊂ (0, ∞). Assume that conditions (1)–(5) hold. Then, {x_n} converges strongly to x̄ = P_{F_x}f(ȳ) and {y_n} converges strongly to ȳ = P_{F_y}g(x̄).
Proof. We divide the proof into five steps.

Step 1. We will show that {x_n} and {y_n} are bounded. Let x* ∈ F_x and y* ∈ F_y. Then, from Lemma 7 and Lemma 6, we get

From (21), Lemma 3, and (22), we have

Similarly, from the definition of y_n, we have

Hence, from (23) and (24), we obtain

By induction, we have

for every n ∈ ℕ. Thus, {x_n} and {y_n} are bounded.
Step 2. We will show that lim_{n→∞}‖x_{n+1} − x_n‖ = lim_{n→∞}‖y_{n+1} − y_n‖ = 0. By applying Lemma 3, we get that

From the definition of x_n, (27), and (28), we have

By the same argument as in (27) and (29), we also have

From (29) and (31), we obtain that

From (32), conditions (1), (2), and (5), and Lemma 8, we obtain that

Step 3. We have that

Then, we have

Observe that

From (33) and (37), we have

By the same argument as above, we also have that

Note that

By (37) and (39), we get that

By the same argument as (41), we also obtain

By (33) and (39), we get that

However,

It follows from (33) and (45) that

Consider

From (39) and (47), we obtain

Applying the same method as (48), we also have

Step 4. We will show that lim sup_{n→∞}⟨f(ȳ) − x̄, z_n − x̄⟩ ≤ 0 and lim sup_{n→∞}⟨g(x̄) − ȳ, z_n − ȳ⟩ ≤ 0, where x̄ = P_{F_x}f(ȳ) and ȳ = P_{F_y}g(x̄). First, we take a subsequence {z_{n_k}} of {z_n} such that

lim sup_{n→∞}⟨f(ȳ) − x̄, z_n − x̄⟩ = lim_{k→∞}⟨f(ȳ) − x̄, z_{n_k} − x̄⟩.
(51) Since {x_n} is bounded, there exists a subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ q_1 as k → ∞. From (39), we get that z_{n_k} ⇀ q_1.
Next, we need to show that q_1 ∈ F_x. Assume that q_1 ∉ Ω_x. By Lemma 7, we get that q_1 ≠ G_x q_1. Applying Opial's condition and (49), we get that

This is a contradiction. Thus, q_1 ∈ Ω_x. Assume that q_1 ∉ ∩_{i=1}^N VI(C, B_i^x). Then, from Lemma 3 and Lemma 2, we have q_1 ∉ F(P_C(I − μ_n^x Σ_{i=1}^N a_i^x B_i^x)). From Opial's condition and (42), we obtain

This is a contradiction. Thus, q_1 ∈ ∩_{i=1}^N VI(C, B_i^x), and so, (54)

However, z_{n_k} ⇀ q_1. From (54) and Lemma 1, we can derive that

lim sup_{n→∞}⟨f(ȳ) − x̄, z_n − x̄⟩ = lim_{k→∞}⟨f(ȳ) − x̄, z_{n_k} − x̄⟩ ≤ 0. (55)

By the same method as (55), we also obtain that lim sup_{n→∞}⟨g(x̄) − ȳ, z_n − ȳ⟩ ≤ 0. (56)

Step 5. Finally, we show that the sequences {x_n} and {y_n} converge strongly to x̄ = P_{F_x}f(ȳ) and ȳ = P_{F_y}g(x̄), respectively. From the definition of z_n, we have

which implies that

From the definition of x_n and (58), we get (59)

Applying the same argument as in (58) and (59), we get

From (58) and (59), we have

According to conditions (2) and (4), (61), and Lemma 8, we can conclude that {x_n} and {y_n} converge strongly to x̄ = P_{F_x}f(ȳ) and ȳ = P_{F_y}g(x̄), respectively. Furthermore, from (39) and (40), we get that {z_n} and {w_n} converge strongly to x̄ and ȳ, respectively.
This completes the proof. □

One of the important special cases of the SMVIP is the split variational inclusion problem, which has a wide variety of applications, such as split minimization problems and split feasibility problems.
If we set A_i^x = 0 and A_i^y = 0 in Theorem 2, for all i = 1, 2, then we get the following strong convergence theorem for the split variational inclusion problem and finite families of variational inequality problems, assuming (∩_{i=1}^N VI(C, B_i^y)) ≠ ∅. Let {x_n} and {y_n} be sequences generated by x_1, y_1 ∈ H_1, where λ_i^x, λ_i^y ∈ (0, ∞) for all i = 1, 2 and 0 < c_x, c_y < 1/L with L being the spectral radius of T*T. Assume the following conditions hold. Then, {x_n} converges strongly to x̄ = P_{F_x}f(ȳ) and {y_n} converges strongly to ȳ = P_{F_y}g(x̄).

Applications
In this section, by applying our main result in Theorem 2, we prove strong convergence theorems for approximating solutions of split convex minimization problems and split feasibility problems.

Split Convex Minimization Problems.
Let φ: H → ℝ be a convex and differentiable function and ψ: H → (−∞, ∞] be a proper convex and lower semicontinuous function. It is well known that if ∇φ is 1/α-Lipschitz continuous, then it is α-inverse strongly monotone, where ∇φ is the gradient of φ (see [10]). It is also known that the subdifferential ∂ψ of ψ is maximal monotone (see [35]).

Next, we consider the following split convex minimization problem (SCMP): find x* ∈ H_1 that solves (64) and such that y* = Tx* ∈ H_2 solves (65), where T: H_1 → H_2 is a bounded linear operator with adjoint T*, and φ_i, ψ_i are defined as above, for i = 1, 2. We denote the set of all solutions of (64) and (65) by Θ.

If we apply Theorem 2, then we get the following strong convergence theorem for finding a common solution of the split convex minimization problems and finite families of variational inequality problems. For i = 1, 2, . . . , N, let B_i^x, B_i^y be as in Theorem 2. Let {x_n} and {y_n} be sequences generated by x_1, y_1 ∈ H_1 and

y_{n+1} = δ_n y_n + σ_n P_C(I − μ_n^y Σ_{i=1}^N a_i^y B_i^y)y_n + η_n[α_n g(x_n) + (1 − α_n)G_y y_n],

for all n ≥ 1, where {δ_n}, {σ_n}, {η_n}, {α_n} ⊂ [0, 1] with δ_n + σ_n + η_n = 1. Then, {x_n} converges strongly to x̄ = P_{F_x}f(ȳ) and {y_n} converges strongly to ȳ = P_{F_y}g(x̄).

e Split Feasibility Problem.
Let H_1 and H_2 be two real Hilbert spaces. Let C and Q be nonempty closed convex subsets of H_1 and H_2, respectively. The split feasibility problem (SFP) is to find a point x ∈ C such that Ax ∈ Q.
(67) The set of all solutions of (SFP) (67) is denoted by Γ = {x ∈ C : Ax ∈ Q}. This problem was introduced by Censor and Elfving [8] in 1994. The split feasibility problem has been investigated extensively as a widely important tool in many fields such as signal processing, intensity-modulated radiation therapy, and computed tomography (see [36][37][38] and the references therein).
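A classical method for the SFP is Byrne's CQ algorithm, x_{k+1} = P_C(x_k − γA*(I − P_Q)Ax_k) with 0 < γ < 2/L and L the spectral radius of A*A. The sketch below is a minimal toy instance of ours (a diagonal operator, C the unit ball, Q a box), not the paper's scheme:

```python
import numpy as np

# Byrne's CQ algorithm for the SFP: find x in C with Ax in Q, via
# x_{k+1} = P_C(x_k - gamma * A^T (I - P_Q) A x_k), 0 < gamma < 2/L.
# Toy data: C = unit ball in R^2, Q = box [0,1]^2, A diagonal.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
proj_C = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)
proj_Q = lambda y: np.clip(y, 0.0, 1.0)

L = np.linalg.norm(A.T @ A, 2)         # spectral radius of A^T A (= 4 here)
gamma = 1.0 / L

x = np.array([5.0, -5.0])
for _ in range(2000):
    x = proj_C(x - gamma * A.T @ (A @ x - proj_Q(A @ x)))

# The limit solves the SFP (up to tolerance): x in C and Ax in Q.
assert np.linalg.norm(x) <= 1 + 1e-8
assert np.linalg.norm(A @ x - proj_Q(A @ x)) < 1e-6
```
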
Let H be a real Hilbert space, and let h be a proper lower semicontinuous convex function of H into (−∞, +∞]. The subdifferential ∂h of h is defined by

∂h(x) = {z ∈ H : h(x) + ⟨z, y − x⟩ ≤ h(y), ∀y ∈ H}, ∀x ∈ H.

Then, ∂h is a maximal monotone operator [39]. Let C be a nonempty closed convex subset of H, and let i_C be the indicator function of C, i.e., i_C(x) = 0 if x ∈ C and i_C(x) = ∞ if x ∉ C. Then, i_C is a proper, lower semicontinuous, and convex function on H, and so the subdifferential ∂i_C of i_C is a maximal monotone operator. Hence, we can define the resolvent operator J_λ^{∂i_C} for λ > 0. Recall that the normal cone N_C(u) of C at u ∈ C is defined by

N_C(u) = {z ∈ H : ⟨z, v − u⟩ ≤ 0, ∀v ∈ C}.

We note that ∂i_C = N_C, and for λ > 0, we have that u = J_λ^{∂i_C}x if and only if u = P_C x (see [31]).
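The identity J_λ^{∂i_C} = P_C can be checked numerically: the resolvent value at x is the minimizer over C of |u − x|²/(2λ), which for C = [0, 1] ⊂ ℝ is just the clipped point, independent of λ. A small sketch of ours:

```python
import numpy as np

# For M = the subdifferential of the indicator of C = [0, 1] in R, the
# resolvent J_lambda^M x minimizes |u - x|^2 / (2*lambda) over C, i.e.
# it equals P_C x for every lambda > 0.  Compare with a brute-force grid.
C_lo, C_hi, lam = 0.0, 1.0, 0.7
grid = np.linspace(C_lo, C_hi, 100_001)              # dense sample of C

for x in [-2.3, 0.4, 1.9]:
    brute = grid[np.argmin((grid - x) ** 2 / (2 * lam))]
    proj = min(max(x, C_lo), C_hi)                   # P_C x = clip(x, 0, 1)
    assert abs(brute - proj) < 1e-4
```
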
Setting M_1 = ∂i_C, M_2 = ∂i_Q, and A_1 = A_2 = 0 in (SMVI) (4) and (5), problems (4) and (5) reduce to the split feasibility problem (SFP) (67). Now, by applying Theorem 2, we get the following strong convergence theorem to approximate a common solution of SFP (67) and a finite family of variational inequality problems.

Theorem 4.
Let H_1 and H_2 be Hilbert spaces, and let C and Q be nonempty closed convex subsets of H_1 and H_2, respectively. Let T: H_1 → H_2 be a bounded linear operator with adjoint T*, and let f, g: H_1 → H_1 be ρ_f- and ρ_g-contraction mappings with ρ = max{ρ_f, ρ_g}. For i = 1, 2, . . . , N, let B_i^x, B_i^y be as in Theorem 2. Let {x_n} and {y_n} be sequences generated by x_1, y_1 ∈ H_1 and

y_{n+1} = δ_n y_n + σ_n P_C(I − μ_n^y Σ_{i=1}^N a_i^y B_i^y)y_n + η_n[α_n g(x_n) + (1 − α_n)P_C(y_n − c_y T*(I − P_Q)Ty_n)],

for all n ≥ 1, where {δ_n}, {σ_n}, {η_n}, {α_n} ⊂ [0, 1] with δ_n + σ_n + η_n = 1, {a_1^x, . . . , a_N^x}, {a_1^y, . . . , a_N^y} ⊂ (0, 1), {μ_n^x}, {μ_n^y} ⊂ (0, ∞), λ_i^x, λ_i^y ∈ (0, ∞) for all i = 1, 2, and 0 < c_x, c_y < 1/L with L being the spectral radius of T*T. Assume the following conditions hold. Then, {x_n} converges strongly to x̄ = P_{F_x}f(ȳ) and {y_n} converges strongly to ȳ = P_{F_y}g(x̄). □

The split feasibility problem is a significant special case of the split monotone variational inclusion problem. It is extensively used to solve practical problems in numerous situations, and many excellent results have been obtained. In what follows, an example of a signal recovery problem is given.
Example 2. In signal recovery, compressed sensing can be modeled as the following underdetermined linear system:

y = Ax + δ, (69)

where x ∈ ℝ^N is a vector with m nonzero components to be recovered, y ∈ ℝ^M is the observed or measured data with noise δ, and A: ℝ^N → ℝ^M (M < N) is a bounded linear observation operator. An essential point of this problem is that the signal x is sparse; that is, the number of nonzero elements of x is much smaller than its dimension. To describe this situation, a classical convex constrained minimization model is used. It is known that problem (69) can be solved via the following LASSO problem [40]:

min_{x ∈ ℝ^N} (1/2)‖y − Ax‖₂²  subject to  ‖x‖₁ ≤ t, (70)

where t > 0 is a given constant and ‖·‖₁ is the ℓ₁ norm. In particular, LASSO problem (70) is equivalent to the split feasibility problem (SFP) (67) when C = {x ∈ ℝ^N : ‖x‖₁ ≤ t} and Q = {y}.
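For illustration (our sketch with synthetic data, not the paper's experiment), the constrained LASSO (70) can be attacked by projected gradient descent x_{k+1} = P_C(x_k − γA^T(Ax_k − y)) with C the ℓ₁ ball; the projection onto the ℓ₁ ball is computed by the standard sort-and-threshold routine:

```python
import numpy as np

def proj_l1(v, t):
    """Euclidean projection onto the l1 ball {x : ||x||_1 <= t}."""
    if np.abs(v).sum() <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                       # sorted magnitudes
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - t) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - t) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Projected gradient for  min ||Ax - y||^2 / 2  s.t.  ||x||_1 <= t,
# on a synthetic noiseless 3-sparse recovery instance.
rng = np.random.default_rng(3)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50); x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]
y = A @ x_true
t = np.abs(x_true).sum()
gamma = 1.0 / np.linalg.norm(A.T @ A, 2)               # step 1/L

x = np.zeros(50)
for _ in range(5000):
    x = proj_l1(x - gamma * A.T @ (A @ x - y), t)

assert np.abs(x).sum() <= t + 1e-8                     # iterate stays feasible
assert np.linalg.norm(A @ x - y) < 1.0                 # residual driven down
```
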
Also, it is easy to see that all parameters satisfy the conditions of Theorem 2. Then, by Theorem 2, we can conclude that the sequence {x_n} converges strongly to (1, 1) and {y_n} converges strongly to (−2, −2). Table 1 and Figure 1 show the numerical results of {x_n} and {y_n}, where x_1 = (−10, 10), y_1 = (−10, 10), and n = N = 50.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare no conflicts of interest.