A Relaxed Self-Adaptive Projection Algorithm for Solving the Multiple-Sets Split Equality Problem

In this article, we introduce a relaxed self-adaptive projection algorithm for solving the multiple-sets split equality problem. First, we transform the original problem into a constrained multiple-sets split equality problem and establish a fixed point equation system; we then show that the constrained multiple-sets split equality problem and the fixed point equation system are equivalent. Second, we present a relaxed self-adaptive projection algorithm for the fixed point equation system. The advantage of the self-adaptive step size is that it can be computed directly from the iterates, without prior knowledge of the operator norms. Furthermore, we prove the convergence of the proposed algorithm. Finally, several numerical results confirm the feasibility and efficiency of the proposed algorithm.

When t = r = 1, MSSEP (1) reduces to the split equality problem introduced by Moudafi [40]: find two points x ∈ C, y ∈ Q such that Ax = By, (6) which has applications in game theory [41] and in optimal control and approximation theory [42]. Moudafi [40] introduced the following alternating CQ algorithm (ACQ): where γ_k, β_k ∈ (ε, min{1/λ_A, 1/λ_B} − ε) for small enough ε > 0, A* and B* denote the adjoints of A and B, respectively, and λ_A and λ_B are the spectral radii of A*A and B*B, respectively. Since the projections P_C and P_Q onto general closed convex sets may be difficult to implement, Fukushima [43] suggested computing the projection onto a level set of a convex function via a sequence of projections onto half-spaces containing the original level set. Moudafi [44] then introduced the following relaxed alternating CQ algorithm (RACQ): where C_k and Q_k are two sequences of closed convex sets. Recently, Dang et al. [45] gave the following relaxed two-point projection method to solve MSSEP (1): where γ ∈ (0, min{1/2, 1/(4‖A‖²), 1/(4‖B‖²)}), C_{i,k}, i = 1, 2, ⋯, r, and Q_{j,k}, j = 1, 2, ⋯, t, are two sequences of closed convex sets corresponding to C_i and Q_j, respectively, Ω_1 ⊂ H_1 and Ω_2 ⊂ H_2 are auxiliary simple sets, and α_i > 0 for all i and β_j > 0 for all j with ∑_{i=1}^{r} α_i + ∑_{j=1}^{t} β_j = 1. Under some mild conditions, the weak convergence of algorithm (9) was obtained.
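To fix ideas, the ACQ iteration described above can be sketched in a few lines of NumPy. The matrices A, B, the nonnegative-orthant choices of C and Q, and the step size below are illustrative assumptions, not the paper's test problems.

```python
import numpy as np

def proj_nonneg(v):
    """Projection onto the nonnegative orthant (our toy C and Q)."""
    return np.maximum(v, 0.0)

def acq(A, B, x, y, gamma, iters=500):
    """Moudafi's alternating CQ iteration for: find x in C, y in Q with Ax = By."""
    for _ in range(iters):
        x = proj_nonneg(x - gamma * A.T @ (A @ x - B @ y))
        # alternating: the y-update already uses the new x
        y = proj_nonneg(y + gamma * B.T @ (A @ x - B @ y))
    return x, y

A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[2.0, 0.0], [1.0, 1.0]])
x0 = np.array([5.0, 3.0])
y0 = np.array([1.0, 4.0])
# gamma must stay below min{1/lambda_A, 1/lambda_B} (about 0.17 here), so 0.1 is safe
xs, ys = acq(A, B, x0, y0, gamma=0.1)
print(np.linalg.norm(A @ xs - B @ ys))  # residual, decays toward 0
```

Note that computing an admissible gamma already requires the spectral radii of A*A and B*B, which is precisely the drawback addressed below.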
Note that the step size γ of algorithm (9) depends on the operator (matrix) norms ‖A‖ and ‖B‖. This implies that to implement the relaxed two-point projection method (9), one first needs to compute the operator norms of A and B, which is in general not an easy task in practice. To overcome this weakness, Lopez et al. [46] and Zhao and Yang [47] introduced self-adaptive methods whose step sizes do not require prior knowledge of the operator norms. Motivated by these works, we introduce a relaxed self-adaptive projection algorithm for solving the multiple-sets split equality problem. First, we transform the original problem into the constrained multiple-sets split equality problem and establish the fixed point equation system; we show that the two formulations are equivalent. Second, based on the fixed point equation system, we present a relaxed self-adaptive projection algorithm for solving the constrained multiple-sets split equality problem and prove its convergence. Finally, several numerical results confirm the feasibility and efficiency of the proposed algorithm.
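The idea of a self-adaptive step size can be sketched as follows, in the spirit of Lopez et al. [46] and Zhao and Yang [47]: with F(x, y) = (1/2)‖Ax − By‖², take γ_k = ρ F(x_k, y_k)/‖∇F(x_k, y_k)‖², so that γ_k is computed from the current residual and no operator norms are needed. The specific formula, the relaxation parameter ρ, and the toy problem are assumptions for illustration, not the exact rule of the algorithm proposed in this paper.

```python
import numpy as np

def proj_nonneg(v):
    return np.maximum(v, 0.0)

def adaptive_step(A, B, x, y, rho=1.0):
    """Self-adaptive step: rho * F / ||grad F||^2, with F = 0.5*||Ax - By||^2."""
    w = A @ x - B @ y                                   # current residual
    g = np.linalg.norm(A.T @ w) ** 2 + np.linalg.norm(B.T @ w) ** 2
    if g == 0.0:                                        # already a solution
        return 0.0
    return 0.5 * rho * np.linalg.norm(w) ** 2 / g

A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[2.0, 0.0], [1.0, 1.0]])
x = np.array([5.0, 3.0])
y = np.array([1.0, 4.0])
for _ in range(500):
    gamma = adaptive_step(A, B, x, y)
    w = A @ x - B @ y
    x = proj_nonneg(x - gamma * A.T @ w)                # simultaneous update
    y = proj_nonneg(y + gamma * B.T @ w)
print(np.linalg.norm(A @ x - B @ y))
```

No eigenvalue or norm computation of A or B appears anywhere in the loop; this is the practical advantage exploited by the algorithm of this paper.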
The remainder of this paper is organized as follows. Section 2 gives some preliminaries and notation used in the subsequent analysis. In Section 3, we transform the original problem into the constrained multiple-sets split equality problem, establish the fixed point equation system, propose a relaxed self-adaptive projection algorithm for solving the constrained multiple-sets split equality problem, and prove its convergence. In Section 4, several numerical results confirm the effectiveness of our algorithm.

Preliminaries
Throughout this paper, we use ⟶ and ⇀ to denote strong convergence and weak convergence, respectively. We write ω_w(x_k) = {x : ∃ x_{k_j} ⇀ x} to denote the weak ω-limit set of {x_k}. For any x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that ‖x − P_C x‖ ≤ ‖x − z‖ for all z ∈ C. It is well known that P_C is firmly nonexpansive, and hence nonexpansive. Moreover, P_C has the following well-known properties (see, for example, [48]).
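These properties are easy to check numerically. The sketch below (not part of the paper) verifies firm nonexpansiveness, ‖P_C x − P_C y‖² ≤ ⟨P_C x − P_C y, x − y⟩, for the closed unit ball, whose projection has a simple closed form.

```python
import numpy as np

def proj_ball(v):
    """Projection onto the closed unit ball C = {z : ||z|| <= 1}."""
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    # firm nonexpansiveness: ||Px - Py||^2 <= <Px - Py, x - y>
    assert np.dot(px - py, px - py) <= np.dot(px - py, x - y) + 1e-12
    # hence nonexpansiveness: ||Px - Py|| <= ||x - y||
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
print("firm nonexpansiveness verified on 100 random pairs")
```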

Lemma 1.
Let C ⊂ H be nonempty, closed, and convex. Then, for all x, y ∈ H and z ∈ C,
⟨x − P_C x, z − P_C x⟩ ≤ 0,
‖P_C x − z‖² ≤ ‖x − z‖² − ‖x − P_C x‖².

Recall that the subdifferential of a convex function f : H ⟶ R at x is the set ∂f(x) = {ξ ∈ H : f(y) ≥ f(x) + ⟨ξ, y − x⟩ for all y ∈ H}. An element of ∂f(x) is said to be a subgradient.
Lemma 3. Suppose f : H ⟶ R is a convex function. Then f is subdifferentiable everywhere, and its subdifferentials are uniformly bounded on bounded subsets of H.
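As a concrete illustration (not from the paper), for c(x) = ‖x‖₁ on H = R^n every subgradient has entries in [−1, 1], so the subgradients are uniformly bounded by √n on all of H, and the subgradient inequality from the definition above holds.

```python
import numpy as np

def subgrad_l1(x):
    """One subgradient of the l1 norm at x (the sign vector; 0 -> 0 is valid)."""
    return np.sign(x)

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.normal(size=5) * 100.0       # points of arbitrary magnitude
    g = subgrad_l1(x)
    y = rng.normal(size=5)
    # subgradient inequality: ||y||_1 >= ||x||_1 + <g, y - x>
    assert np.sum(np.abs(y)) >= np.sum(np.abs(x)) + np.dot(g, y - x) - 1e-9
    # uniform bound, independent of x
    assert np.linalg.norm(g) <= np.sqrt(5) + 1e-12
print("subgradient inequality and uniform bound verified")
```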

Algorithm and Its Convergence
In this section, we present a relaxed self-adaptive projection algorithm and establish its convergence. Following the idea of Censor et al. [39], we introduce two additional closed convex sets Ω_1 ⊂ H_1 and Ω_2 ⊂ H_2 and consider the constrained multiple-sets split equality problem, where the sets C_i and Q_j are level sets of convex functions, C_i = {x ∈ H_1 : c_i(x) ≤ 0} and Q_j = {y ∈ H_2 : q_j(y) ≤ 0}, with c_i : H_1 ⟶ R and q_j : H_2 ⟶ R convex for all i = 1, 2, ⋯, t and j = 1, 2, ⋯, r, and Γ denotes the solution set of (32). At the k-th iteration, define the half-spaces C_{i,k} = {x ∈ H_1 : c_i(x_k) + ⟨ξ_{i,k}, x − x_k⟩ ≤ 0}, where ξ_{i,k} ∈ ∂c_i(x_k), and Q_{j,k} = {y ∈ H_2 : q_j(y_k) + ⟨η_{j,k}, y − y_k⟩ ≤ 0}, where η_{j,k} ∈ ∂q_j(y_k). By the definition of the subgradient, it is easily seen that C_i ⊂ C_{i,k} and Q_j ⊂ Q_{j,k} for all k. Since C_{i,k} and Q_{j,k} are half-spaces, the corresponding projections have closed-form expressions. Hence, we focus on the following constrained multiple-sets split equality problem (CMSSEP): Now, we define the proximity function p_k(x, y) = (1/2) ∑_{i=1}^{t} α_i ‖x − P_{C_{i,k}} x‖² + (1/2) ∑_{j=1}^{r} β_j ‖y − P_{Q_{j,k}} y‖² + (1/2) ‖Ax − By‖², where α_i > 0 for all i and β_j > 0 for all j are given weights. Using the proximity function p_k(x, y), we can obtain the following technical lemmas.
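The closed-form projection onto a half-space, which makes the relaxation practical, can be sketched as follows: for H = {z : ⟨a, z⟩ ≤ b} with a ≠ 0, P_H(x) = x − max{0, ⟨a, x⟩ − b}/‖a‖² · a. The vectors below are illustrative.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto the half-space {z : <a, z> <= b} (assumes a != 0)."""
    viol = np.dot(a, x) - b
    if viol <= 0.0:
        return x.copy()                  # x already satisfies the constraint
    return x - (viol / np.dot(a, a)) * a # step along a onto the boundary

a = np.array([1.0, 1.0])
b = 1.0
x = np.array([2.0, 2.0])
p = proj_halfspace(x, a, b)
print(p)  # lands on the boundary, where <a, p> = b
```

In the algorithm, a = ξ_{i,k} and b = ⟨ξ_{i,k}, x_k⟩ − c_i(x_k) recover the projection onto C_{i,k}, so each iteration needs only inner products, never an iterative subproblem.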

Lemma 4. Assume that CMSSEP (16) is consistent (i.e., (16) has a solution) and denote its solution set by Γ. If (x*, y*) ∈ Γ, then it solves the fixed point equation system (18).
Proof. To solve problem (16), we consider the minimization problem (19), which leads to the following unconstrained optimization problem: where δ_{Ω_i} is the indicator function of Ω_i for i = 1, 2, defined by Note that ∂δ_{Ω_1}(x) = N_{Ω_1}(x) and ∂δ_{Ω_2}(y) = N_{Ω_2}(y), where N_{Ω_1} and N_{Ω_2} are the normal cones of the convex sets Ω_1 and Ω_2, respectively. From the optimality conditions of (20), it follows that which means that, for λ > 0, β > 0, that is,
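The projection characterization behind this argument is standard: x* minimizes a smooth convex f over a closed convex set Ω if and only if x* = P_Ω(x* − λ∇f(x*)) for any λ > 0. The toy problem below (f(x) = ‖x − c‖²/2 over the nonnegative orthant, with c chosen by us) illustrates this fixed point property numerically.

```python
import numpy as np

def proj_nonneg(v):
    return np.maximum(v, 0.0)

c = np.array([1.0, -2.0])
x_star = proj_nonneg(c)                  # minimizer of f over the orthant
lam = 0.7                                # any positive step length works
grad = x_star - c                        # grad f(x*) = x* - c
fp = proj_nonneg(x_star - lam * grad)
print(np.allclose(fp, x_star))           # True: x* is a fixed point
```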
The following lemma reveals that CMSSEP (16) is equivalent to the fixed point equation system (18): (x*, y*) solves CMSSEP (16) if and only if (x*, y*) solves the fixed point equation system (18).
Proof. Taking (x*, y*) ∈ Γ, one has From (30) and the fact that the projection is nonexpansive, we have Since together with (33), we deduce Similarly, we have From (35) and (36), it follows that which together with (31) means Furthermore, it follows from (31) and (38) that By induction, one has Hence, {x_k} and {y_k} are bounded. Following (31), (36), and (39), we have

Journal of Function Spaces
Without loss of generality, we can assume that there is σ > 0 such that 4(1 − γ_k)σ_k(1 − σ_k) > σ for all k. Setting s_k = ‖x_k − x*‖² + ‖y_k − y*‖², together with (41), we have the following inequality Since s_k is eventually decreasing, we obtain that s_k is convergent. From (42), we have lim_{k→∞} p_k(x_k, y_k) = 0. Furthermore, Furthermore, which with (41), (45), and the assumption on γ_k means Note that we have (47) and (49) imply Similarly, we have Thus, {x_k} and {y_k} are asymptotically regular. Notice that which implies that Moreover, it follows from (22) that which with (43), (45), and the assumption on γ_k yields Similarly, one has Let x̄ and ȳ be weak cluster points of the sequences {x_k} and {y_k}; then there exist two subsequences of {x_k} and {y_k} (again labeled {x_k} and {y_k}) which converge weakly to x̄ and ȳ. Next, we show that (x̄, ȳ) ∈ Γ. It follows from (30) that Since the graphs of the maximal monotone operators N_{Ω_1} and N_{Ω_2} are weakly-strongly closed, passing to the limit in the last inclusions yields x̄ ∈ Ω_1 and ȳ ∈ Ω_2.
On the other hand, from Lemma 1 and the definition of C_{i,k}, one has where M_1 satisfies ‖ξ_{i,k}‖ ≤ M_1 for all k. The lower semicontinuity of the function c_i(x) and (41) assert that Thus, x̄ ∈ C_i for i = 1, 2, ⋯, t. Likewise, we can obtain where M_2 satisfies ‖η_{j,k}‖ ≤ M_2 for all k. The lower semicontinuity of the function q_j(y) and (42) lead to Thus, ȳ ∈ Q_j for j = 1, 2, ⋯, r. Moreover, the weak convergence of Ax_k − By_k to Ax̄ − Bȳ and the weak lower semicontinuity of the squared norm imply hence, (x̄, ȳ) ∈ Γ. This completes the proof.
From Tables 1-3, we can see that the iteration number and CPU time of Algorithm 6 are less than those of algorithm RTPPM. Figures 1-4 indicate that Algorithm 6 is more stable than RTPPM.
Furthermore, to test the stability of the iteration number with respect to the initial point, we carry out 500 experiments with randomly generated initial points; for Example 9, the results can be found in Figure 1. For another initial point, such as x_1 = rand(3, 1) * 10, y_1 = rand(4, 1) * 10, (66) in Example 9, the results can be found in Figure 2.
Similarly, we carry out 500 experiments with randomly generated initial points for Example 10; the results can be found in Figure 3. For another initial point in Example 10, the results can be found in Figure 4.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.