An Extrapolated Iterative Algorithm for Multiple-Set Split Feasibility Problem

The multiple-set split feasibility problem (MSSFP), as a generalization of the split feasibility problem, is to find a point in the intersection of a family of closed convex sets in one space such that its image under a linear transformation lies in the intersection of another family of closed convex sets in the image space. Censor et al. (2005) proposed a method for solving the MSSFP whose efficiency depends heavily on the step size, a fixed constant related to the Lipschitz constant of ∇p(x), and which may therefore be slow. In this paper, we present an accelerated algorithm, obtained by introducing an extrapolation factor, to solve the multiple-set split feasibility problem. The framework encompasses the algorithm presented by Censor et al. (2005). The convergence of the method is investigated, and numerical experiments are provided to illustrate the benefits of the extrapolation.


Introduction
The multiple-set split feasibility problem (MSSFP) is to find a point

x* ∈ C := ∩_{i=1}^t C_i such that Ax* ∈ Q := ∩_{j=1}^r Q_j,  (1.1)

where t and r are positive integers, C_i ⊂ R^N, i = 1, ..., t, and Q_j ⊂ R^M, j = 1, ..., r, are closed convex sets, and A is an M × N real matrix. When t = r = 1, the problem reduces to finding a point x ∈ C with Ax ∈ Q, which is just the two-set split feasibility problem (SFP, for short). The SFP was originally introduced in [1], allowing for constraints in both the domain and the range of a linear operator. Many methods have been developed for solving the SFP, for example, the basic CQ algorithm proposed by Byrne [2], the relaxed CQ algorithm presented by Yang [3], and the KM-CQ-like algorithm developed by Dang and Gao [4]. The MSSFP, formulated in [5], arises in the field of intensity-modulated radiation therapy when one attempts to describe physical dose constraints and equivalent uniform dose (EUD) constraints within a single model; see [6]. Censor et al. generalized the CQ algorithm [2] to solve the MSSFP [5], obtaining the iterative process

x^{k+1} = x^k + (s/L) (∑_{i=1}^t α_i (P_{C_i}(x^k) − x^k) + ∑_{j=1}^r β_j A^T (P_{Q_j}(Ax^k) − Ax^k)),  (1.2)

where 0 < s < 2, L = ∑_{i=1}^t α_i + ρ(A^T A) ∑_{j=1}^r β_j, ρ(A^T A) is the spectral radius of A^T A, and α_i > 0, β_j > 0 for all i and j with ∑_{i=1}^t α_i + ∑_{j=1}^r β_j = 1. Here P_S denotes the projection onto the convex set S, that is,

P_S(x) = arg min_{y∈S} ‖x − y‖.  (1.3)

Other algorithms for solving the MSSFP have also appeared: Xu [7] and Masad and Reich [8] introduced strongly convergent methods in infinite-dimensional Hilbert spaces, and Censor et al.
in [9] presented perturbed projection and simultaneous subgradient projection algorithms to deal with the difficulty of computing orthogonal projections accurately; Censor and Segal proposed a string-averaging algorithmic scheme for the sparse case in [10] and employed a product-space formulation to derive and analyze a simultaneous algorithm for the MSSFP in [11]. However, all of the above algorithms use a fixed step size related to the largest eigenvalue of the matrix A^T A, which sometimes limits the convergence speed. The extrapolated iterative method was first proposed in [12]; it is an acceleration technique in optimization, since Pierra observed that the extrapolation parameter can be much larger than 1 and that the sequence generated by the extrapolated method converges quickly. Subsequently, Heinz et al. [13] proposed a general parallel block-iterative algorithmic framework that introduces extrapolated overrelaxations to solve affine-convex feasibility problems; the corresponding numerical results also show fast convergence.
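For concreteness, one sweep of iteration (1.2) can be sketched in a few lines. The NumPy helpers below are our own illustrative choices, not part of the original formulation: the sets C_i and Q_j are taken to be boxes so that the projections are explicit.

```python
import numpy as np

def proj_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n -- a simple closed convex set.
    return np.clip(x, lo, hi)

def censor_step(x, A, C_boxes, Q_boxes, alphas, betas, s):
    """One iteration of the fixed-step projection method (1.2):
    x <- x + (s/L) * ( sum_i alpha_i (P_Ci(x) - x)
                       + sum_j beta_j A^T (P_Qj(Ax) - Ax) ),
    with L = sum_i alpha_i + rho(A^T A) * sum_j beta_j."""
    Ax = A @ x
    d = sum(a * (proj_box(x, lo, hi) - x) for a, (lo, hi) in zip(alphas, C_boxes))
    d = d + A.T @ sum(b * (proj_box(Ax, lo, hi) - Ax) for b, (lo, hi) in zip(betas, Q_boxes))
    L = sum(alphas) + np.linalg.eigvalsh(A.T @ A).max() * sum(betas)
    return x + (s / L) * d
```

Repeating `censor_step` from an arbitrary x^0 drives x toward C and Ax toward Q when the MSSFP is consistent.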
Motivated by the extrapolated method for solving affine-convex feasibility problems, in this paper we present an extrapolated iterative method for the MSSFP. As will be shown, our algorithm extends the method of Censor et al. [5] and includes it as a special case.
The paper is organized as follows. Section 2 reviews some preliminaries. Section 3 gives the extrapolated algorithm and proves its convergence. Section 4 provides some numerical experiments.

Preliminaries
Under normal circumstances, the MSSFP treats both the feasible and the infeasible cases through a proximity function: if the MSSFP is consistent, then unconstrained minimization of the proximity function attains the value 0; in the inconsistent case, it finds a point that violates feasibility least by being "closest" to all the sets, as "measured" by the proximity function. The minimization problem is

min_{x ∈ R^N} p(x).  (2.1)
In general, the projections onto the intersections C and Q are difficult to implement, even if each individual set C_i or Q_j has a simple or special structure making the projection onto it easy to compute. In practical applications, the projections onto the individual sets C_i are much more easily calculated than the projection onto the intersection C. For this purpose, Censor et al. [5] introduced the proximity function p(x) to measure the distance of a point to all sets:

p(x) = (1/2) ∑_{i=1}^t α_i ‖x − P_{C_i}(x)‖² + (1/2) ∑_{j=1}^r β_j ‖Ax − P_{Q_j}(Ax)‖²,  (2.2)

where α_i > 0, β_j > 0 for all i and j with ∑_{i=1}^t α_i + ∑_{j=1}^r β_j = 1. Then

∇p(x) = ∑_{i=1}^t α_i (x − P_{C_i}(x)) + ∑_{j=1}^r β_j A^T (Ax − P_{Q_j}(Ax)).  (2.3)

Hence, (1.2) can be rewritten as

x^{k+1} = x^k − (s/L) ∇p(x^k).  (2.4)

The following lemma collects well-known properties of orthogonal projections.
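In code, p(x) and ∇p(x) from (2.2) and (2.3) are direct to evaluate. The sketch below (our own helper names, with the individual projections passed as callables) also checks the analytic gradient against central finite differences on box sets of our choosing.

```python
import numpy as np

def proximity_and_grad(x, A, projs_C, projs_Q, alphas, betas):
    """Evaluate the proximity function (2.2) and its gradient (2.3):
    p(x)      = 1/2 sum_i alpha_i ||x - P_Ci(x)||^2 + 1/2 sum_j beta_j ||Ax - P_Qj(Ax)||^2,
    grad p(x) = sum_i alpha_i (x - P_Ci(x)) + A^T sum_j beta_j (Ax - P_Qj(Ax))."""
    Ax = A @ x
    p, g, gq = 0.0, np.zeros_like(x), np.zeros_like(Ax)
    for a, P in zip(alphas, projs_C):
        r = x - P(x)           # residual against C_i
        p += 0.5 * a * (r @ r)
        g += a * r
    for b, P in zip(betas, projs_Q):
        r = Ax - P(Ax)         # residual against Q_j, measured in the image space
        p += 0.5 * b * (r @ r)
        gq += b * r
    return p, g + A.T @ gq
```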
Lemma 2.1 (see [14]). Let S be a nonempty closed convex subset of R^N. For any x, y ∈ R^N and any z ∈ S, the following properties hold:
(1) ⟨x − P_S(x), z − P_S(x)⟩ ≤ 0;
(2) ‖P_S(x) − z‖² ≤ ‖x − z‖² − ‖P_S(x) − x‖²;
(3) ⟨P_S(x) − P_S(y), x − y⟩ ≥ ‖P_S(x) − P_S(y)‖².
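These properties can be spot-checked numerically; the box S = [−1, 1]^5 and the random test points below are illustrative choices of ours.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    # Projection onto the box S = [lo, hi]^n.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
for _ in range(100):
    x = 3.0 * rng.normal(size=5)
    y = 3.0 * rng.normal(size=5)
    z = rng.uniform(-1.0, 1.0, size=5)   # an arbitrary point of S
    Px, Py = proj_box(x), proj_box(y)
    # (1) variational characterization of the projection
    assert (x - Px) @ (z - Px) <= 1e-12
    # (2) the decrease property used repeatedly in Section 3
    assert (Px - z) @ (Px - z) <= (x - z) @ (x - z) - (Px - x) @ (Px - x) + 1e-12
    # (3) firm nonexpansiveness
    assert (Px - Py) @ (Px - Py) <= (Px - Py) @ (x - y) + 1e-12
```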

The Extrapolated Projection Algorithm and Its Convergence
The following is our extrapolated projection algorithm.

Algorithm 3.1. For an arbitrary initial point x^0, the sequence {x^k}_{k≥0} is generated by the iteration

x^{k+1} = x^k − s max{1/L, λ_k} ∇p(x^k),  (3.1)

where s is a positive scalar with 0 < s < 2, α_i > 0 and β_j > 0 for all i and j with ∑_{i=1}^t α_i + ∑_{j=1}^r β_j = 1, and, whenever ∇p(x^k) ≠ 0, the extrapolation factor is

λ_k = 2p(x^k)/‖∇p(x^k)‖²  (3.2)

(if ∇p(x^k) = 0, then x^k already solves the MSSFP).

Proof. Let h_k = max{1/L, λ_k} and take a point z ∈ C with Az ∈ Q.
Step 1. First we show that ‖x^{k+1} − z‖ ≤ ‖x^k − z‖. From (3.1), we have

‖x^{k+1} − z‖² = ‖x^k − z‖² − 2sh_k⟨∇p(x^k), x^k − z⟩ + s²h_k²‖∇p(x^k)‖².  (3.3)

Observe that

⟨∇p(x^k), x^k − z⟩ = ∑_{i=1}^t α_i⟨x^k − P_{C_i}(x^k), x^k − z⟩ + ∑_{j=1}^r β_j⟨Ax^k − P_{Q_j}(Ax^k), Ax^k − Az⟩.  (3.4)
Abstract and Applied Analysis

By property (2) in Lemma 2.1, we get

⟨x^k − P_{C_i}(x^k), x^k − z⟩ ≥ ‖x^k − P_{C_i}(x^k)‖², i = 1, ..., t.  (3.5)

Therefore,

∑_{i=1}^t α_i⟨x^k − P_{C_i}(x^k), x^k − z⟩ ≥ ∑_{i=1}^t α_i‖x^k − P_{C_i}(x^k)‖².  (3.6)

Similarly, we have

⟨Ax^k − P_{Q_j}(Ax^k), Ax^k − Az⟩ ≥ ‖Ax^k − P_{Q_j}(Ax^k)‖², j = 1, ..., r.  (3.7)
Since Az ∈ Q, using property (2) in Lemma 2.1 again, we obtain

∑_{j=1}^r β_j⟨Ax^k − P_{Q_j}(Ax^k), Ax^k − Az⟩ ≥ ∑_{j=1}^r β_j‖Ax^k − P_{Q_j}(Ax^k)‖².  (3.8)

Substituting (3.6) and (3.8) into (3.3), and noting that the two right-hand sides above sum to 2p(x^k), we get

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² − 4sh_k p(x^k) + s²h_k²‖∇p(x^k)‖².  (3.9)
Assume that at the k-th step 1/L > λ_k; then algorithms (3.1) and (2.4) coincide. Since ∇p(x) has Lipschitz constant L, ∇p(x) is 1/L-ism (inverse strongly monotone), that is,

⟨∇p(x) − ∇p(y), x − y⟩ ≥ (1/L)‖∇p(x) − ∇p(y)‖²;  (3.10)

see [5]. Then, from the proof of Theorem 2.1 in [2], the sequence {x^k} generated by (1.2) satisfies

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² − (s(2 − s)/L²)‖∇p(x^k)‖².  (3.11)
Similarly, assume that at the k-th step 1/L ≤ λ_k, that is, h_k = λ_k; then, replacing 1/L with λ_k in (3.9), we get

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² − 4sλ_k p(x^k) + s²λ_k²‖∇p(x^k)‖².  (3.12)

Since λ_k‖∇p(x^k)‖² = 2p(x^k) by the definition of λ_k, this gives

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² − s(2 − s)λ_k²‖∇p(x^k)‖².  (3.13)
Since s ∈ (0, 2), in both cases we have

‖x^{k+1} − z‖ ≤ ‖x^k − z‖.  (3.14)

Step 2. Secondly we show that lim_{k→∞} x^k = x* with x* ∈ C and Ax* ∈ Q.
As shown in Step 1, the sequence {‖x^k − z‖} is monotonically decreasing and bounded, so the limit

lim_{k→∞} ‖x^k − z‖  (3.15)

exists.
Since the case h_k = 1/L is already treated in [5], we only need to consider the subsequence {x^{k_p}} with λ_{k_p} > 1/L, and to show that this subsequence converges to a point x* with x* ∈ C and Ax* ∈ Q. From (3.12) and (3.15), replacing x^k by x^{k_p}, we have lim_{p→∞} λ_{k_p}²‖∇p(x^{k_p})‖² = 0; since λ_{k_p} > 1/L, this yields

lim_{p→∞} ‖∇p(x^{k_p})‖ = 0.  (3.16)

From property (3) in Lemma 2.1, the projections are nonexpansive; moreover, since {‖x^k − z‖} is decreasing, the sequence {x^k} is bounded, so we may assume that there exists a constant M such that

‖x^{k_p} − z‖ ≤ M for all p.  (3.17)

Combining (3.4), (3.6), and (3.8) with the Cauchy–Schwarz inequality then gives

2p(x^{k_p}) ≤ ⟨∇p(x^{k_p}), x^{k_p} − z⟩ ≤ M‖∇p(x^{k_p})‖.  (3.18)
Taking limits as p → ∞ in (3.18) and considering (3.16) leads to

lim_{p→∞} p(x^{k_p}) = 0,  (3.19)

and consequently, by the definition (2.2) of p,

lim_{p→∞} ‖x^{k_p} − P_{C_i}(x^{k_p})‖ = 0, i = 1, ..., t,  lim_{p→∞} ‖Ax^{k_p} − P_{Q_j}(Ax^{k_p})‖ = 0, j = 1, ..., r.  (3.20)
Since the sequence {x^{k_p}} is bounded, there exists a subsequence {x^{k_{p_l}}} of {x^{k_p}} which converges to a point b, with the corresponding subsequence {Ax^{k_{p_l}}} of {Ax^{k_p}} converging to Ab. Therefore, from (3.20), it is easy to see that b ∈ C and Ab ∈ Q.
To obtain the result that the sequence {x^{k_p}} itself converges to the point b ∈ C with Ab ∈ Q, it is now sufficient to show that every convergent subsequence of {x^{k_p}} converges to the same point b and that the corresponding subsequence of {Ax^{k_p}} converges to Ab. Suppose that there exists another subsequence {x^{k_{p_{l′}}}} of {x^{k_p}} convergent to a point b′; as above, b′ ∈ C and Ab′ ∈ Q. For l ∈ Z+, we obtain

‖x^{k_{p_l}} − b′‖² = ‖x^{k_{p_l}} − b‖² + 2⟨x^{k_{p_l}} − b, b − b′⟩ + ‖b − b′‖²,  (3.21)

which, after calculating the inner product, leads to

‖x^{k_{p_l}} − b′‖² − ‖x^{k_{p_l}} − b‖² = 2⟨x^{k_{p_l}} − b, b − b′⟩ + ‖b − b′‖².  (3.22)

Similarly, for l′ ∈ Z+, it is easy to obtain

‖x^{k_{p_{l′}}} − b‖² − ‖x^{k_{p_{l′}}} − b′‖² = 2⟨x^{k_{p_{l′}}} − b′, b′ − b⟩ + ‖b − b′‖².  (3.23)
As remarked in Step 1, since b and b′ solve the MSSFP, the sequences {‖x^k − b‖} and {‖x^k − b′‖} are convergent. In particular, we get the following:

lim_{l→∞} ‖x^{k_{p_l}} − b‖ = lim_{l′→∞} ‖x^{k_{p_{l′}}} − b‖ = lim_{k→∞} ‖x^k − b‖, and similarly with b replaced by b′.  (3.24)
Taking the limits in (3.22) and (3.23), for l → ∞ and for l′ → ∞, and adding the two resulting relations while using (3.24), we deduce that

‖b − b′‖² = 0.  (3.25)

Here we briefly explain the rationale for the choice of the parameter λ_k in Algorithm 3.1. In fact, if s = 1 and h_k = λ_k, (3.9) can be rewritten as

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² − (4λ_k p(x^k) − λ_k²‖∇p(x^k)‖²).  (3.26)
Evidently, when λ_k = 2p(x^k)/‖∇p(x^k)‖², the subtracted term on the right-hand side of (3.26) attains its maximal value. Hence, if s = 1, for the case λ_k > 1/L, the factor λ_k can be considered as the "best" possible value, in the sense that it makes the bound on ‖x^{k+1} − z‖ smallest, so that x^{k+1} is the "closest" point to the solution set of the MSSFP along the direction −∇p(x^k). Therefore, to some extent, the extrapolation factor λ_k plays an important role in the accelerated convergence of Algorithm 3.1.
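One step of Algorithm 3.1 can be sketched as follows, taking λ_k = 2p(x^k)/‖∇p(x^k)‖² as discussed above; the helper names and the callable projections are our own conventions, not the paper's.

```python
import numpy as np

def extrapolated_step(x, A, projs_C, projs_Q, alphas, betas, s, L):
    """One iteration of the extrapolated scheme (3.1):
    x <- x - s * max{1/L, lambda_k} * grad p(x),
    with lambda_k = 2 p(x) / ||grad p(x)||^2."""
    Ax = A @ x
    p, g, gq = 0.0, np.zeros_like(x), np.zeros_like(Ax)
    for a, P in zip(alphas, projs_C):
        r = x - P(x)
        p += 0.5 * a * (r @ r)
        g += a * r
    for b, P in zip(betas, projs_Q):
        r = Ax - P(Ax)
        p += 0.5 * b * (r @ r)
        gq += b * r
    g = g + A.T @ gq
    gn2 = g @ g
    if gn2 == 0.0:          # grad p(x) = 0: x already solves the MSSFP
        return x
    lam = 2.0 * p / gn2
    return x - s * max(1.0 / L, lam) * g
```

Whenever λ_k > 1/L the method takes a longer step than (1.2) along the same direction −∇p(x^k), which is exactly the source of the acceleration.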

Numerical Experiments
In the numerical results listed in the tables below, "CPU" denotes the CPU time in seconds. We write e_0 = (0, 0, ..., 0) ∈ R^N and e_1 = (1, 1, ..., 1) ∈ R^N. "Algorithm (1.2)" in the tables denotes the projection algorithm developed by Censor et al. in [5], given as (1.2); "Algorithm 3.1" denotes our Algorithm 3.1. We now give the following examples to test the efficiency of the algorithms.
The number of iterative steps needed by Algorithm (1.2) and Algorithm 3.1, together with the corresponding solutions, is shown in Table 1 for the first example and in Table 3 for the third example.
From these preliminary numerical results, we can see that the method is efficient, and that the extrapolation technique does not add a significant computational burden.
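The comparison can be reproduced in miniature; the problem below (diagonal A, box sets, sizes, and tolerance are our own choices, not the paper's examples) shows how the iteration counts of the two methods are obtained.

```python
import numpy as np

def run(extrapolate, x0, A, PC, PQ, al, be, s=1.0, tol=1e-10, kmax=100000):
    """Iterate until p(x^k) < tol and return the number of iterations used.
    extrapolate=False gives the fixed-step method (1.2);
    extrapolate=True gives Algorithm 3.1 with lambda_k = 2 p / ||grad p||^2."""
    L = sum(al) + np.linalg.eigvalsh(A.T @ A).max() * sum(be)
    x = x0.copy()
    for k in range(kmax):
        Ax = A @ x
        rc = [x - P(x) for P in PC]
        rq = [Ax - P(Ax) for P in PQ]
        p = 0.5 * sum(a * (r @ r) for a, r in zip(al, rc)) \
          + 0.5 * sum(b * (r @ r) for b, r in zip(be, rq))
        if p < tol:
            return k
        g = sum(a * r for a, r in zip(al, rc)) + A.T @ sum(b * r for b, r in zip(be, rq))
        gn2 = g @ g
        step = max(1.0 / L, 2.0 * p / gn2) if (extrapolate and gn2 > 0.0) else 1.0 / L
        x = x - s * step * g
    return kmax

A = np.diag([1.0, 4.0])
PC = [lambda v: np.clip(v, 0.0, 1.0)]
PQ = [lambda v: np.clip(v, 0.5, 3.0)]
x0 = np.array([5.0, -5.0])
k_fixed = run(False, x0, A, PC, PQ, [0.5], [0.5])   # Algorithm (1.2)
k_extra = run(True, x0, A, PC, PQ, [0.5], [0.5])    # Algorithm 3.1
```

In runs of this kind the extrapolated variant needs markedly fewer iterations, in line with the behaviour reported in the tables.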

Evidently, (3.1) happens to be (1.2) when 1/L > λ_k. Now we prove the convergence of Algorithm 3.1.

Theorem 3.2. Assume that the solution set of the multiple-set split feasibility problem (MSSFP) is nonempty. Then any sequence {x^k}_{k≥0} generated by Algorithm 3.1 converges to a solution of the MSSFP (1.1).
From (3.25) above we conclude that b = b′, and similarly Ab = Ab′. Hence lim_{p→∞} x^{k_p} = b with b ∈ C and Ab ∈ Q, that is, lim_{p→∞} ‖x^{k_p} − b‖ = 0. Writing x* for b, this reads lim_{p→∞} ‖x^{k_p} − x*‖ = 0 with x* ∈ C and Ax* ∈ Q. By the monotonicity and boundedness of the sequence {‖x^k − x*‖}, the whole sequence converges to x*, and we get the result.

Table 2:
The numerical results of Example 4.2. In this example, we consider a multiple-set split feasibility problem where A = (a_{ij})_{N×N} ∈ R^{N×N} and the entries a_{ij} ∈ (0, 10) are generated randomly. Taking the initial point x^0 = e_0 ∈ R^N, we test the algorithms with different values of s, t, r, and N, in Euclidean spaces of different dimensions. The number of iterative steps needed by Algorithm (1.2) and Algorithm 3.1 is displayed in Table 2.

Table 3:
The numerical results of Example 4.3.