The Split Feasibility Problem and Its Solution Algorithm

The split feasibility problem arises in many fields, such as signal processing, image reconstruction, and medical care. In this paper, we present a solution algorithm, called the memory gradient projection method, for solving the split feasibility problem. It employs a parameter and two previous iterates to generate the next iterate, and its step size can be calculated directly. This not only improves the flexibility of the algorithm but also avoids computing the largest eigenvalue of the related matrix or estimating the Lipschitz constant at each iteration. Theoretical convergence results are established under suitable conditions.


Introduction
The split feasibility problem (SFP) was first put forward by Censor and Elfving in [1]. It requires finding a point in a nonempty closed convex subset of one space such that its image under a certain operator lies in another nonempty closed convex subset of the image space. Its precise mathematical formulation is as follows: given closed convex sets C and Q in N- and M-dimensional Euclidean space, respectively, the split feasibility problem is to find a vector x^* for which

x^* ∈ C,  Ax^* ∈ Q,

where A is a given M × N real matrix. The SFP arises in many fields in the real world, such as signal processing, image reconstruction, and medical care; for details see [1-3] and the references therein. For example, a number of image reconstruction problems can be formulated as split feasibility problems. The vector x represents a vectorized image, with the entries of x being the intensity levels at each voxel or pixel. The set C can be selected to incorporate features such as nonnegativity of the entries of x, while the matrix A can describe linear functional or projection measurements we have made, as well as other linear combinations of the entries of x on which we wish to impose constraints. The set Q can then be the product of the vector of measured data with other convex sets, such as nonnegative cones, that serve to describe the constraints to be imposed [4]. Here we give a discretized model of the SFP arising in the image reconstruction problem of X-ray tomography [5, 6]. In image reconstruction, we consider a two-dimensional cross section in which the attenuation intensities of X-rays differ across tissues. This attenuation effect can be seen as a nonnegative function, called the image. We hope to obtain information about the physiological state of the tissue by measuring these data. The fundamental model is formulated in the following way: a Cartesian grid of square picture elements, called pixels, is introduced into the region of interest so that it covers the whole picture that has to be reconstructed. The pixels are numbered in some agreed manner, say from 1 (top left corner pixel) to N (bottom right corner pixel) (see Figure 1).
The X-ray attenuation function is assumed to take a constant value x_j throughout the jth pixel, for j = 1, 2, ..., N. Source and detector are assumed to be points and the rays between them lines. Further, the length of the intersection of the ith ray with the jth pixel, denoted by a_{ij} for all i = 1, 2, ..., M and j = 1, 2, ..., N, represents the weight of the contribution of the jth pixel to the total attenuation along the ith ray. The physical measurement of the total attenuation along the ith ray, denoted by b_i, represents the line integral of the unknown attenuation function along the path of the ray. Therefore, in this discretized model, the line integral turns out to be a finite sum and the whole model is described by a system of linear equations

sum_{j=1}^{N} a_{ij} x_j = b_i,  i = 1, 2, ..., M.

In matrix notation, we write the equations above as

Ax = b,

where b = (b_i) ∈ R^M is the measurement vector, x = (x_j) ∈ R^N is the image vector, and the M × N matrix A = (a_{ij}) is the projection matrix. So the model is a special SFP: find x ∈ C such that Ax ∈ Q, with Q = {b}. It can also serve as a unified model for many inverse problems, such as intensity-modulated radiation therapy (IMRT) [3]. Many well-known iterative algorithms have been established for the SFP (see, e.g., [1, 4, 6-18]). In [1], the authors used their multidistance idea to obtain iterative algorithms for solving the SFP. Their algorithms, as well as others obtained later, involve matrix inverses at each iteration. In [4], Byrne presented a projection method called the CQ algorithm for solving the SFP that does not involve matrix inverses, but it assumes that the metric projections onto C and Q are easily calculated. However, in some cases it is impossible, or requires too much work, to compute the metric projection exactly. When this happens, the efficiency of projection-type methods, including the CQ algorithm, is seriously affected. In [16], by using the relaxed projection technique, Yang presented a relaxed CQ algorithm for solving the SFP, where he
used two halfspaces C_k and Q_k in place of C and Q, respectively, at the kth iteration; the metric projections onto C_k and Q_k are easily executed. López et al. [10] introduced a new self-adaptive step size to improve the CQ and relaxed CQ algorithms. For more effective algorithms and extensions of the SFP, the reader may consult [12, 19, 20] and the survey papers [2, 7]. Note that all these algorithms use only the current point to generate the next iterate; they do not use the previous iterates x^{k-1}, x^{k-2}, ..., which limits their flexibility. Using some information from previous iterates increases the flexibility of the algorithm.
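To make the discussion concrete, the CQ iteration of Byrne can be sketched numerically. The snippet below is purely illustrative, with assumed choices that are not from this paper: C is the nonnegative orthant (so P_C is a componentwise clip) and Q = {b} is a singleton (so P_Q(y) = b). It uses the classical fixed step size γ ∈ (0, 2/λ_max(A^T A)), which is exactly the largest-eigenvalue computation that the self-adaptive step sizes discussed later avoid.

```python
import numpy as np

def cq_algorithm(A, b, x0, n_iter=2000):
    """Sketch of Byrne's CQ algorithm for the SFP: find x in C with Ax in Q.

    Illustrative choices (not from the paper): C is the nonnegative
    orthant and Q = {b}, so P_C(x) = max(x, 0) and P_Q(y) = b.
    """
    # Fixed step gamma in (0, 2 / lambda_max(A^T A)); computing the
    # spectral norm of A is the expensive step self-adaptive rules avoid.
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.astype(float)
    for _ in range(n_iter):
        residual = A @ x - b                    # (I - P_Q)(Ax) with Q = {b}
        x = np.maximum(x - gamma * A.T @ residual, 0.0)  # projected gradient step
    return x

# Tiny consistent system whose solution is nonnegative.
A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = A @ np.array([0.5, 1.0])
x = cq_algorithm(A, b, np.zeros(2))
```

Since the system is consistent and its solution lies in C, the iterates converge to that solution.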
In this paper, inspired by the inertial projection algorithms for the convex feasibility problem [9], we propose a memory gradient projection algorithm for solving the SFP, which employs a parameter β_k and the iterates x^{k-1}, x^k to generate the next point x^{k+1}. This can improve convergence greatly, since the vector x^k − x^{k-1} acts as an impulsion term and β_k acts as a speed regulator. Compared with existing methods for solving the SFP, the algorithm presented in this paper has the following advantages: (1) When the projections P_C and P_Q are not easily calculated, each iteration of the algorithm only needs to compute the projection onto a halfspace that contains the given closed convex set and is related to the current iterate, which is implemented very easily. (2) At each iteration, the step size can be computed directly, without computing the largest eigenvalue of the related matrix, estimating the Lipschitz constant, or using a line search scheme. (3) It employs two previous iterates to generate the next iterate and hence improves the flexibility of the algorithm.
The rest of the paper is organized as follows. In Section 2, we present some useful preliminaries. In Section 3, we introduce a memory gradient projection method for solving the SFP and prove its convergence. Section 4 introduces the relaxed version of the memory gradient projection method proposed in Section 3 and establishes its convergence. Section 5 highlights the main conclusions of this paper.

Preliminaries
In this section, we review some definitions and basic results which will be used in this paper. Throughout the paper, ⟨⋅, ⋅⟩ and ‖ ⋅ ‖ denote the usual inner product and norm in R^N.
For a given nonempty closed convex subset Ω of R^N, the metric projection from R^N onto Ω is defined by

P_Ω(x) = argmin_{y ∈ Ω} ‖x − y‖,  x ∈ R^N.

It has the following well-known properties.
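For common choices of Ω the metric projection has a closed form. The snippet below (an illustration, not from the paper) shows two standard cases, the box and the Euclidean ball:

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box {z : lo <= z <= hi}: componentwise clip."""
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    # Inside the ball: x is its own projection; outside: rescale the offset.
    return x if n <= radius else center + (radius / n) * d

x = np.array([3.0, -2.0])
p_box = project_box(x, 0.0, 1.0)              # clips to [1, 0]
p_ball = project_ball(x, np.zeros(2), 1.0)    # x scaled onto the unit sphere
```

Both maps are nonexpansive, which is the key property exploited in the convergence proofs.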
The lemma below will be useful for the convergence analysis of our algorithm.

Lemma 3 (see [11]). Let {φ_k} and {δ_k} be sequences of nonnegative real numbers satisfying

φ_{k+1} ≤ φ_k + α(φ_k − φ_{k−1}) + δ_k,  with sum_{k=1}^{∞} δ_k < ∞,

where α ∈ [0, 1). Then the sequence {φ_k} is convergent.
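A quick numerical illustration of a lemma of this type (assuming the standard inertial form: φ_{k+1} ≤ φ_k + α(φ_k − φ_{k−1}) + δ_k with α ∈ [0, 1) and summable δ_k implies that {φ_k} converges). The recursion is taken with equality, which is the extreme case of the inequality:

```python
def inertial_sequence(n, alpha=0.5):
    """Run phi_{k+1} = phi_k + alpha*(phi_k - phi_{k-1}) + delta_k
    with the summable perturbation delta_k = 2^{-k}, and return phi_n.
    """
    phi_prev, phi = 1.0, 2.0
    for k in range(1, n):
        phi_prev, phi = phi, phi + alpha * (phi - phi_prev) + 2.0 ** (-k)
    return phi

# The sequence settles down: distant tails agree to high accuracy.
tail_200 = inertial_sequence(200)
tail_400 = inertial_sequence(400)
```

Here the successive differences satisfy d_k = α d_{k−1} + δ_k, a summable sequence, so φ_k is Cauchy and hence convergent.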
The following lemma provides some important properties of the subdifferential.
Lemma 4 (see [21]). Suppose h : R^N → R is a convex function; then it is subdifferentiable everywhere and its subdifferentials are uniformly bounded on any bounded subset of R^N.
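As a concrete illustration (an assumed example, not from the paper), the convex function c(x) = ‖x‖₁ is nondifferentiable at points with zero coordinates yet subdifferentiable everywhere; the componentwise sign vector is one valid subgradient, and its Euclidean norm is bounded by √N on all of R^N, in line with Lemma 4:

```python
import numpy as np

def subgradient_l1(x):
    """One subgradient of the convex function c(x) = ||x||_1.

    sign(x) is valid everywhere; at coordinates with x_i = 0 any value
    in [-1, 1] would also do. Its norm never exceeds sqrt(len(x)).
    """
    return np.sign(x)

# Verify the subgradient inequality c(y) >= c(x) + <g, y - x>.
rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
g = subgradient_l1(x)
lhs = np.abs(y).sum()
rhs = np.abs(x).sum() + g @ (y - x)
```

The inequality holds for every pair x, y, which is exactly what makes subgradients usable in place of gradients in the relaxed algorithm of Section 4.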

Memory Gradient Projection Algorithm and Its Convergence
In this section, assuming that the projections P_C and P_Q are easily calculated, we first establish a memory gradient projection algorithm for solving the split feasibility problem and then prove that the sequence of iterates generated by the algorithm converges to a solution of the SFP.
Algorithm 5. Given any x^0, x^1 ∈ R^N, for k = 1, 2, ..., calculate

x^{k+1} = P_C( x^k − τ_k ∇f(x^k) + β_k (x^k − x^{k−1}) ),

where {β_k} ⊂ [0, β], β ∈ [0, 1), f(x) = (1/2)‖(I − P_Q)Ax‖², ∇f(x) = A^T(I − P_Q)Ax, and the step size τ_k is computed directly from the current iterate as τ_k = ρ_k f(x^k)/‖∇f(x^k)‖².

Remark 6. Evidently, when β_k = 0, Algorithm 5 reduces to Algorithm 3.1 of [14] with its relaxation parameter equal to 1. At each iteration, the step size τ_k can be computed directly, which avoids computing the largest eigenvalue of the related matrix, estimating the Lipschitz constant, or using a line search scheme, as required by some existing methods in the literature. Here, the parameter β_k and the two iterates x^{k−1}, x^k are employed to obtain the next iterate x^{k+1}, hence improving the flexibility of the algorithm. Now we give the convergence analysis of Algorithm 5.
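A minimal numerical sketch of a memory gradient projection iteration of this type follows. All concrete choices are assumptions for illustration, not the paper's exact Algorithm 5: C is taken as the nonnegative orthant and Q = {b} so both projections are explicit, beta is a constant inertial parameter standing in for β_k, and the step uses a López-style self-adaptive rule τ_k = ρ f(x^k)/‖∇f(x^k)‖².

```python
import numpy as np

def memory_gradient_projection(A, b, x0, x1, beta=0.2, rho=1.5, n_iter=5000):
    """Sketch of a memory gradient projection iteration for the SFP.

    Assumed illustrative setting: C = nonnegative orthant, Q = {b}, so
    f(x) = 0.5*||Ax - b||^2 and grad f(x) = A^T (Ax - b). The step
    tau_k = rho * f(x^k) / ||grad f(x^k)||^2 needs no eigenvalue of A^T A.
    """
    x_prev, x = x0.astype(float), x1.astype(float)
    for _ in range(n_iter):
        r = A @ x - b
        g = A.T @ r
        gn2 = g @ g
        if gn2 < 1e-24:                      # x already solves Ax = b
            break
        tau = rho * (0.5 * (r @ r)) / gn2    # self-adaptive step size
        # Inertial (memory) term beta*(x^k - x^{k-1}), then project onto C.
        x_next = np.maximum(x - tau * g + beta * (x - x_prev), 0.0)
        x_prev, x = x, x_next
    return x

A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = A @ np.array([0.5, 1.0])
x = memory_gradient_projection(A, b, np.zeros(2), np.zeros(2))
```

With beta = 0, the loop collapses to a plain self-adaptive projected gradient step, which matches the reduction noted in Remark 6.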
Theorem 7. Suppose that the solution set of the SFP is nonempty. Then, for any x^0, x^1 ∈ R^N, the sequence generated by Algorithm 5 converges to a solution of the SFP.

Mathematical Problems in Engineering
Proof. Let x^* be any solution of the SFP. Then, x^* ∈ C and Ax^* ∈ Q.

It is easy to see that, for any vectors u, v ∈ R^N, we have

‖u + v‖² = ‖u‖² + 2⟨u, v⟩ + ‖v‖²,

so we can rewrite the relation above accordingly. From Lemma 1, let φ_k = ‖x^k − x^*‖². Then from (18) we obtain a recursion of the form required in Lemma 3. Thanks to Lemma 3, we conclude that {φ_k} is convergent; that is, {‖x^k − x^*‖} is convergent. The arguments above, combined with (15), imply that

lim_{k→∞} f(x^k) = 0.

Let x̂ be any accumulation point of {x^k}. Then there exists a subsequence {x^{k_j}} which converges to x̂. Since C is a closed convex set, x̂ ∈ C. On the other hand, the construction of y^k and (20) show that y^{k_j} → x̂. By using (23), we get

lim_{j→∞} ‖(I − P_Q)Ax^{k_j}‖ = ‖(I − P_Q)Ax̂‖ = 0.

That is, Ax̂ ∈ Q. Therefore, x̂ is a solution of the SFP. Thus we may use x̂ in place of x^* in (18) and obtain that {‖x^k − x̂‖} is convergent. Because there is a subsequence {‖x^{k_j} − x̂‖} converging to 0, we have x^k → x̂ as k → ∞. This completes the proof.

The Relaxed Memory Gradient Projection Algorithm and Its Convergence
In this section, assuming that the projections P_C and P_Q are not easily calculated, we present the relaxed version of the memory gradient projection algorithm given in Section 3. More precisely, the convex sets C and Q satisfy the following assumptions:

(H1) The set C is given by

C = {x ∈ R^N : c(x) ≤ 0},

where c : R^N → R is a convex (not necessarily differentiable) function and C is nonempty.
The set Q is given by

Q = {y ∈ R^M : q(y) ≤ 0},

where q : R^M → R is a convex (not necessarily differentiable) function and Q is nonempty.

(H2) For any x ∈ R^N, at least one subgradient ξ ∈ ∂c(x) can be calculated, where ∂c(x), the subdifferential of c at x, is defined as follows:

∂c(x) = {ξ ∈ R^N : c(z) ≥ c(x) + ⟨ξ, z − x⟩ for all z ∈ R^N},

and similarly for ∂q.

Algorithm 8. Given any x^0, x^1 ∈ R^N, for k = 1, 2, ..., calculate the iteration of Algorithm 5 with C and Q replaced by the halfspaces

C_k = {x ∈ R^N : c(x^k) + ⟨ξ^k, x − x^k⟩ ≤ 0},

where ξ^k is an element of ∂c(x^k), and

Q_k = {y ∈ R^M : q(Ax^k) + ⟨η^k, y − Ax^k⟩ ≤ 0},

where η^k is an element of ∂q(Ax^k).

Remark 9.
(1) By the definition of the subgradient, it is clear that the halfspaces C_k and Q_k contain C and Q, respectively. From the expressions for C_k and Q_k, the metric projections onto C_k and Q_k may be calculated directly (see [22, 23]).
(2) Compared with Algorithm 5, when the projections P_C and P_Q are not easily calculated, Algorithm 8 only needs to compute, at each iteration, the projection onto a halfspace that contains the given closed convex set and is related to the current iterate, which is implemented very easily. Now, we establish the global convergence of Algorithm 8.
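The halfspace projections on which the relaxed algorithm relies have a simple closed form: for H = {z : ⟨a, z⟩ ≤ β} with a ≠ 0, P_H(x) = x − max{0, ⟨a, x⟩ − β} a/‖a‖². A small sketch (illustrative; the variable names are ours, and the halfspaces C_k, Q_k above have exactly this form with a and β built from the current subgradient):

```python
import numpy as np

def project_halfspace(x, a, beta):
    """Metric projection onto H = {z : <a, z> <= beta}, with a != 0.

    If x already lies in H it is returned unchanged; otherwise x is
    moved along a by the constraint violation divided by ||a||^2.
    """
    violation = a @ x - beta
    if violation <= 0.0:
        return x
    return x - (violation / (a @ a)) * a

a = np.array([1.0, 1.0])
x = np.array([2.0, 2.0])
p = project_halfspace(x, a, 2.0)   # violation is 2, so p = x - a
```

Unlike projections onto general convex sets, this formula costs only one inner product per iterate, which is what makes the relaxed scheme so cheap.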