Applications of Fixed-Point and Optimization Methods to the Multiple-Set Split Feasibility Problem

The multiple-set split feasibility problem requires finding a point closest to a family of closed convex sets in one space such that its image under a linear transformation is closest to another family of closed convex sets in the image space. It serves as a model for many inverse problems in which constraints are imposed on the solutions in the domain of a linear operator as well as in the operator's range. It generalizes both the convex feasibility problem and the two-set split feasibility problem. In this paper, we review and report some recent results on iterative approaches to the multiple-set split feasibility problem.


The Multiple-Set Split Feasibility Problem Model
Intensity-modulated radiation therapy (IMRT) has received a great deal of attention recently; for related works, please refer to [1–29]. In IMRT, beamlets of radiation with different intensities are transmitted into the body of the patient. Each voxel within the patient then absorbs a certain dose of radiation from each beamlet. The goal of IMRT is to direct a sufficient dosage to those regions requiring the radiation, the designated planned target volumes (PTVs), while limiting the dosage received by the other regions, the so-called organs at risk (OARs). The forward problem is to calculate the radiation dose absorbed in the irradiated tissue based on a given distribution of the beamlet intensities. The inverse problem is to find a distribution of beamlet intensities, the radiation intensity map, which will result in a clinically acceptable dose distribution. One important constraint is that the radiation intensity map must be implementable; that is, it is physically possible to produce such an intensity map, given the machine's design. There will be limits on the change in intensity between two adjacent beamlets, for example.
The equivalent uniform dose (EUD) for tumors is the biologically equivalent dose which, if given uniformly, will lead to the same cell kill within the tumor volume as the actual nonuniform dose. Constraints on the EUD received by each voxel of the body are described in dose space, the space of vectors whose entries are the doses received at each voxel. Constraints on the deliverable radiation intensities of the beamlets are best described in intensity space, the space of vectors whose entries are the intensity levels associated with each of the beamlets. The constraints in dose space will be upper bounds on the dosage received by the OARs and lower bounds on the dosage received by the PTVs. The constraints in intensity space are limits on the complexity of the intensity map and on the delivery time and, obviously, the requirement that the intensities be nonnegative. Because the constraints operate in two different domains, it is convenient to formulate the problem using both domains. This leads to a split feasibility problem.
The split feasibility problem (SFP) is to find an x in a given closed convex subset C of R^J such that Ax is in a given closed convex subset Q of R^I, where A is a given real I × J matrix. Because the constraints are best described in terms of several sets in dose space and several sets in intensity space, the SFP model needs to be expanded into the multiple-set SFP. It is not uncommon to find that, once the various constraints have been specified, there is no intensity map that satisfies them all. In such cases, it is desirable to find an intensity map that comes as close as possible to satisfying all the constraints. One way to do this, as we will see, is to minimize a proximity function.
For i = 1, ..., I and j = 1, ..., J, let b_i ≥ 0 be the dose absorbed by the ith voxel of the patient's body, x_j ≥ 0 the intensity of the jth beamlet of radiation, and A_{ij} ≥ 0 the dose absorbed at the ith voxel due to a unit intensity of radiation at the jth beamlet. The nonnegative matrix A with entries A_{ij} is the dose influence matrix. Let us assume that we have M constraints in the dose space and N constraints in the intensity space. Let H_m be the set of dose vectors that fulfill the mth dose constraint, and let X_n be the set of beamlet intensity vectors that fulfill the nth intensity constraint.
In intensity space, we have the obvious constraints that x_j ≥ 0. In addition, there are implementation constraints; the available treatment machine will impose its own requirements, such as a limit on the difference in intensities between adjacent beamlets. In dose space, there will be a lower bound on the dosage delivered to those regions designated as planned target volumes (PTVs) and an upper bound on the dosage delivered to those regions designated as organs at risk (OARs).
Suppose that S_t is either a PTV or an OAR, and suppose that S_t contains N_t voxels.

Fixed-Point Method
Next, we focus on the multiple-set split feasibility problem (MSSFP), which is to find a point x* such that x* ∈ ∩_{i=1}^N C_i and Ax* ∈ ∩_{j=1}^M Q_j, where N, M ≥ 1 are integers, the C_i (i = 1, 2, ..., N) are closed convex subsets of H_1, the Q_j (j = 1, 2, ..., M) are closed convex subsets of H_2, and A : H_1 → H_2 is a bounded linear operator. Assume that the MSSFP is consistent, that is, solvable, and let S denote its solution set. The case N = M = 1, called the split feasibility problem (SFP), was introduced by Censor and Elfving [43] to model phase retrieval and other image restoration problems, and it has been further studied by many researchers; see, for instance, [2–4, 6, 9–12, 17, 19–21].
We use Γ to denote the solution set of the SFP. Let γ > 0 and assume that x* ∈ Γ. Then Ax* ∈ Q_1, which implies (I − P_{Q_1})Ax* = 0, which in turn implies γA*(I − P_{Q_1})Ax* = 0, and hence the fixed-point equation (I − γA*(I − P_{Q_1})A)x* = x*. Requiring in addition that x* ∈ C_1, we consider the fixed-point equation

P_{C_1}(I − γA*(I − P_{Q_1})A)x* = x*. (1.6)

We will see that the solutions of the fixed-point equation (1.6) are exactly the solutions of the SFP. This also shows that the MSSFP (1.5) is equivalent to a common fixed-point problem of finitely many nonexpansive mappings, as we show below.
We decompose the MSSFP into N subproblems (1 ≤ i ≤ N). For each 1 ≤ i ≤ N, we define a mapping T_i by

T_i x = P_{C_i}(x − γ_i ∇f(x)),

where f is defined by

f(x) = (1/2) Σ_{j=1}^M β_j ||(I − P_{Q_j})Ax||²,

with β_j > 0 for all 1 ≤ j ≤ M. Note that the gradient of f is ∇f(x) = Σ_{j=1}^M β_j A*(I − P_{Q_j})Ax. It is known that if 0 < γ_i ≤ 2/L, then T_i is nonexpansive. Therefore, fixed-point algorithms for nonexpansive mappings can be applied to the MSSFP (1.5).
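As a concrete illustration of this fixed-point approach, the sketch below (not code from the paper) iterates x_{k+1} = P_C(x_k − γ A^T(I − P_Q)A x_k) for a small two-set SFP in which C and Q are boxes, so both projections are componentwise clips; the matrix, box bounds, and iteration count are made-up choices.

```python
import numpy as np

# Illustrative sketch (assumed data): the fixed-point iteration
#   x_{k+1} = P_C( x_k - gamma * A^T (I - P_Q) A x_k )
# for the two-set SFP, with C and Q taken as boxes.

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def sfp_fixed_point(A, proj_C, proj_Q, x0, gamma, iters=2000):
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))   # A*(I - P_Q)A x
        x = proj_C(x - gamma * grad)
    return x

A = np.array([[1.0, 0.0], [1.0, 1.0]])
proj_C = lambda x: project_box(x, np.array([0.0, 0.0]), np.array([2.0, 2.0]))
proj_Q = lambda y: project_box(y, np.array([0.5, 1.0]), np.array([1.5, 3.0]))
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant ||A||^2
x = sfp_fixed_point(A, proj_C, proj_Q, np.zeros(2), gamma=1.0 / L)
print(x, A @ x)
```

With gamma below 2/L the iterates settle on a point of C whose image lies in Q (the sets here were chosen so that the SFP is consistent).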

Optimization Method
Note that if x* solves the MSSFP, then x* satisfies two properties: (i) the distance from x* to each C_i is zero; (ii) the distance from Ax* to each Q_j is also zero.
This motivates us to consider the proximity function

g(x) = (1/2) Σ_{i=1}^N α_i ||x − P_{C_i}x||² + (1/2) Σ_{j=1}^M β_j ||Ax − P_{Q_j}Ax||²,

where {α_i} and {β_j} are positive real numbers, and P_{C_i} and P_{Q_j} are the metric projections onto C_i and Q_j, respectively.
Proposition 1.2. x* is a solution of the MSSFP (1.5) if and only if g(x*) = 0.
Since g(x) ≥ 0 for all x ∈ H_1, a solution of the MSSFP (1.5) is a minimizer of g over any closed convex subset, with minimum value zero. Note that this proximity function is convex and differentiable, with gradient

∇g(x) = Σ_{i=1}^N α_i (I − P_{C_i})x + Σ_{j=1}^M β_j A*(I − P_{Q_j})Ax,

where A* is the adjoint of A. Since the gradient ∇g is L-Lipschitz continuous with constant L = Σ_{i=1}^N α_i + ||A||² Σ_{j=1}^M β_j, we can use the gradient-projection method to solve the minimization problem min_{x∈Ω} g(x), where Ω is a closed convex subset of H_1 whose intersection with the solution set of the MSSFP is nonempty, and thus get a solution of the so-called constrained multiple-set split feasibility problem (CMSSFP): find x* ∈ Ω such that x* solves (1.5). (1.16)
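The proximity function and its gradient are straightforward to implement once the projections are available. The sketch below is a hedged illustration in which the sets C_i and Q_j are taken to be Euclidean balls (an assumed choice, not the paper's clinical sets); a finite-difference check of the gradient is a useful sanity test.

```python
import numpy as np

# Hedged sketch of the proximity function g and its gradient; the balls,
# weights, matrix, and test point below are all illustrative assumptions.

def project_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + (radius / n) * d

def g(x, A, balls_C, balls_Q, alpha, beta):
    val = 0.0
    for a_i, (c, r) in zip(alpha, balls_C):          # 0.5 * alpha_i * d(x, C_i)^2
        val += 0.5 * a_i * np.sum((x - project_ball(x, c, r)) ** 2)
    Ax = A @ x
    for b_j, (c, r) in zip(beta, balls_Q):           # 0.5 * beta_j * d(Ax, Q_j)^2
        val += 0.5 * b_j * np.sum((Ax - project_ball(Ax, c, r)) ** 2)
    return val

def grad_g(x, A, balls_C, balls_Q, alpha, beta):
    out = np.zeros_like(x)
    for a_i, (c, r) in zip(alpha, balls_C):          # alpha_i (I - P_{C_i}) x
        out += a_i * (x - project_ball(x, c, r))
    Ax = A @ x
    for b_j, (c, r) in zip(beta, balls_Q):           # beta_j A^T (I - P_{Q_j}) A x
        out += b_j * (A.T @ (Ax - project_ball(Ax, c, r)))
    return out

A = np.array([[1.0, 2.0], [0.0, 1.0]])
balls_C = [(np.zeros(2), 1.0), (np.array([2.0, 0.0]), 1.0)]
balls_Q = [(np.array([1.0, 1.0]), 0.5)]
alpha, beta = [1.0, 0.5], [2.0]
x0 = np.array([3.0, -2.0])
print(g(x0, A, balls_C, balls_Q, alpha, beta))
```

Since the squared distance to a convex set is continuously differentiable, a central finite difference of g should match grad_g at any point.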

In this paper, we review and report recent progress on fixed-point and optimization methods for solving the MSSFP.

Some Concepts and Tools
Assume that H is a Hilbert space and C is a nonempty closed convex subset of H. The nearest point (or metric) projection, denoted P_C, from H onto C assigns to each x ∈ H the unique point P_C x ∈ C satisfying ||x − P_C x|| = min_{y∈C} ||x − y||. It is well known that P_C is firmly nonexpansive:

||P_C x − P_C y||² ≤ ⟨P_C x − P_C y, x − y⟩ for all x, y ∈ H,

and equality holds if and only if x − y = P_C x − P_C y. In particular, P_C is nonexpansive; that is, ||P_C x − P_C y|| ≤ ||x − y|| for all x, y ∈ H.

Definition 2.2. The operator P_C^λ := (1 − λ)I + λP_C is called a relaxed projection, where λ ∈ (0, 2) and I is the identity operator on H.
A mapping R : H → H is said to be an averaged mapping if R can be written as an average of the identity I and a nonexpansive mapping T:

R = (1 − α)I + αT, (2.6)

where α is a number in (0, 1) and T : H → H is nonexpansive. Consequently, a projection can be written as the mean average of a nonexpansive mapping and the identity: P_C = (1/2)(I + T) for some nonexpansive T. Thus projections are averaged maps with α = 1/2. Relaxed projections are also averaged.
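A quick numerical check of this fact: writing P_C = (I + T)/2 with the reflection T = 2P_C − I, the relaxed projection (1 − λ)I + λP_C coincides with (1 − λ/2)I + (λ/2)T, so it is averaged with α = λ/2. The snippet below verifies both identities for the projection onto the unit ball (an illustrative choice).

```python
import numpy as np

# Numerical check (illustrative): the reflection T = 2 P_C - I is
# nonexpansive, and the relaxed projection (1 - lam) I + lam P_C equals
# the averaged map (1 - lam/2) I + (lam/2) T, i.e. alpha = lam/2.

rng = np.random.default_rng(0)

def P_C(x):                      # projection onto the closed unit ball
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

def T(x):                        # reflection: 2 P_C - I
    return 2 * P_C(x) - x

lam = 1.5                        # any lam in (0, 2)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
    relaxed = (1 - lam) * x + lam * P_C(x)
    averaged = (1 - lam / 2) * x + (lam / 2) * T(x)
    assert np.allclose(relaxed, averaged)
print("relaxed projection is averaged with alpha = lam/2")
```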
Proposition 2.3. Let T : H → H be a nonexpansive mapping and R = (1 − α)I + αT an averaged map for some α ∈ (0, 1). Assume that T has a bounded orbit. Then one has the following.

(1) R is asymptotically regular; that is, lim_{n→∞} ||R^{n+1}x − R^n x|| = 0 for all x ∈ H.

Definition 2.4. Let A be an operator with domain D(A) and range R(A) in H.

(i) A is monotone if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ D(A). (2.9)

(ii) Given a number ν > 0, A is said to be ν-inverse strongly monotone (ν-ism), or cocoercive, if ⟨Ax − Ay, x − y⟩ ≥ ν||Ax − Ay||² for all x, y ∈ D(A).

It is easily seen that a projection P_C is a 1-ism.
Proposition 2.5. Given T : H → H, let V = I − T be the complement of T. Given also S : H → H, one has the following.

(iii) S is averaged if and only if the complement I − S is ν-ism for some ν > 1/2.

The next proposition collects the basic properties of averaged mappings.
Proposition 2.6. Given operators S, T, V : H → H, one has the following.

(i) If S = (1 − α)T + αV for some α ∈ (0, 1), and if T is averaged and V is nonexpansive, then S is averaged.

(ii) S is firmly nonexpansive if and only if the complement I − S is firmly nonexpansive. If S is firmly nonexpansive, then S is averaged.

(iii) If S = (1 − α)T + αV for some α ∈ (0, 1), where T is firmly nonexpansive and V is nonexpansive, then S is averaged.

(iv) If S and T are both averaged, then the product (composite) ST is averaged.

(v) If S and T are both averaged and if S and T have a common fixed point, then Fix S ∩ Fix T = Fix(ST).
Proposition 2.7. Consider the variational inequality VI (2.12): find v* ∈ C such that ⟨Av*, v − v*⟩ ≥ 0 for all v ∈ C, where C is a closed convex subset of a Hilbert space H and A is a monotone operator on H. Assume that VI (2.12) has a solution and that A is ν-ism. Then, for 0 < γ < 2ν, the sequence {x_n} generated by the algorithm x_{n+1} = P_C(x_n − γAx_n) converges weakly to a solution of VI (2.12).
An immediate consequence of Proposition 2.7 is the convergence of the gradient-projection algorithm.

Proposition 2.8. Let f : H → R be a continuously differentiable function such that the gradient ∇f is Lipschitz continuous:

||∇f(x) − ∇f(y)|| ≤ L||x − y|| for all x, y ∈ H. (2.14)

Assume that the minimization problem min_{x∈C} f(x) (2.15) is consistent, where C is a closed convex subset of H. Then, for 0 < γ < 2/L, the sequence {x_n} generated by the gradient-projection algorithm x_{n+1} = P_C(x_n − γ∇f(x_n)) converges weakly to a solution of (2.15).
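A minimal sketch of the gradient-projection iteration of Proposition 2.8, applied to the made-up quadratic f(x) = (1/2)||Ax − b||² over a box C, so that ∇f = A^T(Ax − b) and L = ||A||²:

```python
import numpy as np

# Gradient-projection sketch (illustrative data, not from the paper):
#   x_{n+1} = P_C( x_n - gamma * grad f(x_n) ),  0 < gamma < 2/L,
# for f(x) = 0.5 ||Ax - b||^2 over the box C = [0, 4]^2.

A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 3.0])
lo, hi = np.zeros(2), np.full(2, 4.0)

grad_f = lambda x: A.T @ (A @ x - b)
P_C = lambda x: np.clip(x, lo, hi)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f

x = np.zeros(2)
gamma = 1.0 / L
for _ in range(2000):
    x = P_C(x - gamma * grad_f(x))
print(x)   # approximately [0.5, 2.5], the solution of Ax = b inside the box
```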

Iterative Methods
In this section, we review and report on the iterative methods in the literature for solving the MSSFP (1.5).
It is not hard to see that the solution set S_i of the subproblem (1.7) coincides with Fix T_i, and that the solution set S of the MSSFP (1.5) coincides with the common fixed-point set of the mappings T_i. Further, we have (see [9, 18])

S = ∩_{i=1}^N S_i = ∩_{i=1}^N Fix T_i. (3.1)

Using the fact (3.1), we obtain the corresponding algorithms and convergence theorems for the MSSFP.
Algorithm 3.1 (Picard iterations). The weights satisfy λ_i > 0 for all i with Σ_{i=1}^N λ_i = 1, and 0 < γ < 2/L with L given by (1.11).
where T_n := T_{n mod N}, with the mod function taking values in {1, 2, ..., N}.
Theorem 3.6 (see [8]). Assume that the MSSFP (1.5) is consistent. Let {x_n} be the sequence generated by Algorithm 3.5, where 0 < γ < 2/L with L given by (1.11). Then {x_n} converges weakly to a solution of the MSSFP (1.5).
Note that the MSSFP (1.5) can be viewed as a special case of the convex feasibility problem of finding x* in the intersection of finitely many closed convex sets (3.5). In fact, (1.5) can be rewritten in this form. However, the methodologies for studying the MSSFP (1.5) actually differ from those for the convex feasibility problem, in order to avoid use of the inverse A^{-1}. In other words, methods for solving the convex feasibility problem may not apply directly to the MSSFP (1.5) without involving the inverse A^{-1}. The CQ algorithm of Byrne [1] is such an example, in which only the operator A (not the inverse A^{-1}) is involved.
Since every closed convex subset of a Hilbert space is the fixed-point set of its associated projection, the convex feasibility problem becomes a special case of the common fixed-point problem of finding a point x* in the intersection of the fixed-point sets of a family of mappings. Similarly, the MSSFP (1.5) becomes a special case of the split common fixed-point problem [19] of finding a point x* in the common fixed-point set of one family of mappings such that Ax* lies in the common fixed-point set of another family.

Algorithm 3.7 (cyclic iterations). Take an initial guess x_0 ∈ H_1, choose γ ∈ (0, 2/L), and define a sequence {x_n} by the cyclic iterative procedure.

Theorem 3.8 (see [17]). The sequence {x_n} generated by Algorithm 3.7 converges weakly to a solution of the MSSFP (1.5) whenever its solution set is nonempty.
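One plausible instance of such a cyclic sweep (the paper's displayed formula is not reproduced above, so the control scheme below is an assumption) cycles the index i over the sets C_i and j over the sets Q_j while taking a gradient step on the split part:

```python
import numpy as np

# Illustrative cyclic sweep for a small MSSFP (assumed update rule):
# at step n take i = n mod N, j = n mod M and update
#   x_{n+1} = P_{C_i}( x_n - gamma * A^T (A x_n - P_{Q_j}(A x_n)) ).

clip = lambda x, lo, hi: np.minimum(np.maximum(x, lo), hi)

C = [lambda x: clip(x, np.array([0.0, 0.0]), np.array([2.0, 2.0])),
     lambda x: clip(x, np.array([0.5, 0.0]), np.array([3.0, 1.0]))]
Q = [lambda y: clip(y, np.array([2.0]), np.array([4.0]))]
A = np.array([[1.0, 1.0]])
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # 0 < gamma < 2 / ||A||^2

x = np.zeros(2)
for n in range(3000):
    i, j = n % len(C), n % len(Q)
    Ax = A @ x
    x = C[i](x - gamma * (A.T @ (Ax - Q[j](Ax))))
print(x, A @ x)
```

The sets here were chosen so a common point exists; the sweep lands on a point in C_1 ∩ C_2 whose image under A lies in Q_1.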
Since the MSSFP (1.5) is equivalent to the minimization problem (1.15), we have the following gradient-projection algorithm.

Algorithm 3.9 (gradient-projection algorithm). Censor et al. [5] proved, in finite-dimensional Hilbert spaces, that Algorithm 3.9 converges to a solution of the MSSFP (1.5) in the consistent case. Below is a version of this convergence result in infinite-dimensional Hilbert spaces.

Theorem 3.10 (see [8]). Assume that 0 < γ < 2/L, where L is given by (1.14). The sequence {x_n} generated by Algorithm 3.9 converges weakly to a point z which is a solution of the MSSFP (1.5) in the consistent case, and a minimizer of the function p over Ω in the inconsistent case.

Perturbation Techniques
Consider the consistent problem (1.16) and denote by S its nonempty solution set. As pointed out previously, the projection P_C, where C is a closed convex subset of H, may be difficult to compute unless C has a simple form (e.g., a closed ball or a half-space). Therefore, some perturbed methods designed to avoid this inconvenience are presented. We can use subdifferentials when {C_i}, {Q_j}, and Ω are level sets of convex functionals. Consider

C_i = {x ∈ H_1 | c_i(x) ≤ 0},  Ω = {x ∈ H_1 | ω(x) ≤ 0},  Q_j = {y ∈ H_2 | q_j(y) ≤ 0},

where c_i, ω : H_1 → R and q_j : H_2 → R are convex functionals. We iteratively define a sequence {x_n} as follows.
Algorithm 3.14. The initial guess x_0 ∈ H_1 is arbitrary; once x_n has been defined, we define the (n+1)th iterate x_{n+1} by the update rule (3.14).
Theorem 3.15 (see [18]). Assume that each of the functions {c_i}_{i=1}^N, ω, and {q_j}_{j=1}^M is bounded on every bounded subset of H_1 and H_2, respectively. (Note that this condition is automatically satisfied in a finite-dimensional Hilbert space.) Then the sequence {x_n} generated by Algorithm 3.14 converges weakly to a solution of (1.16), provided that the sequence {γ_n} satisfies a suitable stepsize condition, where the constant L is given by (1.14).
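The key computational idea behind this subdifferential-based relaxation, standard in relaxed CQ-type methods, is to replace the projection onto a level set C = {x : c(x) ≤ 0} by the explicit projection onto the half-space {z : c(x_n) + ⟨ξ_n, z − x_n⟩ ≤ 0} with ξ_n ∈ ∂c(x_n). A self-contained sketch with the made-up choice c(x) = ||x||² − 1 (so the level set is the unit ball):

```python
import numpy as np

# Sketch of the subgradient relaxation (assumed construction): project onto
# the half-space H_n = {z : c(x_n) + <xi_n, z - x_n> <= 0} instead of onto
# the level set C = {x : c(x) <= 0} itself.

def halfspace_proj(x, a, b):
    """Projection onto {z : <a, z> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

c = lambda x: x @ x - 1.0          # level set {c <= 0} = closed unit ball
dc = lambda x: 2.0 * x             # gradient of c (a valid subgradient)

x = np.array([3.0, 4.0])
for _ in range(50):
    a = dc(x)                      # half-space: c(x) + <a, z - x> <= 0,
    b = a @ x - c(x)               # i.e. <a, z> <= <a, x> - c(x)
    x = halfspace_proj(x, a, b)
print(np.linalg.norm(x), c(x))
```

Each half-space projection is a single closed-form step; here the iterates drive c(x) to zero while the direction of x is preserved.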
Now consider general perturbation techniques in the direction of the approaches studied in [20–22, 44]. These techniques consist of taking approximate sets, and they involve the ρ-distance (3.16) between two closed convex sets A and B of a Hilbert space.
Let {Ω^n}, {C_i^n}, and {Q_j^n} be closed convex sets, viewed as perturbations of the closed convex sets Ω, {C_i}, and {Q_j}, respectively. Define the function g_n by (3.17), that is, g with the sets C_i and Q_j replaced by their perturbations C_i^n and Q_j^n. The gradient ∇g_n of g_n is

∇g_n(x) = Σ_{i=1}^N α_i (I − P_{C_i^n})x + Σ_{j=1}^M β_j A*(I − P_{Q_j^n})Ax. (3.18)

It is clear that ∇g_n is Lipschitz continuous with the Lipschitz constant L given by (1.14).
Algorithm 3.16. Let an initial guess x_0 ∈ H_1 be given, and let {x_n} be generated by the Krasnosel'skii-Mann iterative algorithm

x_{n+1} = (1 − t_n) x_n + t_n P_Ω(x_n − γ ∇g_n(x_n)). (3.19)
In [8], Xu proved the following result.
Theorem 3.17 (see [8]). Assume that the following conditions are satisfied:

(iii) for each ρ > 0 and 1 ≤ i ≤ N, the ρ-distances between the perturbed sets and the original sets vanish appropriately.

Then the sequence {x_n} generated by Algorithm 3.16 converges weakly to a solution of the MSSFP (1.5).
Lopez et al. [18] further obtained a more general result by relaxing condition (ii).
Theorem 3.18 (see [18]). Assume that the following conditions are satisfied:

(ii) t_n ∈ (0, 4/(2 + γL)) for all n (note that t_n may be larger than one, since 0 < γ < 2/L).

Then the sequence {x_n} generated by Algorithm 3.16 converges weakly to a solution of (1.16).
Corollary 3.19. Assume that the following conditions are satisfied:

(ii) t_n ∈ (0, 4/(2 + γL)) for all n (note that t_n may be larger than one, since 0 < γ < 2/L).

Then the sequence {x_n} generated by the iteration (3.21) converges weakly to a solution of the MSSFP (1.5).
Note that all of the above algorithms enjoy only weak convergence. Next, we consider some algorithms with strong convergence.

Algorithm 3.20 (Halpern iterations). The Halpern iterations are given by (3.23).

Theorem 3.21. Assume that the MSSFP (1.5) is consistent, that 0 < γ < 2/L with L given by (1.11), and that {α_n} satisfies the usual conditions (for instance, α_n = 1/n for all n ≥ 1). Then the sequence {x_n} generated by Algorithm 3.20 converges strongly to the solution of the MSSFP (1.5) that is closest to u in the solution set.
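A sketch of a Halpern-type iteration (the displayed formula (3.23) is not reproduced above; the form x_{n+1} = α_n u + (1 − α_n)T x_n with α_n = 1/(n+1) is the standard one and is an assumption here). Taking T = P_C, whose fixed-point set is C, the iterates should approach the point of C closest to the anchor u:

```python
import numpy as np

# Halpern iteration sketch (assumed form, illustrative data):
#   x_{n+1} = a_n u + (1 - a_n) T x_n,  a_n = 1/(n+1).
# With T = P_C we have Fix T = C, so the limit should be P_C(u).

P_C = lambda x: np.clip(x, 0.0, 1.0)     # C = [0,1]^2, a simple fixed-point set
u = np.array([2.0, -1.0])                # anchor point outside C

x = np.zeros(2)
for n in range(50000):
    a = 1.0 / (n + 1)
    x = a * u + (1 - a) * P_C(x)
print(x)   # near P_C(u) = [1, 0], the point of C closest to u
```

Unlike the weakly convergent schemes, the anchor term a_n u pulls the limit to a specific solution, the one nearest u.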
Next, we consider a perturbation algorithm which has strong convergence.

Algorithm 3.22. Given an initial guess x_0 ∈ H_1, let {x_n} be generated by the perturbed iterative algorithm (3.24).
Theorem 3.23 (see [18]). Assume that the following conditions are satisfied. Then the sequence {x_n} generated by Algorithm 3.22 converges in norm to the solution of (1.16) which is nearest to u.

Corollary 3.24. Assume that the following conditions are satisfied:

(ii) lim_{n→∞} t_n = 0 and Σ_{n=0}^∞ t_n = ∞.

Then the sequence {x_n} generated by the corresponding iteration converges in norm to a solution of the MSSFP (1.5).

Regularized Methods
Consider the following regularization of the proximity function g:

g_α(x) = g(x) + (α/2)||x||², (3.26)

where α > 0 is the regularization parameter. We can compute the gradient ∇g_α of g_α as ∇g_α(x) = ∇g(x) + αx. It is easily seen that ∇g_α is L_α-Lipschitz continuous with constant L_α = L + α. Moreover, the regularized problem has a unique solution x_α for each α > 0, and the strong limit lim_{α→0} x_α exists and equals x̄, the minimum-norm solution of (1.16).
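The selection principle behind this regularization can be seen on a toy problem: replacing the proximity function by a simple underdetermined least-squares g (an illustrative substitution, not the paper's g), the unique minimizer x_α of g_α = g + (α/2)||x||² tends to the minimum-norm minimizer of g as α → 0.

```python
import numpy as np

# Illustration of the regularization idea with g(x) = 0.5 ||Ax - b||^2 and
# an underdetermined A, so the minimizers of g form an affine line; the
# Tikhonov solution x_alpha selects the minimum-norm one as alpha -> 0.

A = np.array([[1.0, 1.0]])
b = np.array([2.0])

def x_alpha(alpha):
    # unique minimizer of 0.5||Ax - b||^2 + (alpha/2)||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

x_min_norm = np.linalg.pinv(A) @ b       # minimum-norm solution: [1, 1]
for alpha in (1.0, 0.1, 0.001):
    print(alpha, x_alpha(alpha))
print(x_min_norm)
```

As alpha shrinks, x_alpha moves along the solution line toward the minimum-norm point, mirroring the strong limit statement above.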
Remark 3.30. This new method is a modification of the projection method proposed by Goldstein [45] and by Levitin and Polyak [46], in which the constant step size β of the original method is replaced by an automatically selected one, β_k, at each iteration. This is very important, since it helps us avoid the difficult task of selecting a "suitable" step size.
The following self-adaptive projection method, which uses Armijo-like searches to solve the MSSFP, was introduced by Zhao and Yang [7].

Algorithm 3.31. Given constants β > 0, σ ∈ (0, 1), and γ ∈ (0, 1), let x_0 be arbitrary. For n = 0, 1, ..., calculate the iterate with step size τ_n = βγ^{l_n}, where l_n is the smallest nonnegative integer l such that the Armijo-like search condition holds. Algorithm 3.31 need not estimate the Lipschitz constant of ∇g or compute the largest eigenvalue of the matrix A^T A; the step size τ_n is chosen so that the objective function g(x) has a sufficient decrease. It is in fact a special case of the standard gradient-projection method with Armijo-like search for solving the constrained optimization problem (3.31).
The following convergence result for the gradient-projection method with Armijo-like searches, applied to the generalized convex optimization problem (3.31), ensures the convergence of Algorithm 3.31.

Theorem 3.32. Let g ∈ C^1 be pseudoconvex on Ω, and let {x_n} be an infinite sequence generated by the gradient-projection method with Armijo-like searches. Then the following conclusions hold:

(1) lim_{n→∞} g(x_n) = inf{g(x) | x ∈ Ω};

(2) Ω*, the set of optimal solutions to (3.31), is nonempty if and only if there exists at least one limit point of {x_n}; in this case, {x_n} converges to a solution of (3.31).
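A hedged sketch of such an Armijo-like self-adaptive step (the paper's exact search condition is not reproduced above; the classical sufficient-decrease rule is used instead), applied to a made-up quadratic over a box:

```python
import numpy as np

# Self-adaptive projected gradient sketch (assumed search rule): shrink
# tau = beta * mu^l until the sufficient-decrease condition
#   g(x_new) <= g(x) - sigma * <grad g(x), x - x_new>,
#   x_new = P_Omega(x - tau * grad g(x)),
# holds. No Lipschitz constant or eigenvalue estimate is needed.

A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 3.0])
g = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
dg = lambda x: A.T @ (A @ x - b)
P = lambda x: np.clip(x, 0.0, 4.0)       # Omega = [0, 4]^2

beta, sigma, mu = 1.0, 1e-4, 0.5
x = np.zeros(2)
for _ in range(500):
    grad = dg(x)
    tau = beta                           # step size reset each iteration
    x_new = P(x - tau * grad)
    while g(x_new) > g(x) - sigma * (grad @ (x - x_new)):
        tau *= mu                        # backtrack until sufficient decrease
        x_new = P(x - tau * grad)
    x = x_new
print(x)   # minimizer of g over Omega
```

The backtracking loop plays the role of the index l_n in Algorithm 3.31: the accepted step adapts automatically to the local steepness of g.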
However, in each iteration step of Algorithm 3.31, it can cost a large amount of work to compute the orthogonal projections P_{C_i} and P_{Q_j}. In what follows, we consider the case in which these projections are not easily calculated, and we present a relaxed self-adaptive projection method for solving the MSSFP. Specifically, the MSSFP and the convex sets C_i and Q_j in this part should satisfy the following assumptions.
(1) The solution set of the constrained MSSFP is nonempty.

(2) The sets C_i, i = 1, 2, ..., t, are given by

C_i = {x ∈ R^N | c_i(x) ≤ 0},

where c_i : R^N → R are convex functions. The sets Q_j, j = 1, 2, ..., r, are given by

Q_j = {y ∈ R^M | q_j(y) ≤ 0}, (3.41)

where q_j : R^M → R are convex functions.
(3) For any x ∈ R^N, at least one subgradient ξ ∈ ∂c_i(x) can be calculated, where ∂c_i(x) is a generalized gradient, called the subdifferential of c_i at x, defined as follows:

∂c_i(x) = {ξ ∈ R^N | c_i(u) ≥ c_i(x) + ⟨ξ, u − x⟩ for all u ∈ R^N}. (3.42)
For any y ∈ R^M, at least one subgradient η_j ∈ ∂q_j(y) can be calculated, where ∂q_j(y) is a generalized gradient, called the subdifferential of q_j at y, defined as follows:

∂q_j(y) = {η_j ∈ R^M | q_j(u) ≥ q_j(y) + ⟨η_j, u − y⟩ for all u ∈ R^M}. (3.43)

In the nth iteration, let

C_i^n = {x ∈ R^N | c_i(x_n) + ⟨ξ_i^n, x − x_n⟩ ≤ 0},

where ξ_i^n is an element of ∂c_i(x_n).

Theorem 3.2 (see [8]). Assume that the MSSFP (1.5) is consistent. Let {x_n} be the sequence generated by Algorithm 3.1, where 0 < γ < 2/L with L given by (1.11). Then {x_n} converges weakly to a solution of the MSSFP (1.5).

Algorithm 3.3 (parallel iterations). The parallel iterations are x_{n+1} = Σ_{i=1}^N λ_i T_i x_n.

Theorem 3.4 (see [8]). Assume that the MSSFP (1.5) is consistent. Then the sequence {x_n} generated by Algorithm 3.3 converges weakly to a solution of the MSSFP (1.5).

Algorithm 3.5 (cyclic iterations). The cyclic iterations are x_{n+1} = T_{n+1} x_n,

Theorem 3.25. It is known that ∇g_α is strongly monotone. Consider the regularized minimization problem min_{x∈Ω} g_α(x) (3.29), which has a unique solution denoted by x_α; the strong limit lim_{α→0} x_α exists and is the minimum-norm solution of (1.16).

The EUD is defined with an exponent α, where α < 1 if S_t is a PTV and α > 1 if S_t is an OAR. The function e_t(b) is convex, for b nonnegative, when S_t is an OAR, and −e_t(b) is convex when S_t is a PTV. The constraints in dose space take the form of bounds on e_t(b): upper bounds when S_t is an OAR and lower bounds when S_t is a PTV. Therefore, we require that b = Ax lie within the intersection of these convex sets. In summary, we have formulated the constraints in the radiation intensity space R^J and in the dose space R^I, respectively, and the two spaces are related by the dose influence matrix A; the resulting problem, referred to as the multiple-set split feasibility problem (MSSFP), is formulated as follows.
Solutions of the fixed-point equation (1.6) are exactly the solutions of the SFP. The following proposition is due to Byrne [4] and Xu [2]: given x* ∈ H_1, x* solves the SFP if and only if x* solves the fixed-point equation (1.6).
Theorem 3.12 assumes 0 < γ < 2/L, where L is given by (1.14); the sequence {x_n} generated by Algorithm 3.11 then converges weakly to a solution of (1.16).

Remark 3.13. It is obvious that Theorem 3.12 contains Theorem 3.10 as a special case.
Algorithm 3.33. Given γ > 0, ρ ∈ (0, 1), and μ ∈ (0, 1), let x_0 be arbitrary. For n = 0, 1, 2, ..., compute

x̄_n = P_Ω(x_n − τ_n ∇g_n(x_n)), (3.48)

where τ_n = γρ^{l_n} and l_n is the smallest nonnegative integer l such that

||∇g_n(x̄_n) − ∇g_n(x_n)|| ≤ μ ||x_n − x̄_n||, (3.49)

and then set

x_{n+1} = P_Ω(x_n − τ_n ∇g_n(x̄_n)). (3.50)

Theorem 3.34 (see [7]). The sequence {x_n} generated by Algorithm 3.33 converges to a solution of the MSSFP.