Abstract and Applied Analysis, Volume 2012, Article ID 958040. doi:10.1155/2012/958040

Research Article: Self-Adaptive and Relaxed Self-Adaptive Projection Methods for Solving the Multiple-Set Split Feasibility Problem

Ying Chen (1), Yuansheng Guo (2), Yanrong Yu (2), Rudong Chen (2), and Yongfu Su (1)
(1) Textile Division, Tianjin Polytechnic University, Tianjin 300160, China
(2) Department of Mathematics, Tianjin Polytechnic University, Tianjin 300160, China

Received 10 October 2012; Accepted 17 November 2012; Published 6 December 2012.

Copyright © 2012 Ying Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Given nonempty closed convex subsets $C_i \subseteq \mathbb{R}^n$, $i=1,2,\dots,t$, and nonempty closed convex subsets $Q_j \subseteq \mathbb{R}^m$, $j=1,2,\dots,r$, in the $n$- and $m$-dimensional Euclidean spaces, respectively, the multiple-set split feasibility problem (MSSFP) proposed by Censor is to find a vector $x \in \bigcap_{i=1}^{t} C_i$ such that $Ax \in \bigcap_{j=1}^{r} Q_j$, where $A$ is a given $m \times n$ real matrix. It serves as a model for many inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator's range. The MSSFP has a variety of specific applications in the real world, such as medical care, image reconstruction, and signal processing. In this paper, we first propose a new self-adaptive projection method for the MSSFP that adopts Armijo-like searches; it does not require estimating the Lipschitz constant or calculating the largest eigenvalue of the matrix $A^T A$, and it makes a sufficient decrease of the objective function at each iteration. We then introduce a relaxed self-adaptive projection method that uses projections onto half-spaces instead of projections onto the original convex sets; the former are easy to implement. Global convergence of both methods is proved under a suitable condition.

1. Introduction

The multiple-set split feasibility problem (MSSFP) is to find a point closest to a family of closed convex sets in one space such that its image under a linear transformation is closest to another family of closed convex sets in the image space. It is formulated as follows:
(1.1) Find $x \in C := \bigcap_{i=1}^{t} C_i$ such that $Ax \in Q := \bigcap_{j=1}^{r} Q_j$,
where $C_i \subseteq \mathbb{R}^n$, $i=1,2,\dots,t$, are nonempty closed convex sets in the $n$-dimensional Euclidean space $\mathbb{R}^n$, $Q_j \subseteq \mathbb{R}^m$, $j=1,2,\dots,r$, are nonempty closed convex sets in the $m$-dimensional Euclidean space $\mathbb{R}^m$, and $A$ is an $m \times n$ real matrix. In particular, the problem with only a single set $C$ in $\mathbb{R}^n$ and a single set $Q$ in $\mathbb{R}^m$ was introduced by Censor and Elfving  and is called the split feasibility problem (SFP).

Such MSSFPs (1.1), proposed in , arise in signal processing, image reconstruction, and related areas. Various algorithms have been invented to solve the MSSFP (1.1); see  and the references therein.

In , Censor and Elfving treated the MSSFP (1.1), for both the consistent and the inconsistent cases, by minimizing the proximity function
(1.2) $P(x) = \frac{1}{2}\sum_{i=1}^{t}\alpha_i \|P_{C_i}(x) - x\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j \|P_{Q_j}(Ax) - Ax\|^2.$

For convenience, they consider an additional closed convex set $\Omega \subseteq \mathbb{R}^n$. Their algorithm for the MSSFP (1.1) involves orthogonal projections onto $\Omega \subseteq \mathbb{R}^n$, $C_i \subseteq \mathbb{R}^n$, $i=1,2,\dots,t$, and $Q_j \subseteq \mathbb{R}^m$, $j=1,2,\dots,r$, which are assumed to be easily calculated, and has the following iterative step:
(1.3) $x^{k+1} = P_{\Omega}\Big(x^k + \gamma\Big(\sum_{i=1}^{t}\alpha_i \big(P_{C_i}(x^k) - x^k\big) + \sum_{j=1}^{r}\beta_j A^T \big(P_{Q_j}(Ax^k) - Ax^k\big)\Big)\Big),$
where $\alpha_i > 0$, $i=1,2,\dots,t$, $\beta_j > 0$, $j=1,2,\dots,r$, and $\gamma \in (0, 2/L)$, with $L = \sum_{i=1}^{t}\alpha_i + \lambda\sum_{j=1}^{r}\beta_j$ the Lipschitz constant of $\nabla P(x)$, the gradient of the proximity function $P(x)$ defined by (1.2), and $\lambda$ the spectral radius of the matrix $A^T A$. For any starting vector $x^0 \in \mathbb{R}^n$, the algorithm converges to a solution of the MSSFP (1.1) whenever one exists. In the inconsistent case, it finds a point "closest" to all sets.
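To make the iteration (1.3) concrete, the following sketch runs it on a small hypothetical instance (the boxes, balls, weights, and matrix below are illustrative assumptions, and $\Omega = \mathbb{R}^n$ so that $P_\Omega$ is the identity). Since $\gamma \in (0, 2/L)$, the proximity function decreases at every step.

```python
import numpy as np

# Hypothetical instance: t = 2 boxes C_i in R^2, r = 2 balls Q_j in R^2.
def proj_box(x, lo, hi):            # P_C for a box, computed componentwise
    return np.clip(x, lo, hi)

def proj_ball(y, center, radius):   # P_Q for a Euclidean ball
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

A = np.array([[1.0, 0.5], [0.0, 1.0]])
boxes = [(np.array([0.0, 0.0]), np.array([2.0, 2.0])),
         (np.array([0.5, -1.0]), np.array([3.0, 1.5]))]
balls = [(np.array([1.5, 1.0]), 1.0), (np.array([1.0, 0.5]), 1.5)]
alpha, beta = [0.5, 0.5], [0.5, 0.5]

def P_val(x):                       # proximity function (1.2)
    v = sum(a * np.sum((proj_box(x, lo, hi) - x) ** 2) for a, (lo, hi) in zip(alpha, boxes))
    Ax = A @ x
    v += sum(b * np.sum((proj_ball(Ax, c, r) - Ax) ** 2) for b, (c, r) in zip(beta, balls))
    return 0.5 * v

def grad_P(x):                      # gradient of the proximity function (1.2)
    g = sum(a * (x - proj_box(x, lo, hi)) for a, (lo, hi) in zip(alpha, boxes))
    Ax = A @ x
    g += A.T @ sum(b * (Ax - proj_ball(Ax, c, r)) for b, (c, r) in zip(beta, balls))
    return g

lam = max(np.linalg.eigvalsh(A.T @ A))   # spectral radius of A^T A
L = sum(alpha) + lam * sum(beta)         # Lipschitz constant of grad P
gamma = 1.0 / L                          # any gamma in (0, 2/L) works

x, vals = np.array([5.0, -3.0]), []
for _ in range(2000):
    vals.append(P_val(x))
    x = x - gamma * grad_P(x)       # iteration (1.3) with P_Omega = identity
```

On this instance the intersection of the sets has nonempty interior, so the iterates approach a feasible point and $P(x^k)$ is nonincreasing.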

This algorithm uses a fixed stepsize related to the Lipschitz constant $L$, which may be hard to compute. On the other hand, even when the Lipschitz constant $L$ is known, a method with a fixed stepsize may converge slowly.

In 2005, Qu and Xiu  modified the CQ algorithm  and the relaxed CQ algorithm  by adopting Armijo-like searches to solve the SFP, where the second algorithm uses orthogonal projections onto half-spaces instead of projections onto the original convex sets, just as in Yang's relaxed CQ algorithm . This may save a lot of work in computing projections, since projections onto half-spaces can be calculated directly.

Motivated by Qu and Xiu's idea, Zhao and Yang  introduced a self-adaptive projection method that adopts Armijo-like searches to solve the MSSFP (1.1) and proposed a relaxed self-adaptive projection method that uses orthogonal projections onto half-spaces instead of projections onto the original convex sets, which is more practical. However, like the iteration (1.3), Zhao and Yang's algorithm involves an additional projection $P_\Omega$. Thus, although the MSSFP (1.1) includes the SFP as a special case, Zhao and Yang's algorithm does not reduce to Qu and Xiu's modifications of the CQ algorithm .

In this paper, we first propose a self-adaptive method that adopts Armijo-like searches to solve the MSSFP (1.1) without the additional projection $P_\Omega$; we then introduce a relaxed self-adaptive projection method that involves only orthogonal projections onto half-spaces, so that the algorithm is easily implementable. The methods need not estimate the Lipschitz constant and make a sufficient decrease of the objective function at each iteration; moreover, these projection algorithms reduce to the modifications of the CQ algorithm  when the MSSFP (1.1) reduces to the SFP. We also show convergence of the algorithms under mild conditions.

2. Preliminaries

In this section, we give some definitions and basic results that will be used in this paper.

Definition 2.1.

An operator $F$ from a set $X \subseteq \mathbb{R}^n$ into $\mathbb{R}^n$ is called

monotone on $X$, if (2.1) $\langle F(x) - F(y), x - y\rangle \ge 0$ for all $x, y \in X$;

cocoercive on $X$ with constant $\alpha > 0$, if (2.2) $\langle F(x) - F(y), x - y\rangle \ge \alpha\|F(x) - F(y)\|^2$ for all $x, y \in X$;

Lipschitz continuous on $X$ with constant $\lambda > 0$, if (2.3) $\|F(x) - F(y)\| \le \lambda\|x - y\|$ for all $x, y \in X$.

In particular, if λ=1, F is said to be nonexpansive. It is easily seen from the definitions that cocoercive mappings are monotone.

Definition 2.2.

A function $f(x)$, differentiable on a nonempty convex set $S$, is pseudoconvex if for every $x_1, x_2 \in S$, the condition $f(x_1) < f(x_2)$ implies that (2.4) $\nabla f(x_2)^T (x_1 - x_2) < 0$.

It is known that differentiable convex functions are pseudoconvex (see ).

For a given nonempty closed convex set $\Omega$ in $\mathbb{R}^n$, the orthogonal projection from $\mathbb{R}^n$ onto $\Omega$ is defined by (2.5) $P_\Omega(x) = \operatorname{argmin}\{\|x - y\| : y \in \Omega\}$, $x \in \mathbb{R}^n$.

Lemma 2.3 (see [10]).

Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^n$. Then, for any $x, y \in \mathbb{R}^n$ and $z \in \Omega$:

(1) $\langle P_\Omega(x) - x, z - P_\Omega(x)\rangle \ge 0$;

(2) $\|P_\Omega(x) - P_\Omega(y)\|^2 \le \langle P_\Omega(x) - P_\Omega(y), x - y\rangle$;

(3) $\|P_\Omega(x) - z\|^2 \le \|x - z\|^2 - \|P_\Omega(x) - x\|^2$.

From Lemma 2.3(2), one sees that the orthogonal projection mapping $P_\Omega$ is cocoercive with modulus 1, monotone, and nonexpansive.
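As a quick numerical illustration (not part of the original paper), the three inequalities of Lemma 2.3 can be checked for the projection onto a Euclidean ball, for which $P_\Omega$ has a closed form:

```python
import numpy as np

def proj_ball(x, center, radius):
    # Closed-form orthogonal projection onto {y : ||y - center|| <= radius}
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

rng = np.random.default_rng(0)
c, r = np.zeros(3), 1.0
for _ in range(100):
    x = rng.normal(size=3) * 3.0
    y = rng.normal(size=3) * 3.0
    z = proj_ball(rng.normal(size=3) * 3.0, c, r)    # an arbitrary point z in the ball
    px, py = proj_ball(x, c, r), proj_ball(y, c, r)
    # (1) <P(x) - x, z - P(x)> >= 0
    assert np.dot(px - x, z - px) >= -1e-10
    # (2) ||P(x) - P(y)||^2 <= <P(x) - P(y), x - y>   (firm nonexpansiveness)
    assert np.sum((px - py) ** 2) <= np.dot(px - py, x - y) + 1e-10
    # (3) ||P(x) - z||^2 <= ||x - z||^2 - ||P(x) - x||^2
    assert np.sum((px - z) ** 2) <= np.sum((x - z) ** 2) - np.sum((px - x) ** 2) + 1e-10
```

These inequalities hold exactly for any closed convex set; the ball is used here only because its projection is explicit.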

Let $F$ be a mapping from $\mathbb{R}^n$ into $\mathbb{R}^n$. For any $x \in \mathbb{R}^n$ and $\alpha > 0$, define $x(\alpha) = P_\Omega(x - \alpha F(x))$ and $e(x,\alpha) = x - x(\alpha)$.

Lemma 2.4 (see [6]).

Let $F$ be a mapping from $\mathbb{R}^n$ into $\mathbb{R}^n$. For any $x \in \mathbb{R}^n$ and $\alpha > 0$, one has $\min\{1,\alpha\}\|e(x,1)\| \le \|e(x,\alpha)\| \le \max\{1,\alpha\}\|e(x,1)\|$.

Lemma 2.5 (see [5, 9]).

Suppose $h : \mathbb{R}^n \to \mathbb{R}$ is a convex function. Then it is subdifferentiable everywhere, and its subdifferentials are uniformly bounded on any bounded subset of $\mathbb{R}^n$.

3. Self-Adaptive Projection Iterative Scheme and Convergence Results

It is easily seen that if the solution set of the MSSFP (1.1) is nonempty, then the MSSFP (1.1) is equivalent to the minimization of $q$ over all $x \in C := \bigcap_{i=1}^{t} C_i$, where $q(x)$ is defined by
(3.1) $q(x) = \frac{1}{2}\sum_{j=1}^{r}\beta_j\|P_{Q_j}(Ax) - Ax\|^2,$
with $\beta_j > 0$. Note that the gradient of $q(x)$ is
(3.2) $\nabla q(x) = \sum_{j=1}^{r}\beta_j A^T (I - P_{Q_j})Ax.$
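The function $q$ in (3.1) and its gradient (3.2) are straightforward to code once the projections $P_{Q_j}$ are available. The sketch below uses two hypothetical ball constraints (the matrix, centers, radii, and weights are illustrative assumptions) and verifies (3.2) against a central finite difference:

```python
import numpy as np

# Hypothetical data: r = 2 ball sets Q_j in the image space R^2, A is 2x3.
A = np.array([[2.0, 0.0, 1.0], [0.0, 1.0, -1.0]])
balls = [(np.array([0.0, 0.0]), 1.0), (np.array([1.0, 0.0]), 2.0)]
beta = [1.0, 0.5]

def proj_ball(y, c, r):
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= r else c + r * d / n

def q(x):       # (3.1): q(x) = (1/2) * sum_j beta_j ||P_{Q_j}(Ax) - Ax||^2
    Ax = A @ x
    return 0.5 * sum(b * np.sum((proj_ball(Ax, c, r) - Ax) ** 2)
                     for b, (c, r) in zip(beta, balls))

def grad_q(x):  # (3.2): grad q(x) = sum_j beta_j A^T (I - P_{Q_j}) Ax
    Ax = A @ x
    return A.T @ sum(b * (Ax - proj_ball(Ax, c, r)) for b, (c, r) in zip(beta, balls))

# Central finite-difference check of (3.2) at a generic point
x0, h = np.array([1.3, -0.7, 2.1]), 1e-6
fd = np.array([(q(x0 + h * e) - q(x0 - h * e)) / (2 * h) for e in np.eye(3)])
assert np.allclose(fd, grad_q(x0), atol=1e-5)
```

At any point $x$ with $Ax \in Q$, both $q(x)$ and $\nabla q(x)$ vanish, which is the optimality characterization used throughout this section.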

Consider the following constrained minimization problem:
(3.3) $\min\{q(x) : x \in C\}.$

We say that a point $x^* \in C$ is a stationary point of problem (3.3) if it satisfies the condition
(3.4) $\langle \nabla q(x^*), x - x^*\rangle \ge 0$ for all $x \in C$.

This optimization problem was proposed by Xu  for solving the MSSFP (1.1); the gradient $\nabla q$ defined by (3.2) is $L$-Lipschitzian with $L = \|A\|^2\sum_{j=1}^{r}\beta_j$, and $\nabla q$ is $(1/L)$-ism.

Algorithm 3.1.

Given constants $\beta > 0$ and $\gamma, \sigma \in (0,1)$, let $x^0$ be arbitrary. For $k = 0, 1, \dots$, calculate
(3.5) $x^{k+1} = P_{C_{[k+1]}}\big(x^k - \tau_k \nabla q(x^k)\big),$
where $C_{[n]} = C_{n \bmod t}$ with the mod function taking values in $\{1, 2, \dots, t\}$, $\tau_k = \beta\gamma^{l_k}$, and $l_k$ is the smallest nonnegative integer $l$ such that
(3.6) $q\big(P_{C_{[k+1]}}(x^k - \beta\gamma^{l}\nabla q(x^k))\big) \le q(x^k) - \sigma\big\langle \nabla q(x^k),\, x^k - P_{C_{[k+1]}}(x^k - \beta\gamma^{l}\nabla q(x^k))\big\rangle.$
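A minimal sketch of Algorithm 3.1 in code (the sets, weight, and constants below are illustrative assumptions: two boxes $C_1, C_2$, one ball $Q_1$ contained in both boxes, and $A = I$):

```python
import numpy as np

A = np.eye(2)
boxes = [(np.array([0.0, 0.0]), np.array([2.0, 2.0])),
         (np.array([0.5, 0.5]), np.array([3.0, 3.0]))]
ball_c, ball_r = np.array([1.25, 1.25]), 0.5     # Q_1 lies inside both boxes
beta1 = 1.0

proj_box = lambda x, lo, hi: np.clip(x, lo, hi)

def proj_ball(y):
    d = y - ball_c
    n = np.linalg.norm(d)
    return y if n <= ball_r else ball_c + ball_r * d / n

def q(x):   # objective (3.1) with a single set Q_1
    Ax = A @ x
    return 0.5 * beta1 * np.sum((proj_ball(Ax) - Ax) ** 2)

def gq(x):  # gradient (3.2)
    Ax = A @ x
    return beta1 * (A.T @ (Ax - proj_ball(Ax)))

beta_step, gamma, sigma = 2.0, 0.5, 0.3          # beta > 0; gamma, sigma in (0, 1)
x = np.array([4.0, -2.0])
for k in range(100):
    lo, hi = boxes[(k + 1) % len(boxes)]         # cyclic control C_[k+1]
    tau = beta_step
    while True:                                   # Armijo-like rule (3.6)
        y = proj_box(x - tau * gq(x), lo, hi)
        # tiny floor on tau guards against floating-point stalling
        if q(y) <= q(x) - sigma * np.dot(gq(x), x - y) or tau < 1e-12:
            break
        tau *= gamma
    x = y
```

Here the MSSFP is solvable (the ball is contained in both boxes), so by Theorem 3.3 the iterates converge to a solution with $q(x^k) \to 0$.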

Algorithm 3.1 need not estimate the largest eigenvalue of the matrix $A^T A$, and the stepsize $\tau_k$ is chosen so that the objective function $q(x)$ has a sufficient decrease. It is in fact a special case of the standard gradient projection method with the Armijo-like search rule for solving the constrained optimization problem
(3.7) $\min\{g(x) : x \in \Omega\},$
where $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set and the function $g(x)$ is continuously differentiable on $\Omega$. The following convergence result then ensures the convergence of Algorithm 3.1.

Lemma 3.2 (see [6]).

Let $g \in C^1(\Omega)$ be pseudoconvex and let $\{x^k\}$ be an infinite sequence generated by the gradient projection method with Armijo-like searches. Then the following conclusions hold:

(1) $\lim_{k\to\infty} g(x^k) = \inf\{g(x) : x \in \Omega\}$;

(2) $\Omega^* \ne \emptyset$, where $\Omega^*$ denotes the set of optimal solutions to (3.7), if and only if $\{x^k\}$ has at least one limit point. In this case, $\{x^k\}$ converges to a solution of (3.7).

Since the function $q(x)$ is convex and continuously differentiable on $C$, it is pseudoconvex. Then, by Lemma 3.2, one immediately obtains the following convergence result.

Theorem 3.3.

Let $\{x^k\}$ be a sequence generated by Algorithm 3.1. Then the following conclusions hold:

(1) $\{x^k\}$ is bounded if and only if the solution set of (3.3) is nonempty. In such a case, $\{x^k\}$ must converge to a solution of (3.3).

(2) $\{x^k\}$ is bounded and $\lim_{k\to\infty} q(x^k) = 0$ if and only if the MSSFP (1.1) is solvable. In such a case, $\{x^k\}$ must converge to a solution of the MSSFP (1.1).

However, in Algorithm 3.1 it may cost a large amount of work to compute the orthogonal projections $P_{C_i}$ and $P_{Q_j}$; therefore, as in Censor's method, these projections are assumed to be easily calculated. In some cases, however, it is difficult or too expensive to compute the orthogonal projections exactly, and the efficiency of these methods is then deeply affected. In what follows, we assume that the projections are not easily calculated and present a relaxed self-adaptive projection method. More precisely, the convex sets $C_i$ and $Q_j$ satisfy the following assumptions.

The sets $C_i$, $i=1,2,\dots,t$, are given by
(3.8) $C_i = \{x \in \mathbb{R}^n : c_i(x) \le 0\},$
where $c_i : \mathbb{R}^n \to \mathbb{R}$, $i=1,2,\dots,t$, are convex functions.

The sets $Q_j$, $j=1,2,\dots,r$, are given by
(3.9) $Q_j = \{y \in \mathbb{R}^m : q_j(y) \le 0\},$
where $q_j : \mathbb{R}^m \to \mathbb{R}$, $j=1,2,\dots,r$, are convex functions.

For any $x \in \mathbb{R}^n$, at least one subgradient $\xi_i \in \partial c_i(x)$ can be calculated, where $\partial c_i(x)$ is the subdifferential of $c_i$ at $x$, defined as follows:
(3.10) $\partial c_i(x) = \{\xi_i \in \mathbb{R}^n : c_i(z) \ge c_i(x) + \langle \xi_i, z - x\rangle \ \text{for all } z \in \mathbb{R}^n\}.$

For any $y \in \mathbb{R}^m$, at least one subgradient $\eta_j \in \partial q_j(y)$ can be calculated, where $\partial q_j(y)$ is the subdifferential of $q_j$ at $y$, defined as follows:
(3.11) $\partial q_j(y) = \{\eta_j \in \mathbb{R}^m : q_j(u) \ge q_j(y) + \langle \eta_j, u - y\rangle \ \text{for all } u \in \mathbb{R}^m\}.$

In the $k$th iteration, let
(3.12) $C_i^k = \{x \in \mathbb{R}^n : c_i(x^k) + \langle \xi_i^k, x - x^k\rangle \le 0\},$
where $\xi_i^k$ is an element of $\partial c_i(x^k)$, $i=1,2,\dots,t$.

Consider
(3.13) $Q_j^k = \{y \in \mathbb{R}^m : q_j(Ax^k) + \langle \eta_j^k, y - Ax^k\rangle \le 0\},$
where $\eta_j^k$ is an element of $\partial q_j(Ax^k)$, $j=1,2,\dots,r$.

By the definition of the subgradient, it is clear that $C_i \subseteq C_i^k$ and $Q_j \subseteq Q_j^k$, and the orthogonal projections onto the half-spaces $C_i^k$ and $Q_j^k$ can be calculated directly [4, 6, 8]. Define
(3.14) $q_k(x) = \frac{1}{2}\sum_{j=1}^{r}\beta_j\|P_{Q_j^k}(Ax) - Ax\|^2,$
where $\beta_j > 0$. Then
(3.15) $\nabla q_k(x) = \sum_{j=1}^{r}\beta_j A^T(I - P_{Q_j^k})Ax.$
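The relaxation (3.12)-(3.13) is attractive because a half-space projection has a closed form. A small sketch, with the hypothetical choice $c(x) = \|x\|^2 - 1$ (so $C$ is the unit ball and the subgradient at $x$ is the gradient $2x$):

```python
import numpy as np

def halfspace_at(c_val, xi, xk):
    # Build C^k = {x : c(x^k) + <xi^k, x - x^k> <= 0} in the form {x : <a, x> <= b}
    a = xi
    b = np.dot(xi, xk) - c_val
    return a, b

def proj_halfspace(x, a, b):
    # Closed-form orthogonal projection onto {z : <a, z> <= b}
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - viol * a / np.dot(a, a)

# Hypothetical set C = {x : c(x) <= 0} with c(x) = ||x||^2 - 1 (the unit ball)
c = lambda x: np.dot(x, x) - 1.0
c_grad = lambda x: 2.0 * x            # gradient, hence the unique subgradient

xk = np.array([2.0, 0.0])             # current iterate, outside C
a, b = halfspace_at(c(xk), c_grad(xk), xk)
p = proj_halfspace(xk, a, b)          # here C^k = {x : x_1 <= 1.25}, so p = (1.25, 0)

# Sanity check: C is contained in the relaxed half-space C^k
rng = np.random.default_rng(1)
for _ in range(100):
    u = rng.normal(size=2)
    u *= rng.uniform(0.0, 1.0) / max(np.linalg.norm(u), 1e-12)   # random point of C
    assert np.dot(a, u) <= b + 1e-12
```

Projecting onto $C^k$ costs one inner product and one vector update, regardless of how complicated the original set $C$ is.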

Since the Lipschitz constant and the cocoercivity modulus of $\nabla q$ defined by (3.2) do not depend on the nonempty closed convex sets $C_i$ and $Q_j$, one obtains that $\nabla q_k(x)$ is $L$-Lipschitzian with $L = \|A\|^2\sum_{j=1}^{r}\beta_j$ and $(1/L)$-ism. In particular, $\nabla q_k(x)$ is monotone.

Algorithm 3.4.

Given constants $\gamma > 0$ and $\alpha, \mu \in (0,1)$, let $x^0$ be arbitrary. For $k = 0, 1, 2, \dots$, compute
(3.16) $\bar{x}^k = P_{C_{[k+1]}^k}\big(x^k - \rho_k \nabla q_k(x^k)\big),$
where $\rho_k = \gamma\alpha^{l_k}$ and $l_k$ is the smallest nonnegative integer $l$ such that
(3.17) $\|\nabla q_k(x^k) - \nabla q_k(\bar{x}^k)\| \le \mu\,\frac{\|x^k - \bar{x}^k\|}{\rho_k}.$
Set
(3.18) $x^{k+1} = P_{C_{[k+1]}^k}\big(x^k - \rho_k \nabla q_k(\bar{x}^k)\big).$
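A compact sketch of Algorithm 3.4 on a hypothetical solvable instance (two ball constraints $C_i$ and one ball constraint $Q_1$, all given by convex quadratic functions; the data and constants are illustrative assumptions). The Fejér monotonicity established below in the proof of Theorem 3.6 can be observed directly:

```python
import numpy as np

# Hypothetical instance: C_1, C_2 balls in R^2 via c_i(x) <= 0; Q_1 a ball in R^2.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
c_funcs = [lambda x: np.dot(x, x) - 4.0,
           lambda x: np.dot(x - np.array([1.0, 0.0]), x - np.array([1.0, 0.0])) - 4.0]
c_grads = [lambda x: 2.0 * x,
           lambda x: 2.0 * (x - np.array([1.0, 0.0]))]
q1 = lambda y: np.dot(y, y) - 9.0
q1_grad = lambda y: 2.0 * y

def proj_hs(x, a, b):                        # projection onto {z : <a, z> <= b}
    v = np.dot(a, x) - b
    return x if v <= 0 else x - v * a / np.dot(a, a)

gamma, alpha, mu = 1.0, 0.5, 0.9
xstar = np.zeros(2)                          # a known solution of this instance
x, dists = np.array([5.0, 4.0]), []
for k in range(1000):
    dists.append(np.linalg.norm(x - xstar))
    i = (k + 1) % 2                          # cyclic control C_[k+1]
    ac = c_grads[i](x); bc = np.dot(ac, x) - c_funcs[i](x)   # half-space C_[k+1]^k
    Ax = A @ x
    aq = q1_grad(Ax); bq = np.dot(aq, Ax) - q1(Ax)           # half-space Q_1^k

    def gq(z):                               # grad q_k(z) = A^T (I - P_{Q_1^k}) A z
        Az = A @ z
        return A.T @ (Az - proj_hs(Az, aq, bq))

    rho = gamma
    while True:                              # Armijo-like rule (3.17)
        xb = proj_hs(x - rho * gq(x), ac, bc)
        if rho * np.linalg.norm(gq(x) - gq(xb)) <= mu * np.linalg.norm(x - xb) + 1e-15:
            break
        rho *= alpha
    x = proj_hs(x - rho * gq(xb), ac, bc)    # step (3.18)
dists.append(np.linalg.norm(x - xstar))
```

Since `xstar` solves this instance, the distances $\|x^k - x^*\|$ are nonincreasing and the constraint violations $c_i(x^k)$ and $q_1(Ax^k)$ shrink toward zero.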

Lemma 3.5.

The Armijo-like search rule (3.17) is well defined, and $\mu\alpha/L \le \rho_k \le \gamma$.

Proof.

Obviously, from (3.17), $\rho_k \le \gamma$ for $k = 0, 1, \dots$. We know that $\rho_k/\alpha$ must violate inequality (3.17); that is,
(3.19) $\Big\|\nabla q_k(x^k) - \nabla q_k\Big(P_{C_{[k+1]}^k}\Big(x^k - \frac{\rho_k}{\alpha}\nabla q_k(x^k)\Big)\Big)\Big\| > \mu\,\frac{\|x^k - P_{C_{[k+1]}^k}(x^k - (\rho_k/\alpha)\nabla q_k(x^k))\|}{\rho_k/\alpha}.$
Since $\nabla q_k$ is Lipschitz continuous with constant $L$, combining this with (3.19) gives
(3.20) $\rho_k > \frac{\mu\alpha}{L},$
which completes the proof.

Theorem 3.6.

Let $\{x^k\}$ be a sequence generated by Algorithm 3.4. If the solution set of the MSSFP (1.1) is nonempty, then $\{x^k\}$ converges to a solution of the MSSFP (1.1).

Proof.

Let $x^*$ be a solution of the MSSFP (1.1); then $x^* = P_C(x^*) = P_{C_i}(x^*)$, $i=1,2,\dots,t$, and $Ax^* = P_Q(Ax^*) = P_{Q_j}(Ax^*)$, $j=1,2,\dots,r$.

Since $C_i \subseteq C_i^k$ and $Q_j \subseteq Q_j^k$ for all $i$ and $j$, we have $x^* \in C_i^k$, $Ax^* \in Q_j^k$, and $q_k(x^*) = 0$; thus $\nabla q_k(x^*) = 0$ for all $k = 0, 1, \dots$.

Using the monotonicity of $\nabla q_k$, we have, for all $k = 0, 1, \dots$,
(3.21) $\langle \nabla q_k(\bar{x}^k) - \nabla q_k(x^*), \bar{x}^k - x^*\rangle \ge 0.$
This implies
(3.22) $\langle \nabla q_k(\bar{x}^k), \bar{x}^k - x^*\rangle \ge \langle \nabla q_k(x^*), \bar{x}^k - x^*\rangle = 0.$
Therefore, we have
(3.23) $\langle \nabla q_k(\bar{x}^k), x^{k+1} - x^*\rangle \ge \langle \nabla q_k(\bar{x}^k), x^{k+1} - \bar{x}^k\rangle.$
Thus, using part (3) of Lemma 2.3 and (3.23), we obtain
(3.24)
$\|x^{k+1} - x^*\|^2 = \|P_{C_{[k+1]}^k}(x^k - \rho_k\nabla q_k(\bar{x}^k)) - x^*\|^2$
$\le \|x^k - \rho_k\nabla q_k(\bar{x}^k) - x^*\|^2 - \|x^{k+1} - x^k + \rho_k\nabla q_k(\bar{x}^k)\|^2$
$= \|x^k - x^*\|^2 - 2\rho_k\langle \nabla q_k(\bar{x}^k), x^k - x^*\rangle - \|x^{k+1} - x^k\|^2 - 2\rho_k\langle \nabla q_k(\bar{x}^k), x^{k+1} - x^k\rangle$
$\le \|x^k - x^*\|^2 - 2\rho_k\langle \nabla q_k(\bar{x}^k), x^{k+1} - \bar{x}^k\rangle - \|x^{k+1} - \bar{x}^k + \bar{x}^k - x^k\|^2$
$= \|x^k - x^*\|^2 - 2\rho_k\langle \nabla q_k(\bar{x}^k), x^{k+1} - \bar{x}^k\rangle - \|x^{k+1} - \bar{x}^k\|^2 - \|\bar{x}^k - x^k\|^2 - 2\langle \bar{x}^k - x^k, x^{k+1} - \bar{x}^k\rangle$
$= \|x^k - x^*\|^2 - \|\bar{x}^k - x^k\|^2 - \|x^{k+1} - \bar{x}^k\|^2 + 2\langle x^k - \bar{x}^k - \rho_k\nabla q_k(\bar{x}^k), x^{k+1} - \bar{x}^k\rangle.$
Since $\bar{x}^k = P_{C_{[k+1]}^k}(x^k - \rho_k\nabla q_k(x^k))$ and $x^{k+1} \in C_{[k+1]}^k$, Lemma 2.3(1) gives $\langle \bar{x}^k - x^k + \rho_k\nabla q_k(x^k), x^{k+1} - \bar{x}^k\rangle \ge 0$; together with the search rule (3.17), it follows that
(3.25)
$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \|\bar{x}^k - x^k\|^2 - \|x^{k+1} - \bar{x}^k\|^2 + 2\langle x^k - \bar{x}^k - \rho_k\nabla q_k(\bar{x}^k), x^{k+1} - \bar{x}^k\rangle + 2\langle \bar{x}^k - x^k + \rho_k\nabla q_k(x^k), x^{k+1} - \bar{x}^k\rangle$
$= \|x^k - x^*\|^2 - \|\bar{x}^k - x^k\|^2 - \|x^{k+1} - \bar{x}^k\|^2 + 2\rho_k\langle \nabla q_k(x^k) - \nabla q_k(\bar{x}^k), x^{k+1} - \bar{x}^k\rangle$
$\le \|x^k - x^*\|^2 - \|\bar{x}^k - x^k\|^2 - \|x^{k+1} - \bar{x}^k\|^2 + \rho_k^2\|\nabla q_k(x^k) - \nabla q_k(\bar{x}^k)\|^2 + \|x^{k+1} - \bar{x}^k\|^2$
$\le \|x^k - x^*\|^2 - \|\bar{x}^k - x^k\|^2 + \mu^2\|x^k - \bar{x}^k\|^2$
$= \|x^k - x^*\|^2 - (1 - \mu^2)\|\bar{x}^k - x^k\|^2,$
which implies that the sequence $\{\|x^k - x^*\|\}$ is monotonically decreasing, and hence $\{x^k\}$ is bounded. Consequently, we get from (3.25)
(3.26) $\lim_{k\to\infty}\|x^k - \bar{x}^k\| = 0.$
On the other hand,
(3.27) $\|x^{k+1} - x^k\| \le \|x^{k+1} - \bar{x}^k\| + \|\bar{x}^k - x^k\| \le \|x^k - \rho_k\nabla q_k(\bar{x}^k) - x^k + \rho_k\nabla q_k(x^k)\| + \|\bar{x}^k - x^k\| = \rho_k\|\nabla q_k(\bar{x}^k) - \nabla q_k(x^k)\| + \|\bar{x}^k - x^k\| \le (\mu + 1)\|\bar{x}^k - x^k\|,$

which results in
(3.28) $\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0.$
Let $\tilde{x}$ be an accumulation point of $\{x^k\}$ and let $x^{k_n} \to \tilde{x}$, where $\{x^{k_n}\}_{n=1}^{\infty}$ is a subsequence of $\{x^k\}$. We will show that $\tilde{x}$ is a solution of the MSSFP (1.1); that is, we need to show that $\tilde{x} \in C = \bigcap_{i=1}^{t} C_i$ and $A\tilde{x} \in Q = \bigcap_{j=1}^{r} Q_j$.

Since $x^{k_n+1} \in C_{[k_n+1]}^{k_n}$ for all $n = 1, 2, \dots$, the definition of $C_{[k_n+1]}^{k_n}$ gives
(3.29) $c_{[k_n+1]}(x^{k_n}) + \langle \xi_{[k_n+1]}^{k_n}, x^{k_n+1} - x^{k_n}\rangle \le 0.$

Passing to the limit in this inequality, and taking into account (3.28) and Lemma 2.5, we obtain $c_{[k_n+1]}(\tilde{x}) \le 0$ as $k_n \to \infty$.

Because the index $[k_n+1]$ runs cyclically over $C_1, C_2, \dots, C_t$, we get $\tilde{x} \in C_i$ for every $1 \le i \le t$; thus $\tilde{x} \in C = \bigcap_{i=1}^{t} C_i$. Next, we need to show that $A\tilde{x} \in Q = \bigcap_{j=1}^{r} Q_j$.

Let $e^k(x,\rho) = x - P_{C_{[k+1]}^k}(x - \rho\nabla q_k(x))$, $k = 0, 1, 2, \dots$; then $e^{k_n}(x^{k_n}, \rho_{k_n}) = x^{k_n} - \bar{x}^{k_n}$, and we get from Lemma 2.4, Lemma 3.5, and (3.26) that
(3.30) $\lim_{k_n\to\infty}\|e^{k_n}(x^{k_n},1)\| \le \lim_{k_n\to\infty}\frac{\|x^{k_n} - \bar{x}^{k_n}\|}{\min\{1,\rho_{k_n}\}} \le \lim_{k_n\to\infty}\frac{\|x^{k_n} - \bar{x}^{k_n}\|}{\min\{1,\hat{\rho}\}} = 0,$
where $\hat{\rho} = \mu\alpha/L$. Since $x^*$ is a solution of the MSSFP (1.1), we have $x^* \in C_i$, $i = 1, 2, \dots, t$. By Lemma 2.3(1), we have
(3.31) $0 \le \big\langle x^{k_n} - \nabla q_{k_n}(x^{k_n}) - P_{C_{[k_n+1]}^{k_n}}\big(x^{k_n} - \nabla q_{k_n}(x^{k_n})\big),\, P_{C_{[k_n+1]}^{k_n}}\big(x^{k_n} - \nabla q_{k_n}(x^{k_n})\big) - x^*\big\rangle = \big\langle e^{k_n}(x^{k_n},1) - \nabla q_{k_n}(x^{k_n}),\, x^{k_n} - x^* - e^{k_n}(x^{k_n},1)\big\rangle.$

From Lemma 2.3, we see that orthogonal projection mappings are cocoercive with modulus 1; taking into account the fact that a mapping $F$ is cocoercive with modulus 1 if and only if $I - F$ is cocoercive with modulus 1, we obtain from (3.31) and $\nabla q_{k_n}(x^*) = 0$ that
(3.32)
$\langle x^{k_n} - x^*, e^{k_n}(x^{k_n},1)\rangle \ge \|e^{k_n}(x^{k_n},1)\|^2 - \langle \nabla q_{k_n}(x^{k_n}), e^{k_n}(x^{k_n},1)\rangle + \langle \nabla q_{k_n}(x^{k_n}) - \nabla q_{k_n}(x^*), x^{k_n} - x^*\rangle$
$= \|e^{k_n}(x^{k_n},1)\|^2 - \langle \nabla q_{k_n}(x^{k_n}), e^{k_n}(x^{k_n},1)\rangle + \sum_{j=1}^{r}\beta_j\big\langle A^T(I - P_{Q_j^{k_n}})(Ax^{k_n}) - A^T(I - P_{Q_j^{k_n}})(Ax^*),\, x^{k_n} - x^*\big\rangle$
$= \|e^{k_n}(x^{k_n},1)\|^2 - \langle \nabla q_{k_n}(x^{k_n}), e^{k_n}(x^{k_n},1)\rangle + \sum_{j=1}^{r}\beta_j\big\langle (I - P_{Q_j^{k_n}})(Ax^{k_n}) - (I - P_{Q_j^{k_n}})(Ax^*),\, Ax^{k_n} - Ax^*\big\rangle$
$\ge \|e^{k_n}(x^{k_n},1)\|^2 - \langle \nabla q_{k_n}(x^{k_n}), e^{k_n}(x^{k_n},1)\rangle + \sum_{j=1}^{r}\beta_j\|Ax^{k_n} - P_{Q_j^{k_n}}(Ax^{k_n})\|^2.$

Since $\|\nabla q_{k_n}(x^{k_n})\| = \|\nabla q_{k_n}(x^{k_n}) - \nabla q_{k_n}(x^*)\| \le L\|x^{k_n} - x^*\|$ and $\{x^{k_n}\}$ is bounded, the sequence $\{\nabla q_{k_n}(x^{k_n})\}$ is also bounded. Therefore, from (3.30) and (3.32), we get, for all $j = 1, 2, \dots, r$,
(3.33) $\lim_{k_n\to\infty}\|Ax^{k_n} - P_{Q_j^{k_n}}(Ax^{k_n})\| = 0.$
Moreover, since $P_{Q_j^{k_n}}(Ax^{k_n}) \in Q_j^{k_n}$, we have
(3.34) $q_j(Ax^{k_n}) + \langle \eta_j^{k_n}, P_{Q_j^{k_n}}(Ax^{k_n}) - Ax^{k_n}\rangle \le 0, \quad j = 1, 2, \dots, r.$
Again passing to the limit and combining these inequalities with Lemma 2.5, we conclude from (3.33) that
(3.35) $q_j(A\tilde{x}) \le 0, \quad j = 1, 2, \dots, r.$
Thus $\tilde{x} \in C = \bigcap_{i=1}^{t} C_i$ and $A\tilde{x} \in Q = \bigcap_{j=1}^{r} Q_j$.

Therefore, $\tilde{x}$ is a solution of the MSSFP (1.1). We may thus use $\tilde{x}$ in place of $x^*$ in (3.25) and conclude that the sequence $\{\|x^k - \tilde{x}\|\}$ is convergent. Furthermore, since the subsequence $\{x^{k_n}\}$ of $\{x^k\}$ converges to $\tilde{x}$, we obtain $x^k \to \tilde{x}$ as $k \to \infty$.

This completes the proof.

4. Concluding Remarks

This paper introduced two self-adaptive projection methods with Armijo-like searches for solving the multiple-set split feasibility problem MSSFP (1.1). They need not compute the additional projection $P_\Omega$ and avoid the difficult task of estimating the Lipschitz constant, while making a sufficient decrease of the objective function at each iteration; thus the efficiency is greatly enhanced. Moreover, the second algorithm uses the relaxation technique to replace orthogonal projections onto the original convex sets with projections onto half-spaces, which may save a large amount of computational work and makes the method more practical. The corresponding convergence theory has been established.

Acknowledgment

This work was supported in part by NSFC Grant no. 11071279.

References

[1] Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2, pp. 221-239, 1994.
[2] Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, "The multiple-sets split feasibility problem and its applications for inverse problems," Inverse Problems, vol. 21, no. 6, pp. 2071-2084, 2005.
[3] H. K. Xu, "A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem," Inverse Problems, vol. 22, no. 6, pp. 2021-2034, 2006.
[4] J. L. Zhao and Q. Z. Yang, "Self-adaptive projection methods for the multiple-sets split feasibility problem," Inverse Problems, vol. 27, no. 3, article 035009, 2011.
[5] G. Lopez, V. Martin, and H. K. Xu, "Iterative algorithms for the multiple-sets split feasibility problem," in Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, Y. Censor, M. Jiang, and G. Wang, Eds., pp. 243-279, Medical Physics Publishing, Madison, Wis, USA, 2010.
[6] B. Qu and N. H. Xiu, "A note on the CQ algorithm for the split feasibility problem," Inverse Problems, vol. 21, no. 5, pp. 1655-1665, 2005.
[7] C. Byrne, "Iterative oblique projection onto convex sets and the split feasibility problem," Inverse Problems, vol. 18, no. 2, pp. 441-453, 2002.
[8] Q. Yang, "The relaxed CQ algorithm solving the split feasibility problem," Inverse Problems, vol. 20, no. 4, pp. 1261-1266, 2004.
[9] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, USA, 1970.
[10] E. H. Zarantonello, Projections on Convex Sets in Hilbert Space and Spectral Theory, University of Wisconsin Press, Madison, Wis, USA, 1971.