We study the proximal split feasibility problem. We present a regularized method for solving it and establish a strong convergence theorem.

1. Introduction

Throughout, we assume that H1 and H2 are two real Hilbert spaces, f:H1→R∪{+∞} and g:H2→R∪{+∞} are two proper, lower semicontinuous convex functions, and A:H1→H2 is a bounded linear operator.

In the present paper, we are devoted to solving the following minimization problem:
(1)minx†∈H1{f(x†)+gλ(Ax†)},
where gλ stands for the Moreau-Yosida approximation of the function g of parameter λ; that is,
(2)gλ(u)=minv∈H2{g(v)+(1/(2λ))∥u-v∥2}.
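For intuition, whenever proxλg is available in closed form, gλ(u) can be evaluated by plugging the prox point into (2). The following Python sketch is a purely illustrative example of ours (not from the paper): it takes g(v)=|v|, whose proximal mapping is soft-thresholding, so that gλ is the Huber function, and checks the value against brute-force minimization of (2) on a grid.

```python
import numpy as np

def prox_abs(u, lam):
    # Proximal mapping of g(v) = |v| with parameter lam: soft-thresholding.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def moreau_envelope_abs(u, lam):
    # g_lam(u) = min_v { |v| + (1/(2*lam)) * (u - v)^2 },
    # attained at v = prox_abs(u, lam); the result is the Huber function.
    v = prox_abs(u, lam)
    return np.abs(v) + (u - v) ** 2 / (2.0 * lam)

# Sanity check against brute-force minimization over a fine grid.
u, lam = 1.7, 0.5
grid = np.linspace(-5.0, 5.0, 200001)
brute = np.min(np.abs(grid) + (u - grid) ** 2 / (2.0 * lam))
assert abs(moreau_envelope_abs(u, lam) - brute) < 1e-6
```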

Problem (1) includes the split feasibility problem as a special case. In fact, choose f and g to be the indicator functions of two nonempty closed convex sets C⊂H1 and Q⊂H2; that is,
(3)f(x†)=δC(x†)={0, if x†∈C, +∞, otherwise, g(x†)=δQ(x†)={0, if x†∈Q, +∞, otherwise.
Then, problem (1) reduces to
(4)minx†∈H1{δC(x†)+(δQ)λ(Ax†)},
which equals
(5)minx†∈C{(1/(2λ))∥(I-projQ)(Ax†)∥2}.
Thus, solving (5) amounts to solving the following split feasibility problem of finding x‡ such that
(6)x‡∈C,Ax‡∈Q,
provided C∩A-1(Q)≠∅.

The split feasibility problem in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrievals and in medical image reconstruction. Recently, the split feasibility problem (6) has been studied extensively by many authors; see, for instance, [2–8].

In order to solve (6), one of the key ideas is to use a fixed point technique: x† solves (6) if and only if
(7)x†=projC(I-γA*(I-projQ)A)x†.
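To illustrate the fixed point characterization (7), the following Python sketch runs the corresponding iteration on a small instance of our own choosing (C the nonnegative orthant, Q a box, and a fixed 2×2 matrix A), with stepsize γ=1/∥A∥2:

```python
import numpy as np

# Illustrative instance (our choice): C = nonnegative orthant, Q = box [-1,1]^2.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: np.clip(y, -1.0, 1.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2          # γ = 1/||A||² ∈ (0, 2/||A||²)
x = np.array([3.0, -1.0])
for _ in range(5000):
    # Fixed point iteration x ← proj_C(x − γ A*(I − proj_Q)Ax), cf. (7).
    x = proj_C(x - gamma * A.T @ (A @ x - proj_Q(A @ x)))

# The limit approximately satisfies x ∈ C and Ax ∈ Q.
assert np.all(x >= 0.0)
assert np.linalg.norm(A @ x - proj_Q(A @ x)) < 1e-6
```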
Next, we will use this idea to solve (1). First, by the differentiability of the Yosida approximation gλ, we have
(8)∂(f(x†)+gλ(Ax†))=∂f(x†)+A*∇gλ(Ax†)=∂f(x†)+(1/λ)A*(I-proxλg)(Ax†),
where ∂f(x†) denotes the subdifferential of f at x† and proxλg(x†) is the proximal mapping of g. That is,
(9)∂f(x†)={x*∈H1:f(x‡)≥f(x†)+〈x*,x‡-x†〉,∀x‡∈H1}, proxλg(x†)=argminx‡∈H2{g(x‡)+(1/(2λ))∥x‡-x†∥2}.
Note that the optimality condition of (1) is as follows:
(10)0∈∂f(x†)+(1/λ)A*(I-proxλg)(Ax†),
which can be rewritten as
(11)0∈μλ∂f(x†)+μA*(I-proxλg)(Ax†),
which is equivalent to the fixed point equation
(12)x†=proxμλf(x†-μA*(I-proxλg)(Ax†)).
If argminf∩A-1(argming)≠∅, then (1) reduces to the following proximal split feasibility problem of finding x† such that
(13)x†∈argminf,Ax†∈argming,
where
(14)argminf={x*∈H1:f(x*)≤f(x†),∀x†∈H1},argming={x†∈H2:g(x†)≤g(x),∀x∈H2}.
In the sequel, we will use Γ to denote the solution set of (13).

Recently, in order to solve (13), Moudafi and Thakur [9] presented the following split proximal algorithm with a way of selecting the stepsizes such that its implementation does not need any prior information about the operator norm.

Split Proximal Algorithm

Step 1 (initialization).

(15)x0∈H1.

Step 2.

Assume that xn has been constructed and θ(xn)≠0. Then compute xn+1 by
(16)xn+1=proxμnλf[xn-μnA*(I-proxλg)Axn],∀n≥0,
where the stepsize μn=ρn((h(xn)+l(xn))/θ2(xn)) in which 0<ρn<4, h(xn)=(1/2)∥(I-proxλg)Axn∥2, l(xn)=(1/2)∥(I-proxμnλf)xn∥2, and θ(xn)=√(∥∇h(xn)∥2+∥∇l(xn)∥2).

If θ(xn)=0, then xn+1=xn is a solution of (13) and the iterative process stops; otherwise, we set n:=n+1 and go to (16).
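As a concrete illustration, the following Python sketch runs iteration (16) with the self-adaptive stepsize on a one-dimensional toy instance of our own choosing: f and g are quadratics with closed-form proximal mappings and argminf∩A-1(argming)={a}, so the solution is known in advance. To avoid the circular dependence of l(xn) on μn, the sketch evaluates l with proxλf; this is an implementation choice of ours.

```python
# Toy instance (ours): f(x) = (x - a)^2 / 2 with argmin f = {a},
# g(y) = (y - b)^2 / 2 with argmin g = {b}, and b = A*a, so Γ = {a}.
A, a = 2.0, 1.0
b = A * a
lam, rho = 1.0, 1.0                              # ρ_n = 1 ∈ (0, 4)

prox_f = lambda x, t: (x + t * a) / (1.0 + t)    # prox of t·f (quadratic case)
prox_g = lambda y, t: (y + t * b) / (1.0 + t)    # prox of t·g

x = 10.0
for _ in range(100):
    grad_h = A * (A * x - prox_g(A * x, lam))    # ∇h(x) = A*(I - prox_{λg})(Ax)
    grad_l = x - prox_f(x, lam)                  # ∇l(x) = (I - prox_{λf})(x)
    theta2 = grad_h**2 + grad_l**2               # θ²(x)
    if theta2 == 0.0:
        break                                    # x already solves (13)
    h = 0.5 * (A * x - prox_g(A * x, lam))**2
    l = 0.5 * (x - prox_f(x, lam))**2
    mu = rho * (h + l) / theta2                  # self-adaptive stepsize
    x = prox_f(x - mu * grad_h, mu * lam)        # iteration (16)

assert abs(x - a) < 1e-8
```

For this instance 4h(xn)/(h(xn)+l(xn))=3.2, so ρn=1 satisfies condition (17) below, and the iterates contract linearly toward a.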

They then established the following weak convergence result for the above split proximal algorithm.

Theorem 1.

Suppose that Γ≠∅. Assume that the parameters satisfy the condition:
(17)ϵ≤ρn≤(4h(xn)/(h(xn)+l(xn)))-ϵ for some ϵ>0 small enough.
Then the sequence xn weakly converges to a solution of (13).

Note that the proximal mapping of g is firmly nonexpansive, namely,
(18)〈proxλgx-proxλgy,x-y〉≥∥proxλgx-proxλgy∥2,∀x,y∈H2,
and the same holds for its complement I-proxλg. Thus, A*(I-proxλg)A is cocoercive with coefficient 1/∥A∥2 (recall that a mapping B:H1→H1 is said to be cocoercive if 〈Bx-By,x-y〉≥α∥Bx-By∥2 for all x,y∈H1 and some α>0). If μ∈(0,1/∥A∥2), then I-μA*(I-proxλg)A is nonexpansive. However, algorithm (16) provides only weak convergence; hence, we regularize (16) so as to obtain strong convergence. This is the main purpose of this paper. In the next section, we collect some useful lemmas, and in the last section we present our algorithm and prove its strong convergence.
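The nonexpansivity claim above is easy to check numerically. The following Python sketch (with illustrative choices of ours: g=∥·∥1, whose proximal mapping is componentwise soft-thresholding, and a random A) samples pairs of points and verifies ∥Tx-Ty∥≤∥x-y∥ for T=I-μA*(I-proxλg)A with μ<1/∥A∥2.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
lam = 0.5
mu = 0.9 / np.linalg.norm(A, 2) ** 2             # μ ∈ (0, 1/||A||²)

# g = ||·||₁; its proximal mapping is componentwise soft-thresholding.
prox_g = lambda y: np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
T = lambda x: x - mu * A.T @ (A @ x - prox_g(A @ x))

# Empirical check of nonexpansivity on random pairs of points.
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
```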

2. Lemmas

Lemma 2 (see [10]).

Let {an}n∈N be a sequence of nonnegative real numbers satisfying the following relation:
(19)an+1≤(1-αn)an+αnσn+δn,n≥0,
where

{αn}n∈N⊂[0,1] and ∑n=1∞αn=∞;

limsupn→∞σn≤0;

∑n=1∞δn<∞.

Then, limn→∞an=0.

Lemma 3 (see [11]).

Let {γn}n∈N be a sequence of real numbers such that there exists a subsequence {γni}i∈N of {γn}n∈N such that γni<γni+1 for all i∈N. Then, there exists a nondecreasing sequence {mk}k∈N of N such that limk→∞mk=∞ and the following properties are satisfied by all (sufficiently large) numbers k∈N:
(20)γmk≤γmk+1,γk≤γmk+1.
In fact, mk is the largest number n in the set {1,…,k} such that the condition γn<γn+1 holds.

3. Main results

Let H1 and H2 be two real Hilbert spaces. Let f:H1→R∪{+∞} and g:H2→R∪{+∞} be two proper, lower semicontinuous convex functions and A:H1→H2 a bounded linear operator.

We first introduce our algorithm.

Algorithm 4

Step 1 (initialization).

(21)x0∈H1.

Step 2.

Assume that xn has been constructed. Set h(xn)=(1/2)∥(I-proxλg)Axn∥2, l(xn)=(1/2)∥(I-proxμnλf)xn∥2, and θ(xn)=√(∥∇h(xn)∥2+∥∇l(xn)∥2) for all n∈N.

If θ(xn)≠0, then compute xn+1 by
(22)xn+1=proxμnλf[αnu+(1-αn)xn-μnA*(I-proxλg)Axn],∀n≥0,
where u∈H1 is a fixed anchor point, {αn}n∈N⊂[0,1] is a real sequence, and μn is the stepsize μn=ρn((h(xn)+l(xn))/θ2(xn)) with 0<ρn<4.

If θ(xn)=0, then xn+1=xn is a solution of (13) and the iterative process stops; otherwise, we set n:=n+1 and go to (22).
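To illustrate Algorithm 4, the following Python sketch runs iteration (22) with αn=1/(n+2) on a one-dimensional toy instance of our own choosing (quadratic f and g with b=Aa, so that Γ={a} and projΓ(u)=a for every anchor u; as before, l is evaluated with proxλf, an implementation choice of ours):

```python
# Toy instance (ours): f(x) = (x - a)^2/2, g(y) = (y - b)^2/2, b = A*a,
# so Γ = {a} and proj_Γ(u) = a for every anchor u.
A, a = 2.0, 1.0
b = A * a
lam, rho, u = 1.0, 1.0, 5.0                      # u: fixed anchor point

prox_f = lambda x, t: (x + t * a) / (1.0 + t)    # prox of t·f
prox_g = lambda y, t: (y + t * b) / (1.0 + t)    # prox of t·g

x = 10.0
for n in range(100000):
    alpha = 1.0 / (n + 2)                        # α_n → 0 and Σ α_n = ∞
    grad_h = A * (A * x - prox_g(A * x, lam))    # ∇h(x)
    grad_l = x - prox_f(x, lam)                  # ∇l(x)
    theta2 = grad_h**2 + grad_l**2               # θ²(x)
    if theta2 == 0.0:
        break                                    # x already solves (13)
    h = 0.5 * (A * x - prox_g(A * x, lam))**2
    l = 0.5 * (x - prox_f(x, lam))**2
    mu = rho * (h + l) / theta2                  # self-adaptive stepsize
    x = prox_f(alpha * u + (1 - alpha) * x - mu * grad_h, mu * lam)  # (22)

assert abs(x - a) < 1e-3                         # strong limit proj_Γ(u) = a
```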

Theorem 5.

Suppose that Γ≠∅. Assume that the parameters {αn} and {ρn} satisfy the conditions:

limn→∞αn=0;

∑n=0∞αn=∞;

ϵ≤ρn≤(4h(xn)/(h(xn)+l(xn)))-ϵ for some ϵ>0 small enough.

Then the sequence xn converges strongly to projΓ(u).

Proof.

Let x*∈Γ. Since the minimizers of a function are exactly the fixed points of its proximal mapping, we have x*=proxμnλfx* and Ax*=proxλgAx*. By (22) and the nonexpansivity of proxμnλf, we derive
(23)∥xn+1-x*∥2=∥proxμnλf[αnu+(1-αn)xn-μnA*(I-proxλg)Axn]-proxμnλfx*∥2≤∥αnu+(1-αn)xn-μnA*(I-proxλg)Axn-x*∥2=∥αn(u-x*)+(1-αn)[xn-(μn/(1-αn))A*(I-proxλg)Axn-x*]∥2≤αn∥u-x*∥2+(1-αn)∥xn-(μn/(1-αn))A*(I-proxλg)Axn-x*∥2.
Since proxλg is firmly nonexpansive, we deduce that I-proxλg is also firmly nonexpansive. Hence, we have
(24)〈A*(I-proxλg)Axn,xn-x*〉=〈(I-proxλg)Axn,Axn-Ax*〉=〈(I-proxλg)Axn-(I-proxλg)Ax*,Axn-Ax*〉≥∥(I-proxλg)Axn∥2=2h(xn).
Note that ∇h(xn)=A*(I-proxλg)Axn and ∇l(xn)=(I-proxμnλf)xn. From (24), we obtain
(25)∥xn-(μn/(1-αn))A*(I-proxλg)Axn-x*∥2=∥xn-x*∥2+(μn2/(1-αn)2)∥A*(I-proxλg)Axn∥2-(2μn/(1-αn))〈A*(I-proxλg)Axn,xn-x*〉=∥xn-x*∥2+(μn2/(1-αn)2)∥∇h(xn)∥2-(2μn/(1-αn))〈∇h(xn),xn-x*〉≤∥xn-x*∥2+(μn2/(1-αn)2)∥∇h(xn)∥2-(4μn/(1-αn))h(xn)=∥xn-x*∥2+(ρn2(h(xn)+l(xn))2/((1-αn)2θ4(xn)))∥∇h(xn)∥2-(4ρn(h(xn)+l(xn))/((1-αn)θ2(xn)))h(xn)≤∥xn-x*∥2+ρn2(h(xn)+l(xn))2/((1-αn)2θ2(xn))-(4ρn(h(xn)+l(xn))2/((1-αn)θ2(xn)))(h(xn)/(h(xn)+l(xn)))=∥xn-x*∥2-ρn((4h(xn)/(h(xn)+l(xn)))-(ρn/(1-αn)))((h(xn)+l(xn))2/((1-αn)θ2(xn))).
By condition (C3), without loss of generality, we can assume that (4h(xn)/(h(xn)+l(xn)))-(ρn/(1-αn))≥0 for all n≥0. Thus, from (23) and (25), we obtain
(26)∥xn+1-x*∥2≤αn∥u-x*∥2+(1-αn)[∥xn-x*∥2-ρn((4h(xn)/(h(xn)+l(xn)))-(ρn/(1-αn)))((h(xn)+l(xn))2/((1-αn)θ2(xn)))]=αn∥u-x*∥2+(1-αn)∥xn-x*∥2-ρn((4h(xn)/(h(xn)+l(xn)))-(ρn/(1-αn)))((h(xn)+l(xn))2/θ2(xn))≤αn∥u-x*∥2+(1-αn)∥xn-x*∥2≤max{∥u-x*∥2,∥xn-x*∥2}.
Hence, {xn} is bounded.

Let z=projΓ(u). From (26), we deduce
(27)0≤ρn((4h(xn)/(h(xn)+l(xn)))-(ρn/(1-αn)))((h(xn)+l(xn))2/θ2(xn))≤αn∥u-z∥2+(1-αn)∥xn-z∥2-∥xn+1-z∥2.
We consider the following two cases.

Case 1. One has ∥xn+1-z∥≤∥xn-z∥ for every n≥n0 large enough.

In this case, limn→∞∥xn-z∥ exists and is finite; hence
(28)limn→∞(∥xn+1-z∥-∥xn-z∥)=0.
This together with (27) implies that
(29)ρn((4h(xn)/(h(xn)+l(xn)))-(ρn/(1-αn)))((h(xn)+l(xn))2/θ2(xn))⟶0.
Since liminfn→∞ρn((4h(xn)/(h(xn)+l(xn)))-(ρn/(1-αn)))>0 (by condition (C3)), we get
(30)(h(xn)+l(xn))2/θ2(xn)⟶0.
Noting that θ2(xn)=∥∇h(xn)∥2+∥∇l(xn)∥2 is bounded, we deduce immediately that
(31)limn→∞(h(xn)+l(xn))=0.
Therefore,
(32)limn→∞h(xn)=limn→∞l(xn)=0.
Next, we prove
(33)limsupn→∞〈u-z,xn-z〉≤0.
Since {xn} is bounded, there exists a subsequence {xni} satisfying xni⇀z† and
(34)limsupn→∞〈u-z,xn-z〉=limi→∞〈u-z,xni-z〉.
By the lower semicontinuity of h, we get
(35)0≤h(z†)≤liminfi→∞h(xni)=limn→∞h(xn)=0.
So,
(36)h(z†)=(1/2)∥(I-proxλg)Az†∥2=0.
That is, Az† is a fixed point of the proximal mapping of g or equivalently 0∈∂g(Az†). In other words, Az† is a minimizer of g.

Similarly, from the lower semicontinuity of l, we get
(37)0≤l(z†)≤liminfi→∞l(xni)=limn→∞l(xn)=0.
Therefore,
(38)l(z†)=(1/2)∥(I-proxμnλf)z†∥2=0.
That is, z† is a fixed point of the proximal mapping of f or equivalently 0∈∂f(z†). In other words, z† is a minimizer of f. Hence, z†∈Γ. Therefore,
(39)limsupn→∞〈u-z,xn-z〉=limi→∞〈u-z,xni-z〉=〈u-z,z†-z〉≤0.
From (22), we have
(40)∥xn+1-z∥2≤∥αn(u-z)+(1-αn)[xn-(μn/(1-αn))A*(I-proxλg)Axn-z]∥2=(1-αn)2∥xn-(μn/(1-αn))A*(I-proxλg)Axn-z∥2+αn2∥u-z∥2+2αn(1-αn)〈xn-(μn/(1-αn))A*(I-proxλg)Axn-z,u-z〉≤(1-αn)2∥xn-z∥2+αn2∥u-z∥2+2αn(1-αn)〈xn-z,u-z〉-2αnμn〈∇h(xn),u-z〉≤(1-αn)∥xn-z∥2+αn(αn∥u-z∥2+2(1-αn)〈xn-z,u-z〉+2μn∥∇h(xn)∥∥u-z∥).
Since ∇h is Lipschitz continuous with Lipschitz constant ∥A∥2 and ∇l is nonexpansive, ∇h(xn), ∇l(xn), and θ2(xn)=∥∇h(xn)∥2+∥∇l(xn)∥2 are bounded. Note that μn∥∇h(xn)∥=ρn((h(xn)+l(xn))/θ2(xn))∥∇h(xn)∥. Thus, μn∥∇h(xn)∥→0 by (32). From Lemma 2, (39), and (40), we deduce that xn→z.

Case 2. There exists a subsequence {∥xnj-z∥} of {∥xn-z∥} such that
(41)∥xnj-z∥<∥xnj+1-z∥,
for all j≥1. By Lemma 3, there exists a strictly increasing sequence {mk} of positive integers such that limk→∞mk=+∞ and the following properties are satisfied by all numbers k∈N:
(42)∥xmk-z∥≤∥xmk+1-z∥,∥xk-z∥≤∥xmk+1-z∥.
Consequently,
(43)0≤limk→∞(∥xmk+1-z∥-∥xmk-z∥)≤limsupn→∞(∥xn+1-z∥-∥xn-z∥)≤limsupn→∞(αn∥u-z∥+(1-αn)∥xn-z∥-∥xn-z∥)=limsupn→∞αn(∥u-z∥-∥xn-z∥)=0.
Hence,
(44)limk→∞(∥xmk+1-z∥-∥xmk-z∥)=0.
By a similar argument as that of Case 1, we can prove that
(45)limsupk→∞〈u-z,xmk-z〉≤0,∥xmk+1-z∥2≤(1-αmk)∥xmk-z∥2+αmkσmk,
where σmk=αmk∥u-z∥2+2(1-αmk)〈xmk-z,u-z〉+2μmk∥∇h(xmk)∥∥u-z∥.

In particular, we get
(46)αmk∥xmk-z∥2≤∥xmk-z∥2-∥xmk+1-z∥2+αmkσmk≤αmkσmk.
Then,
(47)limsupk→∞∥xmk-z∥2≤limsupk→∞σmk≤0.
Thus, from (42) and (44), we conclude that
(48)limsupk→∞∥xk-z∥≤limsupk→∞∥xmk+1-z∥=0.
Therefore, xn→z. This completes the proof.

Remark 6.

Note that problem (13) was considered, for example, in [12, 13]; however, the iterative methods proposed to solve it need to know a priori the norm of the bounded linear operator A.

Remark 7.

We would also like to emphasize that, by taking f=δC and g=δQ, the indicator functions of two nonempty closed convex sets C⊂H1 and Q⊂H2, respectively, our algorithm (22) reduces to
(49)xn+1=projC[αnu+(1-αn)xn-μnA*(I-projQ)Axn],∀n≥0.
We observe that (49) is simpler than the corresponding algorithm in [14].
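For completeness, here is a Python sketch of iteration (49) on an illustrative instance of our own choosing (C the nonnegative orthant, Q a box, a fixed 2×2 matrix A, and anchor u=(2,2)); for this data a direct KKT computation gives projΓ(u)=(0.5,0.5), which the iterates should approximate.

```python
import numpy as np

# Illustrative instance (ours): C = nonnegative orthant, Q = box [-1,1]^2.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: np.clip(y, -1.0, 1.0)
u = np.array([2.0, 2.0])                         # fixed anchor point

x = np.array([3.0, -1.0])
for n in range(50000):
    alpha = 1.0 / (n + 2)                        # α_n → 0 and Σ α_n = ∞
    r = A @ x - proj_Q(A @ x)                    # (I - proj_Q)(Ax)
    grad_h = A.T @ r                             # ∇h(x)
    grad_l = x - proj_C(x)                       # ∇l(x); zero once x ∈ C
    theta2 = grad_h @ grad_h + grad_l @ grad_l   # θ²(x)
    if theta2 == 0.0:
        break                                    # x already solves the SFP
    h = 0.5 * (r @ r)
    l = 0.5 * (grad_l @ grad_l)
    mu = 1.0 * (h + l) / theta2                  # self-adaptive stepsize, ρ_n = 1
    x = proj_C(alpha * u + (1 - alpha) * x - mu * grad_h)  # iteration (49)

# Strong limit: the point of Γ = {x ∈ C : Ax ∈ Q} nearest the anchor u.
assert np.all(x >= 0.0)
assert np.linalg.norm(A @ x - proj_Q(A @ x)) < 1e-2
assert np.linalg.norm(x - np.array([0.5, 0.5])) < 0.1
```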

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the referees for their valuable comments and suggestions. Sun Young Cho was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (KRF-2013053358). Li-Jun Zhu was supported in part by NNSF of China (61362033 and NZ13087).

References

[1] Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space."
[2] C. Byrne, "Iterative oblique projection onto convex sets and the split feasibility problem."
[3] H.-K. Xu, "Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces."
[4] C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction."
[5] J. Zhao and Q. Yang, "Several solution methods for the split feasibility problem."
[6] Y. Dang and Y. Gao, "The strong convergence of a KM-CQ-like algorithm for a split feasibility problem."
[7] F. Wang and H. Xu, "Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem."
[8] Y. Yao, W. Jigang, and Y. Liou, "Regularized methods for the split feasibility problem."
[9] A. Moudafi and B. S. Thakur, "Solving proximal split feasibility problems without prior knowledge of operator norms."
[10] H. K. Xu, "Iterative algorithms for nonlinear operators."
[11] P. E. Maingé, "Strong convergence of projected subgradient methods for non-smooth and non-strictly convex minimization."
[12] A. Moudafi, "Split monotone variational inclusions."
[13] C. Byrne, Y. Censor, A. Gibali, and S. Reich, "Weak and strong convergence of algorithms for the split common null point problem."
[14] Y. Yu, "An explicit method for the split feasibility problem with self-adaptive step sizes."