Hindawi Publishing Corporation
Abstract and Applied Analysis, Volume 2014, Article ID 894272
ISSN 1085-3375 (print), 1687-0409 (online). DOI: 10.1155/2014/894272

Research Article

A Regularized Algorithm for the Proximal Split Feasibility Problem

Zhangsong Yao (1), Sun Young Cho (2), Shin Min Kang (3), and Li-Jun Zhu (4,5)

Academic Editor: Jong Kyu Kim

(1) School of Mathematics & Information Technology, Nanjing Xiaozhuang University, Nanjing 211171, China
(2) Department of Mathematics, Gyeongsang National University, Jinju 660-701, Republic of Korea
(3) Department of Mathematics and RINS, Gyeongsang National University, Jinju 660-701, Republic of Korea
(4) School of Mathematics and Information Science, Beifang University of Nationalities, Yinchuan 750021, China
(5) School of Management, Hefei University of Technology, Hefei 230009, China

Received 28 April 2014; Accepted 17 June 2014; Published 2 July 2014

Copyright © 2014 Zhangsong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We study the proximal split feasibility problem and present a regularized algorithm for solving it. A strong convergence theorem is established.

1. Introduction

Throughout, we assume that $H_1$ and $H_2$ are two real Hilbert spaces, $f : H_1 \to \mathbb{R}\cup\{+\infty\}$ and $g : H_2 \to \mathbb{R}\cup\{+\infty\}$ are two proper, lower semicontinuous convex functions, and $A : H_1 \to H_2$ is a bounded linear operator.

In the present paper, we are devoted to solving the following minimization problem:
(1) $\min_{x\in H_1}\{f(x)+g_\lambda(Ax)\}$,
where $g_\lambda$ stands for the Moreau-Yosida approximation of the function $g$ with parameter $\lambda$; that is,
(2) $g_\lambda(u)=\min_{v\in H_2}\big\{g(v)+\frac{1}{2\lambda}\|u-v\|^2\big\}$.
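As a concrete illustration of (2), the Moreau-Yosida envelope can be evaluated by first computing the proximal point. The following is a minimal Python sketch for the toy choice $g(v)=|v|$ on $H_2=\mathbb{R}$, whose proximal mapping is soft-thresholding; the function names and the example are our own illustrative assumptions, not part of the paper.

```python
import numpy as np

def prox_abs(u, lam):
    """Proximal mapping of g(v) = |v| with parameter lam: soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def moreau_env_abs(u, lam):
    """Moreau-Yosida envelope g_lambda(u) = min_v { |v| + |u - v|^2 / (2 lam) },
    evaluated at the minimizer v = prox_abs(u, lam)."""
    v = prox_abs(u, lam)
    return np.abs(v) + (u - v) ** 2 / (2.0 * lam)

# For |u| <= lam the minimizer is v = 0, so g_lambda(u) = u^2 / (2 lam);
# for |u| > lam it is v = u - lam * sign(u), giving |u| - lam / 2 (Huber function).
```

This makes visible the smoothing effect of the envelope: $g_\lambda$ is differentiable even though $g$ is not.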

Problem (1) includes the split feasibility problem as a special case. In fact, choose $f$ and $g$ as the indicator functions of two nonempty closed convex sets $C\subset H_1$ and $Q\subset H_2$; that is,
(3) $f(x)=\delta_C(x)=\begin{cases}0,&\text{if } x\in C,\\ +\infty,&\text{otherwise},\end{cases}\qquad g(y)=\delta_Q(y)=\begin{cases}0,&\text{if } y\in Q,\\ +\infty,&\text{otherwise}.\end{cases}$
Then, problem (1) reduces to
(4) $\min_{x\in H_1}\{\delta_C(x)+(\delta_Q)_\lambda(Ax)\}$,
which equals
(5) $\min_{x\in C}\big\{\frac{1}{2\lambda}\|(I-\mathrm{proj}_Q)(Ax)\|^2\big\}$.
Now we know that solving (5) amounts to solving the following split feasibility problem: find $x$ such that
(6) $x\in C,\quad Ax\in Q$,
provided $C\cap A^{-1}(Q)\neq\emptyset$.
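For indicator functions as in (3), the proximal mapping of $\delta_C$ is simply the metric projection onto $C$ (for every $\lambda>0$), and the envelope in (4) is the squared distance to $Q$ scaled by $1/(2\lambda)$, as in (5). A small sketch, assuming box-shaped sets purely for concreteness:

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto the box C = [lo, hi]^n; this equals prox of the
    indicator delta_C for every lambda > 0."""
    return np.clip(x, lo, hi)

def envelope_indicator_box(u, lo, hi, lam):
    """(delta_Q)_lambda(u) = dist(u, Q)^2 / (2 lam) for the box Q = [lo, hi]^n."""
    d = u - proj_box(u, lo, hi)
    return np.dot(d, d) / (2.0 * lam)
```

This is exactly why (4) collapses to the least-squares feasibility formulation (5).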

The split feasibility problem in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise in phase retrieval and in medical image reconstruction. Recently, the split feasibility problem (6) has been studied extensively by many authors; see, for instance, [2–8].

In order to solve (6), one of the key ideas is to use a fixed point technique: $x$ solves (6) if and only if
(7) $x=\mathrm{proj}_C\big(I-\gamma A^*(I-\mathrm{proj}_Q)A\big)x$.
Next, we will use this idea to solve (1). First, by the differentiability of the Yosida approximation $g_\lambda$, we have
(8) $\partial\big(f(x)+g_\lambda(Ax)\big)=\partial f(x)+A^*\nabla g_\lambda(Ax)=\partial f(x)+A^*\Big(\frac{I-\mathrm{prox}_{\lambda g}}{\lambda}\Big)(Ax)$,
where $\partial f(x)$ denotes the subdifferential of $f$ at $x$ and $\mathrm{prox}_{\lambda g}$ is the proximal mapping of $g$. That is,
(9) $\partial f(x)=\big\{x^*\in H_1: f(y)\ge f(x)+\langle x^*, y-x\rangle,\ \forall y\in H_1\big\}$, $\quad\mathrm{prox}_{\lambda g}(u)=\arg\min_{v\in H_2}\big\{g(v)+\frac{1}{2\lambda}\|v-u\|^2\big\}$.
Note that the optimality condition for (1) is, by (8),
(10) $0\in\partial f(x)+A^*\Big(\frac{I-\mathrm{prox}_{\lambda g}}{\lambda}\Big)(Ax)$,
which can be rewritten as
(11) $0\in\mu\lambda\,\partial f(x)+\mu A^*(I-\mathrm{prox}_{\lambda g})(Ax)$,
which is equivalent to the fixed point equation
(12) $x=\mathrm{prox}_{\mu\lambda f}\big(x-\mu A^*(I-\mathrm{prox}_{\lambda g})(Ax)\big)$.
If $\arg\min f\cap A^{-1}(\arg\min g)\neq\emptyset$, then (1) is reduced to the following proximal split feasibility problem: find $x$ such that
(13) $x\in\arg\min f,\quad Ax\in\arg\min g$,
where
(14) $\arg\min f=\{x^*\in H_1: f(x^*)\le f(x),\ \forall x\in H_1\}$, $\quad\arg\min g=\{y^*\in H_2: g(y^*)\le g(y),\ \forall y\in H_2\}$.
In the sequel, we will use $\Gamma$ to denote the solution set of (13).
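The fixed-point characterization (7) can be checked numerically. The sketch below uses hypothetical toy data (a fixed 2x2 matrix $A$ and box sets $C$, $Q$, all our own choices) and verifies that a point satisfying (6) is left unchanged by the operator in (7).

```python
import numpy as np

# Hypothetical toy data: C and Q are boxes and A is a fixed 2x2 matrix.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])

def proj_C(x):          # C = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

def proj_Q(y):          # Q = [0, 2]^2
    return np.clip(y, 0.0, 2.0)

def T(x, gamma):
    """One application of the operator proj_C(I - gamma A^*(I - proj_Q)A) from (7)."""
    return proj_C(x - gamma * (A.T @ (A @ x - proj_Q(A @ x))))

gamma = 0.9 / np.linalg.norm(A, 2) ** 2   # a stepsize below 1 / ||A||^2
x_star = np.array([0.5, 0.5])             # x* in C with A x* = [0.75, 0.5] in Q
# T leaves x_star fixed; iterating T from any start drives x into the solution set.
```

Here every solution of (6) is a fixed point of `T`, mirroring the equivalence stated in (7).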

Recently, in order to solve (13), Moudafi and Thakur [9] presented the following split proximal algorithm, with a way of selecting the stepsizes such that its implementation does not need any prior information about the operator norm $\|A\|$.

Split Proximal Algorithm

Step 1 (initialization).

(15) $x_0\in H_1$.

Step 2.

Assume that $x_n$ has been constructed and $\theta(x_n)\neq 0$. Then compute $x_{n+1}$ via
(16) $x_{n+1}=\mathrm{prox}_{\mu_n\lambda f}\big[x_n-\mu_n A^*(I-\mathrm{prox}_{\lambda g})Ax_n\big],\quad n\ge 0$,
where the stepsize is $\mu_n=\rho_n\,\frac{h(x_n)+l(x_n)}{\theta^2(x_n)}$ with $0<\rho_n<4$, $h(x_n)=\frac{1}{2}\|(I-\mathrm{prox}_{\lambda g})Ax_n\|^2$, $l(x_n)=\frac{1}{2}\|(I-\mathrm{prox}_{\mu_n\lambda f})x_n\|^2$, and $\theta(x_n)=\sqrt{\|\nabla h(x_n)\|^2+\|\nabla l(x_n)\|^2}$.

If $\theta(x_n)=0$, then $x_{n+1}=x_n$ is a solution of (13) and the iterative process stops; otherwise, we set $n:=n+1$ and return to (16).
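For the indicator case $f=\delta_C$, $g=\delta_Q$ (so both proximal mappings become projections), the split proximal algorithm can be sketched as follows. The sets, the matrix, and the constant choice $\rho_n\equiv 1$ are our illustrative assumptions; in particular, $\rho_n\equiv 1$ is a simplification that ignores the adaptive window on $\rho_n$ required by the convergence theory.

```python
import numpy as np

# Toy data (assumptions): C = [0,1]^2, Q = [0,1] x [0,0.5], A a fixed 2x2 matrix.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, [0.0, 0.0], [1.0, 0.5])

def split_proximal(x, iters=5000, rho=1.0):
    for _ in range(iters):
        r = A @ x - proj_Q(A @ x)            # (I - proj_Q) A x_n
        s = x - proj_C(x)                    # (I - proj_C) x_n
        h, l = 0.5 * r @ r, 0.5 * s @ s      # h(x_n), l(x_n)
        grad_h, grad_l = A.T @ r, s          # gradients of h and l
        theta2 = grad_h @ grad_h + grad_l @ grad_l   # theta(x_n)^2
        if theta2 == 0.0:                    # theta(x_n) = 0: x_n already solves (13)
            break
        mu = rho * (h + l) / theta2          # self-adaptive stepsize, no ||A|| needed
        x = proj_C(x - mu * grad_h)          # step (16) with prox = proj_C
    return x

x = split_proximal(np.array([2.0, 2.0]))
# x now (approximately) satisfies x in C and A x in Q
```

The point of the construction is visible in `mu`: the stepsize is built entirely from computable residuals, never from $\|A\|$.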

Consequently, they demonstrated the following weak convergence result for the above split proximal algorithm.

Theorem 1.

Suppose that $\Gamma\neq\emptyset$. Assume that the parameters satisfy the condition
(17) $\epsilon\le\rho_n\le\frac{4h(x_n)}{h(x_n)+l(x_n)}-\epsilon\quad\text{for some }\epsilon>0\text{ small enough}.$
Then the sequence $\{x_n\}$ converges weakly to a solution of (13).

Note that the proximal mapping of $g$ is firmly nonexpansive, namely,
(18) $\langle\mathrm{prox}_{\lambda g}x-\mathrm{prox}_{\lambda g}y,\,x-y\rangle\ge\|\mathrm{prox}_{\lambda g}x-\mathrm{prox}_{\lambda g}y\|^2,\quad\forall x,y\in H_2$,
and the same is true for its complement $I-\mathrm{prox}_{\lambda g}$. Thus, $A^*(I-\mathrm{prox}_{\lambda g})A$ is cocoercive with coefficient $1/\|A\|^2$ (recall that a mapping $B : H_1\to H_1$ is said to be cocoercive if $\langle Bx-By, x-y\rangle\ge\alpha\|Bx-By\|^2$ for all $x,y\in H_1$ and some $\alpha>0$). If $\mu\in(0,1/\|A\|^2)$, then $I-\mu A^*(I-\mathrm{prox}_{\lambda g})A$ is nonexpansive. Hence, we can regularize (16) so that strong convergence is obtained. This is the main purpose of this paper. In the next section we collect some useful lemmas, and in the last section we present our algorithm and prove its strong convergence.
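The nonexpansivity of $I-\mu A^*(I-\mathrm{prox}_{\lambda g})A$ for $\mu\in(0,1/\|A\|^2)$ can be probed empirically. A hedged sketch with $g=\delta_Q$ for a randomly drawn $A$; all data here are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
mu = 0.9 / np.linalg.norm(A, 2) ** 2    # mu in (0, 1 / ||A||^2)

def proj_Q(y):                           # Q = nonnegative orthant (an assumption)
    return np.maximum(y, 0.0)

def S(x):
    """S = I - mu A^T (I - proj_Q) A; nonexpansive by the cocoercivity argument."""
    return x - mu * A.T @ (A @ x - proj_Q(A @ x))

# Empirically, ||S x - S y|| <= ||x - y|| holds for all sampled pairs.
```

The cocoercivity constant $1/\|A\|^2$ is what licenses the stepsize threshold; with larger $\mu$ the inequality can fail.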

2. Lemmas

Lemma 2 (see [<xref ref-type="bibr" rid="B10">10</xref>]).

Let $\{a_n\}_{n\in\mathbb{N}}$ be a sequence of nonnegative real numbers satisfying the following relation:
(19) $a_{n+1}\le(1-\alpha_n)a_n+\alpha_n\sigma_n+\delta_n,\quad n\ge 0$,
where

$\{\alpha_n\}_{n\in\mathbb{N}}\subset[0,1]$ and $\sum_{n=1}^{\infty}\alpha_n=\infty$;

$\limsup_{n\to\infty}\sigma_n\le 0$;

$\sum_{n=1}^{\infty}\delta_n<\infty$.

Then $\lim_{n\to\infty}a_n=0$.
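Lemma 2 can be sanity-checked numerically. The parameter sequences below are hypothetical choices of ours satisfying the three conditions, and the recursion is run with equality, which is one admissible instance of the inequality (19):

```python
# alpha_n = 1/(n+1) lies in [0,1] and is non-summable; sigma_n = 1/(n+1) has
# limsup 0 <= 0; delta_n = 1/(n+1)^2 is summable.  Lemma 2 then forces a_n -> 0.
a = 5.0
for n in range(100000):
    alpha = 1.0 / (n + 1)
    sigma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1) ** 2
    a = (1.0 - alpha) * a + alpha * sigma + delta
# a is now close to 0, as Lemma 2 predicts
```

This is the recursion that drives the strong convergence proof of Theorem 5 below.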

Lemma 3 (see [<xref ref-type="bibr" rid="B11">11</xref>]).

Let $\{\gamma_n\}_{n\in\mathbb{N}}$ be a sequence of real numbers for which there exists a subsequence $\{\gamma_{n_i}\}_{i\in\mathbb{N}}$ of $\{\gamma_n\}_{n\in\mathbb{N}}$ such that $\gamma_{n_i}<\gamma_{n_i+1}$ for all $i\in\mathbb{N}$. Then there exists a nondecreasing sequence $\{m_k\}_{k\in\mathbb{N}}\subset\mathbb{N}$ such that $\lim_{k\to\infty}m_k=\infty$ and the following properties are satisfied by all (sufficiently large) numbers $k\in\mathbb{N}$:
(20) $\gamma_{m_k}\le\gamma_{m_k+1},\qquad\gamma_k\le\gamma_{m_k+1}$.
In fact, $m_k$ is the largest number $n$ in the set $\{1,\dots,k\}$ such that the condition $\gamma_n<\gamma_{n+1}$ holds.

3. Main Results

Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $f : H_1\to\mathbb{R}\cup\{+\infty\}$ and $g : H_2\to\mathbb{R}\cup\{+\infty\}$ be two proper, lower semicontinuous convex functions and $A : H_1\to H_2$ a bounded linear operator.

Now, we first introduce our algorithm.

Algorithm 4

Step 1 (initialization).

(21) $x_0\in H_1$.

Step 2.

Assume that $x_n$ has been constructed. Set $h(x_n)=\frac{1}{2}\|(I-\mathrm{prox}_{\lambda g})Ax_n\|^2$, $l(x_n)=\frac{1}{2}\|(I-\mathrm{prox}_{\mu_n\lambda f})x_n\|^2$, and $\theta(x_n)=\sqrt{\|\nabla h(x_n)\|^2+\|\nabla l(x_n)\|^2}$ for all $n\in\mathbb{N}$.

If $\theta(x_n)\neq 0$, then compute $x_{n+1}$ via
(22) $x_{n+1}=\mathrm{prox}_{\mu_n\lambda f}\big[\alpha_n u+(1-\alpha_n)x_n-\mu_n A^*(I-\mathrm{prox}_{\lambda g})Ax_n\big],\quad n\ge 0$,
where $u\in H_1$ is a fixed anchor point, $\{\alpha_n\}_{n\in\mathbb{N}}\subset[0,1]$ is a real number sequence, and $\mu_n$ is the stepsize $\mu_n=\rho_n\,\frac{h(x_n)+l(x_n)}{\theta^2(x_n)}$ with $0<\rho_n<4$.

If $\theta(x_n)=0$, then $x_{n+1}=x_n$ is a solution of (13) and the iterative process stops; otherwise, we set $n:=n+1$ and return to (22).

Theorem 5.

Suppose that $\Gamma\neq\emptyset$. Assume that the parameters $\{\alpha_n\}$ and $\{\rho_n\}$ satisfy the conditions:

(C1) $\lim_{n\to\infty}\alpha_n=0$;

(C2) $\sum_{n=0}^{\infty}\alpha_n=\infty$;

(C3) $\epsilon\le\rho_n\le\frac{4h(x_n)}{h(x_n)+l(x_n)}-\epsilon$ for some $\epsilon>0$ small enough.

Then the sequence $\{x_n\}$ converges strongly to $\mathrm{proj}_\Gamma(u)$.

Proof.

Let $x^*\in\Gamma$. Since the minimizers of any function are exactly the fixed points of its proximal mappings, we have $x^*=\mathrm{prox}_{\mu_n\lambda f}x^*$ and $Ax^*=\mathrm{prox}_{\lambda g}Ax^*$. By (22) and the nonexpansivity of $\mathrm{prox}_{\mu_n\lambda f}$, we derive
(23)
\[
\begin{aligned}
\|x_{n+1}-x^*\|^2&=\big\|\mathrm{prox}_{\mu_n\lambda f}\big[\alpha_n u+(1-\alpha_n)x_n-\mu_n A^*(I-\mathrm{prox}_{\lambda g})Ax_n\big]-\mathrm{prox}_{\mu_n\lambda f}x^*\big\|^2\\
&\le\big\|\alpha_n u+(1-\alpha_n)x_n-\mu_n A^*(I-\mathrm{prox}_{\lambda g})Ax_n-x^*\big\|^2\\
&=\Big\|\alpha_n(u-x^*)+(1-\alpha_n)\Big[x_n-\frac{\mu_n}{1-\alpha_n}A^*(I-\mathrm{prox}_{\lambda g})Ax_n-x^*\Big]\Big\|^2\\
&\le\alpha_n\|u-x^*\|^2+(1-\alpha_n)\Big\|x_n-\frac{\mu_n}{1-\alpha_n}A^*(I-\mathrm{prox}_{\lambda g})Ax_n-x^*\Big\|^2.
\end{aligned}
\]
Since $\mathrm{prox}_{\lambda g}$ is firmly nonexpansive, we deduce that $I-\mathrm{prox}_{\lambda g}$ is also firmly nonexpansive. Hence, we have
(24)
\[
\begin{aligned}
\langle A^*(I-\mathrm{prox}_{\lambda g})Ax_n,\,x_n-x^*\rangle&=\langle(I-\mathrm{prox}_{\lambda g})Ax_n,\,Ax_n-Ax^*\rangle\\
&=\langle(I-\mathrm{prox}_{\lambda g})Ax_n-(I-\mathrm{prox}_{\lambda g})Ax^*,\,Ax_n-Ax^*\rangle\\
&\ge\|(I-\mathrm{prox}_{\lambda g})Ax_n\|^2=2h(x_n).
\end{aligned}
\]
Note that $\nabla h(x_n)=A^*(I-\mathrm{prox}_{\lambda g})Ax_n$ and $\nabla l(x_n)=(I-\mathrm{prox}_{\mu_n\lambda f})x_n$. From (24), we obtain
(25)
\[
\begin{aligned}
\Big\|x_n-\frac{\mu_n}{1-\alpha_n}&A^*(I-\mathrm{prox}_{\lambda g})Ax_n-x^*\Big\|^2\\
&=\|x_n-x^*\|^2+\frac{\mu_n^2}{(1-\alpha_n)^2}\|\nabla h(x_n)\|^2-\frac{2\mu_n}{1-\alpha_n}\langle\nabla h(x_n),\,x_n-x^*\rangle\\
&\le\|x_n-x^*\|^2+\frac{\mu_n^2}{(1-\alpha_n)^2}\|\nabla h(x_n)\|^2-\frac{4\mu_n h(x_n)}{1-\alpha_n}\\
&=\|x_n-x^*\|^2+\frac{\rho_n^2(h(x_n)+l(x_n))^2}{(1-\alpha_n)^2\theta^4(x_n)}\|\nabla h(x_n)\|^2-\frac{4\rho_n(h(x_n)+l(x_n))}{(1-\alpha_n)\theta^2(x_n)}h(x_n)\\
&\le\|x_n-x^*\|^2+\frac{\rho_n^2(h(x_n)+l(x_n))^2}{(1-\alpha_n)^2\theta^2(x_n)}-\frac{4\rho_n(h(x_n)+l(x_n))^2}{(1-\alpha_n)\theta^2(x_n)}\cdot\frac{h(x_n)}{h(x_n)+l(x_n)}\\
&=\|x_n-x^*\|^2-\rho_n\Big(\frac{4h(x_n)}{h(x_n)+l(x_n)}-\frac{\rho_n}{1-\alpha_n}\Big)\frac{(h(x_n)+l(x_n))^2}{(1-\alpha_n)\theta^2(x_n)}.
\end{aligned}
\]
By condition (C3), without loss of generality, we can assume that $\frac{4h(x_n)}{h(x_n)+l(x_n)}-\frac{\rho_n}{1-\alpha_n}\ge 0$ for all $n\ge 0$.
Thus, from (23) and (25), we obtain
(26)
\[
\begin{aligned}
\|x_{n+1}-x^*\|^2&\le\alpha_n\|u-x^*\|^2+(1-\alpha_n)\Big[\|x_n-x^*\|^2-\rho_n\Big(\frac{4h(x_n)}{h(x_n)+l(x_n)}-\frac{\rho_n}{1-\alpha_n}\Big)\frac{(h(x_n)+l(x_n))^2}{(1-\alpha_n)\theta^2(x_n)}\Big]\\
&=\alpha_n\|u-x^*\|^2+(1-\alpha_n)\|x_n-x^*\|^2-\rho_n\Big(\frac{4h(x_n)}{h(x_n)+l(x_n)}-\frac{\rho_n}{1-\alpha_n}\Big)\frac{(h(x_n)+l(x_n))^2}{\theta^2(x_n)}\\
&\le\alpha_n\|u-x^*\|^2+(1-\alpha_n)\|x_n-x^*\|^2\\
&\le\max\{\|u-x^*\|^2,\,\|x_n-x^*\|^2\}.
\end{aligned}
\]
By induction, $\|x_n-x^*\|\le\max\{\|u-x^*\|,\|x_0-x^*\|\}$ for all $n$. Hence, $\{x_n\}$ is bounded.

Let $z=\mathrm{proj}_\Gamma(u)$. From (26), we deduce
(27) $0\le\rho_n\Big(\frac{4h(x_n)}{h(x_n)+l(x_n)}-\frac{\rho_n}{1-\alpha_n}\Big)\frac{(h(x_n)+l(x_n))^2}{\theta^2(x_n)}\le\alpha_n\|u-z\|^2+(1-\alpha_n)\|x_n-z\|^2-\|x_{n+1}-z\|^2.$
We consider the following two cases.

Case 1. One has $\|x_{n+1}-z\|\le\|x_n-z\|$ for every $n\ge n_0$ large enough.

In this case, $\lim_{n\to\infty}\|x_n-z\|$ exists and is finite, and hence
(28) $\lim_{n\to\infty}\big(\|x_{n+1}-z\|-\|x_n-z\|\big)=0.$
This together with (27) implies that
(29) $\rho_n\Big(\frac{4h(x_n)}{h(x_n)+l(x_n)}-\frac{\rho_n}{1-\alpha_n}\Big)\frac{(h(x_n)+l(x_n))^2}{\theta^2(x_n)}\to 0.$
Since $\liminf_{n\to\infty}\rho_n\big(\frac{4h(x_n)}{h(x_n)+l(x_n)}-\frac{\rho_n}{1-\alpha_n}\big)\ge\epsilon^2>0$ (by condition (C3) and $\alpha_n\to 0$), we get
(30) $\frac{(h(x_n)+l(x_n))^2}{\theta^2(x_n)}\to 0.$
Noting that $\theta^2(x_n)=\|\nabla h(x_n)\|^2+\|\nabla l(x_n)\|^2$ is bounded, we deduce immediately that
(31) $\lim_{n\to\infty}\big(h(x_n)+l(x_n)\big)=0.$
Therefore,
(32) $\lim_{n\to\infty}h(x_n)=\lim_{n\to\infty}l(x_n)=0.$
Next, we prove
(33) $\limsup_{n\to\infty}\langle u-z,\,x_n-z\rangle\le 0.$
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ converging weakly to some $\tilde z$ and satisfying
(34) $\limsup_{n\to\infty}\langle u-z,\,x_n-z\rangle=\lim_{i\to\infty}\langle u-z,\,x_{n_i}-z\rangle.$
By the weak lower semicontinuity of $h$, we get
(35) $0\le h(\tilde z)\le\liminf_{i\to\infty}h(x_{n_i})=\lim_{n\to\infty}h(x_n)=0.$
So,
(36) $h(\tilde z)=\frac{1}{2}\|(I-\mathrm{prox}_{\lambda g})A\tilde z\|^2=0.$
That is, $A\tilde z$ is a fixed point of the proximal mapping of $g$ or, equivalently, $0\in\partial g(A\tilde z)$. In other words, $A\tilde z$ is a minimizer of $g$.

Similarly, from the weak lower semicontinuity of $l$, we get
(37) $0\le l(\tilde z)\le\liminf_{i\to\infty}l(x_{n_i})=\lim_{n\to\infty}l(x_n)=0.$
Therefore,
(38) $l(\tilde z)=\frac{1}{2}\|(I-\mathrm{prox}_{\mu_n\lambda f})\tilde z\|^2=0.$
That is, $\tilde z$ is a fixed point of the proximal mapping of $f$ or, equivalently, $0\in\partial f(\tilde z)$. In other words, $\tilde z$ is a minimizer of $f$. Hence, $\tilde z\in\Gamma$. Since $z=\mathrm{proj}_\Gamma(u)$, the characterization of the metric projection yields
(39) $\limsup_{n\to\infty}\langle u-z,\,x_n-z\rangle=\lim_{i\to\infty}\langle u-z,\,x_{n_i}-z\rangle=\langle u-z,\,\tilde z-z\rangle\le 0.$
From (22), we have
(40)
\[
\begin{aligned}
\|x_{n+1}-z\|^2&\le\Big\|\alpha_n(u-z)+(1-\alpha_n)\Big[x_n-\frac{\mu_n}{1-\alpha_n}A^*(I-\mathrm{prox}_{\lambda g})Ax_n-z\Big]\Big\|^2\\
&=(1-\alpha_n)^2\Big\|x_n-\frac{\mu_n}{1-\alpha_n}A^*(I-\mathrm{prox}_{\lambda g})Ax_n-z\Big\|^2+\alpha_n^2\|u-z\|^2\\
&\quad+2\alpha_n(1-\alpha_n)\Big\langle x_n-\frac{\mu_n}{1-\alpha_n}A^*(I-\mathrm{prox}_{\lambda g})Ax_n-z,\,u-z\Big\rangle\\
&\le(1-\alpha_n)^2\|x_n-z\|^2+\alpha_n^2\|u-z\|^2+2\alpha_n(1-\alpha_n)\langle x_n-z,\,u-z\rangle-2\alpha_n\mu_n\langle\nabla h(x_n),\,u-z\rangle\\
&\le(1-\alpha_n)\|x_n-z\|^2+\alpha_n\big(\alpha_n\|u-z\|^2+2(1-\alpha_n)\langle x_n-z,\,u-z\rangle+2\mu_n\|\nabla h(x_n)\|\|u-z\|\big).
\end{aligned}
\]
Since $\nabla h$ is Lipschitz continuous with Lipschitz constant $\|A\|^2$ and $\nabla l$ is nonexpansive, $\nabla h(x_n)$, $\nabla l(x_n)$, and $\theta^2(x_n)=\|\nabla h(x_n)\|^2+\|\nabla l(x_n)\|^2$ are bounded. Note that $\mu_n\|\nabla h(x_n)\|=\rho_n\frac{h(x_n)+l(x_n)}{\theta^2(x_n)}\|\nabla h(x_n)\|\le\rho_n\frac{h(x_n)+l(x_n)}{\theta(x_n)}$; thus $\mu_n\|\nabla h(x_n)\|\to 0$ by (30). From Lemma 2, (39), and (40), we deduce that $x_n\to z$.

Case 2. There exists a subsequence $\{\|x_{n_j}-z\|\}$ of $\{\|x_n-z\|\}$ such that
(41) $\|x_{n_j}-z\|<\|x_{n_j+1}-z\|\quad\text{for all } j\ge 1.$
By Lemma 3, there exists a strictly increasing sequence $\{m_k\}$ of positive integers such that $\lim_{k\to\infty}m_k=+\infty$ and the following properties are satisfied by all numbers $k\in\mathbb{N}$:
(42) $\|x_{m_k}-z\|\le\|x_{m_k+1}-z\|,\qquad\|x_k-z\|\le\|x_{m_k+1}-z\|.$
Consequently,
(43)
\[
\begin{aligned}
0&\le\lim_{k\to\infty}\big(\|x_{m_k+1}-z\|-\|x_{m_k}-z\|\big)\le\limsup_{n\to\infty}\big(\|x_{n+1}-z\|-\|x_n-z\|\big)\\
&\le\limsup_{n\to\infty}\big(\alpha_n\|u-z\|+(1-\alpha_n)\|x_n-z\|-\|x_n-z\|\big)\\
&=\limsup_{n\to\infty}\alpha_n\big(\|u-z\|-\|x_n-z\|\big)=0.
\end{aligned}
\]
Hence,
(44) $\lim_{k\to\infty}\big(\|x_{m_k+1}-z\|-\|x_{m_k}-z\|\big)=0.$
By an argument similar to that of Case 1, we can prove that
(45) $\limsup_{k\to\infty}\langle u-z,\,x_{m_k}-z\rangle\le 0,\qquad\|x_{m_k+1}-z\|^2\le(1-\alpha_{m_k})\|x_{m_k}-z\|^2+\alpha_{m_k}\sigma_{m_k},$
where $\sigma_{m_k}=\alpha_{m_k}\|u-z\|^2+2(1-\alpha_{m_k})\langle x_{m_k}-z,\,u-z\rangle+2\mu_{m_k}\|\nabla h(x_{m_k})\|\|u-z\|$.

In particular, we get
(46) $\alpha_{m_k}\|x_{m_k}-z\|^2\le\|x_{m_k}-z\|^2-\|x_{m_k+1}-z\|^2+\alpha_{m_k}\sigma_{m_k}\le\alpha_{m_k}\sigma_{m_k}.$
Then,
(47) $\limsup_{k\to\infty}\|x_{m_k}-z\|^2\le\limsup_{k\to\infty}\sigma_{m_k}\le 0.$
Thus, from (42) and (44), we conclude that
(48) $\limsup_{k\to\infty}\|x_k-z\|\le\limsup_{k\to\infty}\|x_{m_k+1}-z\|=0.$
Therefore, $x_n\to z$. This completes the proof.

Remark 6.

Note that problem (13) was considered, for example, in [12, 13]; however, the iterative methods proposed there to solve it need to know a priori the norm of the bounded linear operator $A$.

Remark 7.

We would also like to emphasize that by taking $f=\delta_C$ and $g=\delta_Q$, the indicator functions of two nonempty closed convex sets $C\subset H_1$ and $Q\subset H_2$, respectively, our algorithm (22) reduces to
(49) $x_{n+1}=\mathrm{proj}_C\big[\alpha_n u+(1-\alpha_n)x_n-\mu_n A^*(I-\mathrm{proj}_Q)Ax_n\big],\quad n\ge 0.$
We observe that (49) is simpler than the one in [8].
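The projection form (49) is straightforward to implement. Below is a hedged Python sketch on a toy instance where $\mathrm{proj}_\Gamma(u)$ is known in closed form; the sets, the anchor $u$, and the choices $\rho_n\equiv 1$, $\alpha_n=1/(n+1)$ are all our own illustrative assumptions, not prescribed by the paper.

```python
import numpy as np

# Toy instance (an assumption, for illustration): H1 = H2 = R^2, A = I,
# C = [0,1]^2 and Q = [-1, 0.5]^2, so that Gamma = C ∩ Q = [0, 0.5]^2 and
# proj_Gamma(u) = clip(u, 0, 0.5) is known exactly.
A = np.eye(2)
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, -1.0, 0.5)

def regularized_step(x, u, alpha, rho=1.0):
    r = A @ x - proj_Q(A @ x)                    # (I - proj_Q) A x_n
    s = x - proj_C(x)                            # (I - proj_C) x_n
    h, l = 0.5 * r @ r, 0.5 * s @ s              # h(x_n), l(x_n)
    grad_h, grad_l = A.T @ r, s
    theta2 = grad_h @ grad_h + grad_l @ grad_l   # theta(x_n)^2
    mu = rho * (h + l) / theta2 if theta2 > 1e-16 else 0.0
    return proj_C(alpha * u + (1.0 - alpha) * x - mu * grad_h)   # step (49)

u = np.array([2.0, -1.0])                        # anchor point u
x = np.array([0.9, 0.9])                         # x_0
for n in range(20000):
    x = regularized_step(x, u, alpha=1.0 / (n + 1))
# x is now close to proj_Gamma(u) = [0.5, 0.0]
```

Unlike the weakly convergent iteration (16), the anchor term $\alpha_n u$ steers the whole sequence strongly toward the specific solution $\mathrm{proj}_\Gamma(u)$, which the run above exhibits numerically.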

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the referees for their valuable comments and suggestions. Sun Young Cho was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (KRF-2013053358). Li-Jun Zhu was supported in part by NNSF of China (61362033 and NZ13087).

References

1. Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
2. C. Byrne, "Iterative oblique projection onto convex sets and the split feasibility problem," Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
3. H.-K. Xu, "Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces," Inverse Problems, vol. 26, no. 10, Article ID 105018, 2010.
4. C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
5. J. Zhao and Q. Yang, "Several solution methods for the split feasibility problem," Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
6. Y. Dang and Y. Gao, "The strong convergence of a KM-CQ-like algorithm for a split feasibility problem," Inverse Problems, vol. 27, no. 1, Article ID 015007, 2011.
7. F. Wang and H. Xu, "Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem," Journal of Inequalities and Applications, vol. 2010, Article ID 102085, 2010.
8. Y. Yao, W. Jigang, and Y. Liou, "Regularized methods for the split feasibility problem," Abstract and Applied Analysis, vol. 2012, Article ID 140679, 2012.
9. A. Moudafi and B. S. Thakur, "Solving proximal split feasibility problems without prior knowledge of operator norms," Optimization Letters, 2013.
10. H. K. Xu, "Iterative algorithms for nonlinear operators," Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
11. P. E. Maingé, "Strong convergence of projected subgradient methods for non-smooth and non-strictly convex minimization," Set-Valued Analysis, vol. 16, no. 7-8, pp. 899–912, 2008.
12. A. Moudafi, "Split monotone variational inclusions," Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 275–283, 2011.
13. C. Byrne, Y. Censor, A. Gibali, and S. Reich, "Weak and strong convergence of algorithms for the split common null point problem," Journal of Nonlinear and Convex Analysis, vol. 13, pp. 759–775, 2012.
14. Y. Yu, "An explicit method for the split feasibility problem with self-adaptive step sizes," Abstract and Applied Analysis, vol. 2012, Article ID 432501, 2012.