The Sparsity of Underdetermined Linear Systems via lp Minimization for 0 < p < 1

Haiyang Li (1), Jigen Peng (2), and Shigang Yue (3)

1 School of Science, Xi'an Polytechnic University, Xi'an 710048, China
2 Department of Mathematics, Xi'an Jiaotong University, Xi'an 710049, China
3 School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK

Mathematical Problems in Engineering, Hindawi Publishing Corporation, 2015, Article ID 584712, doi:10.1155/2015/584712. Received 7 November 2014; accepted 14 April 2015. Academic Editor: Kacem Chehdi.

Copyright © 2015 Haiyang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Sparsity problems, which seek the sparsest solution of a representation or an equation, have attracted a great deal of attention in recent years. In this paper, we study the sparsity of underdetermined linear systems via lp minimization for 0 < p < 1. For a given underdetermined linear system of equations A_{m×n}X = b, although it is not guaranteed in general that the problem (Pp) (i.e., min_X ||X||_p^p subject to AX = b, where 0 < p < 1) generates sparser solutions as the value of p decreases, nor that (Pp) generates sparser solutions than the problem (P1) (i.e., min_X ||X||_1 subject to AX = b), we show that there exists a sparsity constant γ(A,b) > 0 such that the following conclusions hold when p < γ(A,b): (1) the problem (Pp) generates sparser solutions as the value of p decreases; (2) the sparsest optimal solution to the problem (Pp) is unique up to absolute value permutation; (3) if X1 and X2 are the sparsest optimal solutions to the problems (P_{p1}) and (P_{p2}) (p1 < p2), respectively, and X1 is not an absolute value permutation of X2, then there exist t1, t2 ∈ [p1, p2] such that X1 is the sparsest optimal solution to the problem (Pt) for t ∈ [p1, t1] and X2 is the sparsest optimal solution to the problem (Pt) for t ∈ (t2, p2].

1. Introduction

Recently, considerable attention has been paid to the following sparsity problem. Given a full-rank matrix A of size m × n with m < n and an m-vector b, and knowing that b = AX, where X ∈ R^n is an unknown sparse vector, we wish to recover X. Although the system of equations is underdetermined, and hence not a properly posed problem in linear algebra, the sparsity of X is a very useful prior that sometimes allows a unique solution. Accordingly, one naturally proposes the following optimization model to obtain the sparsest solutions: (1) (P0): min_X ||X||_0 s.t. AX = b, where ||X||_0 denotes the number of nonzero components of X (we call ||·||_0 the l0 norm). This is one of the critical problems in compressed sensing research, motivated by data compression, error-correcting codes, n-term approximation, and so forth. It is known that the problem (P0) requires nonpolynomial time to solve (cf. [2]). One natural approach to tackle (P0) is to solve instead the following convex minimization problem: (2) (P1): min_X ||X||_1 s.t. AX = b, where ||X||_1 = Σ_{i=1}^n |x_i| is the standard l1 norm. The study of the problem (P1) was pioneered by Donoho, Candès, and their collaborators, and many researchers have contributed results on the existence, uniqueness, and other properties of the sparse solution, as well as computational algorithms and their convergence analysis for tackling the problem (P0). However, the solutions to the problem (P1) are often not as sparse as those to the problem (P0), and many applications require solutions sparser than those the problem (P1) provides. A natural attempt for this purpose is to apply the problem (Pp) (0 < p < 1), that is, to solve the following model: (3) (Pp): min_X ||X||_p^p s.t. AX = b, where ||X||_p^p = Σ_{i=1}^n |x_i|^p (we call ||·||_p the lp norm, though it is no longer a norm for p < 1, since the triangle inequality fails). Obviously, the problem (Pp) is no longer a convex optimization problem.
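To make the relaxation (2) concrete, the following sketch (an editorial illustration, not part of the original paper) solves (P1) as a linear program by splitting X into nonnegative parts; the matrix A and vector b here are made up for the demonstration, and scipy.optimize.linprog is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve (P1): min ||x||_1 s.t. Ax = b, as a linear program.

    Split x = u - v with u, v >= 0; at the optimum u_i * v_i = 0, so
    ||x||_1 = sum(u + v), and we minimize sum(u + v) s.t. A(u - v) = b.
    """
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Illustrative system: the sparsest solution is the third unit vector,
# and here it is also the unique l1 minimizer.
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, -1.0]])
b = np.array([2.0, 1.0])
x_hat = basis_pursuit(A, b)
```

For this particular A one can check by hand that any feasible x satisfies 2 = |x1 + 2 x3 + x4| ≤ 2 ||x||_1, so the minimum l1 value is 1, attained only at (0, 0, 1, 0); the LP recovers exactly that vector.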
This minimization is motivated by the following fact: (4) lim_{p→0+} ||X||_p^p = ||X||_0. This model was initiated by [7], and many researchers have worked in this direction [1, 2, 7–16]. They demonstrate that (1) for a Gaussian random matrix A, the restricted p-isometry property of order s holds with s almost proportional to m as p → 0+ (cf. [8]); (2) when δ_{2s} < 1 (or δ_{2s+1} < 1, δ_{2s+2} < 1), the optimal solution to the problem (Pp) coincides with the optimal solution to the problem (P0) for p > 0 small enough, where δ_{2s} is the restricted isometry constant of the matrix A (similarly for δ_{2s+1}, δ_{2s+2}) (cf. [7, 10, 13]); and (3) lp minimization can be applied to a wider class of random matrices A (cf. [11]). In addition, in [7, 15] the authors show, through phase diagram studies based on a set of experiments, that the problem (Pp) generates sparser solutions than the problem (P1) and that (Pp) generates sparser solutions as the value of p decreases. Nevertheless, are the conclusions suggested by these phase diagram studies true in theory? In this paper, we answer this question by studying the sparsity of lp minimization. Firstly, using Example 2 we show that, in general, the answer is negative. Secondly, although the answer is negative in general, we prove that, for a given underdetermined linear system of equations A_{m×n}X = b, there exists a constant γ(A,b) > 0 (which we call the sparsity constant) such that the following conclusions hold when p < γ(A,b).

(1) The problem (Pp) generates sparser solutions as the value of p decreases (Theorem 7).

(2) Let Xp be the sparsest optimal solution to the problem (Pp). Then Xp is the unique sparsest optimal solution to the problem (Pp) up to absolute value permutation (Corollary 6).

(3) Let X1 and X2 be the sparsest optimal solutions to the problems (P_{p1}) and (P_{p2}) (p1 < p2), respectively, and let X1 not be an absolute value permutation of X2. Then there exist t1, t2 ∈ [p1, p2] such that X1 is the sparsest optimal solution to the problem (Pt) for t ∈ [p1, t1] and X2 is the sparsest optimal solution to the problem (Pt) for t ∈ (t2, p2] (Theorem 8).

2. The Sparsity of Underdetermined Linear Systems via lp Minimization

Let X denote the set of all solutions to the underdetermined linear system AX = b. For convenience, we say that X1 is an absolute value permutation of X2 if (|x11|, |x12|, …, |x1n|) is a permutation of (|x21|, |x22|, …, |x2n|), where X1 = (x11, x12, …, x1n)^T and X2 = (x21, x22, …, x2n)^T ∈ X.

Lemma 1 (see [17]).

The problem (P1) may have more than one solution. Nevertheless, even if there are infinitely many optimal solutions, we can claim that (1) these solutions are gathered in a set that is bounded and convex, and (2) among these solutions, there exists at least one with at most m nonzeros.

The following example shows that, in general, it is not guaranteed that the problem (Pp) generates sparser solutions than the problem (P1), nor that the problem (Pp) generates sparser solutions as the value of p decreases.

Example 2.

We consider the underdetermined linear system of equations AX = b, where (5) A = (α1, α2, α3, α4) = -202913187001815160290463435-1, b = (1, 2, 3)^T. By Lemma 1, the l0 norm of an optimal solution to the problem (P1) is at most 3, and hence the optimal solution is one of the following feasible solutions:

X1=(0,-4/27,29/9,58/135)T;

X2=(0.1,0,3,0.4)T;

X3=(1.45,2,0,0)T;

X4=(1.45,2,0,0)T.

Furthermore, we can show that the optimal solution to the problem (Pp) (p = 0.8, 0.95) is one of the above feasible solutions. It is easy to calculate that (6)

||X1||_{0.8}^{0.8} = 3.2756, ||X2||_{0.8}^{0.8} = 3.0472, ||X3||_{0.8}^{0.8} = ||X4||_{0.8}^{0.8} = 3.0873,
||X1||_{0.95}^{0.95} = 3.6502, ||X2||_{0.95}^{0.95} = 3.3706, ||X3||_{0.95}^{0.95} = ||X4||_{0.95}^{0.95} = 3.3552,
||X1||_1 = 3.8000, ||X2||_1 = 3.5, ||X3||_1 = ||X4||_1 = 3.45.

Thus X2 is the optimal solution when p = 0.8, and X3 is the optimal solution when p = 0.95 and p = 1. However, ||X2||_0 = 3 while ||X3||_0 = 2. Therefore, the problem (Pp) does not always generate sparser solutions than the problem (P1), and the problem (Pp) does not always generate sparser solutions as the value of p decreases.
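The quasinorm values in (6) depend only on the listed candidate vectors, so they can be rechecked directly. The following snippet (an editorial addition, not part of the paper) recomputes ||Xi||_p^p for p = 0.8, 0.95, 1 and confirms the reversal of the ordering between X2 and X3:

```python
import numpy as np

def lp_norm_p(x, p):
    """||x||_p^p = sum_i |x_i|^p (the objective of problem (Pp))."""
    return float(np.sum(np.abs(x) ** p))

# Candidate solutions from Example 2.
X1 = np.array([0.0, -4/27, 29/9, 58/135])
X2 = np.array([0.1, 0.0, 3.0, 0.4])
X3 = np.array([1.45, 2.0, 0.0, 0.0])

for p in (0.8, 0.95, 1.0):
    print(p, lp_norm_p(X1, p), lp_norm_p(X2, p), lp_norm_p(X3, p))
```

At p = 0.8 the winner is X2, while at p = 0.95 and p = 1 it is X3, even though X3 has fewer nonzeros than X2, exactly as the example claims.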

In the following, we prove the conclusions mentioned in the Introduction.

We define two functions f(t) = ||X||_t = (|x1|^t + ⋯ + |xk|^t)^{1/t} (t > 0) and g(t) = ||X||_t^t = |x1|^t + ⋯ + |xk|^t (t > 0), where X = (x1, …, xk) and xi ≠ 0 for each i. Then f(t) = (g(t))^{1/t}.

Theorem 3.

f(t) is a monotone decreasing convex function, and (7) f'(t) = (f(t)/t)(g'(t)/g(t) − ln f(t)).

Proof.

It is easy to show that (7) holds. Without loss of generality, we assume that |x1| ≤ |x2| ≤ ⋯ ≤ |xk|. Because (8) f'(t) = (f(t)/t)(g'(t)/g(t) − ln f(t)) = (f(t)/t²)(Σ_{i=1}^k |xi|^t ln|xi|^t / g(t) − ln g(t)) ≤ (f(t)/t²)(ln|xk|^t − ln g(t)) ≤ 0, f(t) is monotone decreasing.

Furthermore, f(t) is a convex function. In fact, by the convexity of the function x ↦ x² (applied with the weights |xi|^t / g(t)), we have (9) (Σ_{i=1}^k |xi|^t ln|xi| / Σ_{i=1}^k |xi|^t)² ≤ Σ_{i=1}^k |xi|^t ln²|xi| / Σ_{i=1}^k |xi|^t. That is, (10) (g'(t)/g(t))² ≤ g''(t)/g(t). Thus (11) (g'(t)/g(t))' = g''(t)/g(t) − (g'(t)/g(t))² ≥ 0, and hence g'(t)/g(t) is monotone increasing. Since f(t) is monotone decreasing, we know that g'(t)/g(t) − ln f(t) is monotone increasing. Because f(t)/t is positive and monotone decreasing, g'(t)/g(t) − ln f(t) is monotone increasing, and g'(t)/g(t) − ln f(t) ≤ 0 (as shown in (8)), we obtain (12) f''(t) = (f(t)/t)'(g'(t)/g(t) − ln f(t)) + (f(t)/t)(g'(t)/g(t) − ln f(t))' ≥ 0, which implies that f(t) is a convex function.
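Theorem 3 can be sanity-checked numerically. The sketch below (an editorial addition; the test vector is arbitrary) samples f(t) = ||X||_t on a grid and verifies that the first differences are negative (monotone decreasing) and the discrete second differences are nonnegative up to rounding (convex):

```python
import numpy as np

def f(x, t):
    """f(t) = ||x||_t = (sum_i |x_i|^t)^(1/t), for a vector with nonzero entries."""
    return float(np.sum(np.abs(x) ** t) ** (1.0 / t))

x = np.array([1.45, 2.0])                 # any vector with nonzero entries works
ts = np.linspace(0.2, 3.0, 200)
vals = np.array([f(x, t) for t in ts])

monotone = bool(np.all(np.diff(vals) < 0))       # f strictly decreasing on the grid
second_diff = np.diff(vals, 2)                   # discrete curvature
convex = bool(np.all(second_diff > -1e-9))       # nonnegative up to rounding
```

This is only a numerical spot check on one grid, not a proof; the theorem itself covers all t > 0 and all vectors with nonzero entries.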

Theorem 4.

For a given underdetermined linear system of equations A_{m×n}X = b, there exists a constant γ > 0 such that, for any X1, X2 ∈ X with X1 not an absolute value permutation of X2, either f1(t) = ||X1||_t < f2(t) = ||X2||_t or f2(t) = ||X2||_t < f1(t) = ||X1||_t whenever 0 < t < γ.

Proof.

Let X_k = {X ∈ X : ||X||_0 = k} and X_β^k = {X ∈ X_k : Π_{i=1}^k |x_{j_i}| = β, where x_{j_1}, …, x_{j_k} are the nonzero components of X}. Clearly, we have X_k = ∪_β X_β^k and X = ∪_{k=1}^n X_k.

Firstly, for any X1, X2 ∈ X_β^k with X1 not an absolute value permutation of X2, there exists a constant γ_β^k > 0 such that when 0 < t < γ_β^k, either f1(t) = ||X1||_t < f2(t) = ||X2||_t or f2(t) = ||X2||_t < f1(t) = ||X1||_t.

Obviously, for any given pair X1, X2 ∈ X_β^k, there is a positive number {γ_β^k}_j such that when 0 < t < {γ_β^k}_j, either f1(t) = ||X1||_t < f2(t) = ||X2||_t or f2(t) = ||X2||_t < f1(t) = ||X1||_t. Hence, it suffices to show that inf_j {γ_β^k}_j = γ_β^k ≠ 0. Otherwise, for an arbitrarily small positive number ε, there exist t with 0 < t < ε, Y1 ∈ X_β^k, and Y2 ∈ X_β^k such that (13) f1(t) = ||Y1||_t = f2(t) = ||Y2||_t. Using (7), we obtain (14) (f1(t)/t)(g1'(t)/g1(t) − ln f1(t)) = (f2(t)/t)(g2'(t)/g2(t) − ln f2(t)). That is, (15) (g1'(t)/g1(t) − ln f1(t)) / (g2'(t)/g2(t) − ln f2(t)) = f2(t)/f1(t). Since Y1, Y2 ∈ X_β^k, we have Π_{i=1}^k |y1i| = Π_{i=1}^k |y2i| = β.

Hence (16) Σ_{i=1}^k ln|y1i| = Σ_{i=1}^k ln|y2i|. Therefore, there is a positive integer M such that (17) Σ_{i=1}^k ln^M |y1i| ≠ Σ_{i=1}^k ln^M |y2i| and, for any positive integer N with N < M, (18) Σ_{i=1}^k ln^N |y1i| = Σ_{i=1}^k ln^N |y2i| (such an M exists since otherwise all power sums of the ln|y1i| and ln|y2i| would coincide, and Y1 would be an absolute value permutation of Y2). Since, for any positive integer K, (19) g1^{(K)}(0) = (|y11|^t + ⋯ + |y1k|^t)^{(K)}|_{t=0} = Σ_{i=1}^k ln^K |y1i| and g2^{(K)}(0) = (|y21|^t + ⋯ + |y2k|^t)^{(K)}|_{t=0} = Σ_{i=1}^k ln^K |y2i|, we obtain, for the M and N mentioned above, (20) g1^{(M)}(0) ≠ g2^{(M)}(0) and g1^{(N)}(0) = g2^{(N)}(0). We assume, without loss of generality, that g1^{(M)}(0) < g2^{(M)}(0). For the M mentioned above, (15) becomes (21) [(g1'(t)/g1(t) − ln f1(t)) / (g2'(t)/g2(t) − ln f2(t))]^{1/t^{M−1}} = [f2(t)/f1(t)]^{1/t^{M−1}} = [g2(t)/g1(t)]^{1/t^M}. For the right-hand side of (21), we obtain (22) lim_{t→0} [g2(t)/g1(t)]^{1/t^M} = exp lim_{t→0} (ln g2(t) − ln g1(t))/t^M = exp lim_{t→0} (g2'(t)/g2(t) − g1'(t)/g1(t))/(M t^{M−1}) = ⋯ = exp[(g2^{(M)}(0) − g1^{(M)}(0))/(M! k)] > 1. And for the left-hand side of (21), we obtain (23) lim_{t→0} [(g1'(t)/g1(t) − ln f1(t)) / (g2'(t)/g2(t) − ln f2(t))]^{1/t^{M−1}} = exp lim_{t→0} [ln(ln g1(t) − t g1'(t)/g1(t)) − ln(ln g2(t) − t g2'(t)/g2(t))]/t^{M−1} = 1. This is a contradiction, and thus when 0 < t < γ_β^k, either f1(t) = ||X1||_t < f2(t) = ||X2||_t or f2(t) = ||X2||_t < f1(t) = ||X1||_t.

Secondly, for any X1, X2 ∈ X_k with X1 not an absolute value permutation of X2, there exists a constant γ_k > 0 such that when 0 < t < γ_k, either f1(t) = ||X1||_t < f2(t) = ||X2||_t or f2(t) = ||X2||_t < f1(t) = ||X1||_t.

It suffices to show that inf_β γ_β^k = γ_k ≠ 0. Otherwise, for an arbitrarily small positive number ε, there exist t with 0 < t < ε, Y1 ∈ X_{β1}^k, and Y2 ∈ X_{β2}^k (β1 ≠ β2) such that (24) f1(t) = ||Y1||_t = f2(t) = ||Y2||_t. Using (7) again, we also obtain (15).

For the right-hand side of (15), we have (25) lim_{t→0} f2(t)/f1(t) = exp lim_{t→0} (ln g2(t) − ln g1(t))/t = (β2/β1)^{1/k} ≠ 1. And for the left-hand side of (15), we have (26) lim_{t→0} (g1'(t)/g1(t) − ln f1(t)) / (g2'(t)/g2(t) − ln f2(t)) = lim_{t→0} (t g1'(t)/g1(t) − ln g1(t)) / (t g2'(t)/g2(t) − ln g2(t)) = 1. This is a contradiction, and thus inf_β γ_β^k = γ_k ≠ 0.

Thirdly, for any X1 ∈ X_k and X2 ∈ X_s with k ≠ s, there exists a constant γ_{k,s} > 0 such that when 0 < t < γ_{k,s}, either f1(t) = ||X1||_t < f2(t) = ||X2||_t or f2(t) = ||X2||_t < f1(t) = ||X1||_t.

We assume, without loss of generality, that ||X1||_0 = k < s = ||X2||_0. Then (27) lim_{t→0} (g1'(t)/g1(t) − ln f1(t)) / (g2'(t)/g2(t) − ln f2(t)) = lim_{t→0} (t g1'(t)/g1(t) − ln g1(t)) / (t g2'(t)/g2(t) − ln g2(t)) = ln k / ln s < 1 and lim_{t→0} f2(t)/f1(t) = exp lim_{t→0} (ln g2(t) − ln g1(t))/t = ∞. So there is a positive number γ_{k,s} such that when 0 < t < γ_{k,s}, (28) (g1'(t)/g1(t) − ln f1(t)) / (g2'(t)/g2(t) − ln f2(t)) < f2(t)/f1(t); by (15), equality f1(t) = f2(t) is then impossible, and since f2(t)/f1(t) → ∞, we conclude that (29) f1(t) = ||X1||_t < f2(t) = ||X2||_t.

In conclusion, we take γ = min{γ_k, γ_{k,s} : k, s = 1, 2, …, n}, and thus when 0 < t < γ, for any X1, X2 ∈ X with X1 not an absolute value permutation of X2, either f1(t) = ||X1||_t < f2(t) = ||X2||_t or f2(t) = ||X2||_t < f1(t) = ||X1||_t.

Obviously, for a given underdetermined linear system of equations A_{m×n}X = b, there are infinitely many constants γ_i > 0 such that the conclusion of Theorem 4 holds when 0 < t < γ_i. The supremum of these γ_i is called the sparsity constant of the underdetermined linear system of equations A_{m×n}X = b and is denoted by γ(A,b).
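Although γ(A,b) is defined abstractly as a supremum, its effect is easy to observe numerically on Example 2. The following sketch (an editorial addition) locates, by bisection, the single point in (0.8, 0.95) where the curves p ↦ ||X2||_p^p and p ↦ ||X3||_p^p cross; below the crossing X2 gives the smaller value, above it X3 does, so any γ below the crossing fixes the ordering of this pair:

```python
import numpy as np

def lp_norm_p(x, p):
    """||x||_p^p = sum_i |x_i|^p."""
    return float(np.sum(np.abs(x) ** p))

# Candidate solutions X2 and X3 from Example 2.
X2 = np.array([0.1, 0.0, 3.0, 0.4])
X3 = np.array([1.45, 2.0, 0.0, 0.0])

def gap(p):
    # Negative when X2 has the smaller value, positive when X3 does.
    return lp_norm_p(X2, p) - lp_norm_p(X3, p)

# Bisection for the single sign change of gap(p) in (0.8, 0.95).
lo, hi = 0.8, 0.95
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gap(lo) * gap(mid) <= 0:
        hi = mid
    else:
        lo = mid
crossing = 0.5 * (lo + hi)
```

Since ||X||_p = (||X||_p^p)^{1/p}, the curves p ↦ ||X2||_p and p ↦ ||X3||_p cross at exactly the same point, so this is also a crossing of the quantities compared in Theorem 4.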

Corollary 5.

Let A_{m×n}X = b be an underdetermined linear system. Then f1(t) = ||X1||_t and f2(t) = ||X2||_t have at most one intersection in (0, γ(A,b)), where X1, X2 ∈ X and X1 is not an absolute value permutation of X2.

Proof.

It is easy to prove that the conclusion holds by Theorems 3 and 4.

Corollary 6.

Let Xp be the sparsest optimal solution to the problem (Pp) (p < γ(A,b)). Then Xp is the unique sparsest optimal solution to the problem (Pp) up to absolute value permutation.

Proof.

Suppose that X̃p is another sparsest optimal solution to the problem (Pp) and that X̃p is not an absolute value permutation of Xp. Since p < γ(A,b), Theorem 4 applies at every t ∈ (0, p]: either ||Xp||_t < ||X̃p||_t or ||Xp||_t > ||X̃p||_t. In particular, ||Xp||_p ≠ ||X̃p||_p, which contradicts the fact that both Xp and X̃p are optimal solutions to the problem (Pp).

Theorem 7.

The problem (Pp) generates sparser solutions as the value of p decreases when p < γ(A,b).

Proof.

If the conclusion does not hold, then there exist an optimal solution X1 to the problem (P_{p1}) and an optimal solution X2 to the problem (P_{p2}) satisfying p1 < p2 < γ(A,b) and ||X1||_0 = s > k = ||X2||_0. We consider the following two cases.

If ||X1||_{p1} = ||X2||_{p1}, then ||X1||_{p2} < ||X2||_{p2} by Corollary 5 and s > k. This contradicts the fact that X2 is an optimal solution to (P_{p2}).

If ||X1||_{p1} < ||X2||_{p1}, then, because s > k implies ||X2||_t < ||X1||_t for sufficiently small t, the curves ||X1||_t and ||X2||_t have at least one intersection in (0, p1). Since ||X2||_{p2} ≤ ||X1||_{p2}, the curves ||X1||_t and ||X2||_t have at least one further intersection in (p1, p2]. This contradicts Corollary 5.

Theorem 8.

Let X1 and X2 be the sparsest optimal solutions to the problems (P_{p1}) and (P_{p2}) (p1 < p2 < γ(A,b)), respectively, and suppose that X1 is not an absolute value permutation of X2. Then there exist t1, t2 ∈ [p1, p2] such that X1 is the sparsest optimal solution to the problem (Pt) when p1 ≤ t ≤ t1 and X2 is the sparsest optimal solution to the problem (Pt) when t2 < t ≤ p2.

Proof.

Firstly, X1 is not an optimal solution to (P_{p2}), and hence ||X1||_{p2} > ||X2||_{p2}. In fact, if ||X1||_{p2} = ||X2||_{p2}, then, since X1 is an optimal solution to the problem (P_{p1}), Corollary 5 gives ||X1||_{p1} < ||X2||_{p1}. By Corollary 5 again, we have ||X1||_0 < ||X2||_0, which contradicts the fact that X2 is the sparsest optimal solution to (P_{p2}).

We consider the following two cases.

If ||X1||_{p1} = ||X2||_{p1}, then, for any t with p1 < t ≤ p2, X2 is the sparsest optimal solution to the problem (Pt). Otherwise, there exists X3 such that either ||X3||_t < ||X2||_t, or ||X3||_t = ||X2||_t and ||X3||_0 < ||X2||_0. If ||X3||_t < ||X2||_t, then ||X3||_0 > ||X2||_0 by Corollary 5 and ||X3||_{p1} ≥ ||X1||_{p1} = ||X2||_{p1}, which contradicts Theorem 7. If ||X3||_t = ||X2||_t and ||X3||_0 < ||X2||_0, then ||X3||_{p1} < ||X2||_{p1} = ||X1||_{p1} by Corollary 5, which contradicts the fact that X1 is an optimal solution to (P_{p1}). Therefore, we may take t1 = t2 = p1.

If ||X1||_{p1} < ||X2||_{p1}, then, since ||X1||_{p2} > ||X2||_{p2}, the curves ||X1||_t and ||X2||_t have one intersection t0 in (p1, p2), and hence ||X1||_0 < ||X2||_0. We assume, without loss of generality, that ||X1||_0 + 2 = ||X2||_0. Let X3 be the sparsest optimal solution to the problem (P_{t0}). Then X3 is not an absolute value permutation of X2. Otherwise, we would have ||X3||_{t0} = ||X2||_{t0} = ||X1||_{t0}; that is, X1 would be an optimal solution to the problem (P_{t0}). Since ||X1||_{p1} < ||X2||_{p1} = ||X3||_{p1}, we would have ||X1||_0 < ||X3||_0, which contradicts the fact that X3 is the sparsest optimal solution to the problem (P_{t0}).

If X3 is an absolute value permutation of X1, then ||X3||_{t0} = ||X1||_{t0} = ||X2||_{t0}, and thus, by the proof of case (1), for any t with t0 < t ≤ p2, X2 is the sparsest optimal solution to the problem (Pt). Obviously, for any t with p1 ≤ t ≤ t0, X1 is the sparsest optimal solution to the problem (Pt). Therefore, we may take t1 = t2 = t0.

If X3 is not an absolute value permutation of X1, then ||X3||_0 = ||X1||_0 + 1 by Corollary 6, and there exist t1 ∈ (p1, t0) and t2 ∈ (t0, p2) such that t1 is the intersection point of ||X3||_t and ||X1||_t and t2 is the intersection point of ||X3||_t and ||X2||_t. By the argument above, for any t with p1 ≤ t ≤ t1, X1 is the sparsest optimal solution to the problem (Pt), and for any t with t2 < t ≤ p2, X2 is the sparsest optimal solution to the problem (Pt).

3. Conclusion

In this paper, the sparsity of underdetermined linear systems via lp minimization for 0 < p < 1 has been studied. Our research reveals that, for a given underdetermined linear system of equations A_{m×n}X = b, there exists a sparsity constant γ(A,b) > 0 with the following properties when p < γ(A,b). First, the problem (Pp) generates sparser solutions as the value of p decreases. Second, the sparsest optimal solution to the problem (Pp) is unique up to absolute value permutation. Third, if X1 and X2 are the sparsest optimal solutions to the problems (P_{p1}) and (P_{p2}) (p1 < p2), respectively, and X1 is not an absolute value permutation of X2, then there exist t1, t2 ∈ [p1, p2] such that X1 is the sparsest optimal solution to the problem (Pt) for t ∈ [p1, t1] and X2 is the sparsest optimal solution to the problem (Pt) for t ∈ (t2, p2].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Projects 11271297, 11131006, and 11201362, the EU FP7 Project EYE2E (269118), and the National Basic Research Program of China under Project 2013CB329404.

References

[1] M.-J. Lai, "On sparse solutions of underdetermined linear systems," Journal of Concrete and Applicable Mathematics, vol. 8, no. 2, pp. 296–327, 2010.
[2] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
[3] R. G. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–124, 2007.
[4] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
[5] E. J. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, vol. 3, pp. 1433–1452, European Mathematical Society, Zürich, Switzerland, 2006.
[6] R. Gribonval and M. Nielsen, "Sparse representations in unions of bases," IEEE Transactions on Information Theory, vol. 49, no. 12, pp. 3320–3325, 2003.
[7] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.
[8] R. Chartrand and V. Staneva, "Restricted isometry properties and nonconvex compressive sensing," Inverse Problems, vol. 24, no. 3, article 035020, pp. 1–14, 2008.
[9] X. Chen, F. Xu, and Y. Ye, "Lower bound theory of nonzero entries in solutions of l2-lq minimization," SIAM Journal on Scientific Computing, vol. 32, no. 5, pp. 2832–2852, 2010.
[10] S. Foucart and M.-J. Lai, "Sparsest solutions of underdetermined linear systems via lq-minimization for 0 < q ≤ 1," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395–407, 2009.
[11] S. Foucart and M.-J. Lai, "Sparse recovery with pre-Gaussian random matrices," Studia Mathematica, vol. 200, no. 1, pp. 91–102, 2010.
[12] M.-J. Lai and J. Wang, "An unconstrained lq minimization with 0 < q ≤ 1 for sparse solution of underdetermined linear systems," SIAM Journal on Optimization, vol. 21, no. 1, pp. 82–101, 2011.
[13] Q. Sun, "Recovery of sparsest signals via lq-minimization," Applied and Computational Harmonic Analysis, vol. 32, no. 3, pp. 329–341, 2012.
[14] Z. B. Xu, X. Chang, F. Xu, and H. Zhang, "L1/2 regularization: a thresholding representation theory and a fast solver," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 7, pp. 1013–1027, 2012.
[15] Z. B. Xu, H. L. Guo, Y. Wang, and H. Zhang, "Representative of L1/2 regularization among Lq (0 < q ≤ 1) regularization: an experiment study based on phase diagram," Acta Automatica Sinica, vol. 38, no. 7, pp. 1225–1228, 2012.
[16] J. Zeng, S. Lin, Y. Wang, and Z. B. Xu, "L1/2 regularization: convergence of iterative half thresholding algorithm," IEEE Transactions on Signal Processing, vol. 62, no. 9, pp. 2317–2329, 2014.
[17] M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, New York, NY, USA, 2010.