Deterministic Sensing Matrices in Compressive Sensing: A Survey

Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be accurately reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. These matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce the basic problem of compressive sensing and some disadvantages of the random sensing matrices. Some recent results on the construction of deterministic sensing matrices are then discussed.


Introduction
Consider a scenario in which x ∈ R^n is a vector we wish to recover. Let y ∈ R^m (m ≪ n) be a linear measurement of the vector x, given by

y = Ax,

where A is the measurement matrix or sensing matrix. Because this system is underdetermined, the recovery of the vector x from the measurement vector y is an ill-posed problem. However, two papers by Donoho [1] and Candès et al. [2] provided a breakthrough by exploiting sparsity in recovery problems. The authors show that a sparse signal can be reconstructed from very few measurements by solving the ℓ_0-minimization

min_{z ∈ R^n} ‖z‖_0 subject to Az = y,   (P_0)

the ℓ_1-minimization

min_{z ∈ R^n} ‖z‖_1 subject to Az = y,   (P_1)

or by adopting a strategy between (P_0) and (P_1),

min_{z ∈ R^n} ‖z‖_p subject to Az = y.
(P_p)

The sufficient conditions for the solution of (P_0) to coincide with that of (P_1) depend on either the mutual coherence or the Restricted Isometry Property (RIP). These conditions are closely related to each other and play an important role in the construction of sensing matrices. Let A = [a_1 ⋯ a_n] be the m × n sensing matrix under investigation. Its mutual coherence is defined as

μ(A) = max_{i ≠ j} |⟨a_i, a_j⟩| / (‖a_i‖_2 ‖a_j‖_2),   i, j = 1, …, n.
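As an illustration, the mutual coherence defined above can be computed directly from the Gram matrix of the column-normalized sensing matrix. The following is a minimal numpy sketch; the example matrix is hypothetical:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    # Normalize each column to unit l2 norm.
    G = A / np.linalg.norm(A, axis=0)
    # Gram matrix of normalized columns; off-diagonal entries are the
    # pairwise coherences |<a_i, a_j>| / (||a_i||_2 ||a_j||_2).
    gram = np.abs(G.conj().T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

# Example: a 2 x 4 matrix with +/-1 entries; columns 1 and 4 are parallel,
# so the coherence is 1.0 (orthonormal columns would give 0.0).
A = np.array([[1, 1, 1, 1],
              [1, -1, -1, 1]], dtype=float)
print(mutual_coherence(A))  # 1.0
```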
Lemma 1 (see [3]). For an m × n sensing matrix A, the Welch bound is given by

μ(A) ≥ sqrt((n − m) / (m(n − 1))).

The existence and uniqueness of the solution can be guaranteed as soon as the measurement matrix A satisfies the RIP of order k; that is,

(1 − δ_k) ‖z‖_2^2 ≤ ‖Az‖_2^2 ≤ (1 + δ_k) ‖z‖_2^2   for all k-sparse z.

The smallest such value δ_k is called the Restricted Isometry Constant (RIC). The strict condition δ_{2k} < 1 also guarantees the exact solution via ℓ_0-minimization. However, the problem (P_0) remains NP-hard; that is, it cannot be solved in practice. For 0 < p < 1, there is no numerical scheme to compute solutions with minimal ℓ_p-norm either. The problem (P_1), in contrast, is a convex optimization problem and can in fact be recast as a linear program. Solving via ℓ_1-minimization is therefore efficient. Hence, most researchers are interested in recovery via ℓ_1-minimization.
There are two common ways to solve these problems. First, we can recover x via ℓ_1-minimization by solving the problem (P_1), or its noise-aware variant (P_{1,ε}) given as

min_{z ∈ R^n} ‖z‖_1 subject to ‖Az − y‖_2 ≤ ε.

The second method is to use greedy algorithms for ℓ_0-minimization, such as Matching Pursuit (MP), Orthogonal Matching Pursuit (OMP), or their modifications [4-8].
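As an illustration of the greedy approach, the following is a minimal numpy sketch of OMP. The stopping rule, the toy matrix, and the unit-norm-column assumption are illustrative choices, not a reference implementation of any particular paper's algorithm:

```python
import numpy as np

def omp(A, y, k, tol=1e-12):
    """Orthogonal Matching Pursuit sketch: greedily build a support of size
    at most k, then least-squares fit y on the selected columns of A.
    Assumes the columns of A are (roughly) unit-norm."""
    m, n = A.shape
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the columns selected so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Tiny demo: recover a 1-sparse vector from 3 measurements.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]]) / np.sqrt(2)   # unit-norm columns
x_true = np.array([0.0, 3.0, 0.0])
x_hat = omp(A, A @ x_true, k=1)
print(np.allclose(x_hat, x_true))  # True
```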
However, in order to ensure unique and stable reconstruction, the sensing matrix A must satisfy some criteria. One of the well-known criteria is the ℓ_2-RIP. Much attention has been paid to random sensing matrices with independent and identically distributed (i.i.d.) entries, such as Gaussian, Bernoulli, and random Fourier ensembles, to name a few. Their applications have been shown in medical image processing [9], geophysical data analysis [10], communications [11,12], and various other signal processing problems. Even though random sensing matrices ensure reconstruction with high probability, they also have many drawbacks, such as excessive complexity in reconstruction, significant storage requirements, and the lack of an efficient algorithm to verify whether a given matrix satisfies the RIP with a small RIC. Hence, deterministic sensing matrices with specific structures are needed to overcome these problems of random sensing matrices.
Recently, several deterministic sensing matrices have been proposed. We can classify them into two categories: first, matrices based on coherence [13-15]; second, matrices based on the RIP or some weaker variants of the RIP [16-20]. In this paper, we introduce some highlighted results: the deterministic construction of sensing matrices via algebraic curves over finite fields, in terms of coherence; and chirp sensing matrices, second-order Reed-Muller codes, binary Bose-Chaudhuri-Hocquenghem (BCH) codes, and sensing matrices with the statistical RIP, in terms of the RIP.
The rest of this paper is organized as follows. Section 2 introduces some random sensing matrices and their practical disadvantages. In Section 3, we present some highlighted results in terms of deterministic constructions. Section 4 concludes this paper.

Random Sensing Matrices and Their Drawbacks
Recall that x ∈ R^n is the vector we want to recover. Because the number of measurements is much smaller than the dimension (m ≪ n), we cannot find a linear reconstruction map that acts as the identity; that is, a unique solution does not exist for all x in R^n. However, we assume that the signal x belongs to a certain subset Σ_k ⊂ R^n, the set of all k-sparse vectors,

Σ_k = {z ∈ R^n : ‖z‖_0 ≤ k}.

For x ∈ R^n, the best k-term approximation error is given by

σ_k(x) = min_{z ∈ Σ_k} ‖x − z‖,

where ‖ ⋅ ‖ can be any norm on R^n. As noted above, the use of randomly generated sensing matrices has become powerful in compressive sensing. For an upper bound

k ≤ C m / log(n/m),   (7)

where C is a positive constant, the i.i.d. Gaussian matrix achieves the k-RIP, which guarantees recovery of sparse signals with high probability [21,22]. The condition in (7) is also known to hold for the symmetric Bernoulli distribution and changes to k ≤ C m / (log n)^6 for Fourier measurements [23]. For noiseless recovery, the result can be stated as follows.
Theorem 2 (see [24]). If x ∈ R^n is a k-sparse vector and the sensing matrix A satisfies the RIP with a sufficiently small constant δ_{2k} (for instance, δ_{2k} < √2 − 1), then x is the unique minimizer of (P_1).
In practice, the original signals may be contaminated by noise, so the recovered signals are not exactly sparse but rather approximately sparse. Hence, some modified criteria were proposed as follows.
Theorem 3 (see [25]). Suppose that x ∈ Σ_k and the noise e = Ax − y satisfies ‖e‖_2 ≤ ε. If the sensing matrix A has the RIP such that δ_{3k} + 3δ_{4k} < 2, then the output x* of the reconstruction algorithm applied via (P_{1,ε}) obeys

‖x* − x‖_2 ≤ C_k ε,

where the constant C_k depends on the sparsity k.
A new result on RIC was proposed by Candès as follows.
Theorem 4 (see [25]). Given x ∈ R^n and an upper bound on the noise ‖e‖_2 = ‖Ax − y‖_2 ≤ ε, if the sensing matrix A has the RIP with δ_{2k} < √2 − 1, then any solution x* of (P_{1,ε}) obeys

‖x* − x‖_2 ≤ C_1 σ_k(x)_1 / √k + C_2 ε,

where C_1 and C_2 are two positive constants depending on the sparsity k.
Random matrices are easy to construct and ensure reconstruction with high probability. However, they also have drawbacks. First, storing a random matrix requires a large amount of memory. Second, there is no efficient algorithm for verifying the RIP condition: exhaustively checking all submatrices is computationally infeasible. Recovery can therefore become difficult when the dimension of the signal grows large, and we would instead like to construct deterministic measurement matrices that satisfy the RIP with a small δ, as in Theorem 4.
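The best k-term approximation σ_k(x) used in this section is attained simply by keeping the k largest-magnitude entries of x. A minimal numpy sketch, with a hypothetical example vector:

```python
import numpy as np

def best_k_term(x, k):
    """Best k-sparse approximation of x: keep the k largest-magnitude
    entries and zero out the rest. This minimizer attains the k-term
    approximation error sigma_k(x) in any l_p norm."""
    x = np.asarray(x, dtype=float)
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    z[keep] = x[keep]
    return z

x = np.array([0.1, -4.0, 0.3, 2.0, -0.2])
print(best_k_term(x, 2))  # keeps the entries -4.0 and 2.0
```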

Chirp Sensing Matrices. A discrete chirp of length K has the form

v_{r,m}(ℓ) = e^{2πi(rℓ² + mℓ)/K},   ℓ = 0, 1, …, K − 1,
where r ∈ Z_K is the chirp rate and m ∈ Z_K is the base frequency. The full chirp sensing matrix A of size K × K² can be written as

A = [U_1  U_2  ⋯  U_K].

Each matrix U_r (r = 1, …, K) is a K × K matrix whose columns are the chirp signals with a fixed chirp rate and base frequency varying from 0 to K − 1. For instance, given K = 2 and r, m, ℓ ∈ {0, 1}, we obtain v_{r,m}(ℓ) = (−1)^{rℓ² + mℓ}. Hence, we get the 2 × 4 deterministic sensing matrix A as

A = [ 1   1   1   1
      1  −1  −1   1 ].

Note that when the chirp rate is 0, the corresponding block U becomes the Discrete Fourier Transform (DFT) matrix.
Most chirp sensing matrices admit a fast reconstruction algorithm which reduces the complexity to O(K log K).
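The chirp construction above can be sketched directly in numpy; for K = 2 it reproduces the 2 × 4 example matrix:

```python
import numpy as np

def chirp_matrix(K):
    """Full K x K^2 chirp sensing matrix: columns v_{r,m}(l) =
    e^{2*pi*i*(r*l^2 + m*l)/K} for chirp rate r and base frequency m,
    each ranging over 0..K-1."""
    l = np.arange(K).reshape(-1, 1)              # time index, one per row
    cols = []
    for r in range(K):                           # block U_r: fixed chirp rate
        for m in range(K):                       # base frequency varies
            cols.append(np.exp(2j * np.pi * (r * l**2 + m * l) / K))
    return np.hstack(cols)

A = chirp_matrix(2)
# For K = 2 the entries are (-1)^(r*l^2 + m*l); the r = 0 block is the
# 2-point DFT matrix.
print(np.round(np.real(A)).astype(int))
```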

Second-Order Reed-Muller Sensing Matrices. The second-order Reed-Muller functions are given as follows:

φ_{P,b}(a) = (1/2^{p/2}) i^{(2b + Pa)^T a},   a, b ∈ Z_2^p,

where P is a p × p binary symmetric matrix and b is a p × 1 binary vector in Z_2^p. In practice, the matrices P are taken to be all-zero or to have zero diagonals; there are 2^{p(p−1)/2} matrices P satisfying this condition, denoted {P_1, …, P_{2^{p(p−1)/2}}}, and the functions {φ_{P,b}(a)} are then real valued. For each P, the set F_P = {φ_{P,b} : b ∈ Z_2^p} forms an orthonormal basis of R^{2^p}. The inner product between bases is as follows: for any two vectors φ_{P,b} ∈ F_P and φ_{P',b'} ∈ F_{P'},

|⟨φ_{P,b}, φ_{P',b'}⟩| = 2^{−r/2},

where r = rank(P − P'). The deterministic sensing matrix has the form

A_RM = [U_{P_1}  U_{P_2}  ⋯  U_{P_{2^{p(p−1)/2}}}],

where U_{P_i} is the unitary matrix corresponding to F_{P_i} (i = 1, …, 2^{p(p−1)/2}). Note that if we set m = 2^p and n = 2^{p(p+1)/2}, we get an m × n sensing matrix A_RM. For instance, let p = 2; then there are only 2^{2(2−1)/2} = 2 binary symmetric matrices P of size 2 × 2 satisfying the condition. These are

P_1 = [ 0 0 ; 0 0 ],   P_2 = [ 0 1 ; 1 0 ].

Hence, we get the 4 × 8 deterministic sensing matrix

A_RM = (1/2) [ 1   1   1   1   1   1   1   1
               1  −1   1  −1   1  −1   1  −1
               1   1  −1  −1   1   1  −1  −1
               1  −1  −1   1  −1   1   1  −1 ].

Reconstruction algorithms using the second-order Reed-Muller sensing matrices can outperform standard compressive sensing using random matrices via ℓ_1-minimization, especially when the original signal is not exactly sparse and noise is present. Moreover, the nesting of the Delsarte-Goethals sets of the Reed-Muller codes remains feasible when the dimension of the original signal is large [17,29].
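A numpy sketch of this construction, assuming the form φ_{P,b}(a) = 2^{−p/2} i^{(2b + Pa)^T a} with the exponent evaluated over the integers:

```python
import numpy as np
from itertools import product

def reed_muller_matrix(p, Ps):
    """Second-order Reed-Muller sensing matrix sketch: one 2^p x 2^p block
    per binary symmetric zero-diagonal matrix P, with columns
    phi_{P,b}(a) = 2^(-p/2) * i^((2b + P a)^T a) for a, b in {0,1}^p."""
    vecs = [np.array(v) for v in product((0, 1), repeat=p)]
    cols = []
    for P in Ps:
        for b in vecs:
            col = [1j ** int((2 * b + P @ a) @ a) for a in vecs]
            cols.append(np.array(col) / 2 ** (p / 2))
    return np.column_stack(cols)

# p = 2: the two zero-diagonal binary symmetric matrices.
P1 = np.zeros((2, 2), dtype=int)
P2 = np.array([[0, 1], [1, 0]])
A = reed_muller_matrix(2, [P1, P2])
print(A.shape)  # (4, 8)
```

Columns within one block are orthonormal, and columns from the two different blocks have inner products of magnitude 2^{−rank(P1 − P2)/2} = 1/2, matching the coherence formula above.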
Binary Sensing Matrices via BCH Codes. The BCH code C is defined as the binary code whose codewords lie in the null space of a parity-check matrix built from powers of an element β of GF(2^q); in other words, if N denotes the null space of that matrix over GF(2^q), then C = N ∩ F_2^n. An example of a binary matrix formed by a BCH code is given as follows. Let n = 15 and q = 4, and let α be a primitive element of GF(2^4) = GF(16) satisfying α^4 + α + 1 = 0. Then β = α^{(2^q − 1)/n} = α^{(2^4 − 1)/15} = α. The BCH code is the set of 15-tuples that lie in the null space of the matrix

H = [ 1  β  β^2  ⋯  β^{14} ].

Since the BCH code is cyclic, we can also describe it in terms of a generator polynomial, the smallest-degree polynomial having zeros β, β^3, …, β^{2t−1}; for example, g(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) is a generator polynomial whose zeros include β and β^3. The advantages of these matrices are their deterministic construction, simplicity in the sampling process, and reduced computational complexity compared with DeVore's binary sensing matrices. However, the matrices formed by BCH codes do not yet achieve the RIP constraint.
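A sketch of the finite-field arithmetic behind this example: the powers of the primitive element α of GF(16) (with α^4 = α + 1, i.e., primitive polynomial x^4 + x + 1), and the resulting binary parity-check matrix whose column i is the 4-bit expansion of α^i. The bit layout is an illustrative convention:

```python
# Powers of a primitive element alpha of GF(16), with alpha^4 = alpha + 1
# (primitive polynomial x^4 + x + 1). Each field element is stored as a
# 4-bit integer whose bits are the coefficients of 1, x, x^2, x^3.

def gf16_powers():
    powers, a = [], 1          # alpha^0 = 1
    for _ in range(15):
        powers.append(a)
        a <<= 1                # multiply by alpha (shift up one degree)
        if a & 0b10000:        # degree-4 term present:
            a ^= 0b10011       # reduce modulo x^4 + x + 1
    return powers

# Binary parity-check matrix of the n = 15 example: column i is the
# 4-bit binary expansion of alpha^i.
H = [[(p >> row) & 1 for p in gf16_powers()] for row in range(4)]
for row in H:
    print(row)
```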

Sensing Matrices with Statistical Restricted Isometry Property.
In [18], the authors proposed some weaker Statistical Restricted Isometry Properties (StRIPs) defined as follows.
Definition 5 (StRIP). An m × n matrix A is said to satisfy the StRIP of order k with constant δ and probability 1 − ε if

(1 − δ) ‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ) ‖x‖_2^2

holds with probability exceeding 1 − ε with respect to a uniform distribution of the vector x among all k-sparse vectors in R^n of the same norm.
Definition 6 (UStRIP). An m × n matrix A is said to satisfy the UStRIP of order k with constant δ and probability 1 − ε if it satisfies the StRIP and, in addition,

{z ∈ R^n : Az = Ax, ‖z‖_0 ≤ k} = {x}

holds with probability exceeding 1 − ε with respect to a uniform distribution of the vector x among all k-sparse vectors in R^n of the same norm.
These constructions allow recovery methods whose expected complexity is sublinear in n and quadratic in k, compared to the superlinear complexity in n of the Basis Pursuit (BP) or Matching Pursuit (MP) algorithms. The criteria are simple; when satisfied by a deterministic sensing matrix, they guarantee successful recovery of all but an exponentially small fraction of k-sparse signals. The authors also showed that sensing matrices satisfying these properties can be constructed from chirps, second-order Reed-Muller codes, and BCH codes [16-18].
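The probabilistic flavor of the StRIP makes it straightforward to probe empirically. The following hypothetical Monte Carlo sketch estimates, for a given matrix, the fraction of randomly drawn k-sparse unit-norm vectors that violate the norm-preservation condition of Definition 5 (the sampling scheme and trial count are illustrative choices):

```python
import numpy as np

def strip_failure_rate(A, k, delta, trials=2000, seed=0):
    """Monte Carlo estimate of Pr[ | ||Ax||^2 - ||x||^2 | > delta * ||x||^2 ]
    over k-sparse x with uniformly random support and unit-norm Gaussian
    coefficients on that support."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    failures = 0
    for _ in range(trials):
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        coeffs = rng.standard_normal(k)
        x[support] = coeffs / np.linalg.norm(coeffs)   # ||x||_2 = 1
        energy = np.linalg.norm(A @ x) ** 2
        if abs(energy - 1.0) > delta:
            failures += 1
    return failures / trials

# Sanity check: an orthonormal matrix preserves every norm exactly,
# so the estimated failure rate is 0.
print(strip_failure_rate(np.eye(4), k=2, delta=0.1))  # 0.0
```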

Deterministic Construction of Sensing Matrices via Algebraic Curves over Finite Fields.
In [14], DeVore used polynomials over a finite field F_p to construct binary sensing matrices of size p^2 × p^{r+1}, where p is a prime number. Let F_p = {0, 1, …, p − 1}, and let P_r be the set of all polynomials of degree at most r over F_p. There are n = p^{r+1} such polynomials. Any polynomial Q ∈ P_r can be described as a mapping Q : F_p → F_p, x ↦ Q(x), and its graph {(x, Q(x)) : x ∈ F_p} is a subset of F_p × F_p. We order the elements of F_p × F_p as (0, 0), (0, 1), …, (p − 1, p − 1); the set F_p × F_p has m = p^2 ordered pairs. For each Q ∈ P_r, we denote by v_Q the binary vector indexed on F_p × F_p defined by

v_Q(a, b) = 1 if Q(a) = b, and 0 otherwise,

for a, b = 0, 1, …, p − 1. The matrix whose columns are the vectors v_Q is the desired p^2 × p^{r+1} binary sensing matrix. There are several further deterministic constructions of sensing matrices via algebraic curves over finite fields, known as algebraic geometry codes [30-33]. Goppa codes are one of the best-known families, containing many linear codes with good parameters. Hence, these kinds of sensing matrices are good candidates for reconstruction with compressive sensing.
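DeVore's construction can be sketched directly: enumerate all coefficient vectors of degree-≤ r polynomials over F_p and mark the graph of each polynomial in a p² × p^{r+1} binary matrix. A minimal numpy sketch:

```python
import numpy as np
from itertools import product

def devore_matrix(p, r):
    """DeVore-style binary sensing matrix of size p^2 x p^(r+1): the column
    for polynomial Q has a 1 in row (x, Q(x) mod p) for each x in F_p,
    with rows ordered (0,0), (0,1), ..., (p-1, p-1)."""
    rows = p * p
    cols = []
    # Each coefficient tuple (c_0, ..., c_r) defines Q(x) = sum_j c_j x^j.
    for coeffs in product(range(p), repeat=r + 1):
        v = np.zeros(rows, dtype=int)
        for x in range(p):
            Qx = sum(c * x**j for j, c in enumerate(coeffs)) % p
            v[x * p + Qx] = 1
        cols.append(v)
    return np.column_stack(cols)

A = devore_matrix(3, 1)        # 9 x 9 matrix from degree-<=1 polynomials
print(A.shape, A.sum(axis=0))  # every column contains exactly p = 3 ones
```

Two distinct polynomials of degree at most r agree in at most r points, so distinct columns overlap in at most r positions, which bounds the coherence by r/p after normalization.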

Binary Sensing Matrices Generated by Unbalanced Expander Graphs. In [20], a large class of deterministic sensing matrices based on unbalanced expander graphs, that is, combinatorial structures, was proposed. Denoting [n] = {1, …, n}, these bipartite graphs are formalized through the following definition: a (k, ε)-unbalanced expander is a bipartite graph G = ([n], [m], E) with left degree d such that every set S ⊆ [n] of left vertices with |S| ≤ k has at least (1 − ε) d |S| neighbors.
If the sensing matrix A is the adjacency matrix of a high-quality unbalanced expander, then the RIP-p holds for 1 ≤ p ≤ 1 + O(1)/log n.
This approach utilizes sparse matrices, interpreted as adjacency matrices of expander graphs, to recover an approximation of the original signal. The new RIP-p property suffices to guarantee exact recovery algorithms.
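Explicit deterministic expander constructions are involved; as an illustrative stand-in, a random d-left-regular bipartite graph (which is an unbalanced expander with high probability) yields the kind of sparse binary adjacency matrix this approach uses. A hedged numpy sketch:

```python
import numpy as np

def left_regular_adjacency(m, n, d, seed=0):
    """m x n binary adjacency matrix of a random d-left-regular bipartite
    graph: each of the n left vertices (columns) connects to d distinct
    right vertices (rows). Such random graphs are unbalanced expanders
    with high probability; explicit deterministic constructions also
    exist but are more intricate."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=int)
    for j in range(n):
        neighbors = rng.choice(m, size=d, replace=False)
        A[neighbors, j] = 1
    return A

A = left_regular_adjacency(m=8, n=20, d=3)
print(A.sum(axis=0))  # every column has exactly d = 3 ones
```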

Concluding Remarks
In this paper, various deterministic sensing matrices have been surveyed in terms of coherence and the RIP. The advantages of these matrices, in addition to their deterministic constructions, are simplicity in the sampling and recovery processes as well as small storage requirements. Further improvements in both reconstruction efficiency and accuracy may be possible using deterministic matrices in compressive sensing, particularly when some a priori information on the locations of the nonzero components is available.