On the Uniqueness of the Sparse Signals Reconstruction Based on the Missing Samples Variation Analysis

An approach to sparse signal reconstruction that considers the missing measurements/samples as variables has recently been proposed. The number and positions of the missing samples determine the uniqueness of the solution. It is assumed that the analyzed signals are sparse in the discrete Fourier transform (DFT) domain. A theorem for a simple uniqueness check is proposed. Two forms of the theorem are presented: one for an arbitrary sparse signal and one for an already reconstructed signal. The results are demonstrated on illustrative and statistical examples.


I. INTRODUCTION
In many engineering applications, an incomplete set of samples/measurements arises due to physical constraints of the system. In some cases, randomly positioned samples/measurements are so heavily corrupted that it is better to omit them and consider them unavailable [1], [2]. In these applications, the reduction of the considered dataset is not the result of an intentional compressive sensing strategy [3]-[8]. Nevertheless, the primary goal is signal recovery, as in compressive sensing theory [8], [16]. Recently, an adaptive-step gradient-based method for the reconstruction of sparse signals with missing/omitted samples has been proposed [17]. This method reconstructs the missing samples/measurements in order to complete the set of samples/measurements, in contrast to the common reconstruction methods, which recover the signal in its sparsity domain. The final result of all the algorithms is the same: full recovery of the signal.
In general, uniqueness of the reconstructed signal is guaranteed if the restricted isometry property is used and checked with appropriate isometry constants [5]. However, two problems exist in the implementation of this approach. For a specific measurement matrix, it produces quite conservative bounds; in practice, it produces a large number of false alarms for nonuniqueness. In addition, a uniqueness check with the restricted isometry property requires a combinatorial approach, which is an NP-hard problem (like the solution of the problem itself using the zero-norm in the minimization). In the adaptive gradient-based method, the missing samples/measurements are considered as the minimization variables. The values of the available samples are known and fixed. Obviously, the number of variables in the minimization process is equal to the number of missing samples/measurements in the observation domain. This approach is possible when common signal transforms are the domains of signal sparsity. Then the missing and available samples/measurements form a complete set of samples/measurements.

(University of Montenegro, Podgorica, ljubisa@ac.me, milos@ac.me. This is a preprint version of an article published in Mathematical Problems in Engineering (Hindawi Publishing Corporation), Article ID 629759, http://www.hindawi.com/journals/mpe/aip/629759/)
The discrete Fourier transform (DFT), as the most important signal transform, is considered in this paper as the sparsity domain of the signal. A theorem for the uniqueness of the reconstructed solution, based on the missing sample variations, is presented. Two forms of the theorem are given: one stating the uniqueness condition for a given missing sample transformation matrix, and the other providing the uniqueness check for a sparse signal that has already been recovered using a reconstruction algorithm. The solution is unique in the sense that no variation of the missing sample values can produce another signal of the same or lower sparsity. The theorems provide an easy and computationally efficient uniqueness check.
The paper is organized as follows. After the introduction, the uniqueness theorems and corollaries are defined and illustrated with examples. The proofs are presented in Section III. The worst-case signal is derived and related to the group delay function in Section IV. The theoretical results are demonstrated on simple illustrative examples as well as on statistical examples.

II. ON THE RECONSTRUCTED SIGNAL UNIQUENESS
Consider a signal x(n) with n ∈ N = {0, 1, 2, ..., N − 1}. Assume that Q of its samples, at the positions q_m ∈ N_Q = {q_1, q_2, ..., q_Q}, are missing/omitted. The signal is sparse in the DFT domain, with sparsity s. The reconstruction goal is to get x(n) for all n ∈ N using the available samples at n ∈ M = N \ N_Q. We will consider a new signal of the form x_a(n) = x(n) + z(n), where z(n) ≡ 0 for the available signal positions n ∈ M, while z(n) may take arbitrary values at the positions of the missing samples n = q_m ∈ N_Q = {q_1, q_2, ..., q_Q}. The DFT of this signal is X_a(k) = X(k) + Z(k). The positions of the nonzero values in X(k) are k_{0i} ∈ K_s = {k_{01}, k_{02}, ..., k_{0s}}, with X(k_{0i}) = σ_i. In the minimization process, the values of the missing samples of x_a(n) = x(n) + z(n) for n ∈ N_Q are considered as variables. The goal of the reconstruction process is to get x_a(n) = x(n), i.e., z(n) = 0 for all n ∈ N. This goal should be achieved by minimizing the sparsity of the signal transform X_a(k). The existence of a unique solution of this problem depends on the number of missing samples, their positions, and the signal itself. First, assume that the signal can take any form, including the worst possible one. Then only the number of missing samples and their positions are considered. Uniqueness, in this case, means that if a signal with transform X(k) of sparsity s is obtained using a reconstruction method with a given set of missing samples, then there is no other signal of the same or lower sparsity that satisfies the given available sample values, using the same set of missing samples as variables.
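This setup can be illustrated numerically (a minimal sketch with an assumed small example; the signal length, component frequencies, missing positions, and tolerance are illustrative, not from the paper): the trivial choice z(n) = 0 recovers the true sparsity s, while a generic variation of the missing samples increases the DFT sparsity.

```python
import numpy as np

# Illustrative setup: x(n) is sparse in the DFT domain (s = 2),
# Q = 3 samples are missing, and the missing values are treated
# as variables z(q_m) added to the true signal x_a(n) = x(n) + z(n).
rng = np.random.default_rng(0)
N = 16
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * 3 * n / N) + 0.5 * np.exp(1j * 2 * np.pi * 11 * n / N)

NQ = [2, 7, 12]  # missing sample positions (arbitrary example)
sparsity = lambda v: int(np.sum(np.abs(np.fft.fft(v)) > 1e-8))

z = np.zeros(N, dtype=complex)   # trivial case z(n) = 0
print(sparsity(x + z))           # 2: the true sparsity s

# A generic nonzero variation of the missing samples raises the sparsity.
z[NQ] = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(sparsity(x + z) > 2)       # True
```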
Theorem 1 Consider a signal x(n) that is sparse in the DFT domain with unknown sparsity. Assume that the signal length is N = 2^r samples and that Q samples are missing at the instants q_m ∈ N_Q. Assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity s. The reconstruction result is unique if the inequality

2s < N − max_{h=0,1,...,r−1} {2^h (Q_{2^h} − 1)}

holds, where Q_{2^0} = Q and, for h ≥ 1, Q_{2^h} is the maximal number of missing samples whose mutual distances are multiples of 2^h.

Example: Consider a signal with N = 2^5 = 32 and Q = 9 missing samples at q_m ∈ N_Q = {2, 3, 8, 13, 19, 22, 23, 28, 30}.
Using the theorem, we will find the sparsity limit s below which we are able to claim that the reconstructed sparse signal is unique for any signal form.
- For h = 0 we use Q_{2^0} = Q = 9.
- For h = 1, the number Q_2 is the greater of card{q : q ∈ N_Q and mod(q, 2) = 0} = card{2, 8, 22, 28, 30} = 5 and card{q : q ∈ N_Q and mod(q, 2) = 1} = card{3, 13, 19, 23} = 4, i.e., the maximal number of missing samples at either even or odd positions. Thus Q_2 = 5.
- Next, Q_{2^2} = Q_4 is calculated as the maximal number of missing samples whose mutual distances are multiples of 4. For the various initial counting positions b = 0, 1, 2, 3, the numbers of missing samples with distances being multiples of 4 are 2, 1, 3, and 3, respectively. Then Q_4 = 3.
- For Q_{2^3} = Q_8, the numbers of missing samples at distances being multiples of 8 are found for b = 0, 1, 2, 3, 4, 5, 6, 7. The obtained value is Q_8 = 2.
- Finally, we have two samples at the distance 16 (the samples at the positions q_2 = 3 and q_5 = 19), so Q_16 = 2.
The reconstructed signal of sparsity s is unique if 2s < N − 2^h(Q_{2^h} − 1) holds for all h, producing s < 8 in this example.
The theorem considers a general signal form. It includes the case when the amplitudes of the signal components are related to each other and to the missing sample positions. The specific signal form required by the theorem to reach its bound is analyzed in Section IV. Since this kind of relation is a zero-probability event, we will define a corollary neglecting the probability that the signal values are dependent on each other and related to the missing sample positions at the same time.

Corollary 2 Consider the signal x(n) that is sparse in the DFT domain. Assume that the signal length is N = 2^r samples and that Q samples are missing at the instants q_m ∈ N_Q. Also assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity s. Assume that the amplitudes of the signal components are arbitrary, with arbitrary phases, so that the case when all of them are related to the values defined by the missing sample positions is a zero-probability event. The reconstruction result is not unique if the inequality s ≥ N − 2^h(Q_{2^h} − 1) − 1 holds for some h. The integers Q_{2^h} are calculated in the same way as in Theorem 1.
The sparsity limit s at which we are able to claim that the reconstructed sparse signal is not unique is K_C = min_h {N − 2^h(Q_{2^h} − 1) − 1}, which equals 15 in this example. Pseudo-code for the uniqueness check according to Theorem 1 and Corollary 2 is presented in Algorithm 1. Note that in Corollary 2 we used the condition that the reconstruction result is nonunique (instead of the condition that the reconstruction result is unique, as in Theorem 1), since zero-probability events are included here.
Corollary 2 provides the uniqueness test for the given positions of unavailable samples. In the cases with h > 0, it exploits the periodic structure of the transformation matrix of the missing samples. The periodic form assumes that the positions of the possible zero values in Z(k) do not interfere with the positions of the nonzero signal values. This is possible in the worst-case analysis. For example, with two missing samples at the positions z(q_m) and z(q_m + N/2), the reconstruction process assumes that there are N/2 − 1 zero values in Z(k) and that the same number of zero values is preserved in X(k) + Z(k). This event can occur if we assume that all nonzero values of X(k) have the same structure as Z(k). In this specific case, it means that all of the nonzero signal coefficients are either at odd or at even positions in the frequency domain.

Algorithm 1 Sparsity limits - Theorem 1 and Corollary 2
Require: Set of missing sample positions N_Q, signal length N = 2^r
1: for h ← 0 to r − 1 do
2:   for b ← 0 to 2^h − 1 do
3:     C(b) ← card{q : q ∈ N_Q and mod(q, 2^h) = b}
4:   end for
5:   Q_{2^h} ← max_b C(b)
6: end for
7: K_T ← min_h {(N − 2^h(Q_{2^h} − 1))/2}
8: K_C ← min_h {N − 2^h(Q_{2^h} − 1) − 1}
Output: K_T, K_C
• Every solution with sparsity s < K_T is unique.
• A solution with sparsity s < K_C is unique with probability one, excluding the zero-probability event (when the amplitudes of the signal components are related to each other with a relation defined by the missing sample positions).
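The limits of Algorithm 1 can be sketched in a few lines (a minimal Python transcription under the bounds derived in the proof in Section III; the function name is ours):

```python
import math

def sparsity_limits(N, missing):
    """Sketch of Algorithm 1: sparsity limits K_T (Theorem 1) and
    K_C (Corollary 2) for signal length N = 2**r and a given set of
    missing sample positions."""
    r = N.bit_length() - 1
    V = []
    for h in range(r):
        counts = {}
        for q in missing:                     # Q_{2^h}: maximal number of missing
            b = q % 2**h                      # samples whose mutual distances are
            counts[b] = counts.get(b, 0) + 1  # multiples of 2^h
        Q2h = max(counts.values())
        V.append(N - 2**h * (Q2h - 1))
    K_T = math.ceil(min(V) / 2)   # every solution with s < K_T is unique
    K_C = min(V) - 1              # s < K_C is unique with probability one
    return K_T, K_C

# Worked example from the text: N = 32, Q = 9 missing samples.
print(sparsity_limits(32, [2, 3, 8, 13, 19, 22, 23, 28, 30]))  # (8, 15)
```

The worked example reproduces the limits obtained by hand above: s < 8 for Theorem 1 and s < 15 for Corollary 2.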
Numerical example: A signal with N = 64 samples is considered. The signal sparsity is varied from s = 1 to s = N − 1. For each signal sparsity s, the number of missing samples is varied from Q = 1 to Q = N − 1. For each pair (s, Q), 1000 trials are performed with Q randomly positioned missing samples. Uniqueness is checked by Theorem 1 and Corollary 2. The percentage of trials where uniqueness is guaranteed is presented in Fig. 1. We can clearly see two regions: one where uniqueness was achieved in each trial (P = 100%) and the other where the solution was always nonunique (P = 0%). In the transition between these two regions, the uniqueness highly depends on the missing sample positions, producing 0% < P < 100%.
We see that there is a sharp transition, for example at Q = 16 (for N − Q = 48), from P(s, Q) = P(15, 16) ≅ 1 for s = 15 to P(s, Q) = P(16, 16) ≅ 0 for s = 16 (marked with white dots in Fig. 1): the solution will be unique for s = 15 and nonunique for s = 16. This equality is satisfied for Q_32 = 2, Q_16 = 3, Q_8 = 5, and Q_4 = 9. It means that at least one of the following holds: 1) there are Q_32 = 2 samples at distance 32; 2) there are Q_16 = 3 samples at distances being multiples of 16; 3) there are Q_8 = 5 samples at distances being multiples of 8; 4) there are Q_4 = 9 samples at distances being multiples of 4. The probability that among Q = 16 samples out of N = 64 there are 2 samples at distance 32 is P = 0.92. Since the other events have lower probabilities, this one is sufficient to explain the sharp change in P(s, Q). We may write P(15, 16) − P(16, 16) ≈ 0.92.
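The stated probability can be verified by a direct count (a sketch; the closed form is our derivation, not from the paper): the 64 positions split into 32 pairs {i, i + 32}, and a subset avoiding every pair picks at most one element from each.

```python
from math import comb

# Probability that Q = 16 randomly positioned missing samples out of
# N = 64 contain at least one pair at distance 32. A subset with no
# such pair chooses 16 of the 32 pairs {i, i + 32} and one of the two
# elements from each chosen pair.
N, Q = 64, 16
p_no_pair = comb(N // 2, Q) * 2**Q / comb(N, Q)
p = 1 - p_no_pair
print(round(p, 2))  # 0.92
```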
After the signal reconstruction, we are in a position to additionally specify the uniqueness requirements using the reconstructed signal. When a sparse signal is reconstructed, we want to check the uniqueness of this particular signal. It means that the signal x(n), with its transform X(k) being nonzero at k_{0i} ∈ K_s = {k_{01}, k_{02}, ..., k_{0s}}, is obtained, and we want to check if there is another signal X(k) + Z(k) with the same or lower sparsity, where Z(k) is the DFT of arbitrary values of the samples at the missing sample positions. The positions of the nonzero values in X(k) are not arbitrary, while the positions of the zero and nonzero values in Z(k) can change to produce the minimal possible sparsity of X(k) + Z(k). In the previous example, with two missing samples z(q_m) and z(q_m + N/2), when the signal is already recovered, it means that we cannot assume that all of {k_{01}, k_{02}, ..., k_{0s}} are either odd or even. They are given, since we have already reconstructed a sparse signal.
Theorem 3 Consider the signal x(n) that is sparse in the DFT domain with unknown sparsity. Assume that the signal length is N = 2^r samples and that Q samples are missing at the instants q_m ∈ N_Q. Also assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity s, with the positions of the reconstructed nonzero values in the DFT being k_{0i} ∈ K_s = {k_{01}, k_{02}, ..., k_{0s}}. The reconstruction result is unique if the condition (4), derived in the proof, holds for all h = 0, 1, ..., r − 1, with the integers Q_{2^h} calculated as in Theorem 1 and the corrections S_{2^{r−h}} determined by the positions K_s.

Note that for S_{2^{r−h}} = 0 this Theorem reduces to Theorem 1. For DFT values equally distributed over all positions, this Theorem produces a result close to s ≥ N − Q.

Corollary 4 Consider the signal x(n) that is sparse in the DFT domain with unknown sparsity. Assume that the signal length is N = 2^r samples and that Q samples are missing at the instants q_m ∈ N_Q. Also assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity s, with the positions of the reconstructed nonzero values in the DFT being k_{0i} ∈ K_s. Assume, in addition, that the amplitudes of the signal components are arbitrary, with arbitrary phases, so that the case when all of them are related to the values defined by the missing sample positions is a zero-probability event. The reconstruction result is not unique if the corresponding inequality, with a single canceled component (see the proof of Corollary 4), holds. The integers Q_{2^h} and S_{2^{r−h}} are calculated as in Theorem 3, where the case when all signal components can be related to the values defined by the missing sample positions is considered.
Pseudo-code for uniqueness check according to Theorem 3 and Corollary 4 is presented in Algorithm 2.
Example: Consider a signal with N = 32 and Q = 9 missing samples at the same positions as in the previous example. Two reconstructed signals are tested: a) with s = 15 and all components at odd frequency positions, and b) with s = 17 and components distributed over both even and odd frequency positions. By testing these two signals, we get the following decisions. According to Theorem 1, we cannot claim uniqueness in either case, since s = 15 in the first case and s = 17 in the second case, and both are greater than the Theorem 1 bound s < 8. The same holds for Corollary 2, since in both cases s ≥ 15. By testing these results with Theorem 3, we get that in case a) the solution is nonunique. This is due to the very specific form of the reconstructed signal, with all components found at the odd frequency positions. Since the sparsity pattern is matched by the periodicity 16 in q_m ∈ N_Q, variations of the two signal samples z(q_2 = 3) and z(q_5 = 19) can produce a signal X(k) + Z(k) with the same sparsity as the reconstructed signal. These two samples, as variables, are able to produce many (N/2) zero values in Z(k), either at odd or at even frequency positions (Section IV). In this case, they are at the even positions of X(k) + Z(k). However, in signal b) that is not the case. The nonzero values are distributed over both even and odd frequency positions. Although the sparsity of this signal is s = 17, the reconstruction is unique. The distribution of the nonzero values in the reconstructed X(k) is such that, by varying the two samples z(q_2 = 3) and z(q_5 = 19), we cannot produce a signal X(k) + Z(k) of the same or lower sparsity with nonzero z(q_2 = 3) and z(q_5 = 19). The limit, in this case, is defined by a periodicity in z(q) lower than N/2. Thus, if we obtain this signal using a reconstruction algorithm, the solution is unique.
Example: Consider a signal with N = 1024 and Q = 512 missing samples at q_m ∈ N_Q = {0, 2, 4, ..., 1022}. The reconstructed signal is at the frequencies: a) K_s = {3}; b) K_s = {3, 515}. We can easily check that, in all cases, Theorem 1, Corollary 2, and Theorem 3 declare the reconstruction nonunique, although s = 1 or s = 2 is much smaller than the number of available samples N − Q = 512. The answer is obtained almost immediately, since the computational complexity of Theorem 1, Corollary 2, and Theorem 3 is of order O(N).
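This check can be sketched with the same residue-counting procedure (our helper function, assuming the Theorem 1 bound derived in Section III); the answer is immediate even for N = 1024:

```python
def theorem1_limit(N, missing):
    """Theorem 1 sparsity limit K_T: every solution with s < K_T is unique."""
    r = N.bit_length() - 1
    V = N - (len(missing) - 1)          # h = 0 term, Q_{2^0} = Q
    for h in range(1, r):
        counts = {}
        for q in missing:
            counts[q % 2**h] = counts.get(q % 2**h, 0) + 1
        V = min(V, N - 2**h * (max(counts.values()) - 1))
    return (V + 1) // 2

# All even samples of N = 1024 are missing: Q_2 = 512, so the bound
# collapses to s < 1 and not even a single component is guaranteed.
print(theorem1_limit(1024, list(range(0, 1024, 2))))  # 1
```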
Numerical example: A signal with N = 64 samples is considered. The signal sparsity is varied from s = 1 to s = N − 1. For each signal sparsity s, the number of missing samples is varied from Q = 1 to Q = N − 1. For each pair (s, Q), 1000 trials are performed, with randomly positioned s nonzero DFT values and randomly positioned Q missing samples. In each trial, the uniqueness is checked by Theorem 3 and Corollary 4. The percentage of trials where the uniqueness is guaranteed is presented in Fig. 2. We can clearly see two regions: one where the uniqueness was achieved in each trial (P = 100%) and the other where the solution was nonunique in each trial (P = 0%). The transition between the regions is quite sharp. In this transition region, for a given (s, Q), the uniqueness highly depends on the positions of the DFT values and of the missing samples, producing 0% < P < 100%. The transition region is wider for Theorem 3 than for Corollary 4.

Algorithm 2 Uniqueness check - Theorem 3 and Corollary 4
Require: Set of missing sample positions N_Q, set of nonzero DFT positions K_s, signal length N = 2^r
1: for h ← 0 to r − 1 do
2:   compute Q_{2^h} as in Algorithm 1
3:   for b ← 0 to 2^{r−h} − 1 do
4:     P(b) ← card{k : k ∈ K_s and mod(k, 2^{r−h}) = b}
5:   end for
6:   sort the array P in non-decreasing order
7:   S_{2^{r−h}} ← sum of the first Q_{2^h} − 1 values of P
8:   update the indicator R_1 using the Theorem 3 condition (4)
9:   update the indicator R_2 using the Corollary 4 condition
10: end for
11: R ← R_1 + R_2
Output: R
• R = 2 when the considered solution is unique.
• R = 1 when the considered solution is unique with probability one, excluding the zero-probability event (when the amplitudes of the signal components are related to each other with a relation defined by the missing sample positions).
• R = 0 when the considered solution is not unique.
In Fig. 3, the regions where the probability is higher than 99% are presented. The first region is defined by Theorem 3, which guarantees uniqueness for any signal. The second region is defined by Corollary 4, and it guarantees uniqueness with high probability (this region is signal dependent). These regions are combined in the third subplot.

III. PROOFS

A. Proof of Theorem 1
Consider the DFT of a signal x(n) with N samples. For presentation simplicity we assume the common case N = 2^r, although the results can be generalized for any N. Assume that the M available samples are at the instants n_i ∈ M = {n_1, n_2, ..., n_M} and that the missing samples are at the instants q_m ∈ N_Q = {q_1, q_2, ..., q_Q}, with Q = N − M. Assume that the reconstruction process is done and that the obtained signal is sparse in the DFT domain. Its sparsity is s, with nonzero coefficients at k_{0i} ∈ K_s = {k_{01}, k_{02}, ..., k_{0s}} ⊂ {0, 1, 2, 3, ..., N − 1}. Under this assumption, the signal x(n) is x(n) = (1/N) Σ_{i=1}^{s} σ_i exp(j2πk_{0i}n/N). The amplitudes |σ_i/N| of the signal components are arbitrary, with arbitrary phases arg{σ_i}.
Let us form a signal x_a(n) = x(n) + z(n), where z(n) = 0 for n ∈ M and z(n) takes arbitrary values z(q_m) at the positions of the missing samples q_m ∈ N_Q. The DFT of this signal is X_a(k) = X(k) + Z(k). Denote the number of nonzero values in X_a(k) by ||X_a||_0. The DFT of the signal z(n) is Z(k) = Σ_{m=1}^{Q} z(q_m) exp(−j2πkq_m/N). The aim of the minimization of ||X_a||_0 is to produce the smallest possible value ||X_a||_0 = s. If this is possible only in the trivial case z(n) = 0, then our solution is unique. If there exists a nontrivial solution for z(n) with ||X_a||_0 = s, then we have two different solutions, x(n) and x(n) + z(n), with the same sparsity s, meaning that our solution is not unique. The DFT of the signal z(n) can be written in the matrix form Z = Wz, (2) where z = [z(q_1), z(q_2), ..., z(q_Q)]^T and W is the N × Q matrix with elements exp(−j2πkq_m/N). Since we look for zeros in Z(k), without loss of generality we can rewrite the system (2) in a form normalized with the first column, corresponding to q_1, as Z(k) = exp(−j2πkq_1/N)[z(q_1) + z(q_2)exp(−j2πkq_{12}/N) + ... + z(q_Q)exp(−j2πkq_{1Q}/N)], (3) where q_{1n} = q_n − q_1.
In general, during the minimization we have Q variables z(q_m) (Q degrees of freedom). First assume that there is no common period (smaller than N) in the columns of the transform matrix for the missing sample positions q_i. This case corresponds to pairwise coprime numbers q_{12}, q_{13}, ..., q_{1Q}. Then the Q variables z(q_m) can be used to produce Q − 1 zeros in Z(k) at positions where there is no signal and, in the worst possible case, to cancel out all s nonzero coefficients in X(k). Therefore, the largest possible number of zero values in X_a(k) is (Q − 1) + s. If s < N − (Q − 1) − s, then the considered solution is unique, since only the trivial solution z(n) = 0 results in sparsity s, and every nontrivial solution results in ||X_a||_0 > s. If we obtain a reconstructed signal with sparsity s ≥ (N − Q + 1)/2, then the solution is not unique. However, if 2s < N − Q + 1, it still does not mean that the solution is unique, since we assumed that there is no periodicity in the transform matrix (3). Next we will include all the possible cases when some of the columns of the transform matrix in (3) have a common period lower than N.
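The degrees-of-freedom argument can be verified numerically (a sketch with arbitrarily chosen positions; `numpy` assumed): Q missing-sample variables admit a nontrivial choice that nulls Z(k) at Q − 1 chosen frequencies.

```python
import numpy as np

# Q = 4 variables z(q_m) can produce Q - 1 = 3 zeros of Z(k) at chosen
# frequencies: solve the homogeneous system A z = 0, where
# A[k, m] = exp(-j 2 pi k q_m / N) over the chosen rows k.
N = 32
q = np.array([1, 6, 11, 20])     # missing positions (no common period)
k_zero = np.array([2, 9, 17])    # Q - 1 target zero frequencies
A = np.exp(-2j * np.pi * np.outer(k_zero, q) / N)

# A nontrivial null-space vector: last right-singular vector of A.
z_vals = np.linalg.svd(A)[2].conj()[-1]

z = np.zeros(N, dtype=complex)
z[q] = z_vals
Z = np.fft.fft(z)
print(np.max(np.abs(Z[k_zero])) < 1e-10)  # True: the chosen zeros are placed
```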
The matrix of coefficients in (3) cannot be periodic with a period smaller than Q. Thus, regarding the period of the whole matrix, the worst case would be a transformation matrix that repeats after exactly Q rows, where Q is a divisor of N. Denote the number of repetitions by R = N/Q.
For a periodic structure of the transformation matrix in (3), with such a period that the rows are repeated R times within N, the largest number of zero values in Z(k) is now R(Q − 1). In addition, the nonzero values in Z(k) can, in the worst possible case, cancel all s nonzero DFT values of X(k). Thus, in this case, the lowest possible number of nonzero values in X_a(k) is N − R(Q − 1) − s. For a unique solution, it should be greater than the signal sparsity, i.e., N − R(Q − 1) − s > s or, with R = N/Q, N/(2Q) > s. This is a result of the uncertainty principle in the DFT, presented in Subsection III-B.
The process does not end here. In the minimization process, we must also consider all subsets of the missing samples q_m ∈ N_Q = {q_1, q_2, ..., q_Q}. Namely, it can happen that the worst case, regarding the maximal number of zero values in Z(k), is not the case with the full set of variables z(q). Some subsets of the variables q_m ∈ N_Q may produce a higher number of zero values in Z(k) than the whole set of variables. It means that the reconstruction algorithm may find some variables z(q_i) to be zero-valued and vary only the subset of the remaining variables z(q_i).

Subsets of Missing Samples:
We have concluded that the periodicity R reduces the sparsity of X(k) + Z(k) by increasing the number of possible zero values in Z(k). Consider a general set of missing sample positions q_m ∈ N_Q = {q_1, q_2, ..., q_Q}. Then the algorithm for uniqueness should check the periods of all possible subsets of missing samples. Assuming, as in Theorem 1 and without loss of generality, that N = 2^r, the cases are as follows:
1) Using all missing samples, assuming that there is no common period for all of them (in the sense of (3)), the unique solution is obtained if s < N − (Q − 1) − s. The cases with a periodic matrix structure are included in the steps that follow.
2) Consider the minimal possible number of periods R > 1 for N = 2^r. It is R = 2 repetitions in the matrix of coefficients (3), with period N/2. Any subset of N_Q = {q_1, q_2, ..., q_Q} containing only even or only odd positions can be considered as a set of minimization variables with period N/2. The number of missing samples q_m ∈ N_Q at even positions can be written as N_E = card{q : q ∈ N_Q and mod(q, 2) = 0}. The same should be done for the odd positions in q_m ∈ N_Q: N_O = card{q : q ∈ N_Q and mod(q, 2) = 1}.
Since we look for the worst case in our reconstruction algorithm, we choose the set with more variables (more degrees of freedom), Q_2 = max{N_E, N_O}. Since the number of periods in the matrix of coefficients is 2 for these two subsets of missing samples, at most 2(Q_2 − 1) zero values can be produced in Z(k). This can be larger than (Q − 1). Therefore, besides the condition s < N − (Q − 1) − s considered in 1), we also have to check s < N − 2(Q_2 − 1) − s and use the worst case as the limit for s. It means that, at this point, we should compare (Q − 1) and 2(Q_2 − 1).
Note that the largest possible Q_2 is Q_2 = N/2. In this case, the solution is unique only if s < N − 2(N/2 − 1) − s, i.e., s < 1. This corresponds to the case when all even (or all odd) signal samples are missing. Then we cannot uniquely reconstruct a signal even for sparsity s = 1.
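This limiting case is easy to exhibit directly (a sketch; `numpy` assumed): when all even-position samples are missing, a single component at frequency k and a component at k + N/2 with negated amplitude agree on every available (odd) sample.

```python
import numpy as np

# All even samples missing: two different sparsity-1 signals that
# coincide on all available (odd) positions, so even s = 1 cannot be
# reconstructed uniquely.
N = 16
n = np.arange(N)
k = 3
x1 = np.exp(1j * 2 * np.pi * k * n / N)               # component at k
x2 = -np.exp(1j * 2 * np.pi * (k + N // 2) * n / N)   # component at k + N/2

odd = n[1::2]    # available samples
even = n[0::2]   # missing samples
print(np.allclose(x1[odd], x2[odd]))     # True: identical available data
print(np.allclose(x1[even], x2[even]))   # False: they differ only where unseen
```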
3) This analysis should be continued for all possible periods in N. For N = 2^r, the next possible period is N/4, with the number of periods R = 4. The coefficients in the matrix (3) are periodic with N/4 if the distances between missing samples are multiples of four. Thus, we should divide the set of all missing sample positions N_Q into subsets where the distances between the q_i are multiples of 4, i.e., where mod(q_i, 4) is constant. There are 4 such subsets, obtained for mod(q_i, 4) = b, where b = 0, 1, 2, 3. In the same way as in step 2), denote the cardinality of the largest subset by Q_4 = max_b {card{q : q ∈ N_Q and mod(q, 4) = b}}.
If we find Q_4 such that 4(Q_4 − 1) > max{Q − 1, 2(Q_2 − 1)}, then in z(n) we can consider as variables (nonzero values) only the samples from the subset containing the q_i with mod(q_i, 4) = b that produces Q_4. Then the unique solution is obtained only if s < N − 4(Q_4 − 1) − s. The worst case is when the positions of all missing samples are such that Q_4 = N/4. Then the solution is unique only if s < 2. In this worst case, with as many as 3N/4 of the samples available (Q = Q_4 = N/4), we may reconstruct only signals with sparsity s = 1.
4) The next possible period for N = 2^r is N/8. Distances between missing samples that are multiples of 8 should be considered for any subset of q_m ∈ N_Q = {q_1, q_2, ..., q_Q}, by calculating Q_8 = max_b {card{q : q ∈ N_Q and mod(q, 8) = b}}, b = 0, 1, ..., 7. If 8(Q_8 − 1) is greater than the previously obtained values, then the sparsity bound further reduces to s < N − 8(Q_8 − 1) − s. In the worst case, when the positions of all missing samples are such that Q_8 = N/8, the solution is unique for s < 4. In this case, even with 7N/8 of the samples available, we may reconstruct only signals with sparsity s = 1, 2, 3.
5) The process should continue for all possible periods 2^h < N, h = 4, 5, ..., r − 1. We should calculate Q_{2^h} = max_b {card{q : q ∈ N_Q and mod(q, 2^h) = b}}. If 2^h(Q_{2^h} − 1) is greater than all the previously obtained values, then the sparsity constraint for uniqueness is s < N − 2^h(Q_{2^h} − 1) − s.
6) Summing up all the cases, we get the theorem result that the uniqueness condition is 2s < N − max_h {2^h(Q_{2^h} − 1)}, with Q_{2^0} = Q. In the final stage, if just two special samples at the distance N/2 are missing, the condition reduces to 2s < N − 2^{r−1}, i.e., s < N/4. Thus, special positions of two missing samples reduce the maximal number of components that can uniquely be detected to N/4 − 1.
7) This kind of proof can be generalized for any signal length N, with the possible periods of the matrix obtained as divisors of N.

B. Uncertainty Principle and Theorem 1 Bounds
The bounds for the sparsity are compared with the results following from the uncertainty relation for the DFT, Q · N_Q ≥ N, where the number of nonzero values in z(q) is denoted by Q and the number of nonzero values in its DFT Z(k) is N_Q. In the worst case, for a given Q and for the worst possible distribution of the positions q_m ∈ N_Q = {q_1, q_2, ..., q_Q}, we have N_Q = N/Q. Consider first the case when we cannot exclude the possibility that the signal DFT assumes values related to the missing sample positions, together with the worst possible distribution of positions q_m ∈ N_Q = {q_1, q_2, ..., q_Q}. The number of nonzero values in Z(k) is then N_Q. In the worst case, s of the nonzero signal DFT values can cancel s nonzero values in Z(k), meaning that the minimal number of nonzero values of X_a(k), in the worst case, is N_Q − s. Since this should not be the solution of our minimization problem, the sparsity of the signal should be lower, s < N_Q − s = N/Q − s, i.e., s < N/(2Q). For an arbitrary signal, whose values cannot be described by the missing sample positions, and for the worst possible distribution of positions, we get s < N_Q − 1 = N/Q − 1. These two cases are obtained through the previous analysis as the special worst cases of Theorem 1 and Corollary 2 as well.
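The uncertainty relation and its extremal (impulse-train) case can be checked numerically (a sketch; `numpy` assumed):

```python
import numpy as np

# DFT uncertainty: ||z||_0 * ||Z||_0 >= N, with equality for a periodic
# train of Q impulses spaced N/Q apart (the worst-case positions).
N, Q = 32, 8
z = np.zeros(N, dtype=complex)
z[::N // Q] = 1.0                      # Q impulses with spacing N/Q = 4
NQ = np.count_nonzero(np.abs(np.fft.fft(z)) > 1e-8)
print(Q * NQ, N)   # 32 32: equality Q * N_Q = N is reached
```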

C. Proof of Corollary 2
In Theorem 1 we assumed that Z(k) takes the maximal possible number of Q − 1 zero values and that, at the same time, the remaining nonzero values of Z(k) cancel out all signal components of X(k). This assumption is very unlikely to hold since, after we assume the maximal possible number of Q − 1 zeros in Z(k), we have only one remaining degree of freedom. It means that we can cancel out one signal component with the one remaining variable, while all other components are assumed to take the specific values for which they are also canceled out. Consider now the case when the signal components are not adjusted to these fixed values of Z(k). Then, in reality, we can expect to cancel out only one component of X(k) with the one remaining variable. Repeating the proof of Theorem 1 with one canceled component instead of s, the lowest possible number of nonzero values in X_a(k) becomes N − 2^h(Q_{2^h} − 1) − 1, which produces the nonuniqueness condition of Corollary 2.

D. Proof of Theorem 3
Here we will start with two missing samples, q_{m1} and q_{m2}, at the distance N/2. These special positions of two missing samples reduce the maximal number of components that can uniquely be detected to s = N/2 − 2. However, in this case the positions of the nonzero DFT values k_{0i} ∈ K_s = {k_{01}, k_{02}, ..., k_{0s}} should be very specific. All of them should be at either even or odd positions, producing S_2 = 0. If that is not the case, then S_2 > 0 reduces the number of zero values that can be achieved in the worst case in X(k) + Z(k). Since S_2 of the signal nonzero coefficients are not at the positions of the nonzero values of Z(k), then, in the best possible case, only (s − S_2) out of the s nonzero values of the DFT can be canceled out by the predetermined missing sample values (as in Theorem 1). In addition to the remaining uncanceled values at the nonzero positions of Z(k), there are S_2 nonzero values of X(k) positioned at the zero-value positions of Z(k). They cannot be canceled by Z(k). The uniqueness condition for h = r − 1 will therefore require the correction of the Theorem 1 condition by S_2. This correction of the uniqueness condition with S_{2^1} should be done only if two missing samples at the distance N/2 exist, i.e., if Q_{2^{r−1}} = 2. Denote by P_h(l), l = 1, 2, ..., 2^{r−h}, the sorted array of the numbers of nonzero DFT values over the frequency positions k with mod(k, 2^{r−h}) = b, b = 0, 1, ..., 2^{r−h} − 1. The correction S_{2^1} = S_{2^{r−h}} with h = r − 1 can be calculated as the sum of the first Q_{2^{r−1}} − 1 values of P_h(l). For Q_{2^{r−1}} = 1 (when there are no two samples at the distance N/2), the upper summation limit will be 0 and this kind of correction will not be done, S_{2^1} = 0. Note that the sum of all P_h(l), l = 1, 2, ..., 2^{r−h}, is equal to the signal sparsity s.
In the same way, for h = r − 2 (missing samples at distances being multiples of N/4), the period of Z(k) is 4. In the worst case, when Q_{N/4} = Q_{2^{r−2}} = 4, the maximal number of nonzero values of X(k) at distances being multiples of 4 can be canceled out (with the nonzero values of Z(k)). The remaining nonzero values in the DFT of the signal will be at the zero positions of Z(k) and cannot be canceled out. Note that if Q_{N/4} = Q_{2^{r−2}} = 3, then two nonzero values will remain in each period of Z(k) (with 3 variables we can make only 2 zero values in each period of Z(k)). The total number of nonzero values of X(k) that can be canceled out is then the sum of the two largest numbers in P_h(l), P_h(4) + P_h(3) = s − P_h(1) − P_h(2), where P_h(l) is the sorted array over the residue classes mod 4. The number of remaining nonzero DFT values that cannot be canceled out is S_4 = P_h(1) + P_h(2). If Q_{N/4} = 2, three nonzero values exist in each period of Z(k) (only one zero value of Z(k) can be obtained within a period), meaning that the total number of signal values that can be canceled out is P_h(4) + P_h(3) + P_h(2) = s − P_h(1), with the remaining count S_4 = P_h(1). The uniqueness condition is that

N − 2^h(Q_{2^h} − 1) − (s − S_{2^{r−h}}) + S_{2^{r−h}} > s,   (4)

with

S_{2^{r−h}} = Σ_{l=1}^{Q_{2^h} − 1} P_h(l)   (5)

and 2^{r−h} = 4 for h = r − 2. The same analysis is done for all h, using (4)-(5). The value of s satisfying (4) for all h produces the Theorem 3 statement.

E. Proof of Corollary 4
This Corollary follows from Theorem 3 by neglecting the probability that several DFT coefficients can be canceled out by the predetermined values of the missing samples. Then, instead of (s − S_{2^{r−h}}) canceled values in (4), we have just one, and the resulting inequality is used as a nonuniqueness condition instead of the uniqueness condition.

IV. THE WORST CASE SIGNAL FORM
For the worst case, defined by Theorem 1, the set of possible amplitudes and phases of the signal components is related to the missing sample positions. For the missing sample positions q_m ∈ N_Q = {q_1, q_2, ..., q_Q} used in the minimization process, the worst case in the minimization is when the period of the transformation matrix is such that it repeats immediately after Q samples, in the sense described in the proof of Theorem 1. Then the minimal number of nonzero values in the DFT Z(k) is N_Q = N/Q. It means that the Q variables z(q_1), z(q_2), ..., z(q_Q) can be determined such that Q − 1 values of Z(k) within k = 0, 1, 2, ..., Q − 1 are zero-valued and only one Z(k) is nonzero. In the worst case, this scenario repeats after every Q values (it repeats N/Q times). The maximal number of zero values in Z(k) is then (Q − 1)N/Q. The number of nonzero values (the sparsity) of Z(k) is N/Q. Now we will investigate the form of the signal DFT for which the Theorem 1 sparsity bound holds with the equality sign. The worst case assumes the existence of the maximal number of zeros in Z(k) and one nonzero value of Z(k) in each of the N/Q periods. It also assumes that all s signal components can be canceled out by this nonzero value of Z(k) and the corresponding periodic nonzero values Z(k + iQ). The maximal number of zeros in Z(k) within one period defines the values of all nonzero variables z(n) in the time domain (with the possibility to cancel out one signal component in X(k) + Z(k) within the considered period).
Let us consider subsets of Q equations defined by (2). The first subset is written for the frequencies k = 0, 1, 2, ..., Q − 1, the second for k = Q, Q + 1, ..., 2Q − 1, and so on, until the last one for k = N − Q, N − Q + 1, ..., N − 1. Assume that a nonzero signal DFT value X(k_{0i}) is within the subset of equations for k = iQ, iQ + 1, ..., iQ + Q − 1. Then the solution of system (6) in the variables z(q_1), z(q_2), ..., z(q_Q) will produce the maximal sparsity for this frequency range, i.e., Z(k) + X(k) = 0 for all considered frequencies k = iQ, iQ + 1, ..., iQ + Q − 1. The solution for the missing sample values is given by (7). The values of z(q) obtained from this system do not change for the other s − 1 nonzero signal DFT values X(k_{0m}), k_{0m} ≠ k_{0i}. With the previous system of equations and its solution for the missing samples z(q), we have used all degrees of freedom of the Q-dimensional variable z(q). More zero values in the DFT of the resulting signal X(k) + Z(k) (a lower sparsity of this signal) can be obtained only if the remaining signal DFT values X(k_{0m}), k_{0m} ≠ k_{0i}, are canceled out, by chance. In the worst case, assumed by Theorem 1, all remaining s − 1 nonzero DFT coefficients at k_{0m} ≠ k_{0i} should be canceled out with the already determined missing sample values z(q). Since the transformation matrix is periodic, this will happen only if all remaining signal DFT coefficients assume very specific positions k_{0m} = k_{0i} + r_m Q and specific values, since the periodicity is established with respect to q_1. For a given set of missing sample positions q_m ∈ N_Q = {q_1, q_2, ..., q_Q}, the probability that all components of a measured signal assume the specific positions, amplitudes, and phases (related to the missing sample positions) defined by (8) is a zero-probability event.
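The solution (7) can be sketched numerically as a Q × Q linear system: the rows of the DFT matrix at the Q considered frequencies are inverted against the signal DFT values to be canceled. The missing-sample positions below reuse the example set from this paper, while the sparse signal values and the subset index i are illustrative assumptions:

```python
import numpy as np

N = 32
q_pos = np.array([2, 3, 8, 13, 19, 22, 23, 28, 30])   # missing-sample positions
Q = len(q_pos)

# A test signal DFT X(k), sparse, with illustrative nonzero values.
X = np.zeros(N, dtype=complex)
X[[5, 11, 20]] = [2.0, 1.0 - 1.0j, 3.0]

# One subset of Q equations: cancel X(k) for k = i*Q, ..., i*Q + Q - 1.
i = 0
ks = np.arange(i * Q, (i + 1) * Q)
A = np.exp(-2j * np.pi * np.outer(ks, q_pos) / N)     # DFT rows at these frequencies
z_vals = np.linalg.solve(A, -X[ks])                   # missing-sample values

z = np.zeros(N, dtype=complex)
z[q_pos] = z_vals
Z = np.fft.fft(z)
assert np.max(np.abs((X + Z)[ks])) < 1e-8             # all Q considered values canceled
```

The system matrix is a Vandermonde-type matrix with distinct nodes e^{−j2πq_m/N}, so it is invertible; this uses all Q degrees of freedom of z(q), as argued above.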

A. Group Delay and Missing Sample Positions
Consider a signal y(n), periodic with period N/Q and defined for n = 0, 1, ..., N/Q − 1, where n_0 is the remainder after q_1 is divided by N/Q. Note that the worst case requires that all missing sample positions q_m differ by a multiple of N/Q. It means that the remainder after division of q_m by N/Q is a constant, denoted by n_0. It is interesting to note that the signal DFT values X(k_{0m}), defined by (8), are obtained as a subset of s values of the DFT of the considered signal y(n). In the worst case, the DFT values of the signal should be s samples of a full DFT of a periodic signal whose group delay coincides with the missing sample positions q_m ∈ N_Q = {q_1, q_2, ..., q_Q}, since q_m = q_n + l N/Q in this case. It means that if the missing samples produce a periodic structure, then the signal values should follow this structure as well.
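The group-delay structure can be checked numerically. The snippet below (with assumed illustrative values N = 32, Q = 4, q_1 = 11) builds an impulse train with period N/Q shifted to n_0 and verifies that its DFT is nonzero only at multiples of Q, carrying the linear phase e^{−j2πn_0 k/N}, i.e., a group delay equal to n_0:

```python
import numpy as np

N, Q = 32, 4
P = N // Q                      # period N/Q
q1 = 11
n0 = q1 % P                     # remainder after q1 is divided by N/Q

# Impulse train: one impulse per period, shifted to n0.
y = np.zeros(N)
y[n0::P] = 1.0
Y = np.fft.fft(y)

nonzero = np.flatnonzero(np.abs(Y) > 1e-10)
assert np.array_equal(nonzero, np.arange(0, N, Q))    # only multiples of Q survive
# Linear phase exp(-j*2*pi*n0*k/N) on those bins -> group delay n0:
assert np.allclose(Y[::Q], Q * np.exp(-2j * np.pi * n0 * np.arange(0, N, Q) / N))
```

This is the structure that the worst-case signal DFT values must follow: s of these equally spaced bins, with phases tied to n_0.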
Example: For a signal with N = 32 and missing samples at q_m ∈ N_Q = {2, 3, 8, 13, 19, 22, 23, 28, 30}, the limit for the sparsity s (when we claim that the reconstructed sparse signal is unique, assuming that all signal amplitudes may be related to the missing sample positions) is s < 8.
In this example, we will show what properties a signal must satisfy in the limit case s = 8 so that the solution is not unique. To simplify the notation, assume that one DFT value of the reconstructed signal is X(5) = 2.
The limit of sparsity s = 8 is obtained in the first example with Q_{16} = 2 and 16(2 − 1) = 16. As explained, it corresponds to the missing sample positions q_2 = 3 and q_5 = q_2 + N/2 = 19. It means that the missing sample values of the other samples z(q_m) will be adjusted to their correct zero values, and only z(q_2) = z(3) = z_3 and z(q_5) = z(19) = z_19 will assume nonzero values. In this case the set of missing sample variables reduces to q_m ∈ N_Q = {3, 19} with Q = Q_{16} = 2. The DFT Z(k) of such a signal is Z(k) = z_3 e^{−j2π3k/32} + z_19 e^{−j2π19k/32} = e^{−j2π3k/32} (z_3 + (−1)^k z_19), for k = 0, 1, ..., 31. In the worst case, Z(k) should have the maximal possible number of zeros. We conclude that either z_3 = z_19 or z_3 = −z_19 should hold; otherwise the sparsity of Z(k) would be 32. In addition, Z(k) should cancel out all signal components, including the assumed X(5) = 2. Since z_3 = z_19 would produce Z(2k + 1) = 0, it would not be able to cancel X(5), so z_3 = −z_19 must hold. Both of these signals have the same sparsity s = 8 and satisfy the same set of available samples.
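This example admits a direct numerical check (the value z_3 = 1 + 0.5j is an arbitrary illustration): the closed-form factorization of Z(k) is verified, z_3 = z_19 is shown to zero all odd bins (so X(5) cannot be canceled), and z_3 = −z_19 to zero all even bins while leaving k = 5 reachable:

```python
import numpy as np

N = 32
k = np.arange(N)

def Z_of(z3, z19):
    """DFT of z(n) with nonzero values only at n = 3 and n = 19."""
    z = np.zeros(N, dtype=complex)
    z[3], z[19] = z3, z19
    return np.fft.fft(z)

# Closed form: Z(k) = exp(-j*2*pi*3k/32) * (z3 + (-1)^k * z19)
z3 = 1.0 + 0.5j
closed = np.exp(-2j * np.pi * 3 * k / N) * (z3 + (-1.0) ** k * (-z3))
assert np.allclose(Z_of(z3, -z3), closed)

# z3 = z19 zeroes all odd bins, so it cannot touch X(5);
# z3 = -z19 zeroes all even bins and leaves 16 nonzero odd-bin values.
assert np.allclose(Z_of(1, 1)[1::2], 0)
assert np.allclose(Z_of(1, -1)[0::2], 0)
assert abs(Z_of(1, -1)[5]) > 0        # k = 5 can be canceled in this case
```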
However, if the sampled signal x(n) is not of the very specific form (9), then the solution of sparsity s = 8 will be unique for the given set of available samples. Then z(n) = δ(n − 3) − δ(n − 19) will not be in a position to cancel all 8 DFT values of the signal, and the sparsity of X(k) + Z(k) will be greater than 8.

V. CONCLUSION

Reconstruction of a sparse signal, using the recently proposed gradient-based method, is done by considering the missing samples as variables. Theorems for the uniqueness of the solution obtained by varying the missing samples, in the cases of an arbitrary and of an already reconstructed signal, are stated and proved.
The computational complexity of the proposed uniqueness checks is low. The theory is illustrated on numerical and statistical examples.
The proposed approach can be extended to other signal transforms, including redundant bases (dictionaries). One possible redundant basis closely related to the presented DFT is the short-time Fourier transform with overlapping windows [18], [19].

Fig. 1. Uniqueness probability calculated for various sparsity s and number of available samples (N − Q) by using Theorem 1 (left) and Corollary 2 (right) in 1000 random realizations. The lines s = N − Q and s = (N − Q)/2 are presented as guidelines. In this case the uniqueness is checked without using information about the nonzero DFT coefficient positions in the reconstructed signal.

Fig. 2. Uniqueness probability calculated for various sparsity s and number of available samples (N − Q) by using Theorem 3 (left) and Corollary 4 (right) in 1000 random realizations. In this case the uniqueness is checked after reconstruction, using information about the nonzero DFT coefficient positions in the reconstructed signal.

Fig. 3.