1. Introduction
Compressive sampling (also known as compressed sensing, CS) is a new sampling paradigm asserting that one can reconstruct a high-dimensional sparse signal from a small number of linear measurements taken at a sub-Nyquist rate [1–3]. The CS technique has attracted considerable attention across a wide array of fields, such as applied mathematics, statistics, and engineering, including signal processing areas such as MR imaging, speech processing, and analog-to-digital conversion. The basic problem in CS is to reconstruct the unknown sparse signal x from the measurements:
(1)y=Φx,
where Φ is an M×N (M≪N) sampling matrix. Suppose Φ=(Φ1,Φ2,…,ΦN), where Φi denotes the ith column of Φ. Throughout the paper, we will assume that the columns of Φ are normalized; that is, ∥Φi∥2=1 for i=1,2,…,N.

It is well understood that under some assumptions on the sampling matrix Φ, the unknown sparse signal x can be reconstructed by solving the l0-minimization problem:
(2)min∥x∥0 subject to y=Φx,
where ∥x∥0 denotes the number of nonzero entries of x. We say a signal x is K-sparse when ∥x∥0≤K.

However, the l0-minimization problem (2) is NP-hard, so one seeks computationally efficient algorithms to approximate the sparse signal x, such as greedy algorithms, l1 minimization, and lp (0<p<1) minimization [4–6].

Orthogonal matching pursuit (OMP), a canonical greedy algorithm, has received much attention as a solver for problem (2), due to its ease of implementation and low complexity; it is described in Algorithm 1. Recently, several popular generalizations of OMP have been introduced, for example, OMMP and KOMP; for details, see [7, 8].

Algorithm 1: Orthogonal matching pursuit, OMP(Φ, y).

Input: Sampling matrix Φ, observation y

Output: Reconstructed sparse vector x* and index set

INITIALIZATION: Let the index set Ω0=⌀ and the residual r0=y. Let the iteration counter t=1.

IDENTIFICATION: Choose the index i such that |ΦiTrt-1|≥|ΦjTrt-1| for all j≠i; that is, i=arg maxj|ΦjTrt-1|.

UPDATE: Add the new index i to the index set: Ωt=Ωt-1∪{i}, and update the signal and the residual

xt|Ωt=arg minz∥y-ΦΩtz∥2, xt|Ωtᶜ=0;

rt=y-Φxt.

If rt=0, stop the algorithm. Otherwise, update the iteration counter t=t+1 and return to Step IDENTIFICATION.
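The steps of Algorithm 1 can be sketched in Python with NumPy; this is a minimal illustration, not the authors' code, and the function name `omp` and the stopping tolerance `tol` are our own choices:

```python
import numpy as np

def omp(Phi, y, max_iter=None, tol=1e-10):
    """Orthogonal matching pursuit, following Algorithm 1.

    Returns the reconstructed vector x and the selected index set.
    """
    M, N = Phi.shape
    if max_iter is None:
        max_iter = M
    support = []            # index set Omega_t
    r = y.copy()            # residual r_0 = y
    x = np.zeros(N)
    for _ in range(max_iter):
        # IDENTIFICATION: column most correlated with the residual
        i = int(np.argmax(np.abs(Phi.T @ r)))
        support.append(i)
        # UPDATE: least-squares fit of y on the enlarged support
        z, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros(N)
        x[support] = z
        r = y - Phi @ x
        if np.linalg.norm(r) <= tol:
            break
    return x, support
```

For a well-conditioned random sampling matrix, a few iterations typically suffice to recover a K-sparse signal exactly, at which point the residual vanishes and the loop stops.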

The mutual incoherence property (MIP) introduced in [9] is an important tool to analyze the performance of OMP. The MIP requires the mutual coherence μ of the sampling matrix Φ to be small, where μ is defined as
(3)μ=maxi≠j|ΦiTΦj|.
Tropp has shown that the MIP condition (2K-1)μ<1 is sufficient for OMP to exactly recover every K-sparse signal [6]. This condition is proved to be sharp in [10].
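Unlike the RIP discussed below, the mutual coherence in (3) is cheap to compute; a minimal sketch (the helper name `mutual_coherence` is ours):

```python
import numpy as np

def mutual_coherence(Phi):
    """Compute mu = max_{i != j} |Phi_i^T Phi_j| for a column-normalized Phi."""
    G = np.abs(Phi.T @ Phi)   # absolute Gram matrix of the columns
    np.fill_diagonal(G, 0.0)  # exclude the diagonal terms i = j
    return G.max()
```

Tropp's condition (2K-1)μ<1 then gives a quickly checkable certificate of exact K-sparse recovery by OMP.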

The restricted isometry property (RIP), introduced in [11], is also widely used in studying a large number of algorithms for sparse recovery in CS. A matrix Φ satisfies the RIP of order K with the restricted isometry constant (RIC) δK if δK is the smallest constant such that
(4)(1-δK)∥x∥22≤∥Φx∥22≤(1+δK)∥x∥22
holds for all K-sparse signals x. A related quantity, the restricted orthogonality constant (ROC) θK,K′, is defined as the smallest quantity such that
(5)|〈Φx,Φx′〉|≤θK,K′∥x∥2·∥x′∥2
holds for all K-sparse signals x and K′-sparse signals x′ with disjoint supports. It was first shown by Davenport and Wakin that the RIP condition
(6)δK+1<1/(3√K)
can guarantee that OMP exactly recovers every K-sparse signal [12]. This sufficient condition was subsequently improved to δK+1<1/(1+√(2K)) [13], δK+1<1/√(2K) [14], δK+1<1/(1+√K) [7], and δK+√K·θK,1<1 [15, 16]. In the other direction, Mo and Shen have given a counterexample, a matrix with δK+1=1/√K for which OMP fails for some K-sparse signals [17]. The main result of this note is to show that the sufficient RIP condition
(7)δK+√K·θK,1<1
is sharp for OMP.

2. Main Result
Theorem 1.
For any given positive integer K≥1, there exist a K-sparse signal x and a matrix Φ whose restricted isometry and restricted orthogonality constants satisfy
(8)δK+√K·θK,1=1
for which OMP fails in K iterations.

Proof.
For any given positive integer K≥1, let
(9)Φ=(Φij)(2K-1)×(2K-1),
where
(10)Φij={0 (i<j), √(2K/(2K-1))·(-i/√(i(i+1))) (i=j), √(2K/(2K-1))·(1/√(i(i+1))) (i>j).

By simple calculation, we can get
(11)∥Φj∥22=(2K/(2K-1))·(j2/(j(j+1))+∑_{i=j+1}^{2K-1} 1/(i(i+1)))=(2K/(2K-1))·(j/(j+1)+(1/(j+1)-1/(j+2))+⋯+(1/(2K-1)-1/(2K)))=(2K/(2K-1))·(1-1/(2K))=1,
〈Φl,Φj〉=(2K/(2K-1))·(-j/(j(j+1))+∑_{i=j+1}^{2K-1} 1/(i(i+1)))=(2K/(2K-1))·(-1/(j+1)+(1/(j+1)-1/(j+2))+⋯+(1/(2K-1)-1/(2K)))=(2K/(2K-1))·(-1/(2K))=-1/(2K-1)
for any integers 1≤l<j≤2K-1.

Thus, for any index set Λ whose cardinality is K, ΦΛTΦΛ is the K×K matrix with every diagonal entry equal to 1 and every off-diagonal entry equal to -1/(2K-1); that is,
(12)ΦΛTΦΛ=(1+1/(2K-1))IK-(1/(2K-1))JK,
where IK is the K×K identity matrix and JK is the K×K all-ones matrix.
It is obvious that the eigenvalues {λi}i=1K of ΦΛTΦΛ are
(13)λ1=⋯=λK-1=1+1/(2K-1), λK=1-(K-1)/(2K-1).
Therefore, the restricted isometry constant of Φ is δK=maxi|λi-1|=(K-1)/(2K-1).

Now, we turn to calculating the restricted orthogonality constant θK,1. In view of (11), we may, without loss of generality, assume that x=(x1,…,xK,0,…,0)T and x′=(0,…,0,x′K+1,0,…,0)T. We have
(14)θK,1=max |〈Φx,Φx′〉|/(∥x∥2·∥x′∥2)=max |〈Φx,ΦK+1〉|/∥x∥2=max (1/(2K-1))·|∑_{i=1}^{K} xi|/∥x∥2=√K/(2K-1).
The last equality follows from the Cauchy–Schwarz inequality, with equality attained when x1=⋯=xK. It is easy to check that
(15)δK+√K·θK,1=(K-1)/(2K-1)+√K·(√K/(2K-1))=1.
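The closed-form values of δK, θK,1, and identity (15) can be checked numerically by building the matrix of (10); a self-contained sketch (the variable names and the choice K=4 are ours):

```python
import numpy as np

K = 4
n = 2 * K - 1
c = np.sqrt(2 * K / (2 * K - 1))
# Build the (2K-1) x (2K-1) matrix of (10); i, j are 1-based as in the text.
Phi = np.zeros((n, n))
for i in range(1, n + 1):
    for j in range(1, n + 1):
        if i == j:
            Phi[i - 1, j - 1] = -c * i / np.sqrt(i * (i + 1))
        elif i > j:
            Phi[i - 1, j - 1] = c / np.sqrt(i * (i + 1))

G = Phi.T @ Phi
# (11): unit-norm columns, pairwise inner products -1/(2K-1).
# (13): eigenvalues of any K-column Gram block give delta_K = (K-1)/(2K-1).
lam = np.linalg.eigvalsh(G[:K, :K])
delta_K = np.abs(lam - 1).max()
# (14): theta_{K,1} is attained at x_1 = ... = x_K, giving sqrt(K)/(2K-1).
theta_K1 = abs(G[K, :K].sum()) / np.sqrt(K)
```

By the symmetry in (11), every K-column submatrix yields the same Gram block, so the single block checked above determines δK.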

Let x=(1,…,1,0,…,0)T∈R2K-1, where the first K entries equal 1; we have
(16)|Sj|=|〈Φx,Φj〉|=K/(2K-1), ∀j∈{1,2,…,2K-1}.
Since every column of Φ has the same absolute correlation with the initial residual r0=y, the identification step cannot distinguish the true support {1,…,K} from the remaining indices, so OMP may select a wrong index; that is, OMP fails in the first iteration. The proof is complete.
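The tie in (16) can also be observed numerically; a self-contained sketch rebuilding the matrix of (10) in vectorized form (K=5 is an arbitrary choice of ours):

```python
import numpy as np

K = 5
n = 2 * K - 1
c = np.sqrt(2 * K / (2 * K - 1))
i = np.arange(1, n + 1)[:, None]   # row index (1-based)
j = np.arange(1, n + 1)[None, :]   # column index (1-based)
# Matrix of (10): lower triangular with a negative diagonal.
Phi = np.where(i == j, -c * i / np.sqrt(i * (i + 1)),
               np.where(i > j, c / np.sqrt(i * (i + 1)), 0.0))
x = np.zeros(n)
x[:K] = 1.0                        # the K-sparse signal of (16)
S = Phi.T @ (Phi @ x)              # correlations S_j = <Phi x, Phi_j>
# Every |S_j| equals K/(2K-1), so OMP's first identification step is ambiguous.
```

The support columns correlate with value K/(2K-1) and the off-support columns with -K/(2K-1), so the absolute correlations coincide exactly.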