Sparse regression based unmixing has recently been proposed to estimate the abundances of materials present in hyperspectral image pixels. In this paper, a novel sparse unmixing optimization model based on approximate sparsity, namely, approximate sparse unmixing (ASU), is proposed to perform the unmixing task for hyperspectral remote sensing imagery. A variable splitting and augmented Lagrangian algorithm is then introduced to tackle the resulting optimization problem. In ASU, approximate sparsity is used as a regularizer for sparse unmixing; it promotes sparser solutions than the l1 regularizer and is much easier to solve than the l0 regularizer. Three simulated datasets and one real hyperspectral image were used to evaluate the performance of the proposed algorithm in comparison with the l1 regularizer. Experimental results demonstrate that the proposed algorithm is more effective and accurate for hyperspectral unmixing than the state-of-the-art l1 regularizer.
1. Introduction
With its wealth of spectral information, hyperspectral remote sensing has been widely used in target detection, material mapping, and environmental monitoring [1]. However, due to the low spatial resolution of the sensor and the complex diversity of natural surface features, each pixel of a hyperspectral image often contains more than one pure material, leading to spectral mixing. To deal with this problem and effectively identify the components of mixed spectra, the hyperspectral unmixing technique [2] was proposed; it aims at decomposing the measured spectrum of each mixed pixel into a collection of pure constituent spectra (endmembers) and the corresponding fractions (abundances). The linear spectral mixing model and the nonlinear spectral mixing model are the two basic models used to analyze the mixed-pixel problem. Due to its computational tractability and flexibility, the linear mixture model has been widely applied in many different applications. In this letter, we focus on the linear spectral mixing model.
The linear spectral mixing model assumes that the spectral response of a pixel in any given spectral band is a linear combination of all of the pure endmembers. Assuming that the hyperspectral sensor used in data acquisition has L spectral bands, the linear spectral mixing model can be formulated as follows:
(1)y=Ax+n,
where y∈RL is the spectrum vector of a mixed pixel, A∈RL×m is the spectral library, x∈Rm is the abundance vector corresponding to the library A, and n∈RL is the noise vector. Owing to physical constraints, the fractional abundances of all endmembers for a mixed pixel cannot be negative, and their sum should be one. Under the linear spectral mixing model, a variety of unmixing approaches based on geometry [3, 4] and statistics [5, 6] have been proposed. However, these methods are unsupervised and may extract virtual endmembers with no physical meaning.
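As an illustration, the forward model in (1) can be sketched in a few lines; the library contents, dimensions, and noise level below are hypothetical stand-ins, not the data used in this letter's experiments.

```python
import numpy as np

# Hypothetical sketch of the linear mixing model y = A x + n.
rng = np.random.default_rng(0)
L, m = 224, 240                      # spectral bands, library size (illustrative)
A = rng.uniform(0.0, 1.0, (L, m))    # stand-in spectral library (reflectances)

# Abundance vector: nonnegative entries that sum to one (physical constraints).
x = np.zeros(m)
x[[10, 57, 123]] = [0.5, 0.3, 0.2]   # a pixel mixing three endmembers

n = 0.001 * rng.standard_normal(L)   # additive observation noise
y = A @ x + n                        # observed mixed-pixel spectrum
```

Only three of the m = 240 abundance entries are nonzero, which is exactly the sparsity that the unmixing formulations below try to recover.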
In recent years, the sparse signal model has received much attention in the hyperspectral unmixing area [7–12]. Sparse unmixing, a semisupervised unmixing method, is based on the assumption that each mixed-pixel observation in the hyperspectral image can be modeled as a linear combination of a few spectra from a large spectral library that is known in advance [13]. Because the spectral library is often large, the number of endmembers in it is much greater than the number of spectral bands, which implies that only a few entries of the abundance vector x are nonzero. Thus, the sparse unmixing problem can be written as
(2)minx∥x∥0s.t.∥Ax-y∥22≤δ,x≥0,1Tx=1,
where ∥x∥0 denotes the number of nonzero components of x, δ is the tolerated error derived from the noise or the modeling error, and 1Tx=1 means that the sum of the fractional abundance is 1.
However, the l0-norm regularized problem (2) is a typical NP-hard problem and is difficult to solve directly. Candès and Tao [14, 15] proved that the l0 norm can be replaced by the l1 norm under a certain condition known as the restricted isometry property (RIP). Therefore, in most cases, (2) is relaxed to the alternative l1-norm regularized convex problem
(3)minx∥x∥1s.t.∥Ax-y∥22≤δ,x≥0,1Tx=1,
where ∥x∥1=∑i=1m|xi| is the l1 norm of x. Nevertheless, while l1 regularization provides the best convex approximation to l0 regularization and is computationally efficient, it often introduces extra bias into the unmixing and cannot estimate the fractional abundances with the sparsest solutions.
In this letter, we model the hyperspectral unmixing as an approximate sparsity regularized optimization problem and propose to solve it via variable splitting and augmented Lagrangian algorithm. The experimental results demonstrate that the proposed ASU algorithm achieves a better spectral unmixing accuracy. The rest of this letter is organized as follows. In Section 2, the hyperspectral unmixing methodology based on approximate sparsity is described in detail. Section 3 presents the experimental results. Conclusions are drawn in Section 4.
2. Proposed Hyperspectral Unmixing Methodology2.1. Approximate Sparsity Model
The problems of using the l0 norm in hyperspectral unmixing (i.e., the need for a combinatorial search for its minimization and its high sensitivity to noise) are both due to the fact that the l0 norm of a vector is a discontinuous function of that vector. As in [16, 17], our idea is to approximate this discontinuous function by a continuous one, named the approximate sparsity function, which provides a smooth measure of the l0 norm and better accuracy than the l1 regularizer.
The approximate sparsity function is defined as
(4)fσ(x)=2πarctan(|x|σ2),x∈R,σ∈R+.
The parameter σ controls the accuracy with which fσ approximates the indicator of a nonzero entry. In mathematical terms, we have
(5)limσ→0fσ(x)={1x≠00x=0.
Define the continuous multivariate approximate sparsity function Fσ(x) as
(6)Fσ(x)=∑i=1mfσ(xi),x∈Rm×1.
It is clear from (5) that Fσ(x) is an indicator of the number of zero entries in x for small values of σ. Therefore, l0 norm can be approximated by
(7)∥x∥0≈Fσ(x)=∑i=1mfσ(xi).
Note that the larger the value of σ, the smoother Fσ(x) and the worse the approximation to the l0 norm; the smaller the value of σ, the closer the behavior of Fσ(x) to the l0 norm.
Figure 1 shows the behavior of the proposed approximate sparsity function for different values of σ. From Figure 1, we can see that the smaller the value of σ, the closer the approximate sparsity function is to the l0 norm, and the larger the value of σ, the smoother the function becomes.
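This behavior is easy to verify numerically; the following sketch implements fσ and Fσ from (4) and (6) (the test vector is an arbitrary example, not data from the experiments):

```python
import numpy as np

def f_sigma(x, sigma):
    """Smooth surrogate for the indicator of a nonzero entry, Eq. (4)."""
    return (2.0 / np.pi) * np.arctan(np.abs(x) / sigma**2)

def F_sigma(x, sigma):
    """Smooth approximation to the l0 norm of x, Eqs. (6)-(7)."""
    return np.sum(f_sigma(x, sigma))

x = np.array([0.0, 0.5, 0.0, 0.3, 0.2])   # l0 norm is 3
# As sigma shrinks, F_sigma(x, sigma) approaches the l0 norm of x;
# a larger sigma gives a smoother but looser approximation.
print(F_sigma(x, 0.5), F_sigma(x, 0.05), F_sigma(x, 0.01))
```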
The graph of the approximate sparsity function for various values of the parameter σ.
2.2. Unmixing Model Based on Approximate Sparsity
Based on the approximate sparsity model and the sparse unmixing model in (2) or (3), the proposed approximate sparse unmixing (ASU) model is built as follows: combining the approximate sparsity term with the abundance sum-to-one and nonnegativity constraints, the hyperspectral unmixing optimization problem is rewritten as
(8)minxFσ(x)s.t.∥Ax-y∥22≤δ,x≥0,1Tx=1.
The above regularized optimization problem can be converted into an unconstrained version by minimizing the respective Lagrangian function, as follows:
(9)minx12∥Ax-y∥22+λFσ(x)+ιR+(x)+ι{1}(1Tx),
where λ is the regularization parameter. ιR+(x) and ι{1}(1Tx) are the abundance nonnegative and sum-to-one functions, respectively. ιS(x) is the indicator function. If x∈S, then ιS(x)=0; otherwise ιS(x)=∞.
2.3. Numerical Algorithm
Sparse unmixing is a typical underdetermined linear inverse problem, and it is very difficult to find a unique, stable, and optimal solution [18]. Many numerical algorithms have been proposed in the literature to solve sparse unmixing problems. The greedy orthogonal matching pursuit (OMP) algorithm [19] and basis pursuit (BP) [20] are two alternative approaches for solving the l0- and l1-norm constrained sparse unmixing problems, respectively. Chen and Zhang [21] carried over the iteratively reweighted least squares algorithm to solve the lp-constrained sparse unmixing optimization problem. Sun et al. [22] introduced a reweighted iterative algorithm to convert the l1/2 problem into an l1 problem and used the split Bregman iterative algorithm to solve it. In [7], Bioucas-Dias and Figueiredo proposed a sparse unmixing algorithm via variable splitting and augmented Lagrangian (SUnSAL), the core idea of which is to introduce a new variable per regularizer and then exploit the alternating direction method of multipliers (ADMM) [23] to solve the resulting constrained optimization problems.
Owing to the strong performance of SUnSAL, in this letter we carry over the variable splitting and augmented Lagrangian algorithm [24] to solve (9). We first introduce an intermediate variable u and transform problem (9) into the equivalent problem
(10)minx,u12∥Ax-y∥22+λFσ(u)+ιR+(u)+ι{1}(1Tx)s.t.u=x.
The augmented Lagrangian of problem (10) is
(11)L(x,u,d)=12∥Ax-y∥22+λFσ(u)+ιR+(u)+ι{1}(1Tx)+μ2∥x-u-d∥22,
where μ>0 is the penalty parameter and d/μ denotes the Lagrange multipliers associated with the constraint u=x. The basic idea of the augmented Lagrangian method is to seek a saddle point of L(x,u,d), which is also the solution of problem (9).
By using the augmented Lagrangian algorithm, we solve problem (9) by iteratively solving problems (12) and (13). Consider
(12)(xk+1,uk+1)=argminx,uL(x,u,d),(13)dk+1=dk-(xk+1-uk+1).
It is evident that the minimization problem (12) is still hard to solve efficiently in a direct way, since it involves a nonseparable quadratic term and nondifferentiable terms. To solve it, the ADMM algorithm [23] is employed, which alternately minimizes over one variable while fixing the others. By using ADMM, problem (12) can be solved via the following two subproblems with respect to x and u.
(1) x-Subproblem. Let uk and dk be fixed; we solve
(14)xk+1=argminx12∥Ax-y∥22+ι{1}(1Tx)+μ2∥x-uk-dk∥22.
Problem (14) is a linearly constrained quadratic problem, the solution of which is
(15)xk+1=B-1w-C(1TB-1w-1),
where
(16)B=ATA+μI,C=B-11(1TB-11)-1,w=ATy+μ(uk+dk).
If the abundance sum-to-one constraint (ASC) is not enforced [7], we can simply set C=0 to discard the corresponding term.
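A direct implementation of the x-update in (15)-(16) might look as follows; this is a sketch, with `np.linalg.solve` used in place of forming B⁻¹ explicitly:

```python
import numpy as np

def x_update(A, y, u, d, mu, enforce_asc=True):
    """Closed-form x-subproblem solution, Eqs. (15)-(16)."""
    m = A.shape[1]
    B = A.T @ A + mu * np.eye(m)           # B = A^T A + mu I
    w = A.T @ y + mu * (u + d)             # w = A^T y + mu (u^k + d^k)
    Binv_w = np.linalg.solve(B, w)         # B^{-1} w
    if not enforce_asc:
        return Binv_w                      # C = 0: sum-to-one term discarded
    ones = np.ones(m)
    Binv_1 = np.linalg.solve(B, ones)
    C = Binv_1 / (ones @ Binv_1)           # C = B^{-1} 1 (1^T B^{-1} 1)^{-1}
    return Binv_w - C * (ones @ Binv_w - 1.0)
```

With `enforce_asc=True`, the returned vector satisfies 1ᵀx = 1 exactly, as a short calculation on (15) confirms: 1ᵀx = s − (s − 1) = 1, where s = 1ᵀB⁻¹w.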
(2) u-Subproblem. Let xk+1 and dk be fixed; we solve
(17)uk+1=argminuλFσ(u)+μ2∥xk+1-u-dk∥22+ιR+(u).
The objective in (17) is smooth on the nonnegative orthant; therefore, the steepest descent method can be used to solve (17). Consider
(18)uk+1=uk-αΔ,
where α is a constant step size and Δ=2λσ2[π(σ4+(uk)2)]-1+μ(uk-xk+1+dk) is the gradient of the objective in (17) evaluated at uk. Considering the nonnegativity constraint, we then project the vector uk+1 onto the first orthant, so that uk+1←max{0,uk+1}.
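The projected gradient step in (18) can be sketched as follows; nonnegative uk is assumed, so |u| = u and the arctan term of Fσ is differentiable:

```python
import numpy as np

def u_update(u, x, d, lam, mu, sigma, alpha):
    """One steepest-descent step on Eq. (17) followed by projection, Eq. (18)."""
    # Gradient of lam * F_sigma(u) + (mu/2) * ||x - u - d||^2 at u (u >= 0):
    grad = 2.0 * lam * sigma**2 / (np.pi * (sigma**4 + u**2)) + mu * (u - x + d)
    return np.maximum(0.0, u - alpha * grad)   # project onto the first orthant
```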
So far, all of the subproblems have been handled. According to the above description, the main implementation procedure of the proposed method is given in Algorithm 1.
<bold>Algorithm 1: </bold>Pseudocode of ASU algorithm.
Algorithm’s inputs: the measured hyperspectral data y, the spectral library A, regularization parameters λ and μ, the approximate sparsity parameter σ, the maximum iteration number itermax, and the iteration stopping tolerance εstop.
Algorithm’s output: the estimated abundance vector x.
Initialization: initialize u0, x0, and d0.
Main iteration:
Step 1. Compute xk+1 by solving the x-subproblem using (15).
Step 2. Compute uk+1 by solving the u-subproblem using (18).
Step 3. Update the Lagrange multipliers dk+1 according to (13).
Step 4. Increase k (k = k + 1) and go to Step 1.
Termination criterion: stop when ∥uk-xk∥22≤εstop or k>itermax.
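Putting the pieces together, Algorithm 1 can be sketched as below. This is a minimal illustration, not the authors' implementation; the default parameter values are placeholders, and B and C are factored once outside the loop since they do not change across iterations.

```python
import numpy as np

def asu(A, y, lam=1e-3, mu=0.1, sigma=0.4, alpha=0.1,
        iter_max=500, eps_stop=1e-4):
    """Minimal sketch of the ASU iteration (Algorithm 1)."""
    m = A.shape[1]
    x = np.full(m, 1.0 / m)                 # initialize x0, u0, d0
    u = x.copy()
    d = np.zeros(m)
    ones = np.ones(m)
    B = A.T @ A + mu * np.eye(m)            # constant across iterations
    Binv_1 = np.linalg.solve(B, ones)
    C = Binv_1 / (ones @ Binv_1)
    for _ in range(iter_max):
        # Step 1: x-subproblem, closed form (Eq. (15)).
        w = A.T @ y + mu * (u + d)
        Binv_w = np.linalg.solve(B, w)
        x = Binv_w - C * (ones @ Binv_w - 1.0)
        # Step 2: u-subproblem, one projected gradient step (Eq. (18)).
        grad = (2.0 * lam * sigma**2 / (np.pi * (sigma**4 + u**2))
                + mu * (u - x + d))
        u = np.maximum(0.0, u - alpha * grad)
        # Step 3: Lagrange multiplier update (Eq. (13)).
        d = d - (x - u)
        # Termination criterion.
        if np.sum((u - x) ** 2) <= eps_stop:
            break
    return u
```

Returning u rather than x yields an abundance estimate that is nonnegative by construction.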
3. Experiments
In this section, a series of experiments on simulated datasets and a real hyperspectral image is presented to evaluate the ASU method. Consistent comparisons between the ASU algorithm and the representative sparse unmixing algorithm SUnSAL [7] were carried out. In all experiments, we set the maximum iteration number itermax=500 and the stopping tolerance εstop=10-4, and all considered algorithms enforce the abundance nonnegativity constraint. The signal-to-reconstruction error (SRE) [2] was used as the quality metric to assess the accuracy of the unmixing results; it is defined as follows:
(19)SRE(dB)=10·log10(E(∥x∥22)/E(∥x-x^∥22)).
Here, x denotes the true abundance, x^ denotes the estimated abundance, and E(·) denotes the expectation function. Generally speaking, the larger the SRE is, the more accurate the unmixing algorithm is.
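Reading E(·) as an empirical mean over abundance entries, which is an assumption of this sketch rather than something the letter spells out, (19) can be computed as:

```python
import numpy as np

def sre_db(x_true, x_est):
    """Signal-to-reconstruction error (dB), Eq. (19); higher is better."""
    num = np.mean(np.asarray(x_true) ** 2)
    den = np.mean((np.asarray(x_true) - np.asarray(x_est)) ** 2)
    return 10.0 * np.log10(num / den)

# A smaller estimation error yields a larger SRE.
x_true = np.array([0.5, 0.3, 0.2, 0.0])
print(sre_db(x_true, x_true + 1e-3), sre_db(x_true, x_true + 1e-6))
```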
3.1. Simulated Datasets
In this experiment, the spectral library A∈R224×240 contains m=240 spectral signatures with L=224 bands each and was generated by randomly selecting signatures from the USGS library denoted splib06 [25], released in September 2007. The mutual coherence of the library is close to one. Using library A, three simulated datasets of 500 pixels were generated, each containing a different number of endmembers: k1=2 (denoted SD1), k2=4 (SD2), and k3=6 (SD3). In each pixel, the fractional abundances follow a Dirichlet distribution [26]. Zero-mean i.i.d. Gaussian noise at three different levels was added to each of the simulated datasets, with signal-to-noise ratios (SNRs) of 20, 30, and 40 dB, respectively.
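The simulated-data protocol just described can be sketched as follows; the random uniform library below is a stand-in for the selected USGS signatures, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
L, m, n_pix, k = 224, 240, 500, 4      # bands, library size, pixels, endmembers (SD2)
A = rng.uniform(size=(L, m))           # stand-in for the selected USGS signatures

# Each pixel mixes k randomly chosen endmembers with Dirichlet fractions,
# so the abundances are nonnegative and sum to one.
X = np.zeros((m, n_pix))
for j in range(n_pix):
    idx = rng.choice(m, size=k, replace=False)
    X[idx, j] = rng.dirichlet(np.ones(k))

# Add zero-mean white Gaussian noise at a prescribed SNR (here 30 dB).
Y_clean = A @ X
snr_db = 30.0
noise_power = np.mean(Y_clean**2) / 10.0**(snr_db / 10.0)
Y = Y_clean + np.sqrt(noise_power) * rng.standard_normal(Y_clean.shape)
```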
Table 1 shows the SRE (dB) results obtained on the three simulated datasets contaminated with white noise at different levels, along with the corresponding optimal parameters of the two methods. From Table 1, it can be seen that ASU attains the highest SRE (dB) in all cases; since a higher SRE indicates better unmixing performance, ASU achieves better unmixing accuracy than SUnSAL. The improvements in SRE over SUnSAL are especially significant for high SNR values. As expected in sparse unmixing [7], the accuracy of both methods decreases as the number of endmembers increases; nevertheless, ASU performs clearly better than SUnSAL, and its results remain acceptable.
Table 1: SRE (dB) values obtained by different unmixing methods.

Data cube      SNR     SUnSAL                 ASU
SD1 (k1 = 2)   20 dB   3.52  (λ = 3·10^-2)    4.23  (λ = 4·10^-2; σ = 0.6; α = 0.1)
               30 dB   9.01  (λ = 1·10^-2)    13.19 (λ = 2·10^-3; σ = 0.4; α = 0.1)
               40 dB   18.22 (λ = 5·10^-3)    28.13 (λ = 1·10^-3; σ = 0.4; α = 0.1)
SD2 (k2 = 4)   20 dB   3.30  (λ = 1·10^-1)    4.07  (λ = 4·10^-2; σ = 0.7; α = 0.1)
               30 dB   7.24  (λ = 2·10^-2)    9.25  (λ = 4·10^-3; σ = 0.5; α = 0.1)
               40 dB   13.36 (λ = 3·10^-3)    19.88 (λ = 9·10^-4; σ = 0.4; α = 0.1)
SD3 (k3 = 6)   20 dB   1.70  (λ = 9·10^-1)    2.42  (λ = 9·10^-2; σ = 0.8; α = 0.1)
               30 dB   4.55  (λ = 9·10^-2)    5.36  (λ = 5·10^-3; σ = 0.6; α = 0.1)
               40 dB   9.44  (λ = 3·10^-3)    11.57 (λ = 1·10^-3; σ = 0.5; α = 0.1)
To further illustrate the performance of the proposed ASU method, Figure 2 shows the fractional abundances estimated by the two unmixing methods on the simulated dataset SD2 with an SNR of 30 dB, along with the ground-truth abundances. For better visualization, the abundances of the same 50 randomly selected pixels are shown. From Figure 2, it can be observed that the abundance maps estimated by ASU are more similar to the ground-truth maps than those estimated by SUnSAL. Note that the maps estimated by SUnSAL contain many spurious abundance values that are not actually present; the ASU algorithm suppresses most of them. This is because the approximate sparsity regularizer used in ASU produces sparser abundance estimates than the l1 regularizer used in SUnSAL.
Ground-truth and estimated abundance in SD2 with SNR of 30 dB. (a) Ground-truth abundance. (b) SUnSAL estimated abundance. (c) ASU estimated abundance.
The unmixing performance comparison with the SUnSAL algorithm is also presented in Figure 3. From Figure 3, it can be seen that the proposed ASU algorithm obtains better SRE results than SUnSAL for all considered values of σ. Considering the abundance sum-to-one and nonnegativity constraints, the range of the abundance coefficients is [0,1]; therefore, in this letter we set σ∈[0,1].
SRE comparison between ASU and SUnSAL algorithm for various σ. (a) SRE for SNR=30 dB in SD2. (b) SRE for SNR = 40 dB in SD2.
Figure 4 shows the sparsity of the abundances estimated by the considered unmixing algorithms on the simulated dataset SD2 with an SNR of 40 dB. It demonstrates that the number of nonzero abundance values obtained with the approximate sparsity regularizer (ASU) is smaller than that obtained with the l1 regularizer (SUnSAL). This confirms that the approximate sparsity regularizer yields sparser solutions than the l1 regularizer.
Sparsity of estimated abundance in SD2 with SNR=40 dB. (a) Sparsity of SUnSAL algorithm. (b) Sparsity of ASU algorithm.
The parameter σ controls the sparsity of the approximate sparsity regularizer and plays an important role in the unmixing algorithm. Figure 5(a) plots the SRE values for various σ values on the simulated dataset SD2 with SNRs of 20, 30, and 40 dB. Figure 5(b) plots the SRE values for various σ values on simulated datasets with 2, 4, and 6 endmembers at an SNR of 40 dB. It can be seen that the SRE first increases with σ, reaches a peak, and then decreases. The optimal σ, corresponding to the peak SRE value for each SNR or number of endmembers, is marked with a red star in Figure 5. From Figure 5(a), we can see that, for the same number of endmembers, a smaller σ is optimal for data with higher SNR. It can also be seen from Figure 5(b) that, for data with the same SNR, the optimal σ value increases with the number of endmembers.
SRE of ASU algorithm with various σ for different SNRs and different numbers of endmembers K. (a) SRE for different SNR in SD2. (b) SRE for different numbers of endmembers K at SNR = 40 dB.
In the ASU model presented in (9), the regularization parameter λ plays an important role in controlling the tradeoff between fidelity to the data and sparsity of the unmixing results. To analyze the influence of λ on the ASU method, we plotted the SRE values for various λ values on the simulated dataset SD2 with an SNR of 40 dB, as shown in Figure 6(a). As shown there, both lower and higher λ values result in a smaller SRE, and higher λ values lead to a faster decrease in SRE than lower ones. The relationship between the SRE and combinations of the regularization parameter λ and the approximate sparsity parameter σ is presented in Figure 6(b).
SRE of ASU algorithm with various λ and σ in SD2 with SNR=40 dB. (a) SRE in relation to λ. (b) SRE in relation to λ and σ.
3.2. Real Datasets
In the real-data experiment, our method was tested on the well-known Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Cuprite dataset. The portion used in the experiments corresponds to a 250×191-pixel subset comprising 224 spectral bands between 400 and 2500 nm, with a nominal spectral resolution of 10 nm. Prior to the analysis, bands 1-2, 105–115, 150–170, and 223-224 were removed due to water absorption and low SNR, leaving 188 spectral bands. In this experiment, the spectral library is the USGS library containing 498 pure endmember signatures, with the corresponding bands removed, and the parameters were set to λ=4×10-3, σ=0.6, and α=0.1. For illustrative purposes, the mineral map produced by the Tricorder 3.3 software product is shown in Figure 7. It should be noted that the Tricorder map was produced in 1995, whereas the publicly available AVIRIS Cuprite data were collected in 1997. Thus, we only adopt the mineral maps as a reference for a qualitative analysis of the performance of the different sparse unmixing algorithms.
USGS map showing the distribution of different minerals in cuprite mining district in NV (available at http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif).
Figure 8 shows a visual comparison between the fractional abundance maps estimated by SUnSAL and by the proposed ASU algorithm for three different minerals (alunite, buddingtonite, and chalcedony). Comparison with the reference maps (Figure 7) demonstrates that the fractional abundances estimated by ASU are generally more accurate than, or comparable to, those of SUnSAL in the regions assigned to the respective materials. For buddingtonite especially, the ASU results exhibit better spatial consistency for the minerals of interest and fewer outliers in isolated regions of the image.
Abundance fractions estimated by ASU and SUnSAL. (a) SUnSAL abundance fraction for alunite. (b) ASU abundance fraction for alunite. (c) SUnSAL abundance fraction for buddingtonite. (d) ASU abundance fraction for buddingtonite. (e) SUnSAL abundance fraction for chalcedony. (f) ASU abundance fraction for chalcedony.
4. Conclusions
In this paper, we have proposed a new model that uses an approximate sparsity regularizer in place of the l1 regularizer in a sparse regression model for hyperspectral unmixing problems. The proposed model produces sparser and more accurate unmixing results than the traditional l1-regularized model. We have also presented an effective method to solve the approximate sparse regression based unmixing problem via a variable splitting and augmented Lagrangian algorithm. To evaluate the performance of the proposed method, several experiments on a series of simulated datasets and a real hyperspectral image were performed. The experimental results show that the proposed method achieves better performance than the traditional SUnSAL.
Although our experimental results are very encouraging, the proposed method analyzes the hyperspectral data without incorporating its spatial structure information. In the future, we will include spatial-contextual information in approximate sparse regression based unmixing to improve performance. In addition, since our unmixing method operates in a pixel-by-pixel fashion, the procedure could be accelerated using hardware architectures such as graphics processing units (GPUs) or field-programmable gate arrays (FPGAs).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of the paper.
Acknowledgments
The authors would like to thank the anonymous referees for their valuable and helpful comments. The work was supported by the National Natural Science Foundation of China under Grants 61162022 and 61362036, the Natural Science Foundation of Jiangxi China under Grant 20132BAB201021, the Jiangxi Science and Technology Research Development Project of China under Grant KJLD12098, and the Jiangxi Science and Technology Research Project of Education Department of China under Grants GJJ12632 and GJJ13762.
[1] Shippert, P., "Why use hyperspectral imagery?"
[2] Bioucas-Dias, J. M., Plaza, A., Dobigeon, N., Parente, M., Du, Q., Gader, P., Chanussot, J., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches."
[3] Chan, T., Chi, C., Huang, Y., Ma, W., "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing."
[4] Chan, T., Ma, W., Ambikapathi, A., Chi, C., "A simplex volume maximization framework for hyperspectral endmember extraction."
[5] Schmidt, F., Schmidt, A., Tréguier, E., Guiheneuf, M., Moussaoui, S., Dobigeon, N., "Implementation strategies for hyperspectral unmixing using Bayesian source separation."
[6] Arngren, M., Schmidt, M. N., Larsen, J., "Unmixing of hyperspectral images using Bayesian non-negative matrix factorization with volume prior."
[7] Iordache, M.-D., Bioucas-Dias, J. M., Plaza, A., "Sparse unmixing of hyperspectral data."
[8] Greer, J. B., "Sparse demixing of hyperspectral images."
[9] Zhong, Y. F., Feng, R. Y., Zhang, L. P., "Non-local sparse unmixing for hyperspectral remote sensing imagery."
[10] Iordache, M.-D., Bioucas-Dias, J. M., Plaza, A., "Total variation spatial regularization for sparse hyperspectral unmixing."
[11] Iordache, M.-D., Bioucas-Dias, J. M., Plaza, A., "Collaborative sparse unmixing of hyperspectral data," Proceedings of the 32nd IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), July 2012, pp. 7488–7491.
[12] Iordache, M.-D., Bioucas-Dias, J. M., Plaza, A., "Collaborative sparse unmixing of hyperspectral data," Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491.
[13] Iordache, M.-D., Plaza, A. J., Bioucas-Dias, J. M., "Recent developments in sparse hyperspectral unmixing," Proceedings of the 30th IEEE International Geoscience and Remote Sensing Symposium (IGARSS '10), July 2010, pp. 1281–1284.
[14] Candès, E. J., Tao, T., "Decoding by linear programming."
[15] Candès, E. J., Tao, T., "Near-optimal signal recovery from random projections: universal encoding strategies?"
[16] Mohimani, H., Babaie-Zadeh, M., Jutten, C., "A fast approach for overcomplete sparse decomposition based on smoothed l0 norm."
[17] Wang, J. H., Huang, Z. Y., Zhou, Y. Y., Wang, F. H., "Robust sparse recovery based on approximate l0 norm."
[18] Bruckstein, A. M., Donoho, D. L., Elad, M., "From sparse solutions of systems of equations to sparse modeling of signals and images."
[19] Bruckstein, A. M., Elad, M., Zibulevsky, M., "On the uniqueness of nonnegative sparse solutions to underdetermined systems of equations."
[20] Chen, S. S., Donoho, D. L., Saunders, M. A., "Atomic decomposition by basis pursuit."
[21] Chen, F., Zhang, Y., "Sparse hyperspectral unmixing based on constrained ℓp-ℓ2 optimization."
[22] Sun, L., Wu, Z. B., Xiao, L., Liu, J. J., Wei, Z. H., Dang, F. X., "A novel sparse regression method for hyperspectral unmixing."
[23] Bioucas-Dias, J. M., Figueiredo, M. A. T., "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," Proceedings of the IEEE 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), June 2010.
[24] Afonso, M. V., Bioucas-Dias, J. M., Figueiredo, M. A. T., "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems."
[25] USGS digital spectral library splib06, http://speclab.cr.usgs.gov/spectral.lib06
[26] Nascimento, J., Bioucas-Dias, J., "Vertex component analysis: a fast algorithm to unmix hyperspectral data."