Double Regression-Based Sparse Unmixing for Hyperspectral Images

Sparse unmixing has attracted widespread attention from researchers, and many effective unmixing algorithms have been proposed in recent years. However, most algorithms improve unmixing accuracy at the cost of heavy computation: higher accuracy usually means higher computational complexity. To address this problem, we propose a novel double regression-based sparse unmixing model (DRSUM), which obtains better unmixing results at lower computational cost. DRSUM decomposes the complex objective function into two simple subproblems and completes the unmixing through two sparse regressions, with the result of the first regression added as a constraint to the second. DRSUM is an open model: different constraints can be added to improve unmixing accuracy, and appropriate preprocessing can further improve the results. Under this model, a specific algorithm called double regression-based sparse unmixing via K-means (DRSUM_K-means) is proposed. An improved K-means clustering algorithm is first used for preprocessing; we then impose a single sparsity constraint on the first sparse regression and a joint sparsity constraint (using the ℓ2,0 norm to control sparsity) on the second. To meet the sparsity requirement, we introduce the row-hard-threshold function, which handles the ℓ2,0 norm directly. DRSUM_K-means can then be solved efficiently under the alternating direction method of multipliers (ADMM) framework. Simulated and real-data experiments demonstrate the effectiveness of DRSUM_K-means.


Introduction
Hyperspectral remote sensing technology has shown great development and application potential in recent years [1][2][3][4][5]. However, due to limited spatial resolution and high scene complexity, hyperspectral images often contain many mixed pixels, which seriously affect the processing and application of hyperspectral data. Spectral unmixing, which aims to identify the spectral signatures (endmembers) in a scene and estimate their corresponding abundances, has therefore received widespread attention [6]. The linear mixing model [7], which assumes that mixed pixels are linear combinations of endmembers, is widely used for its simplicity. Under this model, unmixing algorithms fall into three main categories: statistics-based, geometry-based, and sparse-regression-based.
Sparse unmixing [8] is a semisupervised approach that assumes each mixed pixel is a linear combination of only a few endmembers from a known spectral library, so it needs neither endmember extraction nor estimation of the number of endmembers. Numerous effective sparse unmixing algorithms have been proposed in recent years. Sparse unmixing via variable splitting and augmented Lagrangian (SUnSAL) [8] is a classic algorithm that introduced new ideas into spectral unmixing, but its performance suffers from the mutual correlation of spectral library atoms, and its sparsity is insufficient. Collaborative SUnSAL (CLSUnSAL) [9] imposes a joint sparsity constraint on the hyperspectral image, using the ℓ2,0 norm to control sparsity. However, to avoid the resulting nondeterministic polynomial-time hard (NP-hard) problem, CLSUnSAL relaxes the ℓ2,0 norm to a convex surrogate, which leaves the solution insufficiently sparse in many scenarios. The reweighted sparse unmixing algorithm adopts a reweighting strategy to enhance sparsity [10]. Collaborative sparse unmixing via the ℓ0 norm (CSUnL0) [11] handles the ℓ2,0 norm directly and achieves better sparsity than many convex relaxation algorithms.
Besides sparsity, making full use of the spatial information in hyperspectral images is another important way to improve unmixing performance. Sparse unmixing via variable splitting augmented Lagrangian and total variation (SUnSAL-TV) [12] introduces a total variation regularizer to exploit spatial information. Spectral-spatial weighted sparse unmixing (S²WSU) [13] introduces spatial information through a spatial weighting factor. Reference [14] combines a total variation regularizer with joint-sparse blocks to improve unmixing accuracy. Reference [15] proposed row-sparsity spectral unmixing via total variation (RSSUn-TV) and achieved promising results. Reference [16] uses an ℓ1−ℓ2 sparsity model and a total variation regularizer to describe the sparse characteristic. The high spatial correlation of hyperspectral images implies a low column rank, and a low-rank constraint can exploit the global structure of the data, so the low-rank property can also be used to capture spatial information. The alternating direction sparse and low-rank unmixing algorithm (ADSpLRU) [17] imposes a low-rank constraint on the single sparsity model. Reference [18] incorporates a low-rank constraint into sparse unmixing to suppress noise and preserve more detail. Reference [19] proposed nonconvex joint-sparsity and low-rank unmixing with dictionary pruning. Reference [20] combines joint-sparse blocks with a low-rank constraint. Reference [21] proposed a nonlocal low-rank prior sparse unmixing algorithm and achieved promising results.
The above algorithms all obtain promising results. However, higher unmixing accuracy often comes with higher computational complexity. To solve this problem, we propose a novel double regression-based sparse unmixing model (DRSUM). DRSUM decomposes the complex unmixing process into two simple sparse regressions, which reduces the computational complexity. An important contribution of this paper is that DRSUM is an open model and framework: different constraints can be imposed on the first and second sparse regressions to improve unmixing accuracy, and appropriate preprocessing can further improve the results. Under this model, a specific algorithm called double regression-based sparse unmixing via K-means (DRSUM_K-means) is proposed. We first use an improved K-means clustering algorithm [22] for preprocessing, which classifies pixels into categories; pixels in the same category share the same active endmember set. We therefore use the mean vector of each category, rather than all pixels, for the first sparse unmixing, so the number of pixels participating in the first regression equals the number of categories, which greatly reduces the computational complexity. A single sparsity constraint is imposed to unmix these mean vectors, and the resulting abundances are assigned to all pixels of the corresponding categories. After the first unmixing, this result is taken as a constraint, and a joint sparsity constraint (using the ℓ2,0 norm to control sparsity) is imposed on the original hyperspectral image for the second unmixing. However, solving the ℓ2,0 norm directly is NP-hard [23], and most algorithms adopt a convex relaxation strategy at the expense of unmixing accuracy and sparsity.
To address this issue, we introduce the row-hard-threshold function to handle the ℓ2,0 norm directly, which improves both unmixing accuracy and sparsity. Simulated and real-data experiments show that the proposed DRSUM_K-means algorithm obtains better unmixing results.

Sparse Unmixing
Sparse unmixing aims to find an optimal linear combination of endmembers from the spectral library for the mixed pixels. Let Y ∈ R^(L×n) represent a hyperspectral image with n pixels and L spectral bands. The linear mixing model can be written as

Y = AX + N, (1)

where A ∈ R^(L×m) is the known spectral library containing m atoms (endmembers), X ∈ R^(m×n) is the abundance matrix of these m endmembers in the hyperspectral image, and N ∈ R^(L×n) is the error term. The number of endmembers present in a real hyperspectral image is far smaller than the number of atoms in the spectral library; most library endmembers do not appear in the image. Therefore, most elements of the abundance matrix X are zero, i.e., only a few elements of X are nonzero [24]. X is thus sparse, and a sparsity constraint can be imposed to solve equation (1). In addition, the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC) should be met. It was proven in [8] that the ANC automatically imposes a generalized ASC, so we impose only the ANC and relax the ASC. The sparse unmixing problem can then be written as

min_X (1/2)||AX − Y||_F^2 + λ||X||_{1,0}  subject to  X ⪰ 0, (2)

where ||·||_F is the Frobenius norm, λ is the regularization parameter, ||X||_{1,0} counts the nonzero elements of X, and ⪰ denotes element-wise comparison. It is well known that solving equation (2) directly is NP-hard. The SUnSAL algorithm relaxes the ||X||_{1,0} norm to the ||X||_{1,1} norm. SUnSAL is a single-pixel sparse unmixing algorithm and is seriously affected by the mutual correlation of spectral library atoms. CLSUnSAL imposes a collaborative sparsity constraint to improve unmixing accuracy, and its unmixing problem is

min_X (1/2)||AX − Y||_F^2 + λ||X||_{2,1}  subject to  X ⪰ 0, (3)

where ||X||_{2,1} = Σ_{i=1..m} ||X_{i,:}||_2 and X_{i,:} is the i-th row of X. Note that ||X||_{2,1} is a convex relaxation of ||X||_{2,0}, used to avoid the NP-hard problem.
However, this relaxation weakens the sparsity constraint and reduces unmixing accuracy. Although SUnSAL and CLSUnSAL are faster than many algorithms, their unmixing accuracy is insufficient. To obtain higher unmixing accuracy at low computational cost, the double regression-based sparse unmixing model is proposed.
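To make the relaxation concrete, the sketch below solves the ℓ1,1-relaxed problem (2) with the ANC by a plain projected proximal-gradient (ISTA) loop on synthetic data. This is an illustration only, not SUnSAL's ADMM implementation, and all sizes and parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
L, m, n = 50, 40, 100                      # bands, library atoms, pixels
A = rng.random((L, m))                     # synthetic nonnegative "library"
X_true = np.zeros((m, n))
X_true[[3, 7, 12], :] = rng.dirichlet(np.ones(3), size=n).T  # 3 active endmembers
Y = A @ X_true + 0.001 * rng.standard_normal((L, n))

lam = 1e-3                                 # sparsity weight
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
X = np.zeros((m, n))
for _ in range(2000):
    G = A.T @ (A @ X - Y)                  # gradient of the data-fidelity term
    # prox of lam*||.||_{1,1} plus the ANC is a one-sided soft-threshold
    X = np.maximum(X - step * G - step * lam, 0.0)

residual = np.linalg.norm(A @ X - Y) / np.linalg.norm(Y)
```

Because only three library rows are truly active, the recovered X is row-sparse while the reconstruction residual stays small.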

Double Regression-Based Sparse Unmixing Model
3.1. DRSUM. The core idea of DRSUM is to decompose the complex unmixing process into two simple sparse regressions. Different constraints can be added to the first and second sparse unmixing to improve accuracy, and appropriate preprocessing can further improve the results. The first sparse unmixing problem is

min_{X_1} (1/2)||A X_1 − Y||_F^2 + g(X_1)  subject to  X_1 ⪰ 0, (4)

where Y denotes the hyperspectral image, X_1 is the corresponding abundance matrix, and g(X_1) denotes the constraint term. After the first sparse unmixing, the obtained abundance matrix X_1 is added as a constraint to the second unmixing process:

min_X (1/2)||AX − Y||_F^2 + f(X) + (α/2)||X − X_1||_F^2  subject to  X ⪰ 0, (5)

where α denotes the regularization parameter weighting the first-stage result and f(X) denotes the constraint on X. There are no specific restrictions on g(X_1) and f(X): in different application scenarios, they can be different types of constraints chosen to guarantee unmixing accuracy, such as sparsity constraints, low-rank constraints, graph regularization, total variation regularizers, nonlocal similarity, or image edge information. DRSUM is therefore flexible and widely applicable.

Preprocessing.
DRSUM is an open model and framework; under it, different preprocessing steps and constraints yield different algorithms. In this paper, we first use an improved K-means clustering algorithm to classify the hyperspectral image. K-means is simple and fast and has only one parameter to determine: the number of categories K.
It should be noted that the original K-means algorithm classifies pixels only by spectral distance, ignoring the spatial information in the hyperspectral image. To improve classification accuracy, we consider both the spectral distance d_1 and the spatial distance d_2. For pixels i and j,

d_1(i, j) = ||y_i − y_j||_2, (6)

d_2(i, j) = sqrt((a_i − a_j)^2 + (b_i − b_j)^2), (7)

where y_i and y_j denote the spectral vectors and (a_i, b_i) and (a_j, b_j) denote the positions of pixels i and j, respectively. To control the weighting effectively, d_1(i, j) and d_2(i, j) are normalized:

d_1′(i, j) = d_1(i, j)/max(d_1),  d_2′(i, j) = d_2(i, j)/max(d_2), (8)

where max(d_1) and max(d_2) denote the maximum spectral and spatial distances over all pixel pairs. The combined distance is then

d(i, j) = d_1′(i, j) + ρ · d_2′(i, j), (9)

where ρ weights the spatial distance against the spectral distance; to simplify the calculation, ρ is set to 1 in this paper. In each iteration, every pixel is assigned to the nearest cluster center. After each round of classification, the cluster centers are updated as the averages of the spectral vectors and position coordinates of all pixels in the corresponding categories; the process stops when the cluster centers no longer change. More details on the K-means algorithm can be found in reference [22]. After preprocessing, the pixels are classified into K categories, and pixels in the same category share the same active endmember set. We can therefore use the mean vector of each category, rather than all pixels, for the first unmixing, which greatly reduces the computational complexity: the number of pixels participating in the first sparse unmixing equals the number of categories. In addition, by using both spectral and spatial information for classification, the improved K-means algorithm improves the information utilization of hyperspectral images.
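The preprocessing step can be sketched as follows. This is our minimal reading of the improved K-means; the function name, the deterministic initialization, and the bounding-box approximation of max(d_1) and max(d_2) are our illustrative choices, not the authors':

```python
import numpy as np

def improved_kmeans(Y, coords, K, rho=1.0, n_iter=100):
    """Spatial-spectral K-means sketch: distance = normalized spectral
    distance + rho * normalized spatial distance.
    Y: (L, n) pixel spectra; coords: (n, 2) pixel positions."""
    n = Y.shape[1]
    # Normalizers for d1 and d2: the exact maxima over all pixel pairs cost
    # O(n^2), so we approximate them by each point set's bounding-box diagonal.
    d1_max = np.linalg.norm(Y.max(axis=1) - Y.min(axis=1))
    d2_max = np.linalg.norm(coords.max(axis=0) - coords.min(axis=0))
    # deterministic initialization (evenly spaced pixels) for reproducibility
    idx = np.linspace(0, n - 1, K).astype(int)
    c_spec, c_pos = Y[:, idx].copy(), coords[idx].copy()
    labels = np.full(n, -1)
    for _ in range(n_iter):
        d1 = np.linalg.norm(Y[:, :, None] - c_spec[:, None, :], axis=0) / d1_max
        d2 = np.linalg.norm(coords[:, None, :] - c_pos[None, :, :], axis=2) / d2_max
        new = np.argmin(d1 + rho * d2, axis=1)   # assign to nearest center
        if np.array_equal(new, labels):          # centers and labels are stable
            break
        labels = new
        for k in range(K):                       # update centers as category means
            mask = labels == k
            if mask.any():
                c_spec[:, k] = Y[:, mask].mean(axis=1)
                c_pos[k] = coords[mask].mean(axis=0)
    return labels, c_spec
```

On data with two spectrally and spatially separated regions, the two returned categories coincide with the regions, and the category means can then stand in for their pixels in the first unmixing.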

First Sparse Unmixing.
After preprocessing, we calculate the mean vector of each category and impose a single sparsity constraint to unmix these mean vectors, which gives the following problem:

min_{X_M} (1/2)||A X_M − Y_M||_F^2 + λ_1 ||X_M||_{1,1}  subject to  X_M ⪰ 0, (10)

where Y_M = [y_1, y_2, ⋯, y_K] ∈ R^(L×K) contains the mean vectors of the K categories, X_M is the corresponding abundance matrix, ||X_M||_{1,1} = Σ_{i=1..m} Σ_{j=1..K} |x_{i,j}|, and λ_1 is the regularization parameter. Before solving equation (10), we use the indicator function to encode the nonnegativity constraint, so the problem becomes

min_{X_M} (1/2)||A X_M − Y_M||_F^2 + λ_1 ||X_M||_{1,1} + ι_{R+}(X_M), (11)

where ι_{R+}(X_M) is the indicator function, i.e., ι_{R+}(X_M) = 0 if X_M ∈ R_+ and ι_{R+}(X_M) = +∞ otherwise. Equation (11) can be solved efficiently under the alternating direction method of multipliers (ADMM) framework [25]. Setting V_1 = A X_M, V_2 = X_M, and V_3 = X_M, equation (11) can be written as

min_{X_M, V_1, V_2, V_3} (1/2)||V_1 − Y_M||_F^2 + λ_1 ||V_2||_{1,1} + ι_{R+}(V_3)  subject to  V_1 = A X_M, V_2 = X_M, V_3 = X_M. (12)

Stacking G = [A; I; I], B = −I, V = [V_1; V_2; V_3], and g(V) = (1/2)||V_1 − Y_M||_F^2 + λ_1 ||V_2||_{1,1} + ι_{R+}(V_3), equation (12) becomes

min_{X_M, V} g(V)  subject to  G X_M + B V = 0. (17)

From equation (17), the following Lagrangian function can be constructed:

L(X_M, V, D) = g(V) + (μ/2)||G X_M + B V − D||_F^2, (18)

where μ > 0 is a positive constant and D/μ is the Lagrange multiplier of the constraint G X_M + B V = 0. Under the ADMM framework, each variable is updated independently: expanding equation (18) and keeping, for each variable, only the terms related to it yields the updates

X_M ← (A^T A + 2I)^{−1} (A^T (V_1 + D_1) + (V_2 + D_2) + (V_3 + D_3)),
V_1 ← (Y_M + μ(A X_M − D_1)) / (1 + μ),
V_2 ← soft(X_M − D_2, λ_1/μ),
V_3 ← max(X_M − D_3, 0),

followed by the multiplier update D ← D − (G X_M + B V), where soft denotes the soft-threshold function soft(y, τ) = sign(y) max{|y| − τ, 0}. The complete flow of the first sparse unmixing is given in Algorithm 1.
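The ADMM loop above can be sketched compactly. This is a minimal re-derivation under the standard SUnSAL-style splitting, not the authors' Algorithm 1; the function name and all parameter values are illustrative:

```python
import numpy as np

def soft(y, tau):
    """Soft-threshold: sign(y) * max(|y| - tau, 0)."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def first_unmixing(A, Ym, lam, mu=1.0, n_iter=500):
    """ADMM sketch for min 0.5||A Xm - Ym||_F^2 + lam ||Xm||_{1,1} + i_{R+}(Xm)
    with the splitting V1 = A Xm, V2 = Xm, V3 = Xm."""
    m, K = A.shape[1], Ym.shape[1]
    X = np.zeros((m, K))
    V1, V2, V3 = A @ X, X.copy(), X.copy()
    D1, D2, D3 = np.zeros_like(V1), np.zeros_like(V2), np.zeros_like(V3)
    inv = np.linalg.inv(A.T @ A + 2 * np.eye(m))    # constant, computed once
    for _ in range(n_iter):
        X = inv @ (A.T @ (V1 + D1) + (V2 + D2) + (V3 + D3))
        V1 = (Ym + mu * (A @ X - D1)) / (1 + mu)    # data-fidelity block
        V2 = soft(X - D2, lam / mu)                 # sparsity block
        V3 = np.maximum(X - D3, 0.0)                # nonnegativity block
        D1 -= A @ X - V1                            # scaled dual updates
        D2 -= X - V2
        D3 -= X - V3
    return V3                                       # nonnegative estimate
```

On a small noiseless problem with a row-sparse nonnegative ground truth, the loop drives the reconstruction residual close to zero.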

Second Sparse Unmixing.
After the first sparse unmixing, we obtain the abundance estimate X_M of the mean vectors. The abundance matrix X_M ∈ R^(m×K) for the mean vectors is then converted into an abundance matrix X_1 ∈ R^(m×n) for the entire image: the pixels in each category receive the same abundance estimate as that category's mean vector. X_1 is then added as a constraint for the second sparse unmixing, in which we impose a joint sparsity constraint on the original hyperspectral image. It is well known that solving the ℓ2,0 norm directly leads to an NP-hard problem, and most algorithms use the ℓ2,1 norm instead. However, the ℓ2,1 norm only measures the sum of row norms, which sometimes cannot meet the sparsity requirement. To address this issue, we introduce the row-hard-threshold function to handle the ℓ2,0 norm directly, which gives the following problem:

min_X (1/2)||AX − Y||_F^2 + λ_2 ||X||_{2,0} + (α/2)||X − X_1||_F^2  subject to  X ⪰ 0, (28)

where λ_2 and α are regularization parameters and the ℓ2,0 norm is defined as

||X||_{2,0} = #{i : ||X_{i,:}||_2 ≠ 0}, (29)

where X_{i,:} denotes the i-th row of X. We again use the ADMM method to solve equation (28). Setting V_1 = AX, V_2 = X, and V_3 = X, equation (28) can be rewritten in the same splitting form as before, and the Lagrangian function is constructed in the same way as equation (18). The iterative updates of X and V_1 have the same form as in the first sparse unmixing, except that the term (α/2)||X − X_1||_F^2 enters the X-update. Before solving for V_2, we remove the terms independent of it and obtain

min_{V_2} λ_2 ||V_2||_{2,0} + (μ/2)||X − V_2 − D_2||_F^2. (34)

Inspired by the CSUnL0 algorithm [11], we introduce the row-hard-threshold function to solve equation (34). Define

(RH_t(X))_{i,j} = x_{i,j} if ||X_{i,:}||_2 ≥ t, and 0 otherwise, (35)

where (RH_t(X))_{i,j} is the j-th element of the i-th row of RH_t(X), x_{i,j} is the j-th element of the i-th row of X, and t is the threshold. It was proved in reference [11] that H = RH_t(X) is the closed-form solution of

min_H (1/2)||X − H||_F^2 + (t^2/2)||H||_{2,0}. (36)

Equation (34) matches the form of equation (36) with

t = sqrt(2 λ_2 / μ), (37)

so the iterative update of V_2 follows from equations (36) and (37) as V_2 = RH_t(X − D_2).
The iterative update formulas of the other variables are the same as those in the first sparse unmixing. The complete flow of the second sparse unmixing is given in Algorithm 2.
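The row-hard-threshold operator itself is a one-liner. The sketch below (function name ours) zeroes every row whose ℓ2 norm falls below the threshold t, following the definition of RH_t in the text:

```python
import numpy as np

def row_hard_threshold(X, t):
    """RH_t: keep a row of X unchanged if its l2 norm is >= t, zero it otherwise.
    Closed-form proximal operator of (t^2/2) * ||.||_{2,0} on rows."""
    keep = np.linalg.norm(X, axis=1) >= t
    return X * keep[:, None]
```

In the second-stage ADMM, the V_2 update is then `row_hard_threshold(X - D2, np.sqrt(2 * lam2 / mu))`, with `lam2` and `mu` the regularization and penalty parameters.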

Simulated Data Experiments
In this section, two sets of simulated hyperspectral data are used to validate the effectiveness of the proposed DRSUM_K-means algorithm. The well-known USGS spectral library [26], which contains 498 substances and 224 bands over the spectral range 0.4-2.5 μm, was used in this paper. For simplicity, we use a subset A_1 ∈ R^(224×240) of the USGS library in which the angle between any two endmembers (Em) is greater than 4.44 degrees. DRSUM_K-means was compared with five state-of-the-art algorithms: SUnSAL [8], CLSUnSAL [9], SUnSAL-TV [12], CSUnL0 [11], and S²WSU-W1 [13]. The signal-to-reconstruction error (SRE) and the reconstruction overall root-mean-square error (rRMSE) [27] were used to evaluate the unmixing performance. Let x_i denote the true abundance vector and x̂_i its estimate. The SRE is defined as

SRE = E[||x||_2^2] / E[||x − x̂||_2^2],  SRE(dB) = 10 log_10(SRE),

where E(·) is the expectation. The larger the SRE (dB), the higher the unmixing accuracy. The rRMSE is defined as

rRMSE = sqrt( (1/(nL)) ||Y − Ŷ||_F^2 ),

where Ŷ = AX̂ is the reconstructed image and X̂ is the estimated abundance matrix; n and L are the numbers of pixels and spectral bands, respectively. The smaller the rRMSE, the higher the unmixing accuracy. Figure 1 shows the true abundance map of each endmember.
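Both metrics are straightforward to compute from their definitions; a minimal sketch (function names ours):

```python
import numpy as np

def sre_db(X_true, X_est):
    """SRE(dB) = 10*log10( E||x||^2 / E||x - x_hat||^2 ); larger is better."""
    num = np.mean(np.sum(X_true ** 2, axis=0))
    den = np.mean(np.sum((X_true - X_est) ** 2, axis=0))
    return 10.0 * np.log10(num / den)

def rrmse(Y, A, X_est):
    """rRMSE = sqrt( ||Y - A X_hat||_F^2 / (n L) ); smaller is better."""
    L, n = Y.shape
    return np.sqrt(np.sum((Y - A @ X_est) ** 2) / (n * L))
```

For example, an estimate that is uniformly 90% of the true abundances has an SRE of exactly 20 dB, since the error energy is (0.1)^2 = 1% of the signal energy.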

Generation of Simulated Data Cube 1 (DC1).
To be more realistic and to test the noise robustness of each algorithm, DC1 was contaminated with white Gaussian noise at three signal-to-noise ratio (SNR) levels: 20, 30, and 40 dB.
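Contaminating a cube at an exact SNR amounts to scaling a white Gaussian noise realization against the signal energy; a minimal sketch (function name ours):

```python
import numpy as np

def add_awgn(Y, snr_db, seed=0):
    """Add white Gaussian noise N so that 10*log10(||Y||^2 / ||N||^2) = snr_db."""
    rng = np.random.default_rng(seed)
    N = rng.standard_normal(Y.shape)
    # scale the noise so the energy ratio hits the requested SNR exactly
    N *= np.sqrt(np.sum(Y ** 2) / (np.sum(N ** 2) * 10.0 ** (snr_db / 10.0)))
    return Y + N
```

Calling `add_awgn(Y, 20.0)`, `add_awgn(Y, 30.0)`, and `add_awgn(Y, 40.0)` produces the three noise levels used in the experiments.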

Simulated Data Cube 2 (DC2).
A more complex dataset (DC2), containing 100 × 100 pixels, was used for this experiment. We again used the A_1 spectral library and randomly selected 9 spectral signatures from it as endmembers: Wollastonite HS348.3B, Heulandite GDS3, Mordenite GDS18, Almandine WS478, Nepheline HS19.3, Barite HS79.3B, Grossular WS485, Brookite HS443.2B, and Albite HS324.3B. Their abundances follow a Dirichlet distribution and are shown in Figure 2, from which we can see that the endmember distribution in DC2 is more complicated than in DC1: both steep and smooth transitions exist, which is more in line with the real distribution of ground objects. DC2 was also contaminated with white Gaussian noise at SNRs of 20, 30, and 40 dB.

Three-Process Analysis of DRSUM_K-means.
In this section, we analyze the three processes of DRSUM_K-means: preprocessing, the first sparse unmixing, and the second sparse unmixing. We first use the improved K-means clustering algorithm to classify pixels into categories; the purpose of preprocessing is to improve unmixing accuracy and reduce computational complexity. The choice of K is an important factor. If K is too small, each category contains more pixels, there may be large differences between pixels of the same category, and the standard deviation of each category grows; pixels in the same category may then have different active endmember sets, making it inappropriate to use the mean vector of each category in place of all pixels for the first sparse unmixing. If K is too large, the utilization of spatial information decreases, and pixels with the same active endmember set may be split into different categories, which also harms unmixing accuracy. The parameter K therefore requires a balance. After preprocessing, we calculate the mean vector of each category and impose a single sparsity constraint on these mean vectors for the first sparse unmixing. If there are large differences between pixels of the same category, the first-stage results will deviate substantially from the true abundances. Generally, the more categories there are, the smaller the within-category differences; but as the number of categories increases, the utilization of spatial information decreases and the computation time grows. The classification result therefore strongly influences the first sparse unmixing.
After the first sparse unmixing, we obtain the abundance estimate X_M of the mean vectors. The abundance matrix X_M ∈ R^(m×K) for the mean vectors is converted into the abundance matrix X_1 ∈ R^(m×n) for the entire image. X_1 is then added as a constraint for the second sparse unmixing, in which a joint sparsity constraint is imposed on the original hyperspectral image. The matrix X_1 can be regarded as a reference for the second sparse unmixing: the closer X_1 is to the true abundance, the higher the final accuracy. The first-stage results therefore directly affect the second-stage accuracy.
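Converting X_M to X_1 is just an index broadcast: each pixel receives the abundance column of its category. A minimal sketch (function name ours):

```python
import numpy as np

def expand_abundance(X_M, labels):
    """Broadcast per-category abundances X_M (m x K) to a per-pixel matrix
    X1 (m x n) using the K-means labels (length n, values in 0..K-1)."""
    return X_M[:, labels]
```

Fancy indexing copies column `labels[j]` of X_M into column j of X_1, so pixels in the same category share one abundance estimate, as the model requires.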

Parameter Selection.
The DRSUM_K-means algorithm comprises three processes. As discussed in Section 4.2, each process strongly influences the next, but the three processes retain a certain degree of independence, so their parameters can be selected independently. Figure 3 shows the influence of the parameters on the performance of DRSUM_K-means for DC1 with an SNR of 30 dB. The parameters K, λ_1, λ_2, and α all have a strong influence on the SRE value, indicating that the preprocessing and the first and second sparse unmixing all greatly affect the unmixing result.

Comparison with Other Algorithms.
In this section, the proposed DRSUM_K-means algorithm is compared with several state-of-the-art algorithms: SUnSAL [8], CLSUnSAL [9], SUnSAL-TV [12], CSUnL0 [11], and S²WSU-W1 [13]. The parameters of all algorithms were tuned to obtain their optimal performance. The SRE and rRMSE values, the optimal parameter values, and the running times of all algorithms are shown in Tables 1 and 2. DRSUM_K-means achieves the highest SRE and lowest rRMSE under all SNRs. CSUnL0 and S²WSU-W1 also achieve promising results; however, they are sensitive to noise, and their accuracy is low at an SNR of 20 dB. SUnSAL-TV takes far more time than DRSUM_K-means, yet its accuracy is still worse. Although SUnSAL and CLSUnSAL are faster than DRSUM_K-means, their accuracy is much worse. We can see from Figure 4 that the SUnSAL, CLSUnSAL, CSUnL0, and S²WSU-W1 algorithms leave more impurities in the background region than SUnSAL-TV and DRSUM_K-means, while SUnSAL-TV recovers fewer of the square regions than DRSUM_K-means, especially for Em2. Figure 5 shows that the abundance maps estimated by SUnSAL and CLSUnSAL differ markedly from the true ones, whereas those of SUnSAL-TV, CSUnL0, S²WSU-W1, and DRSUM_K-means are similar; even so, DRSUM_K-means attains higher accuracy than the other algorithms, especially for Em1 and Em2. The simulated data experiments thus validate the effectiveness of the proposed DRSUM_K-means algorithm.

Computational Cost Analysis.
To analyze the computational complexity of the proposed DRSUM_K-means algorithm, all algorithms were run in MATLAB R2018a on a laptop with an Intel Core i7 processor (2.3 GHz) and 16 GB of memory. The maximum number of iterations and the tolerance for all algorithms were set to 1000 and 10^-4, respectively. Tables 1 and 2 list the running times of all algorithms for DC1 and DC2. SUnSAL-TV is the slowest of all algorithms, and S²WSU-W1 and CSUnL0 take much more time than DRSUM_K-means. Although DRSUM_K-means takes slightly longer than SUnSAL and CLSUnSAL, its unmixing results are much better. Therefore, DRSUM_K-means obtains promising results at low computational cost. Note that SUnSAL-TV takes several times longer than DRSUM_K-means, so even though DRSUM_K-means has more parameters to set, the time it needs for parameter selection is still less than that of SUnSAL-TV.
Convergence Analysis.
After preprocessing, we impose a single sparsity constraint on the mean vectors for the first sparse unmixing. From equation (10), the first sparse unmixing is similar to the SUnSAL algorithm, whose convergence was proved in reference [8]; the first sparse unmixing is therefore convergent. From equation (28), the result of the first sparse unmixing and the ℓ2,0 norm are introduced as constraints for the second sparse unmixing, and a direct theoretical convergence analysis of equation (28) is difficult. Figure 6 presents the convergence curves of equation (28) for DC1 and DC2 with different SNRs: the SRE value stops changing once the number of iterations reaches a certain value. The residual and the maximum number of iterations can therefore be used as stopping conditions for DRSUM_K-means. In all experiments, the tolerance and the maximum number of iterations of DRSUM_K-means were set to 10^-4 and 1000, respectively. Although convergence is not theoretically guaranteed, DRSUM_K-means exhibits robust convergence behavior in practice.

Real Data Experiment
The public Cuprite dataset (http://aviris.jpl.nasa.gov/html/aviris.freedata.html) was used for this experiment; it is representative and has been widely used in hyperspectral unmixing experiments. The dataset was collected by the Airborne Visible/Infrared Imaging Spectrometer in 1997. To simplify the computation, we used a 250 × 191 subset of the Cuprite dataset, as in [28][29][30]. It originally contains 224 spectral bands; after removing the low-SNR and high-water-absorption bands (1-2, 105-115, 150-170, and 223-224), 188 bands were retained. The whole USGS spectral library A ∈ R^(224×498) was used. Figure 7 shows the distribution of different substances in the Cuprite dataset; this mineral map was produced by the USGS in 1995 and is an important reference for qualitatively evaluating unmixing performance. We selected three representative endmembers (buddingtonite, alunite, and chalcedony) and compared the abundance maps estimated by each algorithm with those generated by the USGS Tricorder 3.3 software, as presented in Figure 8. The parameters of SUnSAL, CLSUnSAL, SUnSAL-TV, CSUnL0, and S²WSU-W1 were set according to references [9,11,13]. Table 3 presents the parameter values, processing times, and rRMSE values of each algorithm for the Cuprite dataset.
We can see from Figure 8 that the backgrounds of the SUnSAL and SUnSAL-TV abundance maps are not as pure and clear as those of the other algorithms, especially for the mineral buddingtonite, and the maps of SUnSAL-TV show more over-smoothing and blurred edges. CLSUnSAL and CSUnL0 produce worse abundance maps than the other algorithms for the mineral chalcedony. The results of S²WSU-W1 and DRSUM_K-means are similar, but DRSUM_K-means gives higher abundances for chalcedony and is closer to the true distribution. In addition, Table 3 shows that DRSUM_K-means has the lowest rRMSE of all algorithms, i.e., the highest unmixing accuracy on the real dataset. The real-data experiment therefore also confirms the effectiveness of the proposed DRSUM_K-means algorithm.

Conclusions
In this paper, a novel double regression-based sparse unmixing model (DRSUM) was proposed. DRSUM decomposes the complex unmixing process into two simple sparse regressions, which can reduce the computational complexity.
DRSUM is an open model and framework. In different application scenarios, we can add different types of constraints into DRSUM to guarantee the unmixing accuracy.
In addition, appropriate preprocessing can further improve the unmixing results. Under this model, a specific algorithm called double regression-based sparse unmixing via K-means (DRSUM_K-means) was proposed. Through appropriate preprocessing, DRSUM_K-means comprehensively utilizes spatial information, spectral information, and sparsity. Simulated and real-data experiments have shown that DRSUM_K-means obtains better unmixing results at lower computational cost.

Data Availability
The MATLAB code and dataset used to support the findings of this study have been deposited in the Zenodo repository (doi:10.5281/zenodo.4052818).