Hyperspectral Image Reconstruction Based on Reference Point Nondominated Sorting Genetic Algorithm

Spatial and spectral features of hyperspectral imagery reconstruction have gained increasing attention in recent years. Based on the orthogonal matching pursuit (OMP) idea, a hyperspectral image reconstruction algorithm based on the reference point nondominated sorting genetic algorithm (NSGA) is proposed. Instead of directly reconstructing the entire hyperspectral data as the traditional OMP reconstruction algorithm does, the proposed algorithm incorporates an evolutionary process into the reconstruction. A Gabor redundant dictionary is established as the sparse basis of hyperspectral images, and a multiobjective optimization reconstruction model is constructed. In the reconstruction process, the NSGA-III algorithm is used to find the optimal atoms to represent the original signal, and the Hermitian inversion lemma is used to update the residuals recursively. The initial solution generation, the definition of reference points, the association and niche-preservation operations, and the crossover and mutation operations in NSGA-III are presented in detail. Experimental results on hyperspectral data demonstrate that the proposed algorithm maintains both reconstruction accuracy and computational efficiency and is superior to state-of-the-art reconstruction algorithms. The proposed algorithm could be applied to classification and unmixing of hyperspectral images.


Introduction
Hyperspectral images (HSIs) contain rich spatial geometric information and spectral feature information, making them suitable for target detection, image classification, sparse representation, image denoising, image fusion, biomedicine, and other fields [1]. The progress of hyperspectral imaging spectrometers, the analysis and processing of hyperspectral image data, and the improvement of image interpretation have become frontier research fields that attract much attention. The continuous expansion of application fields requires hyperspectral images to provide complete information on the target, which in turn requires improving the spatial and spectral resolution of the imaging system to capture more complete spatial and interspectral information [2]. However, the greater the spatial and spectral resolution, the greater the demands on the storage and transmission capacities of the system. With the rapid growth of the demand for information [3], data dimensionality reduction has attracted attention. Furthermore, finding effective data compression and reconstruction algorithms is very important for the growth of hyperspectral remote sensing technology [4].
According to the Nyquist sampling theorem, the data acquisition process first determines the highest frequency of the signal, then samples uniformly at more than twice that rate, and finally restores the original data information [5]. The consequence is that a huge amount of data is collected and then discarded in the compression stage. This method requires more storage and transmission resources [6].
Compressed sensing (CS) theory was proposed in 2006 [7]. The theory observes that the information rate of a continuous-time signal can be significantly lower than its maximum frequency would suggest and that the degrees of freedom of a discrete-time signal can be significantly fewer than the signal's length [8]. Therefore, compressed sensing theory holds that the information characteristics of the signal can be collected directly, enabling reconstruction of the signal [9].
According to compressed sensing theory, if a signal is sparse, the original signal can be recovered from the measured values [10,11]. The signal reconstruction process is accomplished by solving convex optimization problems [12], including l1 minimization [13] or the gradient projection for sparse reconstruction (GPSR) algorithm [14]. Other algorithms, such as total variation-based algorithms [15][16][17], have also been proposed to reconstruct the original signal. Compared to Nyquist-based sampling methods, using CS to acquire and process signals saves transmission and storage resources. It has been widely used in SAR [18][19][20], airborne sensing [21,22], classification [23,24], shadow detection [25], and other fields [26].
Analyzing the imaging process of hyperspectral images, we can find that the spatial information and interspectral information of hyperspectral images are redundant, indicating that the images can be compressed. Using compressed sensing theory to reconstruct, classify, and detect hyperspectral images [27][28][29][30][31] can greatly decrease the amount of data. The focus of this article is how to reconstruct the original image from a small number of measurements.
When using compressed sensing theory to reconstruct hyperspectral images, spatial correlation and interspectral correlation need to be considered. One approach employs the same measurement matrix to measure each band image and, after the measurement data are obtained, reconstructs each band image using prior knowledge [32,33], such as the dictionary sparse representation algorithm proposed by Geng [34] and the deep residual attention network proposed by Kohei [35]. Block compressed sampling can reduce the storage space required by the measurement matrix at the sampling end, and better reconstructed images can be obtained with the smoothed projected Landweber algorithm (BCS_SPL) [36,37]. Another approach uses the same measurement matrix to measure each pixel vector [38,39] and, after the measurement data are obtained, completes the signal reconstruction using principal component analysis [40,41] or multihypothesis prediction [42]. Bacca [43] proposed recursive spectral band reconstruction from single-pixel hyperspectral measurements.
Hyperspectral images also have spectral mixing properties [44,45], which are described using a linear mixture model. This property suggests that hyperspectral images can be decomposed into endmember matrices and abundance matrices [46,47], and unmixing using spectral mixing models facilitates subpixel mapping [48] and resolution enhancement [49].
A compressed sensing reconstruction algorithm with reference point nondominated sorting genetics is proposed for hyperspectral images. The method samples spatially during the sampling process to acquire available information for reconstruction. In the reconstruction process, a Gabor sparse basis redundancy dictionary and a multiobjective optimization reconstruction model are constructed. The algorithm uses the reference point nondominated sorting genetic algorithm (NSGA-III) [50] to optimize the matching process and finds the optimal atoms. Furthermore, the algorithm uses Hermitian inversion lemma to complete the residual updating in a recursive way. The proposed algorithm is established on the idea of orthogonal matching pursuit (OMP) [51], using the Hermitian matrix characteristics and NSGA-III to obtain the multiobjective optimization reconstruction model; therefore, the algorithm is named HN_OMP. The reconstruction results would be further applied to image classification and interpretation.
The remainder of the manuscript is organized into five sections. Section 2 presents the mathematical model, including the construction of the Gabor dictionary basis and of the multiobjective reconstruction model. Section 3 first introduces the NSGA-III method, giving its selection, crossover, and mutation operators together with its steps; it then describes the Hermitian inversion process used to accelerate the residual update and the reconstruction process of the HN_OMP algorithm with its diagram. Section 4 presents the experimental datasets and evaluation metrics. The results and comparisons on hyperspectral data are analyzed in Section 5, and Section 6 concludes.

Mathematical Model
Different from the Nyquist sampling theorem, CS theory does not sample each point of the signal but uses an M × N measurement matrix Φ (M ≪ N) to measure the sparse signal x linearly and obtain the observation y:

y = Φx = ΦΨs = Θs,

where Θ = ΦΨ is an M × N matrix called the CS information operator and Ψ represents the sparse basis of the signal, i.e., x = Ψs. The sampling rate is defined as S_R = M/N. The reconstruction process recovers the sparse signal s from the known observation y and information operator Θ, so as to obtain the original signal x. The sampling and reconstruction procedure of compressed sensing theory is given in Figure 1.
According to compressed sensing theory, if the sensing matrix fulfills the restricted isometry property, the measurement performed in the above manner already carries enough information about the original signal, and the original signal can be restored by solving the inverse of the measurement process. Because M ≪ N, it is hard to obtain an effective solution x directly from the measurement y. When the signal is sparse, the problem can be converted into a minimum l0-norm problem [52]:

x̂ = arg min_{x∈R^N} ‖Ψ^{−1}x‖_0, s.t. y = ΦΨs, (2)

where (·)^{−1} denotes the matrix inverse. However, l0 optimization is an NP-hard problem. Under certain conditions, the solutions of the l1-norm and l0-norm problems are equivalent, namely,

x̂ = arg min_{x∈R^N} ‖Ψ^{−1}x‖_1, s.t. y = ΦΨs. (3)

After the above analysis, the key point of reconstruction is how to find, from the sparse basis, the optimal atoms that can represent the original sparse signal. This process consists of two steps: constructing a sparse basis and finding the best atoms. In the following, we elaborate on the composition of the sparse basis and the establishment of the reconstruction model.
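As a toy illustration of the measurement model y = Θs with M ≪ N, the following sketch uses an identity sparse basis for simplicity (the paper instead uses a redundant Gabor dictionary; the matrix sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, k = 256, 64, 5           # signal length, measurements (M << N), sparsity
# A k-sparse coefficient vector s under a sparse basis Psi (identity here).
s = np.zeros(N)
s[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
Psi = np.eye(N)
x = Psi @ s                    # original signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
Theta = Phi @ Psi              # CS information operator
y = Theta @ s                  # observation: M values instead of N

print(y.shape)                 # (64,) -- sampling rate S_R = M/N = 0.25
```

Reconstruction then amounts to recovering the k-sparse s from (y, Θ), which is what the algorithms below address.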

2.1. Sparse Basis Construction.
The time-frequency atom analysis approach performs a joint time-frequency analysis on an overcomplete redundant dictionary made up of time-frequency atoms. Time-frequency atoms can effectively express the localized structure of a signal, which helps extract useful information from a complex signal. The key to this method is selecting a time-frequency atom library suited to the local structural characteristics of the signal and searching that library for the most characteristic atoms. By performing time translation, scale transformation, and frequency modulation on standard Gaussian atoms, a Gabor redundant dictionary with time-frequency characteristics is obtained. Using the Gabor function to generate a redundant dictionary for sparse representation preserves the global information of the signal as well as the intensity of signal change in any local interval. The generating function of Gabor atoms [53] can be expressed using the Gaussian window function g(t) = e^{−πt²} as

g_γ(n) = (K_γ/√s) g((n − u)/s) cos(υn + ω),

where n = 1, 2, ⋯, N, N indicates the signal length, K_γ is a normalization constant, γ = (s, u, υ, ω) is the time-frequency parameter (scale, translation, frequency, and phase), and Γ denotes the set of parameters γ.
The time-frequency parameters are discretized as s = a^{α₁}, u = α₂Δu · a^{α₁}, υ = α₃Δυ · a^{−α₁}, and ω = α₄Δω, where a = 2, Δu = 1/2, Δυ = π, Δω = π/6, and 0 < α₁ ≤ log₂N. The resulting Gabor dictionary is highly redundant, and the number of atoms N_atom is determined by the ranges of the discretized parameters. A hyperspectral image is composed of multiple band images. Each band image reflects the spatial distribution of ground objects and, like an ordinary two-dimensional natural image, exhibits information redundancy between adjacent pixels, so it admits a sparse representation over a redundant dictionary. The time-frequency atoms in the constructed Gabor redundant dictionary are various transformations of Gaussian atoms; therefore, the dictionary can be used as the sparse basis to perform sparse approximation for hyperspectral image reconstruction.
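A Gabor atom of the kind described above can be generated numerically. The sketch below assumes the standard Gaussian-window form with unit-norm scaling; the function name and exact constants are ours, not the paper's code:

```python
import numpy as np

def gabor_atom(N, s, u, v, w):
    """Real Gabor atom on n = 0..N-1: a Gaussian window of scale s centred
    at u, modulated at frequency v with phase w, normalised to unit energy."""
    n = np.arange(N)
    g = np.exp(-np.pi * ((n - u) / s) ** 2) * np.cos(v * n + w)
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

# Discretised parameters as in the text: s = a**alpha1 with a = 2,
# Delta_u = 1/2, Delta_v = pi, Delta_w = pi/6.
N = 64
atom = gabor_atom(N, s=2.0 ** 3, u=N / 2, v=np.pi / 4, w=np.pi / 6)
print(round(float(np.linalg.norm(atom)), 6))   # 1.0 -- unit-norm atom
```

Stacking such atoms over all discretized parameter tuples yields the redundant dictionary used as the sparse basis.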

2.2. Multiobjective Reconstruction Model.
Using the basic idea of the orthogonal matching pursuit algorithm, we iteratively find optimal atoms from the information operator Θ that can characterize the sparse signal s, minimizing the residual between the measured value and its approximation and thereby completing the reconstruction of the signal.
The redundant dictionary is Ψ = {g_γ}_{γ∈Γ}, where g_γ is the atom defined by the parameter group γ. The process of using OMP to achieve reconstruction is described in Algorithm 1.
Analyzing the reconstruction process of OMP, we find that the selection of the optimal atom considers only the correlation between the atoms in the redundant dictionary and the residual, while ignoring the correlation between the candidate atoms and the atoms already selected. From the perspective of signal analysis, only when the selected atoms are mutually uncorrelated can the smallest set of uncorrelated atoms representing the signal be found. Therefore, we build a multiobjective reconstruction model to enhance the accuracy of the algorithm.
After the (k − 1)th iteration, the residual is R_{k−1} and the optimal atom set is Ψ_{k−1}, with Θ_{k−1} = ΦΨ_{k−1}. In the kth reconstruction step, the first objective function to be optimized is the inner product of the matching atom with the residual, which should be maximized:

f₁(γ) = |⟨R_{k−1}, Φg_γ⟩|.

At the same time, we require that the currently selected atom be uncorrelated with the previously selected atoms. In other words,

Mobile Information Systems
Inputs: the measurement value y, the measurement matrix Φ, the sparse basis Ψ, the atom number K.
Output: the reconstructed signal x̂.
1: Initialize parameters: residual R₀ = y, optimal atom set Ψ₀ = [ ], number of iterations k = 1.
2: While k ≤ K do
3: Find the atom g_k that best matches the residual in the redundant dictionary, i.e., g_k = arg max_{γ∈Γ} |⟨R_{k−1}, Φg_γ⟩|.
4: Update the optimal atom set with g_k to obtain Ψ_k = [Ψ_{k−1}, g_k].
5: Update the residual by orthogonal projection: R_k = y − Θ_k(Θ_k^T Θ_k)^{−1}Θ_k^T y, where Θ_k = ΦΨ_k.
6: k = k + 1.
7: endWhile
Algorithm 1: OMP.
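Algorithm 1 can be sketched in a few lines over a generic column dictionary A = ΦΨ. This is an illustrative implementation, not the authors' code:

```python
import numpy as np

def omp(y, A, K):
    """Orthogonal matching pursuit over the columns of A = Phi @ Psi:
    greedily pick the column most correlated with the residual, then
    re-fit all selected coefficients by least squares (Algorithm 1)."""
    M, N = A.shape
    residual = y.copy()
    support = []
    coef = np.zeros(N)
    for _ in range(K):
        k = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if k not in support:
            support.append(k)
        sub = A[:, support]
        c, *_ = np.linalg.lstsq(sub, y, rcond=None)  # orthogonal projection
        residual = y - sub @ c                       # residual update
    coef[support] = c
    return coef

rng = np.random.default_rng(1)
N, M, K = 128, 48, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ s_true
s_hat = omp(y, A, K)
print(np.count_nonzero(s_hat))    # at most K nonzero coefficients
```

The least-squares re-fit after each selection is what distinguishes OMP from plain matching pursuit: the residual stays orthogonal to all selected atoms.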
Inputs: the initial feasible solution set Z_int, the population size P.
Output: the initial solution Z_Par^0.
1: Set the current individual p = 1.
2: While p ≤ P do
3: Randomly generate a number N_rand in [1, N_atom] and set the N_rand-th atom in Z_int as the pth initial solution in Z_Par^0: Z_Par^0(:, p) = Z_int(:, N_rand).
4: p = p + 1.
5: endWhile
Algorithm 2: Initial solution generation.
Inputs: the population S_g, the reference points H_n.
Output: the closest reference point π(z) and the distance d(z) between π(z) and z.
1: for each reference point h ∈ H_n do
2: Compute the reference line k = h (the line joining the origin and h).
3: endfor
4: for each population member z ∈ S_g do
5: for each reference line k ∈ H_n do
6: Compute the perpendicular distance d⊥(z, k) = ‖z − (k^T z/‖k‖²)k‖.
7: endfor
8: Assign π(z) = k: arg min_{k∈H_n} d⊥(z, k).
9: Assign d(z) = d⊥(z, π(z)).
10: endfor
Algorithm 4: Associate.

Inputs: the number of members to select P_L, the niche count ρ_j of each reference point, the closest reference points π(z), the distances d(z), the reference points H_n, the Lth level F_L.
Output: the next-generation population Z_Par^{g+1}.
1: m = 1.
2: While m ≤ P_L do
3: J_min = {j: arg min_{j∈H_n} ρ_j}.
4: j₁ = random(J_min).
5: I_{j₁} = {z: π(z) = j₁, z ∈ F_L}.
6: if I_{j₁} ≠ ∅ then
7: if ρ_{j₁} = 0 then
8: Z_Par^{g+1} = Z_Par^{g+1} ∪ (z: arg min_{z∈I_{j₁}} d(z)).
9: else
10: Z_Par^{g+1} = Z_Par^{g+1} ∪ random(I_{j₁}).
11: endif
12: ρ_{j₁} = ρ_{j₁} + 1, F_L = F_L \ {z}, m = m + 1.
13: else
14: H_n = H_n \ {j₁}.
15: endif
16: endWhile
Algorithm 5: Niche preservation.

Inputs: the initial feasible solution set Z_int, the population size P, the maximum evolution generation G, the reference points H_n.
Output: the optimal solution set Z_Spr^{g+1}.
1: Generate the initial population Z_Par^0 using Algorithm 2 and its offspring Z_Spr^0; set g = 0.
2: While g ≤ G do
3: Combine the populations: Z_Com^g = Z_Par^g ∪ Z_Spr^g.
4: Sort Z_Com^g into nondominated levels F_1, F_2, ⋯ and fill S_g with successive levels F_1, ⋯, F_L until |S_g| ≥ P.
5: if |S_g| = P then
6: Z_Par^{g+1} = S_g.
7: else
8: Put levels F_1 to F_{L−1} into Z_Par^{g+1}.
9: Points to be chosen from F_L: P_L = P − |Z_Par^{g+1}|.
10: Normalize the objective functions and create the reference points using Algorithm 3.
11: Associate each member of S_g with a reference point using Algorithm 4.
12: Compute the niche counts of the reference points and apply Algorithm 5 to obtain Z_Par^{g+1}.
13: endif
14: Perform crossover and mutation operations to generate the offspring population Z_Spr^{g+1}.
15: g = g + 1.
16: endWhile
Algorithm 6: NSGA-III.

the inner product between the currently selected atom and the previously selected atoms should be minimized. Therefore, the second objective function is computed as

f₂(γ) = Σ_{g_i∈Ψ_{k−1}} |⟨g_γ, g_i⟩|.

In the multiobjective reconstruction model, we take the maximum inner product of the matching atom with the image residual and the minimum correlation between atoms as the two optimization goals. The multiobjective reconstruction model is expressed as

{max_γ f₁(γ), min_γ f₂(γ)}.

Finding the optimal atom is a search within a set of optimal solutions; we simulate this process with the reference point nondominated sorting genetic algorithm to explore the feasibility of improving computational efficiency. The detailed steps are described in Section 3.
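The two objectives of the model, correlation with the residual (to be maximized) and correlation with already-selected atoms (to be minimized), can be evaluated per candidate atom as follows. This is an illustrative sketch; the function name and shapes are ours:

```python
import numpy as np

def objectives(candidate_cols, residual, selected_cols):
    """Evaluate the two reconstruction objectives for candidate atoms
    (columns of Phi @ Psi). f1: |<R_{k-1}, Phi g>|, to be maximised;
    f2: summed |correlation| with already-selected atoms, to be minimised."""
    f1 = np.abs(candidate_cols.T @ residual)
    if selected_cols.shape[1] == 0:
        f2 = np.zeros(candidate_cols.shape[1])
    else:
        f2 = np.sum(np.abs(candidate_cols.T @ selected_cols), axis=1)
    return f1, f2

rng = np.random.default_rng(2)
M = 32
cands = rng.standard_normal((M, 10))
cands /= np.linalg.norm(cands, axis=0)   # unit-norm atoms
residual = rng.standard_normal(M)
selected = cands[:, :2]                  # pretend two atoms already chosen
f1, f2 = objectives(cands[:, 2:], residual, selected)
print(f1.shape, f2.shape)                # (8,) (8,)
```

NSGA-III then searches the atom parameter space for nondominated trade-offs between f1 and f2 instead of exhaustively scoring every dictionary atom.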
In the residual update process of OMP, as the number of atoms increases, the inversion of an ever larger matrix appears, and the computational complexity grows. Through analysis, we found that the matrix Θ_k^T Θ_k is a positive definite Hermitian matrix. Hence, in Section 3, we introduce the Hermitian inversion lemma to accelerate the inversion, thereby improving the computational efficiency of the residual update process.

Proposed Reconstruction Algorithm
The nondominated sorting genetic algorithm is a multiobjective optimization method based on the genetic algorithm, which optimizes multiple objectives via the Pareto front. Considering both accuracy and efficiency, NSGA-II [54], which uses an elite strategy and fast nondominated sorting, is a good choice.
To further enhance performance, the authors of [50] replaced the crowding distance of NSGA-II with reference points in the objective space and proposed NSGA-III. Unlike NSGA-II, NSGA-III maintains population diversity by supplying and adaptively updating a number of widely distributed reference points. The main advantage of NSGA-III is that, by defining a hyperplane and multiple reference points [55,56], it selects the individuals best adapted to the environment, maintaining the diversity of the population and ensuring that decision-makers can find the optimal solution.
First, we describe the NSGA-III algorithm in detail, including the initial solution generation, the definition of reference points, the association and niche-preservation operations, and the crossover and mutation operations. Following that, the Hermitian inversion process is presented, and the implementation of HN_OMP is given last.
3.1. NSGA-III Method. The constructed optimization model is a multiobjective system with many constraints. The traditional NSGA-III algorithm introduces the concept of constraint handling and stipulates that a feasible solution is always better than an infeasible one. However, completely random generation of the initial population causes the algorithm to converge slowly. To improve the convergence speed, the method of generating initial solutions is improved.
3.1.1. Initial Solution Generation. In light of the evolutionary algorithm's properties, there is no need to generate the full Gabor dictionary in advance to solve the multiobjective reconstruction model. A real-valued encoding is used to construct chromosomes, with each chromosome representing one atom. According to the generating formula of the Gabor atom, the chromosome length is D = 4, and its value is the vector (s, u, υ, ω). Suppose the population size is P, the maximum evolution generation is G, and the number of objective functions is T. According to the discretization of (s, u, υ, ω), all atoms are generated to form the initial feasible solution set Z_int of size N × N_atom, and the element of Z_int is expressed in formula (9), where (s, u, υ, ω) is the n_atom-th time-frequency parameter in the set Γ.

Inputs: the measurement value y, the measurement matrix Φ, the atom number K, the parameters of NSGA-III (the population size P and the maximum evolution generation G).
Output: the reconstructed signal x̂.
1: Initialize the residual R₀ = y, the optimal atom set Ψ₀ = [ ], and the current atom number k = 1.
2: While k ≤ K do
3: According to Algorithm 6 (NSGA-III), with the parameters P and G, obtain the optimal individual Z_o as the optimal atom g_k.
4: Use g_k to update the optimal atom set: Ψ_k = [Ψ_{k−1}, g_k].
5: Update the residual with Hermitian inversion, using formula (19).
6: k = k + 1.
7: endWhile
Algorithm 7: HN_OMP.
We select all alternative atoms as the initial feasible solution Z int and randomly select P solutions as the initial solution in our algorithm, which can meet the constraints and increase the diversity of the solutions. The specific method to obtain the initial solution Z Par 0 is described in Algorithm 2.

3.1.2. Classification of the Population into Nondominated Levels.
Assume the parent population at generation g is Z_Par^g with size P, and Z_Spr^g is the offspring population, also of size P. The first phase is to take the top P members from the combined population Z_Com^g = Z_Par^g ∪ Z_Spr^g, thereby preserving the best members of the parent population. To this end, the combined population Z_Com^g is sorted into different nondomination levels using the usual domination principle.
The individuals of the population are divided into L levels according to the nondomination rules [50]. The population members from nondominated front F_1 through front F_L are denoted S_g, and their number is |S_g|. If |S_g| = P, no further operation is needed, and the next generation starts with Z_Par^{g+1} = S_g; otherwise, P_L = P − |S_g| population members must be selected from the Lth level F_L.
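The classification into nondominated levels can be sketched with a simple O(P²) implementation; this is illustrative only (NSGA-III uses a more efficient sorting procedure):

```python
import numpy as np

def nondominated_sort(F):
    """Sort points (rows of F, all objectives minimised) into nondominated
    fronts F_1, F_2, ... using the usual domination principle."""
    P = F.shape[0]
    dominates = [set() for _ in range(P)]   # dominates[i]: points that i dominates
    for i in range(P):
        for j in range(P):
            if i != j and np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominates[i].add(j)
    fronts, assigned = [], set()
    while len(assigned) < P:
        # next front: unassigned points all of whose dominators are assigned
        front = [i for i in range(P) if i not in assigned
                 and all(j in assigned for j in range(P) if i in dominates[j])]
        fronts.append(front)
        assigned.update(front)
    return fronts

# Both objectives minimised: point 0 dominates the rest; points 1, 2, 3
# are mutually nondominated and form the second front.
F = np.array([[1.0, 1.0], [2.0, 2.0], [1.0, 3.0], [3.0, 1.0]])
print(nondominated_sort(F))   # [[0], [1, 2, 3]]
```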

3.1.3. Identification of Reference Points on a Hyperplane. The user can either predefine the reference points in a structured way or supply them preferentially. In this study, we employ a systematic technique that places points on a normalized hyperplane that has an intercept of one on each objective axis and is equally inclined to all axes. In addition to nondominated solutions, we emphasize population members that are associated with each of these reference points. Since the reference points are widely scattered across the entire normalized hyperplane, the obtained solutions are expected to be dispersed on or near the Pareto-optimal front.
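The structured placement of points on the unit-intercept hyperplane follows the Das-Dennis systematic approach; a minimal sketch using a stars-and-bars enumeration (function name and parameters are ours):

```python
from itertools import combinations

def reference_points(T, divisions):
    """Structured reference points on the unit simplex (the normalised
    hyperplane with intercept 1 on each of the T objective axes)."""
    pts = []
    # place T-1 separators among `divisions + T - 1` slots (stars and bars)
    for bars in combinations(range(divisions + T - 1), T - 1):
        prev, counts = -1, []
        for b in bars:
            counts.append(b - prev - 1)
            prev = b
        counts.append(divisions + T - 2 - prev)
        pts.append([c / divisions for c in counts])
    return pts

H = reference_points(T=3, divisions=4)
print(len(H))                                        # C(6, 2) = 15 points
print(all(abs(sum(p) - 1.0) < 1e-12 for p in H))     # all on the hyperplane
```

For T = 3 objectives and 4 divisions this yields 15 evenly spaced reference points, including the axis intercepts such as (1, 0, 0).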
3.1.4. Normalized Objective Function. First, the ideal point of the population S_g is identified by finding the minimum value b_t^min of each objective function t = 1, 2, ⋯, T and building the ideal point b^min = (b_1^min, ⋯, b_T^min). Each objective value of S_g is then translated by subtracting b_t^min from obj_t, so that the ideal point of the translated S_g becomes the zero vector. The translated objective is computed as obj_t′(z) = obj_t(z) − b_t^min. Thereafter, the extreme point on each objective axis is identified by finding the solution z ∈ S_g that minimizes the achievement scalarizing function formed with that axis direction as the weight vector w, ASF(z, w) = max_{t=1,⋯,T} obj_t′(z)/w_t. The intercept a_t of the tth objective axis with the linear hyperplane through the extreme points can then be computed, and the objective functions are normalized as obj_t^n(z) = obj_t′(z)/a_t.
The structured reference points are already located on the normalized hyperplane. The NSGA-III process main-tains a variety in the space covered by S g , because the normalizing procedure and the building of the hyperplane are done at each generation using terminal points. The procedure is described in Algorithm 3.

3.1.5. Association Operation.
After adaptively normalizing each objective function, each population member of S_g must be associated with a reference point. By joining each reference point on the hyperplane with the origin, we create a reference line for each reference point. Next, the perpendicular distance of each population member of S_g from each of the reference lines is computed. In the normalized objective space, the reference point whose reference line is closest to a population member is considered associated with that member. The procedure is presented in Algorithm 4. Through the association operation, individuals are linked with reference points. A reference point may be associated with one or more population members, or with none; at this point, the niche-preservation operation is needed.
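The association step (Algorithm 4) reduces to computing perpendicular distances to the reference lines; a small illustrative sketch in two objectives:

```python
import numpy as np

def associate(S, H):
    """Associate each normalised population member (row of S) with the
    reference point (row of H) whose reference line through the origin
    it is perpendicularly closest to."""
    pi = np.empty(len(S), dtype=int)
    d = np.empty(len(S))
    for i, z in enumerate(S):
        # perpendicular distance from z to the line spanned by each h
        dists = [np.linalg.norm(z - (h @ z) / (h @ h) * h) for h in H]
        pi[i] = int(np.argmin(dists))
        d[i] = min(dists)
    return pi, d

H = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # three reference points
S = np.array([[0.9, 0.1], [0.1, 0.9], [0.4, 0.5]])   # normalised members
pi, d = associate(S, H)
print(pi.tolist())   # [0, 1, 2] -- each member snaps to its nearest line
```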
3.1.6. Niche-Preservation Operation. Determine the number ρ_j of individuals of the first L − 1 levels associated with each reference point j, and let the reference points with the smallest association count form a set J_min. Randomly select a reference point j₁ from J_min; if ρ_{j₁} = 0 and j₁ is associated with one or more individuals in the Lth level, then select the individual closest to the reference line of j₁ to enter the next generation.
If ρ_{j₁} ≥ 1 and j₁ is associated with individuals in the Lth level, then randomly select one of them to enter the next generation; if j₁ is not associated with any individual in the Lth level, it is excluded from consideration for the current generation. Reference points are repeatedly selected from J_min until the P_L individuals have been supplemented and the parent population is complete. The procedure is presented in Algorithm 5.
3.1.7. Crossover and Mutation Operation. After Z_Par^{g+1} is formed, crossover and mutation operations are used to create the offspring population Z_Spr^{g+1}. According to the crossover probability, it is determined whether an individual undergoes crossover and at which gene positions; the simulated binary crossover (SBX) operator is used to generate the offspring. According to the mutation probability, it is determined whether an individual mutates and at which gene position, and the basic mutation method determines the individual's value after mutation.
3.1.8. Steps of NSGA-III. Combined with the above description of initial solution generation and the genetic operators, the implementation steps of NSGA-III are summarized in Algorithm 6.
After the optimal solution set Z_Spr^{g+1} is obtained, the individual whose objective function values contribute the most to the total objective is selected as the optimal individual Z_o, where Z_p (p = 1, 2, ⋯, P) ∈ Z_Spr^{g+1}.

3.2. Hermitian Inversion. During the iteration process, the optimal atom set is updated as Ψ_k = [Ψ_{k−1}, g_k], where Ψ_{k−1} is the set of atoms formed after the first k − 1 iterations and g_k is the optimal atom found in the kth iteration; correspondingly, the information operator is updated as Θ_k = [Θ_{k−1}, φ_k] with φ_k = Φg_k. The matrix to be inverted in the residual update, H_k = (Θ_k^T Θ_k)^{−1}, then has the block structure

Θ_k^T Θ_k = [Θ_{k−1}^T Θ_{k−1}, Θ_{k−1}^T φ_k; φ_k^T Θ_{k−1}, φ_k^T φ_k].

Since Θ_k^T Θ_k is a positive definite Hermitian matrix, the Hermitian (block) inversion lemma applies. Let b = H_{k−1}Θ_{k−1}^T φ_k and α = 1/(φ_k^T φ_k − φ_k^T Θ_{k−1}b). Substituting H_{k−1} and H_k into the residual update, the recursive expression of the matrix inversion is obtained:

H_k = [H_{k−1} + αbb^T, −αb; −αb^T, α].

In this way, H_k is computed from H_{k−1} without performing a full matrix inversion at each iteration.
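The recursive update of H_k = (Θ_k^T Θ_k)^{−1} can be checked numerically against direct inversion using the standard block inversion identity; this is a sketch of the idea, not a reproduction of the paper's formula (19):

```python
import numpy as np

rng = np.random.default_rng(3)
M, k = 40, 6
Theta_prev = rng.standard_normal((M, k))          # Theta_{k-1}
phi = rng.standard_normal((M, 1))                 # new column Phi g_k
Theta = np.hstack([Theta_prev, phi])              # Theta_k = [Theta_{k-1}, phi]

H_prev = np.linalg.inv(Theta_prev.T @ Theta_prev)  # H_{k-1}

# Recursive update via the block (Hermitian) inversion identity, avoiding
# a fresh (k+1) x (k+1) inversion at every iteration.
b = H_prev @ (Theta_prev.T @ phi)
alpha = 1.0 / float(phi.T @ phi - phi.T @ Theta_prev @ b)
H = np.block([
    [H_prev + alpha * (b @ b.T), -alpha * b],
    [-alpha * b.T,               np.array([[alpha]])],
])

H_direct = np.linalg.inv(Theta.T @ Theta)
print(np.allclose(H, H_direct))                   # True
```

The scalar α is the inverse Schur complement of the top-left block, which exists whenever the new atom is not in the span of the previous ones.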

Experimental Datasets and Evaluation Metrics
This section describes the datasets used in the tests and the metrics used to compare the proposed algorithm's performance.

4.1. Hyperspectral Datasets.
Four hyperspectral datasets [57,58] with diverse signatures are included in the investigation, allowing a complete qualitative and quantitative comparative evaluation of the proposed scheme. The Cuprite1 and Cuprite2 images, corresponding to the first two scenes in the online data, were collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The sensor collects 224 spectral bands between 0.4 and 2.5 μm, with a full width at half maximum of 10 nm and a spatial resolution of 20 m per pixel. Due to water absorption and noise, several bands were removed from the study, leaving 188 bands for the experiments [46]. The Pines images are divided into 128 × 128 pixels to avoid impacting the reconstruction algorithm's performance, whereas the other datasets are 256 × 256 pixels. The basic information of the four datasets is shown in Table 1. Three bands were extracted from each dataset to form the pseudocolor images shown in Figure 2.
4.2. Evaluation Metrics. The band-based PSNR, measured in dB, is defined as

PSNR(X_l, X̂_l) = 10 log₁₀ (max(X_l)² / MSE(X_l, X̂_l)),

where X_l and X̂_l are the original and reconstructed band images, max(X_l) is the peak value of X_l, and MSE(X_l, X̂_l) is the mean squared error [46], computed as

MSE(X_l, X̂_l) = (1/(RC)) Σ_{i=1}^{R} Σ_{j=1}^{C} (X_l(i, j) − X̂_l(i, j))²,

with R × C the size of the band image. The band-based SSIM between X_l and X̂_l is defined as

SSIM(X_l, X̂_l) = ((2μ₁μ₂ + C₁)(2σ₁₂ + C₂)) / ((μ₁² + μ₂² + C₁)(σ₁² + σ₂² + C₂)),

where μ₁ and μ₂ are the mean values of X_l and X̂_l, σ₁ and σ₂ are the standard deviations of X_l and X̂_l, σ₁₂ is the covariance between X_l and X̂_l, and C₁ and C₂ are constants related to the dynamic range of the pixel values. The details of these parameters can be found in [59].
The average PSNR and average SSIM are calculated by averaging the band-based PSNR and band-based SSIM over all bands of the hyperspectral dataset.
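The band-based PSNR and its average over bands can be computed as follows (an illustrative sketch on synthetic data; the helper name is ours):

```python
import numpy as np

def band_psnr(X, X_hat):
    """Band-based PSNR in dB: 10*log10(max(X)^2 / MSE(X, X_hat))."""
    mse = np.mean((X - X_hat) ** 2)
    return 10.0 * np.log10(float(X.max()) ** 2 / mse)

rng = np.random.default_rng(4)
cube = rng.integers(0, 256, size=(3, 64, 64)).astype(float)   # toy 3-band cube
cube_hat = cube + rng.normal(0.0, 1.0, size=cube.shape)       # small error
avg_psnr = float(np.mean([band_psnr(cube[l], cube_hat[l]) for l in range(3)]))
print(avg_psnr > 40.0)    # near-perfect reconstruction gives a high PSNR
```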
The vector-based SNR, measured in dB, is defined as

SNR(X[n], X̂[n]) = 10 log₁₀ (var(X[n]) / MSE(X[n], X̂[n])),

where X[n] and X̂[n] denote the original and reconstructed spectral vectors, var(X[n]) denotes the variance of X[n], and MSE(X[n], X̂[n]) is the mean squared error.
The vector-based SAD, measured in degrees, is defined as

SAD(X[n], X̂[n]) = arccos (⟨X[n], X̂[n]⟩ / (‖X[n]‖₂ ‖X̂[n]‖₂)).

The average SNR and average SAD are calculated by averaging the vector-based SNR and vector-based SAD over all spectral vectors in the hyperspectral dataset.
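Both vector-based metrics are short to implement; a sketch with hypothetical helper names:

```python
import numpy as np

def vector_snr(x, x_hat):
    """Vector-based SNR in dB: 10*log10(var(x) / MSE(x, x_hat))."""
    return 10.0 * np.log10(np.var(x) / np.mean((x - x_hat) ** 2))

def vector_sad(x, x_hat):
    """Spectral angle distance in degrees between two spectral vectors."""
    c = np.dot(x, x_hat) / (np.linalg.norm(x) * np.linalg.norm(x_hat))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

x = np.array([0.2, 0.4, 0.6, 0.8])
print(vector_sad(x, 2.0 * x) < 1e-4)   # True: SAD is invariant to scaling
print(vector_snr(x, x + 0.01) > 20.0)  # True: small error -> high SNR
```

The scale invariance of SAD is why it complements SNR: it isolates spectral shape errors from amplitude errors.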

Experimental Results and Comparisons
This section describes the experimental results and findings used to assess the proposed algorithm's performance. The parameters are first determined, including the population size and evolution generation. Following that, the proposed algorithm HN_OMP is compared against existing reconstruction algorithms. An AMD quad-core CPU (3.80 GHz), 16 GB of memory, and MATLAB 2012b constitute the hardware and software environment for these experiments.

5.1. Parameter Selection for the Reconstruction Algorithm. The parameter selection undoubtedly has a major impact on the reconstruction performance. The results of the proposed algorithm HN_OMP on the four hyperspectral images with varying evolution parameters are reported in this section.

5.1.1. Parameter Selection of Population Size and Evolution Generation. First, the algorithm HN_OMP is used to compress and reconstruct the 40th band image of the four hyperspectral datasets, and the influence of the maximum evolution generation, population size, and atom number on the performance of the algorithm is analyzed. The population size P varies from 10 to 50 at an interval of 5, the maximum evolution generation G varies from 10 to 50 at an interval of 5, and the atom number K varies from 10 to 100 at an interval of 10. The sampling rates are set from 0.1 to 0.5 at an interval of 0.1. Figure 3 shows the variation of the average PSNR of the reconstructed image of Cuprite1 with the population size and evolution generation when the atom number is 40. The sampling rate in Figure 3(a) is 0.3, while that in Figure 3(b) is 0.4. From the two subfigures, we find that for a given evolution generation, once the population size increases to 20, the PSNR stabilizes regardless of the sampling rate. For a given population size, the PSNR oscillates slightly as the evolution generation increases. This demonstrates that the reconstructed PSNR is more sensitive to the population size than to the evolution generation. Without loss of generality, we set the population size to 20 to discuss the influence of the other parameters on the reconstruction results.
When the population size is 20, the influence of the evolution generation and atom number on the reconstructed PSNR of dataset Cuprite1 is shown in Figure 4. The sampling rate in Figure 4(a) is 0.2, while that in Figure 4(b) is 0.3. From the two subfigures, we can clearly see that the influence of the atom number on the reconstruction result is far greater than that of the evolution generation. As the number of atoms increases, the reconstructed PSNR steadily increases. However, when the number of atoms increases beyond 50, the reconstruction performance no longer improves significantly but gradually stabilizes. The reason may be that a particularly large number of atoms affects the selection of elite atoms, so the algorithm performance cannot be further improved. Comparing Figures 3(a) and 4(a), we find that the atom number and the population size have a relatively large impact on the results, while the evolution generation has a small impact. Without loss of generality, we set the evolution generation to 10 to compare the effects of the atom number and the population size on the results.
When the maximum evolution generation is 10, the influence of the population size and the atom number on the reconstruction performance is shown in Figure 5. The sampling rate is 0.3 in Figure 5(a) and 0.4 in Figure 5(b). Figure 5(a) shows that the atom number has a greater influence on the reconstruction result than the population size: as the number of atoms increases, the PSNR grows rapidly and stabilizes after the number of atoms reaches 50. This is consistent with the conclusion drawn from Figure 4. Figure 5(b) shows that when the population size increases to 20, the reconstructed PSNR reaches its maximum and remains very stable, which is consistent with the conclusion drawn from Figure 3. This further confirms that setting the population size to 20 is reasonable.
The experimental results of the other three groups of hyperspectral images are similar to those of Cuprite1. Considering the reconstruction accuracy, the population size in HN_OMP is set to P = 20 and the maximum evolution generation to G = 10.

Parameter Selection of Atom Number.
The above experimental results suggest that the algorithm performance stabilizes once the number of atoms reaches 50. To determine the number of atoms more precisely, we now employ the OMP algorithm and the HN_OMP algorithm to compressively sample and reconstruct the hyperspectral images. The two algorithms are used to reconstruct the 40th band image of the hyperspectral datasets, and each algorithm is set to terminate when the atom number reaches 50. In the experiments, the atom number varies from 1 to 50 at an interval of 1, and the sampling rate ranges from 0.1 to 0.5 at an interval of 0.1.
The changes of the reconstructed PSNR of Cuprite2 obtained by the two algorithms are shown in Figure 6. For both the OMP algorithm and the HN_OMP algorithm, except at the sampling rate of 0.1, the PSNR gradually increases as the atom number increases and finally stabilizes. The reason is that the higher the sampling rate, the more information is available in the reconstruction process, and thus the higher the reconstruction accuracy. When the sampling rate is 0.1, the performance of both algorithms drops sharply once the atom number reaches 27, and accurate reconstruction is no longer possible. This is because accurate reconstruction under compressed sensing theory requires the number of measurements to be greater than or equal to the sparsity of the signal. In the two reconstruction algorithms, the signal is sparsely represented by several atoms of the redundant dictionary, so the sparsity equals the number of atoms. Therefore, for accurate reconstruction, the number of measurements should be greater than or equal to the number of atoms. When the signal length is 256 and the sampling rate is 0.1, the number of measurements is 26. Hence, once the number of atoms increases to 27, the condition for accurate reconstruction is no longer met, and the algorithm performance drops sharply.
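This feasibility condition is easy to check numerically. A small sketch, assuming (as the figures imply) that the number of measurements is the sampling rate times the signal length, rounded up:

```python
import math

def measurement_count(signal_length, sampling_rate):
    """Number of compressed measurements m for a given signal length and rate."""
    return math.ceil(sampling_rate * signal_length)

def recoverable(signal_length, sampling_rate, n_atoms):
    """Exact recovery requires at least as many measurements as selected atoms."""
    return measurement_count(signal_length, sampling_rate) >= n_atoms
```

With a signal length of 256 and a sampling rate of 0.1 there are 26 measurements, so 26 atoms are still recoverable but 27 are not, matching the sharp performance drop observed in Figure 6.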
The PSNR comparison of the two algorithms is shown in Figure 7; the sampling rates are 0.2 in (a), 0.3 in (b), 0.4 in (c), and 0.5 in (d). We found that the reconstructed PSNR of the HN_OMP algorithm rises more slowly than that of the OMP algorithm. This is because the HN_OMP algorithm has a certain degree of randomness when searching for the optimal atom and cannot find the optimal atoms step by step as the OMP algorithm does. Fortunately, as the number of atoms increases, the HN_OMP algorithm adopts an elite strategy to ensure that each newly added optimal atom restores the original signal to the greatest extent, thereby ultimately ensuring the accuracy of reconstruction. Figure 8 shows the changes of the reconstructed SSIM of Cuprite2 obtained by the two algorithms. For both the OMP algorithm and the HN_OMP algorithm, except at the sampling rate of 0.1, the SSIM gradually increases as the atom number increases and finally stabilizes. In particular, the SSIM of the HN_OMP algorithm continues to rise and then stabilizes, while that of the OMP algorithm shows a slight downward trend when the number of atoms is relatively large. Similar to the PSNR results, the results at a sampling rate of 0.1 are poor. Under the other sampling rates, the HN_OMP algorithm outperforms OMP.
Not surprisingly, the conclusions obtained here are consistent with those of Section 5.1.1: when the number of atoms reaches 50, the algorithm performance stabilizes. Based on a comprehensive analysis of the above PSNR and SSIM results, in the following experiments the optimal atom number of the two algorithms is set to K = 25 when the sampling rate is 0.1 and to K = 50 at the other sampling rates to compare the reconstruction performance.


Reconstruction Performance Comparison.
Four representative reconstruction techniques are compared in order to better understand the potential of the proposed HN_OMP. The first is the total variation minimization (min-TV) algorithm with quadratic constraints [60]. This approach uses a random Gaussian matrix as the spatial measurement matrix and performs the reconstruction by minimizing the total variation, which exploits the gradient sparsity of the image. The second is BCS_SPL: block compressed sensing measures the image in blocks at the sampling end and uses SPL to efficiently and accurately restore the original image [36]. The third is GPSR [14], which uses a random Gaussian matrix to sample each band image and uses gradient projection to solve the convex reconstruction problem. The fourth is OMP [51], in which the hyperspectral images are sampled by applying a random Gaussian matrix to each band image block. In the reconstruction process, each image block is reconstructed by searching for optimal atoms to linearly represent the original signal; the specific steps are described in Section 2.2. Similar to the fourth algorithm, the proposed HN_OMP also uses a random Gaussian matrix to sample each band image block, but its reconstruction process differs from all the above algorithms. Exploiting the concept of evolutionary algorithms, HN_OMP transforms the reconstruction process into a process of population evolution. By analyzing the OMP algorithm, a multiobjective reconstruction model is established, and the optimal atoms are found through population evolution, thereby completing the reconstruction of the original signal. Furthermore, the NSGA-III algorithm is used to complete the matching process of the optimal atom, and the Hermitian inversion lemma is introduced into the residual update process to speed up the execution of the algorithm.
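The greedy OMP baseline can be summarized in a few lines. The sketch below is a generic textbook version rather than the paper's exact implementation, and assumes a dictionary with unit-norm columns: at each step it selects the atom most correlated with the residual and re-fits all coefficients by least squares.

```python
import numpy as np

def omp(Phi, y, K):
    """Minimal orthogonal matching pursuit sketch.

    Phi : (m, n) dictionary with unit-norm columns
    y   : (m,) measurement vector
    K   : number of atoms to select
    """
    n_atoms = Phi.shape[1]
    residual = y.copy()
    support = []
    x = np.zeros(n_atoms)
    for _ in range(K):
        # pick the atom whose correlation with the residual is largest
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```

HN_OMP replaces the exhaustive argmax over all atoms with an NSGA-III population search, which is where its speedup over this exhaustive loop originates.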
The sampling rates in the simulations range from 0.1 to 0.5 at an interval of 0.1. Following the references, the parameters of the min-TV and BCS_SPL algorithms are set to their defaults. The parameter of OMP is set to K = 50 at sampling rates 0.2~0.5 and K = 25 at sampling rate 0.1. The parameters of the HN_OMP algorithm are set to P = 20 and G = 10, with the atom number K = 50 at sampling rates 0.2~0.5 and K = 25 at sampling rate 0.1.
The band-based PSNR of the four datasets is shown in Figure 9. As the sampling rate increases, the amount of measurement data grows, more information is available for reconstruction, and the reconstruction accuracy is higher. Figures 10(a) and 10(b) show the average reconstructed PSNR of Cuprite1 and Cuprite2, respectively. Because the two images were captured by the same sensor and both image the Cuprite region, their simulation results are similar. For the Cuprite1 image, when the sampling rate is greater than 0.2, the proposed HN_OMP has the highest PSNR, followed by GPSR. The BCS_SPL algorithm, which applies smoothed projected Landweber reconstruction, achieves almost the same performance as GPSR. The OMP algorithm, which uses optimal atoms to recover the signal, has a better PSNR than the min-TV algorithm. Because its reconstruction procedure uses only the spatial correlation of the hyperspectral images, the PSNR of min-TV is the lowest. The results of Cuprite2 lead to the same conclusions as those of Cuprite1.
Figures 10(c) and 10(d) illustrate the results of Indian Pines and Pavia University, respectively. The performance of the min-TV, OMP, and GPSR algorithms is almost the same, and the proposed HN_OMP algorithm performs on par with the BCS_SPL algorithm. The difference is that the GPSR algorithm cannot obtain superior results on these two datasets. The reason might be that, compared with the other two datasets, the spatial distribution of these two datasets is quite different, and the image gradient can better represent the image characteristics, yielding better reconstruction results.
When the sampling rate is reduced to 0.1, the proposed HN_OMP method can still successfully reconstruct the Indian Pines and Pavia University images, whereas it cannot obtain good results on the Cuprite images. Overall, compared with the min-TV, OMP, and GPSR algorithms, the proposed HN_OMP achieves a certain improvement in terms of PSNR, which proves the efficacy of the proposed method. Table 2 shows the SSIM comparison results for the four hyperspectral datasets. For the Cuprite1 and Cuprite2 images, the SSIM of the OMP algorithm is the lowest, consistent with the PSNR comparison in Figures 10(a) and 10(b). The SSIM of the proposed HN_OMP algorithm has a slight advantage over that of min-TV and GPSR. The SSIM of HN_OMP is lower than that of BCS_SPL at lower sampling rates; fortunately, it finally approaches that of BCS_SPL. For Indian Pines and Pavia University, the situation changes slightly: the SSIM of min-TV is the lowest, while the SSIMs of OMP and GPSR are very close to each other. Unfortunately, the proposed HN_OMP has no advantage here. In general, the HN_OMP algorithm can restore the structure of the original image to a certain extent, which is compelling evidence of the proposed algorithm's usefulness.
The following comparison with several other state-of-the-art algorithms is presented to further showcase the algorithm's performance. Table 3 lists the results on Pavia University in comparison with the algorithms CPPCA-KMEANS [41] and CPPCA-MH [42], using vector-based SNR and SAD. The two methods, both based on CPPCA, combine the advantages of classification and multihypothesis prediction to achieve the reconstruction. As seen from the table, the SNR of HN_OMP is better than that of the other two algorithms when the sampling rate is 0.4 or 0.5. The SAD result is not as good as the SNR; the reason might be that the reconstruction process does not consider the correlation between spectral vectors.
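SAD (spectral angle distance) measures the angle between a reconstructed spectral vector and the original one. A minimal sketch, assuming the usual arccosine-of-normalized-dot-product definition:

```python
import math

def spectral_angle(x, y):
    """Spectral angle distance (radians) between two spectra; 0 means identical shape."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # clamp for floating-point safety before arccos
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))
```

Because the angle ignores magnitude, a reconstruction that uniformly scales a spectrum still scores a SAD of zero; improving this metric would require modeling the inter-band correlation noted above.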
Secondly, the results on Indian Pines, compared with the algorithms CPPCA-MH [42] and HNP [12] using vector-based and band-based SNR, are listed in Table 4. The idea of HNP is to combine the advantages of a fast l0 norm and a reliable l1 norm. Unfortunately, the performance of HN_OMP is worse than that of CPPCA-MH or HNP. The reason may lie in two aspects. One is that the reconstruction parameters are not ideal, resulting in unsatisfactory reconstruction outcomes. The other is that the optimization methods of the three algorithms differ, making a direct comparison inappropriate.
Thirdly, the results on Pavia University and Indian Pines, compared with SSAHTV [15], SSTV [16], and HSSTV [17] using PSNR and SSIM, are shown in Table 5. The PSNR and SSIM values of these algorithms are taken from the original literature. On Indian Pines, the proposed HN_OMP has little advantage; it is only comparable to SSTV. On Pavia University, the PSNR of the proposed HN_OMP is higher than that of the other three algorithms in most cases, while its SSIM stays within 0.1 of the other algorithms.
In conclusion, numerous comparisons between the proposed HN_OMP algorithm and state-of-the-art algorithms have been presented to assess the reconstruction performance. Especially in comparison with the OMP algorithm, the results and analysis fully indicate that it is feasible to use an evolutionary algorithm to simulate the optimal atom search process. The optimal atoms found by NSGA-III can approximate the original signal, which ensures the accuracy of image reconstruction and demonstrates the reliability of the algorithm. Although the results on some images are slightly worse than those of other algorithms, evolutionary algorithms can still serve as a viable approach to image reconstruction.

Runtime Comparison.
The runtime of the five reconstruction algorithms is shown in Figure 11. The histogram shows that the OMP algorithm needs to traverse all the atoms in the dictionary and is therefore the slowest. The computational complexity of min-TV and BCS_SPL is comparable, while GPSR reduces the runtime to about half of theirs. Fortunately, the proposed HN_OMP algorithm, by introducing the evolution concept, greatly decreases the computational complexity of the reconstruction process. Compared with OMP, the proposed HN_OMP increases the computation speed by an order of magnitude. Combined with the previous performance comparison, even where the PSNR or SSIM of the proposed HN_OMP is unfavorable, its computational efficiency remains an advantage of the algorithm.
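Part of this speedup comes from the recursive residual update: when a new atom joins the support, the inverse of the Gram matrix can be grown with the block (Hermitian) inversion lemma instead of recomputed from scratch. A sketch of that update, as generic linear algebra rather than the paper's exact code:

```python
import numpy as np

def grow_gram_inverse(G_inv, A, b):
    """Given G_inv = (A^T A)^{-1}, return ((A|b)^T (A|b))^{-1}.

    Uses the block (Hermitian) inversion lemma: only matrix-vector
    products are needed, avoiding a fresh O(k^3) inversion.
    """
    u = A.T @ b                       # cross terms with the new atom
    v = G_inv @ u
    beta = 1.0 / (b @ b - u @ v)      # inverse of the Schur complement of G
    k = G_inv.shape[0]
    out = np.empty((k + 1, k + 1))
    out[:k, :k] = G_inv + beta * np.outer(v, v)
    out[:k, k] = -beta * v
    out[k, :k] = -beta * v
    out[k, k] = beta
    return out
```

Growing the inverse this way costs O(k^2) per added atom instead of O(k^3) for a fresh inversion, which keeps the per-iteration residual update cheap.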

Reconstruction Image Comparison.
Compared to PSNR, a viewer's perception of visual quality is more relevant. Figures 12 and 13 show the reconstructed images of Cuprite2 and Pavia University at a sampling rate of 0.5. The images are the 50th band of each dataset, and the corresponding reconstructed PSNR of each image is also given. These reconstruction images show that all five reconstruction algorithms can restore the image features fairly accurately. The reconstructed images describe the detailed features of the original image well, which demonstrates that the HN_OMP algorithm can find the optimal atoms through the evolution of chromosomes and obtain a visually cleaner image, confirming the effectiveness of the algorithm.
Although the proposed algorithm exhibits a certain blocking effect and its visual quality is not as clear as that of the BCS_SPL algorithm, its runtime is the lowest among these algorithms. Overall, we can conclude that the HN_OMP algorithm improves the reconstruction efficiency through the evolutionary algorithm NSGA-III and Hermitian inversion while maintaining the reconstruction accuracy.

Conclusion
By analyzing the characteristics of hyperspectral images, a hyperspectral image reconstruction algorithm based on HN_OMP is proposed in this study. The algorithm uses the reference point nondominated sorting genetic algorithm and Hermitian inversion to solve the multiobjective reconstruction model. It uses the chromosomes of the genetic algorithm to represent the atoms of the redundant dictionary and models the crossover, selection, mutation, and other operators as the optimal atom search process. Furthermore, the algorithm uses the Hermitian inversion lemma to update the residual matrix recursively. The proposed HN_OMP algorithm is used to reconstruct four groups of hyperspectral images; the influence of the population size, evolution generation, and atom number on the reconstruction performance is discussed, and reasonable parameters are set to ensure the reliability of the algorithm. As demonstrated by the results on real hyperspectral data, the proposed HN_OMP provides benefits over advanced reconstruction algorithms in terms of high reconstruction accuracy and low computational complexity. Since the proposed algorithm's reconstruction accuracy is lower than that of some classic reconstruction algorithms in certain cases, future research will address the local convergence and randomness concerns of the evolutionary process to achieve high-accuracy reconstruction.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare no conflict of interest.

Authors' Contributions
Conceptualization was performed by Li Wang; funding acquisition was conducted by Li Wang.