Mathematical Problems in Engineering, Hindawi Publishing Corporation, Volume 2013, Article ID 708985, doi:10.1155/2013/708985

Research Article

A Novel Algorithm for Satellite Images Fusion Based on Compressed Sensing and PCA

Wenkao Yang, Jing Wang, Jing Guo, and Shangbo Zhou

School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China

Received 24 May 2013; Accepted 24 June 2013; Published 28 October 2013

Copyright © 2013 Wenkao Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper studies the fusion of a high-resolution panchromatic image with a low-resolution multispectral image. Building on the classic remote sensing fusion algorithms, the PCA (principal component analysis) transform and the discrete wavelet transform, we carry out in-depth research. Compressed sensing (CS) abandons full sampling and instead samples the information content of the signal, which greatly reduces the cost of traditional signal acquisition and processing. We combine compressed sensing with satellite remote sensing image fusion and propose an innovative fusion algorithm (CS-FWT-PCA), in which a symmetric fractional B-spline wavelet acts as the sparse basis. In the algorithm we use a Hadamard matrix as the measurement matrix, SAMP as the reconstruction algorithm, and an improved fusion rule based on the local variance. Simulation results show that the CS-FWT-PCA fusion algorithm achieves a better fusion effect than the traditional fusion methods.

1. Introduction

Numerous interference factors are mixed into the process of image acquisition and transmission, so the images we obtain are largely random signals. PCA, also known as the Karhunen-Loève transform, is designed to analyze such random images. It applies a multidimensional orthogonal linear transformation derived from the image's statistical characteristics. As a dimension-reduction technique, it transforms many correlated components into a few comprehensive components that retain as much of the original variable information as possible. PCA concentrates variance, compresses the data size, and represents the remote sensing information of a multiband data structure more precisely, yielding the statistically best approximation to the original image. PCA is widely applied, mainly to the fusion of multiband images. Chavez was the first to apply PCA to multisensor image fusion: he fused Landsat-TM multispectral and SPOT panchromatic images and achieved impressive results.
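To illustrate how PCA concentrates the variance of correlated bands into the first component, here is a minimal NumPy sketch (ours, not the authors' code); the synthetic "bands" stand in for real multispectral data:

```python
import numpy as np

def pca_components(bands):
    """PCA of a multiband image stack of shape (n_bands, H, W): returns the
    principal components and the fraction of variance each one carries."""
    n = bands.shape[0]
    X = bands.reshape(n, -1).astype(float)
    cov = np.cov(X)                                   # n x n band covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]                    # sort by variance, descending
    vals, vecs = vals[order], vecs[:, order]
    pcs = vecs.T @ (X - X.mean(axis=1, keepdims=True))
    return pcs.reshape(bands.shape), vals / vals.sum()

# Correlated synthetic "bands": a shared scene plus small per-band noise.
rng = np.random.default_rng(0)
scene = rng.random((16, 16))
bands = np.stack([scene + 0.05 * rng.random((16, 16)) for _ in range(3)])
pcs, shares = pca_components(bands)
print(shares)  # the first component carries most of the variance
```

Because the bands share one underlying scene, nearly all of the variance lands in the first principal component, which is exactly the property the fusion algorithm in Section 3 exploits.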

Olshausen and Field published a paper in Nature in 1996 indicating that the mammalian visual cortex represents image features sparsely. Since then, research on sparse image modeling has attracted broad attention, and excellent tools (curvelets, bandelets, etc.) and methods (basis pursuit (BP), matching pursuit (MP), etc.) for sparse image representation have been proposed. Compressed sensing (CS) theory developed on the basis of sparse representation. CS samples and compresses at the same time; its basic idea is to collect only the information that is directly relevant to the object of interest, so the measured values form a projection from a high-dimensional space onto a low-dimensional one. The main research topics of CS are the design of the projection (measurement) method, the conditions for exact recovery, and image reconstruction algorithms.

We integrate compressed sensing theory with PCA and propose a fusion method based on the CS-FWT-PCA algorithm. We apply the proposed algorithm, the traditional PCA transform, and several improved PCA transforms to image fusion. Simulation results show that the fused image based on CS-FWT-PCA has good spatial resolution and also efficiently preserves the spectral features of the original multispectral image.

2. Compression Sensing Theory of Satellite Remote Sensing Image Fusion

Candès and Tao pioneered the concept of compressed sensing in 2006. Built on signal harmonic analysis, matrix analysis, sparse representation, statistics and probability theory, time-frequency analysis, functional analysis, and optimal reconstruction, CS has developed rapidly. It aims to obtain information from the signal directly, independently of physical parameters such as signal frequency. As long as a signal is compressible (sparse) in some domain, its transform coefficients can be linearly projected onto a low-dimensional observation vector by a measurement matrix that is incoherent with the transformation matrix. Because the sampled values contain enough information, the original signal can be reconstructed precisely from far fewer samples by sparse optimization theory. This makes CS well suited to satellite remote sensing, where a high-resolution signal must be recovered from low-resolution observations. CS theory comprises three parts: sparse representation, the design of the measurement matrix, and the reconstruction algorithm. For signals that admit a sparse representation, its advantage is that it merges traditional data acquisition with data compression, compressing the data while the signal is being acquired, which greatly reduces the cost of traditional signal acquisition and processing.

2.1. Mathematical Model of Compressed Sensing Theory

The traditional linear measurement model, written in matrix form, is

(1) y = Φx,  Φ ∈ ℝ^(M×N).

From signal theory we know that an N-dimensional signal x has a linear representation over an orthogonal basis ψ = [φ_1, φ_2, …, φ_N] ∈ ℝ^(N×N) (each φ_i is an N-dimensional vector):

(2) x = Σ_{i=1}^{N} θ_i φ_i = ψθ,

with expansion coefficient vector θ = [θ_1, θ_2, …, θ_N]^T, θ_i = ⟨x, φ_i⟩ = φ_i^T x.

Substituting (2) into (1) and writing the CS information operator as A_CS = Φψ, we get

(3) y = Φx = Φψθ = A_CS θ.

Under compression, the number of measurements is far smaller than the signal length (M ≪ N). From (1), recovering x from y is an ill-conditioned problem: there are more unknowns than equations, so infinitely many solutions exist. But if x is a compressible (sparse) signal, then θ in (2) is also sparse. Recovering θ from y is still ill-conditioned, but the number of effective unknowns is greatly reduced, which makes signal reconstruction possible. Signal reconstruction in compressed sensing theory searches for the optimal solution under a constraint; it extracts the signal by solving an optimization problem under the ℓ0 norm, formulated as

(4) min_θ ‖θ‖_0  s.t.  y = A_CS θ.

From (4) the sparse coefficient vector θ can be estimated. The convex-optimization recovery framework under the ℓp norm is an important innovation proposed by Donoho and Candès; its main idea is to replace the nonconvex objective in (4) with an ℓp norm:

(5) min_θ ‖θ‖_p  s.t.  y = A_CS θ.

Thus the optimization problem in formula (4) is turned into a convex optimization problem, and for p = 1 the result can be obtained by solving a linear programming problem.
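The measurement model (3) and a greedy sparse recovery can be sketched as follows. This is our illustration, not the paper's code: orthogonal matching pursuit (OMP) stands in for the SAMP solver used later, and a Gaussian matrix stands in for the Hadamard measurement matrix:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily estimate a k-sparse theta with
    y = A @ theta. A simpler stand-in for the SAMP reconstruction algorithm."""
    residual = y.astype(float)
    support = []
    theta_s = np.zeros(0)
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))     # most correlated atom
        if idx not in support:
            support.append(idx)
        theta_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ theta_s           # project out chosen atoms
    theta = np.zeros(A.shape[1])
    theta[support] = theta_s
    return theta

rng = np.random.default_rng(1)
N, M, K = 100, 40, 5                               # signal length, measurements, sparsity
theta_true = np.zeros(N)
theta_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A_cs = rng.standard_normal((M, N)) / np.sqrt(M)    # stands in for A_CS = Phi @ psi
y = A_cs @ theta_true                              # M << N measurements, Eq. (3)
theta_hat = omp(A_cs, y, K)
print(np.linalg.norm(theta_hat - theta_true))      # reconstruction error
```

With M = 40 measurements of a length-100 signal that has only 5 nonzero coefficients, the greedy solver recovers θ essentially exactly, which is the practical content of the ill-conditioned-but-solvable discussion above.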

In conclusion, the implementation of compressed sensing theory involves three basic elements: the sparse representation of signals, incoherent observation through the measurement matrix, and nonlinear optimization for signal reconstruction. Sparsity is the necessary condition of CS theory, the measurement matrix is the key, and nonlinear optimization is the means by which CS reconstructs the signal. The framework of compressed sensing theory is shown in Figure 1.

Compressed sensing theory framework.

The differences between CS theory and traditional sampling theorem  are as follows.

Firstly, traditional sampling theorem takes the infinite-length continuous signal into consideration, but CS theory concerns the vector of finite dimension.

Secondly, traditional sampling theorem obtains data by uniform sampling; by contrast, CS theory gets observed data by utilizing the inner product of signal and measurement function.

Lastly, signal reconstruction differs. Traditional sampling recovers the signal by linear interpolation with sinc functions, whereas CS theory recovers the signal by solving a highly nonlinear optimization problem on the observed data.

3. CS-FWT-PCA-Based Satellite Remote Sensing Image Fusion

We apply compressed sensing theory combined with PCA to satellite remote sensing image fusion and choose the fractional B-spline wavelet as the sparse basis. The fusion rules are improved to increase the spatial resolution, enhance the spectral information, and accelerate the fusion of large data sets. The fractional B-spline wavelet transform behaves like the traditional wavelet transform: its coefficients consist of a small number of large values and a large number of small values, which adequately reflect the local variation of the original image and provide favorable conditions for image fusion.

3.1. Fractional <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M27"><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:math></inline-formula>-Spline Wavelet Transform

In 1999, Unser and Blu first generalized spline functions to fractional orders, constructing fractional B-spline functions from fractional differences of polynomial splines, and gave a concrete expression. They then proved that these functions have good enough properties to serve as wavelet basis functions for a wavelet transform. Since the order of this wavelet transform can be a fraction, it is called the fractional B-spline wavelet transform.

The symmetric fractional B-spline function of order α (α > −1, real) is defined as

(6) β_*^α(x) = (1/Γ(α+1)) Σ_{k∈ℤ} (−1)^k |C(α+1, k)| |x − k|_*^α,

where

(7) Γ(α+1) = ∫_0^∞ x^α e^(−x) dx,
    (x − k)_+^α = (max(x − k, 0))^α,
    C(α, k) = Γ(α+1) / (Γ(k+1) Γ(α−k+1)),
    |x|_*^α = |x|^α / (−2 sin(πα/2))  if α is not even,
    |x|_*^α = (−1)^(1+n) x^(2n) log|x| / π  if α = 2n is even.

Unser and Blu proved that the α-order symmetric fractional B-spline function has the multiresolution property, so it can be used to construct wavelet basis functions, and it satisfies the two-scale equation

(8) β_*^α(x/2) = Σ_{k∈ℤ} h^α(k) β_*^α(x − k).

The symmetric fractional B-spline functions form a Riesz basis; after orthogonalization and normalization they become an orthonormal basis and can serve as a sparse basis for the sparse transformation of signals.
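The building blocks of (6) and (7) are straightforward to evaluate numerically. This small sketch (ours, for illustration) computes the generalized binomial coefficient and the symmetric power function for a non-even order α:

```python
import math

def frac_binom(alpha, k):
    """Generalized binomial coefficient Gamma(alpha+1) /
    (Gamma(k+1) * Gamma(alpha-k+1)) from Eq. (7)."""
    return math.gamma(alpha + 1) / (math.gamma(k + 1) * math.gamma(alpha - k + 1))

def abs_power(x, alpha):
    """|x|_*^alpha for alpha not an even integer (Eq. (7));
    undefined where sin(pi*alpha/2) vanishes."""
    return abs(x) ** alpha / (-2.0 * math.sin(math.pi * alpha / 2))

print(frac_binom(4, 2))    # 6.0 (reduces to the ordinary binomial coefficient)
print(frac_binom(0.5, 2))  # approx -0.125
```

For integer orders the generalized coefficient reduces to the ordinary binomial coefficient, which is a quick sanity check on the Γ-function formula.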

The fractional B-spline function satisfies the following conditions:

(9) β_*^α(x) ∈ L1 for α > −1,  β_*^α(x) ∈ L2 for α > −1/2.

The experiment selects α > −1/2 because the wavelet transform is carried out in the L2 space. With symmetric fractional B-spline wavelets in L2, an orthogonal filter bank can be constructed to obtain the corresponding symmetric fractional B-spline wavelet transform.

3.2. Determining the Fusion Rules

This paper presents an improved fusion rule. The images are registered, and the PCA transform is then applied to the MS image. Next, a J-level symmetric fractional B-spline wavelet is used to sparsely transform the matched PAN image and the first principal component of the PCA-transformed MS image. After the sparse transform, each level decomposes into a low-frequency sparse matrix and a series of high-frequency sparse matrices. Because the high- and low-frequency sparse coefficients have different characteristics, we fuse them separately. The low-frequency coefficients represent the approximate image and vary little, so an ordinary weighted average is used for the low-frequency subimage. The high-frequency coefficients, in contrast, differ markedly and carry significant details of the original image, such as bright lines and edges; to obtain a better fusion effect, fusion rules based on regional feature selection are adopted for them. Selecting the fusion strategy adaptively for the different parts of the image (the high-frequency and low-frequency subimages) improves the quality of the fusion more effectively.

3.3. CS-FWT-PCA-Based Satellite Remote Sensing Image Fusion Algorithm

The flow chart of the satellite remote sensing image fusion algorithm based on compressed sensing, PCA transform, and fractional B-spline wavelet transform (CS-FWT-PCA) is shown in Figure 2.

Satellite remote sensing image fusion method based on the CS-FWT-PCA.

The concrete steps are as follows.

(1) Register the PAN image with the MS image using a SURF-based registration method to obtain the image PAN1.

(2) Apply PCA to the MS image to obtain the first principal component M1 and the other principal components Ms; then apply an α-order, J-level symmetric fractional B-spline wavelet decomposition to M1 and sparsify it to obtain the high-frequency sparse matrices ψ_m1^h and the low-frequency sparse matrix ψ_m1^l at each level.

(3) Perform histogram matching of PAN1 against the first principal component obtained in step (2) to get the enhanced image PAN2; then apply the same α-order, J-level symmetric fractional B-spline wavelet decomposition and sparsification to obtain the high-frequency sparse matrices ψ_p^h and the low-frequency sparse matrix ψ_p^l at each level.
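The histogram matching in this step can be sketched with a rank-based mapping; this is a generic CDF-matching technique (ours) and may differ in detail from the routine the authors used:

```python
import numpy as np

def histogram_match(source, reference):
    """Remap the grey levels of `source` so that its histogram matches
    that of `reference` (same number of pixels assumed)."""
    s_shape = source.shape
    src = source.ravel()
    ref = reference.ravel()
    s_idx = np.argsort(src)                  # rank order of the source pixels
    matched = np.empty_like(src, dtype=float)
    matched[s_idx] = np.sort(ref)            # assign sorted reference values by rank
    return matched.reshape(s_shape)

rng = np.random.default_rng(2)
pan = rng.normal(100, 30, (8, 8))            # toy PAN1
pc1 = rng.normal(140, 10, (8, 8))            # toy first principal component
pan2 = histogram_match(pan, pc1)
print(np.allclose(np.sort(pan2.ravel()), np.sort(pc1.ravel())))  # True
```

After matching, PAN2 has exactly the grey-level distribution of the reference component while preserving the spatial rank structure of PAN1, which is what makes the subsequent coefficient fusion meaningful.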

(4) Fuse the low-frequency sparse matrices ψ_m1^l and ψ_p^l at each level by the weighted average method to obtain the low-frequency coefficients ψ^l of the fused image.

The correlation coefficient of the two low-frequency subimages ψ_m1^l and ψ_p^l is defined as

(10) C(M, P) = Σ_{i=1}^{m} Σ_{j=1}^{n} (P(i,j) − P̄)(M(i,j) − M̄) / sqrt( Σ_{i=1}^{m} Σ_{j=1}^{n} (P(i,j) − P̄)² · Σ_{i=1}^{m} Σ_{j=1}^{n} (M(i,j) − M̄)² ),

where M and P denote ψ_m1^l and ψ_p^l, respectively, M̄ and P̄ are the means of the wavelet low-frequency coefficients of ψ_m1^l and ψ_p^l, and the image size is m × n. The fusion weights μ1 and μ2 are defined as

(11) μ1 = (1/2)(1 − |C(M, P)|),  μ2 = 1 − μ1.

Then the fused low-frequency coefficients ψ^l are calculated by

(12) ψ^l(i,j) = μ1 M(i,j) + μ2 P(i,j).
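Equations (10)-(12) translate directly into code; here is a NumPy sketch (function and variable names are ours):

```python
import numpy as np

def fuse_lowfreq(M, P):
    """Correlation-weighted average of two low-frequency coefficient
    matrices, following Eqs. (10)-(12)."""
    m_c, p_c = M - M.mean(), P - P.mean()
    corr = (p_c * m_c).sum() / np.sqrt((p_c ** 2).sum() * (m_c ** 2).sum())
    mu1 = 0.5 * (1 - abs(corr))       # Eq. (11): mu1 shrinks as correlation grows
    return mu1 * M + (1 - mu1) * P, corr

rng = np.random.default_rng(3)
A = rng.random((4, 4))
B = 0.8 * A + 0.2 * rng.random((4, 4))   # strongly correlated with A
fused, corr = fuse_lowfreq(A, B)
```

Note the behavior of the rule: when the two low-frequency subimages are highly correlated, μ1 approaches 0 and the fused coefficients lean toward P.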

(5) Fuse the two high-frequency sparse matrices ψ_m1^h and ψ_p^h at each level via the regional feature selection method to obtain the fused high-frequency coefficients ψ^h.

Define a local region Q of size 3×3 centered at a point p, where p is located at pixel (i,j) of the wavelet high-frequency coefficient matrix. C(M,p) and C(P,p) are the wavelet coefficients of ψ_m1^h and ψ_p^h at point p, and μ̄(M,p) is the mean of C(M,·) over the region Q.

First, the local deviation is defined as

(13) G(M, p) = Σ_{q∈Q} ω(q) |C(M, q) − μ̄(M, p)|²,

where ω(q) is a weighting factor satisfying

(14) Σ_{q∈Q} ω(q) = 1.

The nearer q is to p, the larger the weighting factor; G(P, p) is obtained by the same rule.

Second, the matching measure is expressed as

(15) M(p) = 2 Σ_{q∈Q} ω(q) |C(M, q) − μ̄(M, p)| · |C(P, q) − μ̄(P, p)| / (G(M, p) + G(P, p)).

The values of the matching measure lie in [0, 1]; the closer to 1, the higher the correlation between the two high-frequency subimages.

Set the matching-degree threshold T in the range [0.5, 1].

If M(p) > T,

(16) C(ψ^h, p) = λ_max C(M, p) + λ_min C(P, p)  if G(M, p) > G(P, p),
     C(ψ^h, p) = λ_max C(P, p) + λ_min C(M, p)  if G(P, p) ≥ G(M, p).

Otherwise,

(17) C(ψ^h, p) = C(M, p)  if G(M, p) > G(P, p),
     C(ψ^h, p) = C(P, p)  if G(P, p) ≥ G(M, p),

where

(18) λ_min = 1/2 − (1/2) · (1 − M(p)) / (1 − T),  λ_max = 1 − λ_min.
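Equations (13)-(18) can be sketched per pixel as follows. This is our illustration; it assumes a uniform 3×3 weighting window ω(q) = 1/9 and a threshold T = 0.7, details the paper leaves open:

```python
import numpy as np

def fuse_highfreq(CM, CP, T=0.7):
    """Region-based fusion of two high-frequency coefficient matrices,
    following Eqs. (13)-(18) with uniform window weights; border pixels
    use the valid part of the 3x3 window."""
    h, w = CM.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = (slice(max(i - 1, 0), min(i + 2, h)),
                   slice(max(j - 1, 0), min(j + 2, w)))
            dm = CM[win] - CM[win].mean()
            dp = CP[win] - CP[win].mean()
            gm, gp = (dm ** 2).mean(), (dp ** 2).mean()       # Eq. (13)
            match = 2 * (np.abs(dm) * np.abs(dp)).mean() / (gm + gp + 1e-12)
            if match > T:                                     # Eq. (16): weighted fusion
                lam_min = 0.5 - 0.5 * (1 - match) / (1 - T)   # Eq. (18)
                lam_max = 1 - lam_min
                big, small = ((CM[i, j], CP[i, j]) if gm > gp
                              else (CP[i, j], CM[i, j]))
                out[i, j] = lam_max * big + lam_min * small
            else:                                             # Eq. (17): selection
                out[i, j] = CM[i, j] if gm > gp else CP[i, j]
    return out
```

When the local regions match well, the rule blends the coefficients in favor of the higher-variance source; when they do not, it selects the coefficient from the source with the stronger local activity outright.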

(6) According to the formula y = Φx = Φψθ = A_CS θ, use the fused sparse matrices ψ^h and ψ^l to compute the measurement value y, and then obtain the fused component M1 with the reconstruction algorithm SAMP.

(7) Apply an α-order, J-level symmetric fractional B-spline wavelet reconstruction to M1 to get the new first principal component NEWM1.

(8) Replace M1 from step (2) with NEWM1, and perform the inverse PCA transform on NEWM1 together with the other principal components of the MS image to obtain the fused image.
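Steps (2) and (8), the PCA decomposition and its inverse with the first principal component replaced, can be sketched as follows (our code; the wavelet and CS stages that would produce the new component are abbreviated):

```python
import numpy as np

def pca(bands):
    """Forward PCA of an (n_bands, H, W) stack: components, basis, band means."""
    n = bands.shape[0]
    X = bands.reshape(n, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(X))
    vecs = vecs[:, np.argsort(vals)[::-1]]       # descending variance order
    return vecs.T @ (X - mean), vecs, mean

def fuse_by_substitution(ms_bands, new_pc1):
    """Replace the first principal component with `new_pc1` (in the full
    algorithm, the wavelet/CS-fused component) and invert the PCA."""
    pcs, vecs, mean = pca(ms_bands)
    pcs[0] = np.asarray(new_pc1).ravel()
    return (vecs @ pcs + mean).reshape(ms_bands.shape)

rng = np.random.default_rng(7)
ms = rng.random((3, 8, 8))                       # toy 3-band MS image
pcs, vecs, mean = pca(ms)
# Substituting the original PC1 back reproduces the MS image (sanity check).
fused = fuse_by_substitution(ms, pcs[0].reshape(8, 8))
```

The round trip with the unmodified first component recovering the original bands confirms that any spectral change in the final fused image comes only from the substituted component, not from the transform itself.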

4. Experiment Result and Analysis

We simulate the proposed algorithm in MATLAB 7.8. Two groups of experimental data are adopted. The first group consists of a Landsat-TM multispectral image (MS, 30 m resolution, 256×256 pixels) and a SPOT panchromatic image (PAN, 10 m resolution, 256×256 pixels); the second group consists of an IKONOS multispectral image (4 m resolution, 256×256 pixels) and an IKONOS panchromatic image (1 m resolution, 256×256 pixels). Figure 3 illustrates the two groups of source images.

Group one: (a) Landsat-TM image (30 m, 256×256); (b) SPOT image (10 m, 256×256). Group two: (c) IKONOS image (4 m, 256×256); (d) IKONOS image (1 m, 256×256).

4.1. The Analysis of Symmetry Fractional <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M137"><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:math></inline-formula>-Spline Wavelet Order

When applying the α-order, J-level symmetric fractional B-spline wavelet transform, J is set to 3. When an image undergoes the symmetric fractional B-spline wavelet transform, the effectiveness of the fusion varies with α. We compute the entropy (EN), average gradient (AG), correlation coefficient (CC), and degree of distortion (DE); they are illustrated in Figure 4.
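Two of these metrics can be computed as follows; these are common definitions (ours) and may differ slightly from the exact formulas the authors used:

```python
import numpy as np

def entropy(img, levels=256):
    """Information entropy (EN) of a grey-level image histogram, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def avg_gradient(img):
    """Average gradient (AG): mean magnitude of local intensity differences,
    a common proxy for image definition (sharpness)."""
    f = img.astype(float)
    gx = np.diff(f, axis=1)[:-1, :]
    gy = np.diff(f, axis=0)[:, :-1]
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2).mean())

flat = np.full((64, 64), 128)
half = np.zeros((64, 64))
half[:, 32:] = 255
# A constant image has zero entropy and zero gradient; a 50/50 two-level
# image has exactly 1 bit of entropy and a nonzero gradient at the edge.
```

Higher EN and AG indicate richer information content and sharper detail, which is why the fusion aims to maximize both, as discussed below.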

The fusion parameters change curve corresponding to the different numbers of wavelet order. (a) EN, AG, CC, and DE of fusion image of Landsat-TM and SPOT. (b) EN, AG, CC, and DE of fusion image of IKONOS.

In Figure 4, the abscissa of each panel is the wavelet order α, and the ordinates are, respectively, the entropy, the definition (average gradient), the correlation coefficient, and the degree of distortion of the fused image. The figures show that as the order α of the symmetric fractional B-spline wavelet transform increases, the information entropy and correlation coefficient (columns 1 and 3) decline after a short initial rise, the definition (column 2) rises after a short initial decline, and the distortion (column 4) keeps increasing. Image fusion aims at a comparatively large information entropy, average gradient, and correlation coefficient, together with a minimum degree of distortion. After multiple experiments, we set α = −0.25 for the Landsat-TM and SPOT images of group one and α = −0.15 for the IKONOS images of group two. With a proper value of α, we obtain the best fusion effectiveness: an optimal balance among the four quality-evaluation indexes.

4.2. Comparison and Analysis with Traditional Fusion Algorithm

Four different methods are adopted to fuse the satellite remote sensing images: the traditional PCA transform, the discrete wavelet transform (DWT), the fusion method based on PCA and the fractional B-spline wavelet (FWT-PCA) (the wavelet in that reference is also a fractional B-spline wavelet), and the fusion method proposed in this paper (CS-FWT-PCA). The fused images are illustrated in Figures 5 and 6. The sampling rate of compressed sensing is M/N = 0.4.

Fusion image of Landsat-TM and SPOT. (a) PCA; (b) DWT; (c) FWT-PCA; (d) CS-FWT-PCA.

Fusion image of IKONOS. (a) PCA; (b) DWT; (c) FWT-PCA; (d) CS-FWT-PCA.

Comparing the relevant images in Figures 3, 5, and 6 by subjective visual inspection, we find that the spatial resolutions of the fused images are quite close to one another and higher than that of the multispectral images before fusion (Figures 3(a) and 3(c)). From the viewpoint of the spectral signature, the fusion method based on traditional PCA shows spectral distortion (Figures 5(a) and 6(a)). Although the method combining the fractional B-spline wavelet transform with the PCA transform (Figures 5(c) and 6(c)) achieves fair fusion effectiveness, the algorithm proposed in this paper attains higher spatial resolution and richer spectral information than that method, with distinct and visible image contours.

In this section, an objective evaluation is used to analyze the information entropy, average gradient, correlation coefficient, and degree of distortion of the images produced by each fusion method. Tables 1 and 2 evaluate the fusion performance on the Landsat-TM and SPOT data; Tables 3 and 4 evaluate the fusion performance on the IKONOS data.

Table 1. Landsat-TM and SPOT image data fusion performance evaluation 1 (entropy EN and average gradient AG, per RGB channel).

Method        EN (R)    EN (G)    EN (B)    AG (R)     AG (G)     AG (B)
PAN           7.4075    7.4075    7.4075    13.1554    13.1554    13.1554
MS            7.4638    7.4244    7.4122    6.6873     6.485      6.6413
PCA           7.4942    7.5570    7.4307    10.8695    10.6396    10.7393
DWT           7.5512    7.5345    7.5034    11.1628    11.0917    11.3812
FWT-PCA       7.6187    7.7086    7.5206    13.2971    13.1768    13.2113
CS-FWT-PCA    7.7103    7.7092    7.6820    14.0967    13.8774    13.9965

Table 2. Landsat-TM and SPOT image data fusion performance evaluation 2 (correlation coefficient CC and degree of distortion DE, per RGB channel).

Method        CC (R)    CC (G)    CC (B)    DE (R)     DE (G)     DE (B)
PAN           0.7091    0.6961    0.6128    32.2877    27.6492    28.5331
MS            0.7091    0.6961    0.6128    0          0          0
PCA           0.7176    0.6830    0.6556    27.5283    27.1905    27.5072
DWT           0.8403    0.7926    0.7849    16.9660    27.6493    28.5331
FWT-PCA       0.8874    0.8618    0.7301    16.7670    16.5377    16.6828
CS-FWT-PCA    0.9092    0.8993    0.8728    15.9702    15.7063    14.3999

Table 3. IKONOS image data fusion performance evaluation 1 (entropy EN and average gradient AG, per RGB channel).

Method        EN (R)    EN (G)    EN (B)    AG (R)     AG (G)     AG (B)
PAN           7.7376    7.7376    7.7376    29.6375    29.6375    29.6375
MS            7.8316    7.6849    7.6202    12.2342    12.0416    11.5112
PCA           7.8468    7.7716    7.7817    24.5896    22.3556    22.0533
DWT           7.8592    7.7098    7.7089    27.4281    27.6365    27.5637
FWT-PCA       7.8723    7.7476    7.7722    29.4191    29.6374    29.6375
CS-FWT-PCA    7.9106    7.8315    7.8044    31.8498    28.6302    28.4154

Table 4. IKONOS image data fusion performance evaluation 2 (correlation coefficient CC and degree of distortion DE, per RGB channel).

Method        CC (R)    CC (G)    CC (B)    DE (R)     DE (G)     DE (B)
PAN           0.7847    0.7846    0.7822    28.9117    27.3725    43.107
MS            0.7847    0.7846    0.7822    0          0          0
PCA           0.8086    0.8058    0.7954    31.3666    30.8954    31.0786
DWT           0.8391    0.8279    0.8326    29.1504    28.8737    29.7480
FWT-PCA       0.8654    0.8519    0.8063    25.5072    27.3725    26.1070
CS-FWT-PCA    0.9019    0.8608    0.9011    18.3169    18.5298    18.0108

From the data in Tables 1 and 2, it can be seen that the evaluation indexes of the FWT-PCA-based fusion are superior to those of the DWT- and PCA-based fusions: the average gradient of the FWT-PCA-based image is 17.99% higher than that of the DWT-based image, and the distortion drops by 31.66%. The CS-FWT-PCA-based fusion improves the entropy (by 0.0845), the average gradient (by 0.7618), and the correlation coefficient (by 0.0674) further and significantly reduces the degree of distortion (by 1.3037).

In Tables 3 and 4, although the source images differ, the simulation results are similar to those for the Landsat-TM and SPOT images. Compared with the PCA algorithm, the DWT algorithm increases the average gradient and correlation coefficient and decreases the entropy and the degree of distortion, but the differences between the two algorithms are not large. Every index of the FWT-PCA-based fused image is better than those of the PCA- and DWT-based images, while the four indicators of the CS-FWT-PCA-based image are optimized further: the mean entropy over the RGB channels is 7.8488, higher than 7.7974 for the FWT-PCA-based image; the mean average gradient and mean correlation coefficient improve to 29.6318 and 0.8879, respectively; and the mean distortion is reduced to the minimum, 18.2858.

These parameters show that after the traditional PCA transform, the information entropy and correlation coefficient are the smallest and the degree of distortion is comparatively large, so its fusion effectiveness is worse than that of the other methods. The reason is that in the PCA transform the first principal component captures the greatest variance and carries most of the spatial detail, so it has the strongest correlation with the panchromatic image; the methods that fuse the first principal component with the panchromatic image at the coefficient level exploit this, so their fused images retain more spectral information and achieve a better overall effect.

With its great approximation capability, the symmetric fractional B-spline wavelet transform captures the detailed information of the image more effectively. By combining the symmetric fractional B-spline wavelet transform with the PCA transform, FWT-PCA improves the textural features rendered by the PCA transform, thereby enhancing the expression of spatial details, while the wavelet transform preserves the richness of the spectral information. It therefore improves the definition of the fused image while significantly reducing the distortion, and the information entropy and correlation coefficient improve markedly. The CS-FWT-PCA algorithm proposed in this paper further reduces the sampling time through compressive sampling; the symmetric fractional B-spline wavelet transform provides the sparsification, and combined with the PCA transform it achieves the highest definition among the fused images. The CS-FWT-PCA-based fused image is closest to the MS image in color and has the minimum distortion and the best comprehensive indexes. The method preserves the high spatial resolution of the source image and the richness of the spectral information, improves the fusion quality, and obtains the best fusion effectiveness with fewer sampling points.

5. Conclusion

In this paper, we introduced compressed sensing and its application and then described the image fusion algorithm based on CS-FWT-PCA. In the simulations that followed, two groups of experimental data were fused separately with the proposed algorithm, the classical PCA fusion method, the wavelet transform, and the FWT-PCA fusion rules. We conclude that the FWT-PCA and CS-FWT-PCA algorithms are clearly superior to the others and that the CS-FWT-PCA algorithm performs best. However, the compressed sensing-based algorithm still requires considerable simulation time; our next work will focus on improving the image fusion efficiency of the proposed algorithm and reducing the simulation time.

References

1. J. Shlens, A Tutorial on Principal Component Analysis, Systems Neurobiology Laboratory, University of California at San Diego, 2005.
2. C. Proppe, "Multiresolution analysis for stochastic finite element problems with wavelet-based Karhunen-Loève expansion," Mathematical Problems in Engineering, vol. 2012, Article ID 215109, 2012.
3. P. S. Chavez Jr., S. C. Sides, and J. A. Anderson, "Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic," Photogrammetric Engineering & Remote Sensing, vol. 57, no. 3, pp. 295-303, 1991.
4. B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607-609, 1996.
5. E. J. Candès and D. L. Donoho, "New tight frames of curvelets and optimal representations of objects with piecewise C² singularities," Communications on Pure and Applied Mathematics, vol. 57, no. 2, pp. 219-266, 2004.
6. E. Le Pennec and S. Mallat, "Sparse geometric image representations with bandelets," IEEE Transactions on Image Processing, vol. 14, no. 4, pp. 423-438, 2005.
7. S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, pp. 129-159, 2001.
8. S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397-3415, 1993.
9. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
10. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, 2006.
11. E. J. Candès and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, 2008.
12. R. G. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118-124, 2007.
13. W. Fang, "Image processing and reconstruction based on compressed sensing," Journal of Optoelectronics Laser, vol. 23, no. 1, pp. 196-202, 2012.
14. Z. Zhu, K. Wahid, P. Babyn, D. Cooper, I. Pratt, and Y. Carter, "Improved compressed sensing-based algorithm for sparse-view CT image reconstruction," Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 185750, 2013.
15. L. Jing, H. ChongZhao, Y. XiangHua, and L. Feng, "Splitting matching pursuit method for reconstructing sparse signal in compressed sensing," Journal of Applied Mathematics, vol. 2013, Article ID 804640, 2013.
16. E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
17. W. Xu, J. Lin, K. Niu, and Z. He, "Performance analysis of support recovery in compressed sensing," AEU - International Journal of Electronics and Communications, vol. 66, no. 4, pp. 294-296, 2012.
18. Z. Xiong-wei, H. Jian-jun, and Z. Tao, "Compressive sensing: innovative theory in information processing field," Journal of Military Communications Technology, vol. 32, no. 4, pp. 83-87, 2011.
19. M. Unser and T. Blu, "Fractional splines and wavelets," SIAM Review, vol. 42, no. 1, pp. 43-67, 2000.
20. T. Blu and M. Unser, "Fractional spline wavelet transform: definition and implementation," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 512-515, 2000.
21. M. Unser and T. Blu, "Construction of fractional spline wavelet bases," in Wavelet Applications in Signal and Image Processing VII, vol. 3813 of Proceedings of SPIE, pp. 422-431, 1999.
22. L. Ya-chun and W. Jin-gang, "Analysis on image fusion rules based on wavelet transform," Computer Engineering and Applications, vol. 46, no. 8, pp. 180-182, 2010.
23. C. Heng, Research on Pixel-Level Image Fusion and Its Key Technologies, University of Electronic Science and Technology of China, Chengdu, China, 2008.
24. Y. S. Juang, L. T. Ko, J. E. Chen, Y. S. Shieh, T. Y. Sung, and H. Chin Hsin, "Histogram modification and wavelet transform for high performance watermarking," Mathematical Problems in Engineering, vol. 2012, Article ID 164869, 2012.
25. L. Guo, H. Li, and Y. Bao, Image Fusion, Publishing House of Electronics Industry, Beijing, China, 2008.
26. J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," International Journal of Remote Sensing, vol. 19, no. 4, pp. 743-757, 1998.
27. W. Yang and Y. Gong, "Multi-spectral and panchromatic images fusion based on PCA and fractional spline wavelet," International Journal of Remote Sensing, vol. 33, no. 22, pp. 7060-7074, 2012.