Region-Based Image-Fusion Framework for Compressive Imaging

A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous works on conventional image fusion, we consider both the compression capability on the sensor side and an intelligent understanding of the image contents during fusion. Firstly, compressed sensing theory and normalized cut theory are introduced. Then the region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.


Introduction
Image fusion is a process of combining information from multiple images into a single fused image. According to the level at which fusion is performed, image-fusion technologies can be classified into three hierarchical levels: pixel level [1,2], feature level [3], and decision level [4]. Since real-world objects usually consist of structures at different scales, multiresolution representations, such as pyramid [5], gradient [6], wavelet [7-9], and bidimensional empirical mode [10] decompositions, have been proposed and used in pixel-level image fusion. There are three categories of methods for computing the weight coefficient w1(p) at position p: coefficient-based, window-based, and region-based [10,11]. Generally, region-based image fusion is more intelligent and performs better than coefficient-based and window-based fusion schemes.
In recent years, with the rising attention on compressive sensing (CS) theory [12,13] from both academia and industry, image fusion for compressive sensing has attracted many researchers. Compared with traditional image fusion, which requires full acquisition of the source images, compressive sensing image fusion imposes no such requirement on the source image sampling. CS ensures that if a signal is sparse on a certain basis, it can be recovered from a relatively small set of random linear projections onto another basis that is incoherent with the sparsifying basis. Therefore, with its superiority in reducing computational and transmission costs, CS has become a preferred approach for image fusion.
Most of the published literature on CS image fusion explores fusion schemes based on the sampling model and fuses CS coefficients directly [14-26]. Different ensembles of CS matrices have been defined in previous CS literature [27-29]. The SBHE sampling operator [27] employed in this paper has very good properties: it can be easily applied in the optical domain (e.g., single-pixel cameras), requires little storage space, and computes quickly. In addition, in contrast with the existing CS fusion literature, this paper proposes a novel region-based image-fusion framework that enforces fusion consistency within the same regional partition. Therefore, it is more intelligent for object detection in the fusion result.
Since multiple image sensors are widely employed in fields such as multifocus, military, and medical imaging, it is necessary, in order to increase the capabilities of intelligent machines and systems, to explore the adaptability of different fusion algorithms to different scenarios. For example, imaging cameras usually have only a finite depth of field. Only those objects within the depth of field of the camera are in focus, while other objects are blurred. Therefore, multifocus image fusion [18-20, 30-32] can create a better description of the scene than any of the individual focused images. In the infrared (IR) and visible image-fusion scenario [14-16, 33], the IR image is sensitive to IR light but has low definition, while the visible image captures more details of the scene. Thus, the fusion of IR and visible images can deliver a comprehensive representation of both the important objects detected by the IR image and the environmental details from the visible image. In medical imaging, image fusion has been widely utilized for diagnosis and treatment [17,34]. For instance, the CT image is sensitive to bone structure, while MRI images soft tissue better. By compositing CT and MRI images, additional diagnostic information can be obtained [34].
The remainder of this paper is organized as follows. Section 2 introduces the background of CS and the normalized cut theory. In Section 3, the region-based fusion framework is proposed; the sampling operator, the joint regional partition algorithm, and the fusion schemes are elaborated there. The experimental results and discussion are given in Section 4. Finally, Section 5 concludes this paper and lists the contributions.

Background
2.1. Compressive Sensing. The protocols of compressive sensing are nonadaptive and parallelizable [12]. They do not require prior knowledge of the acquired signal/image, nor do they attempt any understanding of the underlying object to guide acquisition. Consider a length-N, real-valued signal x, and suppose that it is K-sparse on a certain basis Ψ such as an orthonormal wavelet basis, a Fourier basis, or a local Fourier basis. In matrix notation, the signal x can be expressed as the decomposition

x = Ψθ, (1)

where θ represents a vector of transform coefficients with only K nonzero components. This implies that the signal x is sparse on a certain basis, so the CS theory can be applied to it. Recent studies show that such a signal can be accurately reconstructed by taking only M = O(K log N) linear, nonadaptive measurements. The measurements are obtained through the linear system

y = Φx, (2)

where y represents the M × 1 measurement vector and Φ is an M × N measurement matrix which is incoherent with Ψ; that is, ΦΨ satisfies the restricted isometry property (RIP). The CS theory consists of two steps. (1) Design a measurement matrix Φ that is incoherent with the sparsifying basis Ψ to obtain the measurement vector y. Recent studies on CS show that, for a fixed sparsifying basis Ψ, Gaussian or Bernoulli i.i.d. matrices offer universal and near-optimal performance as measurement matrices, but at high computational complexity. In [12], a new sampling operator called the scrambled block Hadamard ensemble (SBHE) is introduced, which is also quite universal but has lower complexity. (2) Reconstruct the signal x from the CI measurements y. Since M < N, the recovery of x from y is ill-posed, which makes it impossible to invert (2) directly. Several reconstruction algorithms have been proposed in recent years, such as gradient projection for sparse reconstruction (GPSR) [35], basis pursuit [36], total variation minimization [37], orthogonal matching pursuit (OMP) [38], and L1-norm minimization [39,40]. In this paper, the original image is reconstructed with the GPSR algorithm [35].

2.2. SBHE Sampling. The scrambled block Hadamard ensemble (SBHE) [27] is quite universal but has lower complexity, and it is used in this paper as the compressive sampling operator. In block-based CS, the source images are first divided into small blocks of size B × B. For each block, the same sampling operator Φ_B is used as the measurement matrix. Φ_B is formed from the partial block Hadamard transform with its columns randomly permuted, as in SBHE [27]. The overall sampling operator Φ is a block-diagonal matrix of Φ_B:

Φ = diag(Φ_B, Φ_B, ..., Φ_B). (3)

In each block, we apply the linear sampling operator Φ_B with the SBHE structure, and the number of measurements per block is M_B. Let x_i represent the vectorized signal of the ith block obtained by raster scanning; then the corresponding block measurement vector is

y_i = Φ_B x_i. (4)

Finally, we concatenate the block measurement vectors for reconstruction.
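The block-based sampling above can be sketched in a few lines. This is an illustrative construction, not the authors' exact code: an SBHE-style operator is assumed to take a full Hadamard matrix, scramble its columns with a random permutation, and keep a random subset of rows as Φ_B, which is then applied to every raster-scanned block.

```python
import numpy as np
from scipy.linalg import hadamard

def sbhe_operator(block_size, n_measurements, rng):
    """Build an SBHE-style sampling operator for one B*B block.

    Illustrative sketch: the Hadamard matrix of order n = B*B has its
    columns randomly permuted, then n_measurements rows are kept.
    """
    n = block_size * block_size          # vectorized block length
    W = hadamard(n)                      # full Hadamard transform (n power of 2)
    W = W[:, rng.permutation(n)]         # scramble the columns
    rows = rng.choice(n, n_measurements, replace=False)
    return W[rows, :] / np.sqrt(n)       # partial, normalized operator Phi_B

def sample_blocks(image, block_size, phi_b):
    """Apply the same block operator to every B*B block in raster order."""
    h, w = image.shape
    measurements = []
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            x = image[i:i+block_size, j:j+block_size].ravel()
            measurements.append(phi_b @ x)   # y_i = Phi_B x_i
    return np.array(measurements)

rng = np.random.default_rng(0)
phi_b = sbhe_operator(block_size=8, n_measurements=32, rng=rng)
img = rng.random((32, 32))
y = sample_blocks(img, 8, phi_b)
print(y.shape)   # (16, 32): 16 blocks, 32 measurements each
```

Because every block shares the same Φ_B, only one small matrix has to be stored on the sensor side, which is the storage advantage the text attributes to SBHE.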

2.3. Normalized Cut Method.
Image segmentation can be treated as a graph-partitioning problem. In this paper, the normalized cut (NCut) method [41,42] is selected for image segmentation. The NCut method was put forward by Shi and Malik to solve clustering and image segmentation problems. NCut considers the image as a weighted graph G = (V, E), in which V stands for the set of nodes (the pixels in the image) and E represents the set of edges connecting the nodes. The weight of the edge between node u and node v is w(u, v), indicating the similarity between the pixels, with w(u, v) = w(v, u) ≥ 0.
The NCut method divides the nodes of the graph G by partitioning. If two node sets A and B satisfy A ∪ B = V and A ∩ B = ∅, the edges connecting the two sets can be removed to divide the image into two parts. The normalized dissimilarity between sets A and B is defined as

NCut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V), (5)

in which cut(A, B) is the sum of the weights of all edges between A and B,

cut(A, B) = Σ_{u∈A, v∈B} w(u, v), (6)

and assoc(A, V) is the sum of the weights of all edges between A and V (assoc(B, V) is defined analogously),

assoc(A, V) = Σ_{u∈A, t∈V} w(u, t). (7)

The graph G = (V, E) is partitioned into two disjoint sets A and B by minimizing the NCut value. Since it is quite time-consuming to compute the eigenvalues and eigenvectors of the similarity matrix exactly, the Lanczos method is generally used. The computational complexity of the NCut method is O(mn²), where n is the number of pixels in the image and m is the number of iterations the Lanczos method needs to converge. Thus, even for a small image, the computational cost is still large.
In this paper, we accelerate the NCut method by taking a block, a B × B pixel patch, as the basic node. In block-based CS, the block is the basic sampling unit; correspondingly, it is taken as the basic unit when constructing the similarity matrix of the NCut method. Therefore, in block-based graph partitioning, V denotes the set of blocks, E denotes the set of edges connecting blocks, and the edge weight between block u and block v is w(u, v), representing the similarity between blocks, with w(u, v) = w(v, u) ≥ 0.
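A minimal sketch of a block-based two-way normalized cut follows. It is not the paper's implementation: similarity here uses only block-mean differences (the full NCut method also weighs spatial distance), and the eigensystem is solved densely rather than with Lanczos; the sign of the second-smallest eigenvector of the normalized Laplacian gives the bipartition.

```python
import numpy as np

def block_features(image, b):
    """Mean intensity of each b*b block, used as the node feature."""
    h, w = image.shape
    return np.array([[image[i:i+b, j:j+b].mean()
                      for j in range(0, w, b)]
                     for i in range(0, h, b)])

def ncut_bipartition(feat, sigma=1.0):
    """Two-way normalized cut over blocks (illustrative dense solver)."""
    f = feat.ravel()
    W = np.exp(-(f[:, None] - f[None, :]) ** 2 / sigma ** 2)  # similarity
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(f)) - D_inv_sqrt @ W @ D_inv_sqrt      # normalized Laplacian
    vals, vecs = np.linalg.eigh(L_sym)
    fiedler = vecs[:, 1]               # second-smallest eigenvector
    return (fiedler > 0).astype(int).reshape(feat.shape)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # two flat regions, left and right
labels = ncut_bipartition(block_features(img, 8))
print(labels)                          # 4x4 block labels, split left/right
```

With 8 × 8 blocks, a 512 × 512 image yields only 4096 nodes instead of 262144 pixels, which is the speedup the block-based variant is after.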

3.1. Overview of Region-Based Fusion Framework.

The purpose of image fusion is to composite scenes, producing a better understanding of the scene than any individual image. Among multiresolution image-fusion schemes, region-based fusion is more intelligent and performs better than pixel-based fusion [43]. Compared with pixel-level and feature-level image fusion, region-based image fusion has its own advantages: it enhances the robustness of the fusion system and overcomes certain problems of pixel-level fusion, such as sensitivity to noise and edge blur. In CS image fusion, it is therefore necessary and valuable to explore whether a region-based CS scheme performs better. The region-based fusion framework for compressive imaging is illustrated in Figure 1.
Firstly, the source images are compressed through compressive sensing so as to facilitate transmission from the sensor. Meanwhile, the source images are segmented by regional partitioning, with the purpose of obtaining an intelligent understanding of the image contents. By combining the regional partition results of the source images, the joint regional partition result is obtained. In the fusion phase, region-based fusion schemes are applied on top of the joint regional partition, making the fusion scheme consistent within each region. Finally, the inverse transformation is applied to the fused coefficients, and the fused image is obtained.
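The flow of Figure 1 can be summarized in a toy sketch. All stage implementations here are simplified stand-ins: a real system would use SBHE sampling, NCut-based joint partitioning with region-wise weights, and GPSR reconstruction rather than a Gaussian matrix, global weights, and least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 64, 24                                    # signal length, measurements
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in measurement matrix

def cs_sample(x):
    """Sensor side: compressive measurements y = Phi x."""
    return phi @ x

def fuse(y1, y2, w1, w2):
    """Weighted average of measurement vectors (global weights here;
    the proposed scheme makes them region-wise)."""
    return w1 * y1 + w2 * y2

def reconstruct(y):
    """Stub reconstruction via least squares; the paper uses GPSR."""
    return np.linalg.pinv(phi) @ y

x1, x2 = rng.random(n), rng.random(n)            # two source signals
y_f = fuse(cs_sample(x1), cs_sample(x2), 0.6, 0.4)
x_f = reconstruct(y_f)
print(x_f.shape)                                 # fused reconstruction, length n
```

The key structural point the sketch preserves is that fusion happens in the measurement domain, before any reconstruction takes place.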
In an image-fusion scheme, fusion weight coefficients are calculated by mathematical combinations of the image channels. The calculation methods for fusion weight coefficients are mainly based on the average [44], mean, variance, PCA [19,45,46], and mutual information [47]. In Section 3.3, the fusion scheme based on mutual information (MI) is described and a fusion scheme based on regional mutual information is proposed.

3.2. Joint Regional Partition.
In region-based image fusion, the fusion scheme operates on the same region of all source images. For this reason, the source images are required to share a unique, common regional partition. The joint regional partition combines the regional partitions of the source images so that all source images share the same regional partition result.
Joint regional partition can be treated as the process of combining label matrices. Let L1 and L2 represent the partition label matrices of the two images. The joint label matrix is defined as

S = N · L1 + L2, (8)

where N = N1 × N2 is the number of all possible combinations of regional partitions, with N1 and N2 the numbers of regions in the two partitions. In this way, N is an integer large enough to guarantee that different region combinations are assigned different signs. In the dividing matrix S, positions where the two partitions agree in the corresponding place receive the same sign, while differing region combinations are granted new signs.
Here we use NCut as the regional partition method and the joint label matrix as the joint regional partition method in the multifocus scenario, as illustrated in Figure 2. Figures 2(a) and 2(b) show two multifocus clock images. Figures 2(c) and 2(d) show the block-based partition results of the two images, and Figure 2(e) shows the resulting joint regional partition. In this way, the source images share the same regional partition result.
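The label-matrix combination can be sketched as follows. The encoding S = N·L1 + L2 with N = N1·N2 is one realization of the "large enough integer" described above (any N greater than max(L2) keeps distinct label pairs distinct); the combined signs are then relabeled to consecutive integers.

```python
import numpy as np

def joint_labels(L1, L2):
    """Combine two partition label matrices into a joint label matrix.

    Encodes each (L1, L2) pair as S = N*L1 + L2 with N = N1*N2 (the
    number of possible region combinations), then relabels the distinct
    signs to consecutive integers.
    """
    N = (L1.max() + 1) * (L2.max() + 1)
    S = N * L1 + L2
    _, inverse = np.unique(S, return_inverse=True)
    return inverse.reshape(S.shape)

L1 = np.array([[0, 0], [1, 1]])   # partition of image 1 (top/bottom)
L2 = np.array([[0, 1], [0, 1]])   # partition of image 2 (left/right)
print(joint_labels(L1, L2))       # four joint regions: [[0 1] [2 3]]
```

Blocks end up in the same joint region only when both source partitions agree on them, which is exactly the consistency property the fusion scheme needs.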

3.3. Region-Based Fusion Schemes.
Since the measurements are random projections of the signal rather than the pixel values of the source images, it is improper to apply traditional fusion schemes directly. For example, the max-abs scheme and the simple mean scheme both select transform coefficients to represent the significance of the images; in CS imaging, however, the magnitudes of the random measurements do not carry such interpretations.
Imitating the traditional fusion schemes, we adopt a linear fusion scheme via a weighted average of the projected measurements, performed in a region-wise manner:

y_F^i = w1 · y1^i + w2 · y2^i, (9)

where y1^i and y2^i are the ith CS region measurement vectors of the two source images, respectively. The challenge is then to find a way to decide proper weights w1 and w2 that represent the importance of the source images behind the random measurements. In this section, a fusion scheme based on mutual information is implemented and a fusion scheme based on regional mutual information is proposed.

3.3.1. Fusion Scheme Based on Mutual Information.
In most of the published literature [14-26], the measurements of multiple input images are fused into composite measurements via a weighted average, in which the weights are calculated from entropy metrics of the original measurements. Here we use entropy metrics, well established in information theory, to measure the amount of information in the images. The most widely used entropy metrics are the simple entropy H(y), the joint entropy H(y1, y2), and the mutual information I(y1, y2):

H(y) = −Σ_a p(a) log p(a), (10)

H(y1, y2) = −Σ_{a,b} p(a, b) log p(a, b), (11)

I(y1, y2) = Σ_{a,b} p(a, b) log [p(a, b) / (p(a) p(b))], (12)

and they satisfy the relationship

I(y1, y2) = H(y1) + H(y2) − H(y1, y2). (13)

Intuitively, in the linear weighted average, the set of measurements that contains more information should be assigned a larger weight. Therefore, a weight assignment following the distribution of information measured by the above entropy metrics, with the weights normalized so that w1 + w2 = 1, is proposed (14).
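The weight computation can be sketched as below. Both the entropy estimator (histogram probabilities) and the exact weight formula (normalizing the exclusive information H_i − I of each measurement set) are illustrative assumptions; the text above fixes only the principle that more informative measurements get larger weights summing to one.

```python
import numpy as np

def entropies(y1, y2, bins=16):
    """Simple, joint, and mutual entropy of two measurement vectors,
    with probabilities estimated from a joint histogram (assumed estimator)."""
    p12, _, _ = np.histogram2d(y1, y2, bins=bins)
    p12 = p12 / p12.sum()                      # joint probabilities p(a, b)
    p1, p2 = p12.sum(axis=1), p12.sum(axis=0)  # marginals p(a), p(b)
    H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    H1, H2, H12 = H(p1), H(p2), H(p12.ravel())
    return H1, H2, H1 + H2 - H12               # I = H1 + H2 - H12, eq. (13)

def mi_weights(y1, y2):
    """Assumed weight rule: weight each set by its exclusive information
    H_i - I, normalized so the weights sum to one."""
    H1, H2, I = entropies(y1, y2)
    e1, e2 = H1 - I, H2 - I
    return e1 / (e1 + e2), e2 / (e1 + e2)

rng = np.random.default_rng(2)
y1 = rng.standard_normal(1000)
y2 = 0.5 * y1 + rng.standard_normal(1000)      # correlated measurement set
w1, w2 = mi_weights(y1, y2)
print(round(w1 + w2, 6))   # 1.0
```

Note that the histogram bin count trades bias against variance in the entropy estimates; 16 bins is an arbitrary choice for the example.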

3.3.2. Fusion Scheme Based on Regional Mutual Information.
Different from traditional CS fusion, a novel region-based CS image-fusion scheme is constructed here, in which region-based mutual information provides a more intelligent information-entropy measure. Let S1 represent the dividing matrix of the first image and S2 that of the second image. Their joint dividing matrix is then defined, as in (8), by

S = N · S1 + S2, (15)

where N = N1 × N2 is the number of all possible combinations of regional partitions, an integer large enough to guarantee that different region combinations are assigned different signs: in the dividing matrix S, positions where the two partitions agree receive the same sign, while differing region combinations are granted new signs. With the aid of S, the weight matrices W1 and W2 are calculated region-wise as

W_k^new(p) = Σ_r w_k(r) δ(S(p) − r), k ∈ {1, 2}, (16)

where δ(·) is the unit impulse function,

δ(t) = 1 if t = 0, and 0 otherwise, (17)

and w_k(r) is the mutual-information weight computed from the measurements of region r. In the weight matrix W_k^new, all blocks of the same region share the same weight.
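The region-wise weight expansion of (16) can be sketched as follows: every block inherits its region's weight, so weights are constant inside each region. The per-region weights are given directly here for illustration; in the scheme they come from the regional mutual-information metric of Section 3.3.1 applied per region.

```python
import numpy as np

def region_weight_maps(S, w1_by_region):
    """Expand per-region weights into block-wise weight matrices W1, W2.

    Mirrors the unit-impulse formulation of (16): the mask (S == r)
    plays the role of delta(S - r), selecting all blocks of region r.
    """
    W1 = np.zeros_like(S, dtype=float)
    for r, w in w1_by_region.items():
        W1[S == r] = w               # every block in region r gets weight w
    W2 = 1.0 - W1                    # weights sum to one per block
    return W1, W2

S = np.array([[0, 0, 1],
              [0, 1, 1]])           # joint label matrix with two regions
W1, W2 = region_weight_maps(S, {0: 0.8, 1: 0.3})
print(W1)
# Fused measurements for block (i, j): y_F = W1[i, j]*y1 + W2[i, j]*y2
```

Constant weights inside a region are exactly the fusion consistency that distinguishes this scheme from per-coefficient CS fusion.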

4.1. Experiment Setup.
To assess the fusion quality and the adaptability to different fusion scenarios, we set up three groups of image-fusion experiments for different application scenarios: the fusion of CT with MRI images, the fusion of infrared with visible images, and multifocus image fusion.
Objective assessments are also included in our experiments. In this paper, information entropy (IE), Xydeas's metric [48], and Piella's metric [49] are used as no-reference objective metrics. Information entropy is generally applied to measure the amount of information: the higher the information entropy, the better the fusion result. In addition, Xydeas's and Piella's metrics assess the salient information transferred from the input images to the fused image. Piella's metric takes the image correlation coefficient, mean luminance, contrast, and edge information into account in a comprehensive manner. The dynamic ranges of the three Piella indexes, Q, Q_W, and Q_E, are [0, 1]; the closer the values are to 1, the better the fusion performance.

4.2. Application Scenario 1: Medical Image Fusion.
Medical image fusion is a common and valuable application scenario. Due to its high density resolution, the CT image excels at imaging bone and calcified structures. MRI is not ideal for imaging bone structure but performs well in the contrast resolution of soft tissue. Therefore, by compositing these two modalities, image fusion can reduce information redundancy and make their complementary information available, which is of important research value in clinical practice.
We apply CS fusion and regional CS fusion to the experimental set of CT and MRI images. The experimental results and objective measurements are given in Figure 3 and Table 1.
In medical image fusion, although the PCA scheme scores highly on the objective metrics, it generates a significant blocking effect in human visual perception. The fusion results demonstrate that our proposed method avoids the blocking effect and has the best performance on Piella's metric, meaning that the salient information from the inputs is well preserved in the fused image.

4.3. Application Scenario 2: Infrared and Visible Image Fusion.

In the military fusion of infrared and visible light images, infrared imaging has a strong ability to discover important military targets, while visible light excels at expressing texture. Therefore, image fusion should embody both the important military targets and the good texture expression in the fusion results. We also apply CS fusion and region-based CS fusion to the experimental set of infrared and visible light images. The experimental results and objective measurements are given in Figure 4 and Table 2.
From Table 2, it can be seen that, in the fusion scenario of infrared with visible light images, our proposed method delivers better performance in the IE, Xydeas, and Piella metrics than the other five fusion schemes. It is also demonstrated that the fused image preserves the edge and salient information of the source images.

4.4. Application Scenario 3: Multifocus Image Fusion.
In photography, due to the limited depth of field of the camera lens, only objects on the imaging plane are in focus, while objects off the imaging plane are blurred. Multifocus image fusion fuses images with different depths of field so that objects on different imaging planes are all sharp in the fusion result. We also apply CS fusion and regional CS fusion to the experimental set of multifocus clock images. The experimental results and objective measurements are given in Figure 5 and Table 3. From Table 3, it is also shown that, in the multifocus fusion scenario, our proposed method is the best in terms of both objective assessment and visual perception. In addition, the region-based image-fusion scheme provides better fusion results than the method described in Section 3.3.1 under the same parameter settings, dictionary, and fusion rule. In summary, the proposed scheme outperforms the other five fusion schemes in all three scenarios, which also indicates that the proposed scheme is adaptive to different scenarios.

Conclusion
In this paper, a novel region-based image-fusion framework for compressive imaging is proposed and a region-based fusion scheme is constructed, delivering better fusion results than traditional CS fusion. The key contributions are as follows.
(1) This paper explores a region-based fusion framework and scheme for compressive imaging. In the existing literature, CS fusion directly combines the coefficients after CS sampling, without exploring or utilizing the relationships among those coefficients. Based on the intrinsic relations between image blocks, the blocks can be divided into different region partitions, and it is necessary and valuable to explore a region-based CS fusion scheme built on these partitions.
(2) The experimental results show that region-based CS fusion performs better than CS fusion that directly combines the coefficients after CS sampling. For example, in medical image fusion, traditional schemes such as the mean, variance, and PCA schemes may generate a blocking effect that could cause confusion in medical diagnosis. The region-based CS fusion scheme is better not only in objective metrics but also from the human visual perspective.
(3) Since image fusion is widely employed in military and civilian applications, we apply the proposed method to different fusion scenarios to test its adaptability. The experimental results show that the region-based CS fusion scheme adapts to different fusion scenarios and outperforms previous CS fusion schemes.

Figure 2 :
Figure 2: (a) Left-focus clock, (b) right-focus clock, (c) regional partition of left-focus clock, (d) regional partition of right-focus clock, and (e) joint partition of two clocks.

Figure 3 :
Figure 3: (a) CT image, (b) MRI image, (c) fusion result of average scheme, (d) fusion result of mean scheme, (e) fusion result of variance scheme, (f) fusion result of PCA scheme, (g) fusion result of MI scheme, and (h) fusion result of the proposed method.

Figure 4 :
Figure 4: (a) Infrared image, (b) visible light image, (c) fusion result of average scheme, (d) fusion result of mean scheme, (e) fusion result of variance scheme, (f) fusion result of PCA scheme, (g) fusion result of MI scheme, and (h) fusion result of the proposed method.

Figure 5 :
Figure 5: (a) Left-focus clock, (b) right-focus clock, (c) fusion result of average scheme, (d) fusion result of mean scheme, (e) fusion result of variance scheme, (f) fusion result of PCA scheme, (g) fusion result of MI scheme, and (h) fusion result of the proposed method.

Table 1 :
The quantitative assessment of fusion methods for CT and MRI images.

Table 2 :
The quantitative assessment of fusion methods for infrared and visible light images.

Table 3 :
The quantitative assessment of fusion methods for multifocus images.