MRI Volume Fusion Based on 3D Shearlet Decompositions

Nowadays many MRI scans can produce 3D volume data with different contrasts, and observers may want to view various contrasts in the same 3D volume. Conventional 2D medical fusion methods can only fuse 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band limited shearlet transform (3D BLST) is proposed and evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than conventional fusion methods based on the 2D and 3D wavelet and dual-tree complex wavelet transforms.


Introduction
Medical image fusion is a special case of image fusion and has been studied for decades; it is widely applied in medical diagnostics [1,2]. It refers to extracting and merging the useful information from different source images, which are captured by different kinds of sensors, such as CT, MRI, and PET, or by different configurations of the same sensor, such as MRI T2* and quantitative susceptibility mapping (QSM). Some of the information is correlated, but most of it is complementary, because particular sensors, or particular configurations of the same sensor, are sensitive to particular sources. For example, CT images provide the details of dense hard tissues, while MRI images give information about soft tissues: T2* provides the contrast of tissue relaxation time, and QSM provides susceptibility contrast information, which is produced by a range of endogenous magnetic biomarkers and contrast agents such as iron, calcium, and gadolinium (Gd). If different data can be properly fused, the fused data contain all the useful information about the scanned object and can reveal structural details more clearly than any single sensor. Before fusion, all source data need to be registered. Because the 3D T2* magnitude and QSM images are derived from the real and imaginary parts of the same scan, the QSM images are exactly registered to the T2* images.
Nowadays, much research on medical fusion methods considers only the 2D case. However, many medical sensors can provide 3D data volumes, and the value of each point in the volume is correlated not only with the adjacent points in the same layer but also with the points in neighboring layers. Therefore, it is necessary to develop volume fusion methods instead of 2D image fusion methods, which can only fuse the data within a single layer.
Fusion methods can be performed in the spatial domain or in a transformed domain. In the spatial domain, the fused image is typically a weighted average of the source images. This kind of method is relatively easy to implement, but the performance is low and some useful information is attenuated or even lost. Transform-domain fusion methods usually follow four steps: (1) register the source images, (2) apply the forward transform to the source images, (3) combine the coefficients of the source images into fused coefficients under a fusion rule, and (4) apply the backward transform to the fused coefficients to obtain the fused image. In this type of method, research usually focuses on two points: the choice of the transform and the design of the fusion rule. Many multiscale transforms have been applied in fusion methods, such as DWT, DTCWT, curvelets [3], and shearlets [4].
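These four steps can be sketched in code. The sketch below uses a single-level 3D Haar wavelet as a stand-in for the multiscale transforms named above (DWT, DTCWT, curvelets, shearlets), purely to keep the example self-contained; registration (step 1) is assumed already done, volume dimensions are assumed even, and maximum-modulus selection is used as one simple choice of fusion rule.

```python
import numpy as np

def _split(b, axis):
    """One Haar analysis step along `axis`: low-pass and high-pass halves."""
    lo = (b.take(range(0, b.shape[axis], 2), axis) +
          b.take(range(1, b.shape[axis], 2), axis)) / np.sqrt(2)
    hi = (b.take(range(0, b.shape[axis], 2), axis) -
          b.take(range(1, b.shape[axis], 2), axis)) / np.sqrt(2)
    return lo, hi

def _merge(lo, hi, axis):
    """Inverse of _split: interleave the reconstructed even/odd samples."""
    even = (lo + hi) / np.sqrt(2)
    odd = (lo - hi) / np.sqrt(2)
    shape = list(lo.shape); shape[axis] *= 2
    out = np.empty(shape)
    sl = [slice(None)] * 3
    sl[axis] = slice(0, None, 2); out[tuple(sl)] = even
    sl[axis] = slice(1, None, 2); out[tuple(sl)] = odd
    return out

def haar3d_forward(vol):
    """Single-level 3D Haar transform: one low-pass band + 7 detail bands."""
    bands = [vol]
    for axis in range(3):
        bands = [half for b in bands for half in _split(b, axis)]
    return bands[0], bands[1:]

def haar3d_inverse(low, high):
    """Invert haar3d_forward by merging band pairs axis by axis."""
    bands = [low] + list(high)
    for axis in (2, 1, 0):
        bands = [_merge(bands[i], bands[i + 1], axis)
                 for i in range(0, len(bands), 2)]
    return bands[0]

def fuse(vol_a, vol_b):
    """Steps (2)-(4): forward transform, fuse coefficients, inverse transform.
    Low-pass band: average; detail bands: keep the larger-modulus coefficient."""
    lo_a, hi_a = haar3d_forward(vol_a)
    lo_b, hi_b = haar3d_forward(vol_b)
    lo_f = (lo_a + lo_b) / 2
    hi_f = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(hi_a, hi_b)]
    return haar3d_inverse(lo_f, hi_f)
```

Any of the transforms discussed in this paper can be substituted for `haar3d_forward`/`haar3d_inverse` without changing the overall four-step structure.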
Shearlets emerged in recent years as one of the most successful frameworks for the efficient representation of multidimensional data. Many other transforms were introduced to overcome the limitation of traditional multiscale transforms, namely their poor ability to capture edges and other anisotropic features. However, the shearlet transform stands out because it combines several unique advantages: it has a single or finite set of generating functions; it provides optimally sparse representations for multidimensional data; and it allows a unified treatment of the continuum and digital realms. With these advantages, the shearlet transform has been widely utilized in many image processing tasks such as denoising [5], edge detection [6], and enhancement [7]. As shown in [4] and in this paper, the shearlet transform is also well suited to image fusion. In this paper, the 3D band limited shearlet transform, a discrete implementation of the shearlet transform, is selected for medical volume fusion.
Three fusion rules are utilized in this paper: maximum points' modulus (MPM), which considers only the value of a single point; maximum regional energy (MRE), which considers the information of a local region [8] and treats each point of the region equally; and maximum sum of modified Laplacian (MSML) [9], which also considers the information in a region but treats the center point and the surrounding points differently. More complicated fusion rules have also been proposed; the above three were selected as representatives. These three classic fusion rules are extended to 3 dimensions. In order to evaluate the performance of the proposed method, the quality indices are also extended to 3D versions.
The rest of the paper is organized as follows. In Section 2, the basic theory of the 3D shearlet transform and its discrete implementation, 3D BLST, is briefly introduced. In Section 3, the fusion method based on 3D BLST with three fusion rules is proposed. Through the experiments of Section 4, the comparison of 2D and 3D methods and the performance of the proposed methods are illustrated and discussed. Finally, we draw conclusions in Section 5.

3D Shearlet Transform
In this section, the basic theory of the 3D shearlet transform and its discrete implementation, the 3D band limited shearlet transform (3D BLST), are introduced.

Basic Theory of the 3D Shearlet Transform. As shown in Figure 1, the 3D frequency domain can be partitioned into three pairs of pyramids,

$\mathcal{P} = \{(\xi_1, \xi_2, \xi_3) \in \mathbb{R}^3 : |\xi_1| \ge 1,\ |\xi_2/\xi_1| \le 1,\ |\xi_3/\xi_1| \le 1\}$,
$\tilde{\mathcal{P}} = \{(\xi_1, \xi_2, \xi_3) \in \mathbb{R}^3 : |\xi_2| \ge 1,\ |\xi_1/\xi_2| \le 1,\ |\xi_3/\xi_2| \le 1\}$,  (1)
$\breve{\mathcal{P}} = \{(\xi_1, \xi_2, \xi_3) \in \mathbb{R}^3 : |\xi_3| \ge 1,\ |\xi_1/\xi_3| \le 1,\ |\xi_2/\xi_3| \le 1\}$,

and the center cube $\mathcal{C} = \{(\xi_1, \xi_2, \xi_3) \in \mathbb{R}^3 : \|(\xi_1, \xi_2, \xi_3)\|_\infty < 1\}$. The partitioning of frequency space into pyramids allows restricting the range of the shear parameters: without such a partitioning, one must allow arbitrarily large shear parameters, which leads to a treatment biased toward one axis. The defined partition, however, enables restriction of the shear parameters to $[-\lceil 2^{j/2} \rceil, \lceil 2^{j/2} \rceil]$. This approach is the key to providing an almost uniform treatment of different directions in the sense of a good approximation to rotation. Pyramid-adapted shearlets are scaled according to the paraboloidal scaling matrices $A_{2^j}$, $\tilde{A}_{2^j}$, and $\breve{A}_{2^j}$, $j \in \mathbb{Z}$, defined by

$A_{2^j} = \operatorname{diag}(2^j, 2^{j/2}, 2^{j/2})$, $\tilde{A}_{2^j} = \operatorname{diag}(2^{j/2}, 2^j, 2^{j/2})$, $\breve{A}_{2^j} = \operatorname{diag}(2^{j/2}, 2^{j/2}, 2^j)$.  (2)

Directionality is encoded by the shear matrices $S_k$, $\tilde{S}_k$, and $\breve{S}_k$, $k = (k_1, k_2) \in \mathbb{Z}^2$, given by

$S_k = \begin{pmatrix} 1 & k_1 & k_2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$, $\tilde{S}_k = \begin{pmatrix} 1 & 0 & 0 \\ k_1 & 1 & k_2 \\ 0 & 0 & 1 \end{pmatrix}$, $\breve{S}_k = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ k_1 & k_2 & 1 \end{pmatrix}$,  (3)

respectively. The translation lattices are defined through the matrices $M_c = \operatorname{diag}(c_1, c_2, c_2)$, $\tilde{M}_c = \operatorname{diag}(c_2, c_1, c_2)$, and $\breve{M}_c = \operatorname{diag}(c_2, c_2, c_1)$, where $c_1 > 0$ and $c_2 > 0$. Then the definition of the 3D pyramid-adapted discrete shearlet system can be given as follows.

Definition 1. For $c = (c_1, c_2) \in (\mathbb{R}^+)^2$, the pyramid-adapted discrete shearlet system $\mathrm{SH}(\phi, \psi, \tilde{\psi}, \breve{\psi}; c)$ generated by $\phi, \psi, \tilde{\psi}, \breve{\psi} \in L^2(\mathbb{R}^3)$ is defined by

$\mathrm{SH}(\phi, \psi, \tilde{\psi}, \breve{\psi}; c) = \Phi(\phi; c_1) \cup \Psi(\psi; c) \cup \tilde{\Psi}(\tilde{\psi}; c) \cup \breve{\Psi}(\breve{\psi}; c)$,  (4)

where $\Phi(\phi; c_1) = \{\phi_m = \phi(\cdot - m) : m \in c_1 \mathbb{Z}^3\}$, $\Psi(\psi; c) = \{\psi_{j,k,m} = 2^j \psi(S_k A_{2^j} \cdot - m) : j \ge 0,\ |k| \le \lceil 2^{j/2} \rceil,\ m \in M_c \mathbb{Z}^3\}$, and $\tilde{\Psi}(\tilde{\psi}; c)$ and $\breve{\Psi}(\breve{\psi}; c)$ are defined analogously in terms of $\tilde{S}_k \tilde{A}_{2^j}$, $\tilde{M}_c$ and $\breve{S}_k \breve{A}_{2^j}$, $\breve{M}_c$.

For the band limited case, the generator $\psi$ is chosen in the frequency domain as $\hat{\psi}(\xi) = \hat{\psi}_1(\xi_1)\,\hat{\psi}_2(\xi_2/\xi_1)\,\hat{\psi}_2(\xi_3/\xi_1)$, where the wavelet $\psi_1$ and the bump function $\psi_2$ satisfy suitable support and smoothness assumptions (see [10]). Thus, in the frequency domain, the band limited shearlet function $\psi \in L^2(\mathbb{R}^3)$ is almost a tensor product of one wavelet with two "bump" functions and is thereby a canonical generalization of the classical band limited 2D shearlets. This implies a needle-like support in the frequency domain, with the wavelet acting in the radial direction, and ensures high directional selectivity. The deviation from a pure tensor product in fact ensures a favorable behavior with respect to the shearing operator and thus a tiling of the frequency domain which leads to a tight frame for $L^2(\mathbb{R}^3)$.

Theorem 2. Let $\psi$ be a band limited shearlet defined as before. Then the family of functions $\Psi(\psi)$ forms a tight frame for $\check{L}^2(\mathcal{P}) := \{f \in L^2(\mathbb{R}^3) : \operatorname{supp} \hat{f} \subset \mathcal{P}\}$.

By this theorem and a change of variables, the shearlet tight frames for $\check{L}^2(\mathcal{P})$, $\check{L}^2(\tilde{\mathcal{P}})$, and $\check{L}^2(\breve{\mathcal{P}})$ can be constructed, respectively. Furthermore, wavelet theory provides a choice of $\phi \in L^2(\mathbb{R}^3)$ such that $\Phi(\phi; 1/8)$ forms a tight frame for $\check{L}^2(\mathcal{C})$. Since $\mathbb{R}^3 = \mathcal{C} \cup \mathcal{P} \cup \tilde{\mathcal{P}} \cup \breve{\mathcal{P}}$ is a disjoint union (up to sets of measure zero), any function $f \in L^2(\mathbb{R}^3)$ can be expressed as $f = P_{\mathcal{C}} f + P_{\mathcal{P}} f + P_{\tilde{\mathcal{P}}} f + P_{\breve{\mathcal{P}}} f$, where $P_D$ denotes the orthogonal projection onto the closed subspace $\check{L}^2(D)$ for a measurable set $D \subset \mathbb{R}^3$. More details of 3D shearlet theory and the implementation of the band limited shearlet transform, as well as other implementations, can be found in [10-14].

Proposed Fusion Method
The proposed fusion method in this paper belongs to voxel-level fusion, with an average rule for the low frequency coefficients and three different fusion rules for the high frequency coefficients.
(a) Max Points' Modulus (MPM). The fused high frequency coefficients are the coefficients with the larger modulus, as represented in

$C_F(x, y, z) = \begin{cases} C_A(x, y, z), & |C_A(x, y, z)| \ge |C_B(x, y, z)|, \\ C_B(x, y, z), & \text{otherwise}, \end{cases}$  (5)

where $C$ denotes a high frequency subband coefficient, the subscripts $A$ and $B$ label the two sources, respectively, and $F$ refers to the fused result. This fusion rule considers only the information of a single point.
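As a concrete illustration, the MPM selection of (5) reduces to a single vectorized comparison per subband (a NumPy sketch; ties are resolved in favor of source A):

```python
import numpy as np

def fuse_mpm(c_a, c_b):
    """Maximum points' modulus (MPM), eq. (5): for every voxel keep the
    subband coefficient with the larger absolute value."""
    return np.where(np.abs(c_a) >= np.abs(c_b), c_a, c_b)
```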
(b) Max Region Energy (MRE) [8]. One has

$E(x, y, z) = \frac{1}{N_\Omega} \sum_{(p,q,r) \in \Omega} \left( C(p, q, r) - \bar{C} \right)^2$,  (6)

where $\Omega$ is a local region centered at $(x, y, z)$, $\bar{C}$ is the mean of all coefficients in $\Omega$, and $N_\Omega$ is the number of coefficients in $\Omega$. The fused high frequency coefficients are the coefficients with the larger local energy $E$. This kind of rule considers not only the information at the current position but also the information around it.
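A sketch of the MRE rule under the reading of (6) above. The cubic window of radius 1 and the edge-replication boundary handling are assumed choices, not prescribed by the text:

```python
import numpy as np

def region_energy(c, radius=1):
    """Local energy of eq. (6): mean squared deviation from the regional mean,
    computed over a cubic window of side 2*radius + 1 around each voxel."""
    k = 2 * radius + 1
    pad = np.pad(c, radius, mode='edge')
    s = np.zeros(c.shape, dtype=float)   # running sum over the window
    s2 = np.zeros(c.shape, dtype=float)  # running sum of squares
    for dx in range(k):
        for dy in range(k):
            for dz in range(k):
                w = pad[dx:dx + c.shape[0], dy:dy + c.shape[1], dz:dz + c.shape[2]]
                s += w
                s2 += w * w
    n = k ** 3
    mean = s / n
    # E[(C - Cbar)^2] = E[C^2] - Cbar^2
    return s2 / n - mean ** 2

def fuse_mre(c_a, c_b, radius=1):
    """Max region energy (MRE): keep the coefficient whose neighborhood
    has the larger energy; ties go to source A."""
    return np.where(region_energy(c_a, radius) >= region_energy(c_b, radius),
                    c_a, c_b)
```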
(c) Max Region Sum of Modified Laplacian (MSML) [9]. The fused high frequency coefficients are acquired according to

$C_F(x, y, z) = \begin{cases} C_A(x, y, z), & \mathrm{SML}_A(x, y, z) \ge \mathrm{SML}_B(x, y, z), \\ C_B(x, y, z), & \text{otherwise}. \end{cases}$  (7)

The sum of the modified Laplacian over a local region $\Omega$ is calculated as

$\mathrm{SML}(x, y, z) = \sum_{(p,q,r) \in \Omega} \mathrm{ML}(p, q, r)$,  (8)

and the 3D version of the modified Laplacian index is calculated through

$\mathrm{ML}(x, y, z) = |2C(x, y, z) - C(x - s, y, z) - C(x + s, y, z)| + |2C(x, y, z) - C(x, y - s, z) - C(x, y + s, z)| + |2C(x, y, z) - C(x, y, z - s) - C(x, y, z + s)|$,  (9)

where $s$ is the variation step; in this paper $s = 1$.

The steps of the proposed fusion method are given in Figure 2. First, the forward 3D BLST is applied to both source volumes. The fused low frequency coefficients are the average of the low frequency coefficients of the two source volumes, and the fused high frequency coefficients are acquired by (5)-(9). Finally, the backward 3D BLST is applied to the fused coefficients, and the output is the fused volume.
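The MSML rule of (7)-(9) can be sketched as follows. The cubic region $\Omega$ of radius 1 and the boundary handling by edge replication are assumed choices that the text does not specify:

```python
import numpy as np

def modified_laplacian_3d(c, step=1):
    """3D modified Laplacian of eq. (9): sum over the three axes of
    |2C - C(shifted -s) - C(shifted +s)|, with edge-replicated boundaries."""
    pad = np.pad(c, step, mode='edge')
    ml = np.zeros(c.shape, dtype=float)
    for axis in range(3):
        fwd = [slice(step, step + n) for n in c.shape]
        bwd = [slice(step, step + n) for n in c.shape]
        fwd[axis] = slice(2 * step, 2 * step + c.shape[axis])  # shift +s
        bwd[axis] = slice(0, c.shape[axis])                    # shift -s
        ml += np.abs(2 * c - pad[tuple(fwd)] - pad[tuple(bwd)])
    return ml

def sml(c, radius=1, step=1):
    """Sum of the modified Laplacian over a cubic region Omega, eq. (8)."""
    ml = modified_laplacian_3d(c, step)
    k = 2 * radius + 1
    pad = np.pad(ml, radius, mode='constant')
    out = np.zeros(ml.shape, dtype=float)
    for dx in range(k):
        for dy in range(k):
            for dz in range(k):
                out += pad[dx:dx + ml.shape[0], dy:dy + ml.shape[1], dz:dz + ml.shape[2]]
    return out

def fuse_msml(c_a, c_b, radius=1, step=1):
    """Max region sum of modified Laplacian (MSML), eq. (7); ties go to A."""
    return np.where(sml(c_a, radius, step) >= sml(c_b, radius, step), c_a, c_b)
```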

Experiment
In this section, the performance of the proposed methods is evaluated on 4 human brain subjects. The human study was approved by our Institutional Review Board. MR examinations were performed with a 3.0 T MR system (Signa HDxt, GE, USA) using an 8-channel head coil. A 3D T2* weighted multiecho gradient echo sequence was used with the following parameters: FA = 20°; TR = 57 ms; number of TEs = 8; first TE = 5.7 ms; uniform TE spacing (ΔTE) = 6.7 ms; BW = ±41.67 kHz; field of view (FOV) = 24 cm; resolution = 0.57 × 0.75 × 2 mm³. The 3D T2* magnitude and QSM images reconstructed by NMEDI [15] were interpolated to 128 × 128 × 128 for fusion. Because the magnetic field outside the brain parenchyma is corrupted by noise in QSM processing, the QSM region was cropped by a mask, which was obtained using a brain extraction tool (BET) [16]. In the following comparisons, fusion is evaluated within this mask.

2D versus 3D:
The Consistency along the z-Axis. The volume data has 3 dimensions, that is, the x-, y-, and z-axes. In the first experiment, the 2D methods are performed in the x-y plane, frame by frame along the z-axis, while the 3D methods directly fuse the whole volume. One major difference between these two types of methods is the treatment of the data along the z-axis. The consistency along the z-axis is compared through the visual effect of the interframe difference (IFD) images and the measurement of IFD MI [17,18]. When calculating IFD MI, only the voxels located in the masks of both frames are considered, because the data outside the mask are invalid. From the visual impression of the IFD images (Figure 3), it can be noticed that the images fused by the 2D methods have several obvious distortions, which make their IFD images similar to neither of the source images. In the results of the 3D methods, the IFD images are highly correlated with the IFD images of the source data (Figure 3); in particular, the IFD images of the fused volumes are very similar to those of the QSM data, and the differences among the 3D methods can hardly be noticed. The same conclusion can be drawn from the quality index of IFD MI, given in Tables 1, 2, 3, and 4: the 3D BLST with MPM fusion has the highest IFD MI, and all values for the 3D methods are higher than those of the 2D methods with the same fusion rule.
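The IFD MI measurement can be sketched as follows: take the difference between adjacent frames of each volume, then estimate the mutual information between the two difference images from a joint histogram. The bin count and the omission of mask handling are simplifications for illustration:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information between two images (nats)."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def ifd_mi(volume_src, volume_fused, z):
    """MI between the inter-frame difference (IFD) images of a source volume
    and a fused volume at slice index z (mask handling omitted here)."""
    ifd_src = volume_src[:, :, z + 1] - volume_src[:, :, z]
    ifd_fus = volume_fused[:, :, z + 1] - volume_fused[:, :, z]
    return mutual_information(ifd_src, ifd_fus)
```

A fused volume whose interframe differences track those of a source volume yields a higher IFD MI than one whose differences are unrelated, which is exactly the consistency the index is meant to capture.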

Performance of Proposed Methods.
In the second experiment, the visual effects and the quality indices are compared among the fusion methods based on 2D and 3D DWT, 2D and 3D DTCWT, and 3D BLST. Two widely used performance indices are selected as objective measurements of the fused results: mutual information (MI) and Q^{AB/F}. However, the index Q^{AB/F} of [19] is only suited to the case of 2D images, so it needs to be extended to 3D for our experiment: the 2D Sobel operator is substituted by a 3D Sobel operator, and the number of angles increases to 3. In most previous image fusion research, the quality indices are calculated over all voxels (or pixels) of the source volumes (or images). In this experiment, however, only the points located inside the mask are taken into consideration, because the points outside the mask carry no useful information about the brains and are set to arbitrary values manually; consequently, they are ignored in the evaluation step.
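The 3D extension of the Sobel operator mentioned above can be sketched with separable kernels: a derivative kernel [-1, 0, 1] along one axis and a smoothing kernel [1, 2, 1] along the other two. The exact kernels and normalization used for the 3D Q^{AB/F} are not specified in the text, so this is an illustrative choice:

```python
import numpy as np

def sobel3d(vol, axis):
    """Separable 3D Sobel response: derivative kernel [-1, 0, 1] along `axis`,
    smoothing kernel [1, 2, 1] along the other two axes (edge-padded)."""
    deriv = np.array([-1.0, 0.0, 1.0])
    smooth = np.array([1.0, 2.0, 1.0])
    out = vol.astype(float)
    for ax in range(3):
        k = deriv if ax == axis else smooth
        pad = np.pad(out, [(1, 1) if a == ax else (0, 0) for a in range(3)],
                     mode='edge')
        out = sum(k[i] * pad[tuple(slice(i, i + vol.shape[a]) if a == ax
                                   else slice(None) for a in range(3))]
                  for i in range(3))
    return out

def edge_strength(vol):
    """Gradient magnitude from the three 3D Sobel responses, the edge-strength
    ingredient of a volumetric Q^{AB/F} (normalization choices vary)."""
    g = [sobel3d(vol, a) for a in range(3)]
    return np.sqrt(g[0] ** 2 + g[1] ** 2 + g[2] ** 2)
```

The three responses also give the edge-orientation angles (one per axis pair) that the 3D index compares between each source and the fused volume.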
One layer each of the coronal, axial, and sagittal planes is selected as a representation; the source and result images are shown in Figures 4, 5, and 6. From the visual impression it is hard to tell which fusion method is better, because the resulting images are very similar to each other; the distinctions among them can only be noticed after careful observation. This suggests that the proposed method and all the conventional methods can fulfill the fusion task effectively. The performance of the different fusion methods can be compared through the quality indices listed in Tables 5, 6, 7, and 8. From the tables, it can be noticed that the quality indices of the proposed method are larger than those of the methods based on DWT or DTCWT. In the case of 3D BLST, among the different fusion rules, MRE yields the largest indices.
The method of this paper belongs to voxel-level fusion and considers only the distribution of the shearlet coefficients. In the future, the inner structural features of the organs will be taken into account, to see whether they can further improve the quality of the fused medical volume.

Conclusion
In this paper, a 3D medical volume fusion method based on the 3D band limited shearlet transform is proposed. From the principles of the methods and the experiments, the following conclusions can be drawn: (1) the 3D transform based methods have better consistency along the z-axis (the third dimension) than conventional 2D transform based methods in medical volume fusion; (2) from both the visual impression and the quality indices, the proposed 3D BLST medical fusion method is better than the 3D DWT or 3D DTCWT based methods; (3) among the fusion rules using 3D BLST, the MRE fusion rule performs better than the other two rules, MPM and MSML.