Clinical diagnosis places high demands on the visual quality of medical images. To obtain fused medical images with rich detail features and clear edges, an image fusion algorithm, FFST-SR-PCNN, based on the fast finite shearlet transform (FFST) and sparse representation is proposed, addressing the poor edge-detail clarity of current algorithms in preserving the details of the source images. Firstly, the source image is decomposed into low-frequency and high-frequency coefficients by FFST. Secondly, the K-SVD method is used to train the low-frequency coefficients to obtain an overcomplete dictionary
With the development of imaging devices, different sensors can acquire different information from images of the same scenario [
In recent years, the image fusion method based on multiscale geometric analysis has been widely used in the image processing due to its multiresolution characteristics [
The FFST-SR-PCNN first decomposed the registered source images into low-frequency
FFST-SR-PCNN medical image fusion algorithm process.
Set
For
For
Specifically, define the wavelet function
Set
The shearlet transform generated shearlet functions with different features by scaling, shearing, and translating basis functions. Image decomposition based on the shearlet transform involved the following steps: (1) decompose the image into low-frequency and high-frequency subbands at different scales with the Laplacian pyramid algorithm; (2) directionally subdivide the subbands at each scale with shear filters to realize multiscale, multidirectional decomposition and to keep the decomposed subband images the same size as the source images [
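Step (1) can be sketched as an undecimated pyramid decomposition which, like FFST, keeps every subband the same size as the source image. The sketch below is illustrative only (a 3x3 box filter stands in for the actual pyramid kernel); it is not the paper's implementation:

```python
import numpy as np

def _blur(img):
    # Simple 3x3 box low-pass filter as a stand-in for the pyramid kernel.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def laplacian_pyramid(img, levels=3):
    """Decompose `img` into high-frequency bands plus a low-frequency residual.

    Each level stores the detail (image minus its low-pass version); the
    blurred image is passed to the next level, so summing all bands and the
    residual reconstructs the input exactly.
    """
    bands, current = [], img.astype(float)
    for _ in range(levels):
        low = _blur(current)
        bands.append(current - low)   # high-frequency detail at this scale
        current = low
    return bands, current             # details + low-frequency residual
```

Because no downsampling is applied, perfect reconstruction is just the sum of all bands plus the residual.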
To obtain a discrete shearlet transform, the algorithm discretized the scaling, shearing, and translation parameters in formula (
The expression of the frequency domain was
To obtain the shearlets in the whole frequency domain,
Thus, the discrete shearlet can be expressed as
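A common discretization in the shearlet literature takes dyadic scales, integer shears, and grid translations; one standard form (the exact parameterization used in this paper may differ) is:

```latex
\psi_{j,k,m}(x) = a_j^{-3/4}\,
  \psi\!\left(A_{a_j}^{-1} S_{s_{j,k}}^{-1}(x - t_m)\right),
\qquad a_j = 4^{-j},\quad s_{j,k} = k\,2^{-j},\quad t_m = m \in \mathbb{Z}^2,
```

```latex
A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad
S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix},
```

where \(A_a\) is the anisotropic scaling matrix and \(S_s\) the shear matrix.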
The shearlet defined by formula (
The basic idea of sparse representation is to represent, or approximately represent, any signal by a linear combination of a small number of nonzero atoms from a given dictionary [
In FFST-SR-PCNN, first, the K singular value decomposition (K-SVD) method was used to train low-frequency coefficients and obtain the matrix
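K-SVD alternates between sparse coding and an atom-by-atom dictionary update. The following is a minimal sketch of the update sweep under standard assumptions (unit-norm atoms, codes refit on their current supports); the names are illustrative, not taken from the paper:

```python
import numpy as np

def ksvd_update(D, X, A):
    """One K-SVD dictionary-update sweep.

    D: dictionary (n x K, unit-norm atoms), X: training signals (n x N),
    A: current sparse codes (K x N).  Each atom, together with its
    coefficients, is refit as the rank-1 SVD approximation of the residual
    restricted to the signals that actually use that atom.
    """
    for k in range(D.shape[1]):
        users = np.nonzero(A[k])[0]          # signals whose code uses atom k
        if users.size == 0:
            continue
        A[k, users] = 0.0                    # remove atom k's contribution
        E = X[:, users] - D @ A[:, users]    # residual without atom k
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                    # best rank-1 atom
        A[k, users] = s[0] * Vt[0]           # and its coefficients
    return D, A
```

Since each rank-1 refit is optimal for that atom's columns, the reconstruction error is nonincreasing over the sweep.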
With the complete dictionary
Formula (
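The sparse coding subproblem is typically solved greedily; below is a minimal orthogonal matching pursuit sketch (a common choice in SR-based fusion, shown for illustration and not necessarily the paper's exact solver):

```python
import numpy as np

def omp(D, x, n_nonzero=5, tol=1e-6):
    """Orthogonal matching pursuit: greedily select dictionary atoms.

    D has unit-norm columns (atoms); returns a sparse coefficient vector
    `alpha` with at most `n_nonzero` nonzeros such that D @ alpha ~ x.
    """
    residual = x.astype(float).copy()
    support, alpha = [], np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit of x on all selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    alpha[support] = coeffs
    return alpha
```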
The fusion process of the low-frequency coefficients based on sparse representation is illustrated in Figure
Low-frequency coefficient fusion process based on sparse representation.
In Figure
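The patch-level selection in this process is usually driven by an activity-level rule; a minimal sketch assuming the common max-l1 choice (the dictionary `D` and per-patch codes are taken as given):

```python
import numpy as np

def fuse_sparse_codes(alpha_a, alpha_b):
    """Max-l1 activity rule: keep the sparse code with the larger l1 norm,
    assuming it carries the more salient low-frequency structure."""
    return alpha_a if np.abs(alpha_a).sum() >= np.abs(alpha_b).sum() else alpha_b

def fuse_lowfreq_patches(D, codes_a, codes_b):
    """Fuse per-patch sparse codes (K x N, one column per patch) and
    reconstruct the fused patches as columns of D @ fused_codes; the patches
    would then be tiled back into the fused low-frequency band."""
    fused = np.column_stack([fuse_sparse_codes(a, b)
                             for a, b in zip(codes_a.T, codes_b.T)])
    return D @ fused
```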
Pulse-coupled neural network (PCNN) can combine the input high-frequency coefficients with human visual characteristics to obtain detailed information such as texture, edge, and contour [
High-frequency coefficient fusion used a pixel as the neuronal feedback input to stimulate the simplified PCNN model. SF was
Ignition maps were obtained through PCNN firing, and the fused coefficients were selected according to the number of ignition times.
The process was implemented as follows:
where
The process was implemented as follows:
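A simplified PCNN of this kind can be sketched as follows. The parameter values (linking strength, threshold decay and amplitude, iteration count) are illustrative, not the paper's settings:

```python
import numpy as np

def pcnn_fire_counts(S, beta=0.2, alpha_theta=0.2, V_theta=20.0, iters=30):
    """Simplified pulse-coupled neural network driven by coefficient magnitudes.

    Each pixel is a neuron: feeding input F is the stimulus S, linking input L
    sums the previous firings of the 3x3 neighbourhood, internal activity is
    U = F * (1 + beta * L), and a neuron fires when U exceeds its dynamic
    threshold theta, which decays exponentially and jumps by V_theta on firing.
    Returns the per-pixel firing counts (the ignition map).
    """
    S = np.abs(S).astype(float)
    Y = np.zeros_like(S)                  # firing state
    theta = np.ones_like(S)               # dynamic threshold
    counts = np.zeros_like(S)
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    for _ in range(iters):
        padded = np.pad(Y, 1)             # previous firings, zero-padded
        L = sum(padded[1 + dy : 1 + dy + S.shape[0],
                       1 + dx : 1 + dx + S.shape[1]]
                for dy, dx in offsets)
        U = S * (1.0 + beta * L)          # modulated internal activity
        Y = (U > theta).astype(float)     # pulse output
        theta = theta * np.exp(-alpha_theta) + V_theta * Y
        counts += Y
    return counts

def fuse_highfreq(Ca, Cb, **kw):
    """Keep, per position, the coefficient whose neuron fired more often."""
    return np.where(pcnn_fire_counts(Ca, **kw) >= pcnn_fire_counts(Cb, **kw),
                    Ca, Cb)
```

Larger coefficient magnitudes fire earlier and more often, so the firing count acts as a saliency measure for selecting high-frequency coefficients.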
To verify the effectiveness of FFST-SR-PCNN, five representative algorithms were selected as controls for the medical image fusion experiments. Five indicators, including spatial frequency (SF), average gradient (AG), mutual information (MI), and the edge information transfer factor QAB/F (a high-weight evaluation indicator) [
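Two of these indicators have simple closed forms, shown below using definitions common in the fusion literature (higher is better for both):

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2): row and column first-difference energy."""
    img = img.astype(float)
    rf2 = np.mean(np.diff(img, axis=1) ** 2)   # row (horizontal) frequency
    cf2 = np.mean(np.diff(img, axis=0) ** 2)   # column (vertical) frequency
    return np.sqrt(rf2 + cf2)

def average_gradient(img):
    """AG: mean of sqrt((dx^2 + dy^2) / 2) over the interior pixels."""
    img = img.astype(float)
    dx = img[:-1, 1:] - img[:-1, :-1]
    dy = img[1:, :-1] - img[:-1, :-1]
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))
```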
In this experiment, six pairs of brain images in different states were selected for fusion. The first three pairs are CT/MR-T2 images and the last three pairs are MR-T1/MR-T2 images. The resulting images fused by different algorithms are shown in Figures
CT/MR-T2 medical image fusion results. (a) CT original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
CT/MR-T2 medical image fusion results. (a) CT original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
CT/MR-T2 medical image fusion results. (a) CT original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
MR-T1/MR-T2 medical image fusion results. (a) MR-T1 original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
MR-T1/MR-T2 medical image fusion results. (a) MR-T1 original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
MR-T1/MR-T2 medical image fusion results. (a) MR-T1 original image. (b) MR-T2 original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
Quality assessment of CT/MR-T2 medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 34.2861 | 34.3181 | 30.7780 | 30.6366 | 36.6748 | |
AG | 9.6513 | 8.2010 | | 9.6272 | 9.2317 | 9.6804 |
MI | 2.1254 | 2.2705 | 2.0769 | 2.2199 | | 2.5248 |
QAB/F | 0.5190 | 0.4850 | 0.4939 | 0.5257 | 0.5843 | |
RT/s | 16.2069 | 32.4554 | 30.5628 | 22.5256 | | 11.3624 |
Quality assessment of CT/MR-T2 medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 27.0626 | 26.9760 | 26.3291 | 23.9678 | 28.7623 | |
AG | 7.5929 | 6.7873 | | 7.2387 | 6.7479 | 7.2640 |
MI | 2.1941 | 2.6101 | 2.2457 | 2.3168 | 2.9609 | |
QAB/F | 0.4617 | 0.4088 | 0.5313 | 0.4733 | 0.5161 | |
RT/s | 16.2266 | 32.3894 | 29.9843 | 22.0427 | | 10.1755 |
Quality assessment of CT/MR-T2 medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 37.5717 | 41.0988 | 36.5295 | 38.8050 | 40.0197 | |
AG | 9.8877 | 8.9200 | 10.0862 | 10.2808 | 10.4221 | |
MI | 2.0886 | 2.3744 | 2.0724 | 2.1990 | 2.4176 | |
QAB/F | 0.5559 | 0.5582 | 0.5242 | 0.6148 | 0.6300 | |
RT/s | 16.2690 | 31.5377 | 30.1011 | 22.4004 | | 11.7004 |
Quality assessment of MR-T1/MR-T2 medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 22.7160 | 22.6900 | 21.5347 | 22.7164 | 24.3787 | |
AG | 6.4857 | 6.3629 | 6.6983 | 6.4737 | 6.5910 | |
MI | 2.3686 | 2.6473 | 2.4908 | 2.4517 | | 2.7319 |
QAB/F | 0.5204 | 0.5614 | 0.6105 | 0.5686 | 0.6261 | |
RT/s | 15.3972 | 27.2017 | 29.6786 | 22.8392 | | 8.5966 |
Quality assessment of MR-T1/MR-T2 medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 33.0368 | | 26.6605 | 30.3170 | 33.7416 | 34.2538 |
AG | 12.8183 | | 10.2990 | 11.7227 | 12.8446 | 13.3015 |
MI | 2.7059 | | 2.4716 | 2.6096 | 2.9629 | 3.2689 |
QAB/F | 0.6086 | | 0.4126 | 0.5333 | 0.5285 | 0.6353 |
RT/s | 16.2790 | 31.6667 | 31.1026 | 22.0803 | | 12.4248 |
Quality assessment of MR-T1/MR-T2 medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 25.6557 | 25.1765 | 23.7800 | 23.2217 | 26.4393 | |
AG | 9.8227 | 9.3917 | 8.9306 | 8.9716 | 9.5509 | |
MI | 2.5279 | 3.1267 | 3.1352 | 2.5935 | | 3.1781 |
QAB/F | 0.4643 | 0.4707 | 0.5005 | 0.4656 | 0.5478 | |
RT/s | 16.2837 | 34.0250 | 31.1384 | 22.7084 | | 12.1475 |
According to Figures
In this experiment, six pairs of brain images in different states were selected for fusion. The first three pairs are MR-T2/PET images and the last three pairs are MR-T2/SPECT images. The resulting images fused by different algorithms are shown in Figures
MR-T2/PET medical image fusion results. (a) MR-T2 original image. (b) PET original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
MR-T2/PET medical image fusion results. (a) MR-T2 original image. (b) PET original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
MR-T2/PET medical image fusion results. (a) MR-T2 original image. (b) PET original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
MR-T2/SPECT medical image fusion results. (a) MR-T2 original image. (b) SPECT original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
MR-T2/SPECT medical image fusion results. (a) MR-T2 original image. (b) SPECT original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
MR-T2/SPECT medical image fusion results. (a) MR-T2 original image. (b) SPECT original image. (c) Method 1. (d) Method 2. (e) Method 3. (f) Method 4. (g) Method 5. (h) FFST-SR-PCNN.
Quality assessment of MR-T2/PET medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 27.9886 | 24.4599 | 24.5435 | 28.1239 | 28.1658 | |
AG | 8.6039 | 7.9211 | 7.0367 | 8.6999 | 8.5409 | |
MI | 3.1791 | 3.0843 | 3.2350 | 3.2391 | 3.3424 | |
QAB/F | 0.5920 | 0.4431 | 0.4673 | 0.6186 | 0.5797 | |
RT/s | 15.5014 | 39.0562 | 31.0281 | 26.6560 | | 9.3991 |
Quality assessment of MR-T2/PET medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 32.6319 | 27.9935 | 28.0245 | 33.7764 | 34.2547 | |
AG | 10.9999 | 9.4439 | 8.8861 | 11.5490 | | 11.5588 |
MI | 3.3142 | 3.2200 | 3.2471 | 3.4545 | 3.7286 | |
QAB/F | 0.5886 | 0.4417 | 0.4566 | 0.6541 | 0.6331 | |
RT/s | 15.4448 | 39.8451 | 30.4667 | 25.3966 | | 9.9195 |
Quality assessment of MR-T2/PET medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 32.1710 | 26.2209 | 26.6609 | 32.8288 | 33.5603 | |
AG | 11.0293 | 9.3185 | 8.5356 | 11.3710 | 11.4366 | |
MI | 3.3244 | 3.2592 | 3.3420 | 3.4270 | 3.6002 | |
QAB/F | 0.5580 | 0.4026 | 0.4468 | 0.6060 | 0.5819 | |
RT/s | 15.4377 | 40.0521 | 30.5628 | 26.3830 | | 10.0894 |
Quality assessment of MR-T2/SPECT medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 22.1254 | 17.6301 | 17.4476 | 21.6041 | | 21.7161 |
AG | | 6.2468 | 5.9028 | 7.1767 | 7.4044 | 7.0624 |
MI | 2.7010 | 2.6122 | 2.7375 | 2.7932 | 2.9168 | |
QAB/F | 0.6647 | 0.3967 | 0.4266 | 0.6481 | 0.6849 | |
RT/s | 15.4280 | 40.6601 | 30.8525 | 26.2191 | | 8.9699 |
Quality assessment of MR-T2/SPECT medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | | 15.1207 | 15.2854 | 18.7206 | 19.0044 | 18.3672 |
AG | | 5.1296 | 4.8812 | 5.5770 | 5.7915 | 5.3554 |
MI | 2.5702 | 2.3812 | 2.5202 | 2.6742 | 2.7730 | |
QAB/F | 0.6692 | 0.3700 | 0.4581 | 0.6589 | 0.6668 | |
RT/s | 15.4179 | 38.9686 | 30.0630 | 26.3533 | | 8.3015 |
Quality assessment of MR-T2/SPECT medical image fusion.
Index | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 | FFST-SR-PCNN |
---|---|---|---|---|---|---|
SF | 22.0008 | 18.5584 | 17.6392 | 21.7631 | | 21.5873 |
AG | | 6.3536 | 5.5604 | 6.9306 | 7.1145 | 6.9346 |
MI | 2.4242 | 2.2906 | 2.3527 | 2.5019 | 2.6839 | |
QAB/F | 0.6753 | 0.4438 | 0.4188 | 0.6873 | 0.6915 | |
RT/s | 15.4405 | 39.9952 | 30.1950 | 25.8698 | | 8.6035 |
According to Figures
Taking the above grayscale and color image fusion results together, FFST-SR-PCNN achieves better fusion performance in edge sharpness, intensity variation, and contrast.
To improve the fusion performance of multimodal medical images, this paper proposed the FFST-SR-PCNN algorithm based on FFST, sparse representation, and a pulse-coupled neural network. It has excellent detail delineation and can efficiently extract the feature information of images, thus enhancing the overall quality of the fusion results. The performance of FFST-SR-PCNN was evaluated in several experiments. In the experiments against five comparison algorithms, every single-evaluation index of our algorithm ranked in the top three; the comprehensive evaluation gave our algorithm the best result, and its QAB/F is higher than that of all five comparison algorithms. Subjectively, FFST-SR-PCNN efficiently expresses the marginal information of images and makes the details of the fused image clearer, with smoother edges; thus, it has better subjective visual effects.
The data used to support the findings of this study are included within the article.
The authors declare that they have no conflicts of interest.