An image pair is often aligned initially by a rigid or affine transformation before a deformable registration method is applied in medical image registration. An inappropriate initial registration may slow the registration or impede the convergence of the optimization algorithm. In this work, a novel technique was proposed for prealignment in both monomodality and multimodality image registration based on the statistical correlation of gradient information. A simple and robust algorithm was proposed to determine the rotational difference between two images by matching orientation histograms accumulated from the local orientation of each pixel, without any feature extraction. Experimental results showed that the method effectively recovered the orientation angle between two unregistered images, with advantages over the existing edge-map based method for multimodality images. Applying the orientation detection to the registration of CT/MR, T1/T2 MRI, and monomodality images under rigid and nonrigid deformations improved the chances of finding the global optimum of the registration and reduced the search space of the optimization.

Image registration is the spatial mapping of corresponding locations between different images with broad applications in neurosurgery and radiotherapy [

It was previously reported that orientation difference estimation based on edge information [

In this study, the feasibility of calculating the orientation difference between two unregistered images is explored to facilitate the image registration without any feature extraction. The challenges lie in the lack of abundant corresponding characteristics between multimodality images. Tissue boundaries may vary in corresponding multimodality images, but the distribution of gradient information often has considerable similarities. The orientation difference between multimodality images is expected to be determined by the orientation histogram of gradient information. Gradient information has been widely used in the field of image registration [

In contrast to all previously proposed methods based on gradient information, this work adopts gradient information to estimate the global orientation difference between images. The estimation of the global rotational difference is cast as a problem of matching gradient-magnitude-weighted orientation histograms. A robust histogram-matching technique based on the L1 norm and L2 norm is also provided, which tolerates local deformation, partial occlusion, and illumination change in addition to image noise. Normalizing the gradient-magnitude-weighted orientation histogram allows the orientation difference to be estimated across scale differences between images. The main contribution of this work is that the global orientation is estimated by accumulating local gradient information, and the global orientation difference is formulated as a robust matching of gradient-magnitude-weighted orientation histograms.

The prealignment for multimodality images mainly consists of two components:

Flowchart of the estimation of the rotational difference.

Test MR Brain images. (a) T1 MRI. (b) T2 MRI. (c) Rotate T2 MRI by 11.46°.

The local orientation in the image is obtained by calculating the first derivatives in two orthogonal directions and the orientation is determined using the gradient expression illustrated as [

The gradient direction is calculated as
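The expression itself did not survive extraction; under the standard definition of the gradient direction from two orthogonal first derivatives, it is presumably

```latex
\theta(x, y) \;=\; \arctan\!\left(\frac{\partial I(x,y)/\partial y}{\partial I(x,y)/\partial x}\right),
\qquad \theta \in [0^\circ, 360^\circ),
```

where the four-quadrant arctangent (atan2) resolves the full 0° to 360° range from the signs of the two derivatives.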

The gradient is computed pixelwise, and this operation is carried out by filtering the image with operators such as the Sobel operator in the

One portion (the red rectangle in Figure

Local gradient calculated by the Gaussian derivative and gradient derivative, respectively. (a) Original image and its portion shown by the red rectangle. (b) Local gradient by Gaussian derivative. (c) Local gradient by gradient derivative. Note that the blue axis shows the local gradient orientation at each pixel, and the length of the axis denotes the value of local gradient magnitude.

In this work, the gradient-magnitude-weighted orientation histogram was devised to describe the global orientation characteristics of images. The weighted orientation histogram is built by accumulating, for each pixel, its gradient direction weighted by its gradient magnitude. The orientation resolution is set to 1° to balance the accuracy and the robustness of the orientation histogram. In addition, histogram normalization eliminates the influence of slight scale differences or partial occlusion between images.
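The construction described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: `np.gradient` (central differences) stands in for the Gaussian derivative the paper favors, and the circular masking described later in the text is included so both images contribute comparable areas.

```python
import numpy as np

def weighted_orientation_histogram(image, n_bins=360):
    """Gradient-magnitude-weighted orientation histogram with 1-degree bins.

    Sketch of the paper's descriptor: each pixel votes into the bin of its
    gradient direction, weighted by its gradient magnitude; pixels outside
    the largest centered circle are masked out; the histogram is normalized
    so slight scale differences between images cancel.
    """
    img = np.asarray(image, dtype=np.float64)
    gy, gx = np.gradient(img)                       # first derivatives
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0  # local orientation, [0, 360)

    # Keep only the largest circular region centered at the image, so the
    # accumulated area is rotation invariant and overlap is maximized.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = ((yy - (h - 1) / 2.0) ** 2 + (xx - (w - 1) / 2.0) ** 2
            <= (min(h, w) / 2.0) ** 2)

    bins = (angle[mask] * n_bins / 360.0).astype(int) % n_bins
    hist = np.bincount(bins, weights=magnitude[mask], minlength=n_bins)
    return hist / (hist.sum() + 1e-12)              # normalization step
```

For a simple horizontal intensity ramp, every gradient points along +x, so all the mass lands in the 0° bin after normalization.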

The magnitude-weighted orientation histograms accumulated from local gradients calculated by the Gaussian derivative and the gradient derivative were experimentally compared. In Figure , the histogram accumulated from the gradient derivative was artificially concentrated at 0°, 90°, 180°, and 270°. Comparatively, the Gaussian derivative was able to describe the local gradient information, which represented the desired characteristics of local areas.

Magnitude-weighted orientation histograms accumulated from local gradients calculated by the Gaussian derivative and the gradient derivative, respectively. (a) Local gradients calculated by Gaussian derivative. (b) Local gradients calculated by gradient derivative.

In Figure

Magnitudes weighted orientation histograms with different orientation bins. (a)

Since two images may not cover exactly the same anatomic structures, only the areas within the largest circular regions centered at the images are used in the orientation histogram accumulation, which maximizes the area of overlap between the two images. The orientation histogram of one image is then cyclically slid over the orientation histogram of the other image, and the position where the two histograms best match determines the rotational difference between the images, assuming

The Fourier transform, via the Fourier shift theorem, was used to efficiently compute the sum of absolute differences (L1 norm) or the sum of squared differences (L2 norm) over all possible cyclic shifts for
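The FFT-accelerated matching can be sketched as below. This is an illustrative reconstruction, not the authors' code, and it shows the L2 case only: the squared cost over all cyclic shifts factors into fixed energy terms plus a circular cross-correlation, which one FFT pair evaluates for every shift at once (the plain L1 cost has no such factorization and would be evaluated by direct shifting).

```python
import numpy as np

def rotation_from_histograms(hist_a, hist_b):
    """Estimate the rotational difference (in bins) between two
    normalized orientation histograms.

    The L2 cost over all cyclic shifts s,
        D(s) = sum_i (a[i] - b[(i + s) % N])^2
             = sum(a^2) + sum(b^2) - 2 * circ_crosscorr(a, b)[s],
    is computed for every s at once via the FFT; the minimizing
    shift is the estimated rotation.
    """
    a = np.asarray(hist_a, dtype=np.float64)
    b = np.asarray(hist_b, dtype=np.float64)
    # Circular cross-correlation c[s] = sum_i a[i] * b[(i + s) % N]
    corr = np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real
    cost = np.sum(a**2) + np.sum(b**2) - 2.0 * corr
    return int(np.argmin(cost))
```

With 1° bins, the returned shift in bins equals the rotational difference in degrees; rolling a histogram by 25 bins and matching it against the original recovers a shift of 25.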

Cyclic histogram shift by the Fourier shift theorem. (a) Original histogram. (b) Cyclic shift

A synthetic example of determining the rotational difference between brain T1 MRI and T2 MRI images was presented to demonstrate the detailed performance of the proposed algorithm in Figure

Simulated orientation test by gradient magnitude weighted orientation histogram matching of T1 MRI (Figure

To demonstrate the accuracy and robustness of the proposed method for estimating the global orientation difference between multimodality medical images, both synthetic and clinical medical images were used in the experiments. The performance of the proposed method was first tested on synthetic images, for which the ground truths of the rotational differences were known. The gold-standard rigid-body rotational difference for each registration was set by rotating one image in the image pair with respect to the other. The proposed method was then compared with the edge-based method for rotational estimation on challenging multimodality images. Finally, the proposed orientation estimation was incorporated into the preregistration of multimodality images.

The method was tested in three different medical image processing scenarios: brain T1WI and T2WI with scale difference (Figure

Image pairs used to evaluate the registration method. The first column and the second column were the original image pairs and the third column was generated by rotating the first column image with fixed 17° clockwise. (a) Brain T1/T2 MRI with intensity and slight scale difference. (b) Head CT/MR with significant differences. (c) Prostate MR with deformations and partial occlusions due to the surgery.

The test results of rotational differences between image pairs shown in Figure

Test results of rotational differences between image pairs by the proposed method.

| Image pair | 1st column and | 1st column and | 2nd column and | Absolute value |
|---|---|---|---|---|
| Brain T1/T2 MRI | 17 | 358 (or −2) | 340 | 18 |
| Head CT/MR | 17 | 357 (or −3) | 341 (or −19) | 16 |
| Prostate MR | 17 | 7 | 353 (or −7) | 14 |

Cross-correlation of gradient-magnitude-weighted orientation histograms for each image pair.

Then, test results of image pairs in Figure

Test results of rotational differences between image pairs by edge-map based method.

| Image pair | 1st column and | 1st column and | 2nd column and | Absolute value |
|---|---|---|---|---|
| Brain T1/T2 MRI | 16.87 | 120.23 | 82.97 | 27.26 |
| Head CT/MR | 16.87 | 355.08 | 46.40 | 308.68 |
| Prostate MR | 16.87 | 68.90 | 22.50 | 46.40 |

Simulated orientation test by edge-map based method of T1 MRI (Figure

Phase correlation of the polar presentation of Fourier spectrum for each image pair.

In essence, the proposed rotational estimation is a general technique for image registration over a wide range of rotational differences. To demonstrate the sensitivity of the method to both smaller and larger rotations, detailed simulation tests were presented with a range of rotations, including

Rotational estimation results of image pairs with synthetic rotational angles.

| Synthetic angles | Brain T1 | Head CT | Prostate MR |
|---|---|---|---|
| 1° | 0° | 0° | 0° |
| 2° | 1° | 2° | 1° |
| 3° | 3° | 4° | 3° |
| 4° | 4° | 4° | 4° |
| 5° | 5° | 5° | 5° |
| 6° | 6° | 6° | 6° |
| 7° | 7° | 7° | 7° |
| 8° | 8° | 8° | 8° |
| 40° | 40° | 40° | 40° |
| 70° | 70° | 70° | 70° |

The proposed orientation detection was experimentally incorporated into the registration of CT/MR and T1/T2 MRI images. Four clinical cases with multimodality images under affine transformation were registered. The image registration process was performed on a computer with a 3.3 GHz CPU and 4 GB RAM. The computation time of the registration process with and without the proposed method was measured, and the time consumed by the proposed technique was also reported in Table

Comparison of the iteration time of optimization with and without the proposed method.

| | Iteration times of optimization without the proposed method | Iteration times of optimization with the proposed method | Consumed time of the proposed method (seconds) |
|---|---|---|---|
| Case 1 | 46 times (15.69 seconds) | 23 times (8.08 seconds) | 0.504 seconds |
| Case 2 | 109 times (36.54 seconds) | 26 times (8.31 seconds) | 0.522 seconds |
| Case 3 | 226 times (75.56 seconds) | 25 times (8.64 seconds) | 0.566 seconds |
| Case 4 | 44 times (15.17 seconds) | 24 times (8.23 seconds) | 0.506 seconds |

An image pair is often aligned initially by a rigid or affine transformation before a nonrigid transformation is applied in deformable medical image registration. Therefore, registration results of clinical prostate images as shown in Figure

Experimental results of prostate image registration under affine transformation without prior information of orientation difference. (a) Difference of sense image and reference image, SSD = 0.0314. (b) Transformed image. (c) Difference of transformed image and reference image, SSD = 0.0302. Iteration times of optimization: 12 iterations.

Experimental results of prostate image registration under affine transformation with the proposed orientation detection. (a) Transformed image. (b) Difference of transformed image and reference image, SSD = 0.0269. Iteration times of optimization: 120 iterations.

Figure

The goal of preregistration by the proposed method is to reduce the search space of the iterative optimization and to increase the robustness of the registration process. The purpose of using SSD in Figures

It is striking that the estimation of rotational differences is accurate and robust for very dissimilar images of the same scene. Even with large local differences, illumination changes, and scale or rotational changes, the histogram matching process generates promising results. The main reason is that the proposed method is computed from the global distribution of local orientations weighted by gradient magnitudes, which is robust to considerable image changes across modalities. Statistical orientation histogram matching robustly captures the rotational differences between images. Since no high-level image features are required in the process, the proposed method is a very simple and robust way to determine the rotational difference between two images. The proposed orientation difference estimation can be used directly for rigid registration: once the orientation difference between two images is known, prealignment becomes significantly easier after the rotational difference is eliminated. The remaining translation can then be estimated by mutual information-based [

The overlapping area between two unregistered images is an important factor in general image registration. It is often assumed that the overlapping area is larger than 50% of the image area, and registration methods have to be robust to partial occlusion. In clinical practice, the volumes to be registered often contain the same anatomic structures of the patient but are scanned at different times or in different sessions. Therefore, it is assumed that the two unregistered medical images contain almost the same scene, so the red circle in Figure

Simulated test result of partial occlusion. (a) Shifted T2 MRI image which was generated by synthetically shifting the content of rotated T2 MRI. (b) Magnitude weighted orientation histogram of shifted T2 MRI image. (c) shows the values of histogram matching. The similarity metric is tested by L1 and L2 norm, respectively. The minimum value of the cost function T corresponds to

Since the gradient orientations in the neighborhood of a straight pattern have (on average) opposite directions, the orientation histogram appears symmetric over 0 to 360°: for straight patterns, about half the gradient vectors point opposite (180° apart) to the other half. Figure

Local orientations of patterns in T1 MRI (Figure

The major difference in image registration with and without the proposed method lies in the number of iterations (efficiency) and the possibility of finding the global minimum of the optimization (robustness), rather than in accuracy. Thus, reducing the number of optimization iterations (computation time) is one advantage of the proposed method. More importantly, increasing the possibility of finding the global minimum of the registration optimization (robustness) is another. Accuracy is certainly known to be the most important property in image registration, but robustness and efficiency are very important as well. Modern computers with more powerful hardware may make the time reduction less significant; however, the success rate of the registration (robustness) cannot be improved by more powerful computers. Experiments have demonstrated that the proposed method makes the registration more reliable during iterative optimization and reduces the chance of registration failure.

This application is expected to straightforwardly extend to 3D volumetric images, in which the local orientation is estimated based on the smoothed structure tensor [
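The structure-tensor variant mentioned above can be illustrated in 2D; the 3D case replaces the 2x2 tensor with a 3x3 tensor per voxel. This is a hedged sketch of the standard construction, not the authors' implementation: a plain Gaussian blur stands in for whatever smoothing kernel the cited work uses.

```python
import numpy as np

def structure_tensor_orientation(image, sigma=2.0):
    """Local orientation from the smoothed structure tensor (2-D sketch).

    Each pixel's orientation is taken from the dominant eigenvector of the
    Gaussian-smoothed outer product of the gradient. The result is defined
    modulo 180 degrees, which sidesteps the sign ambiguity of straight
    patterns discussed in the text.
    """
    img = np.asarray(image, dtype=np.float64)
    gy, gx = np.gradient(img)

    def smooth(a):
        # Separable Gaussian blur (zero-padded); scipy.ndimage.gaussian_filter
        # would be the usual choice in practice.
        k = int(6 * sigma) | 1
        x = np.arange(k) - k // 2
        g = np.exp(-x**2 / (2 * sigma**2))
        g /= g.sum()
        a = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, a)
        return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, a)

    Jxx, Jxy, Jyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)

    # Orientation of the dominant eigenvector via the double-angle formula.
    theta = 0.5 * np.degrees(np.arctan2(2.0 * Jxy, Jxx - Jyy)) % 180.0
    return theta
```

For a horizontal intensity ramp, the tensor is diagonal with Jxx dominant, so the recovered orientation is 0° everywhere.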

In this work, a very simple and robust method was proposed to compute the rotational difference between two multimodality images or very dissimilar monomodality images. The experimental comparison showed the proposed method to be superior to the existing edge-map based method. Experimental results have demonstrated that the orientation detection between images is reliable and robust to considerable deformations, slight scale differences, and dissimilar image contents. They have also demonstrated that incorporating this method into image registration both enhances the robustness of the registration and significantly speeds up the registration calculation. It is worth noting that the proposed orientation detection is applicable to image registration with both rigid and nonrigid transformations, which has very broad applications in medical image registration and in general image registration.

The authors declare that there is no conflict of interests regarding the publication of this paper.

The authors wish to sincerely thank the anonymous reviewers for their insightful and helpful comments on the original manuscript. This work is supported by grants from the National Natural Science Foundation of China (NSFC: 61302171, NSFC: U1301258) and the China Postdoctoral Science Foundation (2013M530740), in part by a grant from the National Natural Science Foundation of China (NSFC: 81171402), the NSFC Joint Research Fund for Overseas Chinese, Hong Kong, and Macao Young Scholars (30928030), the National Basic Research Program 973 (2010CB732606) of the Ministry of Science and Technology of China, and the Guangdong Innovative Research Team Program (no. 2011S013) of China.