Image matching is important for vision-based navigation. However, most image matching approaches do not account for real-world degradations such as image blur, so their performance often degrades substantially. Recent methods address this problem with a two-stage framework that first deblurs the image and then performs matching; this is effective but depends heavily on the quality of the deblurring step. An emerging way to resolve this dilemma is to perform image deblurring and matching jointly, using a sparse representation prior to exploit the correlation between the two tasks. However, existing joint approaches obtain the sparse representation prior in the original pixel space, which does not adequately consider the influence of image blurring and may therefore yield an inaccurate estimate of the prior. Fortunately, we can extract blur-invariant pseudo-Zernike moments from images and obtain a reliable sparse representation prior in the blurred invariant space. Motivated by this observation, we propose a joint image deblurring and matching method with blurred invariant-based sparse representation prior (JDM-BISR), which obtains the sparse representation prior in the robust blurred invariant space rather than the original pixel space and thus effectively improves both the quality of image deblurring and the accuracy of image matching. Moreover, since the dimension of the pseudo-Zernike moment feature is much lower than that of the original image feature, our model also increases computational efficiency. Extensive experimental results demonstrate that the proposed method performs favorably against state-of-the-art blurred image matching approaches.
Image matching has been an active research area in computer vision, underpinning applications such as image mosaicing [
The pseudo-Zernike moment blur invariant is derived from the pseudo-Zernike moments of the blurred image; it is invariant to convolution with a circularly symmetric point spread function. Thus, it can effectively alleviate the influence of image blurring and improve the accuracy of the sparse representation coefficients.
Motivated by the above analysis, we propose a joint image deblurring and matching method with blurred invariant-based sparse representation prior (JDM-BISR). The framework of our JDM-BISR is shown in Figure
The framework of our joint image deblurring and matching method with blurred invariant-based sparse representation prior. Given a blurry real-time image, JDM-BISR iteratively recovers the clear image and locates the correct position of the blurry image in the reference image. The method outputs the recovered image and the matched position in the reference image.
The main contributions of this paper are as follows. We propose a joint image deblurring and matching method with a blurred invariant-based sparse representation prior to address the problem of blurred image matching. We extract blur-invariant pseudo-Zernike moments from images and obtain the sparse representation coefficients in the blurred invariant space, which alleviates the influence of image blurring and improves the reliability of the sparse representation prior.
The remainder of the paper is organized as follows. We review related work on pseudo-Zernike blur invariants and image matching in Section
In this section, we first introduce the definition of the pseudo-Zernike blur invariants utilized in this paper and then review image matching methods.
Pseudo-Zernike blur invariants are built on orthogonal pseudo-Zernike moments; they apply to blur point spread functions with circular symmetry and offer both blur invariance and robustness to noise. Computing them requires first computing the pseudo-Zernike moments and then generating invariants of different orders iteratively. Specifically, for a polar coordinate image
Assuming
According to [
Generally speaking, the blurred image
According to [
By substituting equations (
According to the above insights, Dai et al. [
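As a rough illustration of the quantities involved (the paper's exact invariant construction is not reproduced here), pseudo-Zernike moments of an image mapped onto the unit disk can be approximated as follows; `pzm_radial` and `pseudo_zernike_moment` are hypothetical helper names, and the disk integral is discretized naively:

```python
import numpy as np
from math import factorial

def pzm_radial(n, l, r):
    """Pseudo-Zernike radial polynomial R_{n,l}(r)."""
    l = abs(l)
    out = np.zeros_like(r, dtype=float)
    for s in range(n - l + 1):
        c = ((-1) ** s * factorial(2 * n + 1 - s)
             / (factorial(s) * factorial(n - l - s) * factorial(n + l + 1 - s)))
        out += c * r ** (n - s)
    return out

def pseudo_zernike_moment(img, n, l):
    """Discrete approximation of the moment Z_{n,l} of a square
    grayscale image mapped onto the unit disk."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xn = (2.0 * xx - (w - 1)) / (w - 1)      # map columns to [-1, 1]
    yn = (2.0 * yy - (h - 1)) / (h - 1)      # map rows to [-1, 1]
    r = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    mask = r <= 1.0                          # keep only the unit disk
    basis = pzm_radial(n, l, r) * np.exp(-1j * l * theta)
    area = (2.0 / (w - 1)) * (2.0 / (h - 1))  # pixel area in normalized coords
    return (n + 1) / np.pi * np.sum(img[mask] * basis[mask]) * area
```

For a circularly symmetric image, moments with nonzero angular order vanish, which is the symmetry that the blur invariants exploit for circularly symmetric point spread functions.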
Image matching has been intensively studied over the past decade due to its crucial role in computer vision. Traditional image matching methods have been classified into two classes [
An intuitive idea to solve this problem is to first resort to image restoration [
However, they both obtained the sparse representation coefficients in the original pixel space, which does not adequately consider the influence of image blurring and thus leads to an inaccurate estimate of the sparse representation prior. In this paper, we obtain the sparse representation coefficients in the blurred invariant space rather than the original pixel space, improving the accuracy of the sparse representation prior and thereby facilitating the subsequent deblurring and matching tasks.
In this section, we will present our JDM-BISR model for blurred image matching. For completeness, we first give a brief overview of JRM-DSR.
The JRM-DSR method aims to solve the problem of blurred image matching by fully exploiting the correlation between restoration and matching. Given the blurred input image
The basic idea of the JRM-DSR is that the blurred image, if correctly recovered, should be represented as a sparse linear combination of the dictionary. Meanwhile, a better restored image can lead to more accurate representation coefficients, which in turn can also improve the quality of image restoration. The JRM-DSR method iteratively recovers the input image by seeking the sparsest representation, thus correcting the initial mismatch and improving the confidence of image matching.
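To make the sparse-coding step concrete, here is a minimal sketch using ISTA for the usual ℓ1-regularized least-squares formulation; the function and parameter names are illustrative and not JRM-DSR's actual implementation:

```python
import numpy as np

def ista(D, y, lam=0.05, iters=500):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by iterative
    soft-thresholding (ISTA)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz const of gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a + step * (D.T @ (y - D @ a))   # gradient step on the data term
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return a
```

In a JRM-DSR-style pipeline, the columns of `D` would hold candidate reference patches and the recovered real-time image would play the role of `y`.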
However, in real applications some blur always remains in the recovered image; thus, the sparse representation coefficients obtained in the pixel space may not accurately reflect the similarity between the real-time image and the reference image. To overcome this problem and improve matching performance, we next propose a joint image deblurring and matching method that obtains the sparse representation prior in a blurred invariant space rather than the original pixel space.
In this section, we compute the sparse representation coefficients in the blurred invariant space and propose a joint image deblurring and matching method with blurred invariant-based sparse representation prior (JDM-BISR). The key idea of JDM-BISR is to obtain the sparse representation prior in the blurred invariant space rather than the original pixel space. The JRM-DSR approach achieves good performance by obtaining the sparse representation prior in the original pixel space. However, in practical applications the restored image often retains some blur, so the sparse representation coefficients obtained in the pixel space may not accurately reflect the similarity between the real-time image and the reference image. It is well known that the blurred invariant [
In this section, we adopt the alternating minimization algorithm [
Firstly, according to reference [ ], given the restored image, we update the blur kernel. To solve the resulting equation, we introduce an auxiliary variable; with the blur kernel obtained, the solution of the above problem follows. Secondly, we fix the blur kernel and update the recovered image and the sparse coefficients.
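As a simplified illustration of one such alternating step, the image update with the blur kernel held fixed admits a closed-form Fourier-domain solution if one substitutes a plain ℓ2 (Tikhonov) prior for the priors used in the paper (assuming circular boundary conditions; `deconv_l2` is a hypothetical helper, not the authors' solver):

```python
import numpy as np

def deconv_l2(y, k, gamma=1e-3):
    """argmin_x ||k (*) x - y||^2 + gamma*||x||^2 with circular convolution,
    solved exactly in the Fourier domain (a Wiener-style filter)."""
    K = np.fft.fft2(k, s=y.shape)                  # kernel spectrum, zero-padded
    Y = np.fft.fft2(y)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + gamma)  # regularized inverse filter
    return np.real(np.fft.ifft2(X))
```

The regularizer `gamma` keeps the division stable at frequencies where the kernel spectrum is close to zero, which is exactly where naive inverse filtering amplifies noise.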
Since the sparse representation coefficients obtained by solving the above equation in the original pixel space are inaccurate, our method instead imposes the sparse representation prior in the robust blurred invariant space as follows:
More specifically, we extract the pseudo-Zernike moment blur invariants
The optimization alternates among the following steps:
Updating the blur kernel.
Updating the recovered image.
Updating the sparse coefficients.
Predicting the matching position.
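The final step can be sketched as follows, assuming each dictionary atom corresponds to a candidate matching position; picking the atom with the largest coefficient magnitude and reporting its share of the total magnitude as a confidence is one simple rule, illustrative rather than the paper's exact criterion:

```python
import numpy as np

def predict_position(alpha, positions):
    """Pick the candidate position whose dictionary atom receives the
    largest sparse-coefficient magnitude; report a simple confidence
    as that magnitude's share of the total coefficient mass."""
    mag = np.abs(alpha)
    best = int(np.argmax(mag))
    confidence = float(mag[best] / (mag.sum() + 1e-12))
    return positions[best], confidence
```

A concentrated coefficient vector (one dominant atom) then yields high confidence, while a diffuse one signals an ambiguous match.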
In this section, we conduct extensive experiments on six aerial images to demonstrate the effectiveness of the proposed JDM-BISR method. In the experiments, we set the size of the reference image as
We empirically set the parameters
Firstly, we illustrate the proposed JDM-BISR method with a simple example in Figure
An example of our JDM-BISR approach. For each iteration, the small image represents the restored images and the big image gives the matching results, where the red cross (
Image deblurring and matching results of the example, where the standard deviation of the Gaussian blur kernel is 3. (a) PD (matching). (b) Confidence (matching). (c) PSNR (deblurring). (d) SSIM (deblurring).
In this section, we analyze the effectiveness of the sparse representation prior in the blurred invariant space. Specifically, we compare two sparse representation-based image matching methods on the above six aerial images: one obtains the sparse representation in the original pixel space (SR-PIXEL) [
The matching results of the above two methods are listed in Tables
Image matching results of the SR-PIXEL method.
 | PD | PD | PD | PD | PD |
---|---|---|---|---|---|
1 | | | | | |
2 | 60.33 | 93.00 | 97.67 | 98.00 | 98.00 |
3 | 16.67 | 48.83 | 74.50 | 81.50 | 83.83 |
4 | 6.00 | 19.83 | 38.00 | 50.50 | 58.67 |
5 | 0.83 | 6.33 | 15.33 | 25.17 | 31.67 |
Image matching results of the SR-BI method.
 | PD | PD | PD | PD | PD |
---|---|---|---|---|---|
1 | | | | | |
2 | 68.33 | 94.17 | 98.00 | 98.17 | 100.00 |
3 | 26.50 | 52.67 | 81.33 | 85.00 | 89.83 |
4 | 8.17 | 25.00 | 42.83 | 58.17 | 64.67 |
5 | 5.83 | 10.50 | 23.17 | 35.00 | 42.17 |
In this section, we conduct experiments on joint image deblurring and matching under different degradation settings. In our JDM-BISR algorithm, image deblurring and matching are tightly coupled. Thus, we present the results for image matching and deblurring separately. In addition, we also give a comparison of matching speed.
Tables
The matching accuracy of different methods, where the standard deviation of Gaussian blur kernel is 3.
Method | PD | PD | PD | PD |
---|---|---|---|---|
NCC | 61.33 | 65.50 | 66.33 | 66.50 |
SRC | 48.83 | 74.50 | 81.50 | 83.83 |
DNCC | 10.33 | 24.17 | 41.83 | 55.67 |
JRM-DSR | 70.00 | 86.33 | 90.33 | 90.83 |
JDM-BISR | | | | |
The matching accuracy of different methods, where the standard deviation of Gaussian blur kernel is 4.
Method | PD | PD | PD | PD |
---|---|---|---|---|
NCC | 36.33 | 44.50 | 46.33 | 47.67 |
SRC | 19.83 | 38.00 | 50.50 | 58.66 |
DNCC | 4.00 | 7.83 | 13.17 | 19.67 |
JRM-DSR | 35.33 | 56.33 | 67.17 | 71.00 |
JDM-BISR | | | | |
To visually demonstrate the effectiveness of the proposed JDM-BISR method, we choose a blurry image and its corresponding reference image as an illustrative example, where the standard deviation of the Gaussian blur kernel is set to 3. Figure
Visualization of image matching and deblurring results. The small image represents the restored images and the big image gives the matching results, where the red rectangle represents the ground truth of the matching position, the red cross (
For image deblurring, we randomly select 600 blurry images for each blur kernel size to verify the effectiveness of image deblurring, where the standard deviation of the Gaussian blur kernel ranges from 1 to 5. We then use PSNR and SSIM to compare the deblurring performance of our JDM-BISR method with that of the JRM-DSR method. Table
Image deblurring results comparison in terms of PSNR.
Method | σ = 1 | σ = 2 | σ = 3 | σ = 4 | σ = 5 |
---|---|---|---|---|---|
JRM-DSR | 36.36 | 30.15 | 23.16 | 20.43 | 18.74 |
JDM-BISR | | | | | |
Image deblurring results comparison in terms of SSIM.
Method | σ = 1 | σ = 2 | σ = 3 | σ = 4 | σ = 5 |
---|---|---|---|---|---|
JRM-DSR | 0.954 | 0.883 | 0.651 | 0.454 | 0.331 |
JDM-BISR | | | | | |
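The PSNR figures above follow the standard peak signal-to-noise ratio definition, which can be computed as below (SSIM is omitted for brevity; this is a generic implementation, not the authors' evaluation script):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit images one would pass `peak=255.0`; higher PSNR indicates a restored image closer to the ground truth.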
In practical applications, we must consider not only matching accuracy but also matching speed. Therefore, we compare the computing time of the JRM-DSR and JDM-BISR methods; the experimental results are listed in Table
The computing time of JRM-DSR and JDM-BISR methods.
Method | JRM-DSR | JDM-BISR |
---|---|---|
Dimension of vector | 2500 | 50 |
Computing time (s) | 43.65 | |
In this section, we analyze the influence of blur kernel size and scale variation on image matching.
To verify the robustness of our method to the kernel size, we perform image matching on images with different degrees of blur, in which
Image matching results comparison in terms of the standard deviation of Gaussian blur kernel, where the results are the accuracy for
Method | σ = 1 | σ = 2 | σ = 3 | σ = 4 | σ = 5 |
---|---|---|---|---|---|
NCC | 100.00 | 90.83 | 66.67 | 47.83 | 32.67 |
SRC | 100.00 | 98.00 | 84.16 | 61.66 | 35.33 |
DNCC | 99.50 | 93.67 | 65.33 | 25.67 | 23.33 |
JRM-DSR | 100.00 | 98.83 | 91.00 | 72.00 | 48.50 |
JDM-BISR | | | | | |
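The Gaussian blur kernels used in this robustness study can be generated with a standard normalized helper such as the following (illustrative; the kernel size handling may differ from the authors' setup):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian blur kernel of a given side length and std."""
    ax = np.arange(size) - (size - 1) / 2.0   # coordinates centered at 0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                        # normalize so blur preserves brightness
```

Convolving a sharp image with such a kernel (e.g. for σ from 1 to 5) produces the increasingly degraded inputs evaluated here.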
To verify the robustness of our method to scale variation, we conduct image matching experiments on blurry input images of different sizes. In the experiment, we set the size of the blurry input image as
Image matching results comparison in terms of scale variation, where the size of the blurry input image is
Method | PD | PD | PD | PD | PD |
---|---|---|---|---|---|
NCC | 48.33 | 53.33 | 54.50 | 55.33 | 55.33 |
SRC | 34.83 | 58.17 | 65.67 | 67.67 | 68.00 |
DNCC | 9.00 | 21.50 | 36.00 | 48.50 | 57.17 |
JRM-DSR | 64.17 | 75.33 | 77.17 | 78.00 | 78.17 |
JDM-BISR | | | | | |
Image matching results comparison in terms of scale variation, where the size of the blurry input image is
Method | PD | PD | PD | PD | PD |
---|---|---|---|---|---|
NCC | 77.83 | 80.83 | 82.33 | 82.33 | 82.67 |
SRC | 59.33 | 86.67 | 95.33 | 96.33 | 96.67 |
DNCC | 11.83 | 27.33 | 48.00 | 66.33 | 80.00 |
JRM-DSR | | 98.00 | | 99.00 | 99.00 |
JDM-BISR | 92.67 | | 98.33 | | |
In this paper, we have proposed a joint image deblurring and matching method with blurred invariant-based sparse representation prior (JDM-BISR). Our method obtains the sparse representation prior in the robust blurred invariant space rather than the original pixel space, improving the accuracy of the prior and thereby facilitating the subsequent image deblurring and matching tasks. Moreover, since the dimension of the pseudo-Zernike moment feature is much lower than that of the original image feature, our model also increases computational efficiency. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art blurred image matching approaches in terms of both deblurring and matching.
The data used to support the findings of this study are available from the corresponding author upon request.
The authors declare no conflicts of interest.
This study was supported by the National Natural Science Foundation of China (nos. 61433007 and 61901184).