
Depth measurement is a challenging problem in computer vision research. In this study, we first design a new grid pattern and develop a sequence coding and decoding algorithm to process it. Second, we propose a linear fitting algorithm to derive the linear relationship between object depth and pixel shift. Third, we obtain the depth information of an object based on this linear relationship. Moreover, 3D reconstruction is implemented based on the Delaunay triangulation algorithm. Finally, we utilize the regularity of the error curves to correct the system errors and improve the measurement accuracy. The experimental results show that the accuracy of depth measurement is related to the step length of the moving object.

Obtaining depth information from a scene is one of the most crucial problems in computer vision. Once the depth information is obtained, the resulting data can be applied in various fields, such as 3D reconstruction, remote sensing, vision measurement, and industrial automation [

Time of flight (TOF) is a high-accuracy method for depth measurement. TOF cameras emit modulated near-infrared light to illuminate an object [

Stereo vision simulates human vision by using two or more cameras to capture 2D images from different angles [

The structured light method has gained increasing attention due to its high speed and depth measurement accuracy [

Visible-light interference and projector calibration are two key issues in the structured light method. In this method, the projectors are difficult to calibrate because they cannot capture images actively [

To address these problems, we first use infrared structured light as the active light source because the wavelength of infrared light differs from that of visible light. Second, we design a simple and efficient structured light pattern together with the corresponding coding and decoding algorithm. Third, we treat the camera and laser emitter as a single unit and use a linear fitting algorithm to determine the system parameters, rather than calibrating the camera and the projector separately.

According to the type of projected light pattern, structured light methods can be classified into single-point pattern, single-line pattern, and coded structured light. The single-point pattern method obtains depth information by point-scanning the entire image; thus, the computational complexity increases dramatically with the size of the measured object. Nevertheless, this method demonstrated the practicability of depth measurement based on structured light.

Compared with the single-point pattern structured light method, the single-line pattern structured light method can obtain depth information by scanning along only one dimension [

The coded pattern structured light method has been proposed to reduce measurement time, as it can obtain the depth data in a single shot. The method has also been studied extensively in recent years because of its high accuracy. Cheng et al. proposed an arrangement coding method [

Kawasaki et al. proposed a grid pattern to obtain the depth information of the object [

Koninckx and van Gool [

Our method, compared with the aforementioned ones, proceeds as follows. First, we design a new grid pattern and propose a sequence coding and decoding algorithm for it. Then, we propose a linear fitting algorithm for the system parameters to construct the linear relationship between object depth and pixel shift. Finally, the depth information of the object is obtained from this linear relationship. Moreover, 3D reconstruction is conducted based on the Delaunay triangulation algorithm [

The external appearance of the camera system and its hardware structure are illustrated in Figures

The laser emits a point matrix pattern projected onto the scene. The CCD camera captures the pattern and correlates it against a reference pattern.

The hardware structure of the system.

When the pattern is projected onto an object whose distance to the sensor differs from that of the reference plane, the position of each speckle shifts along the baseline between the laser and the camera. These shifts are measured for each speckle by a simple procedure, from which a depth image can be calculated using the equation described below.

Figure

Schematic representation of pixel offset relation.

Given the similarity of triangles, we have

Equation (
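Since the equation itself is not reproduced in this excerpt, a standard similar-triangles derivation for a reference-plane system may help fix ideas. The symbols here are assumptions, not taken from the paper: $b$ is the baseline between laser and camera, $f$ the focal length in pixels, $Z_0$ the reference-plane depth, $Z$ the object depth, and $d$ the pixel shift.

```latex
\frac{1}{Z}-\frac{1}{Z_0}=\frac{d}{f\,b}
\quad\Longrightarrow\quad
Z_0 - Z = \frac{Z\,Z_0}{f\,b}\,d \;\approx\; \frac{Z_0^{2}}{f\,b}\,d
\quad\text{for } Z \approx Z_0 .
```

Under this approximation the depth change is linear in the pixel shift, which is consistent with the linear relationship between object depth and pixel shift that the paper fits from data.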

The algorithm flow of depth measurement is illustrated in Figure

Algorithm flow of depth measurement.

The light intensity distribution of the speckles in the raw image is uneven, and the image is dark with poor contrast. To address these problems, we apply an image preprocessing pipeline consisting of denoising, enhancement, and binarization.
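The preprocessing pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: the box-filter denoising, min-max contrast stretch, and midpoint threshold are assumed stand-ins for the unspecified denoising, enhancement, and binarization steps.

```python
import numpy as np

def preprocess(raw, blur=1):
    """Denoise, enhance, and binarize a dark, low-contrast speckle image."""
    img = raw.astype(float)
    # Denoise with a simple box filter (edge padding keeps the shape).
    p = np.pad(img, blur, mode="edge")
    n = 2 * blur + 1
    den = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(n) for dx in range(n)) / n**2
    # Enhance: stretch contrast to the full [0, 255] range.
    enh = (den - den.min()) / (np.ptp(den) + 1e-9) * 255
    # Binarize with a global midpoint threshold.
    return (enh > 127).astype(np.uint8)

# Synthetic dark image with one bright speckle at the center.
raw = np.full((9, 9), 10, dtype=np.uint8)
raw[4, 4] = 60
binary = preprocess(raw)
```

After preprocessing, only the speckle region survives as foreground pixels, which is what the centroid extraction in the next step relies on.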

Once the image binarization is implemented, we can obtain the centroids of the speckles as follows:
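A centroid per speckle can be extracted by labeling connected foreground regions and averaging their pixel coordinates. The sketch below is a hypothetical implementation using 4-connected flood fill; the paper does not specify its labeling scheme.

```python
import numpy as np

def speckle_centroids(binary):
    """Label 4-connected speckles in a binary image; return (x, y) centroids."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    centroids = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                # Flood-fill one speckle, collecting its pixel coordinates.
                stack, pixels = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids

binary = np.zeros((6, 6), dtype=np.uint8)
binary[1:3, 1:3] = 1   # speckle A: 2x2 block, centroid (1.5, 1.5)
binary[4, 4] = 1       # speckle B: single pixel, centroid (4.0, 4.0)
print(speckle_centroids(binary))   # [(1.5, 1.5), (4.0, 4.0)]
```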

We developed a sequence coding algorithm that can help us find the corresponding points between the object image and the reference image. The steps of the algorithm are as follows.

The centroids are sorted according to the values of their abscissas. Centroids that have the same abscissa value are placed in the same column.

The centroids are sorted in each column based on the ordinate values.

The reference and object image are encoded according to the sequence from Steps

The reference image and object image are decoded; each pair of corresponding points shares the same ID.

Then the pixel offset between the object and reference image can be obtained. Figure
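The coding and decoding steps above can be sketched as follows. This is a hypothetical implementation: the column-grouping tolerance `tol` and the sample coordinates are assumptions, not values from the paper.

```python
def encode(centroids, tol=2.0):
    """Assign IDs left to right, top to bottom; points whose abscissas differ
    by less than `tol` pixels are treated as the same column."""
    pts = sorted(centroids)                  # sort by x, then y
    columns, current = [], [pts[0]]
    for p in pts[1:]:
        if abs(p[0] - current[-1][0]) < tol:
            current.append(p)                # same column
        else:
            columns.append(sorted(current, key=lambda q: q[1]))
            current = [p]
    columns.append(sorted(current, key=lambda q: q[1]))
    ordered = [p for col in columns for p in col]
    return {i: p for i, p in enumerate(ordered)}   # ID -> centroid

def pixel_offsets(ref, obj):
    """Decode: corresponding points share an ID; offset is object - reference."""
    return {i: (obj[i][0] - ref[i][0], obj[i][1] - ref[i][1])
            for i in ref if i in obj}

ref = encode([(10, 10), (10, 30), (40, 10), (40, 30)])
obj = encode([(13, 10), (13, 30), (43, 10), (43, 30)])   # shifted 3 px in x
print(pixel_offsets(ref, obj))   # every speckle offset is (3, 0)
```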

The method of encoding. All the speckles are sorted from left to right and from top to bottom. Then each speckle is given an ID.

As mentioned, three system parameters should be determined. We convert (

When the pixel offset between the corresponding points is obtained, we can calculate the object depth based on (
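The depth-from-offset step can be illustrated with an ordinary least-squares line fit. This is a hedged sketch: the model form `z = a*d + b`, the coefficient names, and the calibration numbers below are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Synthetic calibration data: known depths (mm) versus measured pixel offsets.
offsets = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
depths  = np.array([3.0, 5.0, 7.0, 9.0, 11.0])

a, b = np.polyfit(offsets, depths, 1)    # least-squares line: z = a*d + b
depth_of = lambda d: a * d + b           # depth predicted from a new offset

print(round(depth_of(2.5), 3))           # 6.0 (mm) for these synthetic data
```

Once `a` and `b` are fitted, every speckle's pixel offset maps directly to a depth value, replacing separate camera and projector calibration.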

Delaunay triangulation exploits the characteristics of the point cloud. First, a convex hull containing all discrete data points is generated. The convex hull is then used to generate an initial triangle, and the remaining discrete points are added one by one to produce the final triangulation. The 3D reconstruction results are shown in Figure
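As a minimal sketch of this reconstruction step (the four sample points and depth values are assumptions), `scipy.spatial.Delaunay` triangulates the (x, y) locations of the point cloud, and each triangle is then lifted to 3D with its measured depths:

```python
import numpy as np
from scipy.spatial import Delaunay

# Illustrative point cloud: (x, y) locations with a depth value per point.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depths = np.array([3.0, 5.0, 7.0, 9.0])

tri = Delaunay(points)   # 2D Delaunay triangulation of the point locations
# Lift each triangle into 3D by attaching the depth of each vertex.
mesh = [[(points[i][0], points[i][1], depths[i]) for i in simplex]
        for simplex in tri.simplices]

print(len(mesh))         # 2 triangles for the unit square
```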

Using a KT&C CCD camera and a 50 mW infrared laser emitter, we conducted experiments on 10 groups of data. First, we captured a reference image. Then, the object was moved at step length

We calculated the average value of the first 5 groups of the measurement data to eliminate random errors. The parameters

In Figure

The result of linear fitting. The blue crosses describe the relation between

Then we obtain the relation between

The errors of the three groups of correction data. The three error curves are similar in shape. Therefore, system errors exist in addition to random errors.

Errors are classified into random errors and system errors. Random errors have no regularity and can be reduced by increasing the number of measurements. In Figure

The standard error of the mean is calculated and the error fitting curve is obtained.
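This error-correction step can be sketched as follows. All numbers are synthetic, and the linear error model is an assumption matching the linear fit described in the text: the mean error curve is fitted with a line (the systematic part) and subtracted, while the standard error of the mean quantifies the remaining random error.

```python
import numpy as np

depths = np.array([3.0, 5.0, 7.0, 9.0])
errors = np.array([[0.11, 0.16, 0.21, 0.26],   # run 1
                   [0.09, 0.14, 0.19, 0.24],   # run 2
                   [0.10, 0.15, 0.20, 0.25]])  # run 3

mean_err = errors.mean(axis=0)                         # average over runs
sem = errors.std(axis=0, ddof=1) / np.sqrt(len(errors))  # standard error of mean
k, c = np.polyfit(depths, mean_err, 1)                 # linear system-error model
corrected = mean_err - (k * depths + c)                # residual after correction

print(np.allclose(corrected, 0, atol=1e-9))   # True: synthetic errors are exactly linear
```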

The error curve by linear fitting is presented in Figure

The fitting line of errors.

The errors are rectified by the error curve. After correction, the curves in the figure show no regularity, and the errors are reduced.

We use the last two groups of data to test the accuracy of the system. The test result of group 1 is shown in Table

The result of test group 1.

Actual depth (mm) | Calculated depth (mm) | Error (mm)
---|---|---
3 | 3.1187 | 0.1187
5 | 5.1054 | 0.1054
7 | 7.0636 | 0.0636
9 | 8.8737 | 0.1263
11 | 11.1395 | 0.1395
13 | 13.2177 | 0.2177
15 | 15.0734 | 0.0734
17 | 17.0792 | 0.0792
19 | 19.1512 | 0.1512

Using only the absolute error is therefore insufficient to show the degree of accuracy of the system; the step length must also be considered to assess the accuracy objectively. Therefore, we propose the following indicator:
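The indicator's exact formula is not reproduced in this excerpt; a plausible reading, shown here as an assumption, is the absolute error normalized by the step length of the moving object, so that accuracy is judged relative to the motion resolution. The step length of 2.0 mm below is likewise assumed.

```python
def relative_error(actual, measured, step_length):
    """Hypothetical indicator: absolute error as a fraction of the step length."""
    return abs(measured - actual) / step_length

# Example using the first row of test group 1 with an assumed 2 mm step.
print(relative_error(3.0, 3.1187, 2.0))
```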

The experiment results are presented in Figures

The raw image and binary image. (a) Raw image of object, (b) binary image, which has been preprocessed.

Figure

The depth image. The colors of the figure represent the depth. The change from blue to red indicates increasing depth.

The result of 3D reconstruction viewed from two directions.

This paper has described an effective approach to measure the object depth of a surrounding scene in real time. We have designed a grid pattern generated by laser diffraction. This grid pattern performs better than other structured light patterns, and the corresponding points between the reference and object images can be computed from it easily. To replace imaging system calibration, we have proposed a linear fitting algorithm. From the generated depth point cloud, we have reconstructed the 3D scene with the Delaunay triangulation algorithm. Random errors, which have no regularity, can be reduced by increasing the number of measurements. Moreover, we have used the regularity of the error curves to correct the system errors and improve the measurement accuracy. The experimental results show that the depth measurement accuracy is related to the step length of the moving object. The measurement error

If the depth variation of the object is large, the measurement error increases because of the limits of the size and point spacing of the designed grid pattern. To address these problems, we intend to develop an omnidirectional structured light system on a mobile platform in the future.

The authors declare that there is no conflict of interests regarding the publication of this paper.

This research was supported by the National Natural Science Foundation of China under Grant 61273078, in part by the Doctoral Foundation of the Ministry of Education of China under Grant 20110042120030, and in part by the Fundamental Research Funds for the Central Universities of China under Grant 130404012.