Three-dimensional (3D) structured light scanning systems are widely used in reverse engineering, quality inspection, and related fields, and camera calibration is the key to scanning precision. Currently, a finely machined 2D (two-dimensional) or 3D calibration reference object is usually required for high calibration precision, which is difficult to operate and expensive. In this paper, a novel calibration method is proposed that uses only a scale bar and some artificial coded targets placed randomly in the measuring volume. The method is based on hierarchical self-calibration and bundle adjustment. Initial intrinsic parameters are obtained from the images; initial extrinsic parameters are estimated in projective space by factorization and then upgraded to Euclidean space using the orthogonality of the rotation matrix and the rank-3 constraint on the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and reaches the same precision level as calibration with a delicate artificial reference object, while the hardware cost is much lower than that of current calibration methods used in 3D structured light scanning systems.
The binocular structured light scanning system (BSLSS) is widely used in the fields of reverse engineering [
2D calibration board patterns in common use.
Calibration board in Atos system
Calibration board in Halcon
Calibration board in OpenCV
Like the traditional methods proposed by Zhang and Tsai, the method in this paper also needs to build relationships between space points and their projections in the image, so the space points and their projections must be defined first. In this paper, patterns with unique ID numbers are designed to serve as space object points. Each pattern is a white circle surrounded by circular ring sections, as shown in Figure
Patterns and scale bar used in this paper.
Artificial patterns
Scale bar
This section describes the binocular vision model and defines the calibration parameters. Figure
Pinhole imaging and binocular stereo vision model.
Pinhole imaging model
Binocular stereo vision model
The binocular vision geometric model is illustrated in Figure
In the above two sets of equations, four equations are independent, so the 3D coordinates of a space object point can be solved with a least-squares estimator. We can assume that the world coordinate system and the left camera coordinate system coincide, which means
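The least-squares triangulation described above can be sketched as follows. This is a minimal NumPy illustration rather than the paper's exact implementation; `P1` and `P2` denote the 3×4 projection matrices of the left and right cameras, and `x1`, `x2` are the matched image points.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: each image point contributes two
    independent equations; the 3D point is the least-squares solution
    of the resulting homogeneous system A X = 0."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector associated with the
    # smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With noise-free correspondences this recovers the space point exactly; with real detections it gives the least-squares estimate mentioned above.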
Our aim is to obtain the intrinsic parameters of the two cameras used in the BSLSS and their relative pose; the proposed algorithm flow is illustrated in Figure
Algorithm flow.
Left and right cameras are calibrated, each under its own world coordinate system
The intrinsic parameters encompass the focal length, image format, and principal point; these are intrinsic properties of the camera and do not change, so they can be computed in advance. Here, we provide two methods to obtain the intrinsic parameters. One uses a planar panel: when a panel is imaged, a homography matrix links the image plane and the space plane, and the intrinsic parameters can be estimated from the homography matrices. The principle of this method can be found in the paper [
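As an illustration of how the intrinsics follow from panel homographies, here is a minimal NumPy sketch of the closed-form solution from Zhang's plane-based calibration. The homography list `Hs` and all variable names are ours; real data would need the noise handling and refinement that this sketch omits.

```python
import numpy as np

def intrinsics_from_homographies(Hs):
    """Estimate K from >= 3 board-to-image homographies (Zhang's closed
    form). Each H = s*K*[r1 r2 t] yields two linear constraints on the
    symmetric matrix B = K^-T K^-1, stacked into V b = 0."""
    def v(H, i, j):
        # Constraint row for h_i^T B h_j, with b = (B11,B12,B22,B13,B23,B33).
        return np.array([
            H[0, i] * H[0, j],
            H[0, i] * H[1, j] + H[1, i] * H[0, j],
            H[1, i] * H[1, j],
            H[2, i] * H[0, j] + H[0, i] * H[2, j],
            H[2, i] * H[1, j] + H[1, i] * H[2, j],
            H[2, i] * H[2, j],
        ])
    V = []
    for H in Hs:
        V.append(v(H, 0, 1))               # r1 orthogonal to r2
        V.append(v(H, 0, 0) - v(H, 1, 1))  # |r1| = |r2|
    _, _, Vt = np.linalg.svd(np.array(V))
    b = Vt[-1]
    if b[0] < 0:                           # B is positive definite up to scale
        b = -b
    B11, B12, B22, B13, B23, B33 = b
    # Closed-form recovery of the intrinsic entries.
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12**2)
    lam = B33 - (B13**2 + v0 * (B12 * B13 - B11 * B23)) / B11
    fx = np.sqrt(lam / B11)
    fy = np.sqrt(lam * B11 / (B11 * B22 - B12**2))
    skew = -B12 * fx**2 * fy / lam
    u0 = skew * v0 / fy - B13 * fx**2 / lam
    return np.array([[fx, skew, u0], [0, fy, v0], [0, 0, 1]])
```

Each view contributes two constraints, so at least three panel orientations (with distinct normals) are needed to determine all five intrinsic parameters.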
The process of obtaining the initial values of the camera extrinsic parameters can be divided into three steps. The first step is to obtain the camera motion and scene shape matrices in projective space by factorization; the second step is to obtain the camera motion and scene shape matrices in Euclidean space. The third step is to obtain the initial values of the camera extrinsic parameters by decomposing the camera motion matrix.
In the first step, supposing there are
Through SVD decomposition, we can get
The second step is that, in Euclidean space, the projective matrix of one image can be expressed as
The method to solve the fourth column of
In the above derivation, we make no assumptions on the intrinsic parameters, and the rank-3 property of the dual absolute quadric is used as a constraint, which guarantees the robustness of the solving process.
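As a concrete illustration of the factorization used in the first step, the core operation is a rank-4 truncation of the measurement matrix via SVD. This is a minimal sketch under the assumption that the projective depths have already been initialized (commonly to 1); `W`, `M`, and `S` are our names for the measurement, motion, and shape matrices.

```python
import numpy as np

def projective_factorization(W):
    """One factorization step: given a (3m x n) measurement matrix of
    scaled homogeneous image points from m views of n points, split it
    into camera motion M (3m x 4) and scene shape S (4 x n) by keeping
    only the four dominant singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :4] * s[:4]   # motion: stacked 3x4 projection matrices
    S = Vt[:4, :]          # shape: homogeneous scene points
    return M, S
```

In practice the projective depths are re-estimated from the recovered factors and the factorization is repeated until convergence, after which the Euclidean upgrade is performed with the rank-3 dual absolute quadric constraint described above.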
The third step is that if
The images captured by a digital camera do not exactly satisfy the pinhole camera model, so lens distortion must be considered. In this section, we consider only the first two terms of radial distortion. The coefficients of the radial distortion can be solved through the following:
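The two-term radial model referred to here is commonly written as x_d = x_u (1 + k1·r² + k2·r⁴) on normalized image coordinates. A minimal sketch of applying the model and inverting it by fixed-point iteration (the inverse has no closed form); the function names and iteration count are our choices:

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply the two-term radial distortion model to normalized points."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted point by the radial factor evaluated at the current estimate."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy**2, axis=-1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2**2)
    return xy
```

For the moderate distortion typical of machine-vision lenses, the iteration converges to sub-pixel accuracy in a handful of steps.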
In the above Sections
Only the parameters of a single camera are solved in the analysis above, and the two cameras are calibrated in different world coordinate systems. In this section, we give the algorithm for unifying the two camera coordinate systems into one global coordinate system.
Assuming
The relationships between
From (
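The unification step boils down to expressing the right camera in the left camera's frame. A minimal sketch, assuming each calibration yields world-to-camera extrinsics (R, t) with x_c = R·X + t; the function name is ours:

```python
import numpy as np

def relative_pose(Rl, tl, Rr, tr):
    """Unify the two single-camera calibrations into one frame: with
    x_c = R_c X + t_c for each camera, eliminating the world point X
    gives x_r = R x_l + t, where R = Rr Rl^T and t = tr - R tl."""
    R = Rr @ Rl.T
    t = tr - R @ tl
    return R, t
```

The returned (R, t) is the rotation and translation between the two cameras used as the stereo extrinsics of the BSLSS.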
The experiments for testing the proposed method are presented in this section. The binocular structured light system, which is shown in Figure
Hardware of binocular system.
Artificial targets captured by the left and right cameras.
Method proposed by Zhang [
Panel board in Zhang’s method captured by the left and right cameras.
The panel board used in Zhang’s algorithm is also used as a standard object. When calibration is completed, we use the calibration parameters to compute the 3D distance between two circle centers. It is assumed that the
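The error metric of the evaluation can be stated simply: the deviation of the reconstructed center-to-center distance from the known reference length. A one-function sketch (the nominal length shown in the usage is an illustrative value, not the paper's):

```python
import numpy as np

def length_error(P1, P2, nominal):
    """Calibration error in mm: |measured 3D distance between the two
    reconstructed circle centers - known reference length|."""
    return abs(np.linalg.norm(np.asarray(P1, float) - np.asarray(P2, float)) - nominal)
```

For example, `length_error([0, 0, 0], [0, 0, 100.02], 100.0)` reports an error of 0.02 mm, the scale on which the entries of the table below are given.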
To avoid random error, ten groups of experiments are carried out, and the results are described in Table
Calibration error (mm).
| Group | Proposed method (7 images) | Proposed method (14 images) | Proposed method (20 images) | Zhang's method (7 images) | Zhang's method (14 images) | Zhang's method (20 images) |
|---|---|---|---|---|---|---|
| 1 | 0.050 | 0.021 | 0.011 | 0.041 | 0.034 | 0.012 |
| 2 | 0.047 | 0.032 | 0.027 | 0.025 | 0.029 | 0.039 |
| 3 | 0.061 | 0.011 | 0.034 | 0.037 | 0.028 | 0.019 |
| 4 | 0.103 | 0.028 | 0.018 | 0.031 | 0.051 | 0.027 |
| 5 | 0.032 | 0.036 | 0.012 | 0.028 | 0.047 | 0.014 |
| 6 | 0.098 | 0.040 | 0.019 | 0.029 | 0.013 | 0.009 |
| 7 | 0.057 | 0.035 | 0.017 | 0.037 | 0.018 | 0.015 |
| 8 | 0.046 | 0.022 | 0.020 | 0.035 | 0.024 | 0.020 |
| 9 | 0.039 | 0.023 | 0.025 | 0.041 | 0.015 | 0.034 |
| 10 | 0.049 | 0.039 | 0.020 | 0.022 | 0.022 | 0.009 |
| Average error | 0.058 | 0.0287 | 0.0203 | 0.0326 | 0.0281 | 0.0198 |
Calibration results of Zhang’s method and the proposed method (image number = 10).
| | Zhang’s method | | | Method in this paper | | |
|---|---|---|---|---|---|---|
| Intrinsic parameters | | | | | | |
| Left camera | | | | | | |
| Right camera | | | | | | |
| Extrinsic parameters | | | | | | |
| Rotation matrix | 0.9493 | −0.0047 | −0.3144 | 0.9494 | −0.0044 | −0.3139 |
| | 0.0069 | 1.0000 | 0.0062 | 0.0061 | 1.0000 | 0.0042 |
| | 0.3144 | −0.0081 | 0.9493 | 0.3139 | −0.0059 | 0.9494 |
| Translation vector | 202.56768 | 0.272812 | 15.384653 | 201.8624 | 1.3684 | 15.5949 |
| Distortion coefficients | | | | | | |
| Left camera | | | | | | |
| Right camera | | | | | | |
As current camera calibration methods for the BSLSS (binocular structured light scanning system) have high cost, we put forward in this paper a novel calibration method that does not rely on a complex calibration reference object; the hardware consists only of some small artificial targets and a scale bar. Because this hardware requires no strict industrial machining, its cost is lower than that of a traditional, elaborately made 2D or 3D object. Besides, the hardware is very small, which makes it more flexible than a 2D or 3D calibration reference object. Real experimental results show that the calibration precision equals that of the traditional method when enough calibration images are used.
The authors declare that there is no conflict of interests regarding the publication of this paper.