Calibration Method for Central Catadioptric Camera Using Multiple Groups of Parallel Lines and Their Properties

This paper presents an approach for calibrating omnidirectional single-viewpoint sensors using the central catadioptric projection properties of parallel lines. Single-viewpoint sensors are widely used in robot navigation and driverless cars; thus, a high degree of calibration accuracy is needed. In the unit viewing sphere model of central catadioptric cameras, a line in three-dimensional space is projected to a great circle, so the projections of a group of parallel lines intersect only at the two endpoints of a diameter of the viewing sphere. Based on this property, when there are multiple groups of parallel lines, a group of orthogonal directions can be determined from the rectangle constructed by two groups of parallel lines in different directions. When there is a single group of parallel lines in space, the diameter and the tangents at its endpoints determine a group of orthogonal directions for the plane containing each great circle. The intrinsic parameters of the camera can then be obtained from the orthogonal vanishing points in the central catadioptric image plane. In addition, an optimization algorithm for line image fitting based on the properties of antipodal points is proposed. Our calibration methods were validated through simulations and real experiments with a catadioptric camera.


Introduction
With the rapid development of computer vision technology, requirements for visual performance have become increasingly stringent. Increasing the field of view of a camera usually improves its visual performance [1][2][3]. A traditional camera has a limited field of view; using mirrors, as originally proposed by Hecht [4], offers an effective way to enlarge it. Such a camera-mirror system is referred to as a catadioptric camera system and can be divided into two types, central and noncentral, depending on the presence of a unique effective viewpoint [5]. There are four principal mirror shapes for central catadioptric cameras: paraboloidal, hyperboloidal, ellipsoidal, and planar. The image captured by a central catadioptric camera can be easily converted into three-dimensional (3D) coordinates of basic feature points using an inverse projection algorithm. This technique is widely used in the field of computer vision.
This paper addresses calibration issues in central catadioptric cameras under the generalized projection model proposed by Geyer and Daniilidis [6], in which the imaging process is equivalent to a two-step mapping via a unit viewing sphere. Previous calibration methods for central catadioptric cameras can be divided into five types: self-calibration [7,8], calibration based on two-dimensional (2D) points [9][10][11], calibration based on 3D points [12], calibration based on lines [13][14][15][16][17][18], and calibration based on spheres [19][20][21]. Li and Zhao [22] calibrated a catadioptric camera by obtaining orthogonal directions and circular points from the image of a single sphere. This demonstrates that the analysis of orthogonal directions under the catadioptric projection model is important and useful.
First, we present a brief review of the methods proposed by others for central catadioptric camera calibration. Deng et al. [11] proposed a simple calibration method for central catadioptric cameras based on a 2D calibration pattern, deriving a nonlinear constraint on the camera intrinsic parameters from the projections of any three collinear points on the viewing sphere. Geyer and Daniilidis [13] proposed a calibration method for a paracatadioptric camera using the images of three lines and demonstrated rectification of an image using at least two groups of parallel lines. Ying and Hu [14] used the projections of lines or spheres to calibrate cameras; they showed that, for calibration, a line image provides three constraints and a sphere image provides two, and the resulting method is nonlinear. At high noise levels, the robustness of these methods degrades as singular configurations are approached.
This paper studies the geometry of the central catadioptric projections of multiple groups of parallel lines. A set of projective properties are described and proved, and these properties support the geometric constructions proposed for calibration. Because this study works with the projection properties of parallel lines in the unit viewing sphere model, the methods apply to any of the mirror-type central catadioptric cameras. The main contributions of this study can be summarized as follows: (I) Two groups of parallel lines in different directions are projected to great circles that intersect at four points. According to the properties of antipodal points, the orthogonal vanishing points can be determined on the image plane to obtain the intrinsic parameters of the camera. (II) A group of parallel lines is projected to a group of great circles on the unit viewing sphere. The diameter of each circle and the tangents at its endpoints determine a group of orthogonal directions for the support plane containing that great circle, from which the intrinsic parameters of the camera are obtained. (III) An optimization algorithm for line image fitting is proposed based on the relationship between antipodal points.
The remainder of this paper is organized as follows. Section 2 presents a review of the unit sphere model for central catadioptric cameras. Section 3 describes two calibration algorithms for central catadioptric cameras that employ the projection properties of parallel lines, as well as an optimization algorithm for line image fitting. Simulations and real data used to verify the effectiveness of the algorithms are discussed in Section 4. Finally, conclusions regarding the two calibration algorithms are presented in Section 5.

Preliminaries
In this section, we briefly review the projection process of points and lines under central catadioptric cameras. Moreover, we define antipodal image points and discuss their properties.
2.1. Central Catadioptric Projection Model. Geyer and Daniilidis [6] proved that the central catadioptric imaging process is equivalent to the following two-step mapping with a unit viewing sphere (Figure 1). First, a point M in 3D space is projected to two points M± on the surface of the unit viewing sphere centered at the focal point O of the reflective mirror.
The unit viewing sphere is called the unit sphere, and O is the center of the projection. Second, with the 3D point O c at the center, M ± is projected to two points m ± on image plane Π, where O c is called the virtual optical center, Π is perpendicular to the line OO c , and their intersection is the principal point p.
As shown in Figure 1, the world coordinate system is aligned with the unit viewing sphere coordinate system O − x_w y_w z_w. Let M = [x, y, z, 1]^T denote the homogeneous coordinates of a point in O − x_w y_w z_w; the projections M± of M on the unit viewing sphere can then be represented as M± = [±x/‖M̃‖, ±y/‖M̃‖, ±z/‖M̃‖, 1]^T, where ‖M̃‖ = √(x^2 + y^2 + z^2) and M̃ denotes the nonhomogeneous coordinates of M. The unit viewing sphere coordinate system O − x_w y_w z_w is translated by ξ units along the −z_w direction to establish the virtual camera coordinate system O_c − x_c y_c z_c. Thus, the rotation matrix R and the translation vector t between them can be represented as R = I and t = [0, 0, ξ]^T, respectively, where I is the 3 × 3 identity matrix and ξ = ‖O − O_c‖ is the mirror parameter. The type of mirror depends on the value of ξ: the mirror is a plane if ξ = 0, an ellipsoid or hyperboloid if 0 < ξ < 1, and a paraboloid if ξ = 1.
Let the intrinsic parameter matrix of the virtual camera be

K = [f_u  s    u_0
     0    f_v  v_0
     0    0    1],

where f_u and f_v represent the focal lengths along the u and v directions, respectively, on the 2D image plane, [u_0, v_0, 1]^T are the homogeneous coordinates of the principal point p, and s is the skew factor. Then, with the unit viewing sphere, the imaging process from the points M± to m± can be described as

λ_1 m_+ = K(M̃_+ + t),  λ_2 m_− = K(M̃_− + t),

where λ_1 and λ_2 are two nonzero scale factors and M̃± = [±x/‖M̃‖, ±y/‖M̃‖, ±z/‖M̃‖]^T denote the nonhomogeneous coordinates of the points M±.
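The two-step mapping just described can be sketched numerically. The following Python fragment is an illustrative implementation of the model only; the intrinsic matrix K, the mirror parameter, and the test point are arbitrary example values rather than values from this paper.

```python
import numpy as np

def catadioptric_project(M, K, xi):
    """Project a 3D point M = (x, y, z) through the unit-sphere model.

    Returns the pair of antipodal image points (m_plus, m_minus) in
    homogeneous pixel coordinates. K is the 3x3 virtual-camera intrinsic
    matrix and xi is the mirror parameter.
    """
    M = np.asarray(M, dtype=float)
    M_sphere = M / np.linalg.norm(M)      # step 1: project onto the unit sphere
    t = np.array([0.0, 0.0, xi])          # sphere centre O -> virtual centre O_c
    images = []
    for s in (+1.0, -1.0):                # the two antipodal sphere points M+/-
        Mc = s * M_sphere + t             # coordinates in the virtual camera frame
        m = K @ Mc                        # step 2: pinhole projection by K
        images.append(m / m[2])           # normalise homogeneous coordinates
    return images

# Arbitrary example intrinsics and point (hypothetical values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
m_plus, m_minus = catadioptric_project([1.0, 2.0, 5.0], K, xi=0.966)
```

For ξ = 0 (planar mirror) the virtual optical center coincides with O and the second step reduces to an ordinary pinhole projection of the sphere points.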
2.2. Antipodal Image Points. For a central catadioptric camera, Wu et al. [16] defined the antipodal image points and their properties as follows: Definition 1. In Figure 1, fM + , M − g is called a pair of antipodal points if they are two endpoints of the diameter of the viewing sphere.
Definition 2. In Figure 1, {m_+, m_−} is called a pair of antipodal image points if they are the images of the two endpoints of a diameter of the viewing sphere.

Proposition 3. In Figure 1, if {m_+, m_−} is a pair of antipodal image points obtained using a central catadioptric camera, then, given the projective contour of the mirror, m_− can be determined from m_+ [16].

Calibrating Catadioptric Cameras Using Projections of Multiple Groups of Parallel Lines
In the unit sphere model, a line in 3D space is projected to a great circle, where the center of the great circle is the center of the viewing sphere; thus, two great circles intersect at two points.
Theorem 4. In Figure 1, the great circles onto which a group of parallel lines in one plane are projected on the unit viewing sphere intersect at only two points: a pair of antipodal points {M_+, M_−}.
Proof. In Figure 1, a group of parallel lines in one plane intersect at a point at infinity. Under the central catadioptric projection model, this point at infinity projects onto the unit sphere as two points. Hence, according to Definition 1, the points of intersection of the great circles onto which the group of parallel lines is projected are a pair of antipodal points {M_+, M_−}. ☐

Corollary 5. In Figure 1, for the catadioptric camera, the images of a group of parallel lines in one plane intersect at exactly two points, which form a pair of antipodal image points.
Proof. Based on Theorem 4 and the invariance of incidence under a central projective transformation, the images of the group of parallel lines have only two points of intersection, namely, the images of the points of intersection of the great circles. ☐

In this study, the 3D lines are assumed not to be parallel to the z_w-axis of the catadioptric camera coordinate system. If the 3D lines are parallel to the z_w-axis, their line images coincide and degenerate into a straight line.
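Theorem 4 can be checked numerically: each line projects to the great circle cut by the plane through the sphere center spanned by a point on the line and the line's direction, and the two great circles of a pair of parallel lines meet exactly at the antipodal pair ±d/‖d‖. A small Python sketch with arbitrary example lines:

```python
import numpy as np

def great_circle_normal(P, Q):
    """Unit normal of the plane through the sphere centre O spanned by P and Q."""
    n = np.cross(P, Q)
    return n / np.linalg.norm(n)

# Two parallel 3D lines with common direction d, through points p1 and p2
# (arbitrary example values).
d = np.array([1.0, 1.0, 0.5])
p1, p2 = np.array([0.0, 1.0, 2.0]), np.array([2.0, -1.0, 1.0])

# Each line projects to the great circle cut by the plane spanned, at O,
# by a point on the line and the line's direction.
n1 = great_circle_normal(p1, p1 + d)
n2 = great_circle_normal(p2, p2 + d)

# The two great circles meet where both planes pierce the sphere: the
# antipodal pair +/- (n1 x n2), which is the normalised common direction d.
M_plus = np.cross(n1, n2)
M_plus /= np.linalg.norm(M_plus)
```

Because every interpretation plane of the group contains the direction d, all the plane normals are orthogonal to d, so the common intersection is the antipodal pair along d, as Theorem 4 states.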

3.1. Camera Calibration Using Two Groups of Parallel Lines.
Proposition 6. In a central catadioptric system, if the images of two groups of parallel lines in different directions are given, a group of orthogonal vanishing points can be obtained.

Proof. Let l_1i and l_2i (i = 1, 2, 3, 4) be two groups of great circles on the unit viewing sphere corresponding to the projections of two groups of parallel lines in two different directions, as shown in Figure 2. By Theorem 4, the two groups of great circles intersect at two pairs of antipodal points, {M_1+, M_1−} and {M_2+, M_2−}, which are the endpoints of two diameters of the viewing sphere; the quadrilateral with these four points as vertices is therefore a rectangle, and its adjacent sides define a pair of orthogonal directions. Let d_1 and d_2 denote the images of the points at infinity in these two directions. According to the properties of the projection, points d_1 and d_2 are a group of orthogonal vanishing points, hence proving Proposition 6. ☐

3.2. Camera Calibration Using a Group of Parallel Lines.
Proposition 7. In a central catadioptric system, if the images of a group of i parallel lines (i = 4 in Figure 3) are given, i groups of orthogonal vanishing points can be obtained.
Proof. As shown in Figure 3, the images C_1i of the circles l_1i intersect at two points m_1±, which are the images of the points M_1±. Without loss of generality, to simplify the representation, we take the images C_1i of the circles l_1i with i = 1 as an example and denote C_1i (i ≥ 2) by C_i (i ≥ 2). The tangents to the conics C_i (i ≥ 2) at the points m_1+ and m_1− intersect at a point d_1, the vanishing point in the direction of the tangents. Point d_2 denotes the fourth harmonic element of the points m_1−, m_1+, and p, where p is the image of the midpoint of the diameter of the circles l_1i. By cross-ratio invariance [23], d_1 and d_2 are a pair of orthogonal vanishing points, and repeating this construction for each conic yields i groups of orthogonal vanishing points, hence proving Proposition 7. ☐

3.3. Optimization Algorithm for Line Image Fitting. Only a section of the conic projected by a line is visible in a central catadioptric image [14]. According to Proposition 3, the antipodal image point m_i− of an image point m_i+ on the visible section can be determined.

Proposition 8. Given points m_i+ on the visible section of a line image together with their antipodal image points m_i−, the equation of the line image can be estimated as follows. Let the conic C be

C(x, y) = ax^2 + bxy + cy^2 + dx + ey + f = 0. (8)

We minimize the algebraic distance C(x_j, y_j) from the corresponding points to the conic C [24]; that is, we minimize the objective function

Γ = Σ_j C(x_j, y_j)^2, (9)

where (x_j, y_j) are the coordinates of the image points m_i+ or of their antipodal image points m_i−. There always exists the trivial solution a = b = c = d = e = f = 0 to (9). To avoid this zero solution, we normalize C(x, y) by setting, in turn, a = 1, b = 1, c = 1, d = 1, e = 1, and f = 1. When a = 1, let X^1 = [b, c, d, e, f]^T; the equation C(x_j, y_j) = 0 then becomes a_j^T X^1 − b_j = 0, where a_j = [x_j y_j, y_j^2, x_j, y_j, 1]^T and b_j = −x_j^2. Given 2N points, we obtain the following equation:

A X^1 = b, (10)

where A = [a_1, a_2, ⋯, a_2N]^T and b = [b_1, b_2, ⋯, b_2N]^T. Minimizing objective function (9) is then equivalent to minimizing

Γ(X^1) = ‖A X^1 − b‖^2. (11)

Taking the partial derivative of (11) with respect to X^1 and setting it equal to zero gives

A^T A X^1 = A^T b, (12)

and the solution of (12) is

X^1 = (A^T A)^{−1} A^T b, (13)

from which the coefficients of the conic are recovered. Similarly, by normalizing with b = 1, c = 1, d = 1, e = 1, and f = 1, the coefficient vectors X^2, X^3, X^4, X^5, and X^6 can be obtained. Thus, the equation of the line image is obtained by selecting, among the six normalized solutions, the one with the smallest value of the objective function (9).
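The six normalized least-squares fits can be sketched as follows. This is an illustrative Python implementation of the scheme above; choosing the solution with the smallest algebraic residual is our reading of the selection step, and the sample points on a full circle are for demonstration only (in practice the points come from a visible arc and its antipodal image points).

```python
import numpy as np

def fit_conic_normalized(pts):
    """Fit ax^2 + bxy + cy^2 + dx + ey + f = 0 to 2D points.

    Each of the six coefficients is set to 1 in turn, the remaining five
    are solved by linear least squares, and the fit with the smallest
    algebraic residual is kept (a sketch of the scheme in the text).
    """
    x, y = pts[:, 0], pts[:, 1]
    # Design matrix of monomials [x^2, xy, y^2, x, y, 1].
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    best, best_res = None, np.inf
    for k in range(6):                      # normalise coefficient k to 1
        A = np.delete(D, k, axis=1)         # columns of the unknown coefficients
        b = -D[:, k]
        X, *_ = np.linalg.lstsq(A, b, rcond=None)
        coeffs = np.insert(X, k, 1.0)       # full vector [a, b, c, d, e, f]
        res = np.linalg.norm(D @ coeffs)    # algebraic residual of this fit
        if res < best_res:
            best, best_res = coeffs, res
    return best

# Demonstration: sample the circle x^2 + y^2 = 4 and recover its conic.
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
pts = np.column_stack([2.0 * np.cos(theta), 2.0 * np.sin(theta)])
c = fit_conic_normalized(pts)
```

Up to scale, the recovered coefficients are proportional to (1, 0, 1, 0, 0, −4), the conic of the sampled circle.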

3.4. Calibration Algorithms Using Orthogonal Vanishing Points. According to Sections 3.1 and 3.2, multiple groups of orthogonal vanishing points can be obtained from the images of groups of parallel lines. A pair of orthogonal vanishing points d_1 and d_2 can be represented by the homogeneous coordinates d_1 = [x_1, y_1, 1]^T and d_2 = [x_2, y_2, 1]^T, respectively. From the relationship between orthogonal vanishing points and the image of the absolute conic (IAC) [23], it follows that

d_1^T ω d_2 = 0, (15)

where ω = K^{−T} K^{−1} represents the IAC and K denotes the intrinsic parameter matrix of the camera. Because ω has five degrees of freedom, it can be solved linearly from at least five groups of orthogonal vanishing points. K can then be determined using the Cholesky factorization of ω.
The intrinsic parameters of the camera can thus be obtained from two groups of parallel lines in different directions (Method 1) or from a single group of parallel lines (Method 2). The two algorithms can be summarized as follows:

Step 1. Input k (k ≥ 5) views, each containing projections of at least two groups of parallel lines in different directions.
Step 2. Fit the equation of the projective contour of the mirror by least-squares fitting; obtain the initial value of the intrinsic parameters of the camera.
Step 3. Take NðN ≥ 5Þ points for each projection of the line, and calculate their antipodal image points; estimate the equation of the line image using Proposition 8.
Step 4. Calculate the two common points of intersection of the images in each group of parallel lines using the "solve" function in MATLAB.
Step 5. Solve the tangents at the points of intersection corresponding to each conic l = Cm.
Step 6. Obtain the orthogonal vanishing points using Proposition 6 or 7.
Step 7. When the orthogonal vanishing points are known, solve ω using (15). Determine K using the Cholesky factorization of ω.
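Steps 6 and 7 can be illustrated with synthetic, noise-free data: each pair of orthogonal vanishing points contributes one linear constraint d_1^T ω d_2 = 0 on the IAC, ω is recovered up to scale from the nullspace of the stacked constraints, and K follows from a Cholesky factorization. The ground-truth K below is an assumed example value, not a value from the paper:

```python
import numpy as np

def intrinsics_from_vanishing_pairs(pairs):
    """Recover K from >= 5 pairs of orthogonal vanishing points.

    Each pair (d1, d2) of homogeneous image points gives d1^T w d2 = 0
    for the IAC w = K^-T K^-1. w is solved linearly up to scale via SVD,
    then K is obtained by a Cholesky factorisation.
    """
    rows = []
    for d1, d2 in pairs:
        x1, y1, z1 = d1
        x2, y2, z2 = d2
        # d1^T w d2 = 0 in the six unknowns [w11, w12, w22, w13, w23, w33].
        rows.append([x1 * x2,
                     x1 * y2 + y1 * x2,
                     y1 * y2,
                     x1 * z2 + z1 * x2,
                     y1 * z2 + z1 * y2,
                     z1 * z2])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    w11, w12, w22, w13, w23, w33 = Vt[-1]        # right nullvector, up to scale
    w = np.array([[w11, w12, w13], [w12, w22, w23], [w13, w23, w33]])
    if w[0, 0] < 0:                              # fix the arbitrary sign of the scale
        w = -w
    L = np.linalg.cholesky(w)                    # w = L L^T  =>  K = L^-T
    K = np.linalg.inv(L).T
    return K / K[2, 2]

# Synthetic check: vanishing points of orthogonal directions under a known K.
rng = np.random.default_rng(0)
K_true = np.array([[900.0, 1.5, 310.0], [0.0, 880.0, 260.0], [0.0, 0.0, 1.0]])
pairs = []
for _ in range(6):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthonormal directions
    pairs.append((K_true @ Q[:, 0], K_true @ Q[:, 1]))
K_est = intrinsics_from_vanishing_pairs(pairs)
```

Since d = K r for a direction r, the constraint d_1^T ω d_2 = r_1^T r_2 = 0 holds exactly for orthogonal directions, so the sketch recovers K_true from clean data.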

Experiments
To test the validity and feasibility of the proposed calibration methods, we performed a number of experiments using simulated data in addition to real images. We compared the results with the calibration methods of Deng et al. [11], Geyer and Daniilidis [18], and Li and Zhao [22]. The 3D lines were not parallel to the z_w-axis of the catadioptric camera coordinate system.
In the simulation experiments, the conic of the line image was fitted by the method stated in Proposition 8 and by the least-squares method, and the intrinsic parameters of the camera were then calibrated. One hundred points were chosen on each line image. Gaussian noise with zero mean and standard deviation σ was added to each pixel point. The noise level σ varied from 0 to 3.5 pixels. For each noise level, we performed 200 independent trials to compute the absolute errors of the intrinsic parameters f_u, f_v, u_0, v_0, and s of the camera and compared the results of the two conic fitting methods. The changes in the absolute errors of f_u, f_v, u_0, v_0, and s with different noise levels are shown in Figures 4(a)-4(e). The variants in which the line image is fitted by the least-squares method are denoted by Method 1-1 and Method 2-1; the methods proposed in this study are denoted by Method 1 and Method 2, and Li and Zhao's [22] methods are denoted by Li-1 (Proposition 6), Li-2 (Proposition 7), and Li-3 (Proposition 8), respectively.
Typically, the accuracy of the obtained calibration results depends highly on the accuracy of the extracted conics. Figures 4(a)-4(e) demonstrate that the accuracy of all methods degrades in the presence of increasing noise. However, Method 1 and Method 2 perform better than the other methods. The comparison of the experimental results indicates that, for the central catadioptric camera, fitting the line image using the antipodal constraint is more accurate than fitting with the least-squares method.

Experiment Using Simulation Data. The line image was fitted using the method proposed in Section 3.3. With the assumption of Gaussian noise, the absolute errors in the intrinsic parameters of the camera, f_u, f_v, u_0, v_0, and s, at different noise levels were calculated and compared with those of the reference methods [11, 18, 22], as shown in Figures 5(a)-5(e). Furthermore, the run time of the seven methods was compared on the MATLAB R2014b platform; the results obtained after 200 simulations are listed in Table 1.
It is well recognized that linear methods are less affected by noise. Figures 5(a)-5(e) demonstrate that as the noise level σ increased, the absolute errors of Deng et al.'s [11], Geyer and Daniilidis' [18], and Li and Zhao's [22] methods (the latter comprising the three variants Li-1 (Proposition 6), Li-2 (Proposition 7), and Li-3 (Proposition 8)) increased linearly. The errors of our methods increased at a slower rate, indicating the effectiveness and feasibility of the proposed methods. Furthermore, Table 1 reveals that the average run times of our methods and of Li and Zhao's [22] were similar, whereas the run times of Deng et al.'s [11] and Geyer and Daniilidis' [18] methods were higher than those of our methods, likely because these two methods are nonlinear.
Only a section of the projection of a straight line is visible in the central catadioptric system. The angle between the directions of the 3D lines influences their projections and thus affects the calibration accuracy. In the simulation experiment, the direction of the first group of parallel lines was set perpendicular to the z_w-axis, and the direction of the second group was initially set parallel to the z_w-axis. The direction of the second group of parallel lines was then rotated by an angle θ from 0° to 90° around the x_w-axis. The line image was fitted using the method proposed in Section 3.3. For each angle, 200 independent experiments were carried out. The absolute errors in the intrinsic parameters of the camera, f_u, f_v, u_0, v_0, and s, at different angles were calculated and compared with those of the reference method [18], as shown in Figures 6(a)-6(e). Li and Zhao [22] calibrated the camera using the projection of a sphere, and Deng et al.'s [11] method is based on a 2D calibration pattern of points; the angle of the lines has little effect on these methods, so they were not considered in this experiment.
When one of the groups of parallel lines rotates around the x_w-axis, the angle between the directions of the two groups of parallel lines changes accordingly. Figures 6(a)-6(e) show that the calibration accuracy is high and stable when this angle is between 20° and 70°. When the direction of one of the groups of parallel lines is nearly parallel to the z_w-axis, the image of this group of parallel lines tends toward a straight line, producing a large error in the calibration results. When the angle between the directions of the two groups of parallel lines becomes small, the interference between the two groups under noisy conditions becomes large, resulting in a larger absolute error. When the two groups of parallel lines approach coincidence, Method 1 in this study and Geyer and Daniilidis' [18] method degenerate and have large errors. Therefore, the angle between the directions of the two groups of parallel lines used for calibration is selected as 20° ≤ θ ≤ 70° in the real situation.

Experiment Using Images.
The angle between the directions of the two groups of parallel lines was selected according to its influence on the calibration results in the simulation experiment, that is, 20° ≤ θ ≤ 70° in the real experiment. In general, the noise in a real environment is unknown, with random Gaussian noise as the main source; Gaussian noise was therefore used to simulate real noise in the simulation experiment, and Figure 5 shows that the proposed methods are robust in the presence of noise. Next, we used real images to verify the accuracy and applicability of the methods in this study. In the real experiment, a checkerboard was used as the calibration object, with its grid lines providing the parallel lines. We used a central catadioptric camera with a hyperboloidal mirror having parameter ξ = 0.966, designed by the Center for Machine Perception at Czech Technical University. Five pictures of the checkerboard were taken with the catadioptric camera from different positions; one of the five images is shown in Figure 7. The resolution of these images was 1320 × 1170 pixels. The equations of the line images were obtained using the method described in Section 3.3.
The intrinsic parameters of the camera were obtained with the two calibration methods proposed in this study in addition to Deng et al. [11], Geyer and Daniilidis [18], and Li and Zhao's [22] calibration methods. Table 2 lists the calibration results on real data.
To check the calibration results in Table 2, the test image in Figure 7 was rectified to its projective image, as shown in Figures 8(a)-8(g).
The results of all methods in Table 2 were similar to each other, indicating that the proposed methods are feasible. Straight lines in space appear as curves in the original image shown in Figure 7 owing to the distortion of the catadioptric projection. After the image was rectified using the data in Table 2, straight lines in space remained straight in the images shown in Figures 8(a)-8(g). To further verify the accuracy of the methods, the data in Table 2 were used to rectify the image shown in Figure 7, and the checkerboard reconstruction results were extracted. The reconstruction information in the parallel and orthogonal directions of the checkerboard is shown in Figures 9(a)-9(g), reconstructed using (a) Method 1, (b) Method 2, (c) Geyer and Daniilidis [18], (d) Deng et al. [11], (e) Li-1 (Proposition 6), (f) Li-2 (Proposition 7), and (g) Li-3 (Proposition 8) from Li and Zhao [22]. First, we extracted the points on each row and each column to fit the equation of each line. Next, we calculated the angles between pairs of lines in all parallel directions and obtained their average value; similarly, the average angle between all orthogonal lines was obtained. Table 3 lists the angle results for the real data shown in Figures 9(a)-9(g).
The ground truth is 0° between parallel lines and 90° in the orthogonal direction on the checkerboard. In the reconstruction results, the angles obtained with our methods in the parallel and orthogonal directions are closer to the ground truth, which confirms the accuracy of the methods in this study.

Conclusions
The main contribution of this study to the field of omnidirectional camera calibration is the use of the projection properties of parallel lines in the unit viewing sphere model to calibrate a central catadioptric camera. A line in 3D space is projected to a great circle on the unit viewing sphere, whose center coincides with that of the unit viewing sphere. A group of parallel lines is projected to a group of great circles with only two points of intersection: a pair of antipodal points. In Method 1, two groups of parallel lines in different directions are projected onto two groups of great circles with four points of intersection on the unit viewing sphere. According to the definition of antipodal points, the quadrilateral with these points as vertices is a rectangle, and the orthogonal directions can be obtained from the four points of intersection. In Method 2, the tangent at a point on a circle is orthogonal to the diameter passing through that point; using this property, the orthogonal vanishing points for the support plane containing each great circle can be determined on the image plane. Finally, the intrinsic parameters of the camera can be linearly determined from five groups of orthogonal vanishing points. This means that five images are required, and all five intrinsic parameters are recovered without any assumptions such as zero skew (s = 0) or a unitary aspect ratio (f_u = f_v).
The two methods proposed in this study provide new insights into the calibration of central catadioptric cameras, especially regarding the constraints provided by the projection properties of parallel lines, from which two linear calibration approaches are derived. Experimental results on both simulated data and real image data validated the effectiveness of our methods and demonstrated that the two linear calibration methods maintain comparable accuracy. Because only a section of the conic projected by a line in the central catadioptric projection is visible, an optimization algorithm for line image fitting, based on the relationship between antipodal points, was proposed in this study to improve the accuracy of calibration. The simulation results show that the proposed optimization algorithm can improve the precision of fitting a line image in central catadioptric cameras. Although the algorithm proposed in this study improves the calibration accuracy to some extent, a more thorough investigation may be required. A shortcoming of the method used in this study is that the projection contour of the mirror of the central catadioptric camera is required to obtain the images of antipodal points. In future research, methods of directly obtaining the images of antipodal points from the projective properties of geometric elements will be considered.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.