Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition

For a monocular-camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when the object's imaging depth and the camera's translational extrinsic parameters are unknown. Firstly, the particular properties of the homography induced by the mobile robot's 2-DOF motion are exploited to derive a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, together with a single feature point of the desired view, are used to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are employed to construct an adaptive regulation control law, followed by experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient but also highly precise. Meanwhile, the designed control law achieves position and orientation regulation of the mobile robot despite the lack of depth information and the camera's translational extrinsic parameters.


Introduction
Visual sensors can sense rich information at low cost. Meanwhile, image processing and analysis techniques have improved considerably as visual sensors have become widespread. Therefore, using visual sensors on robots as a visual servo system, in order to significantly improve their environmental sensing ability and level of intelligence, has become a hot research topic in the robotics community.
Because robot arms are unconstrained systems, their visual servo regulation problems are relatively simple and have been well studied in previous work. References [1][2][3] comprehensively reviewed visual servo regulation algorithms for robot arms. The key point in robot arm visual servo regulation is the selection of an appropriate visual feature set and the determination of the interaction matrix (also known as the image Jacobian matrix) that characterizes the nonlinear mapping between the feature set and camera motion. The major difficulty is how to handle the depth information missing from the interaction matrix. To solve this problem, Malis [4,5] proposed a 2.5D hybrid visual servo method, in which the selected visual feature set includes both 3D position information and 2D image information, and the homography matrix is used to process the unknown depth information in the interaction matrix. Benhimane and Malis [6] proposed a homography-based visual servo method that directly uses the homography matrix and the desired feature points to build the visual feature set, thereby deriving an interaction matrix free of time-varying depth information. Piepmeier et al. [7] proposed a typical quasi-Newton-based online estimation of the interaction matrix and successfully applied it to uncalibrated visual servoing.
Unlike robot arms, a mobile robot is a typical nonholonomically constrained system. The nonholonomic constraint makes visual servo regulation of mobile robots more difficult than that of robot arms. Visual servoing of mobile robots includes visual tracking and visual regulation. References [8][9][10] conducted in-depth research on visual tracking. However, in their stability analyses the speed and path of the robot must be constrained, so these methods can hardly be applied directly to visual regulation.
Solving the mobile robot visual servo regulation problem requires combining visual servo techniques with regulation controller design for nonholonomically constrained systems. Fang et al. [11] studied the mobile robot visual regulation problem in the case where the camera frame completely coincides with the robot frame. They derived the analytical expression of the homography matrix in this specific case and decomposed it to obtain the angular difference and the imaging-depth ratio between the current and desired poses, and eventually designed an adaptive regulation control law using Lyapunov's direct method. Based on [11], Zhang et al. [12] further studied the case where only a pure translational extrinsic parameter exists between the camera and robot frames. First, the angular difference between the current and desired poses was extracted using the Faugeras homography decomposition method [13] or other methods [14]; then it was combined with a 2D image error signal to obtain the system's open-loop error function; finally, an adaptive control law compensating for the unknown translational displacement and the desired imaging depth was designed using Lyapunov's direct method.
Clearly, the studies in [11,12] both rest on the condition that the optical axis of the camera coincides with the direction of motion of the mobile robot; they are not applicable when there are pose differences between the camera and robot frames. Moreover, [13] and other general homography decomposition methods [15,16] all exhibit computational complexity and nonunique decomposition results, and extra a priori knowledge is needed to eliminate the wrong solutions. Based on the analytical expression of the homography matrix proposed in [11], [14] proposed a fast decomposition method for this type of homography matrix. However, it is only applicable to cases where the camera and robot frames coincide completely or differ only by a translation. Inspired by [6,11,12,14], this work extends these results and studies the mobile robot visual servo regulation problem when the camera and robot frames differ by both a translation and a rotation. A fast homography decomposition method applicable to this broader setting is also proposed. Specifically, first, the fast homography decomposition method is used to determine the scale factor of the homography matrix and the angular difference between the current and desired poses; then, combined with the feature points of the desired view, a group of error functions is constructed, yielding the open-loop error function of the system; finally, using Lyapunov's direct method, an adaptive regulation control law that simultaneously compensates for the translational extrinsic parameter and the desired imaging depth is designed and experimentally verified.

Unique Properties of the Euclidean Homography Matrix.
Figure 1 shows the projective model of the homography matrix.
The current and desired camera frames are denoted {F} and {F*}, respectively, and π and π* are the corresponding image planes of the two cameras. The position-orientation relationship between the two frames can be described by P = R P* + t, where R ∈ R^{3×3} and t ∈ R^{3×1} are the rotation and translation between {F} and {F*}. K is the camera intrinsic matrix; (u, v) are the horizontal and vertical pixel coordinates of a point P in {F}, and (u*, v*) those in {F*}. Then the Euclidean homography H, which satisfies m = α H m* for the normalized projective coordinates m and m* of P, can be expressed as H = R + (t/d*) n*^T, where n* is the unit normal of the scene plane in {F*} and d* is the distance from the origin of {F*} to the plane. According to (4), the Euclidean homography matrix H is a rank-one modification of the rotation matrix R, and its determinant can be written as det(H) = 1 + n*^T R^T t / d* = d/d*, where d is the distance from the origin of {F} to the plane. Property 1 can be derived from (5).
Property 1. The determinant of the Euclidean homography matrix equals the ratio of the distances from the optical centers of the current and desired cameras to the scene plane; this ratio is always greater than zero.

Let R_{k,θ} be the rotation matrix obtained by rotating by angle θ about the unit axis k; the general (Rodrigues) formula for this rotation is R_{k,θ} = cos θ I + (1 − cos θ) k k^T + sin θ [k]_×, where [k]_× is the skew-symmetric matrix of k. Based on (6), the identities R_{k,θ}^T = R_{k,−θ} and R_{k,θ} k = k hold. Figure 2 shows the mobile robot frame {R} and the camera frame {C}.
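Property 1 can be checked numerically. The sketch below assumes the standard Euclidean homography convention H = R + (t/d*) n*^T; all numeric values are illustrative, not from the paper.

```python
import numpy as np

def rodrigues(k, theta):
    """Rotation by angle theta about unit axis k (Rodrigues' formula)."""
    k = k / np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(k, k)
            + np.sin(theta) * K)

# Illustrative geometry (assumed values)
R = rodrigues(np.array([0.0, -1.0, 0.0]), 0.3)   # rotation between the views
t = np.array([0.2, 0.0, 0.1])                    # translation between the views
n_star = np.array([0.1, 0.0, -1.0])
n_star = n_star / np.linalg.norm(n_star)         # plane normal in the desired frame
d_star = 2.0                                     # desired-view distance to the plane

H = R + np.outer(t, n_star) / d_star             # Euclidean homography
d = d_star + t @ (R @ n_star)                    # current-view distance to the plane

print(np.isclose(np.linalg.det(H), d / d_star))  # True: det(H) = d/d*
```

The check uses the rank-one determinant identity det(R + t n*^T/d*) = 1 + n*^T R^T t / d*, which equals d/d* by the plane geometry.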
Let the position-orientation relationship between {R} and {C} be given by the orientation extrinsic parameter R_cr and the translation extrinsic parameter t_cr, as in (8). Equation (9) is derived from (7) and the Nanson formula; then, with the calibrated orientation extrinsic parameter R_cr of the camera, (10) can be used to solve for the a priori value of the onboard camera's rotation axis k. Obviously, when the optical axis is perpendicular to the rotation axis of the robot, k = −e2 = [0 −1 0]^T. According to (10) and (11), equation (12) follows, where the identities e3^T d = 0 and e3^T t_cr = 0 are used. Property 2 can be derived from (9) and (12).

Property 2. The rotation angle θ' of the rotation matrix R decomposed from the onboard homography H' has the same magnitude as, but the opposite sign to, the rotation angle θ of the robot. In addition, the rotation axis k of R and the decomposed translational displacement t' are mutually orthogonal.
Property 3. The rotation axis k of the rotation matrix R decomposed from the onboard homography H' is equal to the unit eigenvector of H'^T associated with its real eigenvalue; for the Euclidean homography, H^T has this eigenvalue at unity.

Calculation of Rotation Axis k.
After calibrating the onboard camera's extrinsic parameters, (11) can first be used to obtain the a priori rotation axis. Then, with the normalized projective coordinates of four groups of coplanar matching points known, the direct linear transformation (DLT) can be used to solve for the homography H' satisfying the constraints ‖H'‖ = 1 and det(H') > 0. After that, the real eigenvalue λ of H'^T and its corresponding unit eigenvector x can be obtained, as in (13). The constant rotation axis is then given by (14) as k = sgn(·) x, where sgn(·) takes the sign of its argument (determined by the a priori axis). After k is obtained, the Euclidean homography matrix H follows from (15). At this point, both the homography matrix H and the rotation axis k of the rotation matrix R are available. It is worth noting that k is constant in most cases and only needs to be solved for once.
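The eigenvector-based axis recovery of Property 3 can be sketched as follows. This is a hedged numerical illustration with assumed values: when t is orthogonal to k, H^T k = R^T k + n* (t^T k)/d* = k, so k is the unit eigenvector of H^T at the real eigenvalue 1, and the sign ambiguity is resolved with the a priori axis from the calibrated orientation extrinsics.

```python
import numpy as np

def rodrigues(k, theta):
    """Rotation by angle theta about unit axis k (Rodrigues' formula)."""
    k = k / np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(k, k)
            + np.sin(theta) * K)

k_true = np.array([0.0, -1.0, 0.0])         # onboard rotation axis
R = rodrigues(k_true, 0.4)
t = np.array([0.3, 0.0, -0.1])              # chosen orthogonal to k_true
n_star = np.array([0.0, 0.2, -1.0])
n_star = n_star / np.linalg.norm(n_star)
H = R + np.outer(t, n_star) / 1.5           # Euclidean homography, d* = 1.5 (assumed)

w, V = np.linalg.eig(H.T)
i = int(np.argmin(np.abs(w - 1.0)))         # the real eigenvalue at unity
x = np.real(V[:, i])
x = x / np.linalg.norm(x)
k_prior = np.array([0.0, -1.0, 0.0])        # a priori axis, e.g. from (11)
k_rec = np.sign(k_prior @ x) * x            # fix the sign with the prior

print(np.allclose(k_rec, k_true))           # True: the axis is recovered
```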

Calculation of Rotation Angle 𝜃.
As shown in Figure 2, when the optical axis of the camera is perpendicular to the rotation axis of the mobile robot and the angle between the vertical axis of the image plane and the robot's rotation axis is zero, then, taking the camera frame {C} as the reference frame, the rotation axis can be written as −e2. In this case, the homography matrix takes a special form and can be decomposed more easily. On this basis, this section provides a new construction for solving the rotation angle.
First, construct the auxiliary rotation matrix R_{k1,θ1}, where k1 = k × (−e2)/‖k × (−e2)‖ and θ1 is the angle between k and −e2; then the equation R_{k1,θ1} k = −e2 must hold. Left-multiply by R_{k1,θ1} and right-multiply by R_{k1,θ1}^T on both sides of (4) to get (18). Because k and t' are orthogonal, R_{k1,θ1} k = −e2 and R_{k1,θ1} t' are also orthogonal; that is, the component t̃2 = 0 in t̃. Let the element in the i-th row and j-th column of H̃ be written as h̃_ij; then (18) can be written as (19). Equating the elements on both sides and rearranging yields (20)-(23). According to (21) and (23), equation (24) holds, with the quantities defined in (25)-(27); then two groups of possible solutions for θ are obtained in (28). When h̃12 or h̃32 in (24) is nonzero, substitute (28) into (22)-(23) to get t̃1 and t̃3 and substitute them back into (24) for verification, so as to determine the unique angle solution. When h̃12 = h̃32 = 0, it is impossible to determine the unique solution directly from (25); a priori knowledge, such as motion continuity or the scene plane normal, must be used to eliminate the false solutions.
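The auxiliary rotation construction can be sketched numerically. This is a hedged illustration with assumed values: rotating about k1 = k × (−e2)/‖k × (−e2)‖ by the angle θ1 between k and −e2 maps k onto −e2, and because t' ⟂ k, the rotated translation has zero second component.

```python
import numpy as np

def rodrigues(k, theta):
    """Rotation by angle theta about unit axis k (Rodrigues' formula)."""
    k = k / np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(k, k)
            + np.sin(theta) * K)

e2 = np.array([0.0, 1.0, 0.0])
k = np.array([0.2, -0.9, 0.1])
k = k / np.linalg.norm(k)                   # assumed onboard axis (not parallel to e2)

k1 = np.cross(k, -e2)                       # auxiliary axis (rodrigues() normalizes it)
theta1 = np.arccos(np.clip(k @ (-e2), -1.0, 1.0))
R_aux = rodrigues(k1, theta1)

v = np.array([1.0, 0.0, 0.3])
t_prime = v - (v @ k) * k                   # some translation orthogonal to k

print(np.allclose(R_aux @ k, -e2))          # True: k is mapped onto −e2
print(np.isclose((R_aux @ t_prime)[1], 0))  # True: t̃2 = 0 in the rotated translation
```

The degenerate case k = −e2 (zero cross product) needs no auxiliary rotation at all, since the homography is already in the special form.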
When the elements h̃12 = h̃32 = 0 in the homography matrix H̃, the component ñ2 of ñ is zero. This corresponds to the situation where the scene plane is perpendicular to the floor.

Calculation of Translation t' and Scene Plane Normal n.
After determining the rotation angle θ of the rotation matrix, substitute it into (20) to obtain (29). Right-multiplying both sides of (29) by ñ gives (30). Equations (30) and (31) can be used to calculate ñ and t̃; then R, t', and n* in (4) can be solved from (19), as in (32). According to Figure 1, the projection of the desired image-plane point m* onto the scene plane normal n* must be smaller than zero, which introduces the constraint (33); that is, when h̃12 or h̃32 in (24) is nonzero, a unique homography decomposition solution is obtained. When h̃12 = h̃32 = 0, two groups of possible decomposition solutions are obtained; in this case, a priori information about n* must be used to eliminate the false solution.
The homography decomposition procedure described in this section avoids the singular value decomposition of the matrix. Meanwhile, it also avoids solving a cubic equation in one unknown when determining the scale factor between H' and H. It is therefore a homography decomposition method that is both easy to program and efficient to run.
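The SVD-free recovery of the translation and normal can be sketched as a direct rank-one factorization. This is a hedged illustration with assumed values, not the paper's exact equations: once θ is known, M = H̃ − R̃(θ) equals (t̃/d*) ñ^T, a rank-one matrix, so both factors can be read off directly; the remaining sign ambiguity is the one the text resolves with the visibility constraint n*^T m* < 0.

```python
import numpy as np

theta = 0.25
c, s = np.cos(theta), np.sin(theta)
R_tilde = np.array([[c, 0.0, -s],
                    [0.0, 1.0, 0.0],
                    [s, 0.0, c]])            # rotation by theta about −e2

t_tilde = np.array([0.4, 0.0, -0.2])         # special form: second component is zero
n_tilde = np.array([0.3, 0.5, -1.0])
n_tilde = n_tilde / np.linalg.norm(n_tilde)
d_star = 2.0
H_tilde = R_tilde + np.outer(t_tilde, n_tilde) / d_star

M = H_tilde - R_tilde                        # rank-one residual (t̃/d*) ñ^T
j = int(np.argmax(np.linalg.norm(M, axis=0)))
u = M[:, j]                                  # a column of M, proportional to t̃/d*
v = M.T @ u / (u @ u)                        # chosen so that M = u v^T exactly
n_rec = v / np.linalg.norm(v)                # recovered ñ (up to sign)
t_rec = u * np.linalg.norm(v)                # recovered t̃/d* (matching sign)

print(np.allclose(np.outer(t_rec, n_rec), M))  # True: exact rank-one factorization
```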

Controller Design
As shown in Figure 3, the main purpose of monocular mobile robot visual servo regulation is to construct an appropriate visual feature set from the image signals provided by the visual sensor, through feature extraction and position-orientation estimation; then to construct an error signal and derive the open-loop error function of the system; and finally to design an appropriate adaptive control law that compensates for the translational extrinsic parameter and the desired imaging depth of the target, while driving the mobile robot from the current pose {F} to the desired pose {F*}. The main steps are image feature extraction and visual position-orientation estimation, visual feature set construction and open-loop error function derivation, and adaptive control law design.

Problem Formulation. According to (3), relation (34) holds; therefore, the image error signal can be defined as in (35), where P is the coordinate of the spatial 3D point in the auxiliary frame {A}. The origin of {A} coincides with that of the onboard camera frame {C}, and its x, y, and z axes are parallel to the x, y, and z axes of the robot frame {R}, respectively. The linear and angular velocities of this frame can be readily obtained from rigid-body kinematics, as in (36) and (37). The linear velocity of the 3D point resulting from the motion of {A} described by (36) and (37) is given in (38). Substituting (35)-(37) into (38) and rearranging yields (39), where (⋅)_x and (⋅)_z in (40) denote the x and z components of a vector, respectively.
Define the orientation error signal as in (41), where θ is the angle solved from (20). According to (9), the mapping between its rate of change and the rotational angular velocity of the robot is given in (42). According to (52), combined with (59) and (60), the limit of the error signal is obtained. Because the relevant signals are uniformly continuous and converge in the limit, the extended Barbalat lemma shows that the remaining error terms tend to zero. Taking the derivative of θ1 − θ and rearranging yields (69)-(71), from which it can finally be concluded that, under the designed adaptive regulation control law, the robot asymptotically converges to the desired position-orientation.
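The garbled convergence argument above follows the standard Lyapunov/Barbalat pattern; a generic sketch of that structure (symbols illustrative, not the paper's exact functions) is:

```latex
% Generic Lyapunov / extended Barbalat argument (illustrative symbols):
V = \tfrac{1}{2}\, e^{\top} e \;+\; \tfrac{1}{2\gamma}\,\tilde{\rho}^{2},
\qquad
\dot{V} \le -\,\mu \lVert e \rVert^{2} \le 0 .
% Hence V is bounded, e \in \mathcal{L}_{2} \cap \mathcal{L}_{\infty},
% and the parameter error \tilde{\rho} stays bounded. If, in addition,
% \dot{e} is bounded, e is uniformly continuous, and the (extended)
% Barbalat lemma gives
\lim_{t \to \infty} e(t) = 0 .
```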

Experiments and Analysis
The position-orientation relationship between the current frame {R} and the desired frame {R*} of the robot, and the extrinsic parameters between the mobile robot and the camera, are set to fixed values in the experiment. The images of the square on the scene plane in {R} and {R*} are then as shown in Figure 4.

Fast Decomposition Experiment of the Homography Matrix.
Gaussian noise of different levels (0-2 pixels) is applied to the corner points of the two images. For each noise level, 1000 random noise samples are applied to the image points. DLT is used to estimate the homography matrix, which is then decomposed with the traditional SVD-based method [13] and with the proposed fast decomposition algorithm, respectively. The decomposition results are {θ_i^svd, t_i^svd, n_i^svd} and {θ_i^fast, t_i^fast, n_i^fast}, i = 1, ..., 1000 (75). The errors between these results and the true values {θ, t', n*} are defined in (76); averaging the 1000 groups of errors yields the error maps in Figure 5.
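The Monte-Carlo setup above can be sketched as follows. The geometry, ground-truth homography, and noise level are illustrative assumptions; the sketch only reproduces the estimation side of the experiment (DLT under corner noise), not the two decomposition methods being compared.

```python
import numpy as np

rng = np.random.default_rng(0)

def dlt_homography(src, dst):
    """Estimate a homography (up to scale) from >= 4 correspondences via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                        # fix the scale so H[2,2] = 1

def project(H, pts):
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

H_true = np.array([[1.0, 0.1, 5.0],
                   [0.05, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])        # assumed ground-truth homography
square = np.array([[0.0, 0.0], [100.0, 0.0],
                   [100.0, 100.0], [0.0, 100.0]])  # square corners (pixels)

# Noiseless sanity check: DLT recovers H exactly from exact correspondences.
H0 = dlt_homography(square, project(H_true, square))
print(np.allclose(H0, H_true))                # True

# Monte-Carlo run at one noise level (1 pixel std), 1000 trials as in the text.
errs = []
for _ in range(1000):
    noisy = project(H_true, square) + rng.normal(0.0, 1.0, size=(4, 2))
    errs.append(np.linalg.norm(dlt_homography(square, noisy) - H_true))
print(float(np.mean(errs)))                   # mean estimation error over the trials
```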
According to Figure 5(a), the rotation angles computed by the two methods have nearly identical accuracy, and in individual cases the proposed method even outperforms the traditional one. From Figures 5(b) and 5(c), the accuracies of the camera translation and the scene plane normal obtained by the proposed method are clearly higher than those of the traditional algorithm. The proposed algorithm thus effectively avoids the matrix SVD computation, requires fewer operations, is easy to program, and achieves higher decomposition accuracy.

Experiment of Adaptive Hybrid Visual Servo Regulation for the Mobile Robot.
To verify the performance of the proposed algorithm, the mobile robot system was simulated in a MATLAB environment. The simulation results show that the proposed algorithm makes the robot gradually converge to the desired pose with good control performance.
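To give a concrete flavor of the regulation objective, here is a minimal kinematic sketch, explicitly NOT the paper's adaptive law: a standard polar-coordinate posture regulator for a unicycle (gains and initial pose are illustrative assumptions) drives the robot to the desired pose at the origin, mirroring the convergence behaviour reported for the simulation.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

# Stable gain choice for the classical polar regulator: k_rho > 0, k_beta < 0,
# k_alpha > k_rho. The desired pose is (0, 0, 0).
k_rho, k_alpha, k_beta = 0.5, 1.5, -0.3
x, y, th = -2.0, 1.0, 0.5                    # assumed initial pose
dt = 0.01

for _ in range(5000):                        # 50 s of simulated time
    rho = np.hypot(x, y)                     # distance to the goal
    alpha = wrap(np.arctan2(-y, -x) - th)    # heading error toward the goal
    beta = wrap(-th - alpha)                 # final-orientation error term
    v = k_rho * rho                          # linear velocity command
    w = k_alpha * alpha + k_beta * beta      # angular velocity command
    x += v * np.cos(th) * dt                 # unicycle kinematics (Euler step)
    y += v * np.sin(th) * dt
    th = wrap(th + w * dt)

print(np.hypot(x, y) < 0.05)                 # True: robot reaches the goal position
```

The paper's controller additionally estimates the unknown depth and translational extrinsic parameter online; this sketch assumes full state knowledge and is only meant to illustrate the pose-regulation task itself.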
The gains of the controller and of the adaptive parameter update law were fixed, with the adaptive parameter estimates initialized at zero; the evolution of the adaptive parameters is illustrated in Figure 6. It can be seen from Figure 6 that, starting from this arbitrary initial guess of the unknown parameters, the estimates gradually stabilize toward the true values.
The control inputs are illustrated in Figure 7, and the system error curves are shown in Figure 8. It can be observed that the control law designed in this study drives the robot to gradually stabilize at the desired position-orientation despite the unknown translational extrinsic parameter of the camera.

Conclusion
This paper studied an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method, for the case of missing target depth information and an unknown translational extrinsic parameter of the onboard camera. By constructing an auxiliary rotation matrix, a general homography decomposition problem was transformed into a special one, which effectively reduced the decomposition complexity. Compared with traditional homography decomposition methods, the proposed algorithm achieves higher decomposition accuracy and robustness. Meanwhile, the designed adaptive visual controller compensates online for the unknown imaging depth and translational extrinsic parameter, enabling the mobile robot to move gradually from the initial pose to the desired pose with good performance.

Figure 1: Projective model of the homography matrix.

Figure 2: The mobile robot frame and the camera frame.

Figure 3: Visual servo regulation of the mobile robot with a monocular camera.

Figure 4: The current image and the desired image.
Figure 5: Error maps of the homography decomposition results.
Figure 6: Adaptive parameter curves.
Figure 7: Control input curves.
R ∈ R^{3×3} and t ∈ R^{3×1} are the relative rotation and translation between {F} and {F*}, respectively. Let the distances from the origins of {F} and {F*} to the target plane be d and d*, respectively, and let the normal of the 3D scene plane in {F} and {F*} be n and n*, respectively; then the plane can be written as [n^T, d]^T in {F} and [n*^T, d*]^T in {F*}. Let the 3D point P on the plane have coordinates [X, Y, Z]^T in {F} and [X*, Y*, Z*]^T in {F*}; then m and m* are the normalized projective coordinates of P on the image planes π and π*, respectively.