Image-based visual servoing using epipolar geometry is an efficient approach to the visual servoing problem for nonholonomic mobile robots. In this paper, an improved strategy, namely three-step epipolar-based visual servoing, is developed for a nonholonomic robot. The proposed strategy keeps the robot within the field-of-view (FOV) constraint without any 3D reconstruction. Moreover, the trajectory planned by this strategy is shorter than those of existing strategies, and the mobile robot reaches the desired configuration with exponential convergence. The control scheme is divided into three steps. First, using the difference of the epipoles as feedback,
the robot rotates so that the current and desired configurations have the same orientation. Then, using linear input-output feedback, the epipoles are zeroed so as to align the robot with the goal. Finally, using the difference of the feature points, the robot reaches the desired configuration. Simulation and experimental results illustrate the effectiveness of the proposed control scheme.
1. Introduction
With the development of computer vision and the growing demand for robot intelligence, visual servoing is becoming a hot research field in robotics. Visual servoing is an extensive field in which computer vision is used in the design of motion controllers. The main task of visual servoing [1] is to regulate the pose (position and orientation) of a robot so that it reaches a desired pose by using image information obtained by a camera.
Different visual servoing (VS) approaches have been proposed to solve the visual servoing problem. They can be classified into two main categories: position-based visual servoing (PBVS) [2–4] and image-based visual servoing (IBVS) [5, 6].
In the position-based visual servoing strategy, the desired pose is estimated on the basis of visual data and geometric models. For instance, an omnidirectional vision system [7] is used to determine the robot posture, and a concept of the 3D visible set for PBVS is proposed in [3]. These strategies add new concepts to overcome the shortcomings of PBVS, but all of them require 3D reconstruction, which under normal circumstances needs a large amount of computation and makes real-time operation difficult for PBVS.
To avoid 3D reconstruction, the image-based visual servoing strategy was proposed. In IBVS, the errors between the initial and desired configurations of the feature points on the image plane are generated, and the feature points are controlled to move from the current configuration to the desired configuration on the image plane. IBVS is known to be more suitable for preventing the feature points from leaving the field of view (FOV), since the trajectories of the feature points are controlled directly on the image plane. An IBVS scheme with the Canny operator and a line-detecting strategy is proposed in [5]. However, image singularities and image local minima may exist due to the form of the image Jacobian, as is frequently encountered with the general IBVS strategy. To solve this issue, an approach named homography-based visual servoing has been proposed in [8] for mobile robots, which needs the camera calibration parameters and an adaptive estimation of a constant depth-related parameter. In addition, this strategy cannot make the initial configuration converge to the desired configuration exponentially.
In some unknown environments, the calibration is not exactly known, and visual servoing becomes uncalibrated visual servoing. Hence, [9] proposed a quaternion-based camera rotation estimate and a new closed-loop error system to improve the robustness of vision-based control for uncalibrated visual servoing. On the basis of [9], an adaptive homography-based visual servo tracking controller [10] was designed to compensate for unknown depth information, using a quaternion formulation to represent the rotation tracking error. Meanwhile, a robust adaptive uncalibrated visual servo controller [11] was put forward to asymptotically regulate a robot end-effector to a desired pose while compensating for the unknown depth information and intrinsic camera calibration parameters.
Keeping the features within the camera's FOV is an important problem in visual servoing. Reference [12] presents a novel two-level scheme for adaptive active visual servoing of a mobile robot that provides a satisfactory solution to the field-of-view problem, while [13] introduces a novel visual servo controller designed to control the pose of the camera so as to keep multiple objects in the FOV of a mobile camera; a set of underdetermined task functions is developed to regulate the mean and variance of a set of image features. The nonlinear character of the mobile robot is another important problem in visual servoing. Reference [14] presents a controller for locking a moving object in 3D space at a particular position on the image plane, accounting for both the highly nonlinear robot dynamics and the unknown motion of the object.
Recently, a novel IBVS strategy [6] was proposed that computes the epipolar geometry between the current image and the desired one. When the angle between f and the x-axis of the initial configuration is larger than that of the desired configuration, this strategy makes the initial configuration converge to the desired configuration exponentially. But when this angle is smaller than that of the desired configuration, the trajectory planned by the strategy may be much longer, the time cost may increase considerably, and sometimes the feature points leave the field of view.
To overcome these shortcomings, we propose a three-step strategy. First, we add a step that rotates the robot from the initial configuration to an intermediate configuration with the same orientation as the desired configuration. With this step, we guarantee that the angle between f and the x-axis of the new configuration is never smaller than that of the desired configuration. Thereby the trajectory is always shorter under our three-step strategy, and the robot keeps the feature points within the FOV constraint. Then, in the second step, linear input-output feedback is used to zero the epipoles so as to align the robot with the goal. Finally, a proportional-plus-integral controller is introduced in the third step so that less time is taken to reach the desired configuration.
This paper is organized as follows. Section 2 presents the main task of IBVS, the nonholonomic robot model, and the epipolar geometry, and gives an outline of the system framework. Section 3 presents the control scheme. Simulations are provided in Section 4, and experimental results are illustrated in Section 5 to evaluate the effectiveness of the proposed control scheme. Finally, Section 6 concludes the paper.
2. Problem Formulation
In this section, visual servoing, epipolar geometry, the nonholonomic robot model, and the general framework are briefly introduced.
2.1. Visual Servoing
Robot visual servoing is a strategy for driving a mobile robot from a current pose (position and orientation) to a desired pose by using the feature points of the current and desired views as feedback while keeping the feature points within the FOV.
2.2. Epipolar Geometry
Epipolar geometry describes the intrinsic geometry between two views and depends only on the relative location of the cameras and their internal parameters. As shown in Figure 1, the points C1 and C2 are the camera centers, the line connecting C1 and C2 is called the baseline, and the intersections of the baseline with the image planes are called the epipoles, that is, e1 and e2. Let P be a point in 3D space; p1 and p2 are its projections onto the image planes I1 and I2. The lines l1 and l2 are called the epipolar lines. From geometry, e1 and e2 represent the relative orientation between the image planes. These points will be used later.
The schematic diagram of epipolar geometry.
The epipoles can be computed directly from the geometric relationship between the desired and current views. The common method is to use the fundamental matrix F. The epipolar geometry gives:
(1) \; l_1 = F^{T}p_2, \qquad l_2 = F p_1, \qquad F = M^{-T} E M^{-1},
where E is the essential matrix and M is the camera intrinsic matrix.
Note that F can be estimated by well-known algorithms, such as Hartley's normalized 8-point algorithm [15], the RANSAC algorithm [16], or the LMedS algorithm [17].
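For illustration, a minimal NumPy sketch of Hartley's normalized 8-point algorithm is given below (the function names and data layout are ours, not from the paper; no outlier rejection is included). The epipoles are then recovered as the null vectors of the estimated F:

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: centroid at the origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return (pts - c) * s, T

def eight_point(p1, p2):
    """Estimate F (p2^T F p1 = 0) from >= 8 correspondences, given as (N, 2) arrays."""
    n1, T1 = normalize(p1)
    n2, T2 = normalize(p2)
    A = np.column_stack([n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
                         n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
                         n1[:, 0], n1[:, 1], np.ones(len(n1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # least-squares solution
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo the normalization
    return F / np.linalg.norm(F)

def epipoles(F):
    """Unit-norm null vectors: F e1 = 0 and F^T e2 = 0.
    Dividing by the third component gives pixel coordinates."""
    e1 = np.linalg.svd(F)[2][-1]
    e2 = np.linalg.svd(F.T)[2][-1]
    return e1, e2
```

In practice one would add RANSAC around `eight_point` to reject mismatched correspondences, as the paper's references [16, 17] do.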
2.3. Nonholonomic Robot
The nonholonomic robot with two independently driven wheels is shown in Figure 2. C is the mass center of the robot, located at the midpoint between the driving wheels. θ is the orientation angle of the robot, and ν and ω are its linear and angular velocities. The kinematic model of the robot under the nonholonomic constraint of pure rolling and no slipping is
(2) \; \dot{x} = \nu\cos\theta, \qquad \dot{y} = \nu\sin\theta, \qquad \dot{\theta} = \omega.
The nonholonomic robot with two independently driven wheels.
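A minimal forward (Euler) integration of model (2) can be sketched as follows (all names are ours). It illustrates the nonholonomic behavior: the robot can only translate along its heading, never sideways:

```python
import numpy as np

def step(state, v, w, dt):
    """One Euler step of the unicycle model (2): x' = v cos(th), y' = v sin(th), th' = w."""
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def simulate(state, v, w, T, dt=1e-3):
    """Integrate the model for T seconds with constant inputs (v, w)."""
    for _ in range(int(round(T / dt))):
        state = step(state, v, w, dt)
    return state
```

With v = 1 and ω = 0 the robot moves straight along its heading; with constant v and nonzero ω it traces a circle of radius v/ω, which is the behavior the three-step controller exploits in its pure-rotation and pure-translation phases.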
2.4. General Frameworks
Figures 3(a) and 3(b) show the initial configuration and the first step, whose main task is to bring the current configuration into the same orientation as the desired configuration. This step keeps the feature points in the field of view. Figure 3(c) shows the second step, whose main goal is to zero the epipoles with input-output linearizing feedback; with this step, the robot is aligned with the desired configuration. Figure 3(d) shows the last step, which reaches the desired configuration by comparing the feature points in the two views.
The scheme of the three-step epipolar-based visual servoing system. (a) The initial configuration of the mobile robot; (b) the first step, in which the mobile robot moves from ro to ri′; (c) the second step, in which the mobile robot moves from ri′ to ri′′; (d) the third step, in which the mobile robot moves from ri′′ to rd.
Initial configuration
First step
Second step
Third step
The three-step control scheme will be described in detail in Section 3.
3. Three-Step Control Scheme
In this section, we design the three-step control scheme that drives the mobile robot to the desired configuration and detail how each step is realized.
3.1. Match the Orientation
We first derive the epipole kinematics, that is, the expression of the epipoles as a function of the robot configuration r.
Figure 4 shows the geometric relation between two views of the same scene. The current configuration is r = [x, y, θ]^T, and the desired configuration is rd = [0, 0, π/2]^T. f is the focal distance, φ is the angle between the x-axis and the line joining the desired and current camera centers, θ is the angle between f and the x-axis, and d is the distance between the desired and current camera centers.
The geometry relation between current configuration and desired configuration.
First of all, we rotate the robot so that the current and desired configurations have the same orientation. With this first step, the trajectory planned by the strategy in this paper is shorter than that of the existing strategy, and the singularity problem can be effectively avoided. The singularity occurs when f ⊥ d (shown in red in Figure 4); but f is never perpendicular to d if, after the first step, the current and desired configurations share the same orientation. From Figure 4, we have
(3) \; e_{du} = f\frac{x}{y} = f\tan\left(\frac{\pi}{2}-\varphi\right), \qquad (4) \; e_{au} = f\tan(\theta-\varphi) = f\,\frac{\tan\theta-\tan\varphi}{1+\tan\theta\tan\varphi},
where e_{du} is the epipole of the desired configuration and e_{au} is the epipole of the current configuration. Subtracting (4) from (3) yields
(5) \; e_{du}-e_{au} = f\left(\tan\left(\frac{\pi}{2}-\varphi\right)-\tan(\theta-\varphi)\right).
The time derivative of edu-eau is
(6) \; \dot{e}_{du}-\dot{e}_{au} = fv_1\,\frac{y\cos\theta-x\sin\theta}{y^{2}} - \frac{f(\dot{\theta}-\dot{\varphi})}{\cos^{2}(\theta-\varphi)},
where v1 is the linear velocity. We set the control law as
(7) \; v_1 = 0, \qquad v_2 = g_x\,(e_{du}-e_{au}),
where g_x is a positive coefficient. Equation (6) then simplifies to
(8) \; \dot{e}_{du}-\dot{e}_{au} = -\frac{v_2\,f}{\cos^{2}(\varphi-\theta)},
where v_2 is the angular velocity. Using (7), we can rewrite (8) as
(9) \; \dot{e}_{du}-\dot{e}_{au} = -\frac{g_x\,(e_{du}-e_{au})\,f}{\cos^{2}(\varphi-\theta)}.
Equation (9) can be shortened to
(10) \; \dot{e}_{du}-\dot{e}_{au} = -g\,(e_{du}-e_{au}),
where
(11) \; g = \frac{g_x\,f}{\cos^{2}(\varphi-\theta)} > 0.
From (10), e_{du}-e_{au} converges to zero exponentially. So we take (7) as the control law, and the mobile robot turns to the same orientation as the desired configuration. If the feature points of the desired configuration are in the field of view, then the next two steps keep the feature points in the field of view. Note the following points.
The above control law (7) is image-based, since it uses only the measured epipoles; no information on the robot configuration or any other odometric data is used.
The form of (10) guarantees that e_{du}-e_{au} converges to zero exponentially.
Since e_{du}-e_{au} converges to zero, the robot converges to the same orientation as the desired configuration.
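The first-step behavior can be reproduced numerically. In the sketch below (the variable names and values are ours), the robot stays in place (v1 = 0) and only rotates under control law (7); the epipoles are evaluated from the geometry of Figure 4, that is, e_du = f·x/y and e_au = f·tan(θ − φ), and θ converges to the desired orientation π/2:

```python
import numpy as np

f, gx, dt = 0.04, 30.0, 1e-3               # focal length, gain (as in Section 4), time step
x, y, theta = 1.0, 1.0, np.pi / 4           # initial configuration r_o1 of Section 4
phi = np.arctan2(y, x)                      # fixed, since the robot does not translate
e_du = f * x / y                            # epipole of the desired view, eq. (3)

for _ in range(5000):                       # 5 s of simulated time
    e_au = f * np.tan(theta - phi)          # epipole of the current view, eq. (4)
    v2 = gx * (e_du - e_au)                 # control law (7), with v1 = 0
    theta += v2 * dt                        # Euler step of theta' = v2

# theta should now be close to the desired orientation pi/2
```

The convergence rate g = g_x f / cos²(φ − θ) of (11) is about 2.4 s⁻¹ here, so 5 s is ample.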
It remains to be shown how to prevent the proposed control law (7) from becoming singular. As shown in the control law, the linear velocity v_1 is always defined, and the angular velocity v_2 has a potential singularity when y = 0 or (φ−θ) = π/2. The following remarks are in order at this point.
Remark 1.
The control law runs only under the condition that the epipolar geometry between the desired and current camera views is defined, corresponding to the case y ≠ 0 and (φ−θ) ≠ π/2. If y = 0 (so that (3), (6), and control law (7) are undefined) or (φ−θ) = π/2 (so that (4), (6), and control law (7) are undefined), the homography matrix H can be decomposed to design a replacement rotational controller that diminishes the orientation error. Using the stacking of the fundamental matrix [18], we observe the norm of the corresponding 9-dimensional vector, estimate the homography matrix H, which is still defined in this situation, and decompose it to obtain the rotation matrix R between the desired and current camera views. Finally, a simple proportional rotational controller diminishes the orientation error.
Remark 2.
Here we explain how to choose the value of the controller parameter.
Choice of g_x. g_x should be positive; its value determines the convergence rate of v_2.
3.2. Zero the Epipoles
For feedback-linearization purposes, we need the kinematics of the epipoles with respect to the velocities v_1 and v_2. From (3), the time derivative of e_{du} is obtained as follows:
(12) \; \dot{e}_{du} = f\,\frac{v_1 y\cos\theta - v_1 x\sin\theta}{y^{2}}.
As shown in Figure 4,
(13) \; y = d\sin\varphi, \qquad x = d\cos\varphi.
So, e˙du and e˙au can be expressed as
(14) \; \dot{e}_{du} = \frac{v_1 f\sin(\varphi-\theta)}{d\sin^{2}\varphi}, \qquad (15) \; \dot{e}_{au} = \frac{f}{\cos^{2}(\theta-\varphi)}\,(\dot{\theta}-\dot{\varphi}).
From Figure 4, we can obtain
(16) \; \varphi = \arctan\frac{y}{x}, \qquad \dot{\varphi} = \frac{\dot{y}x-\dot{x}y}{d^{2}}.
Substituting the two equations above into (15), the time derivative of e_{au} can be written as
(17) \; \dot{e}_{au} = \frac{v_1 f\sin(\varphi-\theta)}{d\cos^{2}(\varphi-\theta)} + \frac{v_2 f}{\cos^{2}(\varphi-\theta)}.
From simple geometry, we have
(18) \; \cos(\varphi-\theta) = \operatorname{sign}(e_{au}e_{du})\,\frac{f}{\sqrt{e_{au}^{2}+f^{2}}}, \qquad \sin(\varphi-\theta) = -\operatorname{sign}(e_{au}e_{du})\,\frac{e_{au}}{\sqrt{e_{au}^{2}+f^{2}}},
where the sign functions guarantee that any position of the current configuration is taken into account. Substituting these into (14) and (17), the relationship between the inputs and the output time derivatives is expressed as
(19) \; \dot{e}_{au} = -v_1\operatorname{sign}(e_{au}e_{du})\,\frac{e_{au}\sqrt{e_{au}^{2}+f^{2}}}{df} + v_2\,\frac{e_{au}^{2}+f^{2}}{f}, \qquad (20) \; \dot{e}_{du} = -v_1\operatorname{sign}(e_{au}e_{du})\,\frac{e_{au}\,(f^{2}+e_{du}^{2})}{df\sqrt{e_{au}^{2}+f^{2}}},
where d is the distance between the two camera centers, as defined above. In summary, the matrix form is
(21) \; \begin{bmatrix}\dot{e}_{au}\\ \dot{e}_{du}\end{bmatrix} = H\begin{bmatrix}v_1\\ v_2\end{bmatrix},
with
(22) \; H = \begin{bmatrix} -\operatorname{sign}(e_{au}e_{du})\,\dfrac{e_{au}\sqrt{e_{au}^{2}+f^{2}}}{df} & \dfrac{e_{au}^{2}+f^{2}}{f} \\ -\operatorname{sign}(e_{au}e_{du})\,\dfrac{e_{au}\,(e_{du}^{2}+f^{2})}{df\sqrt{e_{au}^{2}+f^{2}}} & 0 \end{bmatrix}.
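The interaction matrix (22) can be checked numerically: integrate the robot model (2) over a tiny time step, recompute the epipoles from the geometry of Figure 4 (desired pose at the origin with heading π/2), and compare the finite-difference rates with H[v1, v2]^T. A sketch under these conventions (all names are ours):

```python
import numpy as np

f = 0.04  # focal length, as in Section 4

def view_epipoles(x, y, th):
    """e_du and e_au for configuration (x, y, th), eqs. (3)-(4)."""
    phi = np.arctan2(y, x)
    return f * x / y, f * np.tan(th - phi)

def H(e_au, e_du, d):
    """Interaction matrix (22) mapping (v1, v2) to (e_au', e_du')."""
    s = np.sign(e_au * e_du)
    r = np.sqrt(e_au**2 + f**2)
    return np.array([[-s * e_au * r / (d * f), (e_au**2 + f**2) / f],
                     [-s * e_au * (e_du**2 + f**2) / (d * f * r), 0.0]])

# finite-difference check at a sample configuration and input
x, y, th = 1.0, 2.0, np.pi / 2
v1, v2, eps = 0.3, 0.2, 1e-6
e_du0, e_au0 = view_epipoles(x, y, th)
e_du1, e_au1 = view_epipoles(x + v1 * np.cos(th) * eps,
                             y + v1 * np.sin(th) * eps,
                             th + v2 * eps)
rate_fd = np.array([e_au1 - e_au0, e_du1 - e_du0]) / eps
rate_H = H(e_au0, e_du0, np.hypot(x, y)) @ np.array([v1, v2])
```

The two rates agree to first order, confirming that (22) is consistent with (3), (4), and the robot kinematics (2) at this configuration.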
Here we face the difficulty that d is unknown in image-based control, so H cannot be inverted exactly. We set the control law as
(23) \; \begin{bmatrix}v_1\\ v_2\end{bmatrix} = \hat{H}^{-1}\begin{bmatrix}q_1\\ q_2\end{bmatrix},
with
(24) \; \hat{H}^{-1} = \begin{bmatrix} 0 & -\operatorname{sign}(e_{au}e_{du})\,\dfrac{\hat{d}f\sqrt{e_{au}^{2}+f^{2}}}{e_{au}\,(e_{du}^{2}+f^{2})} \\ \dfrac{f}{e_{au}^{2}+f^{2}} & -\dfrac{f}{e_{du}^{2}+f^{2}} \end{bmatrix},
where \hat{H} denotes H with d replaced by its estimate \hat{d}.
Taking (23) into (21), we obtain
(25) \; \begin{bmatrix}\dot{e}_{au}\\ \dot{e}_{du}\end{bmatrix} = H\hat{H}^{-1}\begin{bmatrix}q_1\\ q_2\end{bmatrix} = \begin{bmatrix} 1 & \left(\dfrac{\hat{d}}{d}-1\right)\dfrac{e_{au}^{2}+f^{2}}{e_{du}^{2}+f^{2}} \\ 0 & \dfrac{\hat{d}}{d} \end{bmatrix}\begin{bmatrix}q_1\\ q_2\end{bmatrix},
in which we set
(26) \; \begin{bmatrix}q_1\\ q_2\end{bmatrix} = \begin{bmatrix} -g_1 e_{au} \\ -g_2 e_{du}^{\beta/\gamma}\end{bmatrix},
where g_1 > 0, g_2 > 0, and β and γ are positive odd integers with β < γ. The distance estimate \hat{d} is updated according to
(27) \; \dot{\hat{d}} = g_2\,\frac{f^{2}}{\hat{d}}\,e_{du}^{\beta/\gamma}.
Taking (26) into (25), we obtain
(28) \; \dot{e}_{au} = -g_1 e_{au} - \left(\frac{\hat{d}}{d}-1\right)\frac{e_{au}^{2}+f^{2}}{e_{du}^{2}+f^{2}}\,g_2 e_{du}^{\beta/\gamma}, \qquad (29) \; \dot{e}_{du} = -\frac{\hat{d}}{d}\,g_2 e_{du}^{\beta/\gamma}.
The control law can be written as follows:
(30) \; v_1 = \operatorname{sign}(e_{au}e_{du})\,\frac{\hat{d}f\sqrt{e_{au}^{2}+f^{2}}}{e_{au}\,(e_{du}^{2}+f^{2})}\,g_2 e_{du}^{\beta/\gamma}, \qquad v_2 = -\frac{f}{e_{au}^{2}+f^{2}}\,g_1 e_{au} + \frac{f}{e_{du}^{2}+f^{2}}\,g_2 e_{du}^{\beta/\gamma}.
Then, eau and edu will converge to zero in finite time.
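The closed-loop error dynamics (28)-(29) can be simulated directly. In the sketch below (names and initial values are ours; the true distance d is held constant as a simplification, and (27) is implemented as we have reconstructed it above), the fractional power is evaluated as sign(e)|e|^{β/γ}, which equals e^{β/γ} for odd β, γ; e_du reaches zero first, after which e_au decays exponentially:

```python
import numpy as np

f, g1, g2, bg = 0.04, 0.6, 0.4, 4.0 / 9.0   # gains and beta/gamma as in Section 4
d, d_hat = 2.24, 3.0                         # true distance (held constant) and its estimate
e_au, e_du = 0.02, 0.02                      # epipoles at the start of the second step
dt = 1e-3

def frac(e):
    """e^(beta/gamma) for odd beta and gamma, valid for negative e as well."""
    return np.sign(e) * abs(e) ** bg

for _ in range(20000):                       # 20 s of simulated time
    de_du = -(d_hat / d) * g2 * frac(e_du)                        # eq. (29)
    de_au = (-g1 * e_au
             - (d_hat / d - 1) * (e_au**2 + f**2) / (e_du**2 + f**2)
             * g2 * frac(e_du))                                   # eq. (28)
    d_hat += g2 * f**2 / d_hat * frac(e_du) * dt                  # eq. (27), as reconstructed
    e_au += de_au * dt
    e_du += de_du * dt
```

Because of the fractional power, e_du reaches (a numerical neighborhood of) zero in finite time, the terminal-sliding-mode property discussed in Remark 3; e_au then obeys the pure exponential decay of rate g1.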
The camera's angle of view is assumed to be 120°. For example, in Figure 5, the maximum differential angle of view occurs between configurations c1 and cend; in this circumstance, the feature points can be placed in the shaded area. From Figure 5, it can be seen that as the robot moves through the second step, the feature points stay in the FOV.
Find overlapping region of initial FOV and end FOV in the second step.
Remark 3.
It remains to be shown how to adjust the control if the proposed control law (30) becomes singular. As shown in (30), the angular velocity v_2 is always defined, and the linear velocity v_1 has a potential singularity when e_{au} equals zero.
If e_{au} equals zero at the beginning of the second step, we can perform a preliminary maneuver to displace the current epipole to a nonzero value. Using a small constant value \tilde{e}_{au}, the preliminary control is
(31) \; v_1 = 0, \qquad v_2 = -\frac{f}{e_{au}^{2}+f^{2}}\,g_1\,(e_{au}-\tilde{e}_{au}).
With this choice, from (19) we get \dot{e}_{au} = -g_1(e_{au}-\tilde{e}_{au}), so e_{au} converges exponentially to \tilde{e}_{au}, after which we can apply the proposed control law.
According to (29), since e_{du}^{\beta/\gamma} is bounded, e_{du} converges to zero in finite time t_x. From t_x on, (28) becomes \dot{e}_{au} = -g_1 e_{au}, so e_{au} converges to zero with exponential rate g_1, v_1 equals zero, and v_2 = -(f/(e_{au}^{2}+f^{2}))\,g_1 e_{au}. The robot thus performs a pure rotation in this phase. Hence, convergence of the epipoles to zero is obtained at a finite distance.
As already noticed, after the transient (t > t_x), e_{du} equals zero and v_1 equals zero, which prevents the potential singularity. e_{au} can be bounded by bounding g_2: for sufficiently small g_2, the current epipole e_{au} cannot cross zero during the transient. Since the desired epipole e_{du} reaches zero before e_{au}, the proposed control law is never singular. This kind of control law is also known as a terminal sliding mode.
Remark 4.
We now present how to choose the control parameters, that is, β/γ, g1, g2, and initial estimate of the robot distance d^.
Choice of β/γ. The distance between the current and desired robot positions may increase during the second step. According to (30), when e_{au} and e_{du} increase, we want to decrease the values of v_1 and v_2, so β/γ should be chosen less than one and close to zero.
Choice of g_1, g_2. Remark 3 requires g_2 to be sufficiently small to guarantee that the proposed control law (30) is never singular. We can choose g_1 based on the characteristic that e_{au} and e_{du} never change sign. We take the following into account.
If the initial epipole values e_{au} and e_{du} have different signs, the perturbation term in (28) pushes e_{au} toward the singularity, so g_2 should be sufficiently small. There is no special strategy for g_1.
If the initial epipole values e_{au} and e_{du} have the same sign, the perturbation term in (28) pulls e_{au} away from the singularity, and any value of g_2 is acceptable; g_2 then only determines the convergence rate of e_{au} after the transient t_x.
In both the simulations and the experiments, this choice of g_1, g_2 is sufficient to achieve singularity avoidance.
Choice of the Initial Estimate of the Robot Distance \hat{d}. According to Remark 3, it is necessary to initialize \hat{d} at a value \hat{d}_0 > d_0, where \hat{d}_0 is the initial estimate of d and d_0 is the initial value of d. We can use an upper bound derived from knowledge of the environment in which the robot moves.
3.3. Match the Feature Points
At the end of the second step, both e_{au} and e_{du} are zero and the intermediate configuration has the same orientation as the desired configuration. Now we face the problem of how to use the feature points to drive the robot from the intermediate configuration to the desired configuration. As in the previous two steps, the third-step control law works in the camera image plane. The basic idea is to make each feature point in the current image plane match the corresponding feature point in the desired image plane. In principle, only one feature point is needed to implement this idea; a number of feature points can also be used in case of noisy images. We set D = ‖p_a‖² − ‖p_d‖², the difference between the squared norms of the current feature p_a and the desired feature p_d.
If a purely proportional control law were used here, the system would take a long time to reach the desired configuration. So we let the control law be a proportional-plus-integral controller:
(32) \; v_1 = -g_t D - g_i\int_{0}^{t} D\,dt, \qquad v_2 = 0,
where g_t > 0 and g_i > 0. The robot then converges from the intermediate configuration r_i to the desired configuration r_d exponentially.
Remark 5.
We provide how to choose the values of the controller parameters.
Choice of g_t, g_i. g_t and g_i should be positive. The value of g_t determines the convergence rate of v_1, and the value of g_i determines the control precision of the system.
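The behavior of the PI law (32) can be seen in a one-dimensional toy model (entirely our construction, not from the paper): a single feature at lateral offset X and depth Zd from the desired camera, with the robot a distance L behind the desired pose along the common heading, so that p_a = fX/(Zd + L) and p_d = fX/Zd. With D = p_a² − p_d², control law (32) drives L to zero:

```python
import numpy as np

f, X, Zd = 0.04, 0.5, 2.0        # focal length, feature offset and depth (toy values)
gt, gi = 1000.0, 10.0            # gains as in the simulations of Section 4
L, I, dt = 1.0, 0.0, 0.01        # offset behind the goal, integral state, time step

for _ in range(60000):           # 600 s of simulated time
    pa = f * X / (Zd + L)        # current image coordinate of the feature
    pd = f * X / Zd              # desired image coordinate of the feature
    D = pa**2 - pd**2            # feedback signal of the third step
    I += D * dt                  # accumulate the integral term
    v1 = -gt * D - gi * I        # control law (32); v2 = 0 on the goal axis
    L -= v1 * dt                 # moving forward (v1 > 0) reduces the offset
```

When the robot is behind the goal (L > 0), the feature appears smaller than desired, D < 0, and v1 > 0 drives the robot forward; the integral term removes the residual error at the cost of a small, well-damped overshoot.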
4. Simulation Results
In this section, simulation results are provided to validate the proposed approach. The scene consists of ten feature points placed randomly on a plane. The simulations have been performed using MATLAB and the Epipolar Geometry Toolbox [19]. Ten pairs of corresponding feature points are used in the desired and current images, in all three steps. In the simulation we use f = 0.04 m. The initial and desired configurations are chosen as
(33) \; r_{o1} = \left(1,1,\frac{\pi}{4}\right)^{T}, \quad r_{o2} = \left(1,1,\frac{\pi}{2}\right)^{T}, \quad r_{o3} = \left(1,1,\frac{3\pi}{4}\right)^{T}, \quad r_{d} = \left(0,0,\frac{\pi}{2}\right)^{T}.
We use three initial configurations standing for different situations. In the first step, the parameter is gx=30. In the second step, the parameters are g1=0.6, g2=0.4, and β/γ = 4/9. And in the third step, the parameters are gt=1000, gi=10. We set the initial estimate of the robot distance d^=3 m.
The trajectory of the robot is shown in Figures 6, 8, and 10.
The first step: the robot trajectory. (a) The initial configuration is ro1 = (1,1,π/4)^T; (b) the initial configuration is ro3 = (1,1,3π/4)^T. DC is the desired configuration, FIC is the intermediate configuration of the first step, and IC is the initial configuration. Both figures show that the orientation of the robot turns toward π/2 (the orientation of the desired configuration). In this step, the robot moves from IC to FIC.
Initial configuration ro1=(1,1,π/4)T
Initial configuration ro3=(1,1,3π/4)T
In the first step, the robot takes ro1 as the initial configuration. Figures 6(a) and 7(a) show the robot trajectory and the angular velocity (the linear velocity is zero). In the beginning the robot is at orientation π/4, so the difference between edu and eau is large, and so is the angular velocity. In this case edu is larger than eau, so the angular velocity is positive: the robot rotates counterclockwise, gets closer to the orientation π/2, and the angular velocity decreases. Exponential convergence is obtained, as shown in Figure 7(a).
The first step. (a) The initial configuration is ro1 = (1,1,π/4)^T; (b) the initial configuration is ro3 = (1,1,3π/4)^T. These figures show the angular velocity v2 of the robot in the first step.
Initial configuration ro1=(1,1,π/4)T
Initial configuration ro3=(1,1,3π/4)T
The second step: the robot trajectory. DC is the desired configuration, SIC is the intermediate configuration of the second step, and FIC is the intermediate configuration of the first step. In this step, the robot is moving from FIC to SIC.
The robot then takes ro3 as the initial configuration; the result is shown in Figures 6(b) and 7(b). This case is the opposite of the above: at first the robot is at orientation 3π/4, edu is smaller than eau, so the angular velocity is negative and the robot rotates clockwise. The convergence is again exponential, as shown in Figure 7(b).
For the initial configuration ro2, the robot is already at orientation π/2 in the beginning, so edu equals eau and this step finishes immediately.
In the second step, thanks to the effect of the first step, the robot is at orientation π/2 at the start regardless of the initial configuration, and the three initial configurations are now equivalent, so only one of them needs to be discussed. In Figure 9(a), as expected, edu and eau decline to zero; edu is zeroed at time t = 1 s and eau at t = 7 s. The control inputs (v1 and v2) are shown in Figure 9(b), and Figure 8 shows the robot trajectory.
The second step. (a) The epipole of the current configuration eau and the epipole of the desired configuration edu in the second step; eau and edu reach zero in finite time. (b) The linear velocity v1 and the angular velocity v2 of the robot in the second step.
The epipole of current configuration eau and desired configuration edu
The linear velocity of the robot v1 and angular velocity of the robot v2
The third step: the robot trajectory. DC is the desired configuration, DC′ is the final configuration, and SIC is the intermediate configuration of the second step. In this step, the robot is moving from SIC to DC′.
In the third step, the robot trajectory is shown in Figure 10, and Figure 11 shows the distance between the current configuration and the desired configuration.
The third step. The distance between the current configuration and the desired configuration in the third step.
The desired and final configurations are summarized in Table 1. It is clear that the final configurations are very close to the desired ones when the three-step strategy is used. By contrast, with the strategy of [6], only the first group comes close to the desired configuration while the others fail to reach it, and the processing time of the three-step strategy is much shorter. From Table 2, if θ is less than 75°, the path planned by the strategy of [6] is erroneous, whereas the path planned by the three-step strategy keeps the same short length, 4.1498 m. If θ is more than 90°, the paths planned by the strategy of [6] and by the three-step strategy are almost the same. It can be seen that the three-step strategy is more robust and efficient in this case than the strategy proposed in [6]. Figure 12 shows the trajectory of the second group in Table 1, and Figure 13 shows the distance between the current and desired configurations for the second group in Table 1.
Table 1. Simulation results of final configuration.

Initial configuration | Desired configuration | Final configuration using the strategy of [6] | Final configuration using the three-step strategy
(2, 4, 0.7π) | (0, 0, 0.5π) | (0.0316, −0.1384, 0.5013π), t = 26.9 s | (0.1114, −0.0227, 0.4999π), t = 11.9 s
(2, 1, 0.3π) | (0, 0, 0.5π) | (−0.0360, 20.2318, 0.5021π), t = 30 s | (0.0394, 0.0148, 0.5013π), t = 12.2 s
(1, 1, 0.3π) | (0, 0, 0.5π) | (−0.0358, 139.0497, 0.5004π), t = 30 s | (0.0392, 0.0411, 0.5013π), t = 9.7 s

The desired configuration coordinate is set by ourselves, so we set (0, 0, 0.5π) without loss of generality. The simulation time is 30 s; t is the whole visual servo processing time.
Table 2. Comparison of distance using different strategies.

Angle (θ)¹ | Distance using strategy of [6] (d)² | Distance using three-step strategy (d)²
120° | 4.0272 m | 4.1498 m
115° | 4.0250 m | 4.1498 m
110° | 4.0188 m | 4.1498 m
105° | 4.0213 m | 4.1498 m
100° | 4.0271 m | 4.1498 m
95° | 4.0224 m | 4.1498 m
90° | 4.1498 m | 4.1498 m
85° | 5.4232 m | 4.1498 m
80° | 7.2354 m | 4.1498 m
75° | 9.1564 m | 4.1498 m
70° | 18.398 m | 4.1498 m
65° | 28.657 m | 4.1498 m
60° | 71.493 m | 4.1498 m
55° | 184.77 m | 4.1498 m

¹ θ is the angle between f and the x-axis.
² d is the distance between the second intermediate configuration and the desired configuration, as in Figure 10 (SIC → DC).
The trajectory of the second group in Table 1.
The simulation time is 13 s. The distance between the current and desired configurations for the second group in Table 1.
The movement of the feature points in the first step is shown in Figure 14. The feature points of the current configuration move toward the feature points of the desired configuration, which is effective in keeping them within the camera's FOV.
The first step: movement of feature points.
The simulation results are shown in Figure 15. Feature points of current configuration are in close proximity to the desired ones.
The first step: final result of feature points.
5. Experiment Results

5.1. Testbed
As shown in Figure 16, the testbed consists of the following components: a differential-drive mobile robot (with a Samsung ARM S3C2410 inside), a Kinect camera that captures 30 frames per second of eight-bit RGB images at 640 × 480 resolution, a first PC with an Intel Core i5 (running the MS Windows 7 x64 operating system), and a second PC with an Intel Core i5 (running Ubuntu 10.04, a Linux-kernel-based operating system). The internal mobile robot controller (Samsung ARM S3C2410) hosts the control algorithm, which was written in Linux C/C++. The first PC is used for image processing; the image processing algorithm is written in MS Visual Studio MFC (C++ based, with the aid of the OpenCV 2.4.1 library). The image processing PC and the internal mobile robot controller communicate over a serial link. The second PC is a remote PC used to log in to the internal mobile robot controller via Telnet; it can log the run data of the mobile robot and can also be used to debug it. A chessboard rigidly attached to a rigid structure is used as the target, and the OpenCV FindChessboard algorithm determines the coordinates of each point of the chessboard. The mobile robot is controlled by a torque input. The torque controller requires the actual linear and angular velocities of the mobile robot, so the robot is equipped with steering-motor encoders; the encoder data is processed by a DSP controller, which communicates with the internal mobile robot controller over a CAN bus. Using the OpenCV camera calibration algorithm, the intrinsic calibration parameters of the Kinect are determined: the image center coordinates are u0 = 319.5 pixels and v0 = 239.5 pixels, and the focal lengths are fku = 620.7 pixels and fkv = 620.7 pixels. The intrinsic matrix is
(34) \; K = \begin{bmatrix} 620.7 & 0 & 319.5 \\ 0 & 620.7 & 239.5 \\ 0 & 0 & 1 \end{bmatrix}.
Testbed. The mobile robot used for the experiments, equipped with a Microsoft Kinect.
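With K known, pixel coordinates can be converted to normalized (metric) image coordinates, which is how pixel-level epipoles relate to the metric quantities used in the control laws. A small sketch of the round trip through the calibration matrix (34) (the helper names are ours):

```python
import numpy as np

# intrinsic matrix (34) of the Kinect used in the experiments
K = np.array([[620.7, 0.0, 319.5],
              [0.0, 620.7, 239.5],
              [0.0, 0.0, 1.0]])
K_inv = np.linalg.inv(K)

def pixel_to_normalized(u, v):
    """Map a pixel (u, v) to normalized camera coordinates via K^-1."""
    x, y, w = K_inv @ np.array([u, v, 1.0])
    return x / w, y / w

def normalized_to_pixel(x, y):
    """Map normalized camera coordinates back to pixels via K."""
    u, v, w = K @ np.array([x, y, 1.0])
    return u / w, v / w
```

As a sanity check, the principal point (319.5, 239.5) maps to (0, 0) in normalized coordinates, and the two conversions are inverses of each other.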
5.2. Results
For the visual servoing task, a set of 9 corresponding feature points is chosen in the two images and tracked in real time by means of the FindChessboard algorithm. The robot moves under the three-step visual servoing algorithm. In the first step, the parameter is gx = 30. In the second step, the parameters are g1 = 0.6, g2 = 0.4, and β/γ = 4/9. In the third step, the parameters are gt = 1000 and gi = 10. We set the initial estimate of the robot distance d̂ = 7 m.
First, control law (7) takes action, as shown in Figure 17: both epipoles eau and edu are driven to the same value. Due to the actuator deadzone, we only require eau to be close to edu.
Experiment result. Epipole behavior.
Then the second step is carried out under the action of control law (30): both epipoles eau and edu are driven to zero. Due to the actuator deadzone, in Figure 17 the epipoles eau and edu are almost, but not exactly, zero.
Finally, the third step is executed under control law (32), and D (the difference between the current and desired feature norms) decreases exponentially. Figure 18 shows that D decreases nearly to zero.
Experimental results. Exponential decrease of the difference D between the actual and desired features.
Figures 20, 21, and 22 collect nine snapshots of the robot motion during the first, second, and third steps, respectively. On the right of each snapshot, the current (green) and desired (red) feature points are shown superimposed on the current image. Figure 19 shows the desired feature points (right) and the desired configuration of the robot (left). During the first step (Figure 20), the epipoles are driven to the same value, and the orientations of the current and desired configurations become the same. As the second step (Figure 21) is carried out, the epipoles are driven to the principal point. The third step (Figure 22) is then executed, and the current feature points converge to their targets. The overall servoing performance is satisfying, resulting in a positioning error of about 3 cm with respect to the target position.
Experimental results. Snapshots of the desired configuration of the robot (a) and of the feature points in the desired configuration (b).
Experimental results. Snapshots of the robot motion (a) and of the feature motion superimposed on the current image (b) during the first step.
Experimental results. Snapshots of the robot motion (a) and of the feature motion superimposed on the current image (b) during the second step.
Experimental results. Snapshots of the robot motion (a) and of the feature motion superimposed on the current image (b) during the third step.
6. Conclusions
In this paper, a new visual servoing strategy, named three-step epipolar-based visual servoing, is proposed. First, using the difference of the epipoles as feedback, the robot rotates so that the current and desired configurations have the same orientation. Second, using a linear input-output feedback, the epipoles are zeroed so as to align the robot with the goal. Third, using the feature points, the robot reaches the desired configuration.
The main advantages of the proposed control scheme are as follows: (1) a first step is introduced that rotates the robot toward the orientation of the desired configuration; (2) integral control is added to accelerate the convergence in the third step. The strategy keeps the feature points within the field of view and, as shown by the simulation and experimental results, plans a shorter path than the strategy of [6].
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (nos. 61379111, 61071096, 61073103, 61003233, and 61202342), the Specialized Research Fund for the Doctoral Program of Higher Education (nos. 20100162110012 and 20110162110042), the China Postdoctoral Science Foundation, the Postdoctoral Science Planning Project of Hunan Province, and the Postdoctoral Science Foundation of Central South University (120951).
References
[1] B. Espiau, F. Chaumette, and P. Rives, "A new approach to visual servoing in robotics."
[2] F. Janabi-Sharifi and M. Marey, "A Kalman-filter-based method for pose estimation in visual servoing."
[3] D.-H. Park, J.-H. Kwon, and I.-J. Ha, "Novel position-based visual servoing approach to robust global stability under field-of-view constraint."
[4] V. Lippiello, B. Siciliano, and L. Villani, "Position-based visual servoing in industrial multirobot cells using a hybrid camera configuration."
[5] W. Gang, M. Zhengda, and L. Jusan, "A method of error compensation in image based visual servo," in Proceedings of the International Conference on Electrical and Control Engineering (ICECE '10), Wuhan, China, June 2010, pp. 83–86, doi:10.1109/iCECE.2010.292.
[6] G. L. Mariottini, G. Oriolo, and D. Prattichizzo, "Image-based visual servoing for nonholonomic mobile robots using epipolar geometry."
[7] Y. Do, G. Kim, and J. Kim, "Omnidirectional vision system developed for a home service robot," in Proceedings of the 14th International Conference on Mechatronics and Machine Vision in Practice (M2VIP '07), Xiamen, China, December 2007, pp. 217–222, doi:10.1109/MMVIP.2007.4430746.
[8] Y. Fang, W. E. Dixon, D. M. Dawson, and P. Chawda, "Homography-based visual servo regulation of mobile robots."
[9] G. Hu, N. Gans, and W. Dixon, "Quaternion-based visual servo control in the presence of camera calibration error."
[10] G. Hu, N. Gans, N. Fitz-Coy, and W. Dixon, "Adaptive homography-based visual servo tracking control via a quaternion formulation."
[11] G. Hu, W. MacKunis, N. Gans, W. E. Dixon, J. Chen, A. Behal, and D. Dawson, "Homography-based visual servo control with imperfect camera calibration."
[12] Y. Fang, X. Liu, and X. Zhang, "Adaptive active visual servoing of nonholonomic mobile robots."
[13] N. R. Gans, G. Hu, K. Nagarajan, and W. E. Dixon, "Keeping multiple moving targets in the field of view of a mobile camera."
[14] H. Wang, Y. Liu, W. Chen, and Z. Wang, "A new approach to dynamic eye-in-hand visual tracking using nonlinear observers."
[15] R. I. Hartley, "In defense of the eight-point algorithm."
[16] M. Yang, "Estimating the fundamental matrix using L∞ minimization algorithm," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), Chongqing, China, June 2008, pp. 9241–9246, doi:10.1109/WCICA.2008.4594394.
[17] L. Rui and W. Feng, "An algorithm for estimating fundamental matrix based on removing the exceptional points," in Proceedings of the International Conference on Computational Intelligence and Natural Computing (CINC '09), June 2009, pp. 88–90, doi:10.1109/CINC.2009.66.
[18] E. Malis and F. Chaumette, "2 1/2 D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement."
[19] G. L. Mariottini and D. Prattichizzo, "EGT for multiple view geometry and visual servoing: robotics and vision with pinhole and panoramic cameras."