Asymptotic Vision-Based Tracking Control of the Quadrotor Aerial Vehicle

This paper proposes an image-based visual servo (IBVS) controller for the 3D translational motion of the quadrotor unmanned aerial vehicle (UAV). The main purpose of this paper is to provide asymptotic stability for vision-based tracking control of the quadrotor in the presence of uncertainty in the dynamic model of the system. A further aim of the paper is to use the optic flow of image features as the velocity information, compensating for the unreliable linear velocity data obtained from accelerometers. For this purpose, the mathematical model of the quadrotor is presented in terms of the optic flow of image features, which makes it possible to design a velocity-free IBVS controller while considering the dynamics of the robot. The image features are defined from a suitable combination of perspective image moments without using a model of the object. This property allows the application of the proposed controller in unknown environments. The controller is robust with respect to the uncertainties in the translational dynamics of the system associated with the target motion, the image depth, and external disturbances. Simulation results and a comparative study are presented which demonstrate the effectiveness of the proposed approach.


Introduction
Potential applications of robotic systems have motivated researchers to design new models and develop robust controllers in order to improve their reliability. Among robotic systems, unmanned aerial vehicles (UAVs) have received great attention in the last decade. The research generally involves designing reliable controllers, developing efficient actuators, and using precise sensors. The sensory system of these vehicles typically includes a global positioning system (GPS) and an inertial measurement unit (IMU). This sensory unit provides attitude and angular velocity information that is reliable enough for a control process. However, it is difficult to obtain linear velocity information suitable for a tracking task. In addition, GPS provides only coarse position information and is not reliable in indoor environments.
In the last decade, the vision sensor has been utilized as a complementary sensor to obtain the local position of robots and also to estimate the linear velocity information. It has received great attention among researchers in the area of UAVs, and different applications have been developed, including estimation of the pose and motion of the vehicle [1, 2], simultaneous localization and mapping [3], automatic positioning and tracking [4], and obstacle avoidance [5]. In some applications, visual data are directly used as the feedback information to control the vehicle.
Controlling UAVs using visual data started in the late 1990s. Two classic approaches are available for vision-based control of robots: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). In the first approach, visual information is used to provide the robot with a 3D understanding of its workspace. The application of this method to aerial robots has been reported in several works, including [1, 6, 7]. Extracting the pose and ego-motion information of a robot using a vision sensor is very difficult, and therefore estimation approaches are utilized. Most of the developed estimation algorithms need an a priori geometric model of the observed target. These approaches generally have a high computational load and need noiseless image data. Furthermore, most pose estimation algorithms are sensitive to initial conditions [8]. In the second approach, a controller is designed based on the dynamics of image features in the image plane. This approach does not require 3D information of the scene and hence is computationally simpler and more robust than PBVS. However, in IBVS, more effort is required in the phase of designing the controller in order to compensate for the nonlinear dynamics of the image features.
For robots with high-speed maneuvers, such as aerial robots, the dynamics of the robot should be considered in the design of the vision-based controller. Designing an IBVS controller for underactuated aerial vehicles is more complicated in this case. A solution is given in [4, 9, 10], where the passivity of spherical image moments is utilized to preserve the triangular structure of the system dynamics. However, spherical features do not provide a satisfactory trajectory for the vehicle along the vertical axis. This issue is discussed and alleviated by rescaled spherical features in [11]. The authors have presented a dynamic IBVS approach in [12] using perspective image features. This approach completely resolves the conditioning problem associated with spherical moments.
Another problem in designing an IBVS controller for aerial vehicles is the lack of precise information on the linear velocity. This information is especially important in tracking applications. To overcome this problem, [13] uses the flow of image features in the image plane. This approach still suffers from an unsatisfactory trajectory of the vehicle. The optic flow of the image encodes linear velocity information, which is useful to compensate for the low quality of velocity measurements obtained from accelerometers. Reference [14] presents an IBVS controller using two nonlinear observers to estimate the translational optic flow and the attitude of the robot. The approach uses the backstepping method, which increases the complexity of the design procedure, and it is only developed for the positioning task. The moving target problem is addressed in [15], where the controller only produces uniformly ultimately bounded (UUB) tracking; that is, the controller has a final tracking error. An asymptotic IBVS tracking controller is presented in [16]. However, the approach assumes a special condition on the motion of the target.
In this paper, the authors present an asymptotic tracking dynamic IBVS controller for the 3D translational motion of a quadrotor helicopter. Perspective image moments are considered as image features, which are reconstructed on a virtual image plane to make it possible to design a dynamic IBVS controller. These features do not require any information from the model of the target, and the target can have an arbitrary bounded motion. The mathematical model of the system is presented in terms of the optic flow of the image features. Therefore, it is possible to design a velocity-free IBVS controller. The robust integral of the sign of the error (RISE) method [17] is utilized in order to achieve a smooth input and avoid the use of a switching controller. The controller is robust with respect to the parametric and nonparametric uncertainties in the dynamics of the system associated with the target motion and the depth information. Another advantage with respect to previous approaches is that the controller does not need the yaw angle of the robot. Stability analysis guarantees the global asymptotic tracking property of the controller. Simulation results illustrate the effectiveness of the controller. This work extends the authors' previous work [18] on optic flow-based IBVS control, in which only UUB stability is guaranteed.
The paper is organized as follows. Mathematical equations of the quadrotor aerial vehicle are presented in Section 2. The utilized image features and their dynamics on the virtual image plane are given in Section 3. In Section 4, the proposed IBVS controller is presented. The validation of the controller by simulations is given in Section 5. Finally, Section 6 provides the conclusion.

Dynamic Equations of the Quadrotor
In this section, first the aerial vehicle of the study is described and then its mathematical model is presented.

System Description.
Figure 1 illustrates the model of a quadrotor helicopter. It consists of four rotors mounted on a rigid cross frame. While the front and rear motors rotate clockwise, the right and left ones rotate counterclockwise. With this rotation arrangement, the reactive torques generated by the rotors cancel out, and the yaw motion is compensated. It should be noted that, in conventional helicopters, this function is performed by a tail rotor [19]. By varying the angular velocities of all the rotors together, one can control the vertical motion. The yaw motion in the direction of the induced reactive torque can be produced by increasing the thrust of one pair of rotors and decreasing that of the other pair such that the total thrust is kept constant. The difference in thrusts of the two pairs of rotors generates the roll (or pitch) motion.
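The thrust and torque generation described above can be sketched as a linear mixing of the squared rotor speeds. The snippet below is a minimal illustration, not from the paper; the thrust coefficient k, drag coefficient d, and arm length l are hypothetical values.

```python
import numpy as np

# Illustrative rotor mixing for a "+"-configured quadrotor: total thrust and
# body torques as linear combinations of the squared rotor speeds.
# k (thrust coefficient), d (drag coefficient), and l (arm length) are
# hypothetical values, not taken from the paper.
k, d, l = 3e-5, 7e-7, 0.23

# Columns: front, right, rear, left rotors; front/rear spin CW, right/left CCW.
M = np.array([
    [k,    k,    k,    k   ],   # T1: total thrust
    [0.0, -l*k,  0.0,  l*k ],   # tau2 (roll): right pair vs. left pair
    [l*k,  0.0, -l*k,  0.0 ],   # tau3 (pitch): front pair vs. rear pair
    [-d,   d,   -d,    d   ],   # tau4 (yaw): reactive torques of CW vs. CCW pairs
])

w_sq = np.full(4, 4.0e5)        # equal squared rotor speeds (hover-like condition)
T1, tau2, tau3, tau4 = M @ w_sq
# With equal rotor speeds the roll, pitch, and yaw torques cancel out,
# matching the rotation arrangement described in the text.
```

Increasing one pair's speeds while decreasing the other's changes a single torque channel while the first row (total thrust) stays constant, which is exactly the decoupling the text describes.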

Dynamic Equations.
The equations of motion of the quadrotor (with a camera attached to its center) are described by two coordinate frames: the inertial frame I and the body-fixed frame B, which is attached to the center of mass of the robot (see Figure 1). The position of the center of the frame B with respect to the inertial frame is denoted by ξ = [x y z]^T, and the rotation matrix R : B → I defines its attitude, which depends on the three Euler angles φ, θ, and ψ, denoting, respectively, the roll, pitch, and yaw.
The kinematics of the quadrotor can be expressed by

ξ̇ = R V,  Ṙ = R sk(Ω),  (1)

where V ∈ R^3 and Ω ∈ R^3 are, respectively, the linear and angular velocities of the quadrotor in the body-fixed frame. Also, the notation sk(⋅) denotes the skew-symmetric matrix, such that sk(Ω)V = Ω × V. The angular velocity Ω is related to the rates of the Euler angles η = [φ θ ψ]^T through the following relation:

η̇ = W(η) Ω,  (2)

where W(η) is the standard Euler-rate transformation matrix. On the other hand, the dynamics of a 6DOF rigid body in the body-fixed frame are given as follows [19]:

m V̇ = −m sk(Ω) V + F,  (3)
J Ω̇ = −sk(Ω) J Ω + τ,  (4)

where m is the mass of the robot, J ∈ R^{3×3} is the symmetric inertia matrix around the center of mass, and F ∈ R^3 and τ ∈ R^3 are, respectively, the force and torque vectors expressed in the frame B. In case the axes of B coincide with the principal axes of inertia of the body, the inertia matrix is diagonal. The actuators of the robot produce a single thrust input, T1, and the full torque actuation τ = [τ2 τ3 τ4]^T. This actuation system shows the underactuated structure of the quadrotor.
The force input F in (3) is as follows:

F = −T1 E3 + m g R^T e3,  (5)

where E3 = e3 = [0 0 1]^T are the unit vectors in the frames B and I, respectively. From (2) and (5), it is clear that the translational dynamics (3) are independent of the yaw angle, and hence the yaw dynamics can be controlled independently.
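As a quick numerical check of the translational part of the model, the sketch below integrates the equivalent inertial-frame dynamics m ξ̈ = m g e3 − T1 R e3 (z-axis pointing down, consistent with the negative altitudes used later in the paper). This is a sketch under these assumptions, not the paper's simulation code.

```python
import numpy as np

# Minimal sketch of the yaw-independent translational dynamics:
# m * xi_ddot = m*g*e3 - T1*R*e3, with the inertial z-axis pointing down.
m, g = 2.0, 9.81                      # mass and gravity from the simulation section
e3 = np.array([0.0, 0.0, 1.0])

def euler_step(xi, xi_dot, R, T1, dt=0.01):
    """One explicit-Euler step of the translational dynamics."""
    xi_ddot = g * e3 - (T1 / m) * (R @ e3)
    return xi + dt * xi_dot, xi_dot + dt * xi_ddot

# Hover check: with level attitude (R = I) and T1 = m*g, the acceleration vanishes.
xi, v = euler_step(np.zeros(3), np.zeros(3), np.eye(3), m * g)
```

Tilting R away from the identity redirects the thrust vector, which is how the roll and pitch angles steer the horizontal motion in the cascade scheme discussed next.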
There are two schemes to design a visual servo controller for the quadrotor. In [9, 12], single-loop controllers are synthesized based on the full dynamics of the system. The most prominent advantage of this approach is that only a single controller is required. However, the complexity of the controller is high, and consequently it is not easy to tune the control gains. The other approach is cascade control, in which the control scheme is divided into an inner loop and an outer loop. In this approach, the dynamics of each loop are much simpler, and therefore it is easier to design the controllers and tune the control gains. This scheme is considered in this paper, where the inner loop tracks the desired orientation using high-rate data measured by gyroscopes. The desired orientation is the output of the outer loop via an IBVS scheme. The stability of the whole system can be guaranteed by time-scale separation and high-gain arguments [20]. This paper only focuses on controlling the translational dynamics, with the assumption that a proper high-gain controller tracks the desired attitude.

Image Dynamics
Spherical and perspective projections are usually used for vision-based control of aerial vehicles. The authors have proposed a method in [12] to apply perspective features in designing a dynamic IBVS controller for a UAV. In this method, the image moments are projected on a virtual image plane which is oriented using only the pitch and roll angles of the vehicle (through the rotation matrix Rv, which specifies rotations, respectively, about the x and y axes and depends on the θ and φ angles). This method provides a satisfactory trajectory for the robot in Cartesian space. In this paper, the same imaging approach is followed, and the image features are selected from a planar target as [21]

q = sqrt(a*/a) [v_gx/f  v_gy/f  1]^T,  (6)

where v_gx and v_gy are the coordinates of the center of gravity of the target in the oriented image plane, f is the focal length of the camera, and a* is the desired value of a, which is defined as follows:

a = μ20 + μ02,  (7)

where μ20 and μ02 are the centered moments of the target in the virtual image plane. Knowing that Z√a = Z*√(a*), in which Z* is the vertical distance of the camera from the target in the desired position, the image dynamics can be written as follows [21]:

q̇ = −sk(ψ̇ e3) q − (1/Z*) v(t) + (1/Z*) Δ(t),  (8)

where q is the vector of the image features, v(t) = Rv V is the linear velocity of the robot expressed in the virtual frame (a frame attached to B and rotated by Rv), and Δ(t) is the velocity of the moving target in the virtual frame.
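A numerical sketch of this feature computation may help: from the projected target points on the virtual plane, compute the centroid and the centered-moment sum, then apply the sqrt(a*/a) normalisation. The helper below and its interface are illustrative assumptions in the spirit of (6), not the paper's implementation; exact conventions follow [21].

```python
import numpy as np

# Illustrative moment-based features in the spirit of (6):
# centroid (v_gx, v_gy), area-like moment a = mu20 + mu02, and the
# sqrt(a*/a) normalisation. This helper is only a sketch.
def image_features(pts, pts_des, f=213.0):
    v_g = pts.mean(axis=0)                          # centre of gravity
    a = ((pts - v_g) ** 2).sum()                    # mu20 + mu02
    a_des = ((pts_des - pts_des.mean(axis=0)) ** 2).sum()
    s = np.sqrt(a_des / a)                          # sqrt(a*/a)
    return np.array([s * v_g[0] / f, s * v_g[1] / f, s])

# When the current view equals the desired one, the features reduce to
# [0, 0, 1]^T: barycenter at the image centre and a = a*.
square = np.array([[10.0, 10.0], [10.0, -10.0], [-10.0, 10.0], [-10.0, -10.0]])
q = image_features(square, square)
```

Because sqrt(a) scales inversely with the depth Z, the third feature carries the height information that makes the vertical trajectory well conditioned, which is the advantage over spherical moments noted in the introduction.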

IBVS Using Velocity of Image Features
In this section, a dynamic tracking IBVS controller is presented for the 3D translational motion of the quadrotor helicopter. Using a smooth input, the controller provides an asymptotic stability property, and it is robust against uncertainty in the image depth Z* and the motion of the moving target. The objective is to move the camera-equipped quadrotor so as to match the current image with the desired image obtained from a target. The yaw angle of the quadrotor can be controlled to guarantee a stable velocity, either through visual information as done in [12] or using IMU data.
In the design procedure, it is assumed that the camera frame (attached to the center of projection of the camera) is coincident with the quadrotor body-fixed frame B. This assumption can easily be relaxed by using the transformation between the frames. As the first step, an error vector in the image space should be defined. To simplify the design procedure, the desired features are considered to be as follows:

q* = [0 0 1]^T.  (9)

Selecting these desired features is equivalent to having the barycenter of the target at the center of the image, which is common in vision-based control of UAVs. However, a transformation similar to that in [12] can be applied to modify any desired features to (9). Now the image-space error is defined as follows:

e = q − q*.  (10)

Using (8), the time derivative of the error can be written as

ė = −sk(ψ̇ e3) e − (1/Z*) v + (1/Z*) Δ.  (11)

To consider the translational dynamics of the robot in designing the IBVS controller, these dynamics should also be written in the virtual frame. Then one has

v̇ = −sk(ψ̇ e3) v − f,  (12)

in which f is defined as follows:

f = (T1/m) Rv E3 − g e3.  (13)

To regulate the dynamics of the system defined by (11) and (12), it is necessary to use the linear velocity information v. As already mentioned, available low-weight and low-cost accelerometers do not provide velocity measurements suitable for a tracking mission. Therefore, in this paper, the dynamics of the system are written based on the available optic flow information (q̇ and hence ė) of the features. Many reliable and robust methods have been reported in the literature for the measurement of optic flow [22, 23]. Now, by computing the time derivative of (11) and substituting (12) in it, the system dynamics can be written based on the dynamics of the image flow as follows:

Z* ë = M1 e + M2 ė + f + D,  (14)

where

M1 = −Z* (sk(ψ̈ e3) + sk(ψ̇ e3)²),  M2 = −2 Z* sk(ψ̇ e3),  D = sk(ψ̇ e3) Δ + Δ̇.  (15)

These dynamics include some uncertainties which have to be compensated by the controller. There is one parametric uncertainty associated with the depth information Z*. The other uncertainties are unstructured and are related to the velocity of the moving target (Δ and Δ̇). In addition, the acceleration of
yaw, which is difficult to measure precisely in practice, is assumed to be unknown with a bounded value. Therefore, the following assumptions on the dynamics are considered in the subsequent development.
Assumption 1. The image depth Z*, the target velocity Δ, and its first, second, and third time derivatives are assumed to be bounded.
Assumption 2. The independent controller for the yaw angle ensures that M1 and M2 and their first time derivatives are bounded. This can be satisfied by a simple controller under the assumption of small roll and pitch angles of the robot.
Since the closed-loop system is analyzed through a differential inclusion framework, the following definition is presented.
Definition 3 (Filippov solution [24]). A vector function x(⋅) is called a solution of ẋ = g(x) on [t0, t1] (with discontinuity on the right-hand side such that g is a Lebesgue measurable and essentially locally bounded function) if x(⋅) is absolutely continuous on [t0, t1] and, for almost all t ∈ [t0, t1],

ẋ ∈ K[g](x),  K[g](x) ≜ ⋂_{δ>0} ⋂_{μ(S)=0} co g(B(x, δ) − S),

where ⋂_{μ(S)=0} denotes the intersection over all sets S of Lebesgue measure zero, co denotes the convex closure, and B(x, δ) is the open ball of radius δ centered at x.
To facilitate the subsequent analysis, filtered tracking errors denoted by e2 and r are also defined as follows:

e2 = ė + α1 e,  (18)
r = ė2 + α2 e2,  (19)

where α1 and α2 are positive constants. It has to be noted that the filtered tracking error r is not measurable since it depends on ë. Now, the open-loop tracking error system can be developed by premultiplying (19) by Z* and using (14) and (18) to obtain the following:

Z* r = N + f,  (20)

where N = M1 e + M2 ė + D + Z*(α1 ė + α2 e2). For the open-loop dynamics (20), the following controller is designed:

f = −(ks + 1) e2(t) + (ks + 1) e2(0) − ν̂,  (21)

where ν̂ ∈ R^3 is the Filippov solution to the following differential equation:

ν̂̇ = (ks + 1) α2 e2 + β sgn(e2),  ν̂(0) = 0,  (22)

ks and β are positive constants, and sgn(⋅) denotes the standard signum function such that, for all x = [x1 x2 x3]^T, sgn(x) = [sgn(x1) sgn(x2) sgn(x3)]^T. It should be noted that the second term in (21) is used to ensure that f(0) = 0. Using (19) and (22), the time derivative of (21) can be written as

ḟ = −(ks + 1) r − β sgn(e2).  (23)

Now, using (23), the time derivative of (20) can be written as follows:

Z* ṙ = Ñ + Nd − (ks + 1) r − β sgn(e2),  (24)

where Nd denotes the bounded part of Ṅ that depends only on the target motion and the yaw signals, and the auxiliary term Ñ ∈ R^3 is defined as

Ñ = Ṅ − Nd.  (25)

The following upper bound can be obtained for Ñ:

‖Ñ‖ ≤ ρ ‖h‖,  (27)

where ‖⋅‖ denotes the Euclidean norm, h(t) ∈ R^9 is defined as follows:

h = [e^T  e2^T  r^T]^T,  (28)

and the bounding constant ρ is determined by known positive constants ζ1 and ζ2 obtained from Assumptions 1 and 2. Before presenting the main result of this paper, the following lemma is presented, with the proof given in [17].

Lemma 4. Let the auxiliary function L(t) ∈ R be defined as follows:

L = r^T (Nd − β sgn(e2)).  (31)

If the control gain β is selected to satisfy the following sufficient condition

β > ‖Nd‖∞ + (1/α2) ‖Ṅd‖∞,  (32)

then

∫_{t0}^{t} L(σ) dσ ≤ β Σi |e2i(t0)| − e2(t0)^T Nd(t0),  (33)

where the subscript i = 1, 2, 3 denotes the ith element of the vector.
Now, the following theorem states the stability result of the proposed controller.

Theorem 5. Consider the system dynamics defined by (14) with input f. The controller given by (21) ensures that all system signals are bounded and that the tracking errors are regulated in the sense that

‖e(t)‖ → 0 as t → ∞,  (34)

provided that the control gain β satisfies (32), α1 > 1/2, α2 > 1, and ks is adjusted according to the following condition:

ks > ρ² / (4γ),  (35)

where

γ = min{α1 − 1/2, α2 − 1, 1}.  (36)

Proof. Let the auxiliary function P(e2, t) ∈ R be defined as a Filippov solution to the differential equation

Ṗ = −L,  P(t0) = β Σi |e2i(t0)| − e2(t0)^T Nd(t0),

where L is defined in Lemma 4. It can easily be verified from Lemma 4 that, if the condition on β in (32) is satisfied, then P(t) ≥ 0. Now, the following Lyapunov function is considered to prove the theorem:

V = e^T e + (1/2) e2^T e2 + (1/2) Z* r^T r + P.  (37)

This Lyapunov function satisfies the following inequalities:

λ1 ‖y‖² ≤ V ≤ λ2 ‖y‖²,  (38)

where λ1 = min{1/2, Z*/2}, λ2 = max{1, Z*/2}, and y is defined as

y = [h^T  √P]^T.  (39)

The time derivative of (37) along the Filippov trajectories exists almost everywhere (a.e.); that is, for almost all t ∈ [t0, tf], V̇ belongs to the set generated by the generalized gradient ∂V of V. Since V is continuously differentiable, the generalized gradient reduces to the ordinary gradient. Substituting (18), (19), and (24) into the resulting expression and using the calculus for K[⋅] from [24] yield a differential inclusion (43), in which K[sgn(e2i)] = 1 if e2i > 0, K[sgn(e2i)] = [−1, 1] if e2i = 0, and K[sgn(e2i)] = −1 if e2i < 0, for i = 1, 2, 3. The set in (43) reduces to a scalar equality since its right-hand side is continuous a.e.; it is discontinuous only on the Lebesgue negligible set of times at which r^T K[sgn(e2)] − r^T K[sgn(e2)] ≠ {0} [25]. Using this fact, together with the definition of P, the resulting expression (44) can be upper bounded; then, considering (27) and (28), and since α1 > 1/2 and α2 > 1, one obtains

V̇ a.e.≤ −γ ‖h‖² − ks ‖r‖² + ρ ‖r‖ ‖h‖.  (47)
Now, by completing the square for the second and third terms on the right-hand side of (47), and provided that ks satisfies (35), one can conclude that

V̇ a.e.≤ −U(y),  (48)

where U(y) ≜ (γ − ρ²/(4ks)) ‖h‖² is a continuous, positive semidefinite function.
The inequalities in (38) and (48) can be used to conclude that e, e2, r, and P are bounded. The closed-loop error system can then be used to show that the remaining signals are also bounded and that h(⋅) is uniformly continuous. Finally, from (48), Corollary 1 of [26] can be invoked to show that e, e2, and r go to zero asymptotically, which completes the proof.

Remark 6. The controller (21) is also robust and provides asymptotic convergence when the translational dynamics of the quadrotor (3) include unstructured uncertainties. These uncertainties, which may be associated with terms neglected in the course of modelling and/or with external disturbances, can be modelled as an additive bounded term similar to D [27]. Therefore, the controller is able to compensate for their effect. This is due to the useful property of the RISE method that it learns the unstructured uncertainty of the system.
The controller input f specifies the translational force for the quadrotor, which the vehicle cannot apply directly because of its underactuated structure. However, having the desired value of f, one can derive the thrust magnitude, T1, and the desired roll and pitch angles [28]. Therefore, a proper inner-loop controller which tracks the desired attitude satisfies the control objective.
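A discrete-time sketch of the RISE feedback structure may clarify its implementation: a continuous control built from the filtered error e2 plus the integral of a signum term, initialised so that the input starts at zero. The gains, the overall sign convention, and the explicit-Euler integration below are illustrative assumptions, not the paper's exact controller.

```python
import numpy as np

# Hedged sketch of a RISE-type law in the spirit of (21)-(22):
#   f = (ks + 1) * (e2(t) - e2(0)) + nu,
#   nu_dot = (ks + 1) * alpha2 * e2 + beta * sgn(e2).
# The sign of f must match how it enters the error dynamics; treat this as a
# structural illustration only.
class RISEController:
    def __init__(self, e2_0, ks=26.0, alpha2=1.2, beta=1.0):
        self.ks, self.alpha2, self.beta = ks, alpha2, beta
        self.e2_0 = np.asarray(e2_0, dtype=float).copy()
        self.nu = np.zeros_like(self.e2_0)   # integral state, explicit Euler below

    def update(self, e2, dt):
        e2 = np.asarray(e2, dtype=float)
        self.nu += dt * ((self.ks + 1.0) * self.alpha2 * e2
                         + self.beta * np.sign(e2))
        # The -e2(0) term guarantees a smooth start with f(0) = 0.
        return (self.ks + 1.0) * (e2 - self.e2_0) + self.nu

ctrl = RISEController(e2_0=[0.1, 0.0, -0.2])
f0 = ctrl.update([0.1, 0.0, -0.2], dt=0.0)   # at t = 0 the input is zero
```

Because the signum term is integrated rather than applied directly, the resulting input stays continuous, which is the smoothness advantage over switching (sliding-mode) controllers noted in the introduction.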

Simulation Study
This section provides MATLAB simulations to validate the effectiveness of the developed vision-based controller. In the simulations, the sampling periods for the visual data and for the rest of the system are selected as 20 ms and 10 ms, respectively. The quadrotor is initially considered to be at hover with the target in its field of view. The target is assumed to be rectangular, and its vertices are considered as the available visual information to measure the image features (6) and their optic flow. The initial positions of the vertices are located at (0.25, 0.2, 0) m, (0.25, −0.2, 0) m, (−0.25, 0.2, 0) m, and (−0.25, −0.2, 0) m with respect to the inertial frame. These points are projected through the perspective projection onto a digital image plane with the focal length divided by the pixel size equal to 213 and the principal point located at (80, 60).
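This projection of the target vertices can be reproduced with a pinhole camera model. The downward-looking camera pose and the helper name below are assumptions; the focal length over pixel size (213) and the principal point (80, 60) are as stated above.

```python
import numpy as np

# Pinhole projection of the target vertices onto the digital image plane
# (focal length / pixel size = 213, principal point (80, 60)). The camera is
# assumed to look straight down the inertial z-axis from the quadrotor position.
def project(points_w, cam_pos, f_px=213.0, pp=(80.0, 60.0)):
    rel = points_w - cam_pos              # target points relative to the camera
    Z = rel[:, 2]                         # depth along the optical axis
    u = pp[0] + f_px * rel[:, 0] / Z
    v = pp[1] + f_px * rel[:, 1] / Z
    return np.stack([u, v], axis=1)

verts = np.array([[0.25, 0.2, 0.0], [0.25, -0.2, 0.0],
                  [-0.25, 0.2, 0.0], [-0.25, -0.2, 0.0]])
# Quadrotor hovering at (0, 0, -5) m, i.e., 5 m above the planar target.
px = project(verts, cam_pos=np.array([0.0, 0.0, -5.0]))
```

Feeding such pixel coordinates through the moment computation of Section 3 gives the features and, by differencing consecutive frames, their optic flow.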
The parameters of the dynamic model of the quadrotor are selected as J = diag(0.0081, 0.0081, 0.0142) kg m², m = 2 kg, and g = 9.81 m/s². The maximum value of Z* is assumed to be 10 m. In order to provide more realistic conditions in the simulations, unstructured forces are also applied to the system, modelled by sinusoidal signals with different phases for each direction and an amplitude equal to 0.1 N. The inverse dynamics of the quadrotor [29] are used to compute the desired roll and pitch angles of the robot, which have to be tracked in order to achieve the force input designed in (21). High-gain proportional-derivative controllers are used to control these angles. Since the presented approach requires the velocity of the image features, numerical differentiation is used to compute it, which provides an appropriate estimate when the visual data are available at a high rate.
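The inverse-dynamics step mentioned above, recovering the thrust magnitude and desired roll and pitch from a desired translational force, can be sketched as follows. The z-down convention, the yaw handling, and the function name are illustrative assumptions, not taken from [29].

```python
import numpy as np

# Hypothetical extraction of thrust T1 and desired roll/pitch from a desired
# inertial-frame translational force f_d, assuming m*xi_ddot = m*g*e3 - T1*R*e3
# (z-axis down). The yaw angle psi is assumed to be tracked by its own loop.
def thrust_and_attitude(f_d, psi=0.0, m=2.0, g=9.81):
    u = m * g * np.array([0.0, 0.0, 1.0]) - np.asarray(f_d, dtype=float)
    T1 = np.linalg.norm(u)                     # thrust magnitude
    c, s = np.cos(psi), np.sin(psi)            # undo yaw so roll/pitch decouple
    ux, uy, uz = c * u[0] + s * u[1], -s * u[0] + c * u[1], u[2]
    phi_d = np.arcsin(-uy / T1)                # desired roll
    theta_d = np.arctan2(ux, uz)               # desired pitch
    return T1, phi_d, theta_d

# Hover request (zero desired force): thrust m*g and level attitude.
T1, phi_d, theta_d = thrust_and_attitude(np.zeros(3))
```

The desired angles returned here are what the high-gain proportional-derivative inner loop is asked to track.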

Tracking Results.
In the first simulation, the quadrotor's initial position is assumed to be at (−0.4, −0.1, −8) m with respect to the inertial frame, and the desired features are measured at (0, 0, −5) m. The target moves on a circle of radius 1 m with an angular velocity of π/15 rad/s. The control gains are set as α1 = 1, α2 = 1.2, ks = 26, and β = 1. The vision-based method presented in [30] is exploited to control the yaw angle of the quadrotor.
Trajectories of the target points in the virtual image plane are shown in Figure 2. The norm of the error signals is illustrated in Figure 3. 1D and 3D illustrations of the time evolution of the translational motion of the quadrotor are shown in Figures 4 and 5, respectively. As expected, the results show satisfactory tracking of the target, and the system errors converge to zero during the tracking mission.

Comparative Study.
To show the advantage of the proposed vision-based controller over previous methods, in this section the results are compared with those of the method proposed in [31]. That work assumes that the linear velocity information is available via accelerometers. The presence of bias in the output of accelerometers is one of the main error sources in these sensors [32]. The effect of this error is studied in [18], which shows that a small amount of bias error degrades the performance of that approach. Here, however, it is assumed that the true value of the velocity information is available, and only the final tracking errors of the two approaches are compared. The target trajectory and the initial position of the quadrotor are assumed to be the same as in the first simulation. The trajectories of the moving target and the quadrotor in the x-y plane for the two approaches are illustrated in Figure 6. In addition, the norm of the input vectors is shown in Figure 7. The results demonstrate that, in spite of having smooth and similar control efforts, the controller proposed in this paper improves the performance of the system by decreasing the final tracking error.

Conclusion
This paper has proposed an IBVS controller for the translational motion of the quadrotor helicopter flying over a moving target. The main purpose of this paper is to decrease the final tracking error of the system in the presence of uncertainty in the model of the system. The controller utilizes the RISE method to achieve a smooth control effort. In order to compensate for the unreliable quality of accelerometers in a tracking task, the dynamics of the system are derived based on the flow of the image features. The optic flow can be obtained using the flow of the target points in the image sequence, or it can simply be computed by numerical differentiation when the visual data are available at a high rate. The proposed controller is robust against the parametric and nonparametric uncertainties in the dynamic model of the system. These uncertainties are associated with the motion of the target, the unknown depth information of the image, and unmodelled terms in the translational dynamics. Stability analysis proves that the controller achieves asymptotic tracking. Simulation results demonstrate the satisfactory response of the proposed vision-based approach and its advantage over classic methods.
The future work of this research is devoted to improving the robustness of the system in the presence of uncertainty in the optic flow measurements.

Figure 2: Simulation 1: trajectories of the target points in the virtual image plane.

Figure 3: Simulation 1: time evolution of the norm of the error signals.

Figure 5: Simulation 1: 3D illustration of the trajectory of the motion of the quadrotor and the moving target.

Figure 6: x-y plane trajectories of the target and the robot.