This paper addresses the stability problem of uncalibrated image-based visual servoing robotic systems. Both the visual feedback delay and the uncalibrated visual parameters can be sources of instability for visual servoing robotic systems. To eliminate the negative effects caused by kinematic uncertainties and delays, we propose an adaptive controller incorporating a delay-affected Jacobian matrix and design an adaptive law accordingly. In addition, delay-dependent stability conditions are provided to characterize the relationship between system stability and the delay time, yielding less conservative results. A Lyapunov-Krasovskii functional is constructed, and a rigorous mathematical proof is given. Finally, simulation results are presented to show the effectiveness of the proposed control scheme.
1. Introduction
For human beings, vision is an important sensory channel. Through visual sensors, robots can likewise monitor their environment and perform tasks. Nowadays, advanced visual processing techniques and high-speed image processors make vision-based robot systems capable of handling dynamic tasks, and vision-based control has been applied to many industrial robot systems, becoming a mainstream approach to robot control.
Vision-based control can be traced back to the 1980s [1]. Look-and-move is one of the early vision-based technologies [1–3]. In this approach, two nested loops run simultaneously: the visual loop is the external loop and the joint-space loop is the internal loop. Owing to its sensitivity to disturbances and errors, the look-and-move architecture is not suitable for high-performance control tasks [4]. As an alternative, the visual servo (VS) technique was proposed [5]. This control architecture generates the control inputs directly from the visual information. Such a simple and direct structure is favorable for high-speed servoing tasks. Numerous visual servoing approaches have been investigated for various robot systems and from many different perspectives. Figure 1 shows two typical structures of visual servoing control.
Visual servoing control scheme structures. (a) Position-based visual servoing. (b) Image-based visual servoing.
In the existing literature, there are two challenges in the field of visual servoing control: (a) the difficulties of calibration and (b) the image feedback signals of inferior quality.
The calibration of visual servoing systems includes camera calibration, kinematic calibration, and dynamic calibration. To identify unknown or uncertain system parameters, periodic and highly accurate calibration is usually required, which is tedious and demanding. Without such calibration, the system models cannot be accurately characterized, and the closed-loop visual servoing system could be unstable. To avoid this calibration work, uncalibrated control approaches have been proposed [6–10]. Some work [6–8] investigates robust controllers for eliminating the negative effects of calibration errors in the system model, where the uncertain system parameters are replaced with approximated ones. For the case of unknown or time-varying parameters, adaptive control techniques have been proposed [11, 12]; in these methods, adaptive laws are designed to update such parameters online.
As another cause of system instability, visual signals of inferior quality are also nonnegligible. Generally speaking, noise and delays in the visual signals are the main culprits; in this paper, we consider delays as the main source of inferior image signals. Ideally, the visual signal flow is synchronized with the other system signals. However, asynchronization can occur for many reasons, including the limitations of image processing [13–17] and restrictions on visual signal transmission [18]. Some early research studies the instability problem caused by image-processing (or image-sampling) delays [1, 2, 13]; these early efforts focused on reducing the image sampling time through parallel or pipelined approaches [2, 14–17]. With the development of advanced image processor chips, this problem has largely been resolved. As visual servoing finds wider application, its configurations are becoming more diverse: the connections between visual sensors and controllers can be wireless or Internet-based, which means the visual feedback path can be a source of delays due to transmission blocking in the intercomponent information exchanges [1, 2, 13, 18]. Improving the speed and reliability of the communication links is a straightforward way to address the issue, but it inevitably increases the cost. Consequently, designing proper control schemes to handle delays is an alternative. In the IBVS control scheme design, the delay problem is studied in [19–23]. Using the average of past and present joint angle values in place of the present joint angle, [19] obtains predicted image feature positions via the Jacobian matrix to cope with the delay problem. One common flaw of the aforementioned methods is that they require accurate knowledge of the system parameters, whose acquisition relies on calibration.
These two challenges make visual servoing robotic systems typical examples of complex industrial systems. This is because the mainstream uncalibrated techniques usually require accurate image signals to compensate for the parametric errors or to update the unknown parameters, yet under a delayed image feedback loop, no accurately synchronized visual feedback is available. In this context, the control of such systems is highly nonlinear and complex, and is consequently both worthwhile and challenging to investigate. This paper therefore concentrates on the influence of visual transmission delays on uncalibrated visual servoing robotic systems.
In the literature of this area, [21] presents an online calibration method to overcome the time delay problem. Inoue and Hirai [22] design a two-layer controller called STP to compensate for the delays and introduce the concept of a virtual trajectory. Gao and Su [23] employ a local fitting Jacobian matrix based on polynomial fitting to obtain a more accurate Jacobian estimate and image precompensation for uncalibrated IBVS robotic systems. Unfortunately, the controller designs in the above literature are based on kinematics and fail to consider the dynamics of robots. It is well known that the dynamics of robot systems play an important role in stability, especially at high speed. Much progress has been made on the control of uncalibrated dynamic-based visual servoing systems without delay effects [24–31]. As for uncalibrated dynamic-based visual servoing systems with delay effects, the relevant work focuses on distributed cooperative control [32–35]. Liu and Chopra [33] study an adaptive control algorithm to guarantee task-space synchronization of networked robotic manipulators in the presence of dynamic uncertainties and time-varying communication delays. Wang [34] investigates the synchronization of networked robotic systems with kinematic and dynamic uncertainties under nonuniform constant communication delays. Liang et al. [35] address the cooperative tracking control problem of networked robotic manipulators with communication delays under strongly connected directed graphs. However, the above work considers delays in the interagent information exchanges rather than delays in the visual feedback of a single dynamic-based visual servoing robotic system. To the best of the authors' knowledge, there is little literature considering the time delay problem in an uncalibrated dynamic-based visual servoing robotic system without using image-space velocity measurements.
To address the aforementioned issues, the following problems must be solved. First, the modeling of the system: delays, the absence of calibration, and the avoidance of velocity measurements must all be accounted for simultaneously. Second, the handling of the time-varying parameters without using image velocity measurements. Third, delay-dependent stability conditions should be derived to reduce conservativeness. The contributions of this paper can be summarized as follows. (a) An uncalibrated dynamic-based visual servoing model is developed to visually track a feature point whose image depth is time varying, without using the image velocity and in the presence of unknown constant delays in the visual feedback. (b) To handle the overlapping effects of uncalibrated parameter uncertainties and the visual feedback delay, a novel Jacobian matrix, called the delay-affected Jacobian matrix, is proposed for the first time in this paper. (c) Lyapunov-Krasovskii stability theory is employed to analyze the stability of the delay-affected dynamic-based visual servoing system, and delay-dependent (d.d.) stability conditions are given to obtain less conservative stability results.
The paper is organized as follows. Section 2 gives some preliminary knowledge used throughout the paper. In Section 3, the kinematic and dynamic models of dynamic-based visual servoing robotics systems are formulated. In Section 4, the main results of this paper, the controller design, and the adaptive laws are proposed to address the stability problem of the uncalibrated dynamic-based visual servoing robotic system with visual feedback delays. In Section 5, rigorous stability analyses are provided. Section 6 presents simulation results to show the effectiveness of the proposed control scheme. Section 7 concludes the paper.
2. Preliminaries

Lemma 1.
Let X, Y, and F be real matrices of proper dimensions with \(F^{T}F \le I\). Then, for any constant \(\varepsilon > 0\),
(1) \( XFY + Y^{T}F^{T}X^{T} \le \varepsilon XX^{T} + \frac{1}{\varepsilon}Y^{T}Y. \)
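As an illustration outside the paper's own derivation, Lemma 1 can be checked numerically: for any F scaled so that its spectral norm is at most one (hence \(F^{T}F \le I\)), the difference between the right- and left-hand sides is positive semidefinite. A minimal sketch, with all matrices randomly generated:

```python
import numpy as np

def lemma1_gap(X, F, Y, eps):
    """Right side minus left side of Lemma 1:
    eps*X*X^T + (1/eps)*Y^T*Y - (X F Y + Y^T F^T X^T)."""
    cross = X @ F @ Y
    return eps * X @ X.T + (1.0 / eps) * Y.T @ Y - (cross + cross.T)

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))
F = rng.standard_normal((n, n))
F = F / np.linalg.norm(F, 2)      # spectral norm <= 1, so F^T F <= I

for eps in (0.1, 1.0, 10.0):
    gap = lemma1_gap(X, F, Y, eps)
    # the symmetric gap matrix is (numerically) positive semidefinite
    assert np.linalg.eigvalsh(gap).min() >= -1e-9
```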
Lemma 2.
Let \(\phi:\mathbb{R}\rightarrow\mathbb{R}\) be a uniformly continuous function on \([0,\infty)\). Suppose that \(\lim_{t\rightarrow\infty}\int_{0}^{t}\phi(\tau)\,d\tau\) exists and is finite. Then,
(2) \( \phi(t)\rightarrow 0 \quad \text{as } t\rightarrow\infty. \)
Lemma 3.
Consider the retarded functional differential equation
(3) \( \dot{x}(t) = f(t, x_{t}). \)
Let \(f:\mathbb{R}\times C\rightarrow\mathbb{R}^{n}\) map \(\mathbb{R}\times\) (bounded subsets of \(C\)) into bounded subsets of \(\mathbb{R}^{n}\), and let \(u, v, w:\bar{\mathbb{R}}^{+}\rightarrow\bar{\mathbb{R}}^{+}\) be continuous nondecreasing functions such that \(u(s)\) and \(v(s)\) are positive for any \(s>0\) and \(u(0)=v(0)=0\). If there exists a continuously differentiable functional \(V:\mathbb{R}\times C\rightarrow\mathbb{R}\) such that
(4) \( u(\|\phi(0)\|) \le V(t,\phi) \le v(\|\phi\|_{c}), \) and
(5) \( \dot{V}(t,\phi) \le -w(\|\phi(0)\|), \)
then the zero solution of system (3) is uniformly stable. If the zero solution of the system is uniformly stable and \(w(s)>0\) holds for any \(s>0\), then the zero solution of system (3) is uniformly asymptotically stable. If the zero solution of the system is uniformly asymptotically stable and \(\lim_{s\rightarrow\infty}u(s)=\infty\), then the zero solution of system (3) is globally uniformly asymptotically stable.
3. Kinematics and Dynamics
In this section, we present the mathematical modeling of delayed visual servoing robotic systems with the eye-in-hand configuration. In the modeling process, both kinematics and dynamics are considered. To illustrate the kinematics of the system, Figure 2 shows the transformation among different frames.
Robotic system with the eye-in-hand configuration.
Let \(y(t)\in\mathbb{R}^{2}\) be the coordinates of a feature point's projection on the camera image plane and \(r\in\mathbb{R}^{3}\) be the Cartesian coordinates of the feature point w.r.t. the robot base frame. Based on the model developed in [7], the mapping between the image position \(y(t)\) and the Cartesian position \(r\) can be formulated as
(6) \( y(t) = \frac{1}{z(t)}\begin{bmatrix}\Omega_{1}\\ \Omega_{2}\end{bmatrix}T_{bc}\begin{bmatrix}r\\ 1\end{bmatrix}, \)
where \(z(t)\in\mathbb{R}\) denotes the depth of the feature point w.r.t. the camera frame; \(T_{bc}\in\mathbb{R}^{4\times4}\) denotes the homogeneous transformation matrix from the camera frame to the base frame; and \(\Omega_{i}\in\mathbb{R}^{1\times4}\) denotes the ith row of the camera intrinsic parameter matrix \(\Omega\) (an intrinsic parameter matrix derived from the typical model introduced in [36]). In reality, the feature point is stationary with respect to the robot base, which results in a constant column vector \(r\).
The relationship between \(z(t)\) and \(r\) can be formulated as
(7) \( z(t) = \Omega_{3}T_{bc}\begin{bmatrix}r\\ 1\end{bmatrix}, \)
where \(\Omega_{3}\) is the third row of the perspective projection matrix \(\Omega\).
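Equations (6) and (7) are the usual perspective projection written in homogeneous coordinates. The following sketch illustrates them using the intrinsic parameters from the simulation section of this paper; the camera pose Tbc and the feature position r are hypothetical values chosen only for illustration:

```python
import numpy as np

# Intrinsic matrix assembled from the paper's simulation values:
# f*ku = f*kv = 0.035 * 1800 = 63, (u0, v0) = (280, 250), theta = pi/2.
Omega = np.array([[63.0,  0.0, 280.0, 0.0],
                  [ 0.0, 63.0, 250.0, 0.0],
                  [ 0.0,  0.0,   1.0, 0.0]])

# Hypothetical camera-to-base transform: identity rotation, translation (0, 0, 2).
Tbc = np.eye(4)
Tbc[:3, 3] = [0.0, 0.0, 2.0]

r = np.array([0.5, 0.3, 1.0])     # hypothetical feature point (base frame)
r_h = np.append(r, 1.0)           # homogeneous coordinates [r; 1]

z = Omega[2] @ Tbc @ r_h          # eq. (7): depth z(t)
y = (Omega[:2] @ Tbc @ r_h) / z   # eq. (6): image coordinates y(t)
assert z > 0                      # feature point lies in front of the camera
```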
Combining with (7), the derivative of (6) w.r.t. time t satisfies
(8) \( \dot{y}(t) = \frac{1}{z(t)}\left(\begin{bmatrix}\Omega_{1}\\ \Omega_{2}\end{bmatrix} - y\Omega_{3}\right)\frac{\partial T_{bc}}{\partial t}\begin{bmatrix}r\\ 1\end{bmatrix}, \)
where \(y\) is short for \(y(t)\). It should be noted that \(T_{bc}\) can be decomposed into two parts: \(T_{be}\) (the forward robot kinematics) and \(T_{ec}\) (the homogeneous matrix from the camera to the end effector). Owing to the eye-in-hand configuration, \(T_{ec}\) is a constant matrix. Then, one has
(9) \( \frac{\partial T_{bc}}{\partial t}\begin{bmatrix}r\\ 1\end{bmatrix} = T_{ec}\frac{\partial}{\partial q}\left(Rr + P\right)\dot{q}, \)
where \(R\) denotes the rotation matrix, \(P\) denotes the translation vector, and \(q\) denotes the joint position. For more details, see [7]. Letting \(L\) be the left \(3\times3\) submatrix of \(\Omega T_{ec}\), \(L_{0}\in\mathbb{R}^{2\times3}\) be the 1st and 2nd rows of \(L\), and \(L_{3}\in\mathbb{R}^{1\times3}\) be the 3rd row of \(L\), one can derive the mapping from joint velocities to image velocities as follows:
(10) \( \dot{y}(t) = \underbrace{\frac{1}{z(t)}\left(L_{0} - yL_{3}\right)\frac{\partial}{\partial q}\left(Rr + P\right)}_{J(q)}\dot{q}. \)
The nonlinear mapping \(J(q)\) introduced in (10) is an important matrix in IBVS, known as the Jacobian matrix [37, 38]. The derivative of (7) w.r.t. time t satisfies
(11) \( \dot{z}(t) = L_{3}\frac{\partial}{\partial q}\left(Rr + P\right)\dot{q}. \)
The dynamics of the robot can be given by the Euler-Lagrange equation as follows [39]:
(12) \( H(q(t))\ddot{q}(t) + \left(\frac{1}{2}\dot{H}(q(t)) + C(q(t),\dot{q}(t))\right)\dot{q} + g(q(t)) = u, \)
where \(u\) is the \(n\times1\) vector of joint inputs of the manipulator; \(H(q(t))\) is the \(n\times n\) positive-definite and symmetric inertia matrix; and \(C(q(t),\dot{q}(t))\in\mathbb{R}^{n\times n}\) is a skew-symmetric matrix such that, for any vector \(\psi\) of proper dimension,
(13) \( \psi^{T}C(q(t),\dot{q}(t))\psi = 0. \)
On the left side of (12), the first term is the inertial force, the second term comprises the Coriolis and centrifugal forces, and the last term \(g(q)\) is the gravitational force.
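Property (13) is a direct consequence of the skew-symmetry of \(C(q,\dot{q})\): a scalar that equals its own negative must vanish. A minimal numerical check, with a random skew-symmetric stand-in for C:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n))
C = A - A.T                        # any skew-symmetric matrix: C^T = -C
psi = rng.standard_normal((n, 1))

# psi^T C psi is a scalar equal to its own negative, hence zero (eq. (13))
val = (psi.T @ C @ psi).item()
assert abs(val) < 1e-12
```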
Remark 1.
From Figure 2, it can be seen intuitively that the estimation of the Jacobian matrices determined by the homogeneous transformation matrices (\(T_{bc}\), \(T_{be}\), and \(T_{ec}\)) is directly affected by the delayed visual feedback. The complexity of the system lies mainly in the highly nonlinear relationship between the delayed image states and the joint states.
To facilitate analysis, we present Figure 3 to show the closed-loop structure of a typically delayed VS robotic system.
The structure of a delayed visual servoing robotic system.
4. The Adaptive Controller Design
In this section, we will investigate the uncalibrated dynamic-based visual servoing robotic system with visual feedback delays and kinematic uncertainties. In our study, the formulation of the uncalibrated VS robotic system is partly based upon the depth-independent Jacobian model developed by [27]. This model allows depth to be time varying so that the visual servoing system can still be stabilized even in the presence of the fast-changing feature image depth.
From (10), we can easily split \(1/z(t)\) from \(J(q)\) and thereby obtain the depth-independent Jacobian matrix \(D\), which is given by
(14) \( D = \left(L_{0} - yL_{3}\right)\frac{\partial}{\partial q}\left(Rr + P\right). \)
Additionally, from (11), we define the following vector:
(15) \( D_{3} = L_{3}\frac{\partial}{\partial q}\left(Rr + P\right). \)
Therefore, (10) and (11) can be rewritten as
(16) \( \dot{z}(t) = D_{3}\dot{q}, \qquad \dot{y}(t) = \frac{1}{z(t)}D\dot{q}. \)
In the uncalibrated dynamic-based visual servoing system, an estimate of the Jacobian matrix is usually used in place of the unknown exact Jacobian matrix. It can easily be seen from (14) and (15) that the components of the depth-independent Jacobian matrix \(D\) and of the matrix \(D_{3}\) fall into two categories: the known components \(R\) and \(P\), and the unknown components \(L\), \(L_{3}\), and \(r\). The estimate of the Jacobian matrix can be analytically derived through linear parameterization [40]. However, as (14) shows, the known and unknown components are coupled, and this coupling hinders the linear parameterization of these matrices. The following property is proposed to decouple them.
Property 1.
For a vector \(\dot{q}\in\mathbb{R}^{n\times1}\), the products \(D\dot{q}\) and \(D_{3}\dot{q}\) can be linearly parameterized as follows:
(17) \( D\dot{q} = Y_{D}(y,q,\dot{q})\theta_{k}, \)
(18) \( D_{3}\dot{q} = Y_{D_{3}}(q,\dot{q})\theta_{k}, \)
where \(Y_{D}(y,q,\dot{q})\in\mathbb{R}^{2\times p_{1}}\) and \(Y_{D_{3}}(q,\dot{q})\in\mathbb{R}^{1\times p_{1}}\) are regressor matrices consisting of known parameters; \(\theta_{k}\in\mathbb{R}^{p_{1}\times1}\) is a vector of unknown parameters; and \(p_{1}\) denotes the number of unknown parameters, which satisfies \(p_{1}\le 36\).
Proof 1.
Owing to page limitations, the proof is given in Appendix A.
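The essence of Property 1 is that D depends linearly on the unknown entries, so the product \(D\dot{q}\) can be rearranged into a known regressor multiplying an unknown vector. A toy two-parameter version (with hypothetical basis matrices A1 and A2 standing in for the structure derived in Appendix A) sketches the idea:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical linear structure: D = theta1*A1 + theta2*A2.
A1 = rng.standard_normal((2, 3))
A2 = rng.standard_normal((2, 3))
theta = np.array([[0.7], [-1.3]])        # unknown parameter vector
qdot = rng.standard_normal((3, 1))       # joint velocity

D = theta[0, 0] * A1 + theta[1, 0] * A2
# Regressor: its columns are the basis matrices applied to qdot,
# so D*qdot = Y(qdot)*theta (2 x p1 in the paper, 2 x 2 here).
Y = np.hstack([A1 @ qdot, A2 @ qdot])
assert np.allclose(D @ qdot, Y @ theta)
```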
By Property 1, the Jacobian matrix \(D\) can be expressed in a linear form: a known (regressor) matrix multiplying an unknown vector. From (17), it can be clearly seen that the regressor matrix \(Y_{D}(y,q,\dot{q})\) includes the current image position \(y(t)\). Unfortunately, the visual feedback signals are delayed in our setting. We use \(y(t-h)\) to denote the coordinates of the delayed feature image position, where \(h\) denotes the constant delay time. In this case, the matrix \(Y_{D}(y,q,\dot{q})\) cannot be obtained; we can only obtain \(Y_{D}(y(t-h),q,\dot{q})\). Substituting this regressor matrix, which contains the delayed visual feedback, into (14) and (17), we have
(19) \( Q\dot{q} = Y_{D}(y(t-h),q,\dot{q})\theta_{k} = \left(L_{0} - y(t-h)L_{3}\right)\frac{\partial}{\partial q}\left(Rr + P\right)\dot{q}, \)
where \(Q\) is named the delay-affected depth-independent Jacobian matrix; for brevity, we call it the delay-affected Jacobian matrix hereafter. The relationship between \(Q\) and \(D\) is given by
(20) \( Q = D + \left(y - y(t-h)\right)D_{3}. \)
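Relation (20) is a purely algebraic identity between the delay-affected and the depth-independent Jacobians; the following sketch verifies it with random stand-ins for L0, L3, and the kinematic term \(\partial(Rr+P)/\partial q\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                                   # number of joints
L0 = rng.standard_normal((2, 3))        # rows 1-2 of L
L3 = rng.standard_normal((1, 3))        # row 3 of L
M = rng.standard_normal((3, n))         # stand-in for d(Rr+P)/dq
y = rng.standard_normal((2, 1))         # current image position y(t)
y_delay = rng.standard_normal((2, 1))   # delayed image position y(t-h)

D = (L0 - y @ L3) @ M                   # depth-independent Jacobian, eq. (14)
D3 = L3 @ M                             # eq. (15)
Q = (L0 - y_delay @ L3) @ M             # delay-affected Jacobian, eq. (19)

# eq. (20): Q = D + (y - y(t-h)) D3
assert np.allclose(Q, D + (y - y_delay) @ D3)
```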
Using the delay-affected Jacobian matrix \(Q\) and the vector \(D_{3}\), we define a new composite Jacobian matrix as
(21) \( J_{d} = Q + \frac{1}{2}\left(y(t-h) - y_{d}\right)D_{3} + \left(y(t-h) - y_{d}\right)\dot{q}^{+}, \)
where \(\dot{q}^{+}\) denotes a vector satisfying \(\dot{q}^{+}\dot{q} = 1\).
Based on all the above analyses, we now propose the following controller for delay-affected uncalibrated VS robotic systems:
(22) \( u(t,t-h) = g(q) - \hat{J}_{d}^{T}(q)K_{1}\left(y(t-h) - y_{d}\right) - K_{2}\dot{q}, \)
where \(K_{1}\in\mathbb{R}^{2\times2}\) and \(K_{2}\in\mathbb{R}^{n\times n}\) are positive definite symmetric matrices and \(\hat{J}_{d}(q)\) denotes the estimate of \(J_{d}(q)\). Note that the estimate \(\hat{J}_{d}(q)\) of the new Jacobian matrix can be obtained from (21) by replacing the unknown matrices \(Q\) and \(D_{3}\) with their estimates \(\hat{Q}\) and \(\hat{D}_{3}\), respectively, which yields
(23) \( \hat{J}_{d}\dot{q} = \hat{Q}\dot{q} + \frac{1}{2}\left(y(t-h) - y_{d}\right)\hat{D}_{3}\dot{q} + \left(y(t-h) - y_{d}\right). \)
Additionally, recalling (17) and (18) in Property 1, we can easily derive the following linear parameterization:
(24) \( \hat{J}_{d}\dot{q} = \underbrace{\left[Y_{D}(y(t-h),q,\dot{q}) + \frac{1}{2}\left(y(t-h) - y_{d}\right)Y_{D_{3}}(q,\dot{q})\right]}_{\bar{Y}(y(t-h),y(t),q,\dot{q})}\hat{\theta}_{k} + \left(y(t-h) - y_{d}\right), \)
where \(\bar{Y}(y(t-h),y(t),q,\dot{q})\) is the new regressor matrix, which includes the delayed image state. To obtain \(\hat{\theta}_{k}\), we propose the following adaptive law:
(25) \( \dot{\hat{\theta}}_{k} = \Gamma_{k}^{-1}\bar{Y}^{T}K_{1}\left(y(t-h) - y_{d}\right), \)
where \(\Gamma_{k}\) is a positive definite symmetric matrix of proper dimensions and \(\bar{Y}\) is short for \(\bar{Y}(y(t-h),y(t),q,\dot{q})\). It is then not hard to derive \(\bar{Y}_{m}\) and \(\bar{Y}_{M}\) accordingly (please refer to the list of notations in Appendix B for the bound notation). Additionally, the bound of the unknown parameter vector \(\theta_{k}\) can be estimated roughly from \(T_{ec}\) and the feature Cartesian coordinates [27]. We therefore assume that both \(\hat{\theta}_{km}\) and \(\hat{\theta}_{kM}\) are known, i.e., \(\hat{\theta}_{k}\in\mathcal{L}_{\infty}\). Based on the above analyses, the bound of \(\hat{J}_{d}\) follows readily from (24), i.e., \(\hat{J}_{dm}\) and \(\hat{J}_{dM}\) can be regarded as known. We define
(26) \( \hat{J}_{d0} = \frac{1}{2}\left(\hat{J}_{dm} + \hat{J}_{dM}\right), \qquad \Delta\hat{J}_{d} = \frac{1}{2}\left(\hat{J}_{dM} - \hat{J}_{dm}\right). \)
Consequently, \(\hat{J}_{d}\) can be expressed in the interval matrix form [41] as follows:
(27) \( \hat{J}_{d} = \hat{J}_{d0} + \sum_{i=1,j=1}^{i=2,j=n}e_{i}g_{ij}e_{j}^{T}, \qquad |g_{ij}| < \Delta G_{ij}, \)
where \(\Delta G_{ij}\) denotes the element in the ith row and jth column of \(\Delta\hat{J}_{d}\). Likewise, \(J(q)\) is also bounded, and \(J_{m}\) and \(J_{M}\) give
(28) \( J_{0} = \frac{1}{2}\left(J_{m} + J_{M}\right), \qquad \Delta J = \frac{1}{2}\left(J_{M} - J_{m}\right). \)
Hence, \(J(q)\) can be expressed in the interval matrix form
(29) \( J = J_{0} + \sum_{i=1,j=1}^{i=2,j=n}e_{i}f_{ij}e_{j}^{T}, \qquad |f_{ij}| < \Delta J_{ij}, \)
where \(e_{i}\in\mathbb{R}^{2}\) denotes the vector whose ith element is 1 and whose other element is 0; \(e_{j}\in\mathbb{R}^{n}\) denotes the vector whose jth element is 1 and whose other elements are 0; and \(\Delta J_{ij}\) denotes the element in the ith row and jth column of \(\Delta J\).
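The interval-matrix bookkeeping in (26)-(29) reduces to elementwise midpoint and half-range matrices: any matrix inside the bounds deviates from the midpoint by at most the half-range elementwise. A short sketch, with hypothetical bound matrices Jm and JM:

```python
import numpy as np

def interval_split(Am, AM):
    """Midpoint A0 and half-range dA of an interval matrix [Am, AM],
    as in eqs. (26) and (28)."""
    return 0.5 * (Am + AM), 0.5 * (AM - Am)

rng = np.random.default_rng(3)
Jm = -np.abs(rng.standard_normal((2, 3)))   # elementwise lower bound
JM = np.abs(rng.standard_normal((2, 3)))    # elementwise upper bound
J0, dJ = interval_split(Jm, JM)

# any J with Jm <= J <= JM deviates from J0 by at most dJ elementwise
t = rng.uniform(size=(2, 3))
J = Jm + t * (JM - Jm)
assert np.all(np.abs(J - J0) <= dJ + 1e-12)
```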
Remark 2.
From (24), one of the key points in deriving \(\hat{J}_{dm}\) and \(\hat{J}_{dM}\) is obtaining \(\hat{\theta}_{km}\) and \(\hat{\theta}_{kM}\). In practice, the range of \(\hat{\theta}_{k}\) depends on (1) the initial value of \(\hat{\theta}_{k}\), which is set artificially, and (2) the real value \(\theta_{k}\), which is unknown. Even though \(\theta_{k}\) is unknown, we can readily estimate its elements from other rough estimates. For more details, please refer to Appendix A.
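For intuition, the adaptive law (25) can be discretized with a simple forward-Euler step. The sketch below uses the gain values from the simulation section of this paper, while the regressor, the initial estimate, and the step size are hypothetical stand-ins:

```python
import numpy as np

def adapt_step(theta_hat, Ybar, K1, Gamma_k, y_delay, y_des, dt):
    """One forward-Euler step of eq. (25):
    theta_hat_dot = Gamma_k^{-1} Ybar^T K1 (y(t-h) - y_d)."""
    theta_dot = np.linalg.solve(Gamma_k, Ybar.T @ K1 @ (y_delay - y_des))
    return theta_hat + dt * theta_dot

rng = np.random.default_rng(2)
p1 = 36                                  # number of unknown parameters
theta_hat = rng.standard_normal((p1, 1))
Ybar = rng.standard_normal((2, p1))      # hypothetical regressor value
K1 = np.diag([3.250, 1.025])             # gains from the simulation section
Gamma_k = 8.0 * np.eye(p1)               # Gamma_k = 8 I_36
y_delay = np.array([[140.0], [81.44]])   # delayed image position
y_des = np.array([[160.7], [120.6]])     # desired image position

theta_next = adapt_step(theta_hat, Ybar, K1, Gamma_k, y_delay, y_des, dt=1e-3)
assert theta_next.shape == (p1, 1)
```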
5. Stability Analysis

Theorem 1.
Consider the uncalibrated delayed visual servoing system described by (8), (11), (14), and (15) together with the controller (22). For a given constant \(h>0\), if there exist symmetric matrices \(K_{1}>0\), \(K_{2}>0\) and positive constants \(\varepsilon_{ij}>0\), \(\zeta_{ij}>0\), \(i=1,2\), \(j=1,2,\dots,n\), such that the following nonlinear matrix inequalities hold:
(30) \( -K_{2} + hJ_{0}^{T}\left(\Gamma_{y} + \Gamma_{y}^{T}\Gamma_{y}\right)J_{0} + \frac{h}{2}\Delta J_{ij}^{2}\left(J_{1} + J_{2}\right) + \hat{J}_{d0}^{T}\hat{J}_{d0} + \Delta G_{ij}^{2}\left(G_{1} + G_{2}\right) \le 0, \)
(31) \( -\Gamma_{y} + \left(\frac{1}{2}\left(\frac{\dot{z}_{M}}{2} - 1\right)^{2} + 1\right)K_{1}^{T}K_{1} \le 0, \)
(32) \( -K_{1} + \frac{K_{1}^{T}K_{1}}{2} \le 0, \)
where
(33) \( J_{1} = \sum_{i=1,j=1}^{i=2,j=n}\varepsilon_{ij}e_{j}e_{j}^{T}, \quad J_{2} = \sum_{i=1,j=1}^{i=2,j=n}\frac{1}{\varepsilon_{ij}}e_{j}e_{i}^{T}\left(\Gamma_{y} + I_{2}\right)e_{i}e_{j}^{T}, \quad G_{1} = \frac{1}{4}\sum_{i=1,j=1}^{i=2,j=n}\zeta_{ij}e_{j}e_{j}^{T}, \quad G_{2} = \sum_{i=1,j=1}^{i=2,j=n}\frac{1}{\zeta_{ij}}e_{j}e_{i}^{T}e_{i}e_{j}^{T}, \)
and each \(\varepsilon_{ij}\) and \(\zeta_{ij}\) denotes an arbitrary positive constant, then the system is asymptotically stable, i.e., the image error of the feature point converges to zero: \(\lim_{t\rightarrow\infty}\Delta y = 0\).
Proof 2.
Combining (14), (19), and (21), we have
(34) \( \hat{J}_{d}\dot{q} = J_{d}(q)\dot{q} + \left(\hat{J}_{d}(q) - J_{d}(q)\right)\dot{q} = Q\dot{q} + \frac{1}{2}\left(y(t-h) - y_{d}(t)\right)D_{3}\dot{q} + \left(y(t-h) - y_{d}(t)\right)\dot{q}^{+}\dot{q} + \bar{Y}\Delta\theta_{k} = z\dot{y} + \frac{1}{2}\dot{z}\Delta y + \frac{1}{2}\left(y(t) - y(t-h)\right)\dot{z} + \left(y(t-h) - y_{d}(t)\right) + \bar{Y}\Delta\theta_{k}, \)
where \(\Delta\theta_{k} = \hat{\theta}_{k} - \theta_{k}\).
Substituting the controller (22) into (12), we have the following closed-loop system:
(35) \( H(q(t))\ddot{q}(t) + \left(\frac{1}{2}\dot{H}(q(t)) + C(q(t),\dot{q}(t))\right)\dot{q} = -\hat{J}_{d}^{T}(q)K_{1}\left(y(t-h) - y_{d}\right) - K_{2}\dot{q}. \)
As mentioned above, the fact that \(\bar{Y}\) and \(\Delta\theta_{k}\) are bounded yields
(36) \( \int_{0}^{t}\Delta\theta_{k}^{T}\bar{Y}^{T}\bar{Y}\Delta\theta_{k}\,dr \le B_{M}, \quad \forall t\ge0, \)
for some positive constant \(B_{M}\).
Let us consider the following nonnegative Lyapunov-Krasovskii functional candidate:
(37) \( V(x) = \frac{1}{2}\dot{q}^{T}H(q)\dot{q} + \frac{1}{2}\Delta y^{T}K_{1}z\Delta y + \int_{-h}^{0}\int_{t+\theta}^{t}\dot{y}^{T}(s)\Gamma_{y}\dot{y}(s)\,ds\,d\theta + \frac{1}{2}\Delta\theta_{k}^{T}\Gamma_{k}\Delta\theta_{k} + \underbrace{B_{M} - \frac{1}{2}\int_{0}^{t}\Delta\theta_{k}^{T}\bar{Y}^{T}\bar{Y}\Delta\theta_{k}\,dr}_{\triangledown}, \)
where the employment of the term \(\triangledown\) follows typical practice (refer to [42], p. 118).
The time derivative of V along the system trajectory is given by
(38) \( \dot{V}(x) = \dot{q}^{T}H(q)\ddot{q} + \frac{1}{2}\dot{q}^{T}\dot{H}(q)\dot{q} + \Delta y^{T}K_{1}z\Delta\dot{y} + \frac{1}{2}\Delta y^{T}K_{1}\dot{z}\Delta y + \Delta\theta_{k}^{T}\Gamma_{k}\Delta\dot{\theta}_{k} + h\dot{y}^{T}(t)\Gamma_{y}\dot{y}(t) - \int_{t-h}^{t}\dot{y}^{T}(s)\Gamma_{y}\dot{y}(s)\,ds - \frac{1}{2}\Delta\theta_{k}^{T}\bar{Y}^{T}\bar{Y}\Delta\theta_{k}. \)
Multiplying both sides of (35) by \(\dot{q}^{T}\) from the left yields
(39) \( \dot{q}^{T}H(q)\ddot{q} + \frac{1}{2}\dot{q}^{T}\dot{H}(q)\dot{q} = -\dot{q}^{T}\hat{J}_{d}^{T}(q)K_{1}\Delta y + \dot{q}^{T}\hat{J}_{d}^{T}(q)K_{1}\left(y(t) - y(t-h)\right) - \dot{q}^{T}K_{2}\dot{q}. \)
Rewriting (34) and multiplying \(z\dot{y} + \frac{1}{2}\dot{z}\Delta y\) by \(\Delta y^{T}K_{1}\) from the left, we have
(40) \( \Delta y^{T}K_{1}z\dot{y} + \frac{1}{2}\Delta y^{T}K_{1}\dot{z}\Delta y = \Delta y^{T}K_{1}\hat{J}_{d}\dot{q} - \Delta y^{T}\left(\frac{\dot{z}}{2} - 1\right)K_{1}\left(y(t) - y(t-h)\right) - \Delta y^{T}K_{1}\Delta y - \Delta y^{T}K_{1}\bar{Y}\Delta\theta_{k}. \)
Differentiating \(\frac{1}{2}\Delta\theta_{k}^{T}\Gamma_{k}\Delta\theta_{k}\) and invoking (25) yields
(41) \( \Delta\theta_{k}^{T}\Gamma_{k}\Delta\dot{\theta}_{k} = \Delta\theta_{k}^{T}\bar{Y}^{T}K_{1}\left(\Delta y - y(t) + y(t-h)\right). \)
Substituting (39), (40), and (41) into (38), we obtain
(42) \( \dot{V}(x) = \dot{q}^{T}\hat{J}_{d}^{T}(q)K_{1}\left(y(t) - y(t-h)\right) - \dot{q}^{T}K_{2}\dot{q} - \Delta y^{T}\left(\frac{\dot{z}}{2} - 1\right)K_{1}\left(y(t) - y(t-h)\right) - \Delta y^{T}K_{1}\Delta y - \Delta\theta_{k}^{T}\bar{Y}^{T}K_{1}\left(y(t) - y(t-h)\right) - \frac{1}{2}\Delta\theta_{k}^{T}\bar{Y}^{T}\bar{Y}\Delta\theta_{k} + h\dot{y}^{T}(t)\Gamma_{y}\dot{y}(t) - \int_{t-h}^{t}\dot{y}^{T}(s)\Gamma_{y}\dot{y}(s)\,ds. \)
Likewise, with Lemma 1, the cross terms satisfy
(43) \( \dot{q}^{T}\hat{J}_{d}^{T}(q)K_{1}\left(y(t) - y(t-h)\right) \le \frac{1}{2}\dot{q}^{T}\hat{J}_{d}^{T}\hat{J}_{d}\dot{q} + \frac{1}{2}\int_{t-h}^{t}\dot{y}^{T}(s)K_{1}^{T}K_{1}\dot{y}(s)\,ds, \)
\( -\Delta y^{T}\left(\frac{\dot{z}}{2} - 1\right)K_{1}\left(y(t) - y(t-h)\right) \le \frac{1}{2}\Delta y^{T}K_{1}K_{1}^{T}\Delta y + \frac{1}{2}\left(\frac{\dot{z}}{2} - 1\right)^{2}\int_{t-h}^{t}\dot{y}^{T}(s)K_{1}^{T}K_{1}\dot{y}(s)\,ds, \)
\( -\Delta\theta_{k}^{T}\bar{Y}^{T}K_{1}\left(y(t) - y(t-h)\right) \le \frac{1}{2}\Delta\theta_{k}^{T}\bar{Y}^{T}\bar{Y}\Delta\theta_{k} + \frac{1}{2}\int_{t-h}^{t}\dot{y}^{T}(s)K_{1}^{T}K_{1}\dot{y}(s)\,ds. \)
Besides, from (10), \(h\dot{y}^{T}(t)\Gamma_{y}\dot{y}(t)\) can be rewritten as \(h\dot{q}^{T}J^{T}(q)\Gamma_{y}J(q)\dot{q}\).
Substituting the bounds obtained in (43) into (42), we have the following inequality:
(44) \( \dot{V}(x) \le \dot{q}^{T}\underbrace{\left[-K_{2} + \frac{1}{2}\hat{J}_{d}^{T}\hat{J}_{d} + hJ^{T}\Gamma_{y}J\right]}_{\mathrm{I}}\dot{q} + \int_{t-h}^{t}\dot{y}^{T}(s)\underbrace{\left[-\Gamma_{y} + \left(\frac{1}{2}\left(\frac{\dot{z}}{2} - 1\right)^{2} + 1\right)K_{1}^{T}K_{1}\right]}_{\mathrm{II}}\dot{y}(s)\,ds + \Delta y^{T}\left[-K_{1} + \frac{K_{1}^{T}K_{1}}{2}\right]\Delta y. \)
We analyze the terms I and II one by one. First, consider term I, in which both \(\hat{J}_{d}\) and \(J\) are time-varying matrices. It should be noted that we regard \(J_{m}\) and \(J_{M}\) as known, as discussed above. Using Lemma 1 and (29), we can derive
(45) \( hJ^{T}\Gamma_{y}J \le hJ_{0}^{T}\left(\Gamma_{y} + \Gamma_{y}^{T}\Gamma_{y}\right)J_{0} + \sum_{i=1,j=1}^{i=2,j=n}\frac{h\varepsilon_{ij}}{2}\Delta J_{ij}^{2}e_{j}e_{j}^{T} + \sum_{i=1,j=1}^{i=2,j=n}\frac{h}{2\varepsilon_{ij}}\Delta J_{ij}^{2}e_{j}e_{i}^{T}\left(\Gamma_{y} + I_{2}\right)e_{i}e_{j}^{T}, \)
where each \(\varepsilon_{ij}\) denotes an arbitrary positive constant.
With Lemma 1 and (27), we can likewise expand \(\frac{1}{2}\hat{J}_{d}^{T}\hat{J}_{d}\) as
(46) \( \frac{1}{2}\hat{J}_{d}^{T}\hat{J}_{d} \le \hat{J}_{d0}^{T}\hat{J}_{d0} + \sum_{i=1,j=1}^{i=2,j=n}\frac{\zeta_{ij}}{4}\Delta G_{ij}^{2}e_{j}e_{j}^{T} + \sum_{i=1,j=1}^{i=2,j=n}\frac{1}{\zeta_{ij}}\Delta G_{ij}^{2}e_{j}e_{i}^{T}e_{i}e_{j}^{T}, \)
where each \(\zeta_{ij}\) denotes an arbitrary positive constant.
Substituting (45) and (46) into term I and invoking (30) yields
(47) \( -K_{2} + \frac{1}{2}\hat{J}_{d}^{T}\hat{J}_{d} + hJ^{T}\Gamma_{y}J \le -K_{2} + hJ_{0}^{T}\left(\Gamma_{y} + \Gamma_{y}^{T}\Gamma_{y}\right)J_{0} + \frac{h}{2}\Delta J_{ij}^{2}\left(J_{1} + J_{2}\right) + \hat{J}_{d0}^{T}\hat{J}_{d0} + \Delta G_{ij}^{2}\left(G_{1} + G_{2}\right) \le 0, \)
where \(J_{1}\), \(J_{2}\), \(G_{1}\), and \(G_{2}\) are defined in (33).
Next, consider term II. In an actual visual servoing robotic system, the rate of change of the depth is bounded, so we can reasonably assume that \(\dot{z}\) is bounded: \(\dot{z} < \dot{z}_{M}\). Invoking (31), we have
(48) \( -\Gamma_{y} + \left(\frac{1}{2}\left(\frac{\dot{z}}{2} - 1\right)^{2} + 1\right)K_{1}^{T}K_{1} \le -\Gamma_{y} + \left(\frac{1}{2}\left(\frac{\dot{z}_{M}}{2} - 1\right)^{2} + 1\right)K_{1}^{T}K_{1} \le 0. \)
Combining (47), (48), and (32), we finally have \(\dot{V}\le0\) in (44), which means that the Lyapunov-Krasovskii functional V never increases and is therefore upper bounded. From (37), the boundedness of V directly implies that the joint velocity \(\dot{q}\in\mathcal{L}_{2}\cap\mathcal{L}_{\infty}\), \(y - y(t-h)\in\mathcal{L}_{2}\cap\mathcal{L}_{\infty}\), the parameter error \(\Delta\theta_{k}\in\mathcal{L}_{2}\cap\mathcal{L}_{\infty}\), and the image error \(\Delta y\in\mathcal{L}_{2}\cap\mathcal{L}_{\infty}\). The joint acceleration \(\ddot{q}\in\mathcal{L}_{\infty}\) then follows from the closed-loop dynamics (35); therefore, the joint velocity \(\dot{q}\) is uniformly continuous. Note that \(\dot{y}\in\mathcal{L}_{\infty}\) is easily derived from (10) with bounded \(J^{T}(q)\) and \(\dot{q}\in\mathcal{L}_{2}\cap\mathcal{L}_{\infty}\); thereby, \(\Delta y\) is uniformly continuous as well. \(\dot{y}\in\mathcal{L}_{\infty}\) and \(\dot{y}(t-h)\in\mathcal{L}_{\infty}\) yield \(\dot{y} - \dot{y}(t-h)\in\mathcal{L}_{\infty}\), and hence the image delay error \(y - y(t-h)\) is uniformly continuous. From (25), it can be derived that \(\dot{\theta}_{k}\in\mathcal{L}_{\infty}\); thereby, \(\theta_{k}\) is uniformly continuous. Invoking Lemma 2 and Lemma 3, we have \(\lim_{t\rightarrow\infty}\Delta y = 0\), \(\lim_{t\rightarrow\infty}\Delta\theta_{k} = 0\), and \(\lim_{t\rightarrow\infty}\left(y - y(t-h)\right) = 0\). This completes the proof.
Remark 3.
It can be clearly seen that the stability condition presented in Theorem 1 is delay dependent. The stability analyses given in [33–35] are delay-independent results, meaning that their stability conditions impose no constraint on the system delays and hence hold for delays of any magnitude. In reality, however, the delays are usually bounded, and delay-independent results are therefore conservative. To obtain less conservative results, the magnitude of the delays should be taken into account, which makes delay-dependent conditions significant for delay stability research.
Remark 4.
In order to fully control robots with six or more DOFs, more noncollinear feature points are needed. For instance, three noncollinear feature points should be used for a 6-DOF manipulator. The scheme proposed in this paper can be readily generalized to the case of multiple feature points by a method similar to that described in [28]. Considering the page limitation, we present only the single-feature-point case in this paper.
6. Simulation Results
To show the effectiveness of the control scheme described in (22) and Theorem 1, we conduct the following simulations.
The actual visual parameters are set as follows: f=0.035 m, u0=280 pixels, v0=250 pixels, ku=1800 pixels/m, kv=1800 pixels/m, and ϑ=π/2 rad, where f is focal length; u0 and v0 are coordinates of camera principal point in the image frame; ku and kv denote scale factors along axis x and y, respectively; and ϑ denotes intersection angle between axis u and axis v. The intrinsic matrix Ω therefore can be derived as
(49) \( \Omega = \begin{bmatrix} 63 & 0 & 280 & 0\\ 0 & 63 & 250 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix}. \)
For the setting of the camera’s position and pose, the Tec is set as follows:
(50) \( T_{ec} = \begin{bmatrix} 1 & 0 & 0 & 0.1\\ 0 & 1 & 0 & 0.1\\ 0 & 0 & 1 & 0.1\\ 0 & 0 & 0 & 1 \end{bmatrix}. \)
The gravitational acceleration is set as g=10 m/s2. Tbe is time varying and determined by forward kinematics of the manipulator whose parameters are given in Table 1.
Parameters of manipulator in the simulation.
Link | li | αi | di | qi | mi (kg) | lci
1 | 1 | 0 | 0 | q1 | 1 | 0.5
2 | 1 | 180 | 0 | q2 | 1 | 0.5
3 | 0 | 0 | 1 | q3 | 1 | 0.5
Notes: li denotes the link length; αi denotes the link twist; di denotes the link offset; qi denotes the joint angle; mi denotes the link mass; lci denotes the distance between the center of mass and the preceding joint.
From Property 1 and according to the ranges of \(q_{i}\), \(z\), \(L_{0}\), \(L_{3}\), and \(\partial(Rr+P)/\partial q\), in this paper we may set \(\hat{J}_{dm}\) and \(\hat{J}_{dM}\) as
(51) \( \hat{J}_{dm} = \begin{bmatrix} -602 & -120 & -85\\ -150 & -120 & -30 \end{bmatrix}, \qquad \hat{J}_{dM} = \begin{bmatrix} 602 & 120 & 85\\ 150 & 120 & 30 \end{bmatrix}, \)
and set \(\bar{Y}_{m}\) and \(\bar{Y}_{M}\) as
(52) \( \bar{Y}_{m} = \begin{bmatrix} -\Upsilon & 0_{1\times12} & -280\Upsilon\\ 0_{1\times12} & -\Upsilon & -250\Upsilon \end{bmatrix}, \qquad \bar{Y}_{M} = \begin{bmatrix} \Upsilon & 0_{1\times12} & 280\Upsilon\\ 0_{1\times12} & \Upsilon & 250\Upsilon \end{bmatrix}, \)
where
(53) \( \Upsilon = \big[\underbrace{500, \dots, 500}_{9}, \underbrace{200, \dots, 200}_{3}\big]. \)
The feature point’s coordinates w.r.t. the base frame are (150, 20)Tm. The initial position coordinates on the image plane are (140, 81.44)T, and the desired position coordinates on image plane are (160.7, 120.6)T.
Besides, we set \(\dot{z}_{M} = 0.3\) m/s here, and \(K_{1}\) and \(K_{2}\) are obtained by solving the feasibility problem of (30), (31), and (32) with the solver feasp. In this simulation, we use \(K_{1} = \operatorname{diag}(3.250, 1.025)\), \(K_{2} = \operatorname{diag}(50.689, 125.336)\), \(\Psi_{k} = 50I_{18}\), and \(\Gamma_{k} = 8I_{36}\).
Based on all the above settings, two simulations are conducted. In the first simulation, the proposed control scheme is used to track the desired position under two different constant delays: \(h_{1} = 98\) ms and \(h_{2} = 198\) ms. Figures 4(a), 5(a), 6(a), and 7(a) show the position errors, the position, the velocity, and the trajectory of the feature point on the image plane, respectively. It can be observed that the performance is almost identical under the two delays, which verifies that convergence is achieved as long as the conditions given in Theorem 1 hold. Figure 7 also shows slightly better position tracking performance with the 98 ms delay than with the 198 ms delay. To show the convergence of the estimates \(\hat{\theta}_{k}\) to the real values, we select some elements of the vector \(\hat{\theta}_{k}\): Figure 8 shows the profiles of the estimated parameters \(\hat{\theta}_{k4}\) to \(\hat{\theta}_{k12}\). It should be noted that the kinematic parameter estimates \(\hat{\theta}_{k}\) converge only when the persistent excitation (P.E.) condition is satisfied. In our simulation, we choose \(\hat{\theta}_{k}(0)\) close to the real values so that these estimated parameters can converge to them. In most cases, the estimated parameters converge to the true values only up to a scale; however, this does not affect the convergence of the image errors.
Position tracking errors. (a) Scheme 1. (b) Scheme 2.
Position of the feature point on the image plane. (a) Scheme 1. (b) Scheme 2.
Velocity of the feature point on the image plane. (a) Scheme 1. (b) Scheme 2.
Position trajectory of the feature point on the image plane. (a) Scheme 1. (b) Scheme 2.
Estimated parameters \(\hat{\theta}_{k4}\) to \(\hat{\theta}_{k12}\).
To demonstrate the superiority of the proposed control scheme, we compare two control schemes: scheme 1 and scheme 2. Scheme 1 is the method proposed in this paper, and scheme 2, originating from [30], is modified for this simulation as follows:
(54) \( \hat{J} = \hat{D} + \frac{1}{2}\Delta y\hat{D}_{3}, \qquad u(t) = g(q) - \hat{J}^{T}K_{1}\Delta y(t) - K_{2}\dot{q}. \)
It should be noted that the Jacobian matrix in scheme 2 does not account for the delay effects. We then conduct the second simulation, in which we use \(\dot{z}_{M} = 0.05\) m/s, \(K_{1} = \operatorname{diag}(50.250, 50.025)\), \(K_{2} = \operatorname{diag}(60.168, 125.336)\), \(\Psi_{k} = 100I_{18}\), \(\Gamma_{k} = 0.2I_{36}\), and \(h = 98\) ms for scheme 2.
From Figures 4(b)–7(b), it can be clearly seen that the performance of scheme 2 in the presence of the delay time \(h = 98\) ms is unsatisfactory: abnormal oscillations caused by the delays can be observed. In contrast, the proposed scheme still guarantees very satisfactory control performance and is affected little by the delayed signals. In conclusion, the second simulation shows the superiority of the proposed scheme over existing schemes, in that it eliminates the negative effects caused by delays to a great extent.
7. Conclusions
In this paper, we have proposed a control method for uncalibrated dynamic-based visual servoing robotic systems to cope with the delay problem in the visual feedback loop. To handle the unknown camera intrinsic and extrinsic parameters, we introduced the depth-independent Jacobian matrix and used linear parameterization to adaptively identify these uncertainties. We then took the delays into consideration and constructed a novel matrix called the delay-affected Jacobian matrix, based on which we proposed the adaptive controller. To prove the stability of the closed-loop system, a Lyapunov-Krasovskii functional was constructed, and delay-dependent stability conditions were provided to obtain less conservative results. Simulation results were presented to show the effectiveness of the proposed control scheme. To further validate its performance, experimental tests on real networked visual servoing robotic systems are the most appropriate next step, and this is one of the main objectives of our future research.
Appendix

A. Proof of Property 1

Proof 3.
Let \(L_{i}\) denote the ith row of \(L\). Recalling (14), we can expand \(L_{i}^{T}\frac{\partial}{\partial q}(Rx + P)\) as follows:
(A.1) \( L_{i}^{T}\frac{\partial}{\partial q}\left(Rx + P\right) = \frac{\partial}{\partial q}\left(L_{i}^{T}Rx + L_{i}^{T}P\right) = \frac{\partial}{\partial q}\big(r_{11}L_{i1}x_{1} + r_{21}L_{i2}x_{1} + r_{31}L_{i3}x_{1} + r_{12}L_{i1}x_{2} + r_{22}L_{i2}x_{2} + r_{32}L_{i3}x_{2} + r_{13}L_{i1}x_{3} + r_{23}L_{i2}x_{3} + r_{33}L_{i3}x_{3} + \rho_{1}L_{i1} + \rho_{2}L_{i2} + \rho_{3}L_{i3}\big), \)
where \(r_{hl}\) \((h,l = 1,2,3)\) denotes the \((h,l)\) element of the matrix \(R\), \(\rho_{k}\) \((k = 1,2,3)\) denotes the kth element of \(P\), \(L_{ij}\) \((i,j = 1,2,3)\) denotes the \((i,j)\) element of the matrix \(L\), and \(x_{p}\) denotes the pth element of \(x\). Let \(q_{i}\) and \(\dot{q}_{i}\) be the ith elements of \(q\) and \(\dot{q}\), respectively. When \(p_{1} = 36\), i.e., none of the elements \(r_{hl}\) and \(\rho_{k}\) equals zero, the vector \(D\dot{q}\) depends linearly on 36 unknown parameters, and \(Y_{D}(y,q,\dot{q})\) can be derived. Define \(\varrho_{hl} = \sum_{i=1}^{n}\dot{q}_{i}\,\partial r_{hl}/\partial q_{i}\) \((h,l = 1,2,3)\), \(\varrho_{k} = \sum_{i=1}^{n}\dot{q}_{i}\,\partial\rho_{k}/\partial q_{i}\) \((k = 1,2,3)\), \(\varrho^{T} = (\varrho_{11},\varrho_{12},\varrho_{13},\varrho_{21},\varrho_{22},\varrho_{23},\varrho_{31},\varrho_{32},\varrho_{33},\varrho_{1},\varrho_{2},\varrho_{3})\), and \(\theta_{ki}^{T} = (L_{i1}x_{1},L_{i1}x_{2},L_{i1}x_{3},L_{i2}x_{1},L_{i2}x_{2},L_{i2}x_{3},L_{i3}x_{1},L_{i3}x_{2},L_{i3}x_{3},L_{i1},L_{i2},L_{i3})\), \(i = 1,2,3\). Specifically, we have \(\theta_{k} = (\theta_{k1}^{T},\theta_{k2}^{T},\theta_{k3}^{T})^{T}\) and
(A.2) \( Y_{D}(y,q,\dot{q}) = \begin{bmatrix} \varrho^{T} & 0_{1\times12} & -u\varrho^{T}\\ 0_{1\times12} & \varrho^{T} & -v\varrho^{T} \end{bmatrix}. \)
When \(r_{hl}\) is independent of \(q\), \(\varrho_{hl}\) equals zero, and \(Y_{D}(y,q,\dot{q})\) can be obtained by removing the elements that equal zero; accordingly, \(\theta_{k}\) is obtained by removing the corresponding elements. When \(\rho_{k}\) is independent of \(q\), \(Y_{D}(y,q,\dot{q})\) and \(\theta_{k}\) are obtained by similar removals. In this way, we can derive the expressions of \(\theta_{k}\) and \(Y_{D}(y,q,\dot{q})\) for every \(p_{1} < 36\).
Besides, because dT is a subset of D, the linearization of dTq̇ can be a direct result of the Property. When p1=36, YD3q,q̇ can be expressed as
(A.3)YD3q,q̇=01×24ϱT.
When p1<36, i.e., ρk is independent of q, YD3q,q̇ and θk can be obtained by removing corresponding elements. Then, we can derive expression of θk and YD3q,q̇ for every p1<36. This completes the proof.
B. List of Notations & Symbols
List of notations and symbols.
Notations
Ω
Perspective projection matrix
Tbe
Homogeneous transformation matrix from the end effector to the base
R
Rotation matrix included in Tbe
T
Translation vector included in Tbe
Tec
Homogeneous transformation matrix from the camera to the end effector
r
The Cartesian coordinates of the feature point w.r.t. the robot base frame
L
The matrix consists of the left 3×3 part of ΩTec
L0
The 1st and 2nd rows of L
L3
The 3rd row of L
Jq
Jacobian matrix
Hq
The inertia matrix of manipulator dynamics
Cq,q̇
The Coriolis and centrifugal forces
gq
The gravitational force
u
The control input
D
The depth-independent Jacobian matrix
D3
The vector derived from L3∂/∂qRr+P
Q
The delay-affected depth-independent Jacobian matrix
Jd
A novel composite Jacobian matrix
K1, K2
The control gain matrices
YD∗, YD3∗
The regressor matrices
θk
The unknown parameter vector
ei
ei∈ℝ2, the column vector whose ith element is 1 and the other element is 0
ej
ei∈ℝn, the row vector whose jth element is 1 and other elements are 0
Symbols
⋅
The standard Euclidean norm
AT
The transposition of matrix A
Am
The matrix consists of the minimum elements of A
AM
The matrix consists of the maximum elements of A
Aij
The element in the ith row and the jth column of matrix A
yt−h
The visual signal delayed unknown constant duration of h
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was financed by Science and Technology Program of Tianjin, China under Grant 15ZXZNGX00290.
HutchinsonS.HagerG. D.CorkeP. I.A tutorial on visual servo control199612565167010.1109/70.5389722-s2.0-0030261036VinczeM.Dynamics and system performance of visual servoingProceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065)2000San Francisco, CA, USA64464910.1109/robot.2000.844125BenhimaneS.MalisE.A new approach to vision-based robot control with omni-directional camerasProceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 20062006Orlando, FL, USA52653110.1109/robot.2006.16417642-s2.0-33845658762DahmoucheR.AndreffN.MezouarY.Ait-AiderO.MartinetP.Dynamic visual servoing from sequential regions of interest acquisition201231452053710.1177/02783649114360822-s2.0-84859587113WeissL.SandersonA.NeumanC.Dynamic sensor-based control of robots with visual feedback19873540441710.1109/JRA.1987.10871152-s2.0-0023435206JägersandM.FuentesO.NelsonR.Experimental evaluation of uncalibrated visual servoing for precision manipulationProceedings of International Conference on Robotics and Automation1997Albuquerque, NM, USA2874288010.1109/robot.1997.606723HashimotoH.KubotaT.SatoM.HarashimaF.Visual control of robotic manipulator based on neural networks199239649049610.1109/41.1709672-s2.0-0026959655ZergerogluE.DawsonD. M.de QueriozM. S.BehalA.Vision-based nonlinear tracking controllers with uncertain robot-camera parameters20016332233710.1109/3516.9513702-s2.0-0035439215WangH.YangB.LiuY.ChenW.LiangX.PfeiferR.Visual servoing of soft robot manipulator in constrained environments with an adaptive controller2017221415010.1109/TMECH.2016.26134102-s2.0-85020921744WangH.Adaptive control of robot manipulators with uncertain kinematics and dynamics201762294895410.1109/TAC.2016.25758272-s2.0-85011596111ChenJ.DawsonD. M.DixonW. 
E.BehalA.Adaptive homography-based visual servo tracking for a fixed camera configuration with a camera-in-hand extension200513581482510.1109/TCST.2005.8521502-s2.0-26244435762LeiteA. C.LizarraldeF.Passivity-based adaptive 3d visual servoing without depth and image velocity measurements for uncertain robot manipulators2016308–101269129710.1002/acs.26692-s2.0-84955577731CorkeP. I.1994Mechanical and Manufacturing EngineeringVinczeM.AyromlouM.ChroustS.ZillichM.PonweiserW.LegensteinD.Dynamic aspects of visual servoing and a framework for real-time 3D vision for robotics2002Berlin, HeidelbergSpringer10112110.1007/3-540-45993-6_7GangloffJ. A.de MathelinM. F.High speed visual servoing of a 6 DOF manipulator using MIMO predictive controlProceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065)2000San Francisco, CA, USA3751375610.1109/robot.2000.845316GangloffJ. A.de MathelinM. F.High-speed visual servoing of a 6-d.o.f. manipulator using multivariable predictive control20031710993102110.1163/1568553033225543912-s2.0-0346331364CuvillonL.LarocheE.GangloffJ.de MathelinM.GPC versus H ∞ control for fast visual servoing of a medical manipulator including flexibilitiesProceedings of the 2005 IEEE International Conference on Robotics and Automation2006Barcelona, Spain4044404910.1109/robot.2005.15707402-s2.0-33747608488WuH.LouL.ChenC. C.HircheS.KuhnlenzK.Cloud-based networked visual servo control201360255456610.1109/TIE.2012.21867752-s2.0-84866548197NakadokoroM.KomadaS.HoriT.Stereo visual servo of robot manipulators by estimated image features without 3d reconstructionIEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. 
No.99CH37028)1999Tokyo, Japan57157610.1109/icsmc.1999.814155DaiN.NakamuraM.KomadaS.HiraiJ.Tracking of moving object by manipulator using estimated image feature and its error correction on image planesThe 8th IEEE International Workshop on Advanced Motion Control, 2004. AMC '042004Kawasaki, Japan65365710.1109/amc.2004.1297946KinbaraI.KomadaS.HiraiJ.Visual servo of active cameras and manipulators by time delay compensation of image features with simple on-line calibration2006 SICE-ICASE International Joint Conference2007Busan, Republic of Korea5317532210.1109/sice.2006.3153182-s2.0-34250692612InoueT.HiraiS.Robotic manipulation with large time delay on visual feedback systems2010 IEEE/ASME International Conference on Advanced Intelligent Mechatronics2010Montreal, ON, Canada1111111510.1109/aim.2010.56959122-s2.0-78651506776GaoZ.SuJ.Estimation of image Jacobian matrix with time-delay compensation for uncalibrated visual servoing2009261218234LiuY. H.WangH.LamK.Dynamic visual servoing of robots in uncalibrated environments2005 IEEE/RSJ International Conference on Intelligent Robots and Systems2006Edmonton, Canada3131313610.1109/iros.2005.15453792-s2.0-79957979903WangH.LiuY. H.Uncalibrated visual tracking control without visual velocityProceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 20062006Orlando, FL, USA2738274310.1109/robot.2006.16421152-s2.0-33845652759ShenY.SunD.LiuY.-H.LiK.Asymptotic trajectory tracking of manipulators using uncalibrated visual feedback200381879810.1109/TMECH.2003.8091332-s2.0-0037349228LiuY.-H.WangH.WangC.LamK. K.Uncalibrated visual servoing of robots using a depth-independent interaction matrix200622480481710.1109/tro.2006.8787882-s2.0-33747619166WangH.LiuY.-H.ZhouD.Dynamic visual tracking for manipulators using an uncalibrated fixed camera200723361061710.1109/TRO.2007.8950912-s2.0-34447295365LizarraldeF.LeiteA. C.HsuL.CostaR. 
R.Adaptive visual servoing scheme free of image velocity measurement for uncertain robot manipulators20134951304130910.1016/j.automatica.2013.01.0472-s2.0-84876668643LiangX.WangH.LiuY.-H.ChenW.ZhaoJ.A unified design method for adaptive visual tracking control of robots with eye-in-hand/fixed camera configuration2015599710510.1016/j.automatica.2015.06.0182-s2.0-84937932934LiT.ZhaoH.Global finite-time adaptive control for uncalibrated robot manipulator based on visual servoing20176840241110.1016/j.isatra.2016.10.0062-s2.0-8501485258028291528QiaoW.SipahiR.Consensus control under communication delay in a three-robot system: design and experiments201624268769410.1109/TCST.2015.24587762-s2.0-84939449440LiuY.-C.ChopraN.Controlled synchronization of heterogeneous robotic manipulators in the task space201228126827510.1109/TRO.2011.21686902-s2.0-84857033731WangH.Passivity based synchronization for networked robotic systems with uncertain kinematics and dynamics201349375576110.1016/j.automatica.2012.11.0032-s2.0-84875217106LiangX.WangH.LiuY. H.ChenW.HuG.ZhaoJ.Adaptive task-space cooperative tracking control of networked robotic manipulators without task-space velocity measurements201646102386239810.1109/TCYB.2015.24776062-s2.0-8494244668226415197HashimotoK.NagahamaK.NoritsuguT.A mode switching estimator for visual servoingProceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292)2002Washington, DC, USA1610161510.1109/robot.2002.10147732-s2.0-0036061153SebastiánJ. M.PariL.AngelL.TraslosherosA.Uncalibrated visual servoing using the fundamental matrix200957111010.1016/j.robot.2008.04.0022-s2.0-56349109445ShademanA.FarahmandA.-m.JägersandM.Robust jacobian estimation for uncalibrated visual servoing2010 IEEE International Conference on Robotics and Automation2010Anchorage, AK, USA5564556910.1109/robot.2010.55099112-s2.0-77955803265KhalilH. K.20023rdUpper Saddle River, NJ, USAPrentice-Hall, Inc.SlotineJ.-J. 
E.LiW.On the adaptive control of robot manipulators198763495910.1177/0278364987006003032-s2.0-0023416013GarofaloF.CelentanoG.GlielmoL.Stability robustness of interval matrices via Lyapunov quadratic forms199338228128410.1109/9.2504722-s2.0-0027541559LozanoR.BrogliatoB.an MaschkeO. E.Dissipative systems analysis and control. Theory and applications20011212221110.1088/0957-0233/12/12/703