Discrete-Time Sliding Mode Control Coupled with Asynchronous Sensor Fusion for Rigid-Link Flexible-Joint Manipulators

This paper proposes a novel discrete-time terminal sliding mode controller (DTSMC) coupled with an asynchronous multirate sensor fusion estimator for rigid-link flexible-joint (RLFJ) manipulator tracking control. A camera is employed as an external sensor to observe the RLFJ manipulator's state, which cannot be obtained directly from the encoders because of the gear mechanisms and flexible joints. The extended Kalman filter- (EKF-) based asynchronous multirate sensor fusion method handles the slow sampling rate and latency of the camera by using the motor encoders to recover the missing information between two visual samples. In the proposed control scheme, a novel sliding mode surface is presented that takes advantage of both the estimation error and the tracking error. It is proved that the proposed controller achieves convergence for tracking control. Simulation and experimental studies validate the effectiveness of the proposed approach.


Introduction
In many high-performance robotic manipulator applications, positioning of the end effector (and/or links) is critical, as the ultimate goal is to track a desired trajectory. However, achieving high accuracy and dynamic performance is made more challenging by the nonlinear flexibilities found in rigid-link flexible-joint (RLFJ) manipulators. Some researchers have designed observers to estimate states in the robot model, since link positions/velocities are typically not measured in industrial robot systems. Nicosia and Tomei [1, 2] designed a controller combined with an observer that estimates motor positions and/or velocities for RLFJ manipulators. Dixon et al. [3] developed a globally adaptive output-feedback tracking controller for the RLFJ dynamics based on a link velocity filter. Global output-feedback methods are not easily implemented in real systems because they require link position measurements, which are rarely available in practice. A controller was designed in [4, 5] using a neural network (NN) observer to estimate link and motor positions/velocities and dynamic parameters, but NN observer-based controllers do not take advantage of the motor positions. The Kalman filter (KF) and extended Kalman filter (EKF) have been used to estimate joint and/or end-effector positions, driving torque, and dynamic parameters for manipulators [6]. Lightcap and Banks [7] designed an EKF-RLFJ controller that uses an EKF to estimate link and motor positions/velocities. García et al. [8] proposed compliant robot motion controllers that use an EKF to fuse wrist force sensors, accelerometers, and joint position sensors. EKF-based sensor fusion was presented by Jassemi-Zargani and Necsulescu [9] to estimate acceleration for operational space control of a robotic manipulator. However, these reported EKF-based control methods do not address asynchronous measurements from multirate sensors for RLFJ manipulators.
Observer-based sliding mode control (SMC) is one of the most important approaches for handling systems with uncertainties and nonlinearities [10]. For the RLFJ manipulator system, observer-based SMC has been widely studied since the states of the manipulator (e.g., joint accelerations and velocities) cannot always be measured directly [11]. Terminal SMC (TSMC) is used in rigid manipulator control (e.g., robust TSMC and finite-time control) since it has superior properties compared with conventional SMC, such as better tracking precision and faster error convergence [12-14]. In particular, the singularity problem of TSMC was addressed in [15, 16]. However, most of these works use the continuous-time dynamic model of the manipulator, whereas discrete-time models arise naturally in real digital control systems, where discrete-time SMC benefits from advances in digital electronics, computer control, and robotics. Corradini and Orlando [17] presented a robust DSMC coupled with an uncertainty estimator designed for planar robotic manipulators. However, joint flexibilities are not considered in those controller designs.
To remedy such limitations, this paper proposes a novel controller, AMSFE-DTSMC, which implements a DTSMC coupled with an asynchronous multirate sensor fusion estimator. The robotic multirate sensor unit contains vision and non-vision-based sensors whose sampling rates and processing times differ. In the proposed scheme, the delayed, slowly sampled vision measurement is treated as a kind of periodic "out-of-sequence" measurement (OOSM) [18], which is used to update the non-vision-based state estimate in an EKF-based asynchronous multirate sensor fusion algorithm. Using the position and velocity estimates from the sensor fusion estimator, the DTSMC is designed with a novel sliding surface that accounts for both the estimation error and the tracking error. The main contributions of this work are summarized as follows.
(i) We propose a novel tracking control scheme, AMSFE-DTSMC, based on a DTSMC coupled with a sensor fusion estimator for an RLFJ manipulator. The sliding surface of AMSFE-DTSMC is designed by utilizing both the estimation error and the tracking error.
(ii) We construct an asynchronous multirate measurement model for the robotic sensors and design a sensor fusion algorithm to fuse such asynchronous multirate data for robotic state estimation.
This paper is organized as follows. Section 2 gives the problem formulation. In Section 3, the multirate sensor data fusion algorithm is presented. Section 4 designs a novel DTSMC for tracking control. The simulation and experimental studies are presented in Section 5. The paper ends with conclusions about the proposed approach.

Problem Formulation
In this paper, a robotic manipulator system is considered with a sensor unit comprising joint motor encoders and cameras fixed in the workspace. The tracking control scheme for RLFJ manipulators can be developed using the robotic state estimate obtained via multisensor fusion. The state of the robot is estimated by these sensors directly and indirectly; however, a single sensor has limitations in obtaining precise information. To fuse asynchronous multirate data from different sensors, the dynamic and sensor models are formulated in this section.

Discrete Rigid-Link Flexible-Joint Robot
Model. The discrete dynamic model of an n-link RLFJ manipulator, given in equations (1a)-(1d), can be obtained by minimizing the action functional suggested by Nicosia [2]. Here, q(k), q̇(k) and q_m(k), q̇_m(k) denote the positions and velocities of the link and motor angles at time step k, respectively; f(·) represents the centrifugal, Coriolis, and gravitational forces; K_s, M_m, and F_m are constant, diagonal, positive-definite matrices representing the joint stiffness, motor inertia, and motor viscous friction, respectively; the joint deflection δ(k) is defined as the difference between the motor and link positions; and u(k) denotes the motor torque. The unknown or varying dynamic parameter in the robotic model is defined as θ(k), which satisfies θ(k + 1) − θ(k) = T w_θ(k), and the dynamic uncertainties of the links and motors are modeled with the random variables w_l(k) and w_m(k).
Define a state vector x(k) comprising the link and motor positions and velocities and the parameter θ(k). The dynamics in equations (1a)-(1d) can then be transformed into a state-space representation.
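The model terms named above fit the usual Spong-type RLFJ dynamics; the following Euler-discretized form (sampling period T) is a sketch consistent with those definitions, not necessarily the paper's exact equations (1a)-(1d):

```latex
% Sketch of Euler-discretized Spong-type RLFJ dynamics, \delta(k) = q_m(k) - q(k)
\begin{aligned}
q(k+1)        &= q(k) + T\,\dot q(k),\\
\dot q(k+1)   &= \dot q(k) + T\,M^{-1}(q,\theta)\big[K_s\,\delta(k) - f(q,\dot q,\theta)\big] + T\,w_l(k),\\
q_m(k+1)      &= q_m(k) + T\,\dot q_m(k),\\
\dot q_m(k+1) &= \dot q_m(k) + T\,M_m^{-1}\big[u(k) - F_m\,\dot q_m(k) - K_s\,\delta(k)\big] + T\,w_m(k),\\
\theta(k+1)   &= \theta(k) + T\,w_\theta(k).
\end{aligned}
```

The first two lines are the link dynamics driven through the joint stiffness K_s, and the next two are the motor dynamics driven by the torque input u(k).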

Measurement Model. The observation vector is given by
where [q_m(k), q̇_m(k)]^T contains the position and velocity of the motor angles and y(k) represents the position of the end effector in image space, observed by a fixed camera.
By using the standard pinhole camera model, the mapping from Cartesian space to image space is given by y(k) = I_m(r(k)), where r(k) denotes the position of the end effector in Cartesian space and I_m(·) is a transformation function from task space to image space. From the forward kinematics, the position relationship between the robotic joints and the end effector is described by r(k) = F_k(q(k)), where F_k(·) is a transformation function from joint space to task space. From equations (6) and (7), the joint position can be observed by a fixed camera, and the measurement equation is given in terms of z_i(k), the measurement of the state from the different sensors, where i = c denotes the camera measurement and i = m the measurement from the motors. We assume that the process noise ω(k) and the measurement noise ν(k) are independent, identically distributed white Gaussian noise at time t_k. The vision measurement available at time t_k is obtained from a visual image taken at time t_κ = t_k − t_d, where t_d denotes the delay time. The relation between the sensors' sampling rates is s_m = l·s_c, where s_m and s_c denote the sampling rates of the motor encoders and the visual sensor, respectively, and l is a positive integer. Figure 1 shows the sampling rate difference between the vision and non-vision measurements, where the l-step lag of the vision measurements is illustrated. According to these characteristics of the visual sensor, we treat the vision measurements as periodic l-step-lag out-of-sequence measurements.
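As a concrete sketch of the composed camera measurement z_c = I_m(F_k(q)) for a planar one-link case (all numeric parameters here are hypothetical illustration values, not the paper's):

```python
import numpy as np

# Hypothetical parameters: focal length f, camera height H_c, pixel scale.
F_LEN, H_CAM, PX_SCALE = 0.008, 1.0, 1e5   # m, m, px/m
LINK_LEN = 0.45                            # hypothetical link length (m)

def forward_kinematics(q):
    """F_k(.): joint angle -> end-effector Cartesian position, planar one-link arm."""
    return np.array([LINK_LEN * np.cos(q), LINK_LEN * np.sin(q)])

def pinhole_project(r):
    """I_m(.): planar pinhole map y = scale * (f / H_c) * r, valid when the
    optical axis is perpendicular to the motion plane."""
    return PX_SCALE * (F_LEN / H_CAM) * np.asarray(r)

def camera_measurement(q_delayed):
    """z_c = I_m(F_k(q)) evaluated at the delayed joint angle q(t_k - t_d)."""
    return pinhole_project(forward_kinematics(q_delayed))
```

Because the camera image is delayed, the argument passed to `camera_measurement` is the joint angle l steps in the past, which is what makes the vision sample an out-of-sequence measurement.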

Remark 1.
The velocity of the joints (and/or end effector) can also be observed by vision sensors via ẏ(k) = J_I(r) J_k(q) q̇(k), where J_I(r) is the image Jacobian matrix and J_k(q) denotes the Jacobian matrix from joint space to task space. However, the velocity measurement is always impaired by the noisy, slowly sampled image data and the dynamic uncertainties in the RLFJ manipulator system. Therefore, the velocity measurement in the image space is not used in the measurement model.

Remark 2.
Assume that the camera covers the entire workspace of the robot. With prior knowledge of the motion planning of the robot, it is assumed that there is a one-to-one mapping from the image space to the joint space in the real robotic system.

Asynchronous Multirate Sensor Fusion Estimation
According to the system model described in Section 2, the robotic link position can be estimated using the multisensor system. For asynchronous multirate sensors, the sensor fusion method is designed to use the late measurements to update the current state estimate and obtain a more accurate estimation in two steps:
Step 1: when the vision measurement is unavailable, the robotic state is estimated using the non-vision-based sensors, which maintain a real-time estimate by recovering the missing information between two vision samples.
Step 2: when the delayed vision measurement arrives at a vision sample time t = t_lk, k = 1, 2, . . ., the state is re-estimated to cope with the limitations of the other sensors in measuring absolute position.

Estimation Using Non-Vision Sensor Measurements.
From Figure 1, before the (k − 1)th vision frame is available, we have the estimate of the state at time t_lk using the non-vision sensor measurements, where Z_m^lk = {z_m(1), z_m(2), . . . , z_m(lk)} represents all motor encoder measurements up to time t_lk. Using the extended Kalman filter (EKF), the estimate at time t_lk via the motor encoder measurements is given by the EKF recursion in equations (13)-(17), where A_lk, W_lk, H_lk, and V_lk are Jacobian matrices and K_lk is the correction gain. According to these equations, the state and covariance estimates are updated with the fast-sampled measurements.
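The fast-rate predict-correct cycle can be sketched generically as follows (the process and measurement maps and their Jacobians are caller-supplied stand-ins for the paper's A_lk, H_lk, and gain K_lk; this is a standard EKF step, not the paper's exact recursion):

```python
import numpy as np

def ekf_step(x, P, z_m, f, h, A_jac, H_jac, Q, R):
    """One EKF predict-correct cycle at the fast encoder rate.
    f, h: process and measurement maps; A_jac, H_jac: their Jacobians
    evaluated at the current estimate; Q, R: noise covariances."""
    # Predict through the process model.
    x_pred = f(x)
    A = A_jac(x)
    P_pred = A @ P @ A.T + Q
    # Correct with the motor encoder measurement z_m.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # correction gain (cf. K_lk)
    x_new = x_pred + K @ (z_m - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Running this at every encoder sample keeps the estimate current between two vision frames, which is exactly the role of Step 1 above.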

Re-Estimation via Vision Sensor Measurements.
Suppose that at time t_lk the vision measurement taken at time t_κ = t_{lk−l} is obtained. A new estimate x̂(lk|κ⁺) is calculated using the information in the delayed (k − 1)th vision measurement z_c(κ).
The delayed measurement z_c(κ) observed by the vision sensor is used to correct the accumulated estimation errors caused by the fast-sampling sensors. The estimate x̂(lk|lk) is updated with the delayed vision measurement z_c(κ) as

x̂(lk|κ⁺) = x̂(lk|lk) + P_xz(lk, κ|lk) P_zz(κ|lk)⁻¹ [z_c(κ) − ẑ(κ|lk)],

where P_xz(lk, κ|lk) represents the cross covariance between x̂(lk|lk) and ẑ(κ|lk), and P_zz(κ|lk) is the covariance of ẑ(κ|lk), the estimate of the measurement at time t_κ. Using the EKF, P_xz(lk, κ|lk), P_zz(κ|lk), and ẑ(κ|lk) are obtained by assuming that the function F_{lk,κ}(·) is invertible. Define F_{lk,κ}⁻¹(·) = F_{κ,lk}(·), the backward transition function that maps the state back from t_lk to t_κ. Since the previous state x(κ) is not affected by the present input u(lk), the state relationship between t_κ and t_lk is given by this backward transition, where the equivalent process noise ω(lk, κ) and its covariance Q(lk, κ) are computed accordingly. The estimate in equation (19) can then be determined using the estimated process noise ω̂(lk, κ|lk). Estimating the process noise as in equations (13)-(17), P_xz(lk, κ|lk) and P_zz(κ|lk) are then obtained, with the covariances P(κ|lk) and P_xω(lk, κ|lk) derived accordingly.
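The final fusion step above has a compact form once the retrodicted quantities are in hand; the following sketch implements only that update (the cross covariance P_xz, measurement covariance P_zz, and predicted measurement ẑ(κ|lk) are assumed to come from the backward pass described in the text):

```python
import numpy as np

def oosm_update(x_lk, P_lk, z_c, z_hat, P_xz, P_zz):
    """Fuse an l-step-lagged vision measurement z_c(kappa) into the current
    estimate x(lk|lk), producing x(lk|kappa+). Inputs P_xz, P_zz, z_hat are
    the retrodicted covariances and predicted measurement."""
    K = P_xz @ np.linalg.inv(P_zz)       # delayed-measurement gain
    x_new = x_lk + K @ (z_c - z_hat)     # correct with the stale vision frame
    P_new = P_lk - K @ P_xz.T            # covariance reduction from the frame
    return x_new, P_new
```

The stale frame thus pulls the current estimate back toward the absolute position seen by the camera, removing the drift accumulated from encoder-only updates.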

Summary of the Fusion Estimate
Method. The state of an RLFJ manipulator is estimated using the indirect measurements from asynchronous multirate sensors. The sensor fusion estimate can be implemented in practice using a switching mechanism keyed to the sampling times. As shown in Figure 1, the update equations are chosen according to the sampling time, and the state estimate is formed by switching between the encoder-only update and the delayed-vision re-estimation.
Remark 3. The exponential convergence of the sensor fusion estimate can be proved in a way similar to that presented in [19], which gives a more detailed stability result.
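The switching mechanism reduces to a simple schedule over the fast time index; this toy scheduler (not the paper's code) shows which update runs at each step for a rate ratio l:

```python
def fusion_schedule(n_steps, l):
    """Toy switching mechanism: every fast (encoder) step runs the EKF
    update; every l-th step a vision frame arrives, carrying the image
    taken l steps earlier, and additionally triggers the OOSM re-estimate."""
    plan = []
    for k in range(1, n_steps + 1):
        if k % l == 0:
            plan.append("encoder+vision")   # delayed frame from step k - l
        else:
            plan.append("encoder")
    return plan
```

For example, with l = 4 the re-estimation fires at steps 4, 8, 12, . . ., matching the vision sample instants t_lk in Figure 1.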

Rigid-Link Flexible-Joint Tracking Control
In this section, the discrete-time terminal sliding mode tracking control based on the fusion estimate (AMSFE-DTSMC) is presented for rigid-link flexible-joint manipulators whose state is estimated by the sensor fusion method described in the previous section. The controller is designed using both the link position and velocity from the sensor fusion estimate. To design the controller, the model in (1a)-(1d) can be written in the compact form of equations (29a) and (29b), where θ denotes the variable that collects the dynamic parameters of the links and motors.
To formulate the tracking control, define the tracking error q_t(k) and the estimation error q_e(k) at time t_k, where q_d(k) is the desired position and q̂(k) denotes the estimated position. Define the reference velocities for tracking and estimation, where q̂̇(k) denotes the estimate of q̇(k) and Λ_t, Λ_e are constant diagonal matrices.

Define the filtered variables including the estimation error, and consider the discrete terminal sliding surface s(k) in equation (33), where λ is a positive constant diagonal parameter matrix and p = p_1/p_2, in which p_1 and p_2 are positive odd integers satisfying p_2 > p_1. Motivated by the reaching law presented by Gao et al. in [20], we use the exponential discrete sliding mode reaching law (equation (34))

s(k + 1) = (1 − hT) s(k) − εT sgn(s(k)),

where sgn(·) is the signum function, ε > 0, and h > 0.
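A minimal numeric sketch of the terminal term and the reaching law, using the gains later chosen in the simulation study (the proposed surface itself also involves the estimation-error filtered variable, which is omitted here; the surface shown is the comparison DTSMC surface):

```python
import numpy as np

def frac_power(x, p1=3, p2=5):
    """Signed fractional power x^(p1/p2) with p1, p2 positive odd integers,
    p2 > p1, so the sign of x is preserved for negative arguments."""
    return np.sign(x) * np.abs(x) ** (p1 / p2)

def comparison_surface(qt_dot, qt, lam=1.5):
    """Comparison DTSMC surface s = q_t_dot + lam * q_t^(p1/p2)."""
    return qt_dot + lam * frac_power(qt)

def reaching_law_next(s, T=0.001, eps=25.0, h=2.0):
    """Gao-style exponential discrete reaching law:
    s(k+1) = (1 - h*T) * s(k) - eps*T*sgn(s(k))."""
    return (1.0 - h * T) * s - eps * T * np.sign(s)
```

The (1 − hT) factor contracts s exponentially toward zero, while the −εT sgn(s) term forces a crossing of the sliding manifold in finite steps; the terminal exponent p_1/p_2 sharpens convergence near q_t = 0.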
Since the system states cannot be measured directly, the terms that contain the variables q(k), q̇(k), and θ must be estimated in the controller design. Assume that the estimation errors and uncertainties are bounded: the true dynamic terms M(q, θ) and F(q, q̇, q̇_m, T, θ) differ from their estimates M̂ and F̂, computed from the estimated states, by bounded perturbations ΔM and ΔF that comprise the estimation error and system uncertainties and satisfy ‖ΔM‖ ≤ δ_M and ‖ΔF‖ ≤ δ_F for constants δ_M and δ_F. Here q̇_m denotes q̇_m(k + 1).

Theorem 1.
Consider the rigid-link flexible-joint manipulator system described by equations (29a) and (29b) and the discrete sliding manifold described by equation (33). Using the reaching law in equation (34), a stable control law is designed as in equation (37), where ε and h are positive diagonal matrices that satisfy the stated inequalities.

Proof. Substituting the control law (37) into the rigid-link flexible-joint system equations (29a) and (29b), the error dynamics are obtained.

Simulation Study.
The results obtained from the simulation of the proposed control scheme on the two-link RLFJ manipulator shown in Figure 2 are presented in this section. In the simulation, the aim is to make the RLFJ manipulator track the desired trajectories q_1,d = sin(πt) and q_2,d = cos(πt) from the initial position [q_1i, q_2i] = [0.5, 0.5].
The robotic dynamic parameters are given in terms of m_i and l_i, listed in Table 1, where, for i = 1, 2, m_i and l_i denote the mass and length of the ith link; k_m = diag{10, 10} N·m/rad, M_m = diag{0.5, 0.5} kg·m², and F_m = diag{4, 0.5} N·m·s/rad. To demonstrate the influence of the estimation on the tracking performance, a DTSMC controller without estimation is also simulated. The DTSMC control law is given with the sliding surface s(k) = q̇_t(k) + λ q_t(k)^p, which differs from the proposed sliding mode surface.
The parameters of the proposed controller and the DTSMC are selected as T = 0.001 s, h = diag{2, 2}, λ = diag{1.5, 1.5}, Λ_t = diag{5, 5}, and ε = diag{25, 25}. The model parameter is the constant θ(k) = 0.15, with initial estimate 0; p_1 = 3, p_2 = 5, ξ_M = diag{3, 3}, and ξ_F = diag{1, 1}. In the simulation, as shown in Figure 3, we assume that the end effector is observed by a fixed camera whose delayed measurements are used directly to calculate the joint position. The delay time of the slow measurement is 0.125 s. The process and measurement noise covariances are chosen as Q = 0.01I and R = 0.1I. The initial joint positions (joint 1, joint 2) for tracking control are (0.5 rad, 0.5 rad), and for the fusion estimator (0.9 rad, 0.9 rad). The initial velocities for tracking control are (0, 0), and for estimation (0.25 rad/s, 0). The estimation errors of position and velocity are plotted in Figure 4, and the estimation errors of the parameter θ are shown in Figure 5. The position tracking of the proposed method for the two links is shown in Figures 6 and 7, where the comparative result of SMC without the fusion estimator is also plotted. The simulation results clearly indicate that the proposed approach guarantees convergence of the tracking errors and achieves better tracking accuracy.

Experimental Study.
To validate the applicability of the proposed control scheme, a single-link flexible-joint manipulator with a fixed camera is employed as the experimental plant, shown in Figure 8. The aim is to make the end effector move along the desired trajectory. To observe the state of the end effector, the calibrated camera is fixed perpendicular to the robot motion plane. The coordinate relationship between Cartesian space and image space is shown in Figure 9. Camera measurements are obtained from the image sequences shown in Figure 10. We define the position in the image coordinates y(k) = [x_m(k), y_m(k)], given in Figure 11, and the position in Cartesian coordinates r(k) = [x_c(k), y_c(k)]. Using the pinhole camera model, the mapping from joint space to image space I_k(·) in equation (14) is calculated, where H_c is the perpendicular distance between the camera and the robot motion plane and f denotes the focal length of the camera. The joint position is calculated using the proposed fusion estimation method, shown as the red solid line in Figure 12. To show the detailed performance of the fusion estimation method, Figure 13 shows the joint position estimation error.
To validate the performance of the fusion estimate-based DTSMC, comparative experiments were conducted. The proposed controller and a DTSMC without the estimator are employed in this test. The parameters are selected as T = 0.01 s, h = 3, λ = 2, Λ_t = 4, ε = 20, θ(k) = 0.05, p_1 = 3, p_2 = 5, ξ_M = 0.2, and ξ_F = 2. According to the comparative position tracking performance shown in Figure 14, the proposed controller clearly provides superior behavior.

Conclusion
A novel RLFJ manipulator tracking controller, AMSFE-DTSMC, is proposed in this paper based on a DTSMC coupled with asynchronous multirate sensor fusion. The states of the manipulator are estimated by an EKF-based sensor fusion algorithm that combines asynchronous multirate measurements from visual and non-vision-based sensors. Compared with the non-vision measurements, the visual measurements are treated as periodic out-of-sequence measurements and are used to re-estimate the state. With this state estimate, the DTSMC is designed using a novel sliding surface that includes both the tracking error and the estimation error. Using the Sarpturk inequalities, boundedness of the controlled variables is proved. The effectiveness of the proposed approach is demonstrated in simulation and experimental studies.

Data Availability
No data were used to support this study.

Disclosure
An earlier version of the multirate sensor fusion-based DTSMC was presented at the 10th International Conference on Control and Automation (ICCA) [12]; however, a completely new controller is designed in this paper, using a new sliding surface that includes both the tracking error and the estimation error, and the effectiveness of the proposed approach is demonstrated in both simulation and experimental studies.

Conflicts of Interest
The authors declare that they have no conflicts of interest.