This paper addresses hybrid force/position control in robotic manipulation
and proposes an improved radial basis function (RBF) neural network
controller, a robust controller for the force control loop based on the
Hamilton-Jacobi-Isaacs (HJI) principle. The method compensates for
uncertainties in the robot system by exploiting the approximation property
of the RBF neural network. The approximation error of the neural network is
regarded as an external disturbance of the system, and it is eliminated by
the robust control method. Since the conventional fixed structure of the RBF
network is not optimal, a resource allocating network (RAN) is adopted in
this paper to adjust the network structure online and avoid underfitting.
Finally, the advantages in system stability and transient performance are
demonstrated by numerical simulations.
1. Introduction
During robotic operation, the end-effector may come into tactile contact with the environment, which involves a force interaction between the end-effector and the environment. In addition to position control, force control is necessary for the robot to fulfill such tasks well. Raibert and Craig [1] first introduced this idea in 1981. Since then, many researchers have proposed and explored new hybrid control strategies, for example, by combining them with visual information [2–4].
Due to uncertainties in the robot model, the system's performance degrades greatly or the system even becomes unstable, so robust control methods for robots have attracted wide attention. By using a fixed controller structure, a robust method can eliminate the impact of the uncertainty and ensure the stability of the system during operation.
The main assumption of the method is that only the upper bound of the uncertainty is known. However, this upper bound is difficult to measure, which is the limitation of the robust control method. To overcome this limitation, radial basis function (RBF) neural networks (RBFNNs) are used to approximate the unknown dynamics and compensate for what robust control alone cannot handle. An RBFNN has a compact topology and converges rapidly, and its structural parameters can be learned separately [5–8]. However, a fixed or overly complex structure makes the learning time too long or wastes network resources. Therefore, we use a resource allocating network (RAN) in this paper. The RAN method allocates new units at the sampling points with the largest errors; by doing so, the network can perform self-learning and its complexity is reduced [9, 10]. The radial basis functional link network (RBFLN) adds weights directly from the input layer to the output layer; therefore, it not only retains the advantages of the RBF network but also compensates for its slow response.
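The allocation decision of a RAN can be sketched with Platt's classical novelty test: a new hidden unit is added only when the current input is far from all existing centers and the approximation error is large. This is a minimal illustrative sketch, not the paper's exact variant (which selects the largest-error sampling point); the thresholds `dist_thresh` and `err_thresh` are hypothetical.

```python
import numpy as np

def ran_should_add_unit(x, centers, error, dist_thresh=0.5, err_thresh=0.05):
    """Platt-style novelty criterion for a resource allocating network (RAN):
    allocate a new hidden unit only if the input is far from every existing
    center AND the current approximation error is large."""
    if not centers:
        return True  # empty network: the first sample always allocates a unit
    d_min = min(np.linalg.norm(x - c) for c in centers)
    return bool(d_min > dist_thresh and abs(error) > err_thresh)
```

When neither condition holds, a RAN instead adjusts the existing weights, which is what keeps the network from growing without bound.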
2. Manipulator Dynamics
The dynamic equation of the $n$-link manipulator in joint-space coordinates is given by
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau_p + \tau_f + J^{T}f + \omega(q,\dot{q},t), \tag{2.1}$$
where $q\in\mathbb{R}^n$ is the vector of joint angles, $\dot{q}\in\mathbb{R}^n$ the vector of joint angular velocities, $\ddot{q}\in\mathbb{R}^n$ the vector of joint angular accelerations, $M(q)\in\mathbb{R}^{n\times n}$ the symmetric positive definite inertia matrix, $C(q,\dot{q})\dot{q}\in\mathbb{R}^n$ the vector of Coriolis and centrifugal forces, $G(q)\in\mathbb{R}^n$ the gravitational vector, $\tau_p$ the vector of joint actuator torques in the position control loop, $\tau_f$ the vector of joint actuator torques in the force control loop, $f\in\mathbb{R}^n$ the contact force between the end-effector and the environment, $J\in\mathbb{R}^{n\times n}$ the Jacobian matrix, and $\omega(q,\dot{q},t)$ the vector of external disturbance joint torques and unmodeled dynamics.
In the position loop, the simplest PD controller can be expressed as
$$\tau_p = -K_p e_p - K_v \dot{e}_p, \tag{2.2}$$
where $K_p$, $K_v$ are constant gain matrices, $e_p = q - q_d$, and $\dot{e}_p = \dot{q} - \dot{q}_d$.
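The PD law above can be sketched directly; this is a minimal illustration in which the gain matrices and trajectories are arbitrary example values.

```python
import numpy as np

def pd_torque(q, q_d, q_dot, q_d_dot, Kp, Kv):
    """Position-loop PD law: tau_p = -Kp*e_p - Kv*e_p_dot,
    with e_p = q - q_d the joint-space tracking error."""
    e_p = q - q_d
    e_p_dot = q_dot - q_d_dot
    return -Kp @ e_p - Kv @ e_p_dot
```

With the joints exactly on the desired trajectory the commanded torque is zero; any position or velocity error produces a restoring torque proportional to the gains.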
In the force loop, the dynamic equation should be transformed from joint space to Cartesian space [1, 7]. Based on $\dot{r} = J\dot{q}$ and $\ddot{r} = \dot{J}\dot{q} + J\ddot{q}$, (2.1) can be rewritten as
$$M_r\ddot{r} + C_r\dot{r} + G_r = U + \omega_r + f, \tag{2.4}$$
where
$$M_r = J^{-T}M(q)J^{-1},\quad C_r = J^{-T}\bigl(C(q,\dot{q}) - M(q)J^{-1}\dot{J}\bigr)J^{-1},\quad G_r = J^{-T}G(q),\quad U = J^{-T}\tau_f,\quad \omega_r = J^{-T}\omega(q,\dot{q},t). \tag{2.5}$$
Equation (2.4), expressed in Cartesian coordinates, has the following important property.
Assume that $r_d$ is the desired trajectory and $f_d$ is the desired force. The force between the end-effector and the environment is given by the following expression [1, 8]:
$$f = G_e(r - r_e),\qquad e_f = f - f_d, \tag{2.6}$$
where $G_e$ is the environment stiffness and $r_e$ is the reference position of the environment, so $\dot{r} = G_e^{-1}(\dot{e}_f + \dot{f}_d)$ and $\ddot{r} = G_e^{-1}(\ddot{e}_f + \ddot{f}_d)$.
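The linear-spring environment model above is straightforward to sketch; the stiffness value and positions below are arbitrary example numbers.

```python
import numpy as np

def contact_force(r, r_e, Ge):
    """Environment stiffness model: f = Ge * (r - r_e)."""
    return Ge @ (r - r_e)

def force_error(f, f_d):
    """Force tracking error e_f = f - f_d."""
    return f - f_d
```

With a stiffness of 1000 N/m, a 5 mm deflection of the end-effector past the reference position produces the 5 N contact force used later in the simulations.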
We can obtain the error equation as follows:
$$M_r G_e^{-1}\ddot{e}_f + C_r G_e^{-1}\dot{e}_f - e_f + \Delta = U + \omega_r, \tag{2.7}$$
where $\Delta = M_r G_e^{-1}\ddot{f}_d + C_r G_e^{-1}\dot{f}_d + G_r - f_d$. Defining the state variables $x_1 = e_f$ and $x_2 = \dot{e}_f + \alpha e_f$, where $\alpha$ is a given positive number, (2.7) can be rewritten as
$$\dot{x}_1 = x_2 - \alpha x_1,\qquad M_r G_e^{-1}\dot{x}_2 = -C_r G_e^{-1}x_2 + x_1 - \Delta + U + \omega_r + \omega, \tag{2.8}$$
where $\omega = M_r G_e^{-1}\alpha\dot{e}_f + C_r G_e^{-1}\alpha e_f$.
3. Design of Control Law
In order to obtain the control law, we introduce a theorem in this section. Consider a system with disturbance of the form
$$\dot{x} = f(x) + g(x)d,\qquad z = h(x), \tag{3.1}$$
where $d$ is the disturbance and $z$ is the evaluation signal.
For the force control loop, the dynamics are transformed to the operational space of the robot. Since $\dot{r} = J\dot{q}$ and $\ddot{r} = \dot{J}\dot{q} + J\ddot{q}$, (2.1) is written as
$$M_r\ddot{r} + C_r\dot{r} + G_r = U + \omega_r + f, \tag{3.2}$$
where $M_r = J^{-T}M(q)J^{-1}$, $C_r = J^{-T}\bigl(C(q,\dot{q}) - M(q)J^{-1}\dot{J}\bigr)J^{-1}$, $G_r = J^{-T}G(q)$, $U = J^{-T}\tau_f$, and $\omega_r = J^{-T}\omega(q,\dot{q},t)$.
Suppose $r_d$ is the desired position and $f_d$ is the desired force. Then $f = G_e(r - r_e)$ and $e_f = f - f_d = G_e(r - r_e) - f_d$, where $G_e$ is the environment stiffness matrix and $r_e$ is the reference position of the environment, so $\dot{r} = G_e^{-1}(\dot{e}_f + \dot{f}_d)$ and $\ddot{r} = G_e^{-1}(\ddot{e}_f + \ddot{f}_d)$. The error equation is then
$$M_r G_e^{-1}\ddot{e}_f + C_r G_e^{-1}\dot{e}_f - e_f + \Delta = U + \omega_r, \tag{3.3}$$
where $\Delta = M_r G_e^{-1}\ddot{f}_d + C_r G_e^{-1}\dot{f}_d + G_r - f_d$. Transforming to state space with $x_1 = e_f$ and $x_2 = \dot{e}_f + \alpha e_f$, where $\alpha$ is a positive number, (3.3) becomes
$$\dot{x}_1 = x_2 - \alpha x_1,\qquad M_r G_e^{-1}\dot{x}_2 = -C_r G_e^{-1}x_2 + x_1 - \Delta + U + \omega_r + \omega, \tag{3.4}$$
where $\omega = M_r G_e^{-1}\alpha\dot{e}_f + C_r G_e^{-1}\alpha e_f$.
The improved RAN network approximates $\omega_r$. Let $\varepsilon_f$ be the approximation error of the network, so that $\omega_r = P_f W_f + X V_f + \varepsilon_f$, where $P_f$ is the output matrix of the hidden layer, $W_f$ is the weight matrix from the hidden layer to the output layer, $X$ is the input matrix, and $V_f$ is the weight matrix from the input layer to the output layer. Thus $P_f W_f$ is the contribution of the hidden layer to the output, and $X V_f$ is the direct contribution of the input layer to the output. Equation (3.4) can then be written as
$$\dot{x}_1 = x_2 - \alpha x_1,\qquad M_r G_e^{-1}\dot{x}_2 = -C_r G_e^{-1}x_2 + x_1 - \Delta + U + P_f W_f + X V_f + \varepsilon_f + \omega. \tag{3.5}$$
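The RBFLN decomposition $P_f W_f + X V_f$ can be sketched as a forward pass with Gaussian hidden units plus the direct functional-link term; centers, widths, and weights below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def rbfln_forward(x, centers, widths, Wf, Vf):
    """RBFLN forward pass: output = Pf @ Wf + x @ Vf, where Pf holds the
    Gaussian hidden-layer activations and x @ Vf is the direct
    input-to-output ('functional link') contribution."""
    Pf = np.array([np.exp(-np.linalg.norm(x - c) ** 2 / (2.0 * s ** 2))
                   for c, s in zip(centers, widths)])
    return Pf @ Wf + x @ Vf
```

Near a center the hidden layer dominates; far from all centers the Gaussian activations vanish and the linear functional-link term carries the output, which is what compensates for the slow response of a plain RBF network.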
The approximation error $\varepsilon_f$ is regarded as a disturbance. Its evaluation signal is $z = 2c\,e_f = 2c\,x_1$, and the $L_2$ gain of the system is $J = \sup_{\|\varepsilon_f\|\neq 0}\bigl(\|z\|_2/\|\varepsilon_f\|_2\bigr)$.
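For sampled signals over a finite horizon, the ratio inside this supremum is just a quotient of norms; the following sketch (function name and synthetic signals are illustrative) shows the quantity that the theorem below bounds by $\gamma$.

```python
import numpy as np

def empirical_l2_gain(z, eps):
    """Finite-horizon estimate of the L2 gain ||z||_2 / ||eps||_2
    for sampled evaluation signal z and disturbance eps."""
    return np.linalg.norm(z) / np.linalg.norm(eps)
```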
Theorem 3.1.
For system (3.5), if the learning law of the network is given by
$$\dot{W}_f = -\eta W_f,\qquad \dot{V}_f = -\lambda V_f\qquad (\lambda,\eta>0), \tag{3.6}$$
the controller for the force loop is taken as
$$U = -(G_e+I)x_1 + \Delta - P_f W_f - X V_f - \omega - \left(\frac{1}{2\gamma^{2}}+\theta\right)\left(x_2^{T}G_e^{-1}\right)^{T}, \tag{3.7}$$
and $c$ in $z = 2c\,x_1$ satisfies
$$\alpha - 2c^{2} = \beta, \tag{3.8}$$
where $\beta$ and $\theta$ are given positive numbers, then the $L_2$ gain of the closed-loop system (3.5) with (3.7) is less than $\gamma$.
Proof.
For (3.5), define the Lyapunov function
$$V = \frac{1}{2}x_1^{T}x_1 + \frac{1}{2}x_2^{T}G_e^{-1}M_r G_e^{-1}x_2 + \frac{1}{2}W_f^{T}W_f + \frac{1}{2}V_f^{T}V_f.$$
Then
$$\begin{aligned}\dot{V} &= x_1^{T}\dot{x}_1 + \frac{1}{2}x_2^{T}G_e^{-1}\dot{M}_r G_e^{-1}x_2 + x_2^{T}G_e^{-1}M_r G_e^{-1}\dot{x}_2 + W_f^{T}\dot{W}_f + V_f^{T}\dot{V}_f\\
&= x_1^{T}(x_2 - \alpha x_1) + \frac{1}{2}\bigl(G_e^{-1}x_2\bigr)^{T}\bigl(\dot{M}_r - 2C_r\bigr)\bigl(G_e^{-1}x_2\bigr) + x_2^{T}G_e^{-1}\bigl(x_1 - \Delta + U + P_f W_f + X V_f + \varepsilon_f + \omega\bigr) + W_f^{T}\dot{W}_f + V_f^{T}\dot{V}_f.\end{aligned}$$
Substituting (3.6) into the above equality and using the skew symmetry of $\dot{M}_r - 2C_r$, we have
$$\begin{aligned}\dot{V} &= x_1^{T}(x_2 - \alpha x_1) + x_2^{T}G_e^{-1}\bigl(x_1 - \Delta + U + P_f W_f + X V_f + \varepsilon_f + \omega\bigr) - \eta W_f^{T}W_f - \lambda V_f^{T}V_f\\
&= -\alpha x_1^{T}x_1 - \eta W_f^{T}W_f - \lambda V_f^{T}V_f + x_2^{T}G_e^{-1}\bigl(G_e x_1 + x_1 - \Delta + U + P_f W_f + X V_f + \varepsilon_f + \omega\bigr).\end{aligned}$$
According to the Hamilton-Jacobi-Isaacs (HJI) approach, define
$$H = \dot{V} - \frac{1}{2}\gamma^{2}\|\varepsilon_f\|^{2} + \frac{1}{2}\|z\|^{2}.$$
Then
$$\begin{aligned}H &= -\alpha x_1^{T}x_1 - \eta W_f^{T}W_f - \lambda V_f^{T}V_f - \frac{1}{2}\gamma^{2}\|\varepsilon_f\|^{2} + 2c^{2}\|x_1\|^{2} + x_2^{T}G_e^{-1}\bigl(G_e x_1 + x_1 - \Delta + U + P_f W_f + X V_f + \varepsilon_f + \omega\bigr)\\
&= -\bigl(\alpha - 2c^{2}\bigr)\|x_1\|^{2} - \eta\|W_f\|^{2} - \lambda\|V_f\|^{2} + x_2^{T}G_e^{-1}\varepsilon_f - \frac{1}{2}\gamma^{2}\|\varepsilon_f\|^{2} + x_2^{T}G_e^{-1}\bigl(G_e x_1 + x_1 - \Delta + U + P_f W_f + X V_f + \omega\bigr).\end{aligned}$$
Since
$$x_2^{T}G_e^{-1}\varepsilon_f - \frac{1}{2}\gamma^{2}\|\varepsilon_f\|^{2} = -\frac{1}{2}\left\|\frac{1}{\gamma}x_2^{T}G_e^{-1} - \gamma\varepsilon_f\right\|^{2} + \frac{1}{2\gamma^{2}}\|x_2^{T}G_e^{-1}\|^{2} \le \frac{1}{2\gamma^{2}}\|x_2^{T}G_e^{-1}\|^{2},$$
we get, using (3.8),
$$H \le -\beta\|x_1\|^{2} - \eta\|W_f\|^{2} - \lambda\|V_f\|^{2} + x_2^{T}G_e^{-1}\left(G_e x_1 + x_1 - \Delta + U + P_f W_f + X V_f + \omega + \frac{1}{2\gamma^{2}}\bigl(x_2^{T}G_e^{-1}\bigr)^{T}\right).$$
Substituting (3.7) into the above inequality yields
$$H \le -\beta\|x_1\|^{2} - \eta\|W_f\|^{2} - \lambda\|V_f\|^{2} - \theta\|x_2^{T}G_e^{-1}\|^{2} \le 0.$$
Hence the system satisfies $\dot{V} \le \frac{1}{2}\gamma^{2}\|\varepsilon_f\|^{2} - \frac{1}{2}\|z\|^{2}$, which completes the proof.
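The controller (3.7) and the learning law (3.6) of the theorem can be sketched numerically. This is a minimal sketch, assuming all quantities ($G_e$, $\Delta$, $P_f$, $X$, $\omega$, ...) are available at each step; the scalar test values are arbitrary.

```python
import numpy as np

def force_loop_control(x1, x2, Ge, Delta, Pf, Wf, X, Vf, omega, gamma, theta):
    """Robust force-loop law of Theorem 3.1:
    U = -(Ge + I) x1 + Delta - Pf Wf - X Vf - omega
        - (1/(2 gamma^2) + theta) * (x2^T Ge^{-1})^T."""
    n = x1.shape[0]
    robust = (1.0 / (2.0 * gamma ** 2) + theta) * (np.linalg.inv(Ge).T @ x2)
    return -(Ge + np.eye(n)) @ x1 + Delta - Pf @ Wf - X @ Vf - omega - robust

def weight_update(Wf, Vf, eta, lam, dt):
    """One Euler step of the learning laws dWf/dt = -eta Wf, dVf/dt = -lam Vf."""
    return Wf - eta * Wf * dt, Vf - lam * Vf * dt
```

The last term of `force_loop_control` is the robust component: it injects exactly the damping $-\theta\|x_2^{T}G_e^{-1}\|^{2}$ plus the $1/(2\gamma^{2})$ term that cancels the worst-case contribution of $\varepsilon_f$ in the proof.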
4. Experiments and Results
To verify the effectiveness of the proposed control strategy, we performed software simulations using the methods of [11–16]. The model is a two-link manipulator, shown in Figure 1.
Figure 1: A two-link manipulator with a constraint surface.
In the simulation, we took a horizontal plane as the workspace, $r = [x\ y]^{T}$, and described the constraint surface as $x = 1.6$. The desired trajectory is $y_d = 0.007t + 0.5$, $t \in [0, 10]$, and the desired force is $f_d = 5\,\mathrm{N}$. Assume that the initial position of the manipulator end-effector is $r_0 = [1.5\ 0]^{T}$ and the initial velocity is $\dot{r} = [0\ 0]^{T}$. For comparison, we use PD control and the robust neural network control, respectively, in the force control loop. First the model is controlled by the PD controller, whose parameters are tuned from the output response: $P = 57$, $D = 1.3$.
We adopt MATLAB Simulink and S-functions to build the control system, with parameters $\alpha = 18.1$, $\beta = 0.1$, $\theta = 0.1$, $\gamma = 0.05$, $c = 3$, $\eta = 0.1$, and $\lambda = 0.05$. The simulation results are shown in Figures 2–6: Figures 2–5 give the position tracking results and position errors, and Figure 6 gives the force tracking result.
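The scalar settings of this section can be collected in a short sketch (it reproduces only the stated parameters and desired trajectory, not the full two-link Simulink model). Note that the stated gains satisfy condition (3.8): $18.1 - 2\cdot 3^{2} = 0.1 = \beta$.

```python
import numpy as np

# Scalar simulation settings as stated in the text.
alpha, beta, theta, gamma = 18.1, 0.1, 0.1, 0.05
c, eta, lam = 3.0, 0.1, 0.05
P, D = 57.0, 1.3          # PD baseline gains in the force loop
f_d = 5.0                 # desired contact force [N]
x_constraint = 1.6        # constraint surface x = 1.6 [m]

t = np.linspace(0.0, 10.0, 1001)   # simulation horizon t in [0, 10] s
y_d = 0.007 * t + 0.5              # desired trajectory along the y-axis
```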
Figure 2: Position tracking along the x-axis under robust neural network control and PD control.
Figure 3: Position tracking error along the x-axis under robust neural network control and PD control.
Figure 4: Position tracking along the y-axis under robust neural network control and PD control.
Figure 5: Position tracking error along the y-axis under robust neural network control and PD control.
Figure 6: Contact force under robust neural network control and PD control.
Figures 2 and 3 show that the control effects along the x-axis differ: the robust NN control result is superior to conventional PD control. Figures 4 and 5 show no obvious difference along the y-axis.
Figure 6 shows that both robust neural network control and PD control make the force converge to the desired value, but their performance differs greatly. Under PD control the oscillation is severe and the convergence is slow, whereas under the RAN NN control method both the oscillation and the convergence speed are improved; the stability and transient performance are greatly superior to those under PD control.
From the simulation results, we conclude that the improved RBF neural network robust control method suppresses the severe oscillation and improves the convergence speed. The stability and transient performance of the system are much better than under PD control, so it is an effective control method.
5. Conclusion
An improved RAN neural network controller has been designed in this paper for robots. When an external disturbance is difficult to measure, the upper bound of the uncertainty cannot be obtained; the proposed controller significantly reduces the effect of the system's uncertainty without requiring this upper bound. The system obtains good transient performance and strong adaptability, and for force and position control it shows good robustness and tracking ability. In addition, the simulation platform constructed in this paper intuitively demonstrates the control process and can serve as a basis for future study.
References
[1] M. H. Raibert and J. J. Craig, "Hybrid position/force control of manipulators," vol. 103, no. 2, pp. 126–133, 1981.
[2] S. Y. Chen and Y. F. Li, "Determination of stripe edge blurring for depth sensing," vol. 11, no. 2, pp. 389–390, 2011.
[3] S. Y. Chen, Y. F. Li, and J. Zhang, "Vision processing for realtime 3-D data acquisition based on coded structured light," vol. 17, no. 2, pp. 167–176, 2008. doi:10.1109/TIP.2007.914755.
[4] S. Y. Chen, H. Tong, Z. Wang, S. Liu, M. Li, and B. Zhang, "Improved generalized belief propagation for vision processing," vol. 2011, Article ID 416963, 12 pages, 2011.
[5] C.-H. Kao, C.-F. Hsu, C.-H. Wang, and H. S. Don, "Chaos synchronization using adaptive dynamic neural network controller with variable learning rates," vol. 2011, Article ID 701671, 2011.
[6] L. A. Mozelli and R. M. Palhares, "Less conservative H∞ fuzzy control for discrete-time Takagi–Sugeno systems," vol. 2011, Article ID 361640, 2011.
[7] M. M. Belhaouane, M. F. Ghariani, H. Belkhiria Ayadi, and N. B. Braiek, "Improved results on robust stability analysis and stabilization for a class of uncertain nonlinear systems," vol. 2010, Article ID 724563, 24 pages, 2010.
[8] Z.-Y. Xing, Y. Qin, X.-M. Pang, L.-M. Jia, and Y. Zhang, "Modelling of the automatic depth control electrohydraulic system using RBF neural network and genetic algorithm," vol. 2010, Article ID 124014, 2010.
[9] A. L. Yu, "Research on the dynamic modeling based on genetic wavelet neural network for the robot wrist force sensor," vol. 57, no. 6, pp. 3385–3390, 2008.
[10] L. JinKun, Tsinghua University, Beijing, China, 2008.
[11] C. G. Looney, "Radial basis functional link nets and fuzzy reasoning," vol. 48, pp. 489–509, 2002. doi:10.1016/S0925-2312(01)00613-0.
[12] C. Z. Xing, Tsinghua University, Beijing, China, 2000.
[13] S. Y. Chen, J. Zhang, H. Zhang, N. M. Kwok, and Y. F. Li, "Intelligent lighting control for vision-based robotic manipulation," IEEE Transactions on Industrial Electronics, in press.
[14] Y. Zhao and C. C. Cheah, "Hybrid vision-force control for robot with uncertainties," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 261–266, May 2004.
[15] S. Chiaverini and L. Sciavicco, "Force/position regulation of compliant robot manipulators," vol. 9, no. 4, pp. 361–373, 1993.
[16] Z. Doulgeri and S. Arimoto, "A position/force control for a robot finger with soft tip and uncertain kinematics," vol. 19, no. 3, pp. 115–131, 2002. doi:10.1002/rob.10027.