Singularity-Free Neural Control for the Exponential Trajectory Tracking in Multiple-Input Uncertain Systems with Unknown Deadzone Nonlinearities

The trajectory tracking problem for a class of uncertain nonlinear systems in which the number of states equals the number of inputs and each input is preceded by an unknown symmetric deadzone is considered. The unknown dynamics are identified by means of a continuous-time recurrent neural network in which the control singularity is conveniently avoided by guaranteeing the invertibility of the coupling matrix. Given this neural network-based mathematical model of the uncertain system, a singularity-free feedback linearization control law is developed in order to compel the system state to follow a reference trajectory. By means of Lyapunov-like analysis, the exponential convergence of the tracking error to a bounded zone can be proven. Likewise, the boundedness of all closed-loop signals can be guaranteed.


Introduction
During the last two decades, the control of systems using artificial neural networks (ANNs) has emerged as an effective and successful alternative to conventional control techniques. The success of this approach lies in the universal approximation capability of ANNs, which avoids the need for very time-consuming first-principles modeling. Thus, it is possible to handle a broad class of nonlinear uncertain systems with little or (ideally) no a priori information.
The first deep insight into the identification and control of dynamic systems based on neural networks was provided by Narendra and Parthasarathy in [1]. However, they did not present a systematic procedure to analyze the stability of their neurocontrollers. This issue was addressed by Polycarpou and Ioannou [2], Rovithakis and Christodoulou [3], Kosmatopoulos et al. [4], and Yu and Poznyak [5], who systematically used Lyapunov-like analysis in order to prove the stability of their algorithms. Based on these results, further refinements and improvements were accomplished in [6][7][8][9], and different applications were explored in robotics [10,11], manufacturing systems [12], chemical processes [13], power systems [14], and so on. It is worth mentioning that the vast majority of these studies are based on feedback linearization techniques. An inherent problem associated with these techniques is the possibility of the control singularity. A first approach to this problem is simply to focus only on the class of systems in which the gain function is known and constant [15,16]. Certainly, considering only this kind of system can be very restrictive in practice. A generalized procedure to handle the control singularity consists of making modifications to the conventional adaptive algorithms [17][18][19][20][21]. Nonetheless, such modifications can provoke discontinuities in the control signal, or they can require the use of projection techniques. In this last case, the design and implementation of such controllers can become quite complicated. To avoid the utilization of projection, an integral-type Lyapunov function was proposed in [22]. On the basis of this function, a singularity-free smooth adaptive neural controller was developed. Notwithstanding, due to the requirement of the integral operation, the practical implementation of this approach is difficult [23].
In [24,25], the neurocontrol of systems with a single input was studied. The singularity was avoided by always keeping the input weight different from zero. However, a systematic procedure for achieving this goal was not specified.
Note that for the case of systems with multiple inputs, in particular when the number of states is equal to the number of inputs, the avoidance of the singularity cannot be guaranteed only by maintaining the coupling matrix of the neural network (see (12)), that is, W(t)Φ(x(t)), different from zero. Evidently, a stronger condition is required. In fact, the necessary and sufficient condition to guarantee the nonsingularity is that the coupling matrix must always be invertible. To simplify the implementation of this condition, the input weight matrix W(t) and the sigmoidal function matrix Φ(x(t)) can be constructed as square matrices. Besides, Φ(x(t)) can be selected in such a way that its invertibility is assured. The problem then reduces to guaranteeing the invertibility of the input weight matrix W(t). In this paper, unlike [24,25], where no concrete procedure was specified, and as an alternative to the projection techniques presented in [17][18][19][20][21], we propose a simple strategy to avoid the control singularity. Taking into account that a necessary and sufficient condition for the invertibility of the square matrix W(t) is that det(W(t)) ≠ 0, or equivalently |det(W(t))| > 0, we define a positive threshold ε in such a way that when |det(W(t))| ≥ ε, the weights of the neural network are updated according to stable learning laws. However, at the instant when the condition |det(W(t))| < ε arises, the learning process is immediately stopped. The effect of this modification on the stability of the identification error is thoroughly studied by means of Lyapunov analysis. The proposed strategy is applied to the identification and control of a class of uncertain nonlinear systems with multiple inputs, each subjected to an unknown deadzone.
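The determinant-threshold strategy described above can be sketched in a few lines of code. The names here (W for the input weight matrix, dW for the update produced by the learning laws, eps for the threshold) are illustrative choices rather than the paper's notation, and the discrete-time gating shown is only an approximation of the continuous-time scheme.

```python
import numpy as np

def gated_update(W, dW, eps=1e-2):
    """Apply the weight update dW only while the square input weight
    matrix W stays safely away from singularity: |det(W)| >= eps.
    Otherwise the learning is stopped and W is returned unchanged."""
    if abs(np.linalg.det(W)) >= eps:
        return W + dW   # learning active
    return W            # learning stopped near the singularity
```

Provided the initial weight satisfies |det(W(0))| ≥ eps and the update steps are small, this gate keeps the weight matrix inside the invertible region.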
The deadzone is a nonsmooth nonlinearity commonly found in many practical systems such as electrohydraulic systems [26], pneumatic servo systems [27], DC servo motors [28], and rudders and propellers [29]. When the deadzone is not considered explicitly during the design process, the performance of the control system can be degraded due to an increase of the steady-state error, the presence of limit cycles, or even instability [30][31][32].
A direct way of compensating for the deleterious effect of the deadzone is to calculate its inverse. However, this is not an easy task because, in many practical situations, both the parameters and the output of the deadzone are unknown. To overcome this problem, in a pioneering work [30], Tao and Kokotovic proposed employing an adaptive inverse of the deadzone. This scheme was applied to linear systems in transfer function form. Cho and Bai [33] extended this work and achieved a perfect asymptotic adaptive cancellation of the deadzone. However, their work assumed that the deadzone output was measurable. In [34], the work of Tao and Kokotovic was extended to linear systems in state space form with a nonmeasurable deadzone output. In [35], a new smooth parameterization of the deadzone was proposed, and a class of SISO systems with completely known nonlinear functions and with linearly parameterized unknown constants was controlled by using the backstepping technique. In order to avoid the construction of the adaptive inverse, in [36], the same class of nonlinear systems as in [35] was controlled by means of a robust adaptive approach and by modeling the deadzone as a combination of a linear term and a disturbance-like term. The controller design in [36] was based on the assumption that maximum and minimum values for the deadzone parameters are known a priori. However, a specific procedure to find such bounds was not provided. Based on the universal approximation property of neural networks, a wider class of SISO systems in Brunovsky canonical form with completely unknown nonlinear functions and an unknown constant control gain was considered in [37][38][39]. Apparently, the generalization of these results to the case when the control gain is varying and state dependent is trivial. Nevertheless, the solution to this problem is not simple due to the possibility of singularity in the control law. In [40,41], this problem was overcome.
All the aforementioned works about the deadzone studied a very particular class of systems, that is, systems in strict Brunovsky canonical form with a single input. In this paper, we consider a wider class of systems, that is, uncertain nonlinear systems with multiple inputs where each input is preceded by an unknown symmetric deadzone. This global system can be seen as formed by an unknown affine system (see (1)) whose inputs are the outputs of the different deadzones. By generalizing the model used in [36], the multiple deadzones can be represented by means of a diagonal matrix multiplied by the global input vector plus a disturbance-like vector (the diagonal matrix is composed of the unknown symmetric slopes of each deadzone). By using this model, a continuous-time recurrent neural network is employed to identify the global unknown dynamics. On the basis of this neural network, an instantaneous mathematical model of the uncertain system can be obtained, and a singularity-free feedback linearization control law is developed in such a way that the system state is compelled to follow a bounded reference trajectory. Once again, by using Lyapunov analysis, the exponential convergence of the tracking error to a bounded zone can be shown. Likewise, the boundedness of all closed-loop signals can be guaranteed.

Preliminaries
2.1. Notation. Throughout this paper, we will use the following notation.

2.2. Description of the System. In this study, the system to be controlled consists of an unknown multi-input nonlinear plant in which each input is preceded by an unknown symmetric deadzone; that is,

ẋ(t) = f(x(t)) + g(x(t)) u(t) + d(t),   (1)

u_i(t) = m_i (v_i(t) − b_{r,i}) if v_i(t) ≥ b_{r,i};  0 if b_{l,i} < v_i(t) < b_{r,i};  m_i (v_i(t) − b_{l,i}) if v_i(t) ≤ b_{l,i},   (2)

where x(t) ∈ R^n is the measurable state vector for t ∈ R_+ := {t : t ≥ 0}, f : R^n → R^n is an unknown but continuous nonlinear vector function, g : R^n → R^{n×n} is an unknown but continuous nonlinear matrix function, d(t) ∈ R^n represents an unknown but bounded deterministic disturbance, the i-th element of the vector u(t) ∈ R^n, that is, u_i(t), represents the output of the i-th deadzone, v_i(t) is the input to the i-th deadzone, b_{r,i} and b_{l,i} represent the right and left constant breakpoints of the i-th deadzone, and m_i is the constant slope of the i-th deadzone. In accordance with [30,31], the deadzone model (2) is a static simplification of diverse physical phenomena with negligible fast dynamics.
Note that v(t) ∈ R^n is the actual control input to the global system described by (1) and (2). Hereafter, it is considered that the following assumptions are valid.
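As an illustration, a symmetric deadzone of the kind in (2) can be sketched as follows; the slope m and the breakpoints br and bl used here are hypothetical values chosen only for the example.

```python
def deadzone(v, m=1.0, br=0.5, bl=-0.5):
    """Symmetric dead-zone: zero output inside the band [bl, br],
    linear with slope m outside it. Parameter values are illustrative."""
    if v >= br:
        return m * (v - br)
    if v <= bl:
        return m * (v - bl)
    return 0.0
```

Inputs inside the band produce no output, which is what degrades steady-state accuracy when the deadzone is ignored in the design.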

2.3. Deadzone Representation as a Linear Term and a Disturbance-Like Term. The model of the i-th deadzone (2) can alternatively be described as follows [34,42]:

u_i(t) = m_i v_i(t) + δ_i(v_i(t)),   (3)

where δ_i(v_i(t)) is given by

δ_i(v_i(t)) = −m_i b_{r,i} if v_i(t) ≥ b_{r,i};  −m_i v_i(t) if b_{l,i} < v_i(t) < b_{r,i};  −m_i b_{l,i} if v_i(t) ≤ b_{l,i}.   (4)

Note that (4) is the negative of a saturation function. Thus, although δ_i(v_i(t)) may not be exactly known, its boundedness can be assured. Consider that the positive constant δ̄_i is an upper bound for δ_i(v_i(t)); that is, ‖δ_i(v_i(t))‖_∞ ≤ δ̄_i.

Based on (3), in [43], the relationship between u(t) and v(t) can be expressed as

u(t) = M v(t) + δ(v(t)),   (5)

where M := diag(m_1, …, m_n) and δ(v(t)) := [δ_1(v_1(t)), …, δ_n(v_n(t))]^T. Consider that the positive constant δ̄ is an upper bound for ‖δ(v(t))‖.
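The decomposition into a linear term plus a bounded disturbance-like term can be checked numerically. The sketch below mirrors (3)-(4): delta is minus a saturation of the input, hence bounded by m*max(br, -bl); all parameter names and values are illustrative.

```python
def deadzone_decomposition(v, m=1.0, br=0.5, bl=-0.5):
    """Return (linear_term, delta) such that the dead-zone output
    equals linear_term + delta, with delta = -m * sat(v; bl, br)."""
    sat = min(max(v, bl), br)   # saturation of v between bl and br
    delta = -m * sat            # the bounded disturbance-like term
    return m * v, delta
```

For any input, the linear part plus delta reproduces the deadzone output exactly, while |delta| never exceeds m*max(br, -bl) regardless of how large v grows.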

Identification Process with Guaranteed Invertibility of W(t)Φ(x(t))
In this section, the identification problem of the unknown global dynamics described by (1) and (2) using a recurrent neural network is considered. Note that an alternative representation for (1) is given by

ẋ(t) = A x(t) + W*_1 σ(x(t)) + W*_2 Φ(x(t)) u(t) + η(x(t), u(t)) + d(t),

where A ∈ R^{n×n} is a Hurwitz matrix which can be selected for simplicity as A = −λ I_{n×n}, λ is a positive constant proposed by the designer, W*_1 ∈ R^{n×n} and W*_2 ∈ R^{n×n} are unknown constant weight matrices, σ(·) is an activation vector function with sigmoidal components whose parameters are positive constants which can be specified by the designer, Φ(·) : R^n → R^{n×n} is a sigmoidal matrix function selected as Φ(·) := diag(φ_11(·), φ_22(·), …, φ_nn(·)), whose parameters are likewise positive constants specified by the designer, and η : R^n × R^n → R^n is the unmodeled dynamics, defined simply as the difference between the right-hand side of (1) and its neural parameterization.

Remark 4. Note that the structure for the sigmoidal matrix function Φ(x(t)) was selected in such a way that its invertibility can always be guaranteed.
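A diagonal sigmoidal matrix whose invertibility is guaranteed by construction, in the spirit of Remark 4, can be obtained by shifting every sigmoid away from zero. The parameterization below (a, b, c, with c > 0) is one possible choice, not necessarily the paper's: every diagonal entry is at least c, so the determinant is strictly positive.

```python
import numpy as np

def sigmoid_matrix(x, a=1.0, b=1.0, c=0.1):
    """Diagonal matrix of shifted sigmoids a/(1+exp(-b*x_i)) + c.
    With c > 0 every diagonal entry is >= c > 0, so the matrix is
    invertible for every state x. Parameter choices are illustrative."""
    phi = a / (1.0 + np.exp(-b * np.asarray(x, dtype=float))) + c
    return np.diag(phi)
```

This construction pushes the singularity question entirely onto the input weight matrix, which is then handled by the determinant-threshold gating.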
Remark 8. It can be observed that, by using the model (5), the actual control input v(t) now appears directly in the dynamics.
Since, by construction, Φ(x(t)) is bounded, the term W*_2 Φ(x(t)) δ(v(t)) must also be bounded. Let us define the following expression: ρ(t) := W*_2 Φ(x(t)) δ(v(t)) + η(x(t), u(t)) + d(t). Clearly, this expression is bounded. Let us denote an upper bound for ρ(t) as ρ̄. This bound is a positive constant not necessarily known a priori. Now, note that the term W*_2 Φ(x(t)) M v(t) can alternatively be expressed as W* Φ(x(t)) v(t), where W* ∈ R^{n×n} is an unknown weight matrix. In view of the above, (10) can be rewritten as

ẋ(t) = A x(t) + W*_1 σ(x(t)) + W* Φ(x(t)) v(t) + ρ(t).   (11)

Now, consider the following series-parallel structure for a continuous-time recurrent neural network:

x̂̇(t) = A x̂(t) + W_1(t) σ(x(t)) + W(t) Φ(x(t)) v(t),   (12)

where x̂(t) ∈ R^n is the state of the neural network, v(t) ∈ R^n is the control input as in (10), and W_1(t) ∈ R^{n×n} and W(t) ∈ R^{n×n} are the time-varying weight matrices. In order to solve the problem of identifying system (1)-(2) based on the recurrent neural network (12), given the measurable state x(t) and the input v(t), we should be able to adjust online the weights W_1(t) and W(t) by proper learning laws such that the identification error Δ(t) := x̂(t) − x(t) is reduced to a bounded zone around zero and, at the same time, the invertibility of W(t)Φ(x(t)) is guaranteed. Specifically, we employ in this study the learning laws (13) and (14), gated by the indicator s(t), which equals 1 when |det(W(t))| ≥ ε and 0 otherwise; here γ_1, ℓ_1, γ_2, and ℓ_2 are positive learning and leakage constants which can be selected by the designer, and ε is a positive constant adjustable by the designer. Based on the learning laws (13) and (14), we can establish the following result.

Theorem 9. If Assumptions 2, 3, 6, and 7 are satisfied, the constant λ is selected greater than 1.5, and the weight matrices W_1(t) and W(t) of the neural network (12) are adjusted by the learning laws (13) and (14), then (a) the identification error and the weights of the neural network (12) are bounded; (b1) when s(t) = 1, the norm of the identification error, that is, |x̂(t) − x(t)|, converges exponentially fast to a zone bounded by the term √(2κ_1/α_1), where α_1 := min{(2λ − 1), ℓ_1, ℓ_2} and κ_1 is a positive constant; (b2) when s(t) = 0, an analogous bound holds with α_2 := 2λ − (3/2) and κ_2 an upper bound for the corresponding perturbation term.
Proof of Theorem 9. First, let us determine the dynamics of the identification error. The first derivative of Δ(t) is simply Δ̇(t) = x̂̇(t) − ẋ(t). Substituting (12) and (11) into this expression yields the error dynamics in terms of the weight deviations W̃_1(t) := W_1(t) − W*_1 and W̃(t) := W(t) − W*. Next, consider a Lyapunov-like function V(t) composed of (1/2)|Δ(t)|² plus quadratic terms in W̃_1(t) and W̃(t), and evaluate its first derivative along (13) and (14). After substituting the learning laws, bounding the cross terms, and reducing the like terms, the following bound as a function of V(t) can finally be determined: V̇(t) ≤ −α_1 V(t) + κ_1. This implies that the following bound for V(t) can be established (the demonstration of this intermediate result can be consulted in [43]): V(t) ≤ V(0) e^{−α_1 t} + κ_1/α_1. Since by definition α_1 and κ_1 are positive constants, the right-hand side of this inequality can be bounded by V(0) + (κ_1/α_1). Thus, V(t) ∈ L_∞, and since by construction V(t) is a nonnegative function, the boundedness of Δ(t), W̃_1(t), and W̃(t) can be guaranteed. Because W*_1 and W* are bounded, W_1(t) = W̃_1(t) + W*_1 and W(t) = W̃(t) + W* must be bounded too, and the first part of Theorem 9 has been proven. With respect to the second part of this theorem, it is evident that (1/2)|Δ(t)|² ≤ V(t). Taking this fact into account, we get |Δ(t)| ≤ √(2V(0)e^{−α_1 t} + 2κ_1/α_1). By taking the limit as t → ∞ of this inequality, we can guarantee that |Δ(t)| converges exponentially fast to a zone bounded by the term √(2κ_1/α_1), and part (b1) of Theorem 9 has been proven.
Remark 10. Note that the utilization of s(t) makes it possible to guarantee that |det(W(t))| ≥ ε, and hence det(W(t)) ≠ 0, for all t ≥ 0. Consequently, W(t) is an invertible matrix for all t ≥ 0. Certainly, the designer should select W(0) in such a way that this condition is fulfilled, that is, |det(W(0))| ≥ ε.
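A discrete-time sketch of the series-parallel identifier (12) together with gated, gradient-like learning laws in the spirit of (13)-(14) is shown below. The exact form of the paper's learning laws is not reproduced here; the specific updates, the gains, and the explicit-Euler discretization are all illustrative assumptions.

```python
import numpy as np

def identifier_step(x_hat, x, v, W1, W, lam, dt, sigma, Phi,
                    g1=1.0, g2=1.0, l1=0.1, l2=0.1, eps=1e-2):
    """One explicit-Euler step of the series-parallel identifier
        x_hat_dot = -lam*x_hat + W1 @ sigma(x) + W @ Phi(x) @ v,
    with gradient-like, leakage-modified weight updates that are
    switched off whenever |det(W)| drops below the threshold eps."""
    delta = x_hat - x                          # identification error
    x_hat_new = x_hat + dt * (-lam * x_hat + W1 @ sigma(x)
                              + W @ Phi(x) @ v)
    if abs(np.linalg.det(W)) >= eps:           # learning active
        W1 = W1 + dt * (-g1 * np.outer(delta, sigma(x)) - l1 * W1)
        W = W + dt * (-g2 * np.outer(delta, Phi(x) @ v) - l2 * W)
    return x_hat_new, W1, W
```

One possible usage: iterate this step along a measured state trajectory, monitoring |det(W)| to verify that the gate keeps the coupling matrix invertible.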

Tracking Controller
In this section, an appropriate control law v(t) will be determined in such a way that the state x(t) of system (1)-(2) follows a given reference trajectory x_r(t) and, at the same time, all closed-loop signals stay bounded. Define the tracking error as e(t) := x(t) − x_r(t). Note that if the learning laws (13) and (14) are used, the neural network (12) provides an instantaneous mathematical model of the uncertain plant. The first derivative of e(t) is simply ė(t) = ẋ(t) − ẋ_r(t). Substituting (12) and the identification error dynamics into this expression, and by using the principle of feedback linearization, we propose the following control law:

v(t) = [W(t)Φ(x(t))]⁻¹ [ẋ_r(t) − A x̂(t) − W_1(t) σ(x(t)) + A_c e(t)],   (50)

where A_c ∈ R^{n×n} is a Hurwitz matrix which can be selected for simplicity as A_c = −k I_{n×n}, and k is a positive constant proposed by the designer such that k > 0.5. If (50) is substituted into the error dynamics, we get

ė(t) = A_c e(t) − Δ̇(t).   (51)

We can analyze the dynamics of the tracking error e(t) given in (51) by proposing the following Lyapunov function candidate: V_c(t) := (1/2)|e(t)|². Evaluating the first derivative of V_c(t) along (51), and bounding the cross term e(t)ᵀΔ̇(t) by (1/2)|e(t)|² + (1/2)|Δ̇(t)|², we obtain V̇_c(t) ≤ −α_3 V_c(t) + κ_3, where α_3 := 2k − 1 and κ_3 is an upper bound for the perturbation term (1/2)|Δ̇(t)|². Hence, V_c(t) ≤ V_c(0) e^{−α_3 t} + κ_3/α_3. From this last inequality, the boundedness of |e(t)| can be concluded. From the above, and taking into account that e(t) = x(t) − x_r(t) and that, according to Assumption 11, x_r(t) is bounded, x(t) must also be bounded. This implies, according to (50), that v(t) belongs to L_∞, and this last result agrees with Assumption 7. Finally, by taking the limit as t → ∞ on both sides of the last inequality, we can guarantee that |e(t)| converges exponentially fast to a zone bounded by the term √(2κ_3/α_3). In this way, the following theorem has been proven.
Theorem 12. If Assumptions 1-11 are satisfied, the constant k is selected greater than 0.5, the weight matrices W_1(t) and W(t) of the neural network (12) are adjusted by the learning laws (13) and (14), and the control law (50) is applied to the system formed by (1)-(2), then (a) the tracking error and the state of system (1) are bounded, and (b) the norm of the tracking error, that is, |x(t) − x_r(t)|, converges exponentially fast to a zone bounded by the term √(2κ_3/α_3).
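The feedback-linearization idea behind the control law can be sketched as follows, using the identifier model x_hat_dot = -lam*x_hat + W1 @ sigma(x) + W @ Phi(x) @ v. The input v is chosen so that the identifier state's derivative equals x_r_dot - k*e with e = x - x_r. All names, gains, and the exact algebra are illustrative assumptions, and the invertibility of W @ Phi(x) is assumed to be maintained by the gated learning laws.

```python
import numpy as np

def fl_control(x_hat, x, x_r, x_r_dot, W1, W, sigma, Phi, lam=2.0, k=1.0):
    """Feedback-linearization law sketched from the identifier model
        x_hat_dot = -lam*x_hat + W1 @ sigma(x) + W @ Phi(x) @ v.
    v is chosen so that x_hat_dot = x_r_dot - k*e with e = x - x_r,
    driving the tracking error toward a bounded zone. Illustrative
    sketch only; invertibility of W @ Phi(x) is assumed."""
    e = x - x_r
    rhs = x_r_dot - k * e + lam * x_hat - W1 @ sigma(x)
    return np.linalg.solve(W @ Phi(x), rhs)
```

Using linalg.solve instead of forming the explicit inverse is the standard numerically safer choice; near the determinant threshold the solve is still well posed because the gate keeps |det(W)| bounded away from zero.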

Conclusions
In this paper, the exponential tracking for a class of nonlinear systems with unknown deadzones using recurrent neural networks was considered. Since a physical model is not available, a neural network is used to identify the unknown dynamics. The main novelty in this study is a systematic procedure for the modification of the learning laws of the synaptic weights in such a way that the avoidance of the control singularity can be guaranteed. This objective is achieved by continuously monitoring the determinant of the coupling matrix or, more specifically, the input weight matrix. By defining a threshold for the determinant of the input weight matrix, a "dangerous" region next to the singularity can be established. When such a region is reached, the learning process is immediately stopped. In this way, the invertibility of the coupling matrix is guaranteed. The effect of this modification on the stability of the identification error is rigorously studied by means of Lyapunov analysis. On the basis of the instantaneous mathematical model obtained by the identification process, a singularity-free feedback linearization control law is developed in order to compel the system state to follow a reference trajectory. By means of Lyapunov-like analysis, the exponential convergence of the tracking error to a bounded zone can be proven. Likewise, the boundedness of all closed-loop signals can be guaranteed. Certainly, the main attraction of the suggested approach is its simplicity. However, it must be mentioned that turning off the learning laws could reduce the system performance. In fact, under such conditions, the control action becomes mainly proportional.