State-Estimator-Based Asynchronous Repetitive Control of Discrete-Time Markovian Switching Systems

This paper investigates the problem of asynchronous repetitive control for a class of discrete-time Markovian switching systems. The control goal is to track a given periodic reference without steady-state error. To achieve this goal, an asynchronous repetitive controller that renders the overall closed-loop switched system mean square stable is proposed. To reflect realistic scenarios, the proposed approach does not assume that the system modes are available synchronously to the controller but instead designs a detector that provides estimated values of the system modes to the controller. Based on a detected-mode-dependent estimator, the plant and asynchronous repetitive controller are formulated as a closed-loop stochastic system. By utilizing tools from stochastic Lyapunov–Krasovskii stability theory, we develop sufficient conditions in terms of linear matrix inequalities (LMIs) such that the closed-loop system is mean square stable and simultaneously establish a synthesis procedure for obtaining the gain matrices. We provide numerical simulations on an electrical circuit switched system to illustrate the approach.


Introduction
As a special class of hybrid dynamic systems, Markovian switching systems are modeled by a set of linear or nonlinear governing equations, with the switching between systems in the set determined by a Markov chain [1]. Since Markovian switching systems can describe abrupt variations caused by random component failures and sudden environmental changes, important and useful results have been reported in many practical applications, such as robot manipulators [2], power systems [3], economic systems [4], sensor networks [5], neural networks [6], multiagent systems [7], and networked control systems [8][9][10]. To date, a variety of results have been published on the development of controllers for the stabilization of Markovian switching systems; see [11][12][13][14][15][16][17][18][19][20][21] and the references therein. These results on control and filtering of Markovian switching systems are based on the assumption that the mode information of the plant is fully available to the controller or estimator at every instant of time, ensuring that the switching of the controller or estimator is synchronous with that of the plant. Accordingly, the designed controller is typically referred to as mode-dependent or synchronous. This assumption restricts the application of such controller design approaches to many practical systems, because the mode information of the plant may not be accessible to the controller due to communication delays and/or missing measurements, which may lead to asynchronous behavior between the controller and the system modes.
Since an asynchronous controller has broader applicability than a synchronous controller, the asynchronous control design problem has received wider attention in recent years [22][23][24][25][26][27]. For general switched systems, a strategy to stabilize continuous-time switched systems with asynchronous switching was provided in [22]. Asynchronous H ∞ filtering was studied for discrete-time switched Takagi-Sugeno (T-S) fuzzy systems in [23]. Based on the hidden Markov model, an H 2 controller design was proposed to stochastically stabilize a class of Markovian switching systems with partial information in [24]. A passivity-based asynchronous control problem for Markovian switching systems was considered in [25]. An H 2 filtering design for discrete-time hidden Markovian switching systems was discussed in [26]. In [27], an asynchronous sliding mode control design was proposed for a class of uncertain Markovian switching systems with time-varying delays and stochastic perturbation. Although recent published results are encouraging, there are still many relevant open problems of importance to practical applications, and we consider one such essential problem in this paper.
Since the control tasks in many applications that can be modeled by such systems are often repetitive, repetitive control formulations have seen increased use in many applications, such as disk drive systems [28], rotating machinery [29], micro-/nanomanipulation [30], and power electronics systems [31]. Repetitive control strategies use error measurements from the previous period to reduce subsequent steady-state tracking errors for periodic exogenous input signals. There is a rich body of literature related to repetitive control design techniques [32][33][34][35][36][37][38][39][40]. Most repetitive control designs in the literature are developed for deterministic systems, whereas designs for switched stochastic dynamical systems are sparse. In particular, the problem of designing a mode-dependent repetitive controller for discrete-time Markovian switching systems was first studied in [40], where a set of sufficient conditions in terms of linear matrix inequalities is derived for stabilization by combining a 2D Lyapunov functional with a singular value decomposition of the output matrix. However, the problem of asynchronous repetitive control for Markovian switching systems has not been investigated, mainly due to the complexity of addressing the asynchronous behavior between the controller modes and the Markovian switching system modes.
In this paper, we design a state-estimator-based asynchronous repetitive controller for a class of discrete-time Markovian switching systems for asymptotic desired reference signal tracking of the output by considering the asynchronous behavior between the Markovian switching system modes and the controller modes.
The proposed design provides the following contributions to the existing literature: (1) The estimate of the system mode is obtained by a detector via a hidden Markov model with a mode-detection probability matrix, and this estimate is utilized in the state estimator and controller. This approach relaxes the assumption, prevalent in the existing literature, that the system mode is available to the controller and that the controller and system mode switching is synchronized. (2) By employing the proposed state-estimator-based asynchronous repetitive control scheme, we formulate the closed-loop system as a hidden Markovian jump system. We show that the closed-loop system can achieve mean square asymptotically stable tracking under a set of sufficient conditions expressible as solvable LMIs. (3) We also provide a synthesis procedure to obtain the estimator and controller design matrices; further, to facilitate the solution of the LMIs, we pose a stochastic optimization problem for determining the key gain parameters.

The rest of the paper is organized as follows. In Section 2, we describe the plant and the associated asynchronous repetitive control approach and formulate the closed-loop governing equations for the overall system. The main theoretical framework, together with the synthesis procedure, is provided in Section 3. Application of the approach to an RLC circuit system and numerical simulations are presented in Section 4. Section 5 provides a summary of this work and potential topics for future research.

Notations.
The following notations are employed in the paper: Z+0 for the set of non-negative integers; R^n for the n-dimensional Euclidean space; R^{m×n} for the set of all m × n matrices; I for the identity matrix with dimensions inferred from the context; ‖x‖ and ‖A‖ for the Euclidean norm of a vector x and a matrix A; superscripts ⊤ and −1 for matrix transposition and matrix inversion; ⋆ for the terms induced by symmetry in a matrix. X ≺ Y (X ≻ Y), where X and Y are both symmetric matrices, means that the matrix X − Y is negative (positive) definite. λ_max(A) and λ_min(A) denote the eigenvalues of matrix A with maximum and minimum real parts, respectively. E denotes the mathematical expectation operator.

System Description.
To model the switching process as a discrete-time homogeneous Markov chain, we consider the parameter θ(k), k ∈ Z+0, which takes values in a finite set M = {1, 2, 3, ..., M} with the mode transition probabilities

π_ij = Pr{θ(k + 1) = j | θ(k) = i},

where π_ij ≥ 0, ∀i, j ∈ M, and ∑_{j=1}^{M} π_ij = 1. Based on this Markov chain, we define the M × M transition probability matrix π, whose (i, j)-th element is π_ij. Let x(k) ∈ R^n, u(k) ∈ R^l, and y(k) ∈ R^m, respectively, denote the state, control input, and system output vectors, and let A(θ(k)), B(θ(k)), C(θ(k)), and D(θ(k)) denote the system matrices with appropriate dimensions. The discrete-time linear Markovian switching system considered in this paper is given by

x(k + 1) = A(θ(k))x(k) + B(θ(k))u(k),
y(k) = C(θ(k))x(k) + D(θ(k))u(k).

In the following, for ease of readability, for θ(k) = i ∈ M, the system matrices A(θ(k)), B(θ(k)), C(θ(k)), and D(θ(k)) are denoted by A_i, B_i, C_i, and D_i, which are assumed to be known. The relative degree of the plant is assumed to be zero, which implies that D_i ≠ 0.
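As a concrete illustration of the mode process, the sketch below simulates θ(k) as a homogeneous Markov chain. The three-mode transition matrix `PI` is hypothetical; the paper leaves the π_ij as given data of the application.

```python
import numpy as np

# Hypothetical 3-mode transition probability matrix: PI[i, j] = pi_ij,
# each row summing to 1 (placeholder values, not from the paper).
PI = np.array([[0.6, 0.3, 0.1],
               [0.2, 0.5, 0.3],
               [0.3, 0.3, 0.4]])

def simulate_modes(PI, theta0, K, seed=None):
    """Simulate theta(0..K-1) with Pr(theta(k+1)=j | theta(k)=i) = PI[i, j]."""
    rng = np.random.default_rng(seed)
    theta = np.empty(K, dtype=int)
    theta[0] = theta0
    for k in range(1, K):
        theta[k] = rng.choice(PI.shape[0], p=PI[theta[k - 1]])
    return theta
```

Modes are indexed from 0 here for convenience, whereas the paper indexes M from 1; the two conventions differ only by a shift.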
Since it is generally not possible to accurately measure the evolution of the system mode θ(k), we consider a probabilistic detector that provides estimated values of θ(k) with a certain probability. Let the estimated system mode signal be denoted by ρ(k), which is utilized by the controller and need not be synchronized with the system mode θ(k). We characterize the asynchronous phenomenon via a hidden Markov model (θ(k), ρ(k), M, N) with mode-detection probabilities

μ_ip = Pr{ρ(k) = p | θ(k) = i}, ∀i ∈ M, p ∈ N,

so that the pair (θ(k), ρ(k)) can be regarded as a hidden Markov model. We assume that the cardinality of the set N is not greater than the cardinality of M, i.e., |N| ≤ |M|. The asynchronous model (3) covers both the mode-dependent and mode-independent cases: (i) when N = M and μ_ip = 1 for p = i, i.e., μ = I, the model (3) reduces to the synchronous mode-dependent case, and (ii) when N = {1}, i.e., the mode detector has only one mode ρ(k) = 1, the asynchronous model (3) becomes the mode-independent case.
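The probabilistic detector can be sketched as sampling ρ(k) from the conditional distribution μ. The detection matrix `MU` below is hypothetical; only its row-stochastic structure comes from the model.

```python
import numpy as np

# Hypothetical mode-detection probability matrix: MU[i, p] = mu_ip =
# Pr(rho(k) = p | theta(k) = i); rows sum to 1 (placeholder values).
MU = np.array([[0.8, 0.1, 0.1],
               [0.1, 0.8, 0.1],
               [0.1, 0.1, 0.8]])

def detect_mode(theta_k, MU, rng):
    """Sample the detected mode rho(k) from the row MU[theta(k), :]."""
    return rng.choice(MU.shape[1], p=MU[theta_k])
```

Setting `MU = np.eye(3)` recovers the synchronous mode-dependent case (i), and a single-column matrix of ones recovers the mode-independent case (ii), as discussed in the text.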

Asynchronous Repetitive Control.
We consider the repetitive control structure shown in Figure 1, where r(k) is the periodic reference input signal with period T that needs to be tracked, v(k) is the output of the repetitive controller, and e(k) = r(k) − y(k) is the tracking error. The dynamic model of the discrete-time repetitive controller C_R(z) is given by (4). For the controller and estimator design, only the estimate ρ(k) of the system mode θ(k) is assumed to be known. The following detected-mode-dependent estimator is utilized to provide the estimate of the state of the linear Markovian switching system (2): (5), where x̂(k) ∈ R^n is the estimated state, ŷ(k) is the output of the asynchronous estimator, and A_e(p), B_e(p), and C_e(p), p ∈ N, are estimator parameter matrices that need to be designed. Let the estimation error be δ(k) = x(k) − x̂(k); then the estimator error governing equation follows. We consider the following detected-mode-dependent repetitive control input for the plant: (7). Let the exogenous reference input signal be zero, i.e., r(k) = 0. Considering k > T and combining (2), (4), and (7), we can write the control input in terms of the estimation error as (8), where E_i(p) and F_i^v(p), ∀i ∈ M, p ∈ N, are related to the control gain parameters K_v(p) and C_e(p).

Remark 1. The effect of the stochastic jumps between detected modes is reflected in the asynchronous repetitive control law (7) via the gain matrices K_v(p) and C_e(p), p ∈ N.
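The repetitive internal model can be sketched as follows. The paper's explicit C_R(z) is not reproduced in this excerpt, so the sketch uses the standard discrete repetitive update v(k) = v(k − T) + e(k), which accumulates the tracking error from one period earlier.

```python
import numpy as np

class RepetitiveInternalModel:
    """Standard discrete repetitive update v(k) = v(k - T) + e(k),
    used here as a stand-in for the paper's C_R(z)."""

    def __init__(self, T):
        self.T = T
        self.buf = np.zeros(T)  # holds v(k - T), ..., v(k - 1); v = 0 for k < 0
        self.k = 0

    def update(self, e_k):
        """Consume the tracking error e(k) and return v(k)."""
        idx = self.k % self.T
        self.buf[idx] += e_k  # v(k) = v(k - T) + e(k)
        self.k += 1
        return self.buf[idx]
```

For a constant error, the output grows by the error once per period, which is the internal-model mechanism that drives periodic steady-state error to zero in closed loop.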
The above formulation translates the design of these gain matrices into finding parametric matrices E_i(p) and F_i^v(p), which can be obtained as the solutions of a set of coupled matrix inequalities that depend on the transition and mode-detection probabilities π_ij and μ_ip, respectively (cf. Theorem 2).

Closed-Loop Governing Equations and Preliminaries.
Combining (2), (5), and (8) yields the remaining closed-loop relations. The closed-loop governing equations follow from (10), (11), and (13) and can be expressed in the compact form (14). Based on these closed-loop governing equations, the problem considered in this paper can be stated as follows: for the discrete-time linear Markovian jump system (2), develop a design procedure to determine the estimator matrices A_e(p), B_e(p), and C_e(p) and the controller gain matrices K_v(p) and C_e(p) such that the closed-loop system (14) is mean square stable.
To derive the main results, we will employ the following definition, assumption, and lemma.
Definition 1 (see [41]). The discrete-time linear Markovian switching system (2) with u(k) ≡ 0 is said to be mean square stable if, for every initial condition x(0) ∈ R^n and θ(0) ∈ M,

lim_{k→+∞} E[‖x(k)‖²] = 0.

Assumption 1. The system output matrix C_i ∈ R^{m×n} has full row rank, i.e., rank(C_i) = m.
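Definition 1 can be checked empirically by Monte Carlo averaging over sample paths. The two-mode system below (matrices `A` and transition matrix `PI`) is hypothetical and chosen so that every mode is contractive, so E‖x(K)‖² decays to zero.

```python
import numpy as np

# Hypothetical 2-mode Markov jump system x(k+1) = A[theta(k)] x(k), u = 0.
A = [np.array([[0.5, 0.2], [0.0, 0.6]]),
     np.array([[0.7, 0.0], [0.1, 0.4]])]
PI = np.array([[0.9, 0.1], [0.2, 0.8]])

def mean_square_norm(A, PI, x0, theta0, K, runs, seed=0):
    """Estimate E||x(K)||^2 by averaging over independent sample paths."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(runs):
        x, th = np.array(x0, dtype=float), theta0
        for _ in range(K):
            x = A[th] @ x
            th = rng.choice(len(A), p=PI[th])
        acc += x @ x
    return acc / runs
```

For a mean square stable system this estimate tends to zero as K grows, matching the limit in Definition 1; it is a numerical check, not a proof.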
The singular-value decomposition (SVD) of C_i can be written as C_i = G_i [Λ_i 0] H_i^⊤, where Λ_i ∈ R^{m×m} is a diagonal matrix with positive diagonal elements in descending order, 0 ∈ R^{m×(n−m)} is the zero matrix, and G_i ∈ R^{m×m} and H_i ∈ R^{n×n} are orthogonal matrices.
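The decomposition in Assumption 1 maps directly onto numpy's SVD, which already returns the singular values in descending order; a sketch:

```python
import numpy as np

def output_svd(C):
    """SVD of a full-row-rank C in R^{m x n}: C = G [Lambda 0] H^T,
    with G (m x m) and H (n x n) orthogonal, Lambda diagonal > 0 descending."""
    m, n = C.shape
    G, s, Ht = np.linalg.svd(C, full_matrices=True)
    Lam = np.diag(s)                             # m x m, positive, descending
    block = np.hstack([Lam, np.zeros((m, n - m))])  # [Lambda 0], m x n
    H = Ht.T                                     # n x n orthogonal
    return G, Lam, H, block
```

The full-row-rank assumption guarantees all m singular values are strictly positive, which is what lets Lemma 1 decompose the nonlinear product terms later in the paper.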
Lemma 1 (see [41]). For a given matrix, where H ∈ R^{n×n} is the right orthogonal matrix of the SVD, X_{11} ∈ R^{m×m} and X_{22} ∈ R^{(n−m)×(n−m)}.

Main Results
In this section, we provide the asynchronous repetitive controller design approach for the system given by (2). First, we present a sufficient condition in the following theorem to ensure that the closed-loop system (14) is mean square stable.
Theorem 1. Let α_i and β_i, ∀i ∈ M, denote positive scalars, and define the following matrices: The closed-loop asynchronous repetitive control system (14) is mean square stable if there exist matrices P_{1i} ≻ 0, P_{2i} ≻ 0, and Q_i ≻ 0, ∀i ∈ M, with appropriate dimensions such that the following matrix inequality holds for all i ∈ M, p ∈ N:

Figure 1: The overall closed-loop system configuration.

Complexity
Proof. For any system mode θ(k) = i ∈ M, we construct the stochastic Lyapunov functional (21) for the closed-loop system (14), where V_1(ξ(k), i) and V_2(v(k − T), i) are given as follows: In the following developments, the expectation operator E is omitted from the right-hand side of some expressions for clarity of presentation. Based on the result of [41], evaluating (23) along the trajectories of (14), we simplify E[ΔV_1(ξ(k), i)] and E[ΔV_2(v(k − T), i)] as follows: Combining (25) and (26), we obtain Based on the matrix inequality (20), for any η(k) ≠ 0 we have where λ_min(−Θ_i(p)) denotes the minimal eigenvalue of −Θ_i(p) and c = inf λ_min(−Θ_i(p)) for all i ∈ M, p ∈ N. From (28), for any positive integer K > T ≥ 1, we can obtain As K ⟶ +∞, we have which implies that Therefore, the closed-loop asynchronous repetitive control system (14) is mean square stable in the sense of Definition 1. □

Remark 2. In contrast to the repetitive control designs presented in the recent literature, Theorem 1 presents a clear framework to ensure mean square stability for discrete-time Markovian switching systems with an estimator-based asynchronous repetitive controller. Further, the above approach employs the multiple, fully mode-dependent Lyapunov functional (21) in arriving at the result in Theorem 1, which leads to a less conservative result than the approach given in [40], which includes a mode-independent term. The condition given in (20) contains nonlinear product terms of unknown gain matrices and is thus a nonlinear matrix inequality. Such an inequality is generally difficult to solve, as no established methods are currently available in the literature for it. In this paper, the full-row-rank assumption on the system output matrix C_i, its SVD, and Lemma 1 are utilized to decompose the nonlinear product terms and solve the inequality.
The following theorem provides a sufficient condition, in terms of an LMI, to ensure mean square stability of the closed-loop asynchronous repetitive control system (14) and subsequently provides a method to synthesize the gain matrices via the parametric matrices E_i(p), F_i^v(p), ∀i ∈ M, ∀p ∈ N.
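Since the explicit coupled LMIs of Theorem 2 are not reproduced in this excerpt, the sketch below illustrates only the underlying certificate for a single hypothetical mode: a discrete-time Lyapunov matrix P ≻ 0 satisfying A^⊤PA − P ≺ 0, computed by solving P − A^⊤PA = Q through vectorization.

```python
import numpy as np

def discrete_lyapunov(A, Q):
    """Solve P - A^T P A = Q for P.

    Uses vec(A^T P A) = (A^T kron A^T) vec(P), so the linear system is
    (I - kron(A^T, A^T)) vec(P) = vec(Q); valid when A is Schur stable.
    """
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A.T, A.T)
    return np.linalg.solve(M, Q.flatten()).reshape(n, n)
```

For Q ≻ 0 and a Schur-stable A, the resulting P is symmetric positive definite and certifies A^⊤PA − P = −Q ≺ 0; the paper's conditions additionally couple such certificates across modes through π_ij and μ_ip.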
Proof. The proof of this theorem follows from Theorem 1. By the Schur complement, (20) is equivalent to

where

and

Choosing any p ∈ N, we notice that

where

Numerical Example and Simulations
In this section, we consider the second-order RLC circuit system shown in Figure 2, which can be modeled as a discrete-time linear Markovian switching system (2).
By choosing u_C and i_L as the state variables, u as the input, and the load voltage u_{R2} as the output, the state-space representation of this circuit is given by Because of environmental changes and uncertainties, one can expect fluctuations in the circuit parameters. We model the underlying system with these parameter fluctuations as a Markov jump system as follows: where x = [u_C, i_L]^⊤ and y = u_{R2}. Upon discretization of these governing equations (zero-order-hold approximation with sampling period T_s = 0.01 s), we obtain where The evolution of the system and controller/estimator modes is illustrated in Figures 3 and 4, respectively.
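The zero-order-hold discretization step can be sketched with the standard augmented-matrix identity expm([[A, B], [0, 0]]·T_s) = [[A_d, B_d], [0, I]]. The continuous-time pair used in the test is a placeholder, not the circuit's matrices; a truncated Taylor series for the matrix exponential is adequate here because ‖A·T_s‖ is small.

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        E = E + term
    return E

def zoh_discretize(A, B, Ts):
    """Zero-order-hold discretization of x_dot = A x + B u at period Ts."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A * Ts
    M[:n, n:] = B * Ts
    E = expm_taylor(M)
    return E[:n, :n], E[:n, n:]   # (Ad, Bd)
```

In practice one would use a library routine (e.g. a Padé-based matrix exponential); the Taylor version keeps the sketch self-contained.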
For θ ∈ M = {1, 2, 3}, the system parameters are chosen as The periodic reference signal r(k) is generated by sampling the output of the following linear time-invariant (LTI) exosystem: ω̇(t) = Fω(t), r(t) = Gω(t), where F = [0, 1; −(π/2)², 0] and G = [1, 2]. The sampling period is T_s = 0.01 s. The period of the continuous-time periodic reference signal r(t) is 4 s, i.e., T = 4. Then, the number of samples in each period of the periodic reference signal r(k) is T/T_s = 400.
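The reference generator can be sketched as exact sampling of the exosystem: F is a harmonic oscillator with natural frequency π/2 rad/s, hence period 2π/(π/2) = 4 s and 400 samples per period at T_s = 0.01 s. The output map r = Gω is inferred from the F and G given above, and the closed form below for expm(F·T_s) is the standard rotation-like solution of that oscillator.

```python
import numpy as np

w0 = np.pi / 2          # natural frequency of the exosystem
Ts, N = 0.01, 400       # sampling period and samples per period

# Exact one-step transition expm(F Ts) for F = [[0, 1], [-w0^2, 0]].
Fd = np.array([[np.cos(w0 * Ts), np.sin(w0 * Ts) / w0],
               [-w0 * np.sin(w0 * Ts), np.cos(w0 * Ts)]])
G = np.array([1.0, 2.0])

omega = np.array([-2.0, -1.0])  # omega(0) as chosen in this section
r = np.empty(2 * N)
for k in range(2 * N):
    r[k] = G @ omega            # r(k) = G omega(k)
    omega = Fd @ omega          # omega(k+1) = expm(F Ts) omega(k)
```

Simulating two periods makes the 400-sample periodicity of r(k) visible directly.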

As stated in Remark 5, we can choose a performance index to seek the best overall control and learning performance. In this paper, we obtain α_1 = 0.5, α_2 = 1, α_3 = 2, β_1 = 1, β_2 = 1.5, β_3 = 2. By utilizing Theorem 2, we directly obtain that We choose the initial value of ω(t) as ω(0) = [−2, −1]^⊤. The initial state values are chosen as Under the above parameters, the evolution of the system state x(k) and the estimation error δ(k) are plotted in Figures 5 and 6, respectively. The trajectories of the reference signal r(k) and the system output y(k) are shown in Figure 7. It can be observed that the overall closed-loop system achieves satisfactory state estimation and reference tracking performance, which demonstrates the effectiveness of the design scheme proposed in Section 3.

Remark 7.
Based on the numerical simulation results, the asynchronous repetitive control method proposed in this paper has the following advantages over the methods provided in [32][33][34][35][36][37][38][39]: (1) the proposed method does not require the controller or estimator to switch synchronously with the plant; (2) the proposed asynchronous repetitive control method is more general and inclusive in the sense that a variety of more restrictive cases can be recovered from it. For example, when N = M and μ_ip = 1 for p = i, i.e., μ = I, the asynchronous repetitive control method reduces to the synchronous mode-dependent case. When N = {1}, i.e., the mode detector has only one mode ρ(k) = 1, it reduces to the asynchronous mode-independent case. When M = N = {1}, it reduces to the synchronous mode-independent case, i.e., the general form of repetitive control.

Conclusion
In this work, we provided an asynchronous repetitive control strategy for discrete-time Markovian switching systems. A hidden Markov model was adopted to describe the asynchronous phenomenon between the system modes and the controller modes. With a state-estimator-based asynchronous repetitive controller, sufficient conditions for mean square stability of the closed-loop stochastic system were derived using basic results from stochastic analysis and matrix inequalities. With the proposed approach, all the design matrices can be obtained by solving a set of linear matrix inequalities. To illustrate the approach and its feasibility, numerical simulations on a circuit system were presented to demonstrate the effectiveness of the proposed design scheme. In this paper, we illustrated the approach using a simple numerical example; one potential future topic is to apply the formulation to an engineering application and conduct numerical and experimental investigations, for which we plan to collaborate with practicing engineers. Another possible direction is to develop both necessary and sufficient conditions for such systems, which would also open up more concrete opportunities for applying the method to practical systems.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.