Asymptotic Stabilization of Continuous-Time Linear Systems with Input and State Quantizations

This paper discusses the asymptotic stabilization problem of linear systems with input and state quantizations. In order to achieve asymptotic stabilization of such systems, we propose a state-feedback controller comprising two control parts: the main part determines the fundamental characteristics of the system associated with the cost, and the additional part eliminates the effects of input and state quantizations. In particular, in order to implement the additional part, we introduce a quantizer with a region-decision making process (RDMP) for a certain linear switching surface. The simulation results show the effectiveness of the proposed controller.

In the framework of combined quantization, in particular, there are two main approaches to the asymptotic stability of quantized feedback systems. The first approach is to employ logarithmic quantizers [23,24]. Based on the assumption that logarithmic quantizers have an infinite number of quantization levels, [23] designed a guaranteed cost controller and [24] presented model predictive control of linear systems. However, without this assumption, it was practically difficult to achieve asymptotic stability in [23,24]. The second approach is to employ quantizers with adjustable sensitivity (or a zoom variable) [18,21]. Based on the assumptions that the sensitivity can be changed dynamically and that it approaches zero as time passes, [18,21] achieved asymptotic stability. However, in the case of a finite set of possible values for the sensitivity, asymptotic stability could not be achieved. To the best of our knowledge, in the framework of combined quantization, no studies based on simple uniform quantizers have been performed. The goal of this paper is to develop a control policy that achieves asymptotic stabilization of systems under combined quantization without the aforementioned assumptions.
This paper deals with the asymptotic stabilization problem of linear systems with uniform input and state quantizations, where the quantization levels of the uniform quantizers are not design parameters but predetermined constants. For asymptotic stabilization of systems with input quantization, the authors in [15][16][17] proposed a simple yet powerful concept that required the exact state information in order to eliminate the effect of input quantization. Since only the quantized state information, rather than the exact state information, is measurable in systems with state quantization, they could not handle systems with state quantization. This paper extends the previous studies to linear systems with not only input quantization but also state quantization. In other words, to achieve asymptotic stabilization of such systems, we propose a state-feedback controller. Like the controllers in [15][16][17], the proposed controller consists of main and additional control parts. The former is responsible for determining the fundamental characteristics of the system associated with the cost, and the latter is responsible for simultaneously eliminating the effects of input and state quantizations. Here, since each component of the latter part is designed as an integer multiple of the input quantization level, the decreasing property of a Lyapunov function associated with the cost can be maintained. In particular, in order to implement the latter part, we introduce a quantizer with a region-decision making process (RDMP), where the RDMP plays the role of determining where the state is located with respect to a certain linear switching surface. As mentioned above, since this paper focuses on asymptotic stabilization of linear systems using combined quantization rather than on communication constraints such as the communication rate and the channel capacity, it is not directly related to control with limited information but is included in the framework of combined quantization. The main contribution of this paper is the design of a novel RDMP quantizer-based state-feedback controller that achieves asymptotic stability for quantized feedback systems. Further, the merit of the proposed controller is that asymptotic stability can be achieved without assumptions such as adjustable sensitivity or an infinite number of quantization levels.
This paper is organized as follows. Section 2 provides a system description. Section 3 introduces an RDMP quantizer and a state-feedback controller for linear systems with input and state quantizations. Section 4 presents some simulation results validating the proposed controller. Finally, Section 5 presents the conclusion along with a summary. The notations in this paper are consistent with those in [16], with the following additional notations: sgn(a) denotes the sign of a scalar a; for x ∈ R^n, [x]_i denotes the i-th component of x; the inner product of x and y is denoted as ⟨x, y⟩ = x^T y; further, e_i denotes the unit vector whose i-th entry is its only nonzero entry.

System Description
Consider the following continuous-time linear system with input and state quantizations:

ẋ(t) = A x(t) + B q_u(u(t)),  (1)

where x(t) ∈ R^n, u(t) ∈ R^m, and q_u(⋅) are the state, the control input, and the quantization mapping from R^m to R^m, respectively.
Further, the midtread uniform quantization operators q_u(⋅) and q_x(⋅) with respect to u(t) and x(t) are defined componentwise as

[q_u(u(t))]_i ≜ Δ_u round(u_i(t)/Δ_u),  [q_x(x(t))]_i ≜ Δ_x round(x_i(t)/Δ_x),

where round(⋅) is a function that rounds to the nearest integer.
Hereafter, we refer to the fixed values Δ_u (> 0) and Δ_x (> 0) as the quantizing levels with respect to u(t) and x(t), respectively. Since this paper focuses on the steady-state performance (asymptotic stabilization) rather than on the saturation level of the uniform quantizers, we assume that the saturation levels are sufficiently large. Under this assumption, the quantization errors ∇u(t) and ∇x(t) are defined as

∇u(t) ≜ q_u(u(t)) − u(t),  ∇x(t) ≜ q_x(x(t)) − x(t),

where each component of ∇u(t) and ∇x(t) at time t is bounded by Δ_u/2 and Δ_x/2, respectively; that is,

|[∇u(t)]_i| ≤ Δ_u/2,  |[∇x(t)]_i| ≤ Δ_x/2.
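The midtread quantizer and its error bound can be sketched in a few lines of NumPy (a minimal illustration; the names `q_midtread` and `delta_x` are ours, and `np.round` resolves ties by round-half-to-even, which is one valid choice of tie rule):

```python
import numpy as np

def q_midtread(v, delta):
    """Midtread uniform quantizer: rounds each component of v to the
    nearest integer multiple of the quantizing level delta."""
    return delta * np.round(np.asarray(v, dtype=float) / delta)

# Each component of the quantization error is bounded by delta/2.
x = np.array([0.37, -1.92, 2.5])
delta_x = 0.5
qx = q_midtread(x, delta_x)          # -> [0.5, -2.0, 2.5]
assert np.all(np.abs(qx - x) <= delta_x / 2 + 1e-12)
```

The same function with level `delta_u` models the input-side quantizer q_u(⋅).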

Main Results
Before providing a state-feedback controller that can achieve asymptotic stabilization of system (1), let us first consider the following linear system without quantization:

ẋ(t) = A x(t) + B u(t).  (5)

In addition to (5), choose the following cost:

J = ∫₀^∞ (x(t)^T Q x(t) + u(t)^T R u(t)) dt,  (6)

where Q and R are positive definite matrices.
Lemma 1. For linear system (5) without input and state quantizations, suppose that there exist a symmetric positive definite matrix S, a matrix Y, and a positive scalar γ such that conditions (7) and (8) hold. Then, the controller can be constructed as u(t) = K x(t), which makes the state x(t) converge to the origin asymptotically, where K ≜ Y S⁻¹. Further, in order to obtain the minimum upper bound of the cost in (6), we minimize γ subject to (7) and (8).
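The LMI conditions (7) and (8) are not reproduced here; as a self-contained stand-in, the classical LQ/Riccati route computes a stabilizing gain for (5) with cost (6). This is the standard counterpart of Lemma 1's K = Y S⁻¹ construction, not the paper's LMI itself, and the matrices A, B, Q, R below are made-up example data:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data: a double integrator with identity weights.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Unique stabilizing solution of the continuous-time ARE.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # LQ gain for u(t) = -K x(t)

# The closed-loop matrix A - B K is Hurwitz (all eigenvalues in the
# open left half-plane), so x(t) converges to the origin.
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)
```

For this example the gain is K = [1, √3], the well-known LQ gain of the double integrator with unit weights.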
Now, based on the solutions obtained from Lemma 1, we can construct a state-feedback controller for system (1) as

u(t) = u_m(x(t)) + v(t),  (11)

where u_m(x(t)) is the main control part used to achieve goals such as linear quadratic (LQ) regulation and pole assignment, and v(t) is the additional control part used to simultaneously handle the input and state quantization errors. Here, we choose each component of v(t) to be an integer multiple of the input quantizing level Δ_u, which results in the relation q_u(u(t)) = q_u(u_m(x(t)) + v(t)) = q_u(u_m(x(t))) + v(t). This relation enables us to divide the role of the controller in (11) into two parts: the main control part, responsible for determining the fundamental characteristics of the system associated with the cost, and the additional control part, responsible for eliminating the effects of input and state quantizations.
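The key identity — that a shift by an integer multiple of Δ_u passes through the midtread quantizer unchanged — can be checked numerically (a small sketch with made-up values; `q_midtread` is our name for the operator q_u(⋅)):

```python
import numpy as np

def q_midtread(v, delta):
    # Midtread uniform quantizer with quantizing level delta.
    return delta * np.round(np.asarray(v, dtype=float) / delta)

delta_u = 0.2
rng = np.random.default_rng(0)
u_m = rng.uniform(-3, 3, size=100)            # main control part samples
v = delta_u * rng.integers(-5, 6, size=100)   # integer multiples of delta_u

# Since round(a + k) = round(a) + k for any integer k, v shifts the
# quantizer output by exactly v: q_u(u_m + v) = q_u(u_m) + v.
lhs = q_midtread(u_m + v, delta_u)
rhs = q_midtread(u_m, delta_u) + v
assert np.allclose(lhs, rhs)
```

This is why designing v(t) on the Δ_u grid lets the additional part act on the plant without being distorted by the input quantizer.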
The authors in [15][16][17] proposed a novel concept for asymptotic stabilization of systems with input quantization; however, since the exact state information is necessary to eliminate the effect of quantization, they could not handle systems with state quantization. To achieve asymptotic stabilization of systems with input and state quantizations, we introduce an RDMP quantizer that enables us to achieve this goal using only the quantized state information, as follows. The overall closed-loop system to be considered is shown in Figure 1(a), where a midtread uniform quantizer q_u(⋅) and an RDMP quantizer are employed on the input side and the state side, respectively. As described in Figure 1(b), the RDMP quantizer comprises a midtread uniform quantizer q_x(⋅), a register, and the RDMP, where, based on the matrices obtained from Lemma 1, the register stores B^T P (with P ≜ S⁻¹). With respect to the linear switching surface B^T P x(t) = 0, the RDMP plays the role of determining whether the state x(t) lies in the region Ω₊ = {x(t) | B^T P x(t) ≥ 0} or Ω₋ = {x(t) | B^T P x(t) < 0}. As illustrated in Figure 1(c), the RDMP is designed using a midrise uniform quantizer q_s(⋅), amplifiers, adders, and so on. The operator q_s(⋅) is defined as [q_s(x(t))]_i = q_s(x_i(t)) ≜ Δ{floor(x_i(t)/Δ) + 0.5}, where x_i(t) is the i-th component of x(t), floor(a) is the largest integer not greater than a scalar a, and the fixed value Δ (> 0) is a quantizing level. Further, for the sake of convenience, the saturation level of q_s(⋅) is set to ±1, although it can be set to any value. Thus, the quantized measurement x_q(t) ∈ R^(n+1), that is, the output of the RDMP quantizer, is defined as x_q(t) ≜ [q_x(x(t))^T  q_s(x_s(t))]^T, where x(t) ∈ R^n and x_s(t) = B^T P x(t). Therefore, the main control part u_m(x_q(t)) and the additional control part v(t) in (11) can be designed using the quantized measurements q_x(x(t)) and q_s(x_s(t)), respectively.

Theorem 2. For system (1) with input and state quantizations, suppose that there exist a symmetric positive definite matrix S, a matrix Y, and a positive scalar γ such that (7) and (8) hold. Then, based on the RDMP quantizer shown in Figure 1, the controller can be constructed as u(t) = u_m(x_q(t)) + v(t) in (11), which makes the state x(t) converge to the origin asymptotically, where K ≜ Y S⁻¹ and the i-th component of v(t) can be designed as in (12), where ⌈a⌉ is the smallest integer not less than a scalar a. Further, in order to obtain the minimum upper bound of the cost in (6), we minimize γ subject to (7) and (8).
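The two ingredients of the RDMP quantizer can be sketched directly from their definitions (a minimal illustration; `q_midrise`, `rdmp_decide`, and the row vector `BtP` are our hypothetical names, and the 2-state example data are made up):

```python
import numpy as np

def q_midrise(v, delta, sat=1.0):
    """Midrise uniform quantizer with quantizing level delta and
    saturation level +/- sat (set to 1 in the paper for convenience).
    Its output is never zero, so its sign encodes the side of zero."""
    q = delta * (np.floor(np.asarray(v, dtype=float) / delta) + 0.5)
    return np.clip(q, -sat, sat)

def rdmp_decide(x, BtP):
    """Region decision: +1 if the state lies in Omega+ (B'P x >= 0),
    -1 if it lies in Omega- (B'P x < 0). Only the sign of the
    switching variable x_s = B'P x is needed; BtP is the stored row."""
    x_s = float(BtP @ x)
    return +1 if x_s >= 0 else -1

# Hypothetical example: a 2-state system with B'P = [1, 2].
BtP = np.array([1.0, 2.0])
assert rdmp_decide(np.array([0.5, 0.1]), BtP) == +1   # x_s = 0.7
assert rdmp_decide(np.array([0.5, -0.5]), BtP) == -1  # x_s = -0.5
```

Note how the midrise characteristic supports the decision: even for an x_s arbitrarily close to zero, q_midrise outputs ±Δ/2 rather than 0, so the transmitted value always carries the sign of x_s.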
As shown in the proof of Theorem 2, we design v(t) as in (12) in order to eliminate the energy, in the sense of Lyapunov, caused by ∇u(t) and ∇x(t) and to maintain the decreasing property of a Lyapunov function associated with the cost. However, the chattering phenomenon may be caused by the proposed controller, since the main control part employs q_x(x(t)) and v(t) employs sgn(x_s(t)). In order to alleviate the chattering phenomenon without significant performance degradation, the continuous function tanh(κ x_s(t)) can be employed in (12) instead of sgn(x_s(t)) [25]. Then, V̇(x(t)) may not be negative for small |x_s(t)|. However, for any given ε, there exists a sufficiently large κ > 0 such that V̇(x(t)) < 0 for |x_s(t)| > ε, and all trajectories enter the strip |x_s(t)| < ε. This approximation steers the state into a neighborhood of x(t) = 0, and the size of the neighborhood shrinks as κ → ∞; on the other hand, as κ → ∞, the chattering phenomenon increases. That is, κ is a trade-off parameter between the steady-state performance and the chattering phenomenon, so the maximum κ in the practically allowable range should be chosen.
Remark 3. The feature of this paper is that asymptotic stabilization of system (1) with input and state quantizations can be achieved even when coarse quantizers with large Δ_u and Δ_x are used instead of fine quantizers. Therefore, since we employ finite-level coarse quantizers with sufficiently large quantizing levels that enable us to avoid saturation, the assumption in Section 2 that the saturation levels of the quantizers are sufficiently large is indeed reasonable.

Remark 4. A remarkable point here is that, if the information transmitted from the RDMP quantizer to the controller contained not the sign of x_s(t) = B^T P x(t) but its value quantized by a midtread quantizer, it would be almost impossible to completely reconstruct the sign of x_s(t) on the controller side, owing to the quantization errors. Thus, it is certainly more desirable to transmit the sign of x_s(t) to the controller side instead of its quantized value. For this purpose, this paper has used the two regions Ω₊ = {x(t) | B^T P x(t) ≥ 0} and Ω₋ = {x(t) | B^T P x(t) < 0} in the RDMP, which play an important role in determining the sign of x_s(t) = B^T P x(t). As a result, based on the sign assigned with the help of the two regions, we obtain a relation that enables us to eliminate the energy, in the sense of Lyapunov, caused by ∇u(t) and ∇x(t).
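As an illustration of how the pieces fit together, the following is a minimal closed-loop simulation sketch: a double integrator with midtread input and state quantizers, a main part u_m = -K q_x(x), and a sign-based additional part v on the Δ_u grid. The specific matrices, gains, and the multiplier 3 in v are made-up stand-ins for illustration, not the paper's design (12):

```python
import numpy as np

def q_midtread(w, delta):
    # Midtread uniform quantizer with quantizing level delta.
    return delta * np.round(np.asarray(w, dtype=float) / delta)

A = np.array([[0.0, 1.0], [0.0, 0.0]])                 # double integrator
B = np.array([[0.0], [1.0]])
P = np.array([[np.sqrt(3), 1.0], [1.0, np.sqrt(3)]])   # ARE solution for Q=I, R=1
K = (B.T @ P).ravel()                                   # LQ gain [1, sqrt(3)]
delta_u, delta_x = 0.05, 0.05
dt, steps = 1e-3, 10_000                                # 10 s of Euler integration

x = np.array([1.0, 0.0])
for _ in range(steps):
    x_s = float(B.T @ P @ x)                # switching variable (sign via RDMP)
    sgn = 1.0 if x_s >= 0 else -1.0         # region decision: Omega+ vs Omega-
    v = -3 * delta_u * sgn                  # integer multiple of delta_u
    u = -K @ q_midtread(x, delta_x) + v     # main part uses the quantized state
    u_applied = q_midtread(u, delta_u)      # input-side quantization
    x = x + dt * (A @ x + B.ravel() * u_applied)

# With |v| chosen to dominate the combined quantization errors, the
# state is driven (numerically) to a small neighborhood of the origin.
assert np.linalg.norm(x) < 0.05
```

Without the term v, the same loop stalls in a region around the origin where q_x(x) = 0, which is exactly the behavior reported for the Type-1 controller in the simulations below.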
and the minimized γ is 11.4534. Figure 2 illustrates the trajectories of the states for each controller, and Figure 3 shows ‖x(t)‖₂ on a logarithmic scale. As shown in Figures 2 and 3, in the case of the Type-1 controller, the state does not converge to the origin and remains within a certain region. However, in the case of the Type-2 controller, the state converges to the origin asymptotically. Further, Tables 1 and 2 present ‖x(t)‖₂ in the steady state for different values of Δ_u and Δ_x, respectively. As shown in the tables, in the case of the Type-1 controller, ‖x(t)‖₂ converges to a certain value owing to the effects of input and state quantizations; however, in the case of the Type-2 controller, ‖x(t)‖₂ converges to zero (almost within the numerical precision ≃ 10⁻⁵). That is, from the tables, we can clearly see that the Type-2 controller achieves asymptotic stability for each given Δ_u and Δ_x. Figures 3 and 4 show that the Type-3 controller, which uses tanh(κ x_s(t)) in v(t), can alleviate the chattering phenomenon of the Type-2 controller without significant performance degradation.

Conclusion
This paper addressed the asymptotic stabilization problem of linear systems with input and state quantizations. In order to achieve asymptotic stabilization of such systems, a state-feedback controller was proposed. For implementing the proposed controller, we introduced the RDMP quantizer, which enabled us to eliminate the effects of input and state quantizations without the use of the exact state information and did not require assumptions such as adjustable sensitivity or an infinite number of quantizing levels. As shown in the simulation results, in the case of the Type-1 controller, the state did not converge to the origin; however, in the case of the Type-2 controller, the state converged to the origin despite input and state quantizations. Further, the Type-3 controller alleviated the chattering phenomenon of the Type-2 controller without significant performance degradation.

Figure 2: The trajectories of the states.

Figure 4: The trajectories of the inputs.