We consider clustering-based procedures for the identification of discrete-time hybrid systems in piecewise affine (PWA) form. These methods combine three main techniques: clustering, linear identification, and pattern recognition. This paper first treats the clustering method based on the k-means algorithm, which estimates both the parameter vector of each submodel and the coefficients of each partition, assuming that the model orders na and nb and the number of submodels s are known. The performance of this approach can be degraded by the presence of outliers and by poor initializations. To overcome these problems, we propose new techniques for data classification. The proposed techniques exploit Chiu’s clustering technique and the self-adapting artificial Kohonen neural network approach in order to improve the performance of both the clustering step and the final linear regression procedure. Simulation results are presented to illustrate the performance of the proposed methods.
1. Introduction
Hybrid systems have received great attention in recent years because the behavior of a broad class of physical systems combines continuous and discrete-event phenomena. A hybrid system is governed by continuous differential equations together with discrete variables. The continuous behavior results from the natural evolution of the physical process, whereas the discrete behavior can be due to the presence of switches, operating phases, transitions, computer program code, and so forth. Several classes of models have been proposed in the literature for the representation of hybrid systems, such as Jump Linear models (JL models) [1], Markov Jump Linear models (MJL models) [2], Mixed Logical Dynamical models (MLD models) [3, 4], Max-Min-Plus-Scaling systems (MMPS models) [5], Linear Complementarity models (LC models) [6], Extended Linear Complementarity models (ELC models) [7], and Piecewise Affine models (PWA models) [8, 9]. Only PWA models are considered in this paper. These models are obtained by decomposing the state-input domain into a finite number of nonoverlapping convex polyhedral regions and by associating a simple linear or affine model with each region. This class of hybrid systems offers several interesting advantages. Firstly, it can approximate any nonlinear system with arbitrary accuracy [10]. Moreover, the equivalence between PWARX models and other classes of hybrid systems allows results on PWA models to be transferred to these classes [11]. Therefore, PWA models can be used to represent complex nonlinear continuous systems. In fact, we can exploit a divide-and-conquer strategy, which consists in decomposing the domain of the nonlinear system into a set of operating regions and associating a linear or affine model with each operating region. The complex nonlinear system is then modeled as a hybrid system switching between linear submodels.
The analysis and control of PWA systems, like those of any other type of dynamic system, require a mathematical model of the system's behavior. This model can be derived from a detailed analysis of the phenomena described by the system using the various laws that govern its operation. Such an approach can lead to very complicated models that cause problems of exploitation and implementation. For engineering purposes, however, a mathematical model must offer a compromise between accuracy and simplicity of use. A solution to this problem is the identification approach, which builds a mathematical model from observed input-output data. In the case of PWARX systems, identification is known to be a challenging problem because it involves both the estimation of the parameters of the affine submodels and of the hyperplanes defining the partition of the regressor space. Several approaches have been proposed in the literature for the identification of PWARX systems. These methods can be classified into several categories of solutions, such as algebraic solutions [12], clustering-based solutions [8], Bayesian solutions [13], bounded-error solutions [14], and sparse optimization solutions [15, 16]. The clustering-based solution has been the most popular because of its capacity for modeling complex systems and its simplicity of implementation. It uses the following steps to identify the parameters and the hyperplanes:
constructing small data sets from the initial data set,
estimating a parameter vector for each small data set,
classifying the parameter vectors into s clusters,
classifying the initial data set and estimating the s submodels with their partitions.
It is easy to see that data classification is the main step toward the objective of PWARX system identification, because successful identification of the parameters depends on correct data classification. Early approaches use classical clustering algorithms for the data classification [8, 17, 18]. These approaches are characterized by their simplicity of computation and implementation, but they can converge to local minima in the case of poor initializations. Furthermore, their performance degrades when the data to be classified are contaminated by outliers. Obviously, the use of more powerful clustering algorithms can enhance the performance of these methods. We therefore propose to improve this approach by using other algorithms for data classification, namely Chiu’s algorithm [19] and the self-adapting artificial Kohonen neural network algorithm [20]. These algorithms reduce the effect of outliers and, moreover, do not need any initialization. This paper is organized as follows. In Section 2, we present the model and its main assumptions. Section 3 recalls the main steps of the identification of PWARX systems based on the clustering technique. Section 4 presents the motivation for the two proposed methods. In Sections 5 and 6, we describe two algorithms for data clustering that resolve the main problems of the existing methods. The performances of the proposed approaches are evaluated and compared through simulation results in Section 7. Section 8 concludes the paper.
2. Model and Assumptions
In the following, we address the problem of identification of PWARX model described by
(1)y(k)=f(φ(k))+e(k),
where
y(k)∈R is the system output,
e(k) is the noise,
k is the discrete time index,
φ(k) is the vector of regressors which belongs to a bounded polyhedron H in Rd:
(2)φ(k)=[y(k-1),…,y(k-na),u(k-1),…,u(k-nb)]T,
where u(k)∈Rnu is the system input, na and nb are the system orders, and d=na+nu·nb (consistent with the examples of Section 7).
f is a piecewise affine function defined by
(3)f(φ)=θiTφ-  if φ∈Hi, for i=1,…,s,
where φ-=[φT1]T, s is the number of submodels, Hi are polyhedral partitions of the bounded domain H, and θi∈Rd+1 is the parameter vector.
The following assumptions are made.
(A1) The orders na and nb and the number of submodels s are known.
(A2) The noise e(k) is assumed to be an independent and identically distributed Gaussian process with zero mean and finite variance σ2.
(A3) The regions {Hi}i=1s are the polyhedral partitions of a bounded domain H⊂Rd such that
(4)⋃i=1sHi=H,  Hi∩Hj=∅ for all i≠j.
Problem Statement. Identify the partitions {Hi}i=1s and the parameter vectors {θi}i=1s of the PWARX model using a data set {y(k),φ(k)}k=1N.
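To make the regressor structure of (2) concrete, it can be assembled from recorded input-output sequences as follows (an illustrative Python sketch for the single-input case nu = 1; the function name is our own):

```python
import numpy as np

def build_regressors(y, u, na, nb):
    """Assemble the regressor vectors of eq. (2),
    phi(k) = [y(k-1), ..., y(k-na), u(k-1), ..., u(k-nb)]^T,
    from scalar output and input sequences y and u."""
    start = max(na, nb)            # first index k with a full history available
    Phi, Y = [], []
    for k in range(start, len(y)):
        phi = [y[k - i] for i in range(1, na + 1)] + \
              [u[k - j] for j in range(1, nb + 1)]
        Phi.append(phi)
        Y.append(y[k])
    return np.array(Phi), np.array(Y)
```

For na = 1 and nb = 1 this yields φ(k) = [y(k-1) u(k-1)]T, matching the regressor used in Example 2.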
3. Identification of PWARX Models Based on Clustering Approach
This section recalls the main steps of the clustering-based approach for the identification of the PWARX models [8, 17].
3.1. Data Classification
For every data pair (φ(k),y(k)), k=1,…,N, we construct a local set ρk={tk1,…,tknρ} containing, in ascending order, the index k of (φ(k),y(k)) together with the indices of its (nρ-1) nearest neighbors, satisfying
(5)∀(φ˘,y˘)∈ρk,∥φ(k)-φ˘∥2≤∥φ(k)-φ^∥2,∀(φ^,y^)∉ρk.
Among the obtained local sets ρk, some may contain only data from the same submodel; these are called pure local sets. Others collect data from multiple submodels; these are called mixed local sets.
The parameter nρ is chosen such that nρ>d+1. It has a decisive influence on the performance of the algorithm. The optimal value of nρ is always a compromise between two phenomena: the larger this parameter, the better the parameter estimation and the stronger the rejection of noise; however, a large value of nρ also increases the number of mixed local sets.
For each local set, we can identify an affine model. To accomplish this task, we adopt the least squares method to determine the local parameter vector θk:
(6)θk=(ϕkTϕk)-1ϕkTYk,
where ϕk=[φ-(tk1)⋯φ-(tknρ)]T and Yk=[y(tk1)⋯y(tknρ)]T.
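The local-set construction (5) and the local least squares fit (6) can be sketched as follows (an illustrative Python sketch, not the authors' code; the function name is our own):

```python
import numpy as np

def local_least_squares(Phi, Y, n_rho):
    """For each regressor phi(k), gather the n_rho nearest regressors
    (Euclidean distance, including phi(k) itself), append the intercept
    term, and fit a local affine model by least squares, cf. (5)-(6)."""
    N, d = Phi.shape
    thetas = np.empty((N, d + 1))
    for k in range(N):
        # indices of the n_rho closest regressors to phi(k), k included
        dist = np.linalg.norm(Phi - Phi[k], axis=1)
        idx = np.argsort(dist)[:n_rho]
        # phi_bar = [phi^T 1]^T rows, stacked as the matrix of eq. (6)
        phi_bar = np.hstack([Phi[idx], np.ones((n_rho, 1))])
        thetas[k], *_ = np.linalg.lstsq(phi_bar, Y[idx], rcond=None)
    return thetas
```

On noiseless data generated by a single affine model, every local fit recovers the true parameter vector exactly, since each local set is pure.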
Our objective is to classify the vectors θk in s separate classes using a suitable classification technique.
In this paper, three classification techniques are treated: the k-means algorithm, where the classification is done by minimizing a suitable criterion [8, 21], and Chiu’s clustering technique and the self-adapting artificial Kohonen neural network, which are detailed in Sections 5 and 6.
3.2. Parameter Estimation
Once the data are classified, it is possible to determine the s ARX submodels. We can then estimate the parameter vector θi of each submodel, i=1,…,s, using the least squares method.
3.3. Region Estimation
The final step is to determine the regions Hi. Statistical learning methods such as support vector machines (SVM) offer an interesting solution to accomplish this task [22, 23]. Support vector machines are a popular machine learning method for classification, regression, and other learning tasks. Originally, the SVM approach addressed binary classification; it has since been extended to multiclass classification, which is still an ongoing research issue [24, 25].
In our case, the task is to find, for every i≠j, the hyperplane that separates the points in Hi from those in Hj. Given two sets Hi and Hj, i≠j, the linear separation problem is to find w∈Rd and b∈R such that
(7)wTφk+b>0∀φk∈Hi,wTφk+b<0∀φk∈Hj.
This problem can easily be rewritten as a feasibility problem with linear inequality constraints. The estimated hyperplane separating Hi from Hj is denoted by Mi,jφ=mi,j, where Mi,j and mi,j are matrices of suitable dimensions. Moreover, we assume that the points in Hi belong to the half-space Mi,jφ≤mi,j.
The regions Hi are obtained by solving these linear inequalities. It then suffices to consider the bounded polyhedron [21]:
(8)[Mi,1′⋯Mi,s′M′]φ≤[mi,1′⋯mi,s′m′],
where Mx≤m are the linear inequalities describing H.
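As a minimal stand-in for the SVM step, the feasibility problem (7) can be solved for linearly separable clusters with a simple perceptron-style update (an illustrative sketch under our own naming; a max-margin SVM would refine the resulting hyperplane):

```python
import numpy as np

def linear_separator(Phi_i, Phi_j, n_iter=1000, lr=0.1):
    """Find (w, b) with w^T phi + b > 0 on H_i and w^T phi + b < 0 on H_j,
    i.e. a feasible point of the linear inequalities (7), by the perceptron
    rule. Assumes the two point sets are linearly separable."""
    X = np.vstack([Phi_i, Phi_j])
    t = np.hstack([np.ones(len(Phi_i)), -np.ones(len(Phi_j))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        updated = False
        for x, ti in zip(X, t):
            if ti * (w @ x + b) <= 0:   # misclassified or on the boundary
                w += lr * ti * x
                b += lr * ti
                updated = True
        if not updated:                 # all inequalities strictly satisfied
            break
    return w, b
```

Repeating this for every pair i≠j yields the half-space descriptions stacked in (8).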
4. Motivation for Adopting Chiu’s Clustering Technique and the Self-Adapting Artificial Kohonen Neural Network
The classification is an important step toward the objective of PWARX model identification because successful identification of both the submodels and the partitions depends on the performance of the clustering technique used. Few results have been devoted to this problem in the past, because most of the existing methods for the identification of PWARX models are based on classical clustering algorithms such as the k-means method. However, classical clustering methods, even the modified k-means algorithms, can only reduce, not eliminate, the influence of outliers and of poor initializations. Consequently, they still suffer from several drawbacks, which can be summarized as follows.
They depend on the input signal, which must be persistently exciting so that all submodels receive a balanced excitation [26].
The parameter nρ must have a small value in order to limit the computational complexity; however, the best results are generally obtained with a high value of nρ.
The k-means algorithm does not guarantee convergence toward an optimal clustering and can therefore converge to a local minimum. This is due mainly to the random initialization step used by this algorithm.
We are therefore interested in other classification techniques that can identify and eliminate the misclassified points and avoid random initializations. Since we adopt a regression scheme similar to that of the k-means procedure, we focus on ways to separate the local parameters θk, k=1,…,N.
Consider, for example, a dispersion of the local parameters as shown in Figure 1, obtained for the following true parameters:
(9)θ1=[-1 0]T, θ2=[1 0]T, θ3=[3 -2]T.
The local parameters θk, k=1,…,N.
Based on the results presented in Figure 1, it can be seen that the local parameters are scattered around the true parameter vectors. The existence of natural groupings of the data points, due to the PWA structure, is therefore clearly observed. Determining the centers of these groupings is an interesting solution to our identification problem. For this purpose, a simple and effective algorithm for clustering data points, proposed by Chiu, can be found in [27, 28]. Moreover, the self-adapting artificial Kohonen neural network provides an interesting and effective way to classify data [20].
5. Chiu’s Classification Method for the Identification of PWARX Systems
Clustering of data forms the basis of many modeling and pattern classification algorithms. The purpose of clustering is to find natural groupings of data in a large data set, thus revealing patterns in the data that can provide a concise representation of the data behavior. Chiu proposed in [27, 28] a simple and effective algorithm for data points clustering.
5.1. Principle
Chiu’s classification method considers each data point as a potential cluster center and computes for each point a potential value based on its distances to all the other data points. The point having the highest potential value is chosen as the first cluster center. The key idea of the method is that, once the first cluster center is chosen, the potential of all other points is reduced according to their distance to that center; all points near the first cluster center thus have their potential greatly reduced. The point with the highest remaining potential is then taken as the next cluster center. This procedure of acquiring a new cluster center and reducing the potential of the surrounding points is repeated until the potential of all points falls below a threshold or until the required number of clusters is reached.
5.2. PWARX System Identification Based on Chiu’s Classification Method
We now present the use of Chiu’s classification method for the identification of PWARX systems. Consider the local parameters obtained by applying the least squares method to the groupings formed by associating with each φ its (nρ-1) nearest neighbors, as described in the k-means procedure. These N local parameter vectors (θi, i=1,…,N), obtained by applying (6), are the input of the proposed classification technique. We compute a potential value for each parameter vector θi using the following expression:
(10)Pi=∑j=1Ne-(4/ra2)∥θi-θj∥2,
where ra is a positive constant.
The potential of each parameter vector is a function of its distances to all other parameter vectors. Thus, a parameter vector with many neighboring vectors has a high potential value. The constant ra is the radius defining the neighborhood, which can be determined by the following expression:
(11)ra=(α/N)∑i=1N(1/nρ)∑j=1nρ∥θi-θj∥,
where α can be chosen such that 0<α<1.
Since some of the parameter vectors (θi, i=1,…,N) are obtained from mixed local sets, it is in our interest to eliminate them. Equation (10) can be exploited to eliminate these misclassified parameter vectors: as it assigns a low potential to outliers, we can fix a threshold γ below which parameter vectors are rejected and removed from the data set. This threshold is described by the following equation:
(12)γ=min(P)+β(max(P)-min(P)),
where 0<β<1.
After this treatment, the set of parameter vectors is filtered and reduced to (θi, i=1,…,N′), with N′<N. From this new data set, we select the parameter vector with the highest potential value as the first cluster center. Let θ1* be this first center and P1* its potential. The potentials are then updated by the formula
(13)Pi⟸Pi-P1*e-(4/rb2)∥θi-θ1*∥2.
The parameter vectors near the first cluster center then have a reduced potential and are therefore unlikely to be selected as the next center. The parameter rb is a positive constant that must be chosen larger than ra in order to avoid closely spaced cluster centers. The constant rb is computed using the formula
(14)rb=(α/N)∑i=1Nmaxj=1,…,nρ∥θi-θj∥.
In general, after obtaining the kth cluster center, the potential of every parameter vector is updated by the following formula:
(15)Pi⟸Pi-Pk*e-(4/rb2)∥θi-θk*∥2,
where Pk* and θk* are, respectively, the potential and the parameter vector of the kth cluster center.
This procedure is repeated until s cluster centers are obtained. These s centers are the sought parameter vectors.
Now, after obtaining the s centers, it remains to determine the elements belonging to each cluster. We therefore compute, for each submodel, the error between the estimated output and the real one and classify φ(k) into the cluster for which this error is minimal:
(16)argmini=1,…,s|θiTφ-(k)-y(k)|.
5.3. Algorithm
See Algorithm 1.
Algorithm 1
Data: Dispose of {θi}i=1N from a given data set (φi,yi)
Main steps:
(i) Compute Pi for every {θi}i=1N according to (10)
(ii) Determine the filtered data points {θi}i=1N′ (N′<N) using the threshold (12)
(iii) Compute the first cluster center θ1* from (10)
repeat
Compute the other cluster centers according to the updated potential formula (15)
until obtaining s clusters;
Result: Determination of the parameters {θi}i=1s
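Algorithm 1 can be sketched as follows (an illustrative Python sketch; the function name and the default value of β are our own assumptions):

```python
import numpy as np

def chiu_centers(theta, s, ra, rb, beta=0.15):
    """Subtractive (Chiu) clustering of the local parameter vectors:
    potentials per eq. (10), outlier threshold per eq. (12), and
    iterative potential revision per eq. (15)."""
    # pairwise squared distances between all parameter vectors
    D2 = np.sum((theta[:, None, :] - theta[None, :, :]) ** 2, axis=-1)
    P = np.exp(-(4.0 / ra**2) * D2).sum(axis=1)       # potentials, eq. (10)
    gamma = P.min() + beta * (P.max() - P.min())      # threshold, eq. (12)
    keep = P > gamma                                  # drop low-potential (outlier) vectors
    theta, P = theta[keep], P[keep]
    centers = []
    for _ in range(s):
        k = int(np.argmax(P))                         # highest remaining potential
        centers.append(theta[k].copy())
        # revise potentials around the new center, eq. (15)
        d2 = np.sum((theta - theta[k]) ** 2, axis=1)
        P = P - P[k] * np.exp(-(4.0 / rb**2) * d2)
    return np.array(centers)
```

On data scattered around the three true vectors of (9), the routine returns one center near each grouping, with no initialization required.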
5.4. Properties
The new clustering technique has several interesting properties which can be summarized as follows.
This method does not require any initialization of centers. Therefore, the problem of convergence towards local minima is overcome.
This method removes the misclassified parameter vectors θi from the data set and repeats the overall identification procedure on the reduced set of data points. The outliers can be removed thanks to (10), which associates a low potential with these parameter vectors.
The choice of the parameter nρ is more flexible; in fact, the performance can be improved with a high value of nρ.
6. PWARX Identification Using a Self-Adapting Artificial Kohonen Neural Network
6.1. Principle
The Kohonen neural network is an interesting and effective tool for data classification [20].
The self-organizing Kohonen map is an oriented artificial neural network, consisting of two layers. In the input layer, the neurons correspond to the variables describing the observations. The output layer is, generally, organized as a grid (map) of neurons with two dimensions. Each neuron represents a group of similar observations.
The Kohonen network is a technique for automatic classification (clustering, unsupervised learning). The objective is to produce groups such that members of the same cluster are similar and members of different clusters are dissimilar.
The neural network used in the proposed method is formed by one input layer of p neurons and one output layer of n neurons. The architecture of this network is given in Figure 2 [29]. Each neuron of the Kohonen map receives p signals coming from the input layer. The weight wpn corresponds to the connection between input neuron p and output neuron n. The weight vector Wi associated with neuron i is thus composed of p elements.
The architecture for the generation of different observation vectors for modeling.
The Kohonen map computes the Euclidean distance between an input Y and each weight vector W.
Kohonen learning uses a neighborhood function η, whose value η(i,k) represents the strength of the coupling between neuron i and neuron k during the training process: η(i,k)=1 for all neurons i in the neighborhood of the winning neuron k, and η(i,k)=0 for all other neurons.
6.2. Algorithm
The learning algorithm for Kohonen networks is shown in Algorithm 2.
Algorithm 2
Data: Dispose of an input vector Y
Main steps:
(1) The n-dimensional weight vectors W1,W2,…,Wn are
selected at random.
(2) Each map neuron computes the distance between its weight vector Wi and the input vector Y.
(3) The competition between neurons is based on the winner-takes-all strategy: the neuron whose weight vector Wi is nearest to the input Y wins the competition. The winning neuron's output is set to η(i,k)=1 and the outputs of the other neurons are set to zero, η(i,k)=0
(4) The weight vectors are updated using the neighborhood function and the update rule:
Wi⟵Wi+α(Y-Wi)η(i,k), (*)
where i=1,…,n and α is a constant verifying 0<α<1
(5) Stop if the maximum number of iterations has been reached, otherwise continue with step 2.
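With the winner-takes-all neighborhood (η(i,k)=1 for the winning neuron only), Algorithm 2 reduces to the following competitive-learning sketch (illustrative Python; the function name and epoch count are our own assumptions):

```python
import numpy as np

def kohonen_cluster(Y, n, alpha=0.1, n_epochs=50, seed=0):
    """Learn n weight vectors W_i from the input vectors Y by Algorithm 2
    with a winner-takes-all neighborhood: only the winning neuron is
    moved toward each presented input."""
    rng = np.random.default_rng(seed)
    # step 1: weights initialised at random (here: from random data points)
    W = Y[rng.choice(len(Y), size=n, replace=False)].astype(float).copy()
    for _ in range(n_epochs):
        for y in Y:
            # steps 2-3: Euclidean distances, winner-takes-all competition
            i = int(np.argmin(np.linalg.norm(W - y, axis=1)))
            # step 4: update rule W_i <- W_i + alpha (y - W_i)
            W[i] += alpha * (y - W[i])
    return W
```

On two well-separated groups of inputs, the two weight vectors converge to the group centers, which is exactly the behavior exploited in Section 6.3.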
6.3. PWARX System Identification Based on the Kohonen Neural Network Method
Our purpose is to exploit the Kohonen self-organizing map to identify PWARX systems. Consider a collection of N local parameter vectors (θk, k=1,…,N) obtained by (6) of the k-means-based procedure. As a treatment to eliminate the outliers, we propose to apply (10) of Chiu’s clustering technique. After obtaining the filtered set of data points θk, k=1,…,N′ (N′<N), the data classification step is done using the Kohonen neural network algorithm with input vectors Yk=θk.
The output layer is then formed by s neurons (s is the submodels number), and Wi are the clusters’ centers.
After obtaining the s cluster centers Wi, we have to determine the elements belonging to each of the original clusters partitioning the regressor space. To perform this task, we compute, for each submodel, the error between the estimated output and the real one and classify φ(k) into the corresponding cluster according to the following formula:
(17)argmini=1,…,s|WiTφ-(k)-y(k)|.
7. Simulation Results
We now present two simulation examples to illustrate the performance of the proposed approaches.
7.1. Quality Measures
The objective of the simulations is to compare the performance of the proposed methods with that of the modified k-means approach. The following quality measures are used to study the performance of each method [30].
(i) The maximum relative error of the parameter vectors is defined by
(18)Δθ=maxi=1,…,s∥θi-θ-i∥2/∥θ-i∥2,
where θ-i and θi are, respectively, the true and the estimated parameter vectors for submodel i. The identified model is deemed acceptable if Δθ is small or close to zero.
(ii) The averaged sum of the squared residual errors is defined by
(19)σe2=(1/s)∑i=1s(SSRi/|Di|),
where SSRi=∑(y(k),φ(k))∈Di(y(k)-[φ(k)′1]θi)2 and |Di| is the cardinality of the cluster Di.
The identified model is considered acceptable if σe2 is small and/or close to the expected noise variance of the true system.
(iii) The percentage of the output variation that is explained by the model is defined by
(20)FIT=100·(1-∥y^-y∥2/∥y-y-∥2),
where y^ and y are, respectively, the estimated and the true output vectors and y- is the mean value of y.
The identified model is considered acceptable if FIT is close to 100.
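The three quality measures (18)-(20) can be computed as follows (an illustrative Python sketch with our own function names; σe2 is computed from the per-cluster residual lists):

```python
import numpy as np

def max_rel_param_error(theta_true, theta_est):
    """Eq. (18): worst-case relative error over the s parameter vectors."""
    return max(np.linalg.norm(np.asarray(tt) - np.asarray(te)) / np.linalg.norm(tt)
               for tt, te in zip(theta_true, theta_est))

def avg_squared_residual(residuals_per_cluster):
    """Eq. (19): mean over the clusters of SSR_i / |D_i|."""
    return np.mean([np.sum(np.asarray(r) ** 2) / len(r)
                    for r in residuals_per_cluster])

def fit_percentage(y_hat, y):
    """Eq. (20): percentage of the output variation explained by the model."""
    return 100.0 * (1.0 - np.linalg.norm(y_hat - y) / np.linalg.norm(y - np.mean(y)))
```

A perfect model gives Δθ = 0, σe2 equal to the noise variance, and FIT = 100.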
7.2. Example 1
Consider the following PWARX system [17]:
(21)y(k)= [-1 0][u(k-1) 1]′+e(k) if φ(k)∈[-4,0[,
[1 0][u(k-1) 1]′+e(k) if φ(k)∈[0,2[,
[3 -2][u(k-1) 1]′+e(k) if φ(k)∈[2,4],
where s=3, na=0, nb=1, H=[-4,4], the input u(k)∈R is generated randomly according to the uniform distribution on H, e(k) is a white Gaussian noise of variance σ2=0.05, and φ(k)=[u(k-1)] is the regressor vector.
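For reference, the data-generating system (21) can be simulated as follows (an illustrative Python sketch; the function name and the convention of passing the regressor sequence u(k-1) directly are our own):

```python
import numpy as np

def simulate_example1(phi, noise_std=np.sqrt(0.05), seed=0):
    """Simulate the PWARX system (21): a PWA map of phi(k) = u(k-1) with
    three regions [-4,0[, [0,2[, [2,4] and parameters theta_1..theta_3."""
    rng = np.random.default_rng(seed)
    thetas = {0: np.array([-1.0, 0.0]),   # phi in [-4, 0[
              1: np.array([1.0, 0.0]),    # phi in [0, 2[
              2: np.array([3.0, -2.0])}   # phi in [2, 4]
    y = np.empty(len(phi))
    for k, p in enumerate(phi):
        mode = 0 if p < 0 else (1 if p < 2 else 2)
        y[k] = thetas[mode] @ np.array([p, 1.0]) + noise_std * rng.standard_normal()
    return y
```

Setting noise_std=0 recovers the noiseless PWA map, which is useful for checking the region boundaries.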
We evaluate the performances of the proposed algorithms (Chiu-based algorithm and Kohonen-based algorithm) and the k-means algorithm by using the same identification data.
Figure 3 presents the input and the real output of the system.
Input and real outputs.
The parameter nρ defining the cardinality of the local data sets is chosen as follows: k-means algorithm (nρ=6), Chiu-based algorithm (nρ=15), and Kohonen-based algorithm (nρ=20). With nρ chosen appropriately, each algorithm generates a sequence of local parameters. These local parameters are then classified into three sets, and the center of each set is also determined, as shown in Figure 4. The centers of the sets are depicted by the star symbols.
Local parameters separated into three sets.
Based on the results presented in Figure 4, we observe that the outliers are removed by the proposed methods, whereas the modified k-means method retains them.
After obtaining the estimated parameter vectors, we apply the SVM algorithm in order to estimate the regions. We can then attribute each parameter vector to the corresponding region where it is defined.
The estimated output obtained with three algorithms is presented in Figure 5, and the estimated parameter vectors are illustrated in Table 1.
Table 1: Estimated parameters (Example 1).

      True values   k-means    Chiu       Kohonen
θ1    −1            −0.9971    −0.9830    −0.9785
       0             0.0014     0.0115     0.0135
θ2     1             0.9134     1.0221     1.0072
       0             0.1034     0.0178     0.0037
θ3     3             3.0092     3.0047     2.9666
      −2            −2.0258    −2.0121    −1.9031
H1    −4            −4.0000    −4.0000    −4.0000
       0            −0.3926     0.0164    −0.0346
H2     0            −0.2449    −0.0164     0.0346
       2             1.9852     1.7262     1.6834
H3     2             2.0101     1.5946     1.6678
       4             4.0000     4.0000     4.0000
Estimated output with three algorithms.
Table 2 presents the quality measures (18), (19), and (20) for the two proposed methods and the k-means approach.
Table 2: Validation results (Example 1).

      k-means   Chiu      Kohonen
Δθ    0.0361    0.0284    0.0284
σe2   0.0054    0.0024    0.0028
FIT   97.4086   98.1155   97.9654
Based on the results presented in Tables 1 and 2 and in Figure 5, we observe that the proposed methods give better performance than the k-means method. The reason is that the proposed methods reduce the influence of outliers and do not require any arbitrary initialization.
7.3. Example 2
Consider the following PWARX model [31]:
(22)y(k)= [0.4 0.5 0.3]φ-(k)+e(k) if φ(k)∈H1,
[-0.7 0.6 -0.5]φ-(k)+e(k) if φ(k)∈H2,
[0.4 -0.2 -0.2]φ-(k)+e(k) if φ(k)∈H3,
where s=3, na=1, nb=1, and φ(k)=[y(k-1) u(k-1)]T is the regressor vector.
The regions are given by
(23)H1={φ∈R2: [1 0.3 0]φ-≥0, [0 0.5 0]φ->0},
H2={φ∈R2: [1 0.3 0]φ-≤0, [1 -0.3 0]φ-<0},
H3={φ∈R2: [1 -0.3 0]φ-≥0, [0 0.5 0]φ-≤0}.
The input signal u(k) and the noise signal e(k) are random sequences drawn from normal distributions with variances 0.5 and 0.05, respectively.
For the k-means algorithm, the parameters of the affine submodels are estimated by minimizing a criterion function. The optimization algorithm therefore has the drawback of getting trapped in local minima, and poor results can be obtained. In addition, all submodels must have a balanced excitation, which is not always guaranteed. Thus, we cannot apply the Monte Carlo simulation to the k-means algorithm. Only the algorithm based on Chiu’s clustering technique and the Kohonen neural network-based algorithm are considered in this example.
We carry out on this model a Monte Carlo simulation of size 100 with different noise realizations and different input excitations. The number of data points generated in each simulation is N=250.
We follow the same procedures described above. The estimated parameter vectors are illustrated in Table 3.
Table 3: Estimated parameters with the two methods.

      True values        Chiu (nρ=17)               Kohonen (nρ=17)
θ1    [0.4 0.5 0.3]      [0.4046 0.5138 0.2919]     [0.3810 0.5154 0.2699]
θ2    [−0.7 0.6 −0.5]    [−0.6179 0.5336 −0.4740]   [−0.6409 0.5464 −0.4734]
θ3    [0.4 −0.2 −0.2]    [0.4015 −0.2071 −0.2042]   [0.4244 −0.2028 −0.2134]
The quality measures are computed as the mean of the 100 measures obtained over the simulations. They are presented in Table 4.
Table 4: Validation results (Example 2).

      Chiu      Kohonen
Δθ    0.2001    0.1667
σe2   0.0054    0.0058
FIT   82.0520   78.4514
8. Conclusion
In this paper, we have considered only the clustering-based procedures for the identification of PWARX systems. We focused on the most challenging step which is the task of classification of data points.
The clustering-based procedures require that the model orders na and nb and the number of submodels s be fixed a priori. The parameter nρ, defining the cardinality of the local data sets, is the main tuning knob.
The clustering method based on the k-means algorithm, treated in this paper, performs poorly when the number of mixed local data sets is high. This number, which depends on the chosen parameter nρ, governs the presence of outliers; in addition, poor initializations lead the algorithm to converge to local minima.
To overcome these problems, we have proposed two classification techniques: the first is Chiu’s clustering technique and the second is based on the Kohonen neural network. Both avoid random initializations and mitigate the effect of outliers, leading to more reliable classification.
The main remaining difficulty is the classification of data points that are consistent with more than one submodel, namely data points lying in the proximity of the intersection of two or more submodels. Wrong attribution of these data points may lead to misclassifications when estimating the polyhedral regions.
Finally, the choice of persistently exciting input signals for identification (i.e., allowing for the correct identification of all the affine dynamics) is another important topic to be addressed. Moreover, when dealing with discontinuous PWARX models, the choice of the input signal should be such that not only all the affine dynamics are sufficiently excited but also accurate shaping of the boundaries of the regions is possible.
Acknowledgment
This work was supported by the Ministry of the Higher Education and Scientific Research in Tunisia.
References

[1] X. Feng, K. A. Loparo, Y. Ji, and H. J. Chizeck, “Stochastic stability properties of jump linear systems.”
[2] A. Doucet, N. J. Gordon, and V. Krishnamurthy, “Particle filters for state estimation of jump Markov linear systems.”
[3] A. Bemporad and M. Morari, “Control of systems integrating logic, dynamics, and constraints.”
[4] A. Bemporad, G. Ferrari-Trecate, and M. Morari, “Observability and controllability of piecewise affine and hybrid systems.”
[5] B. De Schutter and T. Van den Boom, “On model predictive control for max-min-plus-scaling discrete event systems,” Tech. Rep. 00-04, Control Systems Engineering, Faculty of Information Technology and Systems, Delft University of Technology, The Netherlands, 2000.
[6] A. J. van der Schaft and J. M. Schumacher, “Complementarity modeling of hybrid systems.”
[7] B. De Schutter, “Optimal control of a class of linear hybrid systems with saturation,” in Proceedings of the 38th IEEE Conference on Decision and Control (CDC ’99), December 1999, pp. 3978–3983.
[8] G. Ferrari-Trecate, M. Muselli, D. Liberati, and M. Morari, “A clustering technique for the identification of piecewise affine systems.”
[9] L. Bako, “Contribution à l’identification de systèmes dynamiques hybrides,” 2008.
[10] J.-N. Lin and R. Unbehauen, “Canonical piecewise-linear approximations.”
[11] W. P. M. H. Heemels, B. De Schutter, and A. Bemporad, “On the equivalence of classes of hybrid dynamical models,” in Proceedings of the 40th IEEE Conference on Decision and Control (CDC ’01), vol. 1, December 2001, pp. 364–369.
[12] Y. Tian, T. Floquet, L. Belkoura, and W. Perruquetti, “Algebraic switching time identification for a class of linear hybrid systems.”
[13] A. L. Juloski, S. Weiland, and W. P. M. H. Heemels, “A Bayesian approach to identification of hybrid systems.”
[14] A. Bemporad, A. Garulli, S. Paoletti, and A. Vicino, “A bounded-error approach to piecewise affine system identification.”
[15] L. Bako, “Identification of switched linear systems via sparse optimization.”
[16] L. Bako and S. Lecoeuche, “A sparse optimization approach to state observer design for switched linear systems.”
[17] G. Ferrari-Trecate, M. Muselli, D. Liberati, and M. Morari, “Identification of piecewise affine and hybrid systems,” in Proceedings of the American Control Conference, vol. 5, June 2001, pp. 3521–3526.
[18] H. Nakada, K. Takaba, and T. Katayama, “Identification of piecewise affine systems based on statistical clustering technique.”
[19] S. Chiu, “Fuzzy model identification based on cluster estimation.”
[20] J. Kwiatkowski, M. Pawlik, U. Markowska-Kaczmar, and D. Konieczny, “Performance evaluation of different Kohonen network parallelization techniques,” in Proceedings of the International Symposium on Parallel Computing in Electrical Engineering (PARELEC ’06), September 2006, pp. 331–336.
[21] G. Ferrari-Trecate, M. Muselli, D. Liberati, and M. Morari, “A clustering technique for the identification of piecewise affine systems.”
[22] C. Williams.
[23] R. O. Duda, P. E. Hart, and D. G. Stork.
[24] C.-W. Hsu and C.-J. Lin, “A comparison of methods for multiclass support vector machines.”
[25] J. Weston and C. Watkins, “Support vector machines for multi-class pattern recognition,” in Proceedings of the 7th European Symposium on Artificial Neural Networks, 1999, pp. 219–224.
[26] M. Petreczky and L. Bako, “On the notion of persistence of excitation for linear switched systems,” in Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC ’11), 2011, pp. 1840–1847.
[27] S. Chiu, “An efficient method for extracting fuzzy classification rules from high dimensional data.”
[28] S. Chiu, “Extracting fuzzy rules from data for function approximation and pattern classification,” in a volume edited by D. Dubois et al.
[29] S. Talmoudi, K. Abderrahim, R. B. Abdennour, and M. Ksouri, “Multimodel approach using neural networks for complex systems modeling and identification.”
[30] A. Lj. Juloski, S. Paoletti, and J. Roll, “Recent techniques for the identification of piecewise affine and hybrid systems.”
[31] K. Boukharouba.