Extension of the Multi-TP Model Transformation to Functions with Different Numbers of Variables



Introduction
The appearance of the Singular Value Decomposition (SVD) was one of the largest breakthroughs in matrix algebra [1]. Its applicability was extended to tensors in the form of the Higher-Order SVD (HOSVD) [2] around 2000. Recently, a further extension of the SVD and HOSVD concept, known as the Tensor Product (TP) model transformation, was proposed for functions in control theory [3]. A comprehensive overview is given in [4]. Various extensions of the TP model transformation, such as the bilinear, pseudo, multi-, and generalised TP model transformation, as well as the concept of the HOSVD canonical form of TS fuzzy or TP models, were proposed in [4-7], with a special focus on TS fuzzy models in [8]. The approximation power of the TP model transformation applied to TS fuzzy models is investigated in [9].
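As background, the HOSVD of a tensor can be sketched numerically by executing an SVD on each mode-n unfolding (a minimal NumPy sketch; the function names and the example tensor are our own illustration, not part of the original formulation):

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: dimension n becomes the row index of a matrix."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def mode_n_product(T, M, n):
    """Tensor-matrix product along mode n: contracts dimension n of T with the columns of M."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def hosvd(T):
    """HOSVD: one SVD per mode-n unfolding gives the singular matrices U_n."""
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    S = T
    for n, U in enumerate(Us):
        S = mode_n_product(S, U.T, n)   # core tensor: S = T x_1 U1^T x_2 U2^T ...
    return S, Us

T = np.random.rand(3, 4, 5)
S, Us = hosvd(T)

# Reconstruction check: T == S x_1 U1 x_2 U2 x_3 U3 (exact, no truncation here)
R = S
for n, U in enumerate(Us):
    R = mode_n_product(R, U, n)
assert np.allclose(R, T)
```

Discarding trailing columns of the `U_n` (i.e., small singular values) would give the truncated HOSVD used for complexity reduction later in the paper.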
The above-mentioned extensions and variations of the TP model transformation were primarily applied to fuzzy model complexity reduction [10, 11] and in the widely used TS fuzzy model based PDC (Parallel Distributed Compensation) control theories [12-14]. More generally, it has been applied in polytopic model, TP/TS fuzzy model, and LMI (Linear Matrix Inequality [15]) based control theories. The most important features of the TP model transformation are guaranteed by the key transformation step, whereby a numerically reconstructed HOSVD structure is determined. Key features of the transformation are as follows: (i) It is executable on models given by equations or by soft computing based representations, such as fuzzy rules, neural networks, or other black-box models. The only requirement is that the model must provide an output for each input (at least on a discrete scale; see Section 4, Step 1).
(ii) It will find the minimal complexity, namely, the minimal number of rules of the TS fuzzy model. If further complexity reduction is required, it provides one of the best trade-offs between the number of rules and the approximation error.
(iii) It works like a principal component analysis, in that it determines the order of the components/fuzzy rules according to their importance.
(iv) It is capable of deriving the antecedent fuzzy sets according to various constraints. For instance, it can be used to define different convex hulls, a capability which has recently been shown to play an important role in control theory. Based on the above, various theories and applications have emerged using the TP model transformation. Further computational improvements were proposed in [16, 17]. It has been proved in [5, 18-20] that LMI based control design theories are very sensitive to the convex hulls defined by the consequents (vertices) of TS fuzzy models. Thus, the convex hull manipulation capability of the TP model transformation is an important and necessary step in LMI based control design. Very effective convex hull manipulation methods were incorporated into the TP model transformation in [21-23]. Further useful control approaches and applications were published in the field of control theory [24-41]. Many powerful approaches have been published in the field of sliding mode control [29, 42, 43]. In physiological control, the usability of the TP model transformation has been demonstrated as well [44-49]. Various further theories and applications have also been studied in the literature.
One of the key advantages of the TP model transformation is that it is capable of finding the minimal complexity of all components of the system and guarantees the same antecedent system for all components. This is a very typical requirement in design or stability verification methodologies; that is, the model, controller, and observer need to have the same antecedent system and, hence, convex representation. Therefore, the simultaneous manipulation of the components with the multi-TP model transformation or the generalised TP model transformation (which combines all variants of the TP model transformation) yields further possibilities for control performance optimisation [18-20].
Despite the above advantages, a crucial limitation of the generalised TP model transformation is that it can only be applied to a set of systems which have the same number of inputs. For instance, consider four different systems given with different representations, as shown in Figure 1. S1 is a fuzzy logic model; S2 is a neural network; S3 is given by an equation; and S4 is a black-box model. All of these models have the same inputs but may have different sized output tensors. The multi-TP model transformation is capable of simultaneously transforming all systems to TP or TS fuzzy model form, such that the same antecedent sets are defined on the inputs. The generalised TP model transformation can also transform to predefined antecedent fuzzy sets.
A further generalisation proposed in this paper can be applied to systems such as those in the example given in Figure 2.
Here each system may be given by a different representation (as in the above case) but may also have a different number of inputs. The transformation can simultaneously convert all of the systems to TS fuzzy model form, such that the antecedent fuzzy sets will either be the same or assume a predefined structure. From all other perspectives, the proposed TP model transformation inherits all of the advantageous features of the previous TP-based approaches. Recently proposed SOS-type (sum-of-squares) TS fuzzy LPV models are also widely applied in fuzzy control theories [88, 89]. The further extension of the TP model transformation to such systems is highly welcome in future work. (xii) Grid: a rectangular grid over Ω is defined by the number of grid points G_n in each dimension. (xiv) Discretised function: F^{(Ω,G)} of f(x) denotes the sampling of f(x) over the pair (Ω, G). Thus, it is a tensor with the size of G_1 × G_2 × ⋯ × G_N, with entries given by evaluating f at the grid points. For further details, refer to [4, 5].

The Proposed TP Model Transformation
Assume that a set of functions f_m(x_m), m = 1, …, M, is given. The output tensor Y_m of each function f_m(x_m) may differ in the number of dimensions and in its size, as Y_m ∈ R^{O_{1,m} × O_{2,m} × ⋯ × O_{L_m,m}}, where L_m denotes the number of dimensions of the output and O_{l,m} denotes the number of elements in dimension l.
The goal of the TP model transformation is to transform ∀m: f_m(x_m) into TP function form under the following constraints given on the weighting functions.
(i) Unified Constraints for ∀m: f_m(x_m). All resulting TP functions will have the same weighting function system on each dimension n defined by the set V ⊆ N (obviously, only if the given function has that input dimension). Thus, (2) can be given as follows:
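The TP function form referred to above can be sketched in the standard TP notation of [4, 5] (a reconstruction; the exact index sets are assumptions where the extraction lost the formula):

```latex
% Sketch of the TP function form of each f_m
f_m(\mathbf{x}_m)
  = \mathcal{S}_m \mathop{\boxtimes}_{n} \mathbf{w}_n(x_n)
  = \mathcal{S}_m
    \times_1 \mathbf{w}_1(x_1)
    \times_2 \mathbf{w}_2(x_2)
    \cdots
    \times_{N_m} \mathbf{w}_{N_m}(x_{N_m})
```

Here $\mathcal{S}_m$ is the core tensor of vertices (consequents) and $\mathbf{w}_n(x_n)$ is the row vector of weighting (antecedent) functions of dimension $n$.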

The Computation of the Proposed TP Model Transformation
Step 1 (discretisation).
(i) Discretisation of all f_m(x_m) results in the tensors F^{(Ω_m, G_m)}. (ii) Discretise the predefined weighting functions over the corresponding dimensions of Ω. Remark 1. This step is executed in the same way as in the case of the original TP model transformation; see [4, 5, 8].
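Step 1 amounts to sampling each system over a rectangular grid. A minimal NumPy sketch (the example function, grid bounds, and grid sizes are assumptions for illustration only):

```python
import numpy as np

# Hypothetical two-input system f(x1, x2), standing in for one of the f_m
f = lambda x1, x2: 3.0 * x1 + x2

g1 = np.linspace(0.0, 1.0, 5)   # grid vector of dimension 1 (assumed)
g2 = np.linspace(0.0, 2.0, 7)   # grid vector of dimension 2 (assumed)

# Discretised tensor F with F[i, j] = f(g1[i], g2[j])
X1, X2 = np.meshgrid(g1, g2, indexing="ij")
F = f(X1, X2)
assert F.shape == (5, 7)
```

Each system is sampled only over the dimensions it actually has; systems with fewer inputs simply yield tensors of lower order.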
Step 2 (defining TP structures). Execute the following steps in each dimension n ∈ N: (i) Lay out the tensors F^{(Ω_m, G_m)} in dimension n if vector x_m has that dimension:

Complexity
(ii) If n ∈ B then create matrix T_n by attaching the n-mode layouts of all tensors F^{(Ω_m, G_m)} side by side. Execute SVD on T_n, together with the SN, NN, NO, or CNO transformation and a complexity trade-off by discarding singular values, in the same way as in the original TP model transformation; this yields the common discretised weighting matrix of dimension n. As a matter of fact, if nonzero singular values are discarded, then the result is only an approximation. (iii) If n ∈ D then execute SVD separately for each function and, according to the conditions, execute the SN, NN, NO, or CNO transformation and the complexity trade-off by discarding singular values in the same way as in the original TP model transformation. Again, if nonzero singular values are discarded, then the result is only an approximation. (iv) Finally, for the dimensions whose weighting functions are predefined, the remaining factor is obtained by multiplying with the pseudoinverse of the discretised predefined weighting matrix, where (⋅)^+ denotes the pseudoinverse.
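The SVD-based extraction and the pseudoinverse step above can be sketched as follows (a NumPy sketch under assumed shapes; the matrix sizes, tolerance, and random data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((8, 30))            # n-mode layout of the stacked tensors (assumed shape)

# Dimension whose weighting functions are to be derived: SVD plus rank truncation.
U, s, Vt = np.linalg.svd(T, full_matrices=False)
r = int(np.sum(s > 1e-10))         # discard only (numerically) zero singular values here
W = U[:, :r]                       # discretised weighting functions (columns)
H = s[:r, None] * Vt[:r]           # remaining factor, so that T == W @ H
assert np.allclose(W @ H, T)       # exact, since no nonzero singular value was discarded

# Dimension whose weighting functions are predefined: use the pseudoinverse.
W_pre = rng.random((8, 4))         # discretised predefined weighting functions (assumed)
H_pre = np.linalg.pinv(W_pre) @ T  # best least-squares factor for the given W_pre
```

Lowering `r` below the numerical rank implements the complexity trade-off: `W @ H` then only approximates `T`, with an error governed by the discarded singular values.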
Step 3 (reconstruction of the weighting functions). This step is the same as in the multi-TP model transformation [4, 5, 8].
Having the result of the above steps, we can recalculate the weighting functions at any point. We may compute the first two steps over a grid that is not too dense, but calculate the weighting functions over a very dense grid (as suggested in [5]), and then construct piecewise linear functions. As a result we have the unified weighting functions of the dimensions in V and the individual weighting functions of the dimensions in Z.
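The construction of piecewise linear weighting functions from their discretised values can be sketched as (a NumPy sketch; the grid and the two weighting functions are assumed for illustration):

```python
import numpy as np

g = np.linspace(0.0, 1.0, 6)         # discretisation grid of one dimension (assumed)
W = np.stack([1.0 - g, g], axis=1)   # two discretised weighting functions (assumed SN/NN type)

def w(x):
    """Evaluate all weighting functions at any point by piecewise linear interpolation."""
    return np.array([np.interp(x, g, W[:, j]) for j in range(W.shape[1])])

vals = w(0.25)
assert np.isclose(vals.sum(), 1.0)   # the SN (Ruspini) property survives linear interpolation
```

Because linear interpolation preserves both non-negativity and the sum-to-one property at every point, SN/NN type discretised weighting functions yield valid antecedent fuzzy sets.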
Then we have achieved the goal: we have the TP model form of all functions with the given constraints, or, if a complexity trade-off is executed (nonzero singular values are discarded), an approximation thereof, where the weighting functions of the unified dimensions n ∈ V are the same for all functions. Remark 2. The convex hull manipulation and the complexity trade-off are done in the second step; therefore, the approximation accuracy is controlled there by the discarded nonzero singular values. Discarded nonzero singular values lead to approximation error. If the given weighting function system is not sufficient (i.e., the number of weighting functions is less than the rank of that dimension), then we arrive at an approximation only. The use of the pseudoinverse guarantees, however, that it is the best approximation in the least-squares sense.
System 3. In order to have a systematic notation, we denote the input vector of System 3 accordingly. It is a neural network; see Figure 3, where φ(⋅) is the activation function of the neurons (let it be a very simple one in the present case: φ(z) = z) and w_{i,j} are the weights connecting the ith input neuron to the jth output neuron. Thus the output of the system is the weighted sum of its inputs. System 4. The input vector of System 4 is defined analogously.
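A single-layer network with identity activation, as described for System 3, can be sketched as follows (the weight values and sizes are our own illustration, not those of the paper):

```python
import numpy as np

# Weights w_{i,j} connecting 3 input neurons to 2 output neurons (illustrative values)
W = np.array([[1.0, 0.5, -2.0],
              [0.0, 3.0,  1.0]])

x = np.array([1.0, 2.0, 3.0])   # input vector of the system

# Identity activation phi(z) = z, so the layer reduces to a matrix-vector product
y = W @ x
```

With the identity activation the system is linear in its inputs, so a low-rank TP representation of its discretised tensor is expected.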

This system is given by explicit formulas. System 5. The input vector of System 5 is x_3, where x_3 ⊏ x and Ω_3 ⊏ Ω. This system is given by a fuzzy logic model. Assume that two rules are given (r = 1, 2) of the form IF antecedent THEN consequent. Further assume that the membership functions are in Ruspini partition and that the consequent sets are singleton sets located at elements 5 and 6 of the output universe. It is a TS fuzzy model and, therefore, the transfer function of the model is of product-sum-gravity type. System 6. The input vector of System 6 contains x_1 and x_3. This is a black-box model that can provide y_4 for any input x_1, x_3.
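The product-sum-gravity inference of System 5 can be sketched as follows (a minimal sketch; the antecedent shapes on [0, 1] are assumptions forming a Ruspini partition, while the singleton consequents at 5 and 6 follow the text):

```python
import numpy as np

def mu1(x):
    """First antecedent membership function (assumed shape on [0, 1])."""
    return 1.0 - x

def mu2(x):
    """Second antecedent membership function; mu1 + mu2 == 1 (Ruspini partition)."""
    return x

def infer(x):
    """Product-sum-gravity inference with singleton consequents at 5 and 6."""
    w = np.array([mu1(x), mu2(x)])   # rule firing strengths
    c = np.array([5.0, 6.0])         # singleton consequents from the text
    return w @ c / w.sum()           # w.sum() == 1 in a Ruspini partition

assert np.isclose(infer(0.0), 5.0)
assert np.isclose(infer(1.0), 6.0)
```

Since the firing strengths sum to one, the division by `w.sum()` is redundant here; it is kept to show the general centre-of-gravity form.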
(In order to follow all computational steps of the example, let us reveal the output of the black box: y_4 = 3x_1 + x_3.)

Conditions of the TP Model
Transformation. The goal of the example is to transform all four systems to TS fuzzy representations (or TP model form if the resulting weighting functions cannot be represented as antecedent fuzzy sets), with the following conditions: (i) All systems must have the same antecedent function system on the input interval of x_1. The antecedent functions must be in Ruspini partition, namely, of SN and NN type. In order to have a complexity-minimised representation, a further requirement is that the number of antecedent functions must be minimal.
(ii) The same antecedent function system of variable x_2 is predefined for all systems, where the annotation denotes "predefined." (iii) The only requirement for the weighting function system of the input x_3 of each system is that they must be the singular functions of the HOSVD canonical form (an orthonormal system ordered by the higher-order singular values). These functions are not representable as antecedent functions of fuzzy sets, since they may take negative values as well. Obviously, they will not be the same for all systems.

Execution of the Proposed TP Model Transformation.
It is worth emphasizing again that the previous methods for TP model representation cannot be applied in the present case, since the elements of the input vectors are different.
(ii) Let us discretise the systems over the rectangular grid defined by vectors g_n, n = 1, 2, 3. The discretisation of System m results in F^{(Ω_m, G_m)}, m ∈ M. In the case of System 3, the first three dimensions are assigned to the input variables and the last dimension is assigned to the output vector. The discretisation of System 4 yields a tensor whose first two dimensions are assigned to the input variables x_2, x_3 and whose last two dimensions are assigned to the output matrix. The discretisation of System 5 yields a vector. The discretisation of System 6 results in a tensor whose first two dimensions are assigned to the input variables x_1, x_3 and whose last dimension is assigned to the output vector.
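Using the revealed output of the black-box System 6 (y_4 = 3x_1 + x_3), its discretisation and numerical rank can be sketched as (the grid vectors and sizes are assumed for illustration):

```python
import numpy as np

# Black-box System 6 as revealed in the text: y4 = 3*x1 + x3
f = lambda x1, x3: 3.0 * x1 + x3

g1 = np.linspace(0.0, 1.0, 10)      # grid vector of x1 (assumed)
g3 = np.linspace(0.0, 1.0, 10)      # grid vector of x3 (assumed)
F = f(g1[:, None], g3[None, :])     # discretised tensor; F[i, j] = f(g1[i], g3[j])

# An SVD of the discretised tensor shows that two weighting functions
# per dimension suffice: the function is a sum of two rank-1 terms.
s = np.linalg.svd(F, compute_uv=False)
assert int(np.sum(s > 1e-10)) == 2
```

This illustrates feature (ii) of the transformation: the minimal number of antecedent functions is recovered numerically, without any knowledge of the internal form of the black box.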
Let us discretise the predefined weighting functions as well:
Then, having the discretised tensors and weighting functions of all systems, we can numerically reconstruct the weighting functions [4, 5] as

Conclusion
The proposed TP model transformation can be executed on a set of models whose input dimensionalities may differ. It has all the advantages of the previous variants, including easy convex hull manipulation, complexity trade-offs, the pseudo-TP model transformation, and automatic, numerical execution.

Notation. The following notations are used in the paper:
(i) Scalar: a is a scalar.
(ii) Vector: a contains elements a_i.
(iii) Matrix: A contains elements a_{i,j}.
(iv) Tensor: A contains elements a_{i,j,k,...}.
(v) Set: A := {a, b, c, ...}; for example, a ∈ A.
(vi) Index: the upper bounds of the indices are denoted by the corresponding uppercase letter, for example, I.
(vii) Index i ∈ I denotes that index i takes the elements of set I ⊆ {1, 2, ..., I} ⊂ N; I := {1, 2, ..., I} is understood by default.
(viii) Interval: I = [i_min, i_max].
(ix) Space: Ω := I_1 × I_2 × ⋯ × I_N is an N-dimensional hypercube.
(x) x ∈ Ω expresses the fact that vector x is within the space Ω; the dimensions of x and Ω are the same.
(xi) ⊏ denotes a dimensionality-reduced subset in general, as follows: (a) In the case of spaces: Θ ⊏ Ω states that Θ is a hypercube with the same sized intervals as Ω, but with a smaller number of dimensions. (b) In the case of vectors: a ⊏ b, where a ∈ Θ ⊂ R^M and b ∈ Ω ⊂ R^N, means that M < N and Θ ⊏ Ω. (c) In the case of tensors: A ⊏ B means, for instance, that A is obtained by deleting complete dimensions from tensor B.
(a) Weighting function systems w_n(x_n), n ∈ A ⊆ V, are predefined. (b) Weighting function systems w_n(x_n), n ∈ B ⊆ V, will be derived by the transformation; only their types are predefined (i.e., SN, NN, NO, CNO, RNO, INO, or IRNO). Further, the number of the weighting functions is minimised. (ii) Different Constraints for Each f_m(x_m). The resulting TP functions have different weighting functions on the dimensions Z ⊆ N: (a) Weighting function systems w_{m,n}(x_n), n ∈ C ⊆ Z, are predefined for each f_m(x_m). (b) The types (i.e., SN, NN, NO, CNO, RNO, INO, and IRNO) of the weighting function systems w_{m,n}(x_n) are predefined for dimensions n ∈ D ⊆ Z of each f_m(x_m).
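The two type conditions used most often above, SN and NN, can be checked on a discretised weighting function system as follows (a minimal NumPy sketch; the grid values are illustrative only):

```python
import numpy as np

# Discretised weighting functions: one column per function, one row per grid point
# (illustrative values satisfying both conditions)
W = np.array([[1.0, 0.0],
              [0.6, 0.4],
              [0.2, 0.8],
              [0.0, 1.0]])

# SN (sum normalisation): the weights sum to one at every grid point
assert np.allclose(W.sum(axis=1), 1.0)

# NN (non-negativeness): every weight is non-negative
assert np.all(W >= 0.0)
```

A weighting function system satisfying both SN and NN forms a Ruspini partition and is therefore directly interpretable as a set of antecedent fuzzy sets.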