Multisensor Fused Fault Diagnosis for Rotation Machinery Based on Supervised Second-Order Tensor Locality Preserving Projection and Weighted k-Nearest Neighbor Classifier under Assembled Matrix Distance Metric

In order to sufficiently capture the useful fault-related information available from the multiple vibration sensors used in rotation machinery, while avoiding the curse of dimensionality, a new fault diagnosis method for rotation machinery based on supervised second-order tensor locality preserving projection (SSTLPP) and a weighted k-nearest neighbor classifier (WKNNC) with an assembled matrix distance metric (AMDM) is presented. A second-order tensor representation of multisensor fused conditional features is employed to replace the prevailing vector description of features from a single sensor. Then, an SSTLPP algorithm under AMDM (SSTLPP-AMDM) is presented to realize dimension reduction of the original high-dimensional feature tensor. Compared with classical second-order tensor locality preserving projection (STLPP), the SSTLPP-AMDM algorithm not only considers both local neighbor information and class label information but also replaces the existing Frobenius distance measure with AMDM in the construction of the similarity weighting matrix. Finally, the obtained low-dimensional feature tensor is input into the WKNNC with AMDM to implement fault diagnosis of the rotation machinery. A fault diagnosis experiment performed on a gearbox demonstrates that second-order tensor formed multisensor fused fault data yields good results for multisensor fusion fault diagnosis and that the formulated fault diagnosis method effectively improves diagnostic accuracy.


Introduction
As one of the most common classes of mechanical equipment, rotation machinery occupies an important role in industrial applications such as manufacturing, metallurgy, energy, and transportation. Due to tough working environments and similar material and structural properties, rotation machinery can be subject to malfunctions or failures. These can significantly decrease machinery service performance, including manufacturing quality and operation safety, and cause machinery to break down, which may lead to serious catastrophes [1]. Accordingly, research into fault diagnosis of rotation machinery has attracted considerable attention from researchers in related domains in recent years. The vibration signals collected from velocity or acceleration sensors located on the machinery housing are generally regarded as the foundation of fault diagnostic procedures. However, most existing studies on fault diagnosis of rotation machinery have empirically or experimentally focused on analyzing single sensor signals [2-4], while the remaining studies have performed multisensor fused fault diagnosis through complex fusion algorithms such as blind source separation (BSS) [5] and D-S evidence theory. The single-sensor-based fault diagnosis methods in the former category generally lead to loss of valuable information available from multiple sensors, and the multisensor fused diagnosis methods in the latter category tend to cause a high computational load. To tackle these issues, this paper presents a second-order tensor representation of fault samples, with one dimension for fault features and one for sensor locations, which is used in an efficient multisensor fused fault diagnosis framework.
Large volumes of feature parameters generated by time-domain, frequency-domain, and time-frequency-domain analysis of vibration signals are commonly integrated into a high-dimensional data set to obtain accurate fault diagnostic results [6]. This high-dimensional feature set can provide more valuable information, but it also increases the computational load and may even trigger the curse of dimensionality. One approach to this problem is to apply dimension reduction technology. In contrast to classical linear dimensionality reduction methods such as principal component analysis (PCA) [7], linear discriminant analysis (LDA) [8], and multidimensional scaling (MDS) [9], a new technology for discovering the intrinsic low-dimensional structure of nonlinearly distributed data hidden in high-dimensional space has emerged, known as manifold learning, which has become a current research focus. Representative manifold learning methods include isometric mapping (ISOMAP) [10], locality linear embedding (LLE) [11], Laplacian eigenmaps (LE) [12], and local tangent space alignment (LTSA) [13]. The effectiveness of these basic manifold learning algorithms and their variants for fault diagnosis of rotation machinery has been validated by a large number of studies. For instance, Li et al. [14] proposed a fault diagnosis method using dimension reduction with linear local tangent space alignment (LLTSA). Ding et al. 
[15] developed a fusion feature extraction method based on locality preserving projection (LPP) for rolling element bearing fault classification. Additionally, an envelope manifold demodulation method was investigated for planetary gear fault detection in [16]. It should be observed that the input sample for these methods is generally represented by a vector in a high-dimensional feature space. These manifold learning algorithms are therefore not suitable when a multisensor fused faulty sample is represented as a second-order tensor, namely, a matrix. Furthermore, tensor representation based manifold learning methods have received little investigation for fault diagnosis. Fortunately, there are several second-order or higher-order tensor extensions of manifold algorithms, such as second-order tensor locality preserving projection (STLPP) [17], tensor neighborhood preserving embedding (TNPE) [18], a tensor version of discriminant locality linear embedding (DLLE/T) [19], and tensor PCA [20]. These algorithms have been progressively applied in the areas of two-dimensional or higher-dimensional image classification, computer vision, and pattern recognition and offer a feasible solution for tensor-represented fault diagnosis. Of the methods mentioned above, STLPP possesses the ability to discover the intrinsic local geometric and topological properties of a manifold embedded in a second-order tensor space, building on the inherited strengths of LPP. However, the STLPP algorithm has several limitations. Firstly, STLPP is an unsupervised dimension reduction method and thus does not consider discriminant information, which is useful for fault classification. Secondly, the similarity between second-order tensor formed samples in traditional STLPP is computed using the Frobenius distance measure [17], which is the same as the Euclidean distance of the vectorized version of the matrix formed samples, so it may still cause a loss of spatial 
locality information. To tackle these problems, this paper introduces the concept of supervision into the framework of traditional STLPP and employs an assembled matrix distance metric (AMDM), which has been successfully utilized in 2DPCA [21], in the construction of the similarity weighting matrix to obtain better matching between two second-order tensor formed faulty samples.
To further improve the accuracy and efficiency of fault diagnosis, intelligent classification methods are considered an indispensable component of the diagnostic procedure. These methods include artificial neural networks (ANN) [22], support vector machines (SVM) [23], and fuzzy-based systems [24], as well as Bayesian classifiers [25]. Compared with these methods, the k-nearest neighbor classifier (KNNC) ranks k neighbors of a testing sample among the training samples and uses the class labels of the similar neighbors to classify the input test sample by evaluating the similarity between samples in the feature space [26, 27]. The KNNC method has many benefits, including a lower calculation requirement, quicker speed, and higher pattern recognition accuracy [28]. Therefore, it is considered to be the simplest tool for faulty pattern recognition. Traditional KNNC, which classifies sample labels using unified weights, has some shortcomings, and thus the weighted k-nearest neighbor classifier (WKNNC) was developed, which assigns different weights to the nearest neighbors to represent the impact of each neighbor on each unknown sample. Therefore, this paper uses the WKNNC to establish the relationships between sample features and conditional classifications. Additionally, the AMDM mentioned above is also employed for the similarity evaluation of the low-dimensional second-order tensor formed samples after SSTLPP based dimension reduction in WKNNC.
The remainder of this paper is organized as follows. The proposed supervised second-order tensor locality preserving projection based on assembled matrix distance metric (SSTLPP-AMDM) algorithm is discussed in detail in Section 2. The weighted k-nearest neighbor classifier with an assembled matrix distance metric (WKNNC-AMDM) is described in Section 3. Section 4 provides the overall framework of the proposed multisensor fused fault diagnosis. In Section 5, a fault diagnosis experiment is performed on a gearbox to validate the proposed method. Finally, conclusions are given in Section 6.

Supervised Second-Order Tensor Locality Preserving Projection Based on Assembled Matrix Distance Metric (SSTLPP-AMDM)

For second-order tensor samples $X_i \in \mathbb{R}^{m \times n}$, STLPP seeks two transformation matrices $U$ and $V$. Using a series of mathematical derivations, the optimal values of $U$ and $V$ are obtained by iteratively computing the generalized eigenvectors of the following formulations:

$(D_V - W_V)\,u = \lambda D_V\,u, \qquad (D_U - W_U)\,v = \mu D_U\,v,$

where $D_U = \sum_i D_{ii}\, X_i^{T} U U^{T} X_i$, $W_U = \sum_{i,j} W_{ij}\, X_i^{T} U U^{T} X_j$, $D_V = \sum_i D_{ii}\, X_i V V^{T} X_i^{T}$, and $W_V = \sum_{i,j} W_{ij}\, X_i V V^{T} X_j^{T}$. Finally, the low-dimensional representations of the original data are obtained using $Y_i = U^{T} X_i V$.

Computation of a Supervised Similarity Weighting Matrix Based on AMDM

As described in the previous section, there are a certain number of limitations in the prevailing computation method for the similarity weighting matrix $W$ of the nearest neighbor graph $G$. For instance, the Frobenius distance metric (FDM) used for the similarity evaluation between different second-order tensor formed samples is essentially the Euclidean distance of the vectorized version of the matrix formed samples; it therefore neglects the spatial geometrical information of each element in matrix formed samples and has poor matching performance between different samples. Additionally, the class label information of the training samples is not effectively used in traditional STLPP, although this information can be helpful for subsequent accurate classification assignment. To address these issues, this paper formulates a novel supervised similarity matrix computation method that determines the similarity between matrix formed samples using an assembled matrix distance metric while taking the classification information into account.
Firstly, for any two arbitrary matrix formed samples $A = (a_{hl})_{m \times n}$ and $B = (b_{hl})_{m \times n}$, the distance between the two samples can be measured using the following assembled matrix distance metric (AMDM) [21]:

$d_{\mathrm{AMDM}}(A, B) = \Bigl( \sum_{l=1}^{n} \Bigl( \sum_{h=1}^{m} (a_{hl} - b_{hl})^2 \Bigr)^{p/2} \Bigr)^{1/p},$

where $p$ denotes a variable parameter which strongly affects the representation ability of the defined distance function for subsequent classification assignment. It is obvious that the Frobenius distance metric is a special case of the AMDM with $p = 2$, and the Yang distance metric proposed by Yang et al. in [29] is another special case with $p = 1$. It has also been theoretically and experimentally verified that an assembled matrix distance metric with a lower value of $p$, that is, $0 < p < 1$, outperforms the existing Frobenius and Yang distance measures in terms of final classification accuracy. Accordingly, the value of $p$ for the employed AMDM is set between 0 and 1, and its exact value is determined by repeated experiments. Secondly, by combining the class label information of the training samples with the AMDM based distances between samples, the proposed supervised similarity weighting matrix based on AMDM can be defined as

$W_{ij} = \exp\bigl(-\mathrm{dist}^2_{\mathrm{AMDM}}(X_i, X_j)/t\bigr)$ if $X_j$ is among the $k$ nearest neighbors of $X_i$ (or vice versa) and $c_i = c_j$; $W_{ij} = (1/\beta)\exp\bigl(-\mathrm{dist}^2_{\mathrm{AMDM}}(X_i, X_j)/t\bigr)$ if $X_j$ is among the $k$ nearest neighbors of $X_i$ (or vice versa) and $c_i \neq c_j$; $W_{ij} = 0$ otherwise,

where $W_{ij}$ denotes the element at row $i$ and column $j$ of the newly formulated supervised similarity matrix $W$, which represents the similarity degree of the matrix formed samples $X_i$ and $X_j$; $c_i$ and $c_j$ are the class labels of samples $X_i$ and $X_j$, respectively; $t$ is the heat kernel width; and $\beta$ is the penalty coefficient used to characterize the reduction in the similarity degree. When $X_j$ is one of the $k$ nearest neighbors of $X_i$ (or $X_i$ is one of the $k$ nearest neighbors of $X_j$) but the corresponding class labels are inconsistent, the similarity is penalized, and thus the value of $\beta$ should be set to $\beta > 1$.
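The two definitions above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the parameter values ($p$, $t$, $\beta$, $k$) are placeholders.

```python
import numpy as np

def amdm(A, B, p=0.5):
    """Assembled matrix distance between two matrix formed samples A and B.
    p = 2 recovers the Frobenius distance, p = 1 the Yang distance;
    0 < p < 1 is the range recommended in the text."""
    col_dists = np.sqrt(((A - B) ** 2).sum(axis=0))  # Euclidean norm of each column difference
    return (col_dists ** p).sum() ** (1.0 / p)

def supervised_weights(X, labels, k=3, t=1.0, beta=2.0, p=0.5):
    """Supervised similarity weighting matrix W based on AMDM:
    W_ij = exp(-d^2/t)            if i, j are k-nearest neighbors with equal labels,
    W_ij = (1/beta) * exp(-d^2/t) if k-nearest neighbors with different labels,
    W_ij = 0                      otherwise (beta > 1 is the penalty coefficient)."""
    n = len(X)
    D = np.array([[amdm(X[i], X[j], p) for j in range(n)] for i in range(n)])
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]  # k nearest neighbors, excluding sample i itself
        for j in nbrs:
            w = np.exp(-D[i, j] ** 2 / t)
            if labels[i] != labels[j]:
                w /= beta                 # penalize inconsistent class labels
            W[i, j] = W[j, i] = max(W[i, j], w)
    return W
```

The symmetrization `W[i, j] = W[j, i]` implements the "or vice versa" neighbor condition in the definition.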
The newly formulated similarity weighting matrix computation equation shown in (4) can be viewed as a combination and extension of the prevailing heat kernel function and the "0-1" binary mode, in which the former is intimately related to the manifold structure and the latter is regarded as the direct expression of the label information. The properties and corresponding advantages of the supervised similarity weighting matrix based on AMDM can be summarized as follows. (i) A more accurate representation of the matching relationship between matrix formed samples can be achieved using AMDM rather than the Frobenius distance metric used in traditional STLPP. (ii) The inclusion of the penalty parameter $\beta$ ($\beta > 1$) results in a larger difference between the intraclass weight $\exp(-\mathrm{dist}^2_{\mathrm{AMDM}}/t)$ and the interclass weight $(1/\beta)\exp(-\mathrm{dist}^2_{\mathrm{AMDM}}/t)$ as the assembled matrix distance increases, which allows the interclass and intraclass similarity to be easily distinguished.

SSTLPP-AMDM Algorithm.
This paper proposes a novel supervised second-order tensor locality preserving projection algorithm with the assembled matrix distance metric (SSTLPP-AMDM) that incorporates improvements in both the matrix distance computation of samples in the projection space and the similarity weighting matrix computation. In contrast to traditional STLPP, the two transformation matrices $U$ and $V$, which capture both the neighborhood graph structure and the class label information, are obtained by solving the following objective function:

$\min_{U, V} \sum_{i,j} d^2_{\mathrm{AMDM}}\bigl(U^{T} X_i V,\; U^{T} X_j V\bigr)\, W_{ij}.$

The distance between two mapped sample points $Y_i = U^{T} X_i V$ and $Y_j = U^{T} X_j V$ in the embedded tensor space is measured using the assembled matrix distance metric to achieve a better matching result. The element $W_{ij}$ of the supervised similarity weighting matrix, computed by (4), is employed to represent the neighboring degree of samples $X_i$ and $X_j$ and considers both the local structure and the class information. The diagonal matrix with entries $D_{ii} = \sum_j W_{ij}$ characterizes the degree of importance of the mapped sample point $Y_i$ in the embedded tensor space for representing the original sample point $X_i$. The optimal transformation matrices $U$ and $V$ are solved in a similar way to traditional STLPP by applying an iterative scheme. The specific implementation process can be described as follows. Firstly, an initial matrix $V$ is set as an identity matrix and the first iterative solution of $U$ is obtained by solving the generalized eigenvector problem shown in (6). Secondly, $V$ is updated by solving the generalized eigenvector problem shown in (7). By iteratively computing the generalized eigenvectors of (6) and (7) for a predefined number of repetitions, the optimal transformation matrices $U$ and $V$ are obtained. Finally, the second-order low-dimensional projection $Y_i$ of the original second-order high-dimensional sample $X_i$ is obtained.
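The alternating eigen-solution described above can be sketched as follows. This is a simplified illustration, not the authors' code: it uses the standard Frobenius form of the objective inside the two generalized eigenproblems (the AMDM influences the result through the supervised weight matrix W), and the small regularization term added to the right-hand-side matrices is an assumption for numerical stability.

```python
import numpy as np
from scipy.linalg import eigh

def sstlpp(X, W, m_out, n_out, n_iter=5):
    """Alternately solve for U (m x m_out) and V (n x n_out) given matrix
    samples X[i] (m x n) and a symmetric similarity weighting matrix W."""
    n = len(X)
    m, ncols = X[0].shape
    Dd = W.sum(axis=1)                    # diagonal degree entries D_ii
    V = np.eye(ncols)[:, :n_out]          # initialize V from the identity matrix
    for _ in range(n_iter):
        # Fix V, solve (D_V - W_V) u = lambda * D_V u for U.
        DV = sum(Dd[i] * X[i] @ V @ V.T @ X[i].T for i in range(n))
        WV = sum(W[i, j] * X[i] @ V @ V.T @ X[j].T
                 for i in range(n) for j in range(n))
        _, vecs = eigh(DV - WV, DV + 1e-8 * np.eye(m))
        U = vecs[:, :m_out]               # eigenvectors of the smallest eigenvalues
        # Fix U, solve (D_U - W_U) v = mu * D_U v for V.
        DU = sum(Dd[i] * X[i].T @ U @ U.T @ X[i] for i in range(n))
        WU = sum(W[i, j] * X[i].T @ U @ U.T @ X[j]
                 for i in range(n) for j in range(n))
        _, vecs = eigh(DU - WU, DU + 1e-8 * np.eye(ncols))
        V = vecs[:, :n_out]
    Y = [U.T @ Xi @ V for Xi in X]        # low-dimensional tensor projections
    return U, V, Y
```

Picking the eigenvectors associated with the smallest generalized eigenvalues minimizes the weighted sum of embedded-space distances, mirroring the objective above.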
In summary, there are two main advantages to the newly proposed SSTLPP-AMDM. (1) The local structure information and the class information act cooperatively in the computation of the similarity weighting matrix, and thus the supervised similarity weighting matrix proposed in this paper outperforms other prevailing similarity weighting matrix computation methods in terms of representing the similarity degree between samples. (2) The application of AMDM to measure the distance between sample points in the original second-order tensor space as well as between the mapped sample points in the embedded second-order tensor space ensures better matching performance than the existing Frobenius distance measure. Therefore, the SSTLPP-AMDM algorithm has superior classification and dimension reduction characteristics compared with traditional STLPP.

Weighted 𝑘-Nearest Neighbor Classifier with Assembled Matrix Distance Metric (WKNNC-AMDM)
As stated above, the KNNC method proposed by Cover and Hart in 1967 [28] is regarded by many as the simplest pattern classification algorithm. Due to its advantages of a lower calculation requirement, quicker speed, and higher identification accuracy, KNNC has been widely applied to various types of pattern recognition problems, especially fault diagnosis. The main KNNC concept is described in the following two steps.
Step 1. For a given unknown labeled sample $x_t$, the $k$ most similar samples $\{x_1, x_2, \ldots, x_k\}$ in the training sample set are searched to construct a neighbor set $N$.
Step 2. A maximum voting rule is applied to all samples in $N$ to obtain the class that $x_t$ belongs to.
The above description shows that there are two focal points of KNNC: a similarity measurement method between samples and the establishment of a decision rule. For the first, many similarity measures have been suggested in previous publications, such as the Euclidean distance, the Manhattan distance, and the cosine angle. However, these vector-based metric indexes are unsuitable for similarity measurement of the matrix formed data points appearing in this paper. Thus, the AMDM is introduced for the similarity computation of samples in KNNC. It is known that AMDM outperforms the common FDM in terms of representing the similarity between matrix formed samples for classification. Additionally, since the selection of neighbors is greatly impacted by the sparsity of the sample distribution, this paper employs a density-weighted assembled matrix distance to efficiently measure the similarity between $x_t$ and each of its neighbors $x_i$ $(i = 1, 2, \ldots, k)$. Unlike the classical KNNC voting strategy that uses unified weights for neighbors, a weighted voting strategy is used here to form the weighted k-nearest neighbor classifier (WKNNC), which assigns a different weight to each sample $x_i$ in $N$, reflecting the influence each neighbor has on the unknown sample $x_t$. A new neighbor set $N = (x_1, x_2, \ldots, x_k)$ is reconstructed in ascending order of distance, that is, $\mathrm{dist}(x_t, x_1) \leq \cdots \leq \mathrm{dist}(x_t, x_k)$, and the voting weight $w_i$ of sample $x_i$ is computed as

$w_i = \dfrac{\mathrm{dist}(x_t, x_k) - \mathrm{dist}(x_t, x_i)}{\mathrm{dist}(x_t, x_k) - \mathrm{dist}(x_t, x_1)}$ if $\mathrm{dist}(x_t, x_k) \neq \mathrm{dist}(x_t, x_1)$, and $w_i = 1$ otherwise.

Consequently, the class label of the unknown labeled sample $x_t$ can be determined as

$c(x_t) = \arg\max_{c} \sum_{i=1}^{k} w_i\, \delta(c = c_i),$

where $c_i$ denotes the class label of $x_i$ in $N$ and $\delta(c = c_i)$ is the Kronecker delta function, equal to 1 when $c = c_i$ and 0 otherwise. Additionally, the selection of $k$ requires attention in the WKNNC algorithm. In this paper the value of $k$ is set to $k = 2 \times C + 1$, since classification precision is only assured when the number of samples in $N$ is at least $C + 1$, where $C$ is the number of classes in the training set [30].
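A compact sketch of the classification step follows. It is illustrative rather than a reproduction of the paper's exact scheme: the inverse-rank voting weight shown is the common distance-based WKNN weight, and the density weighting of the distance is omitted, since its exact formula is not recoverable from the text.

```python
import numpy as np
from collections import defaultdict

def amdm(A, B, p=0.5):
    """Assembled matrix distance between two matrix formed samples."""
    col_dists = np.sqrt(((A - B) ** 2).sum(axis=0))
    return (col_dists ** p).sum() ** (1.0 / p)

def wknnc_amdm(X_train, y_train, x_test, k=5, p=0.5):
    """Weighted k-NN classification of one matrix formed test sample under
    the assembled matrix distance metric, with distance-based voting weights."""
    d = np.array([amdm(Xi, x_test, p) for Xi in X_train])
    order = np.argsort(d)[:k]                 # k nearest neighbors, ascending distance
    d_near, d_far = d[order[0]], d[order[-1]]
    votes = defaultdict(float)
    for idx in order:
        if d_far > d_near:                    # weight shrinks as distance grows
            w = (d_far - d[idx]) / (d_far - d_near)
        else:                                 # all k distances equal: unit weights
            w = 1.0
        votes[y_train[idx]] += w
    return max(votes, key=votes.get)          # class with the largest weighted vote
```

With unit weights for every neighbor this reduces to the classical KNNC voting rule, which is exactly the shortcoming the weighted strategy addresses.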

Overall Framework of the Proposed Fault Diagnostic Method
Based on the preparations above, this paper proposes a novel multisensor fused fault diagnosis method based on SSTLPP-AMDM and WKNNC-AMDM for rotation machinery. The flow chart of the proposed method is shown in Figure 1.
There are three main steps to the diagnostic procedure, which will be discussed in detail in this section. Firstly, through prevalent multidomain signal analysis and truncated sampling, a multisensor fused faulty sample set with an $m \times n \times l$-dimensional third-order tensor representation is constructed and then decomposed into an $m \times n \times l_1$-dimensional training sample set and an $m \times n \times l_2$-dimensional testing sample set, where $m$ is the number of vibration sensors located on the equipment being diagnosed, $n$ is the number of features originating from the vibration signal of a single sensor, and $l$ is the number of samples, with $l = l_1 + l_2$.
The second step is compression of the high-dimensional $m \times n \times l_1$ tensor into a relatively low-dimensional $m' \times n' \times l_1$ tensor using SSTLPP-AMDM. By constructing a supervised similarity weighting matrix based on AMDM, a minimization problem is formulated for the weighted sum of the assembled matrix distances between samples in the embedded tensor space, in order to find the optimal transformation matrices $U$ and $V$.
Finally, the low-dimensional projection of the testing sample set and the low-dimensional projection of the training sample set obtained in the previous step are input into WKNNC for fault diagnosis.

Experimental Results and Analysis
5.1. Experimental Setup. The validity of the newly proposed method will now be demonstrated using a fault diagnosis experiment on a single-stage gearbox. As shown in Figure 2(a), this paper employed a rotation machinery fault diagnosis experiment platform of type QPZZ-II, which was converted into a gearbox fault test bench, with the timing belt pulley at the side of the gearbox connected to the motor shaft. A diagram of the gearbox fault experiment system is displayed in Figure 2(b). Seven displacement sensors and accelerometers, installed on the input shaft and the end housings of the four bearings, were employed to collect faulty vibration signals. Specific location information is shown in Table 1.
During the experiment, the sampling frequency was 5120 Hz, there were 53248 sampling points, the rotation speed of the drive motor was 880 rev/min, and the load was 0.2 A. There were six types of conditions used in the gearbox fault simulation experiment: (1) normal (Norm), (2) corrosion of the gearwheel (C G), (3) broken teeth in the gearwheel (B G), (4) wear of the pinion (W P), (5) broken teeth in the gearwheel coupled with wear of the pinion (B G C W P), and (6) corrosion of the gearwheel coupled with wear of the pinion (C G C W P). Figure 3 shows the time-domain waveforms of the faulty samples originating from the seven different sensors under each condition, and the time-domain waveforms of the faulty samples originating from a single sensor under the six conditions are displayed in Figure 4. It can be observed from these graphs that sensors installed at different equipment positions have distinct abilities to characterize changes in the machinery condition, and thus it is feasible to fuse faulty information from multiple sensors for accurate fault diagnosis.

50 samples under each condition from a single sensor were subsequently selected, 30 of which were used to train the fault diagnosis model, with the remaining samples used for testing. The length of each sample was 1024. Furthermore, five time-domain feature parameters and five frequency-domain parameters were calculated to construct a feature set: root mean square, skewness, kurtosis, impulse factor, peak factor, mean frequency, frequency center, root mean square frequency, standard deviation frequency, and kurtosis frequency, as commonly defined in the previous literature [31, 32]. Accordingly, a 7 × 10 × 300-dimensional tensor formed sample set labeled with the corresponding classes was constructed to act as the input for the entire fault diagnosis experiment, composed of 50 samples under each of the six conditions.
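The ten condition features named above can be computed per 1024-point segment roughly as follows. The definitions here follow common usage in the vibration literature; the exact formulas in [31, 32] may differ slightly, so treat this as a sketch.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def feature_vector(x, fs=5120):
    """Five time-domain and five frequency-domain condition features
    for one vibration signal segment sampled at fs Hz."""
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    time_feats = [rms,
                  skew(x),                              # skewness
                  kurtosis(x),                          # kurtosis
                  peak / np.mean(np.abs(x)),            # impulse factor
                  peak / rms]                           # peak (crest) factor
    spec = np.abs(np.fft.rfft(x))                       # amplitude spectrum
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    P = spec / spec.sum()                               # normalized spectrum as weights
    fc = np.sum(f * P)                                  # frequency center
    sdf = np.sqrt(np.sum((f - fc) ** 2 * P))            # standard deviation frequency
    freq_feats = [np.mean(spec),                        # mean frequency (mean amplitude)
                  fc,
                  np.sqrt(np.sum(f ** 2 * P)),          # root mean square frequency
                  sdf,
                  np.sum((f - fc) ** 4 * P) / (sdf ** 4 + 1e-12)]  # kurtosis frequency
    return np.array(time_feats + freq_feats)
```

Stacking this 10-feature vector over the 7 sensors for each of the 300 segments yields the 7 × 10 × 300 tensor described above.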

Performance Analysis and Comparison of Different Dimension Reduction Algorithms

This subsection validates the effectiveness of the proposed SSTLPP-AMDM algorithm for dimension reduction in fault diagnosis of rotation machinery, as well as its superiority over the traditional STLPP method. Using the calculation procedure described in Section 2.2 with the training sample set constructed above as the input, SSTLPP-AMDM based dimension reduction is implemented to obtain the explicit transformation matrices $U$ and $V$, as well as the low-dimensional second-order tensor formed projection $Y$. In this experiment, the neighbor parameter $k$ was set to 13, with the remaining parameters (the similarity penalty $\beta$ and the AMDM parameter $p$) determined by repeated experiments. For further confirmation of the superiority of the proposed second-order tensor formed faulty samples originating from multisensor fusion over both the vector-represented multisensor fused samples and the prevailing vector-formed faulty samples originating from merely a single sensor, two further groups of experiments were designed: LPP-based dimension reduction of vector-expressed multisensor fused samples (LPP-VM) and LPP-based dimension reduction of faulty samples from a single sensor (LPP-VS). Their purpose is comparison with SSTLPP-AMDM, whose input is the proposed second-order tensor-represented faulty samples, in terms of the dimension reduction effect shown in Figure 5(a). The dimension reduction results of these two experiments are displayed in Figures 6(a) and 6(b), respectively. In contrast to Figure 5(a), these results demonstrate that the proposed second-order tensor-represented multisensor fused samples combined with SSTLPP-AMDM achieve the best clustering performance of the three experiments, that is, LPP-VM, LPP-VS, and the proposed SSTLPP-AMDM. Moreover, by comparing Figure 6(a) with Figure 6(b), it can be intuitively seen that the first diagram shows better sample clustering results than the second, not only in terms of between-class separation but also in terms of within-class aggregation. This indirectly demonstrates 
the benefit of using multisensor data fusion to increase the integrity of the fault information. In order to strengthen these experimental conclusions, two other sets of comparison experiments were implemented. The first was a quantitative analysis of the dimension reduction results of ten approaches, which respectively adopted SSTLPP-AMDM, STLPP, and LPP with three different inputs: the second-order tensor formed multisensor fused data, the vector-represented multisensor fused data, and the vectored sample data from a single sensor. The detailed comparison results are provided in Table 2 and Figure 7. The other group of experiments compared the classification accuracies of these ten types of fault feature dimension reduction results using three popular intelligent fault classifiers: a support vector machine (SVM), a multilayer perceptron (MLP) neural network, and a support vector data description (SVDD). The specific experimental description is discussed in the following paragraphs and the comparison results are shown in Table 3.
The results shown in the scatter distribution diagrams in Figure 6 were used for qualitative analysis of the dimension reduction results based on SSTLPP-AMDM, STLPP, and LPP combined with different inputs. Three commonly used clustering performance indicators were used to quantitatively evaluate the suitability of the dimension reduction algorithms for subsequent fault classification: the within-class scatter $S_w$, the between-class scatter $S_b$, and the synthesized within-class-between-class scatter $J$. The mathematical equations for these three indicators can be written as follows:

$S_w = \frac{1}{N} \sum_{i=1}^{C} \sum_{j=1}^{N_i} (x_{ij} - \bar{x}_i)^2, \qquad S_b = \frac{1}{C} \sum_{i=1}^{C} (\bar{x}_i - \bar{x})^2, \qquad J = \frac{S_b}{S_w},$

where $C$ is the number of conditional classes, $N = \sum_{i=1}^{C} N_i$ is the total number of samples with $N_i$ the number of samples belonging to the $i$th class, $x_{ij}$ is the feature value of the $j$th sample in the $i$th class, $\bar{x}_i$ is the mean feature value of the $i$th class, and $\bar{x} = (1/C) \sum_{i=1}^{C} \bar{x}_i$ is the total mean feature value over all classes. It should be noted that the clustering performance of each feature is proportional to the values of the between-class scatter $S_b$ and the synthesized within-class-between-class scatter $J$, and inversely proportional to the value of the within-class scatter $S_w$.
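The three indicators are straightforward to compute for a one-dimensional feature; a minimal sketch, matching the definitions above (with the grand mean taken as the mean of the class means):

```python
import numpy as np

def scatter_indicators(features, labels):
    """Within-class scatter S_w, between-class scatter S_b, and the
    synthesized indicator J = S_b / S_w for one feature dimension."""
    x = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    classes = np.unique(y)
    class_means = np.array([x[y == c].mean() for c in classes])
    grand_mean = class_means.mean()                 # mean of the class means
    Sw = sum(((x[y == c] - x[y == c].mean()) ** 2).sum() for c in classes) / len(x)
    Sb = ((class_means - grand_mean) ** 2).mean()
    return Sw, Sb, Sb / Sw
```

Larger $S_b$ and $J$ values, together with a smaller $S_w$, indicate better clustering of the reduced features, which is the criterion used in Table 2.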
The previously mentioned training sample set, which contains six-class faulty condition data for the gearbox, is used as the input. Ten groups of experiments are performed to calculate the corresponding scatter parameters of the first three-dimensional features of the vectored dimension reduction results: (1) SSTLPP-AMDM based dimension reduction for second-order tensor formed multisensor fused data (SSTLPP-AMDM for STMD), (2) traditional STLPP-based dimension reduction for second-order tensor formed multisensor fused data (STLPP for STMD), (3) LPP-based dimension reduction for vector-represented multisensor fused data (LPP for VMD), and (4)-(10) LPP-based dimension reduction of vectored sample data from seven different positional sensors (LPP for VSD1-7). The scatter computation results for each dimensional feature based on each of the ten methods are shown in Table 2 and the corresponding average scatter parameter values are displayed in Figure 7. It can be seen that the SSTLPP-AMDM based dimension reduction results of tensor formed multisensor fused samples have the smallest within-class scatter $S_w$, the largest between-class scatter $S_b$, and thus the largest synthesized scatter $J$. The traditional STLPP-based dimension reduction for tensor formed multisensor fused data and the LPP-based dimension reduction for vector-represented multisensor fused data obtain larger $S_w$ values, smaller $S_b$ values, and smaller $J$ values than SSTLPP-AMDM for STMD, but achieve smaller $S_w$ values, larger $S_b$ values, and larger $J$ values than the other seven LPP-based dimension reduction approaches for vectored sample data originating from a single sensor. These comparison and analysis results indicate that SSTLPP-AMDM for STMD is much more effective than any of the other nine dimension reduction methods in terms of the clustering performance of the dimension reduction results.
As mentioned earlier, in order to acquire direct evidence of the superiority of the proposed SSTLPP-AMDM algorithm as well as of multisensor data fusion, three frequently used intelligent classifiers (SVM, MLP neural network, and SVDD) were respectively applied to the first three-dimensional features of the vectored dimension reduction results of the ten methods (M1-M10), marked as F1-F10. Each experiment was carried out ten times. For the SVM classifier, this paper employs a radial basis kernel function with a kernel parameter value of 1. For the MLP neural network, the commonly used three-layer structure is employed (input layer, hidden layer, and output layer), with the numbers of nodes in the input and output layers set to 3 and 6, respectively; these values depend on the number of input features and output classes. The geometric pyramid rule determines that the number of hidden layer nodes is 5. The Gaussian kernel function is used for the SVDD model, and the corresponding kernel parameter is set to 3. The classification results of the three models applied to each of the ten types of feature sets originating from the previous experiment are listed in Table 3.
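For reproducibility, the classifier settings described above can be approximated in scikit-learn roughly as follows; the mapping of the paper's "kernel parameter" to `gamma` is an assumption, and SVDD has no direct scikit-learn implementation (`OneClassSVM` is its closest one-class relative, so it is omitted here).

```python
# Hypothetical scikit-learn approximations of the classifier configurations above.
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# SVM: radial basis kernel, kernel parameter 1 (assumed to correspond to gamma).
svm = SVC(kernel="rbf", gamma=1.0)

# MLP: 3-5-6 topology (input-hidden-output); scikit-learn infers the input and
# output sizes from the data, so only the hidden layer is specified explicitly.
mlp = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000)
```

These are configuration sketches only; the paper does not state which implementations were used.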
As shown in Table 3, the reduced feature set of the tensor formed multisensor fused fault data based on the proposed SSTLPP-AMDM dimension reduction algorithm (F1) achieves higher classification accuracy than the other nine reduced feature sets (F2-F10) for all three classifiers. The reduced feature sets of the tensor formed multisensor fused fault data based on the traditional STLPP algorithm (F2) and of the vector-represented multisensor fused data based on LPP (F3) achieve the second and third highest classification accuracies. The seven reduced feature sets of vectored fault data originating from a single sensor (F4-F10) have the lowest level of classification accuracy. These results further confirm the effectiveness of the proposed SSTLPP-AMDM combined with the formulated tensor-represented multisensor data fusion in increasing the amount of useful information in the feature set and facilitating the subsequent classification task.

Overall Performance Validation of the Proposed Fault Diagnosis Approach.
The following experiments and analysis were also employed to verify the superiority of the proposed WKNNC-AMDM method, as well as the overall fault diagnosis approach proposed in this paper. Using the implementation procedure for the proposed fault diagnosis method shown in Figure 1, a final fault diagnostic result is achieved by inputting the low-dimensional tensor formed testing sample set, after dimension reduction with SSTLPP-AMDM, into WKNNC-AMDM. Furthermore, the classification performance of WKNNC-AMDM combined with the low-dimensional second-order tensor formed multisensor fused sample data after dimension reduction is compared with that of WKNNC-FDM and KNNC-FDM, both of which have the same input data. For all three classifiers, the neighborhood size was set to 13. Each experiment was performed ten times, and the classification results of the three classifiers for the gearbox fault sample data, including the cumulative number of false classification samples (Cum. number of FCS), the distribution of false classification samples within the six different faulty classes (number of FCS within Classes 1-6), and the total testing accuracy, are listed in detail in Table 4.
It can be seen from Table 4 that although the same input data is used for the three classifiers, namely, the low-dimensional tensor formed testing sample set after dimension reduction with SSTLPP-AMDM, the proposed WKNNC under AMDM has a classification accuracy of 100%, which is higher than the 89.17% accuracy achieved using WKNNC under traditional FDM and the 70.83% accuracy achieved using the classical KNNC with FDM. These results indicate that the WKNNC-AMDM method has superior classification performance to WKNNC-FDM and KNNC-FDM, owing to the addition of the assembled matrix distance metric for the similarity representation of the second-order tensor formed samples and the weighted voting strategy for the nearest neighbor classifier. This experiment also effectively verifies the validity of the overall proposed fault diagnosis framework, which comprehensively includes the SSTLPP-AMDM based dimension reduction and the WKNNC-AMDM.

Conclusions
This paper has presented a novel multisensor fused fault diagnosis approach for rotation machinery based on SSTLPP-AMDM and WKNNC-AMDM. Based on the extensive experimental analysis and comparisons that were performed, the main conclusions can be summarized as follows.
(1) In contrast with traditional STLPP, the proposed SSTLPP-AMDM algorithm can obtain better dimension reduction effects for the original high-dimensional second-order tensor-represented samples. This is achieved by the addition of class label information and the improvement of the similarity evaluation method for matrix formed samples by AMDM. Furthermore, it was also verified that SSTLPP-AMDM based dimension reduction of multisensor fused second-order tensor formed samples is superior to LPP-based dimension reduction of multisensor fused vector-formed samples and to LPP-based dimension reduction of vector-formed samples from a single sensor, in terms of the clustering performance of samples of different classes after reduction.
(2) The proposed WKNNC-AMDM can obtain higher classification accuracy than WKNNC-FDM and KNNC-FDM, owing to the introduction of the weighted voting strategy and the assembled matrix distance metric for similarity representation of second-order tensor formed samples.
(3) Benefiting from the second-order tensor formed multisensor fused faulty sample representation, SSTLPP-AMDM for efficient dimension reduction, and WKNNC-AMDM for rapid fault classification, the proposed fault diagnosis approach achieves higher classification accuracy for rotation machinery than the other comparable methods.
In summary, the proposed fault diagnosis approach has the following strengths: more adequate fault information, lower computational complexity, and higher fault recognition accuracy. Therefore, it is well suited to engineering applications for fault diagnosis of rotation machinery.
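For illustration only, the combination of class label information with AMDM-based similarity weighting summarized in conclusion (1) might look like the following sketch, in which heat-kernel weights are restricted to same-class nearest neighbors. The exact supervised weighting rule of the paper is not reproduced in this excerpt, so this particular rule, and the AMDM form itself, are labeled assumptions.

```python
import numpy as np

def amdm(A, B, p=1.0):
    # Assumed AMDM form: column Euclidean norms assembled with exponent p.
    col_sq = np.sum((A - B) ** 2, axis=0)
    return float(np.sum(col_sq ** (p / 2.0)) ** (1.0 / p))

def supervised_amdm_weights(samples, labels, k=2, t=1.0, p=1.0):
    """Similarity weight matrix restricted to same-class k-nearest
    neighbors, with AMDM replacing the Frobenius distance in the
    heat kernel (supervision rule is an assumed variant)."""
    n = len(samples)
    d = np.array([[amdm(samples[i], samples[j], p) for j in range(n)]
                  for i in range(n)])
    W = np.zeros((n, n))
    for i in range(n):
        neighbors = [j for j in np.argsort(d[i]) if j != i][:k]
        for j in neighbors:
            if labels[i] == labels[j]:       # class label constraint
                W[i, j] = W[j, i] = np.exp(-d[i, j] ** 2 / t)
    return W
```

Zeroing the cross-class entries keeps same-class samples close after projection, which is the clustering improvement reported for SSTLPP-AMDM over unsupervised STLPP.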

Figure 1: Implementation process of the proposed fault diagnosis method based on SSTLPP and WKNNC.

Figure 3: Test signals originating from the seven different sensors under the following conditions: (a) Norm, (b) C G, (c) B G, (d) W P, (e) B G C W P, and (f) C G C W P.

Figure 5: Scatter plots of the vector-formed dimension reduction result based on different algorithms for the training sample set: (a) SSTLPP-AMDM and (b) STLPP.

Figure 6: Scatter plots of dimension reduction results based on LPP with different input data: (a) LPP-VM and (b) LPP-VS.
Here, ‖⋅‖_F is the Frobenius norm of a matrix; that is, ‖X‖_F = √(∑_i ∑_j x_ij²). W_ij denotes the elements of the weight matrix W of the nearest neighbor graph G, which are equal to exp(−‖X_i − X_j‖_F²/t) when X_j is one of the k nearest neighbors of X_i or X_i is one of the k nearest neighbors of X_j; otherwise they are equal to zero. D is a diagonal matrix with D_ii = ∑_j W_ij.
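The classical (Frobenius distance based) nearest-neighbor weight construction described above might be computed as in this sketch; the symmetric "or" neighborhood condition is realized by mirroring each weight to both entries, and the choice of k and t here is illustrative only.

```python
import numpy as np

def knn_graph_weights(samples, k=5, t=1.0):
    """Weight matrix W of the k-nearest-neighbor graph G under the heat
    kernel exp(-||Xi - Xj||_F^2 / t), and the diagonal matrix D with
    D_ii = sum_j W_ij.  A weight is set whenever either sample is among
    the other's k nearest neighbors (the 'or' condition)."""
    n = len(samples)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = np.linalg.norm(samples[i] - samples[j])  # Frobenius norm
    W = np.zeros((n, n))
    for i in range(n):
        for j in [m for m in np.argsort(d[i]) if m != i][:k]:
            W[i, j] = W[j, i] = np.exp(-d[i, j] ** 2 / t)
    return W, np.diag(W.sum(axis=1))
```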
2.1. Introduction to Second-Order Tensor Locality Preserving Projection (STLPP). As the tensor extension of LPP, TLPP is essentially equivalent to finding a linear approximation

Table 1: Specific information of the seven sensors.

Table 2: Comparison of scatter parameter values based on ten different dimension reduction methods.

Table 3: Comparison of fault classification results based on three classifiers and ten types of reduced feature sets.

Table 4: Fault diagnosis results of three different classifiers with the reduced low-dimensional tensor formed multisensor fused samples.