Mathematical Problems in Engineering, vol. 2012, Article ID 757828. Hindawi Publishing Corporation, doi:10.1155/2012/757828.
Research Article: Fault Detection for Industrial Processes
Yingwei Zhang, Lingjun Zhang, Hailong Zhang, Huaguang Zhang
State Laboratory of Synthesis Automation of Process Industry, Northeastern University, Shenyang, Liaoning 110004, China
Received 23 August 2012; Accepted 15 November 2012. Copyright © 2012 Yingwei Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is developed based on it. The proposed method further decomposes both the KPCA principal space and the residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. The method can find the fault-relevant principal directions and principal components of the systematic subspace and the residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and the penicillin fermentation process, and the simulation results show its effectiveness.

1. Introduction

Process monitoring and fault diagnosis are important for the safety and reliability of industrial processes. As data-driven process monitoring methodologies, multivariate statistical analysis techniques such as principal component analysis (PCA) and partial least squares (PLS) have been widely used for the detection and diagnosis of abnormal operating situations in many industrial processes over the last few decades [5, 13–16]. The major advantage of these methods is their ability to handle a large number of highly correlated variables and to reduce the high-dimensional process measurements to a low-dimensional latent space. Monitoring based on these methods is straightforward.

PCA is one of the most widely used linear techniques for transforming data into a new space. It divides the data information into significant patterns, such as linear tendencies or directions in the model subspace, and uncertainties, such as noise or outliers located in the residual subspace. The T2 statistic and SPE statistic, based on the Mahalanobis and Euclidean distances, are used to elucidate the pattern variations in the model and residual subspaces, respectively. PLS decomposition methods are used similarly to PCA for process monitoring and are more effective at supervising the variations in the process variables that are more influential on the quality variables. The T2 and SPE statistics are also employed in the PLS monitoring system. These methods develop a normal operating model from data gathered during normal operation and define the normal operation regions. New process behaviors can thus be compared with the predefined ones by the monitoring system to determine whether they remain normal. When the process moves out of the normal operation regions, it is concluded that an "unusual and faulty" change in the process behavior has occurred. Many extensions of the conventional PCA/PLS algorithms have been reported [15, 23–31]. Recently, Li et al. proposed total projection to latent structures (T-PLS) and discussed process monitoring and fault diagnosis based on the new structure [32, 33]. They analyzed the problem faced by conventional PLS-based process monitoring, which divides the measured variable space into only two subspaces and uses two monitoring statistics, for the PLS scores and residuals, respectively. They pointed out that output-irrelevant variations are also included in the PLS scores and that the PLS residuals do not necessarily cover only small X-variations.
The T-PLS algorithm further decomposes the PLS systematic subspace to separate the output-orthogonal part from the output-correlated part, and the PLS residual subspace to separate large variations from noise. A T-PLS-based monitoring system was then developed on the resulting four process subspaces.

KPCA is a nonlinear version of PCA. It can efficiently compute PCs in a high-dimensional feature space using nonlinear kernel functions. The core idea of KPCA is to first map the data space into a feature space through a nonlinear mapping and then carry out the PCA operation in the feature space. KPCA divides the data into a systematic subspace and a residual subspace and uses the T2 statistic and the SPE statistic to monitor these two subspaces, respectively [13, 15, 16, 34].

In this paper, to improve the KPCA model, a fault-relevant KPCA algorithm is proposed, and a process monitoring approach based on the new algorithm is developed for fault detection. The proposed method further decomposes both the KPCA principal space and the residual space into two subspaces by checking the influence of process disturbances. The basic objective of the further subspace decomposition is to separate the part that is influenced greatly by the fault from the part that is not clearly fault-relevant, that is, to find the fault-relevant directions and fault-relevant principal components. A new monitoring method is then proposed based on the fault-relevant directions. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence.

The remaining sections of this paper are organized as follows. Section 2 revisits the KPCA model and then presents the fault-relevant KPCA algorithm. Section 3 introduces the on-line monitoring method of fault-relevant KPCA, covering both model development and on-line monitoring. In Section 4, simulation results are given to illustrate the effectiveness of the new method. Finally, conclusions are drawn in Section 5.

2. Algorithm of Fault-Relevant KPCA

For the traditional PCA algorithm, some faults may not influence all the principal directions; that is, for a given fault, some principal directions are not relevant. The KPCA algorithm extends PCA to nonlinear data, so it shares this characteristic. The proposed fault-relevant KPCA algorithm finds the principal directions that are relevant to, or affected by, the disturbances and then measures the changes of the variation along these principal directions. Therefore the proposed algorithm has higher sensitivity and accuracy for process monitoring and can detect faults faster.

The purpose of the proposed algorithm is to obtain the fault-relevant principal directions of the systematic subspace and of the residual subspace. With the obtained fault-relevant principal directions and a new set of data, the scores of the new data can be computed. The T2 and SPE statistics can then be calculated to monitor the process.

In KPCA, the training samples $x_1, x_2, \ldots, x_N \in \mathbb{R}^M$, collected from the normal process, are mapped into a feature space $F$ by a nonlinear mapping $\Phi: \mathbb{R}^M \to F$. The covariance matrix in $F$ can be calculated as
$$C^{\Phi} = \frac{1}{N}\sum_{j=1}^{N}\Phi(x_j)\Phi(x_j)^{T}, \tag{2.1}$$
where it is assumed that $\sum_{k=1}^{N}\Phi(x_k)=0$, and $\Phi(\cdot)$ is a nonlinear mapping function that projects the input vectors from the input space to $F$. Principal components in $F$ can be obtained by finding the eigenvectors of $C^{\Phi}$, in direct analogy to the PCA procedure in the input space:
$$\lambda p = C^{\Phi} p, \tag{2.2}$$
where $\lambda$ denotes an eigenvalue and $p$ the corresponding eigenvector of the covariance matrix $C^{\Phi}$.

For $\lambda \neq 0$, the solution $p$ (eigenvector) can be regarded as a linear combination of $\Phi(x_1), \Phi(x_2), \ldots, \Phi(x_N)$, that is, $p = \sum_{j=1}^{N}\alpha_j\Phi(x_j)$.

Using the kernel trick $K_{ij} = \langle\Phi(x_i), \Phi(x_j)\rangle$, the eigenvalue problem can be written in the simplified form
$$\lambda\alpha = \frac{1}{N}K\alpha, \tag{2.3}$$
where $\alpha = [\alpha_1\ \alpha_2\ \cdots\ \alpha_N]^T$ and $K \in \mathbb{R}^{N\times N}$ is the Gram matrix composed of the entries $K_{ij}$.

Then the calculation is equivalent to solving the eigenproblem (2.3). To satisfy the assumption $\sum_{k=1}^{N}\Phi(x_k)=0$, $K$ must be mean-centered before the calculation. The centered Gram matrix $\bar{K}$ can easily be obtained as $\bar{K} = K - KE - EK + EKE$, where $E \in \mathbb{R}^{N\times N}$ and each element of $E$ equals $1/N$. The coefficient vector $\alpha$ should also be normalized to satisfy $\|\alpha\|^2 = 1/(N\lambda)$, which corresponds to the normality constraint $\|p\|^2 = 1$ on the eigenvector.

The scores $T$ of the vector $X$ are then extracted by projecting $\Phi(x)$ onto the eigenvectors $p$ in $F$; there are $N$ such scores in total. By some selection principle, $R$ scores are retained as principal components, and the $R$ corresponding directions of $p$ are obtained at the same time [36, 37]. These $R$ directions span the systematic subspace and the remaining $N-R$ directions span the residual subspace. The PCs of $X$ are
$$T = [t_1\ t_2\ \cdots\ t_R], \quad t_k = \langle p_k, \Phi(x)\rangle = \sum_{i=1}^{N}\alpha_i^k\langle\Phi(x_i), \Phi(x)\rangle = \bar{K}\alpha^k, \tag{2.4}$$
where $k = 1, \ldots, R$ and $R$ is the number of principal components.
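The training computation of (2.1)–(2.4) can be sketched in NumPy as follows. This is a minimal illustration, assuming a Gaussian kernel with width c and selection of the R leading eigenvalues; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def center_gram(K):
    """Center the Gram matrix: K_bar = K - K E - E K + E K E, where
    every element of E equals 1/N."""
    N = K.shape[0]
    E = np.ones((N, N)) / N
    return K - K @ E - E @ K + E @ K @ E

def kpca_train(X, c=1.0, R=2):
    """Fit KPCA with a Gaussian kernel. Returns the centered Gram matrix,
    the leading R eigenvalues, the normalized coefficient vectors alpha
    (one column per direction), and the scores T = K_bar @ alpha."""
    # Gaussian kernel K_ij = exp(-||x_i - x_j||^2 / c)
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-D / c)
    Kbar = center_gram(K)
    N = X.shape[0]
    # Eigenproblem lambda * alpha = (1/N) K_bar * alpha
    eigvals, eigvecs = np.linalg.eigh(Kbar / N)
    idx = np.argsort(eigvals)[::-1]            # descending eigenvalue order
    lam = eigvals[idx[:R]]
    alpha = eigvecs[:, idx[:R]]
    # Normalize so that ||alpha_k||^2 = 1/(N * lambda_k), i.e. ||p_k|| = 1
    alpha = alpha / np.sqrt(N * lam)
    T = Kbar @ alpha                           # scores, one column per PC
    return Kbar, lam, alpha, T
```

With this normalization the score columns satisfy $t_k^T t_k = N\lambda_k$, which is a convenient consistency check on the implementation.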

Now that the PCs of the training data have been obtained, the PCs that are relevant to faults are found as follows.

First, a fault process space $\Phi_f(x)$ is separated into a systematic subspace and a residual subspace following the same separation rule as for the process space $\Phi(x)$. A data set $X_f \in \mathbb{R}^{L\times M}$ collected from a fault case is projected into $F$ with the same mapping function $\Phi(\cdot)$ to obtain $\Phi_f(x)$:
$$T_f = [t_{f,1}\ t_{f,2}\ \cdots\ t_{f,R}], \quad t_{f,k} = \langle p_k, \Phi_f(x)\rangle = \sum_{i=1}^{N}\alpha_i^k\langle\Phi(x_i), \Phi_f(x)\rangle = \bar{K}_f\alpha^k, \tag{2.5}$$
where $K_{f,ij} = \langle\Phi_f(x_i), \Phi(x_j)\rangle$, $\bar{K}_f = K_f - E_{L\times N}K - K_fE + E_{L\times N}KE$, and
$$E_{L\times N} \in \mathbb{R}^{L\times N}, \quad \text{each element equal to } 1/N. \tag{2.6}$$

$T_f$ spans the systematic subspace of $\Phi_f(x)$.

Then, the fault-relevant PCs of the fault data $\Phi_f(x)$ can be obtained via
$$T_{f,r} = \langle P_r, \Phi_f(x)\rangle, \quad P_r = [p_{r,1}\ p_{r,2}\ \cdots\ p_{r,l}], \quad p_{r,l} = \sum_{j=1}^{N}\alpha_j^l\Phi(x_j), \quad l = 1, \ldots, R_{f,r}. \tag{2.7}$$
From (2.7), one obtains
$$t_{f,r}^l = \bar{K}_f\alpha^l, \quad l = 1, \ldots, R_{f,r}, \tag{2.8}$$
where $R_{f,r} = \operatorname{rank}(T_f)$ and the subscript $r$ denotes "fault relevant."

The fault-relevant PCs of the normal data $\Phi(x)$ can be calculated in the same way:
$$T_r = \langle P_r, \Phi(x)\rangle, \quad t_r^l = \bar{K}\alpha^l, \quad l = 1, \ldots, R_{f,r}. \tag{2.9}$$

In this way, the largest fault-relevant directions of the normal data and the fault data are revealed, respectively. Define the ratio of the fault-relevant PC variances between the fault case and the normal case as
$$\mathrm{Ratio}_i = \frac{\operatorname{var}(T_{f,r}(:,i))}{\operatorname{var}(T_r(:,i))}, \quad i = 1, 2, \ldots, R_{f,r}, \tag{2.10}$$
where $\operatorname{var}(\cdot)$ denotes the variance of the PCs and $T_{f,r}(:,i)$ denotes the $i$th column vector of the matrix $T_{f,r}$, and likewise for $T_r(:,i)$.

The largest value of $\mathrm{Ratio}_i$ marks the direction along which the process variation changes most from normal status to the fault case. If $\mathrm{Ratio}_i$ is smaller than 1, the concerned variations in the fault status are smaller than those in the normal case. The directions with ratio values larger than 1 are kept; these are the fault-relevant directions with increased variations. The number of retained principal directions is $R_p$. The $R_p$ fault-relevant principal directions compose $P_p$, and the remaining $R - R_p$ directions of $P_r$, which are fault irrelevant, compose $P_o$. $T_p$ are the fault-relevant PCs of the normal data, with $R_p$ components, and $T_o$ are the fault-irrelevant PCs, with $R - R_p$ components.
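The selection rule of (2.10) can be sketched as follows, assuming the fault-relevant scores of the normal case and the fault case are available as arrays with one column per PC; the names are illustrative.

```python
import numpy as np

def fault_relevant_directions(T_r, T_fr):
    """Split principal directions by the variance ratio of eq. (2.10):
    keep direction i when var(T_fr[:, i]) / var(T_r[:, i]) > 1.
    Returns the ratios and the index sets of fault-relevant (P_p / T_p)
    and fault-irrelevant (P_o / T_o) directions."""
    ratio = np.var(T_fr, axis=0) / np.var(T_r, axis=0)
    relevant = np.flatnonzero(ratio > 1.0)
    irrelevant = np.flatnonzero(ratio <= 1.0)
    return ratio, relevant, irrelevant
```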

The directions $p_k = \sum_{j=1}^{N}\alpha_j^k\Phi(x_j)$, $k = 1, \ldots, R$, span the systematic subspace, and the directions $p_k = \sum_{j=1}^{N}\alpha_j^k\Phi(x_j)$, $k = R+1, \ldots, N$, span the residual subspace. Let the number of principal directions in the residual subspace be $R^* = N - R$ and define $P^*$ as
$$P^* = [p_{R+1}\ p_{R+2}\ \cdots\ p_N]. \tag{2.11}$$

Then, the PCs of the normal case in the residual subspace can be calculated as
$$T^* = \langle P^*, \Phi(x)\rangle = [t_{R+1}\ t_{R+2}\ \cdots\ t_N], \quad t_k = \langle p_k^*, \Phi(x)\rangle = \sum_{i=1}^{N}\alpha_i^k\langle\Phi(x_i), \Phi(x)\rangle = \bar{K}\alpha^k, \quad k = R+1, \ldots, N. \tag{2.12}$$

Similarly, the PCs of the fault case in the residual subspace can be calculated as
$$T_f^* = \langle P^*, \Phi_f(x)\rangle = [t_{f,R+1}\ t_{f,R+2}\ \cdots\ t_{f,N}], \quad t_{f,k} = \langle p_k^*, \Phi_f(x)\rangle = \sum_{i=1}^{N}\alpha_i^k\langle\Phi(x_i), \Phi_f(x)\rangle = \bar{K}_f\alpha^k, \quad k = R+1, \ldots, N. \tag{2.13}$$

Following (2.7), the fault-relevant principal directions and principal components in the residual subspace of the fault case can be obtained, respectively:
$$T_{f,r}^* = \langle P_r^*, \Phi_f(x)\rangle, \tag{2.14}$$
$$P_r^* = [p_{R+1}\ p_{R+2}\ \cdots\ p_{R_{f,r}^*}], \tag{2.15}$$
$$p_{r,l} = \sum_{j=1}^{N}\alpha_j^l\Phi(x_j), \quad l = R+1, \ldots, R_{f,r}^*, \tag{2.16}$$
$$t_{f,r}^{*l} = \bar{K}_f\alpha^l, \quad l = R+1, \ldots, R_{f,r}^*, \tag{2.17}$$
$$T_f^* = [t_{f,R+1}\ t_{f,R+2}\ \cdots\ t_{f,R_{f,r}^*}], \tag{2.18}$$
where $R_{f,r}^* = \operatorname{rank}(T_f^*)$. The principal components in the residual subspace of the normal case can then also be worked out with the fault-relevant principal directions of (2.15):
$$t_r^{*l} = \bar{K}\alpha^l, \quad l = R+1, \ldots, R_{f,r}^*, \tag{2.19}$$
$$T_r^* = [t_{R+1}\ t_{R+2}\ \cdots\ t_{R_{f,r}^*}]. \tag{2.20}$$

Then the fault-relevant residual subspace of the fault case is $\Phi_{f,r}^*(x) = T_{f,r}^*P_r^{*T}$ and the fault-relevant residual subspace of the normal case is $\Phi_r^*(x) = T_r^*P_r^{*T}$. Define the ratio of the squared errors between the fault case and the normal case along each direction of the fault-relevant residual subspace:
$$\mathrm{Ratio}_j = \frac{\|T_{f,r}^*(:,j)P_r^*(:,j)^T\|^2}{\|T_r^*(:,j)P_r^*(:,j)^T\|^2} = \frac{T_{f,r}^*(:,j)^T T_{f,r}^*(:,j)}{T_r^*(:,j)^T T_r^*(:,j)}, \quad j = 1, 2, \ldots, R_{f,r}^* - R. \tag{2.21}$$

The largest value marks the direction along which the squared errors change most from normal status to the fault case. The fault-relevant residual directions with ratio values larger than 1 are kept; these are the fault-relevant directions with increased squared errors. The final dimension of the fault-relevant residual subspace is $R_p^*$. Correspondingly, the fault-relevant residual subspace is spanned by $P_p^*$, which is composed of the sorted directions extracted from $P_r^*$. The remaining directions of $P_r^*$, which are fault irrelevant, compose $P_o^*$. The fault-relevant PCs of the normal case compose $T_p^*$, with $R_p^*$ components, and the fault-irrelevant PCs compose $T_o^*$, with $N - R_p^*$ components.
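The residual-subspace criterion of (2.21) admits an equally short sketch; since the ratio reduces to the quotient of the column-wise sums of squared scores, the orthonormality of the retained directions is the only assumption, and the names are again illustrative.

```python
import numpy as np

def residual_error_ratio(T_r_star, T_fr_star):
    """Eq. (2.21): ratio of squared errors along each fault-relevant
    residual direction, using the simplification
    Ratio_j = t_fr*^T t_fr* / t_r*^T t_r*.
    Directions with ratio > 1 span the fault-relevant residual subspace."""
    num = np.sum(T_fr_star**2, axis=0)
    den = np.sum(T_r_star**2, axis=0)
    ratio = num / den
    keep = np.flatnonzero(ratio > 1.0)   # indices forming P_p* / T_p*
    return ratio, keep
```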

There exist a number of kernel functions. According to Mercer's theorem of functional analysis, there exists a mapping into a space where a kernel function acts as a dot product if the kernel function is a continuous kernel of a positive integral operator. Hence, the requirement on the kernel function is that it satisfies Mercer's theorem. Theoretically, any function that satisfies Mercer's theorem can be utilized; the most widely used kernel functions include the Gaussian kernel $K(x,y) = \exp(-\|x-y\|^2/c)$, the polynomial kernel $K(x,y) = \langle x,y\rangle^d$, and the sigmoid kernel $K(x,y) = \tanh(\beta_0\langle x,y\rangle + \beta_1)$, where $d$, $\beta_0$, $\beta_1$, and $c$ are specified a priori by the user. The Gaussian kernel is selected in this paper for its good performance.

3. On-Line Monitoring Strategy of Fault-Relevant KPCA

The fault-relevant KPCA-based monitoring method is similar to that using KPCA. Hotelling's $T^2$ statistic and the $Q$-statistic in the feature space can be interpreted in the same way. The two systematic subspaces each have their own $T^2$ statistic, and the two residual subspaces each have their own $Q$-statistic. Denote the $T^2$ statistic of the fault-relevant systematic subspace by $T_p^2$ and that of the fault-irrelevant systematic subspace by $T_o^2$; denote the $Q$-statistic of the fault-relevant residual subspace by $\mathrm{SPE}_p$ and that of the fault-irrelevant residual subspace by $\mathrm{SPE}_o$. For a new data set $X_{\mathrm{new}} \in \mathbb{R}^{N\times M}$, these four statistics are
$$T_p^2 = T_{p,\mathrm{new}}\Lambda_p^{-1}T_{p,\mathrm{new}}^T, \tag{3.1}$$
$$T_o^2 = T_{o,\mathrm{new}}\Lambda_o^{-1}T_{o,\mathrm{new}}^T, \tag{3.2}$$
$$\mathrm{SPE}_p = \sum_{i=R+1}^{N}t_{\mathrm{new},i}^2 - \sum_{j=1}^{R_p^*}t_{p,\mathrm{new},j}^{*2}, \tag{3.3}$$
$$\mathrm{SPE}_o = \sum_{i=R+1}^{N}t_{\mathrm{new},i}^2 - \sum_{j=1}^{N-R_p^*}t_{o,\mathrm{new},j}^{*2}. \tag{3.4}$$

In (3.1) and (3.3), $T_{p,\mathrm{new}} = \langle P_p, \Phi_{\mathrm{new}}(x)\rangle = \bar{K}_{\mathrm{new}}\alpha^l$, where $l$ ranges over the fault-relevant directions of the systematic subspace, with $R_p$ components, and $\Lambda_p$ collects the $R_p$ fault-relevant eigenvalues of $\lambda$. One has
$$K_{\mathrm{new},ij} = \langle\Phi_{\mathrm{new}}(x_i), \Phi(x_j)\rangle, \tag{3.5}$$
where
$$\bar{K}_{\mathrm{new}} = K_{\mathrm{new}} - E_{M\times N}K - K_{\mathrm{new}}E + E_{M\times N}KE, \quad E_{M\times N} \in \mathbb{R}^{M\times N}, \ \text{each element equal to } 1/N. \tag{3.6}$$

In (3.2), $T_{o,\mathrm{new}} = \langle P_o, \Phi_{\mathrm{new}}(x)\rangle = \bar{K}_{\mathrm{new}}\alpha^l$, where $l$ ranges over the fault-irrelevant directions of the systematic subspace, with $R - R_p$ components, and $\Lambda_o$ collects the $R - R_p$ fault-irrelevant eigenvalues.

In (3.3) and (3.4), $t_{\mathrm{new},i}$ is the $i$th component of $T_{\mathrm{new}}^*$ and $T_{\mathrm{new}}^* = \langle P^*, \Phi_{\mathrm{new}}(x)\rangle = \bar{K}_{\mathrm{new}}\alpha$.

In (3.3), $t_{p,\mathrm{new},j}^*$ is the $j$th component of $T_{p,\mathrm{new}}^*$ and $T_{p,\mathrm{new}}^* = \langle P_p^*, \Phi_{\mathrm{new}}(x)\rangle = \bar{K}_{\mathrm{new}}\alpha^l$, where $l$ ranges over the fault-relevant directions of the residual subspace, with $R_p^*$ components.

In (3.4), $t_{o,\mathrm{new},j}^*$ is the $j$th component of $T_{o,\mathrm{new}}^*$ and $T_{o,\mathrm{new}}^* = \langle P_o^*, \Phi_{\mathrm{new}}(x)\rangle = \bar{K}_{\mathrm{new}}\alpha^l$, where $l$ ranges over the fault-irrelevant directions of the residual subspace, with $N - R_p^*$ components.

The confidence limit of $T^2$ is obtained using the $F$-distribution:
$$T_{p,N,\alpha}^2 \sim \frac{p(N-1)}{N-p}F_{p,N-p,\alpha}, \tag{3.7}$$
where $N$ is the number of samples in the model and $p$ is the number of PCs.

The confidence limit of SPE can be computed from its approximate distribution:
$$\mathrm{SPE}_\alpha \sim g\chi_h^2, \tag{3.8}$$
where $g$ is a weighting parameter included to account for the magnitude of SPE and $h$ accounts for the degrees of freedom. If $a$ and $b$ are the estimated mean and variance of the SPE, then $g$ and $h$ can be approximated by $g = b/(2a)$ and $h = 2a^2/b$.
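The two confidence limits (3.7)–(3.8) translate directly into code. The sketch below uses SciPy's $F$ and chi-square quantile functions; the function names and the choice of SciPy are illustrative conveniences, not part of the paper.

```python
import numpy as np
from scipy import stats

def t2_limit(N, p, alpha=0.01):
    """T^2 control limit from the F-distribution, eq. (3.7):
    p(N-1)/(N-p) * F_{p, N-p, alpha}."""
    return p * (N - 1) / (N - p) * stats.f.ppf(1 - alpha, p, N - p)

def spe_limit(spe_normal, alpha=0.01):
    """SPE control limit from the g*chi2_h approximation, eq. (3.8),
    with g = b/(2a) and h = 2a^2/b estimated from the mean a and
    variance b of the SPE values under normal operation."""
    a = np.mean(spe_normal)
    b = np.var(spe_normal)
    g = b / (2 * a)
    h = 2 * a**2 / b
    return g * stats.chi2.ppf(1 - alpha, h)
```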

3.1. Developing the Different Fault-Relevant Models

Step 1. Acquire normal operating data and several data sets of different known faults.

Step 2. Given a set of $M$-dimensional normal operating data $x_k \in \mathbb{R}^M$, $k = 1, \ldots, N$, and a set of $M$-dimensional fault data $x_{f,k} \in \mathbb{R}^M$, $k = 1, \ldots, N$, compute the kernel matrix $K \in \mathbb{R}^{N\times N}$ by $[K]_{ij} = K_{ij} = \langle\Phi(x_i), \Phi(x_j)\rangle = k(x_i, x_j)$ and $K_f \in \mathbb{R}^{N\times N}$ by $[K_f]_{ij} = K_{f,ij} = \langle\Phi_f(x_i), \Phi(x_j)\rangle = k(x_{f,i}, x_j)$.

Step 3. Carry out centering in the feature space so that $\sum_{k=1}^{N}\Phi(x_k) = 0$ and $\sum_{k=1}^{N}\Phi_f(x_k) = 0$:
$$\bar{K} = K - KE - EK + EKE, \quad \bar{K}_f = K_f - EK - K_fE + EKE, \tag{3.9}$$
where
$$E \in \mathbb{R}^{N\times N}, \quad \text{each element equal to } 1/N. \tag{3.10}$$
For $k$ different faults, $k$ different $\bar{K}_f$, that is, $k$ different models, can be obtained.

Step 4. Solve the eigenvalue problem $\lambda\alpha = (1/N)\bar{K}\alpha$ and normalize $\alpha$ such that $\|\alpha\|^2 = 1/(N\lambda)$.

3.2. On-Line Monitoring

The main idea of on-line monitoring is that $k$ different models are developed with $k$ different fault data sets. Monitoring statistics are calculated in each of these models with the on-line data at the same time. When a monitoring statistic of one model goes out of its confidence limit, the abnormality is detected and the fault is identified at the same time, namely, as the type of fault with which that model was developed. The specific steps are as follows.

Step 1. Obtain the new data at each sampling instant.

Step 2. Given the $M$-dimensional test data $x_t \in \mathbb{R}^M$, compute the kernel vector $k_t \in \mathbb{R}^{1\times N}$, $[k_t]_j = K_{t,j} = \langle\Phi(x_t), \Phi(x_j)\rangle = k(x_t, x_j)$, where $x_j \in \mathbb{R}^M$ is the normal operating data.

Step 3. Mean-center the test kernel vector $k_t$ as follows:
$$\bar{k}_t = k_t - \mathbf{1}_tK - k_tE + \mathbf{1}_tKE, \tag{3.11}$$
where $K$ and $E$ are obtained from Steps 2 and 3 of the modeling procedure and $\mathbf{1}_t = (1/N)[1, \ldots, 1] \in \mathbb{R}^{1\times N}$.
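The test-vector centering of (3.11) can be sketched as follows; the function name is illustrative. A useful consistency check is that centering a training row of $K$ this way reproduces the corresponding row of the centered training Gram matrix $\bar{K}$.

```python
import numpy as np

def center_test_kernel(kt, K):
    """Center a 1-by-N test kernel row against the training Gram matrix:
    k_bar_t = k_t - 1_t K - k_t E + 1_t K E   (eq. 3.11),
    with E = (1/N) ones(N, N) and 1_t = (1/N) ones(1, N)."""
    N = K.shape[0]
    E = np.ones((N, N)) / N
    ones_t = np.ones((1, N)) / N
    return kt - ones_t @ K - kt @ E + ones_t @ K @ E
```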

Step 4. For the test data $x_t$, compute $T_{\mathrm{new}}$, $T_{p,\mathrm{new}}$, $T_{o,\mathrm{new}}$, $T_{p,\mathrm{new}}^*$, $T_{o,\mathrm{new}}^*$ with $P$, $P_p$, $P_o$, $P_p^*$, $P_o^*$ in the $k$ models, respectively.

Step 5. Calculate the monitoring statistics of the four subspaces of the test data in the $k$ different models.

Step 6. Monitor whether $T^2$ or SPE exceeds its control limit calculated in the modeling procedure.
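The detect-and-identify logic of the multi-model scheme can be sketched as follows. Triggering on only the two fault-relevant statistics is a simplifying assumption, and the dictionary layout is illustrative.

```python
def monitor_sample(stats_per_model, limits_per_model):
    """Given, for each of the k fault models, a dict of monitoring
    statistics (here T2_p and SPE_p) and a dict of their control limits,
    return the index of the first model whose statistics exceed their
    limits; that model's fault type is the identified fault. Return None
    if the sample looks normal under all models."""
    for model_id, (s, lim) in enumerate(zip(stats_per_model, limits_per_model)):
        if s["T2_p"] > lim["T2_p"] or s["SPE_p"] > lim["SPE_p"]:
            return model_id      # fault detected and identified
    return None                  # sample within limits for all models
```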

4. Simulation Study

The proposed fault-relevant KPCA method was applied to fault detection and diagnosis in benchmark simulations of the Tennessee Eastman process and the penicillin fermentation process and compared with the conventional KPCA model.

4.1. Tennessee Eastman Benchmark

The well-known TE process has been widely used for testing various process monitoring and fault diagnosis methods [11, 12] since it was first introduced by Downs and Vogel. The process consists of five major operation units: a reactor, a product condenser, a vapor-liquid separator, a recycle compressor, and a product stripper. It contains two blocks of process variables: 41 measured variables and 11 manipulated variables. Process measurements are sampled at an interval of three minutes. Details of the process description can be found in Downs and Vogel's work.

As a complex chemical process, the TE process provides a superior simulation platform for validating the proposed method. In this study, fifty-two variables, including the 41 process measurement variables and 11 manipulated variables, are used. Four hundred and eighty normal samples are used for model identification. Fifteen known faults, as described in Downs and Vogel's work, are considered. Faults 1–7 are associated with step changes in different process variables, for example, in the A/C feed ratio and the D feed temperature. Faults 8–12 are associated with random variations in certain variables, for example, an increase in the variability of the reactor cooling water inlet temperature. For Fault 13, there is a slow drift in the reaction kinetics. For Faults 14 and 15, two cooling water valves are stuck.

Based on the KPCA algorithm, the normal process space is first decomposed into a systematic subspace and a residual subspace. Then fault-relevant directions or principal components are selected from the systematic subspace with the help of information extracted from the fault data. In this article, Fault 1, Fault 7, and Fault 13 are used to develop different monitoring models. In the models built with these faults, all the principal components in the residual subspace are fault relevant, so the SPE charts are the same as those of KPCA.

For Fault 1, Figure 1(a) shows the $T_p^2$ statistic values computed from the fault-relevant principal components obtained by the fault-relevant KPCA method, and Figure 1(b) shows the KPCA $T^2$ statistic values. The KPCA $T^2$ statistic gives alarm signals from the 181st sample, while the fault-relevant $T_p^2$ goes out of control from the 175th sample. The $T_p^2$ statistic thus detected the fault earlier than the $T^2$ statistic.

Monitoring results of the Tennessee Eastman process based on (a) fault-relevant KPCA and (b) KPCA in the case of Fault 1.

Fault-relevant KPCA

KPCA

For Fault 7, the results in Figure 2 show that the $T_p^2$ statistic detects the fault earlier than the $T^2$ statistic. The actual $T_p^2$ chart for this fault goes down when it detects the fault; it was therefore inverted so that the conventional chart confidence limit could be used to detect the fault. The fault-relevant method detected the fault from the 163rd sample, while the KPCA method detected it from the 167th sample.

Monitoring results of the Tennessee Eastman process based on (a) fault-relevant KPCA and (b) KPCA in the case of Fault 7.

Fault-relevant KPCA

KPCA

For Fault 13, as shown in Figure 3, the two statistics have the same monitoring result. Both statistics detected the fault from the 213th sample.

Monitoring results of the Tennessee Eastman process based on (a) fault-relevant KPCA and (b) KPCA in the case of Fault 13.

Fault-relevant KPCA

KPCA

In summary, the proposed method pays more attention to the fault-relevant process variations and separates them from the fault-irrelevant variations for monitoring; the KPCA model, by comparison, treats them together. For Fault 1 and Fault 7, the monitoring results show that fault-relevant KPCA-based monitoring performs better than KPCA-based monitoring. For Fault 13, the monitoring performance of the proposed method is no worse than that of KPCA.

The choice of the kernel parameter is important for KPCA and other kernel methods, as it affects their performance. Similarly, the kernel parameter is an influential factor in this method and its monitoring: as the kernel parameter changes, the shape of the $T_p^2$ chart changes. For some faults, a kernel parameter that works well for the KPCA $T^2$ statistic may not be appropriate for the $T_p^2$ statistic, which may only be sensitive to the fault under a different kernel parameter. For some faults, the fault-relevant principal components are themselves sensitive, yet the $T_p^2$ statistic calculated from them is not satisfactory. Therefore, for some faults, the proposed method does not achieve satisfactory performance.

4.2. Penicillin Fermentation

In this section, the proposed method is applied to the monitoring of a well-known benchmark process, the penicillin fermentation process. A flow diagram of the penicillin fermentation process is given in Figure 4. Trajectories of nine variables from a nominal batch run are shown in Figure 5. The production of secondary metabolites such as antibiotics has been the subject of many studies because of its academic and industrial importance. Here, we focus on the process to produce penicillin, which has nonlinear dynamics and multiphase characteristics. In the typical operating procedure for the modeled fed-batch fermentation, most of the necessary cell mass is obtained during the initial preculture phase. When most of the initially added substrate has been consumed by the microorganisms, the substrate feed begins. Penicillin starts to be generated in the exponential growth phase and continues to be produced until the stationary phase. A low substrate concentration in the fermentor is necessary for achieving a high product formation rate due to catabolite repression. Consequently, glucose is fed continuously during fermentation from the beginning. In the present simulation experiment, a total of 60 reference batches are generated using the PenSim v2.0 simulator. A detailed process description is available at http://www.chee.iit.edu/~cinar/software.htm. These simulations are run under closed-loop control of pH and temperature, while glucose addition is performed in open loop. Small variations are automatically added to mimic real normal operating conditions under the default initial setting conditions. The duration of each batch is 400 h, consisting of a pre-culture phase of about 45 h and a fed-batch phase of about 355 h [41, 42].

Penicillin fermentation process.

Trajectories of nine variables from a nominal batch run.

The models are constructed using the proposed method, and KPCA is then tested against the same fault batches. Fault 1 is implemented by introducing a 10% step increase in the aeration rate at 100 h, retained until 300 h. Fault 2 is implemented by introducing a 2% step increase in the aeration rate at 100 h, retained until 300 h. Fault 3 is implemented by introducing a 10% step increase in the agitator power at 100 h, retained until 300 h. The monitoring results are shown in Figures 6, 7, and 8, respectively. As shown in Figure 6, both the proposed fault-relevant KPCA method and KPCA can detect faults of large magnitude. In our study, when the faults vary in a small range, the proposed method detects them successfully, but the $T^2$ of KPCA cannot, as shown in Figures 7 and 8. Therefore the proposed method can detect small faults and is more sensitive than KPCA for these faults.

Monitoring results of the penicillin fermentation process based on (a) fault-relevant KPCA and (b) KPCA in the case of Fault 1.

Fault-relevant KPCA

KPCA

Monitoring results of the penicillin fermentation process based on (a) fault-relevant KPCA and (b) KPCA in the case of Fault 2.

Fault-relevant KPCA

KPCA

Monitoring results of the penicillin fermentation process based on (a) fault-relevant KPCA and (b) KPCA in the case of Fault 3.

Fault-relevant KPCA

KPCA

5. Conclusions

In this article, the fault-relevant KPCA algorithm is proposed to decompose the process variations from the fault-relevant perspective. By further decomposing the KPCA subspaces, the underlying process information can be examined more comprehensively, which helps the detection of abnormal changes. Fault-relevant principal components extracted from the KPCA systematic subspace and residual subspace are used to monitor the process. With the fault-relevant principal components, instead of all principal components, some of which may not be influenced by the disturbances, better monitoring results are obtained. Case studies on the TE process and the penicillin fermentation process demonstrate the performance of the fault-relevant KPCA algorithm for process monitoring. In general, swifter and more sensitive fault detection is achieved in comparison with the conventional KPCA method.

Acknowledgments

The work is supported by China’s National 973 program (2009CB320602 and 2009CB320604) and the NSF (60974057 and 61020106003).

References

[1] H. Chun-Chin and S. Chao-Ton, "An adaptive forecast-based chart for non-Gaussian processes monitoring: with application to equipment malfunctions detection in a thermal power plant," IEEE Transactions on Control Systems Technology, vol. 19, no. 5, pp. 1245–1250, 2010.
[2] P. A. Samara, G. N. Fouskitakis, J. S. Sakallariou, and S. D. Fassois, "A statistical method for the detection of sensor abrupt faults in aircraft control systems," IEEE Transactions on Control Systems Technology, vol. 16, no. 4, pp. 789–798, 2008.
[3] T. Chen and J. Zhang, "On-line multivariate statistical monitoring of batch processes using Gaussian mixture model," Computers and Chemical Engineering, vol. 34, no. 4, pp. 500–507, 2010.
[4] B. Zhang, C. Sconyers, C. Byington, R. Patrick, M. E. Orchard, and G. Vachtsevanos, "A probabilistic fault detection approach: application to bearing fault detection," IEEE Transactions on Industrial Electronics, vol. 58, no. 5, pp. 2011–2018, 2011.
[5] S. J. Qin, "Statistical process monitoring: basics and beyond," Journal of Chemometrics, vol. 17, no. 8-9, pp. 480–502, 2003.
[6] Q. Chen and U. Kruger, "Analysis of extended partial least squares for monitoring large-scale processes," IEEE Transactions on Control Systems Technology, vol. 13, no. 5, pp. 807–813, 2005.
[7] B. Ayhan, M. Y. Chow, and M. H. Song, "Multiple discriminant analysis and neural-network-based monolith and partition fault-detection schemes for broken rotor bar in induction motors," IEEE Transactions on Industrial Electronics, vol. 53, no. 4, pp. 1298–1308, 2006.
[8] U. Kruger, S. Kumar, and T. Littler, "Improved principal component monitoring using the local approach," Automatica, vol. 43, no. 9, pp. 1532–1542, 2007.
[9] C. F. Alcala and S. J. Qin, "Reconstruction-based contribution for process monitoring," Automatica, vol. 45, no. 7, pp. 1593–1600, 2009.
[10] R. Muradore and P. Fiorini, "A PLS-based statistical approach for fault detection and isolation of robotic manipulators," IEEE Transactions on Industrial Electronics, vol. 59, no. 8, pp. 3167–3175, 2012.
[11] G. Li, S. J. Qin, and D. Zhou, "Geometric properties of partial least squares for process monitoring," Automatica, vol. 46, no. 1, pp. 204–210, 2010.
[12] G. Li, S. J. Qin, and D. Zhou, "Output relevant fault reconstruction and fault subspace extraction in total projection to latent structures models," Industrial and Engineering Chemistry Research, vol. 49, no. 19, pp. 9175–9183, 2010.
[13] J. M. Lee, C. K. Yoo, S. W. Choi, P. A. Vanrolleghem, and I. B. Lee, "Nonlinear process monitoring using kernel principal component analysis," Chemical Engineering Science, vol. 59, no. 1, pp. 223–234, 2004.
[14] J. F. MacGregor and T. Kourti, "Statistical process control of multivariate processes," Control Engineering Practice, vol. 3, no. 3, pp. 403–414, 1995.
[15] G. Lee, C. Han, and E. S. Yoon, "Multiple-fault diagnosis of the Tennessee Eastman process based on system decomposition and dynamic PLS," Industrial and Engineering Chemistry Research, vol. 43, no. 25, pp. 8037–8048, 2004.
[16] J. M. Lee, C. Yoo, and I. B. Lee, "Statistical process monitoring with independent component analysis," Journal of Process Control, vol. 14, no. 5, pp. 467–485, 2004.
[17] S. Wold, K. Esbensen, and P. Geladi, "Principal component analysis," Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1–3, pp. 37–52, 1987.
[18] G. H. Dunteman, Principal Component Analysis, SAGE Publications, London, UK, 1989.
[19] J. E. Jackson, A User's Guide to Principal Components, Wiley, New York, NY, USA, 1991.
[20] D. G. Kleinbaum, L. L. Kupper, and K. E. Muller, Applied Regression Analysis and Other Multivariable Methods, Wadsworth Publishing, California, USA, 1988.
[21] A. J. Burnham, R. Viveros, and J. F. MacGregor, "Frameworks for latent variable multivariate regression," Journal of Chemometrics, vol. 10, no. 1, pp. 31–45, 1996.
[22] B. S. Dayal and J. F. MacGregor, "Improved PLS algorithms," Journal of Chemometrics, vol. 11, no. 1, pp. 73–85, 1997.
[23] S. J. Qin, "Recursive PLS algorithms for adaptive data modeling," Computers and Chemical Engineering, vol. 22, no. 4-5, pp. 503–514, 1998.
[24] C. Zhao, F. Wang, and Y. Zhang, "Nonlinear process monitoring based on kernel dissimilarity analysis," Control Engineering Practice, vol. 17, no. 1, pp. 221–230, 2009.
[25] Y. W. Zhang, H. Zhou, and S. J. Qin, "Decentralized fault diagnosis of large-scale processes using multiblock kernel principal component analysis," Acta Automatica Sinica, vol. 36, no. 4, pp. 593–597, 2010.
[26] Y. Zhang, H. Zhou, S. J. Qin, and T. Chai, "Decentralized fault diagnosis of large-scale processes using multiblock kernel partial least squares," IEEE Transactions on Industrial Informatics, vol. 6, no. 1, pp. 3–10, 2010.
[27] Y. Zhang and Z. Hu, "Multivariate process monitoring and analysis based on multi-scale KPLS," Chemical Engineering Research and Design, vol. 89, no. 12, pp. 2667–2678, 2011.
[28] H. D. Jin, Y. H. Lee, G. Lee, and C. Han, "Robust recursive principal component analysis modeling for adaptive monitoring," Industrial and Engineering Chemistry Research, vol. 45, no. 2, pp. 696–703, 2006.
[29] Y. H. Lee, H. D. Jin, and C. Han, "On-line process state classification for adaptive monitoring," Industrial and Engineering Chemistry Research, vol. 45, no. 9, pp. 3095–3107, 2006.
[30] J. Yang, A. F. Frangi, J. Y. Yang, D. Zhang, and Z. Jin, "KPCA plus LDA: a complete kernel Fisher discriminant framework for feature extraction and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 230–244, 2005.
[31] X. Wang, U. Kruger, G. W. Irwin, G. McCullough, and N. McDowell, "Nonlinear PCA with the local approach for diesel engine fault detection and diagnosis," IEEE Transactions on Control Systems Technology, vol. 16, no. 1, pp. 122–129, 2008.
[32] D. Zhou, G. Lee, and S. J. Qin, "Total projection to latent structures for process monitoring," AIChE Journal, vol. 56, pp. 168–178, 2010.
[33] G. Li, C. F. Alcala, S. J. Qin, and D. Zhou, "Generalized reconstruction-based contributions for output-relevant fault diagnosis with application to the Tennessee Eastman process," IEEE Transactions on Control Systems Technology, vol. 19, no. 5, pp. 1114–1127, 2010.
[34] J. H. Cho, J. M. Lee, S. W. Choi, D. Lee, and I. B. Lee, "Fault identification for process monitoring using kernel principal component analysis," Chemical Engineering Science, vol. 60, no. 1, pp. 279–288, 2005.
[35] S. W. Choi, C. Lee, J. M. Lee, J. H. Park, and I. B. Lee, "Fault detection and identification of nonlinear processes based on kernel PCA," Chemometrics and Intelligent Laboratory Systems, vol. 75, no. 1, pp. 55–67, 2005.
[36] S. Valle, W. Li, and S. J. Qin, "Selection of the number of principal components: the variance of the reconstruction error criterion with a comparison to other methods," Industrial and Engineering Chemistry Research, vol. 38, no. 11, pp. 4389–4401, 1999.
[37] S. Wold, "Cross-validatory estimation of the number of components in factor and principal component models," Technometrics, vol. 20, no. 4, pp. 397–405, 1978.
[38] C. A. Lowry and D. C. Montgomery, "Review of multivariate control charts," IIE Transactions, vol. 27, no. 6, pp. 800–810, 1995.
[39] P. Nomikos and J. F. MacGregor, "Multivariate SPC charts for monitoring batch processes," Technometrics, vol. 37, no. 1, pp. 41–59, 1995.
[40] J. J. Downs and E. F. Vogel, "A plant-wide industrial process control problem," Computers and Chemical Engineering, vol. 17, no. 3, pp. 245–255, 1993.
[41] Y. Zhang and Y. Zhang, "Complex process monitoring using modified partial least squares method of independent component regression," Chemometrics and Intelligent Laboratory Systems, vol. 98, no. 2, pp. 143–148, 2009.
[42] Y. Zhang, S. Li, and Y. Teng, "Dynamic processes monitoring using recursive kernel principal component analysis," Chemical Engineering Science, vol. 72, pp. 78–86, 2012.