A new internal model control (IMC) strategy is proposed based on a regression algorithm for a quasilinear model with extreme learning machine (QL-ELM). Aimed at nonlinear chemical processes, the learning procedures for the internal model and the inverse model are derived. The proposed QL-ELM is constructed as a linear ARX model with a complicated nonlinear coefficient, which gives it good approximation ability and fast convergence. The complicated coefficient is separated into two parts: the linear part is determined by recursive least squares (RLS), while the nonlinear part is identified through the extreme learning machine. The parameters of the linear part and the output weights of the ELM are estimated iteratively. The proposed internal model control is applied to a continuous stirred-tank reactor (CSTR) process. The effectiveness and accuracy of the proposed method are verified extensively through numerical results.
1. Introduction
Internal model control designs the control strategy around a mathematical model of the process. Because of its good dynamic and static performance, as well as its simple structure and strong robustness, internal model control plays an increasingly significant role in the control area [1, 2]. Two crucial problems in the inverse-system approach are identification of the plant model and determination of the controller settings. For a complex nonlinear system, it is difficult to obtain an accurate internal model and its inverse model. In recent years, much effort has been devoted to nonlinear system modeling based on artificial neural networks (NNs) and support vector machines (SVMs) [3–5]. A widely used approach is to employ a trained inverse model as a nonlinear controller [6]. However, the disadvantages of the dynamic gradient method lie in its long training time, small weight updates, and high probability of training failure. SVM methods based on standard optimization often suffer from parameter-tuning difficulties. Moreover, updating the internal model and inverse model based on errors also degrades the control performance [7].
To deal with the above problems, the extreme learning machine (ELM) proposed by Huang et al. [8–10] shows great advantages. Its simplified neural network structure makes learning fast, and a small training error can be obtained via a canonical equation. The advantages of the ELM are its low computational effort and high generalization ability. ELM has therefore been applied successfully in many areas, such as classification of EEG signals and protein sequences [11], regression modeling [12, 13], and fault diagnosis [14, 15].
For nonlinear modeling, the key point is to find a suitable model structure. The Volterra model is an important class of nonlinear system models [16]. It provides an elaborate mathematical description for a great many nonlinear systems, and some researchers have recently developed proofs of inverse theory for IMC based on the Volterra model [17]. However, the obvious shortcoming that limits its application is the high complexity of identifying its kernel functions.
In recent years, some block-oriented models have been proposed and widely applied, such as the Wiener and Hammerstein models [18–21], which consist of a static nonlinear function and a linear dynamic subsystem. Both have simple structures and can be used to identify some highly nonlinear processes, such as a pH neutralization process [22] and a fermentation process [23]. However, it is sometimes difficult to separate the system concerned into a linear dynamic block and a memoryless nonlinear one. Another class of methods, based on local linearization of the structure and combining nonlinear nonparametric models with conventional statistical models, has achieved good results. McLoone et al. [24] proposed an off-line hybrid training algorithm for feed-forward neural networks. Peng et al. [25, 26] proposed the hybrid pseudolinear RBF-AR and RBF-ARX models. A cascaded ARX-NN model structure was proposed by Hu et al. [27]. The idea behind both classes of methods is to separate the linear and nonlinear identification so as to facilitate the inverse computation.
However, these models exhibit strongly nonlinear characteristics that are difficult to analyze theoretically, and they do not exploit any good linearity properties. It is well known that a simple structure, such as the ARX model, has many advantages in modeling. Firstly, its linear properties significantly simplify parameter estimation. Secondly, it is convenient for deriving regression predictors. Furthermore, a linear structure is also convenient for control design as well as for deriving the control law. A good representation should not only approximate the nonlinear function accurately but also simplify the identification process. Hu et al. [28, 29] proposed a quasilinear ARX model with a linear structure for nonlinear process mapping. From a macroscopic standpoint, the model can be seen as a linear structure with redundancy in its regression ability: its complex coefficients reflect the nonlinearity of the system, giving the model great flexibility in dealing with system nonlinearity.
Inspired by this kind of quasilinear ARX model and by the idea of separate identification, this paper proposes a class of quasilinear ELM models that can be separated into a linear part and a nonlinear kernel part. The model can not only identify ordinary nonlinear systems but also simplify the identification process by separating the model complexity. In this paper, a novel internal model control scheme based on the quasilinear ELM (QL-ELM) structure is proposed for a CSTR system. Taking advantage of separate identification, the quasilinear model consists of a linear part and a nonlinear kernel part. The parameters of the nonlinear part are estimated by the ELM, which increases the flexibility of the model, while the linear parameters are estimated by the RLS method; a recursive algorithm estimates the parameters of both parts. Moreover, the QL-ELM is used to set up the internal and inverse models of the nonlinear CSTR system. Through the inverse model, the control action is obtained to achieve fixed-point control and tracking control of the concentration. Owing to its high modeling accuracy and low need for human intervention, the closed-loop control is more stable and has less steady-state deviation. Simulation results demonstrate the dynamic performance and tracking ability of the proposed QL-ELM-based IMC strategy.
This paper is organized in six sections. Following the introduction, the traditional extreme learning machine is reviewed in Section 2. In Section 3 the QL-ELM algorithm is presented. IMC with QL-ELM is described in Section 4. To show the applicability of the proposed method, simulation results for the CSTR are presented in Section 5. Finally, the conclusion is given in Section 6.
2. Extreme Learning Machine: Basic Principles
For input nodes $x_j=[x_{j1},x_{j2},\ldots,x_{jn}]^T$ and output nodes $y_j=[y_{j1},y_{j2},\ldots,y_{jm}]^T$ $(j=1,2,\ldots,N)$, a single-hidden-layer feed-forward neural network (SLFN) with $M$ hidden nodes and activation function $g(x)$ can be expressed as
$$\sum_{i=1}^{M}\beta_i\, g(w_i\cdot x_j+b_i)=y_j,\quad j=1,\ldots,N,\qquad(1)$$
where $\beta_i=[\beta_{i1},\beta_{i2},\ldots,\beta_{im}]^T$ is the vector of weights between the $i$th hidden node and the output nodes, $w_i=[w_{i1},w_{i2},\ldots,w_{in}]^T$ is the vector of weights between the input nodes and the $i$th hidden node, and $b_i$ is the bias of the $i$th hidden node. ELM achieves high regression accuracy with a wide range of activation functions $g(x)$. Unlike other traditional implementations, the input weights and biases are chosen randomly in the extreme learning machine. Writing the hidden-layer output as a matrix $H$, (1) can be rewritten as
$$H\beta=Y,\qquad(2)$$
where
$$H=\begin{bmatrix} g(w_1\cdot x_1+b_1) & \cdots & g(w_M\cdot x_1+b_M)\\ \vdots & \ddots & \vdots\\ g(w_1\cdot x_N+b_1) & \cdots & g(w_M\cdot x_N+b_M) \end{bmatrix}_{N\times M}.\qquad(3)$$
With the theorems proposed in [8, 9], the input weights $w_i$ and the hidden-layer biases $b_i$ are generated randomly and need no further tuning. The main idea of the ELM is that the training problem reduces to finding a least-squares solution. According to Moore-Penrose generalized-inverse theory, the output weights can be calculated as
$$\hat{\beta}=H^{\dagger}Y.\qquad(4)$$
This is the smallest-norm solution among all least-squares solutions. The one-step algorithm produces good generalization performance, learns much faster than traditional learning algorithms, and avoids local optima.
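The one-step training described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation; the network size M, the sigmoid activation, and the toy target function are illustrative choices.

```python
import numpy as np

def elm_fit(X, Y, M=40, rng=None):
    """Fit a single-hidden-layer ELM: random input weights, least-squares output weights."""
    rng = np.random.default_rng(rng)
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n, M))   # random input weights, never retrained
    b = rng.uniform(-1.0, 1.0, size=M)        # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden-layer output, N x M
    beta = np.linalg.pinv(H) @ Y              # Moore-Penrose solution: smallest-norm LS
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy regression problem: approximate y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X).ravel()
W, b, beta = elm_fit(X, Y, M=40, rng=0)
err = np.sqrt(np.mean((elm_predict(X, W, b, beta) - Y) ** 2))
```

Only the output weights `beta` are computed from data; the single pseudoinverse replaces the whole gradient-descent training loop of a conventional network.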
3. The Quasilinear ELM Model Treatment
A quasilinear ELM model can be seen as an SLFN embedded in the coefficients of a linear model. Its feature is that it combines good approximation ability with easy-to-use properties. Consider a nonlinear SISO system described by
$$y(t)=f(X(t))+e(t),\qquad(5)$$
where $X(t)=[y(t-1),y(t-2),\ldots,y(t-n_a),u(t),u(t-1),\ldots,u(t-n_b)]^T$ is the regression vector, $n_a$ and $n_b$ are the orders of the system, $X(t)\in\mathbb{R}^d$ with $d=n_a+n_b+1$, and $e(t)$ is zero-mean stochastic noise.
Assume that $f(\cdot)$ is continuous and differentiable in a small region around $X(t)=0$. Using a Taylor expansion [30], $f(X(t))$ can be expanded as
$$y(t)=f(0)+f'(0)X(t)+\tfrac{1}{2}X^T(t)f''(0)X(t)+\cdots+\delta_n.\qquad(6)$$
Set $y_0=f(0)$. Factoring $X(t)$ out of the higher-order terms in (6), the model can be written as
$$y(t)=X^T(t)\bigl(\theta+\Theta(X(t))\bigr)+e(t),\qquad(7)$$
where $\theta$ collects the constant (linear) part of the coefficient and $\Theta(X(t))=[a_{1,t},\ldots,a_{n,t},b_{0,t},\ldots,b_{m-1,t}]^T$, obtained from $\bigl(f'(0)+\tfrac{1}{2}X^T(t)f''(0)+\cdots+\delta_n\bigr)^T$, is the state-dependent coefficient of the nonlinear function $f(\cdot)$.
The quasilinear model is thus a linear ARX structure with a functional coefficient $\Theta(X(t))$. It can be separated into a linear part $y_L(t)$ and a nonlinear part $y_N(t)$:
$$y(t)=y_L(t)+y_N(t)+e(t),\qquad(8)$$
$$y_L(t)=X^T(t)\,\theta,\qquad(9)$$
$$y_N(t)=X^T(t)\,\Theta(X(t)).\qquad(10)$$
For a nearly linear system, the nonlinear part supplements the nonlinear features, so good regression results can be achieved. For a nonlinear system, the nonlinear network acting as an interpolated coefficient $\Theta(X(t))$ expands the regression space. Equation (10) has a linear form with a nonlinear coefficient $\Theta(X(t))$, which is actually a function-approximation problem from the multidimensional input space $X$ into the coefficient space. Using an ELM to estimate the parameters of the nonlinear part is convenient and concise. Replacing $\Theta$ by an ELM, the model in (7) can be rewritten as
$$y(t)=X^T(t)\,N(X(t),\Omega)+e(t),\qquad(11)$$
where $N(X(t),\Omega)=W_2\,\Gamma(W_1X(t)+B)+\theta$; the quasilinear ELM model can then be expressed as
$$y(t)=X^T(t)\,W_2\,\Gamma(W_1X(t)+B)+X^T(t)\,\theta+b+e(t),\qquad(12)$$
where the activation function is $\Gamma(x)=1/(1+e^{-x})$. The whole identification process based on the QL-ELM is described in Figure 1, where $T(t)=W_2\Gamma(W_1X(t)+B)$, $n_a$ and $n_b$ are the orders of the output and input, $W_1$ and $W_2$ are the weight matrices of the input and output layers, $B$ is the bias vector of the hidden nodes, and $\theta$ is the parameter of the linear part, which can also be seen as the bias vector of the output nodes. The parameters of the two submodels are updated in each iteration until the error between the output of the actual plant and that of the QL-ELM model is minimized. The deviation between $y(t)$ and $y_L(t)$ is used to update the nonlinear part through ELM learning, and the deviation between $y(t)$ and $y_N(t)$ is used to update the linear part through recursive least squares.
The process identification based on QL-ELM.
The whole process minimizes the error between the actual output and the model output. In this paper a hierarchical iterative algorithm is adopted for the quasilinear model.
For the linear part, at every iteration RLS is used to minimize the sum of squared residuals $e=\|y_l-\hat{y}_l\|^2=\|y_l-X^T\hat{\theta}\|^2$, avoiding the problems of local optima and overfitting.
For the nonlinear part, the input-layer weights and biases are fixed, and the training error $e=\|HW_2-T\|^2$ is minimized through ELM learning; the output-layer weights are then calculated as $W_2=H^{\dagger}T$. During this step the linear parameter $\theta$ can be regarded as noise, and the ELM method has good interference-suppression capability and is fast. Because the linear form of the model disperses the complexity of the nonlinear process, the nonlinearity estimation is simplified at every iteration, meaning that fewer hidden nodes ($M$) are required, which avoids overfitting to some extent. Using the QL-ELM model to identify the invertible system improves identification accuracy and system performance.
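The alternation between the two parts can be sketched as follows. This is a simplified illustration, not the paper's implementation: plain batch least squares stands in for RLS, the nonlinear ELM part is fitted directly to the residual rather than as a functional coefficient multiplied by X(t), and the data, dimensions, and iteration count are all synthetic choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic SISO-style data: a linear term plus a mild nonlinearity
N, d, M = 500, 3, 30
X = rng.uniform(-1, 1, size=(N, d))
y = X @ np.array([0.6, -0.4, 0.2]) + 0.3 * np.tanh(2 * X[:, 0] * X[:, 1])

W1 = rng.uniform(-1, 1, size=(d, M))          # fixed random input weights
B = rng.uniform(-1, 1, size=M)                # fixed random hidden biases
H = 1.0 / (1.0 + np.exp(-(X @ W1 + B)))       # hidden-layer output (computed once)

theta = np.zeros(d)                           # linear-part parameters
W2 = np.zeros(M)                              # ELM output weights
for _ in range(10):                           # hierarchical iteration
    y_N = H @ W2                              # current nonlinear-part output
    theta = np.linalg.lstsq(X, y - y_N, rcond=None)[0]   # linear step (RLS stand-in)
    y_L = X @ theta                           # current linear-part output
    W2 = np.linalg.pinv(H) @ (y - y_L)        # ELM step: W2 = H† T on the residual

err = np.sqrt(np.mean((X @ theta + H @ W2 - y) ** 2))
```

Since both steps solve a linear least-squares problem on the residual of the other part, the loop is block coordinate descent on a convex quadratic and the total residual decreases monotonically.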
4. IMC with QL-ELM
In nonlinear IMC, the nonlinear model and its inverse play important roles. In this study the QL-ELM is employed for both the internal model and the inverse-model controller. The basic structure of QL-ELM-based IMC is shown in the block diagram of Figure 2. The unknown nonlinear discrete control system comprises four parts: $G_P$ is the nonlinear plant, and QL-ELM models serve as the inverse-model controller ($G_{IMC}$) and the internal model ($G_M$). In particular, the additional filter not only makes the controller physically realizable but also improves the robustness of the system; it can effectively mitigate problems caused by model mismatch [31].
Structure of IMC system based on QL-ELM.
4.1. Establishment of Internal Model
Consider a nonlinear plant described by
$$y(k+1)=f\bigl(y(k),y(k-1),\ldots,y(k-n_a),u(k),u(k-1),\ldots,u(k-n_b)\bigr),\qquad(13)$$
where $n_a$ and $n_b$ are the orders of the output and input vectors. The input vector $x(k)=[y(k),y(k-1),\ldots,y(k-n_a),u(k),u(k-1),\ldots,u(k-n_b)]^T$ and the output $y(k+1)$ are used as samples to set up the internal model via the QL-ELM, with training set $\{y(k+1),x(k)\}$, $k=1,\ldots,N-1$. The learning of the QL-ELM is implemented by the following steps.
Step 1 (initialization).
Choose the orders of the regression vector, $n_a$ and $n_b$. Set $\theta$ to zero, choose the number of hidden nodes $M$, and initialize the nonlinear parameters $W_1$, $W_2$, and $B$ randomly with small values. Set the iteration counter to $n=1$.
Step 2.
Update the linear part using the deviation $y_l(k)=y(k)-y_n(k)$ and estimate $\theta_L$ using (9).
Step 3.
Update the nonlinear part using the deviation $y_n(k)=y(k)-y_l(k)$ and estimate $\theta_N(W_2,k)$ using (10).
Step 4.
Set $n=n+1$ and return to Step 2 until the training error $e=\|Y-\hat{Y}\|^2$ reaches its minimum.
4.2. Establishment of Inverse Model
The IMC controller is the inverse model of the nonlinear process, which is equivalent to finding the inverse of the system at the given frequencies. Therefore the invertibility of the model must be considered in advance.
Theorem 1.
For the nonlinear system above, if $f(y(k),\ldots,y(k-n_a),u(k),\ldots,u(k-n_b))$ is monotone in $u(k)$, the system is invertible. Equivalently, if for any two inputs $u_1(k)\neq u_2(k)$ one has $f(y(k),\ldots,y(k-n_a),u_1(k),\ldots,u_1(k-n_b))\neq f(y(k),\ldots,y(k-n_a),u_2(k),\ldots,u_2(k-n_b))$, the system is invertible [7].
Assume a SISO nonlinear process described by (13); the inverse model is established as
$$u(k)=g\bigl(y(k+1),y(k),\ldots,y(k-n_a),u(k-1),\ldots,u(k-n_b)\bigr),\qquad(14)$$
where $g(\cdot)$ is the nonlinear function of the inverse model. According to Theorem 1, the process in (14) is monotone and invertible. Training the QL-ELM establishes the inverse model of the system, with input vector $x(k)=[y(k+1),y(k),\ldots,y(k-n_a),u(k-1),\ldots,u(k-n_b)]^T$ and output $u(k)$. Because the value of $y(k+1)$ is unknown, the filter output $r(k)$ replaces it in the above formula. The training process of the inverse model is the same as that of the internal model.
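Arranging the inverse-model training pairs of (14) can be sketched as follows. The helper function and its data are hypothetical illustrations of the regressor layout, not part of the paper's implementation.

```python
import numpy as np

def inverse_model_dataset(y, u, na=1, nb=1):
    """Build (x(k), u(k)) pairs for inverse-model training, with
    x(k) = [y(k+1), y(k), ..., y(k-na), u(k-1), ..., u(k-nb)]."""
    X, U = [], []
    for k in range(max(na, nb), len(y) - 1):
        past_y = [y[k + 1]] + [y[k - i] for i in range(na + 1)]  # y(k+1) down to y(k-na)
        past_u = [u[k - j] for j in range(1, nb + 1)]            # u(k-1) down to u(k-nb)
        X.append(past_y + past_u)
        U.append(u[k])                                           # target: the control input
    return np.array(X), np.array(U)

# tiny example with recognizable values
y = np.arange(10, dtype=float)          # y(k) = k
u = np.arange(100, 110, dtype=float)    # u(k) = 100 + k
X, U = inverse_model_dataset(y, u, na=1, nb=1)
```

During control, the first entry y(k+1) of each regressor is replaced by the filtered reference r(k), exactly as described above.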
5. Numerical Results
A typical representative of nonlinear chemical processes is the CSTR system. The system has multiple equilibrium points (stable and unstable ones), and its dynamical behavior exhibits complex features depending on the system parameters. In this study the dynamics are described by the following differential equations [32, 33]:
$$\dot{x}_1=-x_1+D_a(1-x_1)\exp\!\left(\frac{x_2}{1+x_2/\varphi}\right)+d_1,$$
$$\dot{x}_2=-(1+\delta)x_2+B\,D_a(1-x_1)\exp\!\left(\frac{x_2}{1+x_2/\varphi}\right)+\delta u+d_2,\qquad(15)$$
$$y=x_1,$$
where $x_1$ and $x_2$ represent the dimensionless reactant concentration and reactor temperature, respectively, and $d_1$ and $d_2$ denote system disturbances. The control action $u$ is the cooling-jacket temperature. The model parameters are listed in Table 1. The model has three equilibrium points: $(x_1,x_2)=(0.144,0.886)$ and $(x_1,x_2)=(0.765,4.705)$ are stable, and $(x_1,x_2)=(0.445,2.74)$ is unstable. The reactant concentration $x_1$ is chosen as the controlled variable, and the resulting control problem is nonlinear. Training of the models therefore has to be restricted to a region where the inverse mapping is unique, to ensure invertibility.
Physical parameters for the CSTR model.

| Parameter | Nomenclature              | Value |
|-----------|---------------------------|-------|
| B         | Heat of reaction          | 8     |
| δ         | Heat-transfer coefficient | 0.3   |
| Da        | Damköhler number          | 0.072 |
| φ         | Activation energy         | 20    |
The initial condition is set as $[x_1(0),x_2(0)]=[-0.1,0.1]$, and the excitation input is $u(t)=2\,\mathrm{randn}(1)$. The fourth-order Runge-Kutta algorithm is used to integrate the model with step size $\Delta t=0.1$. The number of hidden nodes $M$ is 80. In the simulation, modeling error caused by a lack of training samples leads to a residual in the control system; therefore, some steady-state data around the stable point are added as training samples. Results of the QL-ELM model are compared with those of the ELM, SVM, and QL-SVM methods. In detail, the number of hidden nodes in the QL-ELM is reduced to 40. The optimal parameters of the SVM and QL-SVM with RBF kernels are selected by cross-validation. For the SVM method, the parameters of the internal model are set as penalty factor $C=500$, RBF-kernel width $\delta=0.01$, and SVR loss-function epsilon $e=0.001$; for the inverse model they are $C=10000$, $\delta=0.01$, and $e=7$. For the QL-SVM method, the parameters of the internal and inverse models are $C=100$, $\delta=0.001$, $e=0.001$ and $C=3000$, $\delta=0.001$, $e=2$, respectively.
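The plant simulation in (15) can be reproduced with a plain RK4 integrator. This is a sketch under the stated parameters; the zero-order-hold input, the constant u = 0, and the step count are illustrative. With u = 0 and no disturbances, a trajectory started near the origin settles at the low-conversion stable point (0.144, 0.886).

```python
import numpy as np

# CSTR parameters from Table 1
B_, delta, Da, phi = 8.0, 0.3, 0.072, 20.0

def cstr_rhs(x, u, d1=0.0, d2=0.0):
    """Right-hand side of the dimensionless CSTR equations (15)."""
    x1, x2 = x
    r = Da * (1.0 - x1) * np.exp(x2 / (1.0 + x2 / phi))   # reaction term
    dx1 = -x1 + r + d1
    dx2 = -(1.0 + delta) * x2 + B_ * r + delta * u + d2
    return np.array([dx1, dx2])

def rk4_step(x, u, dt=0.1):
    """One fourth-order Runge-Kutta step with a zero-order-hold input."""
    k1 = cstr_rhs(x, u)
    k2 = cstr_rhs(x + 0.5 * dt * k1, u)
    k3 = cstr_rhs(x + 0.5 * dt * k2, u)
    k4 = cstr_rhs(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# simulate 100 time units from the paper's initial condition with u = 0
x = np.array([-0.1, 0.1])
for _ in range(1000):
    x = rk4_step(x, 0.0)
```

Substituting (0.144, 0.886) into the right-hand side gives zero to three decimal places, which also confirms the (1 − x1) form of the reaction term in both state equations.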
For the internal model, the order is set as $n_a=n_b=1$; 2000 groups of samples are chosen as training data and 500 groups as test data.
For the inverse model, $x(k)=[x_1(k-1),x_2(k-1),x_1(k),x_2(k),x_1(k+1),u(k-1)]$ and $y(k)=u(k)$; 2000 groups of data are used for inverse-model training to obtain the controller, and the remaining 500 groups are used for the inverse-model test.
The modeling performance is measured by the root mean square error (RMSE):
$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{k=1}^{N}\bigl(y(k)-\hat{y}(k)\bigr)^{2}}.\qquad(16)$$
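Equation (16) in code, as a small self-contained check (the sample values are illustrative):

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean square error, eq. (16)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(np.mean((y - y_hat) ** 2))

val = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3) ≈ 1.1547
```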
Identification results of the internal and inverse models with the QL-ELM, together with the corresponding errors, are shown in Figures 3 and 7. Results and the corresponding modeling errors of the comparative ELM method are shown in Figures 4 and 8, respectively. Validation results of the SVM- and QL-SVM-based internal models are shown in Figure 5(a), with their errors in Figures 5(b) and 5(c); an enlarged detail of Figure 5(a) is shown in Figure 6. Similarly, validation results of the SVM- and QL-SVM-based inverse models are shown in Figure 9(a), with their errors in Figures 9(b) and 9(c); an enlarged detail of Figure 9(a) is shown in Figure 10. For both the internal and inverse models, the errors of the proposed method are clearly smaller than those of the other methods. The quasilinear approach achieves better generalization than the corresponding plain method, and the ELM reduces the regression error; combining both advantages, the proposed QL-ELM provides high precision and better generalization. To illustrate the effectiveness of the proposed method, the performance indicators of the different identification methods are listed in Table 2. Regarding the timing comparison, it is worth mentioning that for the SVM the reported time is a single run with the optimal parameters, not counting parameter tuning. It is clear that the QL-ELM provides higher precision and lower time consumption than the other methods.
Comparison of the identification error and time between different methods.

|                | QL-ELM RMSE | QL-ELM time | ELM RMSE | ELM time | SVM RMSE | SVM time | QL-SVM RMSE | QL-SVM time |
|----------------|-------------|-------------|----------|----------|----------|----------|-------------|-------------|
| Internal model | 0.0017      | 1.8929 s    | 0.0026   | 2.1998 s | 0.0025   | 4.2639 s | 0.0019      | —           |
| Inverse model  | 0.0122      | 1.8108 s    | 0.0582   | 2.0376 s | 1.2303   | 6.8584 s | 0.2659      | —           |
Identification results of internal model using QL-ELM.
QL-ELM method output and the actual output of internal model
Error of QL-ELM internal model
Identification results of internal model using ELM.
ELM method output and the actual output of internal model
Error of ELM internal model
Comparison results of internal model using SVM and QL-SVM.
Identification results of internal model using SVM and QL-SVM
Test error of internal model using SVM
Test error of internal model using QL-SVM
The details of Figure 5(a).
Identification results of inverse model using QL-ELM.
QL-ELM method output and the actual output of inverse model
Error of QL-ELM inverse model
Identification results of inverse model using ELM.
ELM method output and the actual output of inverse model
Error of ELM inverse model
Comparison results of inverse model using SVM and QL-SVM.
Identification results of inverse model using SVM and QL-SVM
Test error of inverse model using SVM
Test error of inverse model using QL-SVM
The details of Figure 9(a).
5.1. Control under Set-Point Change
The set-point is given as follows: $r=0.1088$ for $k<200$, $r=0.1200$ for $200\le k<700$, $r=0.1334$ for $700\le k<1700$, and $r=0.1200$ for $1700\le k<2500$. The initial condition of the nonlinear CSTR process is $[x_1(0),x_2(0)]=[0,0]$. The first-order filter is $F=(1-\beta)/(1-\beta z^{-1})$ with $\beta=0.9$. Simulation results of IMC based on QL-ELM, ELM, and SVM are displayed in Figure 11. The proposed method clearly has a shorter settling time, smaller overshoot, and smaller steady-state error, so it is superior to the ELM and SVM methods in this control case.
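The filter F = (1 − β)/(1 − βz⁻¹) used here is just a first-order lag with unit DC gain; its difference equation is r_f(k) = β·r_f(k−1) + (1−β)·r(k). A sketch of the recursion on a set-point step (the step time and length are illustrative, β = 0.9 as in the paper):

```python
# First-order IMC filter F = (1 - beta) / (1 - beta z^-1):
# r_f(k) = beta * r_f(k-1) + (1 - beta) * r(k)
beta = 0.9
r_f = 0.0
trace = []
for k in range(200):
    r = 0.1088 if k < 100 else 0.1200   # illustrative set-point step
    r_f = beta * r_f + (1.0 - beta) * r # filtered reference fed to the inverse model
    trace.append(r_f)
```

Because the filter's DC gain is one, the filtered reference converges to each set-point value, while the smoothed transition improves robustness against model mismatch.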
Simulation results of set-point control.
Control output using IMC based on QL-ELM, ELM, and SVM
Error of output using IMC based on QL-ELM, ELM, and SVM
5.2. Tracking Control for a Sinusoidal Wave Input
In this case, the desired trajectory is the sinusoidal function $r(k)=0.01\sin(2\pi k/1000)+0.13$. The first-order filter is $F=(1-\beta)/(1-\beta z^{-1})$ with $\beta=0.9$, and all other settings of the CSTR system are the same as in Section 5.1. Comparisons of the tracking output and output error for the proposed method and the ELM method are displayed in Figure 12. The system output tracks the desired sinusoidal function closely, so the proposed method gives better tracking performance.
Simulation results of tracking control.
Control output using IMC based on QL-ELM, ELM
Error of output using IMC based on QL-ELM, ELM
Although the QL-ELM model shows better approximation performance in all the above experiments, the proposed method still lacks versatility for modeling extremely strongly nonlinear processes; this limitation stems from its essentially linear structure.
6. Conclusion
A nonlinear control strategy using QL-ELM-based IMC is proposed and tested on concentration control of a SISO CSTR process. The internal model and the inverse-model controller are learned with the QL-ELM regression algorithm to improve modeling accuracy. The proposed QL-ELM offers great flexibility and simplicity via a quasilinear ARX model with ELM coefficients. It can not only approximate continuous nonlinear functions but also disperse the nonlinear complexity, requiring fewer hidden neurons in the ELM. Owing to the high accuracy of QL-ELM modeling and the additional training samples around the stable points, the steady-state error of the IMC system is reduced. Simulation results demonstrate the superiority of the proposed method in tracking ability and stability for nonlinear process control.
Conflict of Interests
The authors declare that they have no conflict of interests regarding the publication of this paper.
Acknowledgment
This research was supported by the National Natural Science Foundation of China (Grant nos. 61273132, 61104098, and 61473024).
References
[1] H. Deng and H.-X. Li, "A novel neural approximate inverse control for unknown nonlinear discrete dynamical systems."
[2] H. Yu, H. R. Karimi, and X. Zhu, "Research of smart car's speed control based on the internal model control."
[3] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues."
[4] Y.-N. Wang and X.-F. Yuan, "SVM approximate-based internal model control strategy."
[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues."
[6] H.-X. Li and H. Deng, "An approximate internal model-based neural control for unknown nonlinear discrete processes."
[7] Y. Huang and D. Wu, "Nonlinear internal model control with inverse model based on extreme learning machine," in Proceedings of the International Conference on Electric Information and Control Engineering (ICEICE '11), Wuhan, China, April 2011, pp. 2391–2395. doi: 10.1109/ICEICE.2011.5778265.
[8] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications."
[9] G.-B. Huang and L. Chen, "Convex incremental extreme learning machine."
[10] G.-B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine."
[11] Y. Song and J. Zhang, "Automatic recognition of epileptic EEG patterns via Extreme Learning Machine and multiresolution feature extraction."
[12] Y. Chen, Z. Zhao, S. Wang, and Z. Chen, "Extreme learning machine-based device displacement free activity recognition model."
[13] J. Cao, T. Chen, and J. Fan, "Fast online learning algorithm for landmark recognition based on BoW framework," in Proceedings of the 9th IEEE Conference on Industrial Electronics and Applications, 2014, pp. 1163–1168.
[14] P. K. Wong, Z. Yang, C. M. Vong, and J. Zhong, "Real-time fault diagnosis for gas turbine generator systems using extreme learning machine."
[15] G. Wang, Y. Zhao, and D. Wang, "A protein secondary structure prediction framework based on the Extreme Learning Machine."
[16] R. K. Pearson and B. A. Ogunnaike.
[17] K. T. Iskakov and Z. O. Oralbekova, "Resolving power of algorithm for solving the coefficient inverse problem for the geoelectric equation."
[18] S. I. Biagiola and J. L. Figueroa, "Identification of uncertain MIMO Wiener and Hammerstein models."
[19] Y. Tang, Z. Li, and X. Guan, "Identification of nonlinear system using extreme learning machine based Hammerstein model."
[20] A. Wills, T. B. Schön, L. Ljung, and B. Ninness, "Identification of Hammerstein-Wiener models."
[21] D. Wang and F. Ding, "Extended stochastic gradient identification algorithms for Hammerstein-Wiener ARMAX systems."
[22] J. G. Smith, S. Kamat, and K. P. Madhavan, "Modeling of pH process using wavenet based Hammerstein model."
[23] L. Zhou, X. Li, and F. Pan, "Gradient-based iterative identification for MISO Wiener nonlinear systems: application to a glutamate fermentation process."
[24] S. McLoone, M. D. Brown, G. Irwin, and G. Lightbody, "A hybrid linear/nonlinear training algorithm for feedforward neural networks."
[25] H. Peng, T. Ozaki, V. Haggan-Ozaki, and Y. Toyoda, "A parameter optimization method for radial basis function type models."
[26] H. Peng, T. Ozaki, Y. Toyoda, H. Shioya, K. Nakano, V. Haggan-Ozaki, and M. Mori, "RBF-ARX model-based nonlinear system modeling and predictive control with application to a NOx decomposition process."
[27] B. Hu, Z. Zhao, and J. Liang, "Multi-loop nonlinear internal model controller design under nonlinear dynamic PLS framework using ARX-neural network model."
[28] J. Hu, K. Kumamaru, and K. Inoue, "A hybrid quasi-ARMAX modeling scheme for identification of nonlinear systems."
[29] Y. Cheng and J. Hu, "Nonlinear system identification based on SVR with quasi-linear kernel," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '12), June 2012, pp. 1–8. doi: 10.1109/IJCNN.2012.6252694.
[30] G. Prasad, E. Swidenbank, and B. W. Hogg, "A local model networks based multivariable long-range predictive control strategy for thermal power plants."
[31] Y. Zhang and Z. Zheng, "Based on inverse system of internal model control," in Proceedings of the International Conference on Computer Application and System Modeling (ICCASM '10), Taiyuan, China, October 2010, pp. V14-19–V14-21. doi: 10.1109/ICCASM.2010.5622431.
[32] C.-T. Chen and S.-T. Peng, "Intelligent process control using neural fuzzy techniques."
[33] C.-T. Chen and S.-T. Peng, "Learning control of process systems with hard input constraints."