Nonlinear process models are widely used in various applications. In the absence of fundamental models, one usually relies on empirical models, which are estimated from measurements of the process variables. Unfortunately, measured data are usually corrupted with measurement noise that degrades the accuracy of the estimated models. Multiscale wavelet-based representation of data has been shown to be a powerful data analysis and feature extraction tool. In this paper, these characteristics of multiscale representation are utilized to improve the estimation accuracy of the linear-in-the-parameters nonlinear model by developing a multiscale nonlinear (MSNL) modeling algorithm. The main idea behind this algorithm is to decompose the data at multiple scales, construct nonlinear models at the different scales, and then select, among all scales, the model that best describes the process. The main advantage of the developed algorithm is that it integrates modeling and feature extraction to improve the robustness of the estimated model to the presence of measurement noise in the data. This advantage of MSNL modeling is demonstrated using a nonlinear reactor model.
1. Introduction
Process models are a core element in many process operations, such as process control and optimization [1, 2], and the accuracy of these models has a direct impact on the quality of these operations and ultimately on the overall performance of the process. Therefore, improving the accuracy of the models used in these operations is always sought. Since fundamental models are not always available, in many cases one relies on deriving empirical or semiempirical models from measurements of the process variables. However, data-driven approaches for model estimation are associated with many challenges, which include defining the model structure and accounting for the presence of measurement noise in the data. The objective of this work is to improve the prediction accuracy of the well-known class of nonlinear, but linear-in-the-parameters, process models using multiscale representation to account for the presence of measurement noise in the data.
The presence of measurement noise, even in small amounts, can strongly affect the estimated model's prediction accuracy, so such noise needs to be filtered out to improve the model's prediction. However, modeling prefiltered measured data does not usually provide satisfactory performance [3], because applying data filtering without taking the input-output relationship into account may remove features from the data that are important for the model. Therefore, filtering and modeling need to be integrated for satisfactory model estimation.
Unfortunately, measured data usually have a multiscale character, which means that they contain features and noise that have varying contributions over both time and frequency [4]. For example, an abrupt change in the data spans a wide range in the frequency domain and a small range in the time domain, while a slow change spans a wide range in the time domain and a small range in the frequency domain. Filtering such data using the conventional low pass filters usually does not result in a good noise-feature separation because these filtering techniques classify noise as high-frequency features and filter the data by removing features with frequency higher than a defined frequency threshold. Thus, modeling multiscale data requires developing multiscale modeling techniques that account for this multiscale nature of the data.
Many investigators have used multiscale techniques to improve the accuracy of estimated empirical models [3, 5–9]. For example, the authors in [5] showed how to use wavelet representation to design wavelet prefilters for process modeling purposes. In [3], the author discussed some of the advantages of using multiscale representation in empirical modeling, and in [6], he enhanced the noise removal ability of the Principal Component Analysis (PCA) model by constructing multiscale PCA models, which he also used in process monitoring. Also, the authors in [7–9] used multiscale representation to reduce collinearity and shrink the large variations in the Finite Impulse Response (FIR) model parameters. Furthermore, in [10], the authors used wavelets as modulating function for control-related system identification. Finally, the authors in [11, 12] used wavelet-based representation to enhance the accuracy and parsimony of the Autoregressive with exogenous variable (ARX) model and the Takagi-Sugeno fuzzy model.
In this work, multiscale representation of data is utilized to improve the prediction accuracy of nonlinear models by developing a multiscale nonlinear (MSNL) process modeling algorithm that accounts for the presence of noise in the data and improves the model’s prediction accuracy. The developed MSNL modeling algorithm integrates modeling and data filtering by constructing multiple nonlinear models at multiple scales using the scaled signal approximations of the input and output data and then selecting, among all scales, the model that provides the optimum prediction and maximum noise-feature separation.
The rest of this paper is organized as follows. In Section 2, the problem statement and the formulation and estimation of the linear-in-the-parameters nonlinear model are introduced, followed by a description of the wavelet-based multiscale representation of data in Section 3. Then, in Section 4, the formulation, estimation, and algorithm for MSNL modeling are described. In Section 5, the performance of the developed MSNL modeling algorithm is illustrated and compared with that of the time-domain method. Finally, the paper is concluded with a few remarks in Section 6.
2. Problem Statement
In this work, we address the problem of empirically estimating (from measurements of the input and output variables) linear-in-the-parameters nonlinear models that are less affected by the presence of measurement noise in the data. Given measurements of the input-output data, that is, $\{u(k)\}_{k=1}^{n}$ and $\{y(k)\}_{k=1}^{n}$, where the output is assumed to be contaminated with additive zero-mean Gaussian noise (i.e., $y=\tilde{y}+\epsilon$, where $\epsilon\sim N(0,\sigma_\epsilon^2)$ and the tilde denotes the noise-free variable), it is desired to construct a linear-in-the-parameters nonlinear model of the form
\[ y(k+1)=\sum_{i=0}^{m}\beta_i\,f_i\bigl(z_i(k),\theta_i\bigr), \tag{1} \]
where $y(k+1)$ is the process output at time step $(k+1)$; $z_i(k)=\{y(k),\ldots,y(k-p_i),u(k),\ldots,u(k-q_i)\}$; $f_i(\cdot)$ is the $i$th nonlinear basis function ($i\in[1,m]$), where these basis functions are assumed to be known; and $\beta_i$ is the model parameter corresponding to the $i$th basis function, which is a function of the parameter vector $\theta_i$. Also, $p_i$ and $q_i$ are the numbers of lagged outputs and inputs used in the model, respectively.
2.1. Nonlinear Model Estimation
The nonlinear model shown in (1) can also be represented in matrix form as
\[ Y=X\beta, \tag{2} \]
where
\[ Y=\begin{bmatrix} y(k+1) & y(k) & y(k-1) & \cdots \end{bmatrix}^T, \]
\[ X=\begin{bmatrix}
f_1(z_1(k),\theta_1) & f_2(z_2(k),\theta_2) & \cdots & f_m(z_m(k),\theta_m) \\
f_1(z_1(k-1),\theta_1) & f_2(z_2(k-1),\theta_2) & \cdots & f_m(z_m(k-1),\theta_m) \\
f_1(z_1(k-2),\theta_1) & f_2(z_2(k-2),\theta_2) & \cdots & f_m(z_m(k-2),\theta_m) \\
\vdots & \vdots & & \vdots
\end{bmatrix}, \tag{3} \]
and the parameter vector $\beta=[\beta_1\;\beta_2\;\cdots\;\beta_m]^T$ can be estimated using ordinary least squares (OLS) regression as follows:
\[ \hat{\beta}_{\mathrm{OLS}}=(X^TX)^{-1}X^TY. \tag{4} \]
Note that since OLS minimizes the mean square error of the output prediction when estimating the model parameters, it implicitly assumes that all variables in the information matrix, $X$, are noise-free. In the nonlinear model given in (2), however, past outputs are also part of the information matrix. Therefore, the accuracy of the model prediction can be affected by the presence of measurement noise in the data, especially when the noise level is large, because OLS then provides biased estimates of the parameters, which degrades the model's prediction. More details about this bias and its effect on the model's prediction can be found in [13]. In this paper, an alternative modeling approach is developed to reduce the effect of measurement noise on the prediction accuracy of the estimated model. This approach utilizes multiscale wavelet-based representation of data, which is introduced next.
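As a concrete illustration of the estimation procedure above, the following sketch builds the information matrix $X$ from a set of basis functions and computes the OLS estimate of (4). The process dynamics, basis functions, and noise level are hypothetical choices for illustration, not taken from the paper's example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
u = rng.uniform(0.0, 2.0, n)                     # input sequence
y_clean = np.zeros(n)
for k in range(n - 1):                           # hypothetical "true" process dynamics
    y_clean[k + 1] = 0.8 * y_clean[k] + 0.1 * u[k] - 0.05 * y_clean[k] ** 2
y = y_clean + rng.normal(0.0, 0.01, n)           # measured output with additive noise

# Information matrix X: one column per basis function f_i(z_i(k))
basis = [lambda yk, uk: yk,                      # f_1 = y(k)
         lambda yk, uk: uk,                      # f_2 = u(k)
         lambda yk, uk: yk ** 2]                 # f_3 = y(k)^2
X = np.column_stack([f(y[:-1], u[:-1]) for f in basis])
Y = y[1:]                                        # one-step-ahead targets

# beta_hat = (X^T X)^{-1} X^T Y, computed via lstsq for numerical stability
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

For the low noise level used here, `beta_hat` recovers the generating parameters closely; as the noise level grows, the noisy regressors bias the estimates, which is the effect the multiscale approach below aims to mitigate.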
3. Multiscale Representation of Data
A proper way of analyzing real data requires their representation at multiple scales. This can be achieved by expressing the data as a weighted sum of orthonormal basis functions, which are defined in both time and frequency, such as wavelets. Wavelets are a computationally efficient family of multiscale basis functions. A signal can be represented at multiple resolutions by decomposing the signal on a family of wavelets and scaling functions. The signals in Figures 1(b), 1(d), and 1(f) are at increasingly coarse scales compared to the original signal in Figure 1(a). These scaled signals are determined by projecting the original signal on a set of orthonormal scaling functions of the form
\[ \phi_{jk}(t)=2^{-j}\phi\bigl(2^{-j}t-k\bigr) \tag{5} \]
or equivalently by filtering the signal using a low-pass filter of length $r$, $h=[h_1\;h_2\;\cdots\;h_r]$, derived from the scaling functions [14]. On the other hand, the signals in Figures 1(c), 1(e), and 1(g), which are called the detail signals, capture the differences between any scaled signal and the scaled signal at the next finer scale.
A schematic diagram of data representation at multiple scales.
These detail signals are determined by projecting the signal on a set of wavelet basis functions of the form
\[ \psi_{jk}(t)=2^{-j}\psi\bigl(2^{-j}t-k\bigr) \tag{6} \]
or equivalently by filtering the scaled signal at the finer scale using a high-pass filter of length $r$, $g=[g_1\;g_2\;\cdots\;g_r]$, derived from the wavelet basis functions. Therefore, the original signal can be represented as the sum of all detail signals at all scales plus the scaled signal at the coarsest scale:
\[ x(t)=\sum_{k=1}^{n2^{-J}}a_{Jk}\,\phi_{Jk}(t)+\sum_{j=1}^{J}\sum_{k=1}^{n2^{-j}}d_{jk}\,\psi_{jk}(t), \tag{7} \]
where $j$, $k$, $J$, and $n$ are the dilation parameter, translation parameter, maximum number of scales (or decomposition depth), and the length of the original signal, respectively [15, 16].
Fast wavelet transform algorithms of $O(n)$ complexity for a discrete signal of dyadic length (i.e., of length $2^j$, where $j$ is a positive integer) have been developed [14]. For example, the wavelet and scaling function coefficients at a particular scale $(j)$, $d_j$ and $a_j$, can be computed compactly by multiplying the scaling coefficient vector at the finer scale, $a_{j-1}$, by the matrices $G_j$ and $H_j$, respectively, that is,
\[ a_j=H_j\,a_{j-1}, \qquad d_j=G_j\,a_{j-1}, \tag{8} \]
where
\[ H_j=\begin{bmatrix}
h_1 & \cdots & h_r & 0 & \cdots & 0 \\
0 & h_1 & \cdots & h_r & \cdots & 0 \\
\vdots & & \ddots & & \ddots & \vdots \\
0 & \cdots & 0 & h_1 & \cdots & h_r
\end{bmatrix}_{n2^{-j}\times n2^{-j+1}}, \tag{9} \]
\[ G_j=\begin{bmatrix}
g_1 & \cdots & g_r & 0 & \cdots & 0 \\
0 & g_1 & \cdots & g_r & \cdots & 0 \\
\vdots & & \ddots & & \ddots & \vdots \\
0 & \cdots & 0 & g_1 & \cdots & g_r
\end{bmatrix}_{n2^{-j}\times n2^{-j+1}}. \tag{10} \]
Note that the length of the scaled and detail signals decreases dyadically at coarser resolutions (higher $j$). In other words, the length of the scaled signal at scale $(j)$ is half the length of the scaled signal at the finer scale $(j-1)$. This is due to the downsampling used in the discrete wavelet transform. As an example to illustrate the multiscale decomposition procedure and to introduce some terminology, consider the following discrete signal, $Y_0$, of length $n$ in the time domain (i.e., $j=0$):
\[ Y_0=\begin{bmatrix} y_0(1) & y_0(2) & \cdots & y_0(k) & \cdots & y_0(n) \end{bmatrix}^T. \tag{11} \]
The scaled signal approximation of $Y_0$ at scale $(j)$, which can be written as
\[ Y_j=\begin{bmatrix} y_j(1) & \cdots & y_j(k) & \cdots & y_j(n2^{-j}) \end{bmatrix}^T, \tag{12} \]
can be computed as follows:
\[ Y_j=H_jY_{j-1}=H_jH_{j-1}\cdots H_1Y_0. \tag{13} \]
Note that this decomposition algorithm is batch, that is, it requires the availability of the entire data set beforehand. An online wavelet decomposition algorithm has also been developed and used in data filtering [17].
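The scaled-signal recursion above, $Y_j = H_j Y_{j-1} = H_j H_{j-1}\cdots H_1 Y_0$, can be sketched as follows for the Haar filter, whose low-pass filter is $h=[1/\sqrt{2},\,1/\sqrt{2}]$ (i.e., $r=2$). The filter choice is an assumption for illustration; longer filters (e.g., Daubechies) would fill the same banded structure.

```python
import numpy as np

def haar_H(rows):
    """Banded matrix H_j of size rows x 2*rows: each row holds the Haar
    low-pass filter h, shifted by two columns per row (dyadic downsampling)."""
    h = np.array([1.0, 1.0]) / np.sqrt(2.0)
    H = np.zeros((rows, 2 * rows))
    for k in range(rows):
        H[k, 2 * k:2 * k + 2] = h
    return H

def scaled_signal(Y0, j):
    """Scaled signal approximation Y_j = H_j ... H_1 Y_0 (len(Y0) must be dyadic)."""
    Y = np.asarray(Y0, dtype=float)
    for _ in range(j):
        Y = haar_H(len(Y) // 2) @ Y      # length halves at each coarser scale
    return Y

Y0 = [2.0, 4.0, 6.0, 8.0]
Y1 = scaled_signal(Y0, 1)                # [6/sqrt(2), 14/sqrt(2)]
Y2 = scaled_signal(Y0, 2)                # [10.0]
```

Note how the length halves at each scale, matching the dyadic downsampling described above.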
4. Multiscale Nonlinear Process Modeling
In this section, the feature extraction abilities of multiscale representation of data are utilized to construct multiscale nonlinear models that are less affected by the presence of noise in the data. The main idea is to decompose the input-output data at multiple scales and construct a nonlinear model at each scale using the scaled signal approximations of the data. Then, among all scales, select the optimum nonlinear model that provides the best prediction and noise-feature separation.
The ability of multiscale representation to extract important features from data can be verified by computing the signal-to-noise ratio (SNR) of the scaled signal at multiple scales. Theoretically, the SNR at any scale can be computed as follows:
\[ \mathrm{SNR}(j)=\frac{\operatorname{var}(\tilde{y}_j)}{\operatorname{var}(\tilde{y}_j-y_j)}, \tag{14} \]
where ỹj is the noise-free scaled signal representation of the data at scale (j). It can be empirically illustrated that the SNR of the scaled signals peaks at some intermediate scale, which can be explained as follows. At very fine scales, high-frequency noise gets filtered out, which decreases the noise content and increases the SNR. However, at very coarse scales, important features start getting removed, which decreases the signal content and decreases the SNR. Therefore, there is an intermediate scale at which the SNR peaks. This observation will be used later to estimate the optimum modeling scale. Another characteristic of multiscale representation is that correlated noise gets decorrelated at multiple scales [4]. This gives another advantage to multiscale models over conventional ones.
4.1. MSNL Model Representation
Having computed the scaled signal approximations of the input-output data at multiple scales as described in Section 3, nonlinear models of the form
\[ y_j(k+1)=\sum_{i=0}^{m}\beta_{ij}\,f_{ij}\bigl(z_{ij}(k),\theta_{ij}\bigr) \tag{15} \]
can be constructed at each scale $(j)$ using these scaled signals of the input and output data, $u_j$ and $y_j$. Note here that the basis functions used at scale $(j)$, that is, $f_{ij}(z_{ij}(k),\theta_{ij})$, are not the same basis functions used in the time domain, $f_i(z_i(k),\theta_i)$, because of the different transformations used to compute the scaled signals at different scales. Note, however, that the form of the model's basis functions at any scale can be defined in a fashion similar to that used to dilate the wavelet and scaling functions in (5) and (6). For example, for the basis function $f_i(z_i(k),\theta_i)$ in the time domain, the equivalent basis function at any scale $(j)$ is
\[ f_{ij}\bigl(z_{ij}(k),\theta_{ij}\bigr)=2^{-j}f_i\bigl(2^{-j}z_i(k),\theta_{ij}\bigr). \tag{16} \]
This effect of basis function dilation at multiple scales is illustrated in Figure 2, which shows that for a given time-domain ($j=0$) nonlinear basis function, $f_{i0}(\cdot)$, the dilated basis function, $f_{ij}(z_{ij}(k),\theta_{ij})$, is stretched at coarser scales (larger $j$) to account for the dyadic downsampling used in multiscale representation.
Dilation of basis functions at multiple scales.
Now, the nonlinear model at scale (j) can be written by combining (15) and (16) as follows:
\[ y_j(k+1)=\sum_{i=0}^{m}\beta_{ij}\,2^{-j}f_i\bigl(2^{-j}z_i(k),\theta_{ij}\bigr). \tag{17} \]
4.2. MSNL Model Estimation
The nonlinear model at scale (j), shown in (17), can also be written in matrix form as
\[ Y_j=X_j\beta_j, \tag{18} \]
where
\[ Y_j=\begin{bmatrix} y_j(k+1) & y_j(k) & y_j(k-1) & \cdots \end{bmatrix}^T, \qquad \beta_j=\begin{bmatrix} \beta_{1j} & \beta_{2j} & \cdots & \beta_{mj} \end{bmatrix}^T, \]
\[ X_j=\begin{bmatrix}
f_{1j}(z_{1j}(k),\theta_{1j}) & f_{2j}(z_{2j}(k),\theta_{2j}) & \cdots & f_{mj}(z_{mj}(k),\theta_{mj}) \\
f_{1j}(z_{1j}(k-1),\theta_{1j}) & f_{2j}(z_{2j}(k-1),\theta_{2j}) & \cdots & f_{mj}(z_{mj}(k-1),\theta_{mj}) \\
f_{1j}(z_{1j}(k-2),\theta_{1j}) & f_{2j}(z_{2j}(k-2),\theta_{2j}) & \cdots & f_{mj}(z_{mj}(k-2),\theta_{mj}) \\
\vdots & \vdots & & \vdots
\end{bmatrix}. \tag{19} \]
Therefore, the model parameters at scale $(j)$ can be estimated using OLS, as shown in (4) for the time-domain model.
4.3. MSNL Modeling Algorithm
Based on the above discussion of nonlinear model representation and estimation at multiple scales, the following algorithm is proposed for multiscale nonlinear (MSNL) identification:
1. Decompose the input-output data at multiple scales using the wavelet filter of choice.
2. At each scale, using the input-output scaled signals:
(a) define the structure of the model's basis functions, given the basis functions in the time domain, using (16);
(b) express the model in matrix form as shown in (18);
(c) estimate the nonlinear model using OLS and predict the output;
(d) compute the output prediction SNR as $\mathrm{SNR}(j)=\operatorname{var}(\hat{y}_j)/\operatorname{var}(\hat{y}_j-y_j)$, where $\hat{y}_j$ is the predicted output at scale $(j)$.
3. Select the optimum model among all scales by choosing the one with the maximum output prediction SNR.
4. For the optimum model, reconstruct the predicted output back to the time domain.
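The steps above can be sketched as follows, using Haar scaled signals and a hypothetical basis set $\{y(k),\,u(k),\,y(k)^2\}$. Since each dilated basis function of (16) is, for this particular basis, a scalar multiple of its time-domain counterpart, the dilation factors are absorbed into the OLS parameters; the final reconstruction of the optimum prediction back to the time domain is omitted for brevity.

```python
import numpy as np

def haar_approx(x, j):
    """Decomposition step: Haar scaled signals via dyadic pairwise means."""
    x = np.asarray(x, dtype=float)
    for _ in range(j):
        x = 0.5 * (x[0::2] + x[1::2])
    return x

def prediction_snr(u, y):
    """At one scale: OLS fit of y(k+1) on {y(k), u(k), y(k)^2} and its
    output prediction SNR = var(y_hat)/var(y_hat - y)."""
    X = np.column_stack([y[:-1], u[:-1], y[:-1] ** 2])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    y_hat = X @ beta
    return np.var(y_hat) / np.var(y_hat - y[1:])

rng = np.random.default_rng(2)
n = 512
u = rng.uniform(0.0, 2.0, n)
y = np.zeros(n)
for k in range(n - 1):                    # hypothetical "true" process
    y[k + 1] = 0.8 * y[k] + 0.1 * u[k] - 0.05 * y[k] ** 2
y += rng.normal(0.0, 0.1, n)              # noisy measurements

snrs = [prediction_snr(haar_approx(u, j), haar_approx(y, j)) for j in range(5)]
j_opt = int(np.argmax(snrs))              # scale with maximum prediction SNR
```

The selected scale `j_opt` depends on the noise level, mirroring the observation below that the optimum scale gets coarser as the noise grows.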
5. Illustrative Example
In this section, the advantages of the MSNL modeling algorithm are illustrated through a simulated example. The model used in this example relates the concentration of the inlet stream to a stirred tank reactor (input) to the exit stream concentration (output). Let the inlet and exit stream concentrations of a species A be $C_{Ai}$ and $C_A$, respectively, and let the flow rate in and out of the reactor be $q$ (see Figure 3). If species A is converted into B in the reactor according to the reaction $2A \rightarrow B$, with a reaction rate $r=k_rC_A^2$, and the reactor volume is constant and equal to $V$, then a mass balance on species A gives
\[ \frac{dC_A}{dt}=\frac{q}{V}C_{Ai}-\frac{q}{V}C_A-k_rC_A^2. \tag{20} \]
Discretizing the model shown in (20) with a sampling time interval of Δt, we obtain the following nonlinear discrete model:
\[ C_A(k+1)=\left(\frac{q\,\Delta t}{V}\right)C_{Ai}(k)+\left(1-\frac{q\,\Delta t}{V}\right)C_A(k)-k_r\,\Delta t\,C_A^2(k), \tag{21} \]
where $C_{Ai}$ and $C_A$ are the model input and output, respectively. Assuming the following parameter values: $q=1$ L/s, $k_r=5\times10^{-5}$ L/(mol$\cdot$s), $V=100$ L, and $\Delta t=1$ s, the above model is used to generate data by applying a PRBS input switching between 0 and 2, and the output, which is assumed to be noise-free, is then contaminated with zero-mean Gaussian noise. Different noise levels (standard deviation $\sigma_\epsilon=0.05$, 0.1, and 0.15, which approximately correspond to output signal-to-noise ratios of 50, 10, and 5) are used to test the robustness of the MSNL modeling algorithm. A 500-sample input-output data set, where $\sigma_\epsilon=0.15$, is shown in Figure 4.
A schematic diagram of a stirred tank reactor.
A 500-sample input-output data set.
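The data generation described above can be sketched as follows. The random hold lengths used to build the PRBS-like input are an assumption, since the PRBS clock period is not specified.

```python
import numpy as np

q, V, kr, dt = 1.0, 100.0, 5e-5, 1.0      # L/s, L, L/(mol.s), s
sigma_eps = 0.15
rng = np.random.default_rng(3)

n = 500
u = np.zeros(n)                            # PRBS-like inlet concentration C_Ai
k = 0
while k < n:
    hold = int(rng.integers(5, 30))        # hold each level 5-29 samples (assumed)
    u[k:k + hold] = rng.choice([0.0, 2.0])
    k += hold

CA = np.zeros(n)                           # exit concentration from the discrete model
for k in range(n - 1):
    CA[k + 1] = (q * dt / V) * u[k] + (1 - q * dt / V) * CA[k] - kr * dt * CA[k] ** 2

y = CA + rng.normal(0.0, sigma_eps, n)     # noisy measured output
```

The noise-free output `CA` is retained alongside the noisy measurement `y`, which is what makes the MSE comparison below possible in this simulated setting.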
The performance of the MSNL algorithm is compared to that of its time-domain counterpart by comparing their prediction mean square errors with respect to the noise-free output, that is,
\[ \mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{y}_i-\tilde{y}_i\bigr)^2, \tag{22} \]
where $\hat{y}$ and $\tilde{y}$ are the predicted and noise-free outputs, respectively. Note that such a comparison is possible in this simulated example because the noise-free output is known. Also, in this example, the Haar wavelet and scaling functions are used in the multiscale representation of the data; smoother wavelet filters (e.g., Daubechies) may provide better performance for smoother data.
To make statistically valid conclusions about the performances of the various modeling methods, a Monte Carlo simulation of 1000 realizations is performed, and the results are presented in Tables 1 and 2. Table 1 shows that MSNL models achieve a noticeable improvement over those estimated using the raw data (in the time domain) and that this improvement increases for larger noise contents. This improvement can also be seen visually from Figure 5, which shows the advantage of constructing nonlinear models at multiple scales.
Table 1: Comparison between the prediction mean square errors of the multiscale and time-domain modeling methods.

Modeling method    Prediction MSE (×10³)
                   σε=0.05   σε=0.1   σε=0.15
NL modeling          2.3      10.8     20.5
MSNL modeling        1.2       3.9      6.6
Table 2: Comparison between the prediction mean square errors at multiple scales (in parentheses: the percentage of realizations at which each scale is selected as optimum).

Scale    Prediction MSE (×10³)
         σε=0.05    σε=0.1     σε=0.15
j = 0     2.3 (0)   10.8 (0)   20.5 (0)
j = 1     1.3 (24)   5.9 (1)   11.2 (0)
j = 2     1.2 (76)   3.6 (61)   6.5 (0)
j = 3     3.6 (0)    4.7 (38)   6.3 (79)
j = 4     8.6 (0)    9.2 (0)   10.0 (21)
j = 5    17.5 (0)   17.8 (0)   18.2 (0)
MSNL modeling performance at multiple scales, where σε=0.15.
Figure 5 shows how the models' accuracy changes across scales. Prediction accuracy improves at coarser scales, but only up to a certain scale, beyond which the quality of the estimated model deteriorates: the accuracy improves from the time domain up to scale 3, but starting at scale 4 the model quality degrades. This can also be noted from Table 2, which reports the MSE of the estimated models at different scales and shows that there is an intermediate scale at which MSNL modeling performs best. This is because, at very coarse scales, features in the data that are important to the model get eliminated, which degrades the model's quality; hence the importance of selecting the optimum modeling scale. Table 2 also presents (in parentheses) the percentage of realizations at which each scale is selected as optimum using the SNR criterion and shows that the optimum scale gets coarser (higher $j$) at higher noise levels. This makes sense because higher noise levels require more filtering for good noise-feature separation. Also, note that a multiscale model estimated at a particular scale $(j)$ updates its prediction every $2^j$ samples, because of the dyadic upsampling performed during the reconstruction of the model prediction; that is, one sample of the prediction at scale $(j)$ corresponds to $2^j$ samples in the time domain.
6. Conclusions
In this paper, the noise-feature separation capabilities of multiscale representation of data are exploited to improve the estimation and prediction accuracy of the linear-in-the-parameters nonlinear model, by presenting a multiscale nonlinear (MSNL) modeling algorithm with enhanced robustness to measurement noise in the data. The MSNL modeling algorithm integrates modeling and filtering by decomposing the input-output data and using the scaled signal approximations to construct different nonlinear models at different scales. Then, among all scales, the model with the largest output prediction SNR is selected as the optimum MSNL model. Finally, the performance of the MSNL algorithm is illustrated through a simulated example, which clearly shows the improvement in output prediction, especially for larger noise contents.
Acknowledgment
The authors would like to gratefully acknowledge the financial support of Qatar National Research Fund (QNRF).
References

[1] S. J. Qin and T. Badgwell, "An overview of industrial model predictive control technology," 1997, vol. 93, no. 316, pp. 232-256.
[2] R. D. Braatz and G. Mijares, AIChE Meeting, Miami Beach, Fla, USA, 1995.
[3] B. R. Bakshi, "Multiscale analysis and modeling using wavelets," 1999, vol. 13, no. 3-4, pp. 415-434.
[4] B. R. Bakshi and G. Stephanopoulos, "Representation of process trends-IV. Induction of real-time patterns from operating data for diagnosis and supervisory control," 1994, vol. 18, no. 4, pp. 303-332.
[5] S. Palavajjhala, R. L. Motard, and B. Joseph, "Process identification using discrete wavelet transforms: design of prefilters," 1996, vol. 42, no. 3, pp. 777-790.
[6] B. R. Bakshi, "Multiscale PCA with application to multivariate statistical process monitoring," 1998, vol. 44, no. 7, pp. 1596-1610.
[7] A. N. Robertson, K. C. Park, and K. F. Alvin, "Extraction of impulse response data via wavelet transform for structural system identification," 1998, vol. 120, no. 1, pp. 252-260.
[8] M. Nikolaou and P. Vuthandam, "FIR model identification: parsimony through kernel compression with wavelets," 1998, vol. 44, no. 1, pp. 141-150.
[9] M. N. Nounou, "Multiscale finite impulse response modeling," 2006, vol. 19, no. 3, pp. 289-304, doi:10.1016/j.engappai.2005.09.007.
[10] J. F. Carrier and G. Stephanopoulos, "Wavelet-based modulation in control-relevant process identification," 1998, vol. 44, no. 2, pp. 341-360.
[11] M. N. Nounou and H. N. Nounou, "Improving the prediction and parsimony of ARX models using multiscale estimation," 2007, vol. 7, no. 3, pp. 711-721, doi:10.1016/j.asoc.2005.12.004.
[12] M. N. Nounou and H. N. Nounou, "Multiscale fuzzy system identification," 2005, vol. 15, no. 7, pp. 763-770, doi:10.1016/j.jprocont.2005.03.005.
[13] L. Ljung, Addison-Wesley, Reading, Mass, USA, 1987.
[14] S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," 1989, vol. 11, no. 7, pp. 674-693, doi:10.1109/34.192463.
[15] I. Daubechies, "Orthonormal bases of compactly supported wavelets," 1988, vol. 41, no. 7, pp. 909-996.
[16] G. Strang, "Wavelets and dilation equations: a brief introduction," 1989, vol. 31, no. 4, pp. 614-627.
[17] M. N. Nounou and B. R. Bakshi, "On-line multiscale filtering of random and gross errors without process models," 1999, vol. 45, no. 5, pp. 1041-1058.