In real application scenarios, the inherent imprecision of sensor readings, the intentional perturbation introduced by privacy-preserving transformations, and error-prone mining algorithms make time series data highly uncertain. This uncertainty poses serious challenges for the similarity measurement of time series. In this paper, we first propose a model of uncertain time series inspired by Chebyshev's inequality. At each time slot, it estimates the range of possible sample values and the range of the central tendency by means of a sample estimation interval and a central tendency estimation interval, respectively. Compared with traditional models based on repeated measurements or random variables, the Chebyshev model reduces the overall computational cost and requires no prior knowledge. We convert a Chebyshev uncertain time series into a matrix of certain time series, so that noise reduction and dimensionality reduction become available for uncertain time series. Secondly, we propose a new similarity matching method based on the Chebyshev model. It relies on the overlaps between the sample estimation intervals and between the central tendency estimation intervals of two uncertain time series. Finally, we conduct extensive experiments and analyze the results in comparison with prior works.
1. Introduction
Over the past decade, a large amount of continuous sensor data has been collected in many applications, such as logistics management, traffic flow management, astronomy, and remote sensing. In most cases, these applications organize the sequential sensor readings into time series, that is, sequences of data points ordered along the temporal dimension. The problem of processing and mining time series with incomplete, imprecise, and even error-prone measurements is of major concern in recent studies [1–6]. Typically, uncertainty occurs due to the imprecision of equipment and methods during physical data collection. For example, the inaccuracy of a wireless temperature sensor follows a certain error distribution. In addition, intentional deviation introduced by privacy-preserving transformations also causes much uncertainty; for example, the real-time location information of a VIP may be perturbed [7, 8].
Managing and processing uncertain data were studied in the traditional database area during the 80s [9], and those techniques have been borrowed for the investigation of uncertain time series in recent years. Two widely adopted methods are used to model uncertain time series. First, a probability density function (pdf) over the uncertain values, represented by a random variable, is estimated in accordance with a priori knowledge, among which hypotheses of the Normal distribution are ubiquitous [10–12]. However, such hypotheses are quite limited in many applications; uncertain time series data with Uniform or Exponential distributions are frequently found in other applications, for example, Monte Carlo simulation of power load and reliability evaluation of electronic components [13, 14]. Second, the unknown data distribution is summarized by repeated measurements (i.e., samples or observations) [15]; an accurate estimate of the data distribution is obtained from a large number of repeated measurements; however, this incurs high computational cost and more storage space.
In this paper, we propose a new model for uncertain time series by combining the two methods above, using descriptive statistics (i.e., central tendency) to resolve the uncertainty. On this basis, we present an effective matching method to measure the similarity between two uncertain time series, which adapts to distinct error distributions. Our model estimates the sample value range and the central tendency range derived from Chebyshev's inequality, extracting a sample estimation interval and a central tendency estimation interval from the repeated measurements at each time slot. Unlike traditional similarity matching methods for uncertain time series based on distance measures, we adopt the overlap between sample estimation intervals and between central tendency estimation intervals to evaluate similarity. If both estimation intervals of two uncertain time series at a corresponding time slot have a chance of being equal, the extent of similarity is larger than in the case in which they can never be equal.
The rest of this paper is organized as follows. Section 2 surveys related work. In Section 3 we propose the model of Chebyshev uncertain time series. Section 4 covers the preprocessing of uncertain time series based on the Chebyshev model. Section 5 describes the similarity matching process with the new method. Section 6 presents the experiments. Finally, Section 7 draws a conclusion.
To sum up, we list our contributions as follows:
We propose a new model of uncertain time series based on sample estimation interval and central tendency estimation interval derived from Chebyshev inequality and convert Chebyshev uncertain time series into certain time series matrix for dimensionality reduction and noise reduction.
We present an effective method to measure the similarity between two uncertain time series within distinct error distributions without a priori knowledge.
We conduct extensive experiments and demonstrate the effectiveness and efficiency of our new method in similarity matching between two uncertain time series.
2. Related Work
The problem of similarity matching for certain time series has been extensively studied over the past decade; more recently, the analogous problem has arisen for uncertain time series. Aßfalg et al. first propose a probabilistic bounded range query (PBRQ) [15]. Formally, let D be a set of uncertain time series, let Tu be an uncertain time series given as query input, let ϵ be a distance bound, and let τ be a probability threshold. The PBRQ is given by
\[
\mathrm{PBRQ}_{\epsilon,\tau}(T_u, D) = \{\, T_u' \in D \mid \Pr(\mathrm{DIST}(T_u, T_u') \le \epsilon) \ge \tau \,\}. \tag{1}
\]
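The PBRQ definition above can be sketched with a small Monte Carlo filter. This is an illustrative interpretation, not the algorithm of [15]: each uncertain series is taken as a list (per time slot) of observed values, the distance distribution is approximated by repeated random draws, and a series is returned when the estimated probability of lying within ϵ of the query reaches τ.

```python
import math
import random

def pbrq(query_samples, dataset, eps, tau, trials=1000):
    """Monte Carlo sketch of a PBRQ: return the names of series whose
    estimated probability of lying within Euclidean distance eps of the
    query is at least tau. Each series is a list of per-slot observation
    lists; one observation per slot is drawn at random per trial."""
    result = []
    for name, cand_samples in dataset.items():
        hits = 0
        for _ in range(trials):
            q = [random.choice(obs) for obs in query_samples]
            c = [random.choice(obs) for obs in cand_samples]
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, c)))
            if dist <= eps:
                hits += 1
        if hits / trials >= tau:
            result.append(name)
    return result
```

The quadratic blow-up of exact enumeration over all observation combinations is exactly what the sampling loop sidesteps here, at the price of an approximate probability.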
Dallachiesa et al. studied the method called MUNICH [16], where the uncertainty is represented by means of repeated observations at each time slot [15]. An uncertain time series is regarded as a set of certain time series, each constructed by choosing one sample observation at each time slot. The distance between two uncertain time series is defined over the set of distances between all combinations of certain time series from one set and the other. Notice that the distance measures adopted by MUNICH are based on the Lp-norm and DTW distances (if p = 2, the Lp-norm is the Euclidean distance); the naive computation of the result set is not practical, since the large result space causes exponential computational cost.
PROUD [12] processes similarity queries over uncertain time streams. It employs the Euclidean distance and models the similarity measurement as the sum of the differences of the time series random variables. Each random variable represents the uncertainty of the value at the corresponding time slot. The standard deviation of the uncertainty and a single observation for each time slot are prerequisites for modeling the uncertain time series. Sarangi and Murthy propose a new distance measure, DUST, derived from the Euclidean distance under the assumption that all time series values follow some specific distribution [11]. If the error of the time series values at different time slots follows a Normal distribution, DUST is equivalent to the weighted Euclidean distance. Compared to MUNICH, it does not need multiple observations and is thus more efficient. Inspired by the moving average, Dallachiesa et al. propose a simple similarity measurement that previous studies had not considered; it adopts Uncertain Moving Average (UMA) and Uncertain Exponential Moving Average (UEMA) filters to handle the uncertainty in time series data [16]. Although experimental results show that these filters outperform the more sophisticated techniques above, a priori knowledge of the error standard deviation is indispensable.
Most of the above techniques are based on the assumption that the values of a time series are independent of one another. Obviously, this assumption is a simplification: adjacent values in a time series are correlated to a certain extent. The effect of correlations is studied in [16], and the research shows that there is a great benefit if correlations are taken into account. Likewise, we implicitly embed correlations into the estimation intervals in terms of repeated observation values, adopting the degree of overlap to evaluate the similarity of uncertain time series. Our approach reduces the overall computational cost and outperforms existing methods in accuracy; the new model requires no prior knowledge and makes dimensionality reduction available for uncertain time series.
3. Chebyshev Uncertain Time Series Modeling
As in [15], let T=(X1,X2,…,Xn) be an uncertain time series of length n; each Xt∈T is a random variable represented by a set Xt={vt,1,vt,2,…,vt,s} of s measurements (i.e., random sample observations), with vt,i∈Rd. Here s is called the sample size of T. The distribution of the points in Xt captures the uncertainty at time slot t. The larger the sample size s, the more accurately the data distribution is estimated; however, the computational cost becomes prohibitive. To solve this problem, we present a new model for uncertain time series based on Chebyshev's inequality, given below.
Lemma 1.
Let X (integrable) be a random variable with finite expected value E(X)=μ and finite nonzero variance D(X)=σ². Then, for any real number ε>0,
\[
\Pr\bigl(|X - \mu| \le \varepsilon\bigr) \ge 1 - \frac{\sigma^2}{\varepsilon^2}. \tag{2}
\]
Formula (2) (Chebyshev's inequality) [17] gives a lower bound on the probability of |X−E(X)| ≤ ε; provided μ and σ² are known, no information about the distribution is needed. The real number ε determines the lower bound. For an appropriate ε, the probability that the possible values of the random variable fall within the bounds satisfies a desired threshold. The possible value range is estimated as follows.
Theorem 2.
Given a random variable X with finite expected value E(X)=μ and finite nonzero variance D(X)=σ², if ε in inequality (2) equals 4σ, then
\[
\Pr\bigl(X \in [\mu - 4\sigma,\ \mu + 4\sigma]\bigr) \ge 0.9375 \tag{3}
\]
no matter which probability distribution X obeys.
Theorem 2 shows that when ε equals 4σ, the probability of X lying within the interval [μ−4σ, μ+4σ] is at least 1 − 1/16 = 0.9375; nearly all possible measurements fall in this interval. We therefore substitute the interval [μ−4σ, μ+4σ] for the random variable X to express the uncertainty.
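The distribution-free nature of the bound can be checked empirically. The following sketch (illustrative, not from the paper) estimates the fraction of draws that fall inside [μ−4σ, μ+4σ] for three different distributions; Chebyshev's inequality guarantees at least 0.9375 in every case.

```python
import random

def coverage(draw, mu, sigma, k=4.0, trials=20_000):
    """Empirical fraction of draws falling inside [mu - k*sigma, mu + k*sigma]."""
    lo, hi = mu - k * sigma, mu + k * sigma
    return sum(lo <= draw() <= hi for _ in range(trials)) / trials

random.seed(1)
# Chebyshev guarantees coverage >= 1 - 1/k**2 = 0.9375 for k = 4,
# whatever the underlying distribution is.
for draw, mu, sigma in [
    (lambda: random.gauss(0, 1), 0, 1),                # Normal(0, 1)
    (lambda: random.expovariate(1.0) - 1, 0, 1),       # Exponential(1), centered
    (lambda: random.uniform(-1, 1), 0, 1 / 3 ** 0.5),  # Uniform(-1, 1)
]:
    assert coverage(draw, mu, sigma) >= 0.9375
```

In practice the empirical coverage is far above the bound for light-tailed distributions (e.g. about 0.99994 for the Normal), which is why the interval captures "nearly all" measurements.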
A description of the uncertainty by the possible value range alone is insufficient; a central or typical value is another feature of a probability distribution: it indicates the center or location of the distribution and is called the central tendency [18]. The most common measure of central tendency is the arithmetic mean (mean for short), so the central tendency of a random sample set C, expressed as the mean X̄C, is defined below.
Given a random sample set C = {Xc1, Xc2, …, Xcm} drawn from X with μ and σ², where the samples satisfy the i.i.d. hypothesis, then
\[
\bar{X}_C = \frac{1}{m}\sum_{i=1}^{m} X_{ci}. \tag{5}
\]
As a random variable, X̄C has expectation E(X̄C) and variance D(X̄C) evaluated below:
\[
E(\bar{X}_C) = E\Bigl(\frac{1}{m}\sum_{i=1}^{m} X_{ci}\Bigr) = \frac{1}{m}\cdot m\mu = \mu, \tag{6}
\]
\[
D(\bar{X}_C) = D\Bigl(\frac{1}{m}\sum_{i=1}^{m} X_{ci}\Bigr) = \frac{1}{m^2} D\Bigl(\sum_{i=1}^{m} X_{ci}\Bigr) = \frac{1}{m^2}\sum_{i=1}^{m} D(X_{ci}) = \frac{1}{m^2}\cdot m\sigma^2 = \frac{\sigma^2}{m}. \tag{7}
\]
Analogously, for the central tendency variable X̄C, the corresponding estimation interval can be obtained from Lemma 1.
Theorem 3.
Given a random variable X with μ and σ², and a random sample set C = {Xc1, Xc2, …, Xcm} drawn from the population of X, for the variable X̄C with mean μ and variance σ²/m, if ε in inequality (2) equals 4σ/√m, then
\[
\Pr\Bigl(\bar{X}_C \in \Bigl[\mu - \frac{4\sigma}{\sqrt{m}},\ \mu + \frac{4\sigma}{\sqrt{m}}\Bigr]\Bigr) \ge 0.9375. \tag{8}
\]
In summary, the sample estimation interval [μ−4σ, μ+4σ] of X is the range of possible measurements, and the central tendency estimation interval [μ−4σ/√m, μ+4σ/√m] is the range of the central tendency of X. The uncertainty of X is represented by the combination of these two intervals at each time slot. An uncertain time series can then be defined as below.
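Theorem 3 can likewise be verified numerically. The sketch below (an illustration under the stated i.i.d. assumption, not part of the paper) draws sample sets of size m from a Uniform population and measures how often the sample mean falls inside the central tendency estimation interval [μ−4σ/√m, μ+4σ/√m].

```python
import random

def tendency_coverage(m=16, mu=0.0, sigma=1.0, trials=5000):
    """Fraction of sample means (of m i.i.d. Uniform draws with mean mu and
    std sigma) falling inside [mu - 4*sigma/sqrt(m), mu + 4*sigma/sqrt(m)]."""
    half = 4 * sigma / m ** 0.5
    a = sigma * 3 ** 0.5          # Uniform(mu - a, mu + a) has std sigma
    hits = 0
    for _ in range(trials):
        xbar = sum(random.uniform(mu - a, mu + a) for _ in range(m)) / m
        hits += (mu - half <= xbar <= mu + half)
    return hits / trials

random.seed(7)
assert tendency_coverage() >= 0.9375   # Theorem 3's distribution-free bound
```

Because the interval half-width 4σ/√m equals four standard errors of the mean, the empirical coverage is in fact much higher than the Chebyshev guarantee.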
Definition 4.
For an uncertain time series T=(X1,X2,…,Xn) of length n, each element Xt is a random variable with μt and σt², and X̄Ct is the central tendency of the random sample set Ct drawn from the population corresponding to Xt. A Chebyshev uncertain time series TChe is defined as
\[
T_{Che} = \Bigl( \bigl([\mu_1 - 4\sigma_1,\ \mu_1 + 4\sigma_1],\ [\mu_1 - \tfrac{4\sigma_1}{\sqrt{m}},\ \mu_1 + \tfrac{4\sigma_1}{\sqrt{m}}],\ t_1\bigr),\ \ldots,\ \bigl([\mu_n - 4\sigma_n,\ \mu_n + 4\sigma_n],\ [\mu_n - \tfrac{4\sigma_n}{\sqrt{m}},\ \mu_n + \tfrac{4\sigma_n}{\sqrt{m}}],\ t_n\bigr) \Bigr), \tag{10}
\]
where m is the cardinality of the random sample set C. For the Chebyshev uncertain time series above, μt and σt are difficult to obtain because the distribution of the population is unknown. We choose two statistics to estimate μt and σt: one is the arithmetic mean of C, given in (5); the other is the sample standard deviation SC, calculated as
\[
S_C = \sqrt{S_C^2} = \sqrt{\frac{1}{m-1}\sum_{i=1}^{m}\bigl(X_{ci} - \bar{X}_C\bigr)^2}, \tag{11}
\]
\[
E(S_C^2) = E\Bigl(\frac{1}{m-1}\sum_{i=1}^{m}(X_{ci}-\bar{X}_C)^2\Bigr) = \frac{1}{m-1}\Bigl(\sum_{i=1}^{m}E(X_{ci}^2) - mE(\bar{X}_C^2)\Bigr) = \frac{1}{m-1}\Bigl(m(\mu^2+\sigma^2) - m\Bigl(\frac{\sigma^2}{m}+\mu^2\Bigr)\Bigr) = \sigma^2. \tag{12}
\]
Equations (6) and (12) show that X̄C and SC² are unbiased estimators for μ and σ², respectively. Replacing μt and σt in Definition 4 with X̄Ct and SCt, TChe is rewritten as follows.
Definition 5.
Given a sample set Ct={Xc1t,Xc2t,…,Xcmt} at each time slot t, TChe is represented as
\[
T_{Che} = \Bigl( \bigl([\bar{X}_{C_1} - 4S_{C_1},\ \bar{X}_{C_1} + 4S_{C_1}],\ [\bar{X}_{C_1} - \tfrac{4S_{C_1}}{\sqrt{m}},\ \bar{X}_{C_1} + \tfrac{4S_{C_1}}{\sqrt{m}}],\ t_1\bigr),\ \ldots,\ \bigl([\bar{X}_{C_n} - 4S_{C_n},\ \bar{X}_{C_n} + 4S_{C_n}],\ [\bar{X}_{C_n} - \tfrac{4S_{C_n}}{\sqrt{m}},\ \bar{X}_{C_n} + \tfrac{4S_{C_n}}{\sqrt{m}}],\ t_n\bigr) \Bigr). \tag{13}
\]
According to the descriptions above, the expression at each time slot can be transformed into a vector. It consists of four elements (excluding the time value), namely, X̄Ct − 4SCt, X̄Ct − 4SCt/√m, X̄Ct + 4SCt/√m, and X̄Ct + 4SCt, in ascending order, denoted as vt:
\[
v_t = \Bigl( \bar{X}_{C_t} - 4S_{C_t},\ \bar{X}_{C_t} - \frac{4S_{C_t}}{\sqrt{m}},\ \bar{X}_{C_t} + \frac{4S_{C_t}}{\sqrt{m}},\ \bar{X}_{C_t} + 4S_{C_t} \Bigr)^{T}. \tag{14}
\]
Definition 6.
An uncertain time series TChe of length n can be rewritten as a matrix:
\[
V_{Che} = (v_1, v_2, \ldots, v_n). \tag{15}
\]
It can be expanded as
\[
\begin{pmatrix} TS_{X,l} \\ TS_{\bar{X},l} \\ TS_{\bar{X},u} \\ TS_{X,u} \end{pmatrix}
=
\begin{pmatrix}
\bar{X}_{C_1} - 4S_{C_1}, & \ldots, & \bar{X}_{C_n} - 4S_{C_n} \\
\bar{X}_{C_1} - \tfrac{4S_{C_1}}{\sqrt{m}}, & \ldots, & \bar{X}_{C_n} - \tfrac{4S_{C_n}}{\sqrt{m}} \\
\bar{X}_{C_1} + \tfrac{4S_{C_1}}{\sqrt{m}}, & \ldots, & \bar{X}_{C_n} + \tfrac{4S_{C_n}}{\sqrt{m}} \\
\bar{X}_{C_1} + 4S_{C_1}, & \ldots, & \bar{X}_{C_n} + 4S_{C_n}
\end{pmatrix}, \tag{16}
\]
where TS_{X,l} is the lower bound sequence of the random variable Xt, composed of X̄Ct − 4SCt; TS_{X̄,l} is the lower bound sequence of the variable X̄t; TS_{X̄,u} is the upper bound sequence of X̄t; and TS_{X,u} is the upper bound sequence of Xt, as illustrated in Figure 1. Four certain time series thus constitute an uncertain time series based on the Chebyshev model.
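The construction of the matrix in Definition 6 can be sketched as follows: for each time slot's sample set, the mean and sample standard deviation are estimated and the four bound values of (16) are appended to their rows. This is an illustrative sketch using only standard-library statistics.

```python
import statistics

def chebyshev_matrix(sample_sets, k=4.0):
    """Build the 4 x n matrix of Definition 6 from per-slot sample sets.
    Rows: TS_{X,l}, TS_{Xbar,l}, TS_{Xbar,u}, TS_{X,u}."""
    rows = [[], [], [], []]
    for samples in sample_sets:
        m = len(samples)
        mean = statistics.fmean(samples)       # unbiased estimate of mu
        s = statistics.stdev(samples)          # sample standard deviation S_C
        rows[0].append(mean - k * s)           # Xbar_C - 4 S_C
        rows[1].append(mean - k * s / m ** 0.5)
        rows[2].append(mean + k * s / m ** 0.5)
        rows[3].append(mean + k * s)
    return rows
```

Each column of the result is the vector vt of (14), so the four rows are ordinary (certain) time series that the preprocessing steps of Section 4 can operate on directly.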
The Chebyshev uncertain time series model.
4. Uncertain Time Series Preprocessing4.1. Outlier Elimination from Sample Set
In the process of sample collection, the occurrence of outliers is inevitable. An outlier is an abnormal observation value that is distant from the others [19]. This may be ascribed to undesirable variability in the measurement or to experimental errors. Outliers can occur in any distribution; a naive interpretation of statistics such as the sample mean and sample variance derived from a sample set that includes outliers may be misleading. Excluding outliers from the sample set enhances the reliability of these statistics. The definition of an outlier can be formalized below.
Definition 7.
Given a sample set Ct={Xc1t,Xc2t,…,Xcmt} at time slot t, sort Ct in ascending order; the sorted elements constitute a sample sequence, denoted (V1,V2,…,Vm). Let Q1 and Q3 be the lower and upper quartiles, respectively; then we define an outlier to be any sample outside the range
\[
\bigl[\,Q_1 - k(Q_3 - Q_1),\ Q_3 + k(Q_3 - Q_1)\,\bigr] \tag{17}
\]
for a nonnegative constant k, which adjusts the granularity of outlier exclusion.
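The quartile fence of Definition 7 corresponds to the familiar Tukey fences; a minimal sketch with the standard library (the quartile interpolation method is an implementation choice, not prescribed by the paper):

```python
import statistics

def remove_outliers(samples, k=1.5):
    """Drop samples outside [Q1 - k*IQR, Q3 + k*IQR], eq. (17)."""
    q1, _, q3 = statistics.quantiles(samples, n=4)  # default 'exclusive' method
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in samples if lo <= v <= hi]
```

With k = 1.5 this is the conventional outlier rule; larger k excludes fewer samples, smaller k is more aggressive.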
4.2. Exponential Smoothing for Noise Reduction
In the area of signal processing, noise is a general term for unwanted (and, in general, unknown) modifications introduced during signal capture, storage, transmission, processing, or conversion. To recover the original data from a noise-corrupted signal, filters for noise reduction are ubiquitous in the design of signal processing systems. An exponential smoothing filter assigns exponentially decreasing weights to the samples in time order and is effective [20–22]. In this subsection, we use exponential smoothing to reduce the noise in time series data. Given a certain time series Y, let Y(t−1) be the observation at time slot t−1, let ES be the smoothed sequence associated with Y, and let ES(t) be the smoothed value at time slot t. If the first sample of the raw time series is chosen as the initial value and an appropriate smoothing factor is picked, all values of the smoothed sequence ES can be obtained iteratively. The single form of exponential smoothing is given by
\[
ES(0) = Y(0), \qquad ES(t) = \alpha\, Y(t-1) + (1-\alpha)\, ES(t-1). \tag{18}
\]
The raw time series begins at time t=0, and the smoothing factor α falls in the interval [0,1]. On the basis of this equation, the exponential smoothing of an uncertain time series modeled as a Chebyshev matrix (Definition 6) is defined as
\[
ES_{Che}(0) = \begin{pmatrix} TS_{X,l}(0) \\ TS_{\bar{X},l}(0) \\ TS_{\bar{X},u}(0) \\ TS_{X,u}(0) \end{pmatrix}, \tag{19}
\]
\[
ES_{Che}(t) = \alpha \begin{pmatrix} TS_{X,l}(t-1) \\ TS_{\bar{X},l}(t-1) \\ TS_{\bar{X},u}(t-1) \\ TS_{X,u}(t-1) \end{pmatrix} + (1-\alpha)\, ES_{Che}(t-1). \tag{20}
\]
For example, a raw time series is chosen from the ECG200 dataset in the UCR time series collection [23]; after perturbation with standard deviation 0.2, it is modeled as a Chebyshev uncertain time series, illustrated in Figure 2; tiny fluctuations around the four lower and upper bound sequences reflect the presence of noise. We perform exponential smoothing on the uncertain time series, choosing the first sample of each bound sequence as the initial value and setting the smoothing factor α to 0.3. Note that a higher value of α actually reduces the level of smoothing; in the limiting case α=1 the output series simply reproduces the original series.
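Recurrence (18) can be sketched directly; applying it to each of the four bound sequences of the Chebyshev matrix row by row gives (19)–(20). A minimal sketch following the paper's form of the recurrence:

```python
def exp_smooth(series, alpha=0.3):
    """Single exponential smoothing, eq. (18):
    ES(0) = Y(0), ES(t) = alpha * Y(t-1) + (1 - alpha) * ES(t-1)."""
    out = [series[0]]
    for t in range(1, len(series)):
        out.append(alpha * series[t - 1] + (1 - alpha) * out[t - 1])
    return out

def exp_smooth_matrix(rows, alpha=0.3):
    """Eqs. (19)-(20): smooth each bound sequence of the 4 x n matrix."""
    return [exp_smooth(row, alpha) for row in rows]
```

Smaller α gives heavier smoothing; α close to 1 leaves the series essentially unchanged, consistent with the remark above.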
After triple exponential smoothing, the uncertain time series becomes clearer, because triple exponential smoothing takes seasonal changes as well as trends into account, as illustrated in Figure 3.
Illustration of Chebyshev uncertain time series before smoothing.
Illustration of Chebyshev uncertain time series smoothed.
4.3. Dimensionality Reduction Using Wavelets
In the process of analyzing and organizing high-dimensional data, the main difficulty is the "curse of dimensionality" coined by Bellman and Dreyfus [24]. When the dimensionality of the data space increases, the data size soars and the available data becomes sparse. Extracting these valid sparse data as feature vectors in a lower-dimensional feature space is the essence of dimensionality reduction. Time series, as special high-dimensional data, is affected by the curse of dimensionality as well. We adopt wavelets, frequently used for dimensionality reduction, to process the time series data [25–27].
Daubechies [28] showed that wavelet transforms can be implemented using a pair of Finite Impulse Response (FIR) filters, called a Quadrature Mirror Filter (QMF) pair. These filters are often used in the area of signal processing, as they lend themselves to efficient implementation. Each filter is represented as a sequence of numbers; the length of this sequence is called the length of the filter. The output of a QMF pair consists of two separate components, produced by a high-pass and a low-pass filter, which correspond to the high-frequency and low-frequency output, respectively. Wavelet transforms are hierarchical, since they operate stepwise. The input at each step is passed through the QMF pair. Both the high-pass and the low-pass components of the QMF output are half the length of the input. The high-pass component is naturally associated with details, while the low-pass component concentrates most of the energy, or information, of the data. The low-pass component is used as the input of the next step; hence the length of the input is reduced by a factor of 2 at each step. A single step is illustrated in Figure 4, where n refers to the length of the signal sequence in general, not to some concrete value.
QMF wavelet transform for dimensionality reduction.
For example, as shown in Figure 3, we choose the Haar wavelet to build the QMF pair; the low-pass output is a dimension-reduced uncertain time series whose length shortens from 270 to 135, illustrated in Figure 5. The filter sequences of the QMF pair based on the Haar wavelet are
\[
g[n] = \Bigl(-\frac{\sqrt{2}}{2},\ \frac{\sqrt{2}}{2}\Bigr), \qquad h[n] = \Bigl(\frac{\sqrt{2}}{2},\ \frac{\sqrt{2}}{2}\Bigr). \tag{21}
\]
Note that the low-pass output is obtained through the convolution of h[n] with the uncertain time series to be reduced in dimension; in the same manner, the convolution of g[n] with the uncertain time series gives the high-pass output.
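A single Haar QMF step, applied to one bound sequence, can be sketched as below: the filter taps of (21) are convolved with the input and the result is downsampled by 2, halving the length (an illustrative sketch assuming an even-length input).

```python
SQRT2 = 2 ** 0.5

def haar_step(seq):
    """One QMF step with the Haar pair: low-pass h = (sqrt2/2, sqrt2/2),
    high-pass g = (-sqrt2/2, sqrt2/2); both outputs are half the input length."""
    low = [(seq[i] + seq[i + 1]) / SQRT2 for i in range(0, len(seq) - 1, 2)]
    high = [(seq[i + 1] - seq[i]) / SQRT2 for i in range(0, len(seq) - 1, 2)]
    return low, high
```

Repeating the step on the low-pass output yields the hierarchical reduction described above: a sequence of length n shrinks to n/2, n/4, and so on, while the discarded high-pass components hold only the fine detail.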
Illustration of smoothed Chebyshev uncertain time series after wavelet dimensionality reduction.
5. Similarity Match Processing
We present a new matching method based on Chebyshev uncertain time series. As shown in Definition 5, without loss of generality, we use two variables Mi, Ni from different uncertain time series M and N at time slot i to specify the matching procedure. Let [X_{Mi,l}, X_{Mi,u}] and [X_{Ni,l}, X_{Ni,u}] be the sample estimation intervals of M and N at time slot i, as in Figure 6(a). If the two intervals overlap, as shown in Figure 6(b), Mi and Ni may take identical values from the intersection; as the overlap increases in Figures 6(c) and 6(d) (expressed by the double-arrow solid lines), this possibility grows. Thus, Mi and Ni become more similar in terms of the range of samples. The above analysis outlines the similarity measure based on the overlap of sample estimation intervals qualitatively; we now analyze the process quantitatively. The lengths of the two sample estimation intervals at the same time slot generally differ. As shown in Figure 6, let L^X_{Mi} and L^X_{Ni} be the lengths of the sample estimation intervals of Mi and Ni, respectively:
\[
L^{X}_{Mi} = X_{Mi,u} - X_{Mi,l}, \tag{22}
\]
\[
L^{X}_{Ni} = X_{Ni,u} - X_{Ni,l}. \tag{23}
\]
Let L^X_{opi} denote the length of the overlap between Mi and Ni, illustrated in Figures 6(b) and 6(c):
\[
L^{X}_{opi} = X_{Mi,u} - X_{Ni,l}. \tag{24}
\]
In Figure 6(d), where one interval contains the other, L^X_{opi} equals
\[
L^{X}_{opi} = X_{Ni,u} - X_{Ni,l}. \tag{25}
\]
If the two intervals do not overlap, as in Figure 6(a), this should be marked; we put a negative sign into the formula:
\[
L^{X}_{opi} = -\bigl(X_{Ni,l} - X_{Mi,u}\bigr). \tag{26}
\]
If L^X_{opi} ≤ 0, the two intervals have no overlap, and the lower L^X_{opi} is, the farther apart the two intervals are. Let the Overlap Ratio, denoted r_{op}, be the ratio of the length of the overlap to the length of an estimation interval, quantifying the degree of overlap:
\[
r^{X}_{opMi} = \frac{L^{X}_{opi}}{L^{X}_{Mi}}, \qquad r^{X}_{opNi} = \frac{L^{X}_{opi}}{L^{X}_{Ni}}, \tag{27}
\]
where each of them falls in (−∞, 1] (r^X_{op} equals 1 only when the length of the overlap equals the length of the estimation interval, as in Figure 6(d), where r^X_{opNi} = L^X_{opi}/L^X_{Ni} = 1).
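The three cases (24)–(26) can be unified in a single expression: the signed overlap of two intervals is min of the upper bounds minus max of the lower bounds, which is the intersection length when they overlap and the negative gap when they are disjoint. A minimal sketch:

```python
def overlap_length(a, b):
    """Signed overlap of intervals a = (a_l, a_u) and b = (b_l, b_u).
    Positive: length of the intersection (covers eqs. 24 and 25);
    negative: gap between the disjoint intervals (eq. 26)."""
    return min(a[1], b[1]) - max(a[0], b[0])

def overlap_ratios(a, b):
    """Overlap Ratios of eq. (27); each value falls in (-inf, 1]."""
    lop = overlap_length(a, b)
    return lop / (a[1] - a[0]), lop / (b[1] - b[0])
```

Writing the case analysis this way makes the sign convention explicit: a ratio of 1 means one interval's length is fully covered, while negative ratios grow in magnitude as the intervals drift apart.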
The illustration of similarity degrees.
We combine r^X_{opMi} and r^X_{opNi} into a single quantity called the Overlap Degree of the sample estimation interval, denoted d^X_{op}, which measures the overlap linearly:
\[
d^{X}_{opMi,Ni} = \frac{2\, r^{X}_{opMi} \cdot r^{X}_{opNi}}{r^{X}_{opMi} + r^{X}_{opNi}}, \tag{28}
\]
where d^X_{opMi,Ni} also belongs to (−∞, 1]. The sum of d^X_{opMi,Ni} over all time slots gives the degree of overlap between the two uncertain time series M and N:
\[
DOP^{X}_{M,N} = \sum_{i=1}^{n} d^{X}_{opMi,Ni}. \tag{29}
\]
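Equations (28) and (29) can be sketched as follows; the harmonic-mean-style combination assumes the two ratios do not sum to zero, a degenerate case the paper does not treat.

```python
def overlap_degree(r_m, r_n):
    """Overlap Degree of eq. (28); assumes r_m + r_n != 0."""
    return 2 * r_m * r_n / (r_m + r_n)

def dop(intervals_m, intervals_n):
    """DOP of eq. (29): sum of per-slot overlap degrees over two
    sequences of (lower, upper) estimation intervals."""
    total = 0.0
    for a, b in zip(intervals_m, intervals_n):
        lop = min(a[1], b[1]) - max(a[0], b[0])   # signed overlap length
        total += overlap_degree(lop / (a[1] - a[0]), lop / (b[1] - b[0]))
    return total
```

For two identical interval sequences every per-slot degree is 1, so DOP attains its maximum, the series length n.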
We now discuss the similarity between Mi and Ni further. As illustrated in Figure 7, even if the two sample estimation intervals at time i overlap entirely, it is difficult to determine how similar the two variables are, because of the variety of possible overlaps between the central tendency estimation intervals I_{X̄,Mi} and I_{X̄,Ni}. In other words, given identical sample estimation intervals, the degree of overlap between I_{X̄,Mi} and I_{X̄,Ni} determines the degree of similarity between Mi and Ni. As shown in Figure 7(c), the two variables are obviously more similar than in the cases of Figures 7(a) and 7(b); the larger the overlap, the more similar the two variables. If the central tendency estimation intervals have little or no overlap while the sample estimation intervals overlap to some extent, a reliable estimate of similarity cannot be obtained. In view of these cases, DOP^X_{M,N} alone is not sufficient to measure similarity; we further measure the similarity between the two variables using the central tendency estimation intervals.
The situations of entire overlapping.
As illustrated in Figure 8, there are three cases of overlapping. Let L^X̄_{opi} be the overlap between the two central tendency estimation intervals. In Figure 8(a), for the estimation intervals [X̄_{Mi,l}, X̄_{Mi,u}] and [X̄_{Ni,l}, X̄_{Ni,u}], the interval lengths L^X̄_{Mi} and L^X̄_{Ni} are
\[
L^{\bar{X}}_{Mi} = \bar{X}_{Mi,u} - \bar{X}_{Mi,l}, \qquad L^{\bar{X}}_{Ni} = \bar{X}_{Ni,u} - \bar{X}_{Ni,l}. \tag{30}
\]
With no overlap between them, L^X̄_{opi} is
\[
L^{\bar{X}}_{opi} = -\bigl(\bar{X}_{Ni,l} - \bar{X}_{Mi,u}\bigr). \tag{31}
\]
In Figure 8(b), [X̄_{Mi,l}, X̄_{Mi,u}] and [X̄_{Ni,l}, X̄_{Ni,u}] overlap partially:
\[
L^{\bar{X}}_{opi} = \bar{X}_{Mi,u} - \bar{X}_{Ni,l}. \tag{32}
\]
In Figure 8(c), [X̄_{Mi,l}, X̄_{Mi,u}] contains [X̄_{Ni,l}, X̄_{Ni,u}]; the overlap is
\[
L^{\bar{X}}_{opi} = \bar{X}_{Ni,u} - \bar{X}_{Ni,l}. \tag{33}
\]
Analogous to r^X_{op}, the Overlap Ratio of the X̄ estimation intervals between Mi and Ni is defined as
\[
r^{\bar{X}}_{opMi} = \frac{L^{\bar{X}}_{opi}}{L^{\bar{X}}_{Mi}}, \qquad r^{\bar{X}}_{opNi} = \frac{L^{\bar{X}}_{opi}}{L^{\bar{X}}_{Ni}}. \tag{34}
\]
The Overlap Degree of X̄, namely d^X̄_{op} between Mi and Ni, is
\[
d^{\bar{X}}_{opMi,Ni} = \frac{2\, r^{\bar{X}}_{opMi} \cdot r^{\bar{X}}_{opNi}}{r^{\bar{X}}_{opMi} + r^{\bar{X}}_{opNi}}. \tag{35}
\]
The illustration of overlap between X¯ intervals.
We sum d^X̄_{opMi,Ni} over the two uncertain time series M and N of length n; the sum, denoted DOP^X̄_{M,N}, is
\[
DOP^{\bar{X}}_{M,N} = \sum_{i=1}^{n} d^{\bar{X}}_{opMi,Ni}. \tag{36}
\]
In conclusion, we combine DOP^X and DOP^X̄ to evaluate the degree of similarity between two uncertain time series, denoted DOS:
\[
DOS_{M,N} = \alpha\, DOP^{X}_{M,N} + (1-\alpha)\, DOP^{\bar{X}}_{M,N}. \tag{37}
\]
Here α is a weighting factor in the range [0,1]; in different applications, DOP^X and DOP^X̄ may be given different weights; here we set α = 1/2. Note that DOS_{M,N} ∈ (−∞, n], where n is the length of the uncertain time series.
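The whole similarity measure (37) can be put together in one sketch: each series is represented by its sequence of sample estimation intervals and its sequence of central tendency estimation intervals, and the two overlap-degree sums are combined with weight α (a sketch; degenerate zero-sum ratios are not handled, as in the paper).

```python
def dos(m_sample, n_sample, m_tend, n_tend, alpha=0.5):
    """Degree of similarity, eq. (37): weighted sum of the overlap-degree
    sums over sample intervals and central tendency intervals."""
    def dop(ivs_a, ivs_b):
        total = 0.0
        for a, b in zip(ivs_a, ivs_b):
            lop = min(a[1], b[1]) - max(a[0], b[0])
            ra, rb = lop / (a[1] - a[0]), lop / (b[1] - b[0])
            total += 2 * ra * rb / (ra + rb)
        return total
    return alpha * dop(m_sample, n_sample) + (1 - alpha) * dop(m_tend, n_tend)
```

Comparing a series with itself yields the maximum value n, while disjoint intervals drive the score arbitrarily negative, matching the stated range (−∞, n].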
6. Experimental Validation
In this section, we examine the effectiveness and efficiency of the new method proposed in this paper. First, we introduce the generation of uncertain time series values and the experimental datasets; then we analyze the results of the experiments. All methods are implemented in MATLAB and C++, and the experiments are run on a PC with a 3.1 GHz CPU and 4 GB of RAM.
6.1. Uncertainty Model and Assumption
As described in Definition 5, an uncertain time series T is a time series comprising a sample estimation interval and a central tendency estimation interval derived from a set of observations at each time slot. Given a time slot i, the value of the uncertain time series is modeled as
\[
T(i) = d(i) + e(i), \tag{38}
\]
where d(i) is the true value and e(i) is the error. In general, the error e(i) can be drawn from distinct probability distributions; this is why we treat T(i) as a random variable at time i.
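The perturbation model T(i) = d(i) + e(i) with zero-mean errors of a chosen distribution and standard deviation can be sketched as below; the parameterizations (half-width std·√3 for Uniform, rate 1/std with a shift for Exponential) are standard choices to hit the requested mean and standard deviation.

```python
import random

def perturb(series, std, dist="normal"):
    """Add zero-mean error e(i) of the given distribution and standard
    deviation to each true value d(i), per eq. (38)."""
    def err():
        if dist == "normal":
            return random.gauss(0, std)
        if dist == "uniform":              # Uniform(-a, a) has std a / sqrt(3)
            a = std * 3 ** 0.5
            return random.uniform(-a, a)
        if dist == "exponential":          # Exp(rate 1/std), shifted to zero mean
            return random.expovariate(1 / std) - std
        raise ValueError(dist)
    return [d + err() for d in series]
```

Drawing repeated perturbed copies of the same true series also supplies the m observation samples per time slot that the Chebyshev model consumes.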
6.2. Experimental Setup
Inspired by [11, 12, 15], we use real time series datasets of exact values and subsequently introduce uncertainty through perturbation according to the uncertainty model. In our experiments we consider Uniform, Normal, and Exponential error distributions with zero mean and vary the standard deviation within the interval [0.2, 2.0].
We selected 19 real datasets from the UCR classification dataset collection [23]; they represent a wide range of application areas: 50words, Adiac, Beef, CBF, Coffee, ECG200, Lighting2, SyncCtrl, Wafer, FaceFour, FaceAll, Fish, Lighting7, GunPoint, OliveOil, OSULeaf, SwedLeaf, Trace, and Yoga. The training and testing sets are reconfigured, and we obtained the time series sets listed in Table 1.
Details of time series sets.
Dataset     Quantity   Length
50words     450        270
Adiac       390        176
Beef        470        30
CBF         500        128
Coffee      500        28
ECG200      199        96
Lighting2   121        637
SyncCtrl    120        300
Wafer       6164       152
FaceFour    112        350
FaceAll     560        131
Fish        349        463
Lighting7   318        73
GunPoint    199        150
OliveOil    570        30
OSULeaf     441        427
SwedLeaf    1125       128
Trace       200        270
Yoga        300        427
6.3. Accuracy
For the purpose of evaluating the quality of the results, we use the two standard measures of recall and precision. Recall is the percentage of the truly similar uncertain time series that are found by the algorithm. Precision is the percentage of the similar uncertain time series identified by the algorithm that are truly similar. Accuracy is measured as the harmonic mean of recall and precision to facilitate comparison:
\[
\mathrm{Accuracy} = \frac{2 \cdot \mathrm{recall} \cdot \mathrm{precision}}{\mathrm{recall} + \mathrm{precision}}. \tag{39}
\]
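This harmonic mean is the familiar F1 score; a one-line sketch:

```python
def accuracy(recall, precision):
    """Harmonic mean of recall and precision, eq. (39) (the F1 score)."""
    return 2 * recall * precision / (recall + precision)
```

The harmonic mean penalizes imbalance: a method with perfect precision but 50% recall scores only about 0.67, not 0.75 as the arithmetic mean would.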
As mentioned in [11], an effective similarity measure on uncertain data allows us to reason about the original data without uncertainty. To validate the new method, we conduct experiments from several aspects.
In the first experiment, we examine the effectiveness of our approach for different error standard deviations and error distributions. In Figure 9, the results for the different error distributions are averaged over all datasets and shown at various error standard deviations. The accuracy decreases roughly linearly as the error standard deviation increases from 0.2 to 2, and performance with the Uniform distribution is better than with the other two distributions. Larger standard deviations introduce more uncertainty into the time series data.
Accuracy with three error distributions averaged over all datasets.
Next, we verify the effectiveness on different datasets. In Figure 10, each time series from each dataset is perturbed with each kind of error, that is, Normal, Uniform, and Exponential. Combining 20% of the matching accuracy at standard deviation 1 with 80% of the accuracy at standard deviation 0.4 as the accuracy for relatively small standard deviations on each dataset, most datasets perform well (accuracy reaches about 80%, some up to 90%), with SyncCtrl being the best performer (accuracy = 96%); the exceptions are Beef, OliveOil, and SwedLeaf, which will be explained below. The trend also holds for the Uniform and Exponential error distributions.
Accuracy of 19 datasets on three error distributions with accuracy of 0.4 and 1.0 mixed deviation.
Figure 11 summarizes the performance on each dataset for relatively large error standard deviations, integrating 20% of the accuracy at standard deviation 2 with 80% of the accuracy at standard deviation 1.4. As the standard deviation increases, the accuracy on all datasets decreases. With Normal error, the accuracy on Adiac drops the fastest, by nearly 50% (from 81% to 33%), and the same tendency holds for the Exponential error distribution. Coffee, FaceFour, SyncCtrl, and Yoga are exceptions; increasing standard deviations have no significant impact on their accuracy. With Uniform error, the accuracy on Fish drops the fastest, by up to 30.4%; the accuracy on Adiac drops by 25.8%, ECG200 decreases by 14.4%, and the accuracy on the other datasets falls only slightly. With Exponential error, most datasets drop quickly, Adiac fastest, by up to 41%. In conclusion, the Uniform error affects all datasets only slightly as the standard deviation increases, compared with the Normal and Exponential errors.
Accuracy of 19 datasets on three error distributions with accuracy of 1.4 and 2.0 mixed deviation.
As mentioned above, the datasets Beef, OliveOil, and SwedLeaf perform poorly, while Coffee, FaceFour, SyncCtrl, and Yoga perform well in Figures 10 and 11. We find that this is partially related to the average absolute value (AAV) of the respective perturbed datasets. As shown in Figure 12, we compute the average absolute values of all perturbed datasets; the AAVs of Beef and OliveOil are 0.0956 and 0.3337, respectively, smaller than the others. The AAV of the perturbed Coffee is 18.0541, the largest among all datasets, and the other three datasets also have large AAVs. In other words, datasets with large AAVs are hard to affect with small uncertainty, even when the error standard deviation reaches 2. On the contrary, Beef and OliveOil are easily affected even when the error standard deviation is only 0.2. SwedLeaf, however, is different; its behavior may be ascribed to its wave form, which we will explore in future research.

We next consider the impact of the observation sample size, which matters for the two kinds of estimation intervals that stem from the observation samples. As described above, all the experimental results so far are based on m = 16 observation samples. We now describe how the results change as the observation sample size grows. In Figure 13(a), with Normal error, the accuracy for three observation sample sizes is shown at various standard deviations. The result with 64 samples is the best, and the result with 32 samples is better than with 16. At relatively small standard deviations (0.2–0.8), the results of the three sizes differ little; as the deviation grows, the differences gradually become more observable. The results for the Uniform and Exponential distributions are similar to those for Normal and are reported in Figures 13(b) and 13(c). The differences among the three sizes are smaller with Uniform error than with the other two distributions.
Average absolute value (AAV) of the disturbed data of each dataset.
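The AAV statistic above is simple to reproduce. The following minimal Python sketch (the function name and the toy data are our own illustrations, not taken from the experiments) computes the AAV of a disturbed dataset as the mean absolute value over all points of all of its series:

```python
import numpy as np

def average_absolute_value(series_list):
    """Mean of |x| over every point of every (disturbed) series in a dataset."""
    all_points = np.concatenate([np.asarray(s, dtype=float) for s in series_list])
    return float(np.mean(np.abs(all_points)))

# Toy illustration: a low-amplitude dataset is easily dominated by error with
# a small standard deviation, while a high-amplitude dataset barely changes.
rng = np.random.default_rng(0)
low = [rng.normal(0.0, 0.1, 100) for _ in range(5)]    # AAV near 0.08
high = [rng.normal(18.0, 0.1, 100) for _ in range(5)]  # AAV near 18
print(average_absolute_value(low) < average_absolute_value(high))  # True
```

This mirrors the observation that Coffee (AAV ≈ 18) is barely affected by perturbation while Beef (AAV ≈ 0.1) is affected even at a deviation of 0.2.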
Comparison of accuracy with different sample sizes.
Accuracy of 16, 32, and 64 sample observations with Normal error
Accuracy of 16, 32, and 64 sample observations with Uniform error
Accuracy of 16, 32, and 64 sample observations with Exponential error
In Figure 14(a) we compare our approach under the Normal error distribution with other techniques, namely, PROUD, DUST, Euclidean distance, UMA, and UEMA, following the methodology proposed in [16]. The results demonstrate that our approach is more effective than the other techniques under all three error distributions. Among the competing techniques, UEMA and UMA perform best at an error standard deviation of 0.2, and PROUD performs slightly better than DUST and Euclidean; at larger error standard deviations, however, the accuracy of PROUD drops slightly below DUST and Euclidean. The same trend holds under the Uniform and Exponential distributions, illustrated in Figures 14(b) and 14(c).
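For context, UMA and UEMA from [16] compare uncertain series by first smoothing the observed values with a moving average (plain or exponentially weighted, respectively) and then applying Euclidean distance. A minimal sketch of the plain-moving-average variant follows; the window size `w` and the rectangular window are our assumptions, not parameters from the experiments:

```python
import numpy as np

def uma_distance(x, y, w=5):
    """Sketch of a moving-average-based distance in the spirit of UMA:
    smooth both series with a length-w rectangular window, then take the
    Euclidean distance between the smoothed sequences."""
    kernel = np.ones(w) / w
    xs = np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")
    ys = np.convolve(np.asarray(y, dtype=float), kernel, mode="valid")
    return float(np.linalg.norm(xs - ys))
```

UEMA would replace the rectangular kernel with exponentially decaying weights; in both cases the smoothing suppresses zero-mean error before the distance is taken, which explains their strong accuracy at small deviations.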
Comparison of accuracy with existing methods.
Normal error distribution
Uniform error distribution
Exponential error distribution
We also compare the execution time of our approach with the techniques mentioned above. Because the results for the three distributions are analogous, we take the Normal distribution as a representative example. Figure 15 shows the CPU time per query under the Normal error distribution with the error standard deviation varying from 0.2 to 2. Varying the standard deviation has essentially no impact on the performance of these techniques. Our approach is slightly faster than DUST, UMA, and UEMA; Euclidean is the fastest. Note that we do not apply PROUD to wavelet synopses, which may be why it does not perform well.
Average CPU time per query for Normal error distribution with varying deviation.
In Figure 16, we report the CPU time per query under the Normal error distribution with the time series length varying between 50 and 1000; time series of different lengths are obtained by recombining the raw datasets. The figure shows that the execution time grows linearly with the time series length. Our approach is faster than DUST and PROUD, while Euclidean again performs best.
Average CPU time per query for Normal error distribution with varying length.
7. Conclusion
In this paper, we propose a new model of uncertain time series and a new approach to measuring the similarity between uncertain time series. It outperforms the state-of-the-art techniques, most of which rely on distance measures to evaluate similarity.
We validate the new approach under three kinds of error distributions, with error standard deviations spanning the range from 0.2 to 2, and compare it with techniques previously proposed in the literature. Our experiments are based on 19 real datasets. The results demonstrate that overlap measuring, based on observation intervals and central tendency, outperforms the more complex alternatives. If the expected value of the error is considered to be zero, the average of the samples may be a good estimate of the unknown value at each time slot, since it characterizes the center of the data distribution.
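To make the overlap idea concrete, the following sketch scores two uncertain series, each represented as one estimation interval per time slot, by the per-slot ratio of intersection length to union length, averaged over all slots. The Jaccard-style ratio and the helper names are our assumptions for illustration, not the exact formula of the paper:

```python
import numpy as np

def interval_overlap(a_lo, a_hi, b_lo, b_hi):
    """Length of the intersection of [a_lo, a_hi] and [b_lo, b_hi]."""
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

def overlap_similarity(X, Y):
    """Average per-slot overlap score between two uncertain series given as
    sequences of (low, high) estimation intervals; 1.0 = identical intervals
    at every slot, 0.0 = disjoint intervals at every slot."""
    scores = []
    for (a_lo, a_hi), (b_lo, b_hi) in zip(X, Y):
        inter = interval_overlap(a_lo, a_hi, b_lo, b_hi)
        union = max(a_hi, b_hi) - min(a_lo, b_lo)
        scores.append(inter / union if union > 0 else 1.0)
    return float(np.mean(scores))
```

The full method combines such overlaps for both the sample estimation intervals and the central tendency estimation intervals; this sketch shows only the interval-overlap core.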
In the future, we will explore more deeply the modeling of uncertain time series data when the expected value of the error is zero. We will also extend our work to indexing techniques for uncertain time series, and we will explore the influence of the wave characteristics of time series data as well as the management of large volumes of uncertain time series.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] M. Orang and N. Shiri, "An experimental evaluation of similarity measures for uncertain time series," in Proceedings of the 18th International Database Engineering and Applications Symposium (IDEAS '14), pp. 261–264, ACM, July 2014.
[2] M. Ceriotti, M. Corrà, L. D'Orazio, et al., "Is there light at the ends of the tunnel? Wireless sensor networks for adaptive lighting in road tunnels," in Proceedings of the 10th ACM/IEEE International Conference on Information Processing in Sensor Networks, pp. 187–198, April 2011.
[3] L. Krishnamurthy, R. Adler, P. Buonadonna, et al., "Design and deployment of industrial sensor networks: experiences from a semiconductor plant and the North Sea," in Proceedings of the 3rd ACM International Conference on Embedded Networked Sensor Systems (SenSys '05), pp. 64–75, ACM, New York, NY, USA, November 2005.
[4] M. Stonebraker, J. Becla, and D. DeWitt, "Requirements for science data bases and SciDB," in Proceedings of the 4th Biennial Conference on Innovative Data Systems Research (CIDR '09), Asilomar, Calif, USA, January 2009.
[5] D. Suciu, B. Howe, and A. Connolly, "Embracing uncertainty in large-scale computational astrophysics," in Proceedings of the 3rd VLDB Workshop on Management of Uncertain Data (MUD '09), pp. 63–77, Lyon, France, August 2009.
[6] T. T. L. Tran, L. Peng, B. Li, Y. Diao, and A. Liu, "PODS: a new model and processing algorithms for uncertain data streams," in Proceedings of the International Conference on Management of Data (SIGMOD '10), pp. 159–170, June 2010.
[7] J. Lin, E. Keogh, L. Wei, and S. Lonardi, "Experiencing SAX: a novel symbolic representation of time series."
[8] Q. Chen, L. Chen, and X. Lian, "Indexable PLA for efficient similarity search," in Proceedings of the 33rd International Conference on Very Large Data Bases, VLDB Endowment, September 2007.
[9] C. C. Aggarwal.
[10] Y. Zhao, C. Aggarwal, and P. S. Yu, "On wavelet decomposition of uncertain time series data sets," in Proceedings of the 19th ACM International Conference on Information and Knowledge Management (CIKM '10), pp. 129–138, Toronto, Canada, October 2010.
[11] S. R. Sarangi and K. Murthy, "DUST: a generalized notion of similarity between uncertain time series," in Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '10), pp. 383–392, July 2010.
[12] M.-Y. Yeh, K.-L. Wu, P. S. Yu, and M.-S. Chen, "PROUD: a probabilistic approach to processing similarity queries over uncertain data streams," in Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology (EDBT '09), pp. 684–695, ACM, March 2009.
[13] E. J. Thalassinakis and E. N. Dialynas, "A Monte-Carlo simulation method for setting the underfrequency load shedding relays and selecting the spinning reserve policy in autonomous power systems."
[14] L.-I. Tong, K. S. Chen, and H. T. Chen, "Statistical testing for assessing the performance of lifetime index of electronic components with exponential distribution."
[15] J. Aßfalg, H.-P. Kriegel, P. Kröger, and M. Renz, "Probabilistic similarity search for uncertain time series."
[16] M. Dallachiesa, B. Nushi, K. Mirylenka, and T. Palpanas, "Uncertain time-series similarity: return to the basics."
[17] J. D. Storey, "False discovery rates," in International Encyclopedia of Statistical Science, M. Lovric, Ed.
[18] H. F. Weisberg, "Central tendency and variability."
[19] F. E. Grubbs, "Procedures for detecting outlying observations in samples."
[20] R. H. Jones, "Exponential smoothing for multivariate time series."
[21] P. J. Brockwell and R. A. Davis.
[22] C. Chatfield.
[23] E. Keogh and X. Xi, The UCR Time Series Classification/Clustering Homepage, 2006, http://www.cs.ucr.edu/~eamonn/time_series_data/.
[24] R. E. Bellman and S. E. Dreyfus, Applied Dynamic Programming.
[25] I. Popivanov and R. J. Miller, "Similarity search over time-series data using wavelets," in Proceedings of the 18th IEEE International Conference on Data Engineering, pp. 212–221, IEEE Computer Society, San Jose, Calif, USA, 2002.
[26] Z. R. Struzik and A. Siebes, "The Haar wavelet transform in the time series similarity paradigm."
[27] K.-P. Chan and A. W.-C. Fu, "Efficient time series matching by wavelets," in Proceedings of the 15th International Conference on Data Engineering (ICDE '99), pp. 126–133, IEEE, Sydney, Australia, March 1999.
[28] I. Daubechies, Ten Lectures on Wavelets.