Prediction intervals (PIs), within which future observations of a time series are expected to fall, are a powerful tool for uncertainty modeling and forecasting. This paper presents the construction of optimal PIs using an enhanced extreme learning machine (ELM)-based method. While quality evaluation indices for the reliability and sharpness of PIs have been defined in the literature, this paper proposes a new PI evaluation index, robustness, which focuses on the forecasting error. Combining the three indices, a more comprehensive objective function is then formed for optimal PI construction. The paper also proposes an efficient hybrid quantum-behaved particle swarm optimization method with a bacterial foraging mechanism to optimize the parameters of the ELM model. The effectiveness of the additional robustness index and the proposed improved ELM approach in determining higher-quality PIs is demonstrated by applying them to PI construction for prediction on different datasets.
1. Introduction
Precise runoff prediction is needed for optimal water resource allocation and is one of the most important issues in the field of hydrology [1]. In the literature, most forecasting methods focus on developing accurate deterministic point forecasts for runoff time series [2–4]. In practical applications, single-point prediction is more popular than interval prediction because of its convenience. However, a point forecast provides the predicted runoff value without any information about the associated uncertainty. During decision making and operational planning, it is important to know how well the predictions match the real targets and how large the risk of a mismatch is. By definition, a PI is an interval consisting of a lower and an upper bound, within which the predicted value will fall with a certain probability [7]. Prediction Intervals (PIs) have therefore been widely accepted as excellent tools to quantify the uncertainties associated with point forecasts [5, 6, 8, 9].
In existing interval forecasting methods, the probability density of the point forecasting error is typically used to generate PIs [10, 11]. Without any prior knowledge of the point forecast, a novel method in [12] uses the two outputs of a neural network (NN) to directly construct the lower and upper bounds of PIs. Traditional NNs are widely used to construct PIs owing to their outstanding generalization performance and approximation ability [13–15], but conventional NN-based methods have inevitable limitations, such as overtraining [16]. As a new learning algorithm for training feedforward neural networks, the Extreme Learning Machine (ELM) [17] has been applied to construct PIs thanks to its iteration-free learning mechanism [18, 19]; it has also been shown to offer faster learning speed and better generalization ability than traditional NNs [20–22]. In these ELM-based prediction methods, PIs with associated confidence levels are generated by minimizing a PI evaluation function with respect to the parameters of the ELM. However, the problem of determining the optimal PI remains open.
For optimal PIs, both high reliability and high sharpness are expected [23]. A number of objective functions for training the prediction model have been proposed that assess the overall quality of PIs with a single value. The Coverage Width-based Criterion (CWC) [12] gives more weight to reliability through an exponential penalty term. Because reliability is usually regarded as the fundamental property that determines the validity of PIs, the authors of [12] later modified the CWC in [24, 25] by treating reliability as a hard constraint. However, this makes the sharpness value larger than necessary, since the hard reliability constraint is difficult to satisfy without sacrificing sharpness. Hence, PI construction still has room for improvement with respect to both the evaluation framework and the optimization objective function.
In this paper, a new PI evaluation index and a new objective function are formulated to comprehensively account for the properties of the obtained PIs. Usually, reliability and sharpness are considered the main required properties of interval forecasting. However, the error information for samples whose target values fall outside the bounds is a key measure of the potential risk in operational planning: actual values beyond the bounds of the PIs may cause overload or loss of load. Hence, lower error values also need to be considered in the PI optimization objective. An additional evaluation index focusing on this forecasting error information is therefore developed and defined as robustness. Combined with the reliability and sharpness indices, a more comprehensive objective function is formulated for training the ELM to obtain optimal PIs. To minimize the newly constructed objective function, this paper integrates a bacterial foraging mechanism into quantum-behaved particle swarm optimization to form a hybrid intelligent optimization method for determining the parameters of the prediction model.
The rest of this paper is organized as follows. Section 2 describes the required PIs evaluation indices and the proposed objective function. Section 3 describes the implementation of proposed PIs construction approach based on ELM and hybrid intelligent optimization method. Case studies results and comparisons of proposed approach with benchmarks are presented in Section 4. Finally, Section 5 concludes the whole paper.
2. Evaluation Framework for Optimal PIs
Reliability and sharpness are widely considered as the main required properties of optimal PIs [12–14]. In this section, an additional PIs evaluation index that focuses on the interval forecasting error is defined as robustness. A novel objective function for PIs optimization is then formulated to comprehensively account for the reliability, sharpness, and robustness properties of constructed PIs.
2.1. Reliability
According to the PI definition [12], a PI with lower bound $L^{\alpha}(x_i)$ and upper bound $U^{\alpha}(x_i)$ brackets the target value $t_i$ with nominal confidence level $100(1-\alpha)\%$:

$$P\left(t_i \in \left[L^{\alpha}(x_i),\, U^{\alpha}(x_i)\right]\right) = 100(1-\alpha)\% \quad (1)$$
Reliability refers to the statistical consistency between the PI coverage probability and the nominal confidence level. To assess the reliability of constructed PIs, the PI coverage probability (PICP) is proposed in [5] to describe the probability that target values are covered by the upper and lower bounds. The PICP is defined as

$$\mathrm{PICP} = \frac{1}{N_t}\sum_{i=1}^{N_t}\xi_i \quad (2)$$

where $N_t$ is the size of the test set and $\xi_i$ is the coverage indicator: if the target value $t_i$ falls between the lower bound $L_i$ and the upper bound $U_i$, then $\xi_i = 1$; otherwise $\xi_i = 0$:

$$\xi_i = \begin{cases} 1 & \text{if } t_i \in [L_i, U_i]\\ 0 & \text{if } t_i \notin [L_i, U_i] \end{cases} \quad (3)$$
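As a quick illustration, the PICP of Eqs. (2)–(3) is a direct vectorized computation; the sketch below uses made-up toy numbers purely for demonstration.

```python
import numpy as np

def picp(t, lower, upper):
    """PI coverage probability, Eqs. (2)-(3): the fraction of targets
    t_i that fall inside their interval [L_i, U_i]."""
    t, lower, upper = map(np.asarray, (t, lower, upper))
    xi = (t >= lower) & (t <= upper)   # coverage indicator xi_i
    return float(xi.mean())

# Toy data (illustrative only): 4 of the 5 targets are covered.
t    = [1.0, 2.0, 3.0, 4.0, 5.0]
low  = [0.5, 1.5, 3.2, 3.5, 4.5]
high = [1.5, 2.5, 3.8, 4.5, 5.5]
print(picp(t, low, high))  # 0.8
```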
2.2. Sharpness
Sharpness refers to the ability of PIs to concentrate the probabilistic forecast information about future outcomes [5]. For optimal PIs, the target $t_i$ is expected to lie within the bounds with high reliability. This can be achieved trivially by widening the bounds if only the PICP is considered, which deprives the decision-maker of useful forecasting information. The PI normalized average width (PINAW) index is therefore applied in [12] to quantitatively measure the sharpness of PIs:

$$\mathrm{PINAW} = \frac{1}{N_t R}\sum_{i=1}^{N_t}\left(U_i - L_i\right) \quad (4)$$

where $R$ is the range of the targets.
2.3. Robustness
Robustness refers to the ability of PIs to resist the probabilistic forecast error during execution. On the basis of a satisfactory reliability requirement and a narrow interval width, this paper proposes that the forecasting errors for samples whose target values fall outside the PI bounds should also diminish towards zero as closely as possible. Accordingly, the Average Width Error (AWE) is proposed as a key measure of the potential operational risk:

$$\mathrm{AWE} = \frac{1}{\alpha N_t R}\sum_{i=1}^{N_t} E_i \quad (5)$$

$$E_i = \begin{cases} t_i - U_i & \text{if } t_i > U_i\\ 0 & \text{if } L_i \le t_i \le U_i\\ L_i - t_i & \text{if } t_i < L_i \end{cases} \quad (6)$$

where $\alpha N_t$ is the nominal number of error samples and $E_i$ is the width error of each sample.
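The sharpness and robustness indices of Eqs. (4)–(6) can be sketched together as follows; the toy arrays and the value of $R$ are illustrative only.

```python
import numpy as np

def pinaw(lower, upper, R):
    """PI normalized average width, Eq. (4)."""
    return float(np.mean(np.asarray(upper) - np.asarray(lower)) / R)

def awe(t, lower, upper, alpha, R):
    """Average Width Error, Eqs. (5)-(6): E_i is the distance of t_i beyond
    the violated bound (zero when covered), summed and normalized by the
    nominal number of error samples alpha*Nt and the target range R."""
    t, lower, upper = map(np.asarray, (t, lower, upper))
    E = np.where(t > upper, t - upper,
                 np.where(t < lower, lower - t, 0.0))
    return float(E.sum() / (alpha * len(t) * R))

# One target (3.0) exceeds its upper bound 2.8 by 0.2.
t, low, up = [1.0, 2.0, 3.0], [0.5, 1.5, 2.0], [1.5, 2.5, 2.8]
print(pinaw(low, up, R=4.0))              # mean width 0.9333 / 4  ~ 0.2333
print(awe(t, low, up, alpha=0.1, R=4.0))  # 0.2 / (0.1 * 3 * 4)   ~ 0.1667
```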
2.4. Objective Function
A number of objective functions have been proposed in the literature [12, 18, 24, 25] to assess the overall quality of PIs based on the reliability and sharpness indices. The Coverage Width-based Criterion (CWC) [12] gives more weight to the deviation of the PICP: if it falls below the nominal confidence level of Eq. (1), a corresponding exponential penalty term is added. With this objective function, however, the coefficient of the penalty term is hard to choose. The Constrained CWC (CCWC) [24] treats the PICP as a hard constraint, but this also results in wide PIs. The interval score criterion (ISC) [18] scores each prediction point according to modified sharpness information. Different from the above objective functions, the proposed objective function seeks the best combination of reliability, sharpness, and robustness. Taking the robustness criterion into account together with reliability and sharpness, a new objective function for PI optimization is defined as

$$\min F = \left(1 + \gamma(\mathrm{PICP})\,\sigma\right)\mathrm{PINAW} + \mathrm{AWE} \quad (7)$$

where $\sigma$ is a penalty coefficient that enforces reliability and $\gamma(\mathrm{PICP})$ is a function of the PICP. If the PICP is less than the PI nominal confidence level, $\gamma(\mathrm{PICP}) = 1$ and the penalty term is activated to enforce reliability; otherwise $\gamma(\mathrm{PICP}) = 0$ and the optimization maximizes the sharpness and robustness of the constructed PIs.
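A minimal sketch of the objective of Eq. (7), assuming $\gamma(\mathrm{PICP})$ is the 0/1 switch described in the text and $\sigma = 10$ as used later in the case studies; all numeric inputs below are toy values.

```python
def pi_objective(picp, pinaw, awe, pinc, sigma=10.0):
    """Objective of Eq. (7): the sigma penalty on PINAW is switched on
    (gamma = 1) only when coverage falls below the nominal level PINC."""
    gamma = 1.0 if picp < pinc else 0.0
    return (1.0 + gamma * sigma) * pinaw + awe

# Under-coverage is penalized heavily; adequate coverage is not.
print(pi_objective(picp=0.87, pinaw=0.08, awe=0.01, pinc=0.90))  # ~ 0.89
print(pi_objective(picp=0.91, pinaw=0.08, awe=0.01, pinc=0.90))  # ~ 0.09
```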
An examination of Eq. (7) shows that this objective function gives the reliability index priority over sharpness and robustness while still being complemented by the other two indices. It aims to obtain high-quality PIs with small PINAW and AWE values subject to a satisfied confidence level. Compared with existing objective functions, the new objective function provides a more comprehensive PI evaluation for the construction method presented in the next section.
3. Hybrid Intelligent Optimization Method for Optimal PIs Construction
To construct PIs for the runoff time series, the ELM-based PIs construction method can be applied. The construction of PIs based on ELM model is described in Section 3.1 and the theory and implementation of hybrid intelligent optimization method in optimizing the objective function are described in Sections 3.2 and 3.3 below, respectively.
3.1. Construction of PIs Based on Extreme Learning Machine
As a novel learning algorithm for training single hidden-layer feedforward neural networks (SLFNs), the ELM generates the hidden node parameters randomly and determines the output weight matrix analytically [16]. For $N$ distinct samples $\{x_i, t_i\}_{i=1}^{N}$, an SLFN with $K$ hidden nodes and activation function $g(x)$ is mathematically modeled as [17]

$$\sum_{j=1}^{K}\beta_j\, g\left(w_j \cdot x_i + b_j\right) = y_i,\quad i = 1,\ldots,N \quad (8)$$

where $w_j$ is the weight vector connecting the $j$-th hidden neuron and the input nodes, $\beta_j$ is the weight vector connecting the $j$-th hidden neuron and the output nodes, and $b_j$ is the bias of the $j$-th hidden node. If the SLFN can approximate these $N$ samples with zero error, then

$$\sum_{i=1}^{N}\left\|y_i - t_i\right\| = 0 \quad (9)$$
i.e., there exist $\beta_j$, $w_j$, and $b_j$ such that

$$\sum_{j=1}^{K}\beta_j\, g\left(w_j \cdot x_i + b_j\right) = t_i,\quad i = 1,\ldots,N \quad (10)$$
The above equations can be written compactly as [17]

$$H\beta = T \quad (11)$$

$$H = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_K \cdot x_1 + b_K)\\ \vdots & \ddots & \vdots\\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_K \cdot x_N + b_K) \end{bmatrix}_{N \times K} \quad (12)$$

where $H$ is the hidden-layer output matrix and $T$ is the matrix of targets. Unlike standard NNs, in which all parameters need to be tuned, in the ELM the input weights and hidden biases are randomly assigned and fixed. The hidden-layer output matrix $H$ can then be computed, and the approximations $w^*$, $b^*$, and $\beta^*$ are obtained such that

$$\left\|H(w_1^*,\ldots,w_K^*, b_1^*,\ldots,b_K^*)\,\beta^* - T\right\| = \min_{\beta}\left\|H(w_1,\ldots,w_K, b_1,\ldots,b_K)\,\beta - T\right\| \quad (13)$$
With the input weights and hidden-layer biases of the ELM fixed, training an SLFN is simply equivalent to finding the smallest-norm least-squares solution of the above linear system:

$$\hat{\beta} = H^{\dagger}T \quad (14)$$

where $H^{\dagger}$ is the Moore–Penrose generalized inverse of the hidden-layer output matrix $H$.
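The randomized training of Eqs. (8)–(14) reduces to a few lines of linear algebra. The sketch below fits a toy regression target; the tanh activation, random seed, and node count are illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, K):
    """ELM training, Eqs. (8)-(14): input weights W and biases b are random
    and fixed; the output weights are the least-squares solution of
    H beta = T via the Moore-Penrose pseudoinverse (Eq. (14))."""
    W = rng.standard_normal((X.shape[1], K))   # random input weights
    b = rng.standard_normal(K)                 # random hidden biases
    H = np.tanh(X @ W + b)                     # hidden-layer output matrix, Eq. (12)
    beta = np.linalg.pinv(H) @ T               # Eq. (14)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noise-free sine on [0, 1] with K = 20 hidden nodes.
X = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
T = np.sin(2.0 * np.pi * X)
W, b, beta = elm_train(X, T, K=20)
max_err = np.abs(elm_predict(X, W, b, beta) - T).max()
```

Because only $\beta$ is solved for, training is a single pseudoinverse computation, which is the iteration-free property the paper exploits.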
To construct PIs for the runoff time series, the ELM-based interval forecasting method, with the PIs generated at the output layer of the network, is illustrated in Figure 1: the first and second outputs of the ELM model correspond to the upper and lower bounds of the PIs, respectively [6].
Figure 1: The structure of the ELM model for runoff interval forecasting.
3.2. An Efficient Hybrid Intelligent Optimization Method for PIs Optimization
The optimization for the above ELM-based PI construction aims to minimize the objective function of Eq. (7) with respect to the output weights β of the ELM model. In minimizing this constrained optimization function, conventional methods, such as Dynamic Programming (DP) and the Particle Swarm Optimization (PSO) algorithm, often suffer from being trapped in local optima [26–30]. Inspired by quantum mechanics, a new version of PSO named Quantum-behaved Particle Swarm Optimization (QPSO) [31] was proposed, owing to its guaranteed global convergence; however, it still needs a local search mechanism to balance exploitation and exploration [32]. This paper integrates the bacterial foraging mechanism [33] into QPSO to form an efficient hybrid intelligent optimization method for determining the parameters of the ELM prediction model based on the above objective function.
3.2.1. The Description for Quantum-Behaved Particle Swarm Optimization Method
The QPSO algorithm assumes that there is a quantum delta potential well on each dimension centered at the local attractor point $P_d$ [31]:

$$P_d = \frac{\varphi_{1d}P_{id} + \varphi_{2d}P_{gd}}{\varphi_{1d} + \varphi_{2d}} \quad (15)$$

where $i = 1,2,\ldots,M$ indexes the particles of the population, $d = 1,2,\ldots,D$ indexes the dimensions of each particle, $P_{id}$ is the best position of the $i$-th particle, $P_{gd}$ is the position of the global best particle, and $\varphi_{1d}$ and $\varphi_{2d}$ are random numbers in the range $(0,1)$.
In QPSO, every particle has quantum behavior, with its state formulated by a wave function [34]. The probability density function of a particle's position can be deduced from the Schrödinger equation, and the measurement of the position, from the quantum state to the classical one, can then be implemented using Monte Carlo simulation. The position of the $i$-th particle is updated as follows:

$$X_{id} = \begin{cases} P_d + \delta\,\left|mbest_d - X_{id}\right|\,\ln\dfrac{1}{u}, & 0.5 < h < 1\\[4pt] P_d - \delta\,\left|mbest_d - X_{id}\right|\,\ln\dfrac{1}{u}, & 0 < h \le 0.5 \end{cases} \quad (16)$$

where $h$ and $u$ are two random numbers distributed uniformly in $(0,1)$, and $\delta$ is generally suggested to decrease linearly from $\delta_{\max}$ to $\delta_{\min}$; $mbest$ is the mean best position, defined as the mean of the $P_i$ positions of all particles:

$$mbest = \frac{1}{M}\sum_{i=1}^{M}P_i = \left(\frac{1}{M}\sum_{i=1}^{M}P_{i1},\ \frac{1}{M}\sum_{i=1}^{M}P_{i2},\ \ldots,\ \frac{1}{M}\sum_{i=1}^{M}P_{iD}\right) \quad (17)$$
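A vectorized sketch of the update in Eqs. (15)–(17); the clipping of $u$ away from zero is a numerical safeguard added here, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def qpso_step(X, Pbest, g, delta):
    """One QPSO position update.  X, Pbest: (M, D) arrays of current
    positions and personal bests; g: index of the global best particle;
    delta: contraction-expansion coefficient."""
    M, D = X.shape
    phi1, phi2 = rng.random((M, D)), rng.random((M, D))
    P = (phi1 * Pbest + phi2 * Pbest[g]) / (phi1 + phi2)  # local attractors, Eq. (15)
    mbest = Pbest.mean(axis=0)                            # mean best position, Eq. (17)
    u = np.maximum(rng.random((M, D)), 1e-12)             # keep log(1/u) finite
    h = rng.random((M, D))
    sign = np.where(h > 0.5, 1.0, -1.0)                   # the two branches of Eq. (16)
    return P + sign * delta * np.abs(mbest - X) * np.log(1.0 / u)

X = rng.uniform(-1.0, 1.0, (100, 7))   # 100 particles, 7 dimensions (toy sizes)
Pbest = X.copy()
X_new = qpso_step(X, Pbest, g=0, delta=0.9)
```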
3.2.2. The Description for Hybrid Intelligent Optimization Method
It is evident from Eq. (16) that each component of a particle's updated position is determined by the local attractor $P_d$ and a disturbance term. As in the original PSO algorithm, convergence of the particles' positions to their local attractors guarantees the convergence of the particles [28]. As a result, the local attractors in Eq. (16) gather toward the global best value, which in turn makes each particle's current position converge to the global best position. The local attractor in Eq. (15) can be rewritten as

$$P_d = \frac{\varphi_{1d}}{\varphi_{1d} + \varphi_{2d}}\left(P_{id} - P_{gd}\right) + P_{gd} \quad (18)$$
It can be seen that the global best position guides the movement of the local attractors. If, during iteration, the global best position is trapped at a local optimum, it will mislead the convergence of the particles' current positions, resulting in premature convergence. To address this problem, this paper develops a Hybrid Quantum-behaved Particle Swarm Optimization (HQPSO) algorithm by introducing a bacterial foraging mechanism to update the global best position. As investigated in [35, 36], one of the major driving forces of Bacterial Foraging Optimization (BFO) is the chemotactic movement. However, the chemotaxis employed by BFO usually results in sustained oscillation, especially on flat fitness landscapes, owing to its fixed chemotactic step size.
To balance global and local searching capability, this paper proposes a dynamic approximation control strategy to update the chemotactic step size. For a particle, the loss of global searching capability means it only flies within a small space, which results in premature convergence, while the loss of local searching capability means that its possible moves no longer have a perceptible effect on its fitness, which results in slow convergence [37]. To overcome these shortcomings, the proposed strategy leaves a large search area in the early iterations and shrinks the search range adaptively in the later iterations.
Suppose $P_{g,d}(s)$ represents the global best particle at the $s$-th bacterial foraging operation and $C(s)$ is the size of the chemotactic step taken in the random direction specified by the tumble. The update of the global best position based on the bacterial foraging mechanism and dynamic approximation step control can be expressed as

$$P_{g,d}(s+1) = P_{g,d}(s) + C(s)\,\frac{\Delta(s)}{\sqrt{\Delta^{T}(s)\Delta(s)}},\quad s = 1,2,\ldots,N_c \quad (19)$$

$$C(0) = X_d^{\max} - X_d^{\min} \quad (20)$$

$$C(s+1) = \lambda\, C(s) \quad (21)$$

where $\Delta$ is a vector in a random direction whose elements lie in $[-1,1]$, $N_c$ is the number of chemotactic steps, $X_d$ is the global best position in the $d$-th dimension, $X_d^{\max}$ and $X_d^{\min}$ are the bounds of the $d$-th dimension, and $\lambda$ is a range parameter that controls the decrease rate of the chemotactic step size.
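A sketch of one tumble-and-move step (Eq. (19)) together with the dynamic step-size schedule of Eqs. (20)–(21), using the bounds and parameter values quoted later for the case studies ($[-100, 100]$, $\lambda = 0.5$, $N_c = 15$):

```python
import numpy as np

rng = np.random.default_rng(2)

def tumble_and_move(pg, C):
    """Eq. (19): move the global best position by step C along a random
    unit direction Delta / sqrt(Delta^T Delta), with Delta in [-1, 1]^D."""
    Delta = rng.uniform(-1.0, 1.0, size=pg.shape)
    return pg + C * Delta / np.sqrt(Delta @ Delta)

# Dynamic approximation control, Eqs. (20)-(21): start from the full search
# range and shrink geometrically by lambda at each chemotactic step.
x_min, x_max, lam, Nc = -100.0, 100.0, 0.5, 15
C = x_max - x_min                      # C(0) = 200
for s in range(Nc):
    C = lam * C                        # C(s+1) = lambda * C(s)
print(C)                               # 200 * 0.5**15, a step of ~0.006
```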
3.3. Implementation of the HQPSO-Based PIs Construction
In this section, the proposed HQPSO algorithm is applied to minimize the objective function for optimal PI construction. During the optimization process, the output weights $\beta_j\ (j=1,2,\ldots,K)$ of the ELM are the decision variables, and the objective function value of each particle evaluates the quality of the obtained PIs. Each particle $[x_{i1}, x_{i2},\ldots,x_{iK}]$ in the HQPSO algorithm has $K$ dimensions, and the value of each component $X_{id}$ represents the value of an output weight $\beta_j$. The steps of optimal PI construction are as follows:
(1) Normalization. Normalize the runoff database to [-1,1], and then construct the PI optimization model.
(2) Initialization. The particle values [xi1, xi2,…,xiK] are randomly initialized within the initial bound [-1,1].
(3) Construct PIs and calculate the fitness F. Construct PIs based on the parameters βj (j=1,2,…,K) and then calculate the objective function value F for each particle.
(4) Update the particle values. The parameters βj of each particle are updated by Eq. (16).
(5) Update the local attractor points. Construct a new ELM-based model using the updated β, and then calculate the objective function value for the new particles. If this new objective function value is smaller than that of Pi, then Pi is updated by the new particle; furthermore, if it is smaller than that of Pg, then Pg is updated.
(6) Update the global best particle. Using the bacterial foraging mechanism, the position of the global best particle is updated as in Algorithm 1.
(7) Loop. If the maximum number of iterations has not been reached, go to step (4). Otherwise, the iteration process finishes and the optimal output weight matrix of the ELM is obtained.
Algorithm 1: Update of the global best particle by bacterial foraging.
(b) Tumble: generate a random vector Δ, each element Δd being a random number on [-1,1].
(c) Move: update Pg,d(s+1) according to Eq. (19) and calculate its new fitness F.
(d) Swim: let m = 0 (counter for swimming length); while m is smaller than the maximum swimming length Ns, let m = m + 1. If F is smaller than fPg, update Pg; else, set m to the maximum swimming length.
(e) Loop: if d < D, set d = d + 1 and go to step (b), continuing the foraging operation for the next dimension variable.
(f) Loop: if s < Nc, set s = s + 1, update the chemotactic step size C(s) according to Eq. (21), and go to step (b). Else, terminate the bacterial foraging operation.
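Steps (3)–(5) of the construction procedure hinge on mapping one particle to a PI fitness value. The sketch below is a hypothetical illustration of that mapping: here the particle carries the 2K output weights of the two-output ELM (one column per bound), and the metric names follow the definitions of Section 2; the toy data are random.

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_fitness(particle, H, t, pinc=0.9, sigma=10.0):
    """Score one particle: reshape it into the K x 2 output weights of the
    two-output ELM, read the columns as the PI bounds, and evaluate the
    objective of Eq. (7) from PICP, PINAW, and AWE."""
    K = H.shape[1]
    B = particle.reshape(K, 2)
    out = H @ B
    upper = np.maximum(out[:, 0], out[:, 1])   # enforce upper >= lower
    lower = np.minimum(out[:, 0], out[:, 1])
    R = t.max() - t.min()
    picp = np.mean((t >= lower) & (t <= upper))
    pinaw = np.mean(upper - lower) / R
    E = np.where(t > upper, t - upper, np.where(t < lower, lower - t, 0.0))
    awe = E.sum() / ((1.0 - pinc) * len(t) * R)
    gamma = 1.0 if picp < pinc else 0.0
    return (1.0 + gamma * sigma) * pinaw + awe

# Toy evaluation: random hidden-layer outputs for 50 samples, K = 7 nodes.
H = np.tanh(rng.standard_normal((50, 7)))
t = rng.uniform(-1.0, 1.0, 50)
F = particle_fitness(rng.uniform(-1.0, 1.0, 14), H, t)
```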
4. Results and Discussion

4.1. Datasets
The effectiveness of the additional robustness index and the proposed ELM optimization approach in determining higher-quality PIs is demonstrated by applying them to PI construction for the prediction of runoff time series. The runoff datasets are from the Zhexi hydropower station, located in Hunan province. The hourly inflow runoff data measured in 2015 are adopted in this research; the first six months are used for training and the rest of the data for testing. All data are normalized to [-1,1] before constructing the interval forecasting model.
For training the proposed PI construction method, as described in Section 3.1, the ELM generates the hidden node parameters w and b randomly; the only remaining task for the user is to determine the output weights β and the inputs of the ELM model. In this paper, the inputs of the ELM are chosen based on partial autocorrelation function (PACF) analysis [38], a useful tool for analyzing the correlation between the candidate variables and the historical datasets. After normalizing the training data, the PACF values of the inflow runoff data are calculated and shown in Figure 2. It can be seen that the lag order of the inflow runoff time series is three, so the inputs of the ELM model are [xi-1, xi-2, xi-3]. The number of neurons in the hidden layer is set to seven according to the Kolmogorov theorem (i.e., K = 2n + 1).
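The PACF-based lag selection can be reproduced with a short Durbin–Levinson recursion on the sample autocorrelations; the AR(1) series below is synthetic, used only to show that significant lags stand out.

```python
import numpy as np

def pacf(x, max_lag):
    """Sample PACF via the Durbin-Levinson recursion on the sample ACF."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    r = r[:max_lag + 1] / r[0]                       # sample autocorrelations
    out, phi = [1.0], []                             # phi holds phi_{k-1, j}
    for k in range(1, max_lag + 1):
        num = r[k] - sum(phi[j] * r[k - 1 - j] for j in range(k - 1))
        den = 1.0 - sum(phi[j] * r[j + 1] for j in range(k - 1))
        pk = num / den                               # phi_{k,k} = PACF at lag k
        phi = [phi[j] - pk * phi[k - 2 - j] for j in range(k - 1)] + [pk]
        out.append(pk)
    return np.array(out)

# Synthetic AR(1) series: only lag 1 should carry a large partial correlation.
rng = np.random.default_rng(4)
x = np.zeros(2000)
for i in range(1, 2000):
    x[i] = 0.8 * x[i - 1] + rng.standard_normal()
p = pacf(x, 3)
```

In practice, lags whose PACF values exceed the approximate confidence band $\pm 1.96/\sqrt{N}$ are kept as model inputs, which is how the lag order of three is read off Figure 2.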
Figure 2: PACF analysis of original data. (a) Original runoff data; (b) PACF values of runoff data.
The training process during PI optimization shows the convergence behavior of the constructed PIs for the global best particle, whose aim is to find the best values for the output weights of the ELM. To evaluate the performance of the proposed algorithm, three methods, PSO, QPSO, and HQPSO, are applied to minimize the objective function in Eq. (7). In the case studies, the PI nominal confidence (PINC) is set to 90%, and the coverage probability for the training phase is set 1–2% higher than the nominal confidence level, as recommended in [11]. The penalty coefficient σ is set to 10. The major parameters of the three optimization methods are given in Table 1. The number of iterations and the population size are set to 500 and 100, respectively, for all three methods.
Table 1: Parameters for the optimization methods.

| Method | Parameter | Value |
|--------|-----------|-------|
| PSO    | c1        | 2     |
|        | c2        | 2     |
|        | wmax      | 0.9   |
|        | wmin      | 0.5   |
| QPSO   | δmax      | 0.9   |
|        | δmin      | 0.5   |
| HQPSO  | Nc        | 15    |
|        | Ns        | 5     |
|        | λ         | 0.5   |
4.2. Performance of HQPSO-Based PIs Optimization
For the training process of PI optimization, convergence curves for all the case studies are shown in Figure 3; the y-axis gives the PI construction objective value according to Eq. (7). The proposed HQPSO algorithm shows much better global searching ability and convergence than the PSO and QPSO methods. At the beginning of the iterations, owing to the introduction of the local searching mechanism, the fitness of the global best particle under HQPSO decreases rapidly, which ensures the efficiency of the swarm search. In the later evolution process, premature convergence can appear owing to the decline of population diversity. Among the three optimization methods, PSO performs worst since it is easily trapped in a local optimum. Compared with PSO, QPSO has better global searching ability, but it achieves only a small improvement from 300 to 500 iterations owing to its poor local searching ability. In contrast, the fitness of the global best particle under HQPSO still decreases over the same iteration range, as shown in Figure 3.
Figure 3: Convergence curves of different algorithms for runoff interval forecasting.
The original QPSO and HQPSO methods employ the same parameter setup, except for the chemotactic step size and swimming length in the bacterial foraging mechanism. The chemotactic step size C(s) was kept at 0.1 in classical BFO [29]. Under the dynamic approximation control strategy, the chemotactic step size declines exponentially as the bacterial foraging process advances. For PI optimization, the range of the output weights β is set to [-100,100]. To obtain optimal accuracy, the chemotactic steps in the later iterations need to be reduced to nearly 0.001; hence the value of λ can be calculated from a given value of Nc. Figure 4 illustrates the relationship between the objective function and the number of generations for different numbers of chemotactic steps Nc. As evident from the results, when the chemotactic step size is fixed at 0.1, the objective function decreases rapidly at the beginning but suffers from premature convergence. With the dynamic step-size control strategy, the objective function converges faster for larger Nc.
Figure 4: Performance value for different chemotactic steps.
Figure 5 illustrates the relationship between the PI optimization objective function and the number of generations for different swimming lengths Ns of the bacteria. As evident, a larger swimming length makes the objective function decrease and converge faster. However, larger chemotactic steps and swimming lengths lead to higher computational time. From Figure 5, it can also be observed that, for a longer iteration process (e.g., 500 generations and above), small changes in the chemotactic steps and swimming lengths do not affect the optimization results, so Ns and Nc can be set to smaller values to keep the computational time down. Conversely, larger values can be used for a shorter iteration process, e.g., 200 generations.
Figure 5: Performance value for different swimming lengths.
4.3. Discussions on Different Objective Functions
The ELM has already been demonstrated to have faster learning speed and better generalization ability than traditional NNs [20–22]. In this section, the effectiveness of the proposed PI objective function in Eq. (7) is demonstrated by comparing it with the CWC, the constrained CWC (CCWC), and the interval score-based criterion (ISC). The merits of the CWC and ISC have previously been established against conventional interval forecasting methods such as exponential smoothing and quantile regression, so this paper focuses on the effectiveness of the proposed objective function. PIs are constructed with 90% and 80% nominal confidence levels, respectively. The numerical results of the different case studies are given in Table 2, including the reliability index PICP, the sharpness index PINAW, and the robustness index AWE. At both confidence levels, Table 2 shows that all methods, with their different objective functions, provide fairly satisfactory coverage probability; this is expected, since all of these cost functions take the reliability index as the primary requirement. Regarding sharpness and robustness, the PINAW values of the proposed objective function are the smallest in all case studies, indicating the highest sharpness of the obtained PIs. For the CWC and CCWC, the penalty function is related only to the PICP, and the hard PICP constraint in the CCWC is difficult to satisfy without sacrificing width, which makes the PINAW values larger than necessary. Moreover, the CWC and CCWC objective functions ignore the error information and focus only on narrower PIs, which leads to much larger AWE values.
For the ISC, the generated PIs show fair robustness owing to the introduction of the interval score, but the interval score cannot quantitatively distinguish the contributions of robustness and sharpness, and the resulting PIs are generally wider than those of the proposed approach, indicating lower sharpness.
Table 2: PI evaluation results with different confidence levels.

| Method | PICP (%), 90% | PINAW (%), 90% | AWE (%), 90% | PICP (%), 80% | PINAW (%), 80% | AWE (%), 80% |
|--------|---------------|----------------|--------------|---------------|----------------|--------------|
| ELM-CWC  | 90.63 | 10.65 | 1.98 | 81.28 | 7.53 | 3.48 |
| ELM-CCWC | 91.23 | 11.30 | 1.17 | 83.72 | 8.51 | 3.29 |
| ELM-ISC  | 90.80 | 9.67  | 1.30 | 80.45 | 6.96 | 2.57 |
| Proposed approach | 90.15 | 7.88 | 1.02 | 80.16 | 5.01 | 2.03 |
The PIs obtained for the runoff datasets with PINC = 90% and PINC = 80% are displayed in Figure 6, where a large percentage of the actual measured data are covered by the constructed PIs. It can be observed that the lower and upper bounds follow the changes of the real test samples well, which demonstrates that the proposed approach is effective in constructing optimal PIs.
Figure 6: PIs with different nominal confidence levels (90% and 80%) for the runoff datasets.
4.4. Performance Variation for Different Application Dataset
The effectiveness of the proposed ELM optimization approach in determining higher-quality PIs is further demonstrated by applying it to PI construction for load demand prediction. The load demand data are from the Tasmania regional market in the Australian National Electricity Market (ANEM) [39]. The chosen period is from January 2016 to June 2016 with a half-hour trading interval; the first three months are used for training the ELM model and the rest of the data for testing the prediction performance of the proposed algorithm. Half-hour-ahead PIs are constructed for the load demand. The numerical results of the different case studies are given in Table 3 in terms of the reliability index PICP, the sharpness index PINAW, and the robustness index AWE. At both confidence levels, Table 3 shows that the proposed method generally has fairly satisfactory coverage probability and lower PINAW and AWE values (i.e., higher sharpness and robustness) than the traditional methods.
Table 3: PI evaluation results with different confidence levels for the load demand data.

| Method | PICP (%), 90% | PINAW (%), 90% | AWE (%), 90% | PICP (%), 80% | PINAW (%), 80% | AWE (%), 80% |
|--------|---------------|----------------|--------------|---------------|----------------|--------------|
| ELM-CWC  | 90.65 | 15.07 | 3.28 | 80.65 | 11.42 | 4.61 |
| ELM-CCWC | 91.19 | 16.23 | 3.15 | 79.26 | 13.92 | 4.42 |
| ELM-ISC  | 89.88 | 13.95 | 1.83 | 80.42 | 12.08 | 3.01 |
| Proposed approach | 90.56 | 13.28 | 1.66 | 80.93 | 10.62 | 2.56 |
The PIs obtained for the load demand data with PINC = 90% are displayed in Figure 7, where a large percentage of the actual measured data are covered by the constructed PIs. It can also be observed that the lower and upper bounds follow the changes of the real test samples well in all case studies. The experimental results therefore demonstrate the effectiveness of the proposed ELM-based interval forecasting approach in improving the quality of the obtained PIs with a combination of higher reliability, sharpness, and robustness.
Figure 7: PIs with different nominal confidence levels (90% and 80%) for the load demand data.
5. Conclusions
This paper has applied and extended the ELM-based PI construction method for PI optimization. For the training of the ELM model, a new evaluation index focusing on the PI error information has been developed to evaluate the robustness of PIs, and a novel objective function for PI optimization has been formulated to comprehensively account for reliability, sharpness, and robustness. To solve the proposed nonlinear objective function in the training phase of the ELM, an improved QPSO with a bacterial foraging mechanism has been proposed to minimize the objective function and optimize the parameters of the ELM model. To balance exploitation and exploration of the search space, a dynamic approximation control strategy has also been developed to update the chemotactic step size. The effectiveness of the proposed method has been validated by using it to construct optimal PIs for different datasets. The results show that the proposed ELM-based method provides much higher-quality interval forecasting information.
Data Availability
The data related to this research are available from the corresponding author upon request; all data generated or analyzed during the study are also included in the published paper.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors are grateful to Qingpu Li and Jing Luo for proofreading this manuscript and for their advice on English writing. This work was supported by the Key Science and Technology Foundation of SGCC (Grant no. 5216A015001M).
[1] Peng D., Qiu L., Fang J., Zhang Z., "Quantification of climate changes and human activities that impact runoff in the Taihu Lake basin, China," Mathematical Problems in Engineering, 2016, Article ID 2194196.
[2] Jakubcová M., Máca P., Pech P., "Parameter estimation in rainfall-runoff modelling using distributed versions of particle swarm optimization algorithm," 2015, Article ID 968067.
[3] Abdollahi S., Raeisi J., Khalilianpour M., Ahmadi F., Kisi O., "Daily mean streamflow prediction in perennial and non-perennial rivers using four data driven techniques," 2017, 31(15), 4855–4874, doi: 10.1007/s11269-017-1782-7.
[4] Huang C., Newman A. J., Clark M. P., Wood A. W., Zheng X., "Evaluation of snow data assimilation using the ensemble Kalman filter for seasonal streamflow prediction in the western United States," 2017, 21, 635–650, doi: 10.5194/hess-21-635-2017.
[5] Hwang J. T. G., Ding A. A., "Prediction intervals for artificial neural networks," 1997, 92(438), 748–757, doi: 10.1080/01621459.1997.10474027.
[6] Zhang G., Wu Y., Wong K. P., Xu Z., Dong Z. Y., Iu H. H.-C., "An advanced approach for construction of optimal wind power prediction intervals," 2015, 30(5), 2706–2715, doi: 10.1109/TPWRS.2014.2363873.
[7] Chatfield C., "Calculating interval forecasts," 1993, 11(2), 121–135.
[8] Shrivastava N. A., Khosravi A., Panigrahi B. K., "Prediction interval estimation of electricity prices using PSO-tuned support vector machines," 2015, 11(2), 322–331, doi: 10.1109/TII.2015.2389625.
[9] Ni Q., Zhuang S., Sheng H., Wang S., Xiao J., "An optimized prediction intervals approach for short term PV power forecasting," 2017, 10(10), 1669, doi: 10.3390/en10101669.
[10] Chen L., Zhang Y., Zhou J., Singh V. P., Guo S., Zhang J., "Real-time error correction method combined with combination flood forecasting technique for improving the accuracy of flood forecasting," 2015, 521, 157–169, doi: 10.1016/j.jhydrol.2014.11.053.
[11] Zhang J., Chen L., Singh V. P., Cao H., Wang D., "Determination of the distribution of flood forecasting error," 2015, 75(2), 1389–1402, doi: 10.1007/s11069-014-1385-z.
[12] Khosravi A., Nahavandi S., Creighton D., Atiya A. F., "Lower upper bound estimation method for construction of neural network-based prediction intervals," 2011, 22(3), 337–346, doi: 10.1109/TNN.2010.2096824.
[13] Taormina R., Chau K.-W., "ANN-based interval forecasting of streamflow discharges using the LUBE method and MOFIPS," 2015, 45, 429–440, doi: 10.1016/j.engappai.2015.07.019.
[14] Kasiviswanathan K. S., Sudheer K. P., "Methods used for quantifying the prediction uncertainty of artificial neural network based hydrologic models," 2017, 31(7), 1659–1670, doi: 10.1007/s00477-016-1369-5.
[15] Ye L., Zhou J., Zeng X., Guo J., Zhang X., "Multi-objective optimization for construction of prediction interval of hydrological models based on ensemble simulations," 2014, 519, 925–933, doi: 10.1016/j.jhydrol.2014.08.026.
[16] Wan C., Xu Z., Pinson P., Dong Z. Y., Wong K. P., "Probabilistic forecasting of wind power generation using extreme learning machine," 2014, 29(3), 1033–1044, doi: 10.1109/TPWRS.2013.2287871.
[17] Huang G. B., Zhu Q. Y., Siew C. K., "Extreme learning machine: theory and applications," 2006, 70(1–3), 489–501, doi: 10.1016/j.neucom.2005.12.126.
[18] Wan C., Xu Z., Pinson P., Dong Z. Y., Wong K. P., "Optimal prediction intervals of wind power generation," 2014, 29(3), 1166–1174, doi: 10.1109/TPWRS.2013.2288100.
[19] Chen X., Dong Z. Y., Meng K., Xu Y., Wong K. P., Ngan H. W., "Electricity price forecasting with extreme learning machine and bootstrapping," 2012, 27(4), 2055–2062, doi: 10.1109/TPWRS.2012.2190627.
[20] Zhao F., Liu Y., Huo K., Zhang Z., "Radar target classification using an evolutionary extreme learning machine based on improved quantum-behaved particle swarm optimization," 2017, Article ID 7273061.
[21] Cao J., Lin Z., "Extreme learning machines on high dimensional and large data applications: a survey," 2015, Article ID 103796.
[22] Yeom C.-U., Kwak K.-C., "Short-term electricity-load forecasting using a TSK-based extreme learning machine with knowledge representation," 2017, 10(10).
[23] Pinson P., Nielsen H. A., Møller J. K., Madsen H., Kariniotakis G. N., "Non-parametric probabilistic forecasts of wind power: required properties and evaluation," 2007, 10(6), 497–516, doi: 10.1002/we.230.
[24] Quan H., Srinivasan D., Khosravi A., "Short-term load and wind power forecasting using neural network-based prediction intervals," 2014, 25(2), 303–315, doi: 10.1109/TNNLS.2013.2276053.
[25] Quan H., Srinivasan D., Khosravi A., "Particle swarm optimization for construction of neural network-based prediction intervals," 2014, 127, 172–180, doi: 10.1016/j.neucom.2013.08.020.
[26] Feng Z.-K., Niu W.-J., Cheng C.-T., Liao S.-L., "Hydropower system operation optimization by discrete differential dynamic programming based on orthogonal experiment design," 2017, 126, 720–732, doi: 10.1016/j.energy.2017.03.069.
[27] Li X., Xing K., Zhou M., Wang X., Wu Y., "Modified dynamic programming algorithm for optimization of total energy consumption in flexible manufacturing systems," 2018, 1–15.
[28] van den Bergh F., Engelbrecht A. P., "A study of particle swarm optimization particle trajectories," 2006, 176(8), 937–971, doi: 10.1016/j.ins.2005.02.003.
[29] Nan L., Zeng X., Du Y., Dai Z., Chen L., "Shared variable extraction and hardware implementation for nonlinear Boolean functions based on swarm intelligence," 2018, Article ID 7104764.
[30] Mao B., Xie Z., Wang Y., Handroos H., Wu H., "A hybrid strategy of differential evolution and modified particle swarm optimization for numerical solution of a parallel manipulator," 2018, Article ID 9815469.
[31] Sun J., Feng B., Xu W., "Particle swarm optimization with particles having quantum behavior," Proceedings of the Congress on Evolutionary Computation, Portland, Ore, USA, 2004, 325–331.
[32] Sun J., Fang W., Palade V., Wu X., Xu W., "Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point," 2011, 218(7), 3763–3775, doi: 10.1016/j.amc.2011.09.021.
[33] Passino K. M., "Biomimicry of bacterial foraging for distributed optimization and control," 2002, 22(3), 52–67, doi: 10.1109/MCS.2002.1004010.
[34] Feng Z.-K., Niu W.-J., Cheng C.-T., "Multi-objective quantum-behaved particle swarm optimization for economic environmental hydrothermal energy system scheduling," 2017, 131, 165–178, doi: 10.1016/j.energy.2017.05.013.
[35] Ravikumar Pandi V., Biswas A., Dasgupta S., Panigrahi B. K., "A hybrid bacterial foraging and differential evolution algorithm for congestion management," 2010, 20(7), 862–871.
[36] Chen Y. P., Li Y., Wang G., Zheng Y. F., Xu Q., Fan J. H., Cui X. T., 2017, 1–17.
[37] Xie X. F., Zhang W. J., Yang Z. L., "Adaptive particle swarm optimization on individual level," Proceedings of the 6th International Conference on Signal Processing, vol. 2, Beijing, China, IEEE, 2002, 1215–1218, doi: 10.1109/icosp.2002.1180009.
[38] Hagan M. T., Behr S. M., "The time series approach to short term load forecasting," 1987, 2(3), 785–791.
[39] Australian Energy Market Operator (AEMO) Information, http://data.wa.aemo.com.au/#load-summary.