Considering the randomness and volatility of wind, a method based on a B-spline neural network optimized by particle swarm optimization is proposed for short-term wind speed prediction. The B-spline neural network can flexibly change the division of the input space and the definition of the basis functions. For any input, only a few hidden-layer outputs are nonzero, so the outputs are simple and convergence is fast, but the network easily falls into local minima. The traditional method of dividing the input space is crude and degrades the final prediction accuracy. Particle swarm optimization is adopted to solve this problem by optimizing the node positions. Simulation results show that the proposed method achieves higher prediction accuracy than the traditional B-spline neural network and the BP neural network.
1. Introduction
Wind power is a clean, free, and renewable natural resource. In wind power generation systems, the randomness and volatility of wind affect power quality and the reliability of the power system. Improving the accuracy of short-term wind speed prediction is of great significance for real-time power scheduling, improving the reliability of the power supply, and lowering the cost of wind power generation [1–3].
Common wind speed forecasting methods include the time series method, the support vector machine (SVM) method, and the wavelet analysis method. The time series method is adopted in [4, 5]; it makes full use of the sample data, and its simple prediction model reduces computation time, but it does not account for correlation among the samples, and estimating the model orders is quite complicated. SVM is adopted in [6, 7]; by choosing an appropriate kernel function, SVM maps vectors that are inseparable in a low-dimensional space into a high-dimensional space where they become separable, which improves generalization and avoids the local minimization problem, giving higher prediction accuracy than the time series method. The multiresolution property of the wavelet transform and the generalization ability of SVM are combined for short-term wind speed forecasting in [8]; the method handles the nonlinearity well and improves prediction accuracy, but the design process is rather complex.
In view of the defects of the above methods in practical application, a prediction method based on a B-spline neural network optimized by particle swarm optimization (PSO-BSNN) is proposed. First, the wind speed data are used to calculate the correlation integral function [9, 10] and show that the discrete time series is chaotic. Phase-space reconstruction [11, 12] then recovers the high-dimensional chaotic attractor. The BSNN can flexibly change the division of the input space and the definition of the basis functions [13, 14]. For any input, only a few hidden-layer outputs are nonzero, so the outputs are simple and convergence is fast; however, the traditional method, which divides the input space uniformly, is crude and degrades the final prediction accuracy. PSO is an intelligent search method with strong global search ability that is easy to implement [15–20]; it is used to optimize the nodes of the BSNN. Simulation results show that PSO-BSNN has higher prediction accuracy than BSNN and BPNN.
2. Phase-Space Reconstruction
A series of continuous sample data were collected from a wind farm in the state of Colorado. Set the sampling interval of the wind speed to 10 min, so there are 6 samples in each hour; averaging them gives the mean value of each hour. The discrete sample data of 10 days are recorded as $\bar x = (\bar x_1, \bar x_2, \ldots, \bar x_n)$, $n = 240$. To make better use of the nonlinear nature of the neural network transfer function, the sample data are normalized into the interval $[0,1]$:
$$x_i = \frac{\bar x_i - \bar x_{\min}}{\bar x_{\max} - \bar x_{\min}}, \quad 0 \le i \le n, \tag{1}$$
where $\bar x_i$ is a sample datum, $\bar x_{\min}$ and $\bar x_{\max}$ are the minimum and maximum of all sample data, $x_i$ is the normalized datum in $[0,1]$, and $n$ is the length of the time series. The normalized wind speed $v$ is displayed in Figure 1.
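As a minimal sketch, the hourly averaging and the min-max normalization of Eq. (1) can be written as follows; the helper names `normalize` and `hourly_means` are illustrative, not from the paper:

```python
import numpy as np

def normalize(x):
    """Min-max normalize a wind-speed series into [0, 1], as in Eq. (1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def hourly_means(samples_10min):
    """Average each block of six 10-minute samples into one hourly mean."""
    s = np.asarray(samples_10min, dtype=float)
    return s[: len(s) // 6 * 6].reshape(-1, 6).mean(axis=1)
```

Ten days of 10-minute samples (1440 values) thus reduce to $n = 240$ hourly means before normalization.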
Wind speed time series.
2.1. Takens Theorem
Takens proved that an appropriate embedding dimension $m$ satisfying $m \ge 2d + 1$ can be found, where $d$ is the dimension of the dynamic system, and that the dynamics of the system can be recovered by phase-space reconstruction. If $\tau$ is the delay time and $m$ the embedding dimension, then $x = (x_1, x_2, \ldots, x_n)$ can be described by the phase points $X_i$ of the reconstructed time series:
$$X_i = \big(x_i, x_{i+\tau}, x_{i+2\tau}, \ldots, x_{i+(m-1)\tau}\big) \in \mathbb{R}^m, \quad i = 1, 2, \ldots, M, \tag{2}$$
where $M = n - (m-1)\tau$ is the number of phase points.
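The delay embedding of Eq. (2) can be sketched directly; the function name `embed` is an illustrative choice:

```python
import numpy as np

def embed(x, m, tau):
    """Reconstruct phase-space vectors X_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau}).

    Returns an array of shape (M, m) with M = n - (m - 1) * tau phase points.
    """
    x = np.asarray(x, dtype=float)
    M = len(x) - (m - 1) * tau
    return np.array([x[i : i + (m - 1) * tau + 1 : tau] for i in range(M)])
```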
2.2. Parameter Selection
The selection of the embedding dimension $m$ and the delay time $\tau$ is very important for reconstructing the time series $x$ and influences the final prediction accuracy. The saturation correlation dimension method is used to calculate the embedding dimension $m$; the correlation integral function is
$$C(r) = \frac{1}{M^2}\sum_{i,j=1}^{M}\theta\big(r - a_{ij}\big), \quad r > 0, \tag{3}$$
where $r$ is the searching radius, $a_{ij} = \|X_i - X_j\|$, and $\theta(\cdot)$ is the Heaviside unit function:
$$\theta\big(r - a_{ij}\big) = \begin{cases} 0, & r - a_{ij} \le 0, \\ 1, & r - a_{ij} > 0. \end{cases} \tag{4}$$
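A direct implementation of Eq. (3) for phase points produced by the reconstruction above can be sketched as follows (the $O(M^2)$ pairwise-distance computation is acceptable here since $M$ is small):

```python
import numpy as np

def correlation_integral(X, r):
    """C(r): fraction of phase-point pairs closer than radius r, Eq. (3)."""
    X = np.asarray(X, dtype=float)
    M = len(X)
    # Pairwise Euclidean distances a_ij = ||X_i - X_j||
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Heaviside step: count pairs with d < r (self-pairs included, as in Eq. (3))
    return np.sum(d < r) / M**2
```

$C(r)$ is nondecreasing in $r$ and lies in $(0, 1]$, matching the discussion of Figure 2.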
For a proper range of $r$, the attractor dimension is $e = \ln C(r) / \ln r$; the relation between $\ln r$ and $\ln C(r)$ is shown in Figure 2, and the relation between $m$ and $e$ in Figure 3.
The relation curve between lnr and lnCr.
The relation curve between e and m.
It can be seen from Figure 3 that when $m = 5$ the attractor dimension $e$ reaches its maximum, so $m = 5$ is the saturation correlation dimension. The logarithm of the correlation integral function is less than zero, namely $C(r) \in (0,1)$, and $C(r)$ decays exponentially, which indicates that the time series $x$ is chaotic rather than a noise series.
The mutual information method is used to determine the delay time $\tau$. The information content of the time series $X_i$ can be defined via the entropy function
$$H(X_i) = -\sum_{l=0}^{m-1} P\big(x_{i+l\tau}\big)\log P\big(x_{i+l\tau}\big), \tag{5}$$
where $H(\cdot)$ is the entropy function and $P(\cdot)$ is the probability density function.
If $X_i$ and $X_j$ are discrete random variables, their joint entropy can be expressed as
$$H(X_i, X_j) = -\sum_{l=0}^{m-1}\sum_{\mu=0}^{m-1} P\big(x_{i+l\tau}, x_{j+\mu\tau}\big)\log P\big(x_{i+l\tau}, x_{j+\mu\tau}\big). \tag{6}$$
The mutual information function is
$$I(\tau) = H(X_i) + H(X_j) - H(X_i, X_j). \tag{7}$$
The relation curve between τ and Iτ is displayed in Figure 4.
The relation curve between τ and Iτ.
The mutual information function $I(\tau)$ measures the correlation between $x_t$ and $x_{t+\tau}$. If $I(\tau) = 0$, they are completely uncorrelated; the value of $\tau$ at which $I(\tau)$ reaches its first minimum is the required delay time. It can be seen from Figure 4 that $\tau = 4$.
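The delay-time selection of Eqs. (5)–(7) can be sketched with a histogram estimate of the probabilities; the bin count of 16 and the helper names are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def mutual_information(x, tau, bins=16):
    """Histogram estimate of I(tau) = H(x_t) + H(x_{t+tau}) - H(x_t, x_{t+tau})."""
    x = np.asarray(x, dtype=float)
    a, b = x[:-tau], x[tau:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()              # joint probabilities
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)  # marginals
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    h_ab = -np.sum(p_ab[p_ab > 0] * np.log(p_ab[p_ab > 0]))
    return h_a + h_b - h_ab

def first_minimum_delay(x, max_tau=20):
    """Pick the tau at which I(tau) first reaches a local minimum."""
    I = [mutual_information(x, t) for t in range(1, max_tau + 1)]
    for t in range(1, len(I) - 1):
        if I[t] < I[t - 1] and I[t] <= I[t + 1]:
            return t + 1  # taus are 1-indexed
    return int(np.argmin(I)) + 1
```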
3. B-Spline Neural Network
BSNN is designed on spline interpolation theory; it can flexibly change the division of the input space and the definition of the basis functions. For any input, only a few hidden-layer outputs are nonzero, so the outputs are simple and convergence is fast. It can approximate nonlinear functions with high precision.
3.1. Basis Function of BSNN
According to (2), the BSNN prediction model is structured as shown in Figure 5, where $X = (x_0, \ldots, x_l, \ldots, x_{m-1}) \in \mathbb{R}^m$ is the input vector and $y$ is the network output, corresponding to the mean wind speed in the next hour.
Prediction model of BSNN.
(1) The Division of the Input Space. The input space has to be divided before the basis functions are defined. $U_l$ is a finite interval for $x_l$:
$$U_l = \{x_l : x_l^{\min} \le x_l \le x_l^{\max}\}, \quad 0 \le l \le m-1. \tag{8}$$
The interval $U_l$ can then be divided as
$$x_l^{\min} < \lambda_{l,1} \le \cdots \le \lambda_{l,\rho} \le \cdots \le \lambda_{l,q_l} < x_l^{\max}, \tag{9}$$
where $\lambda_{l,\rho}$ is the $\rho$th internal node, and the external nodes are defined by
$$\cdots \le \lambda_{l,-1} \le \lambda_{l,0} \le x_l^{\min}, \qquad x_l^{\max} \le \lambda_{l,q_l+1} \le \lambda_{l,q_l+2} \le \cdots. \tag{10}$$
All of the nodes divide the interval $U_l$ into $q_l + 1$ subintervals:
$$U_{l,\rho} = \begin{cases} \{x_l : x_l \in [\lambda_{l,\rho}, \lambda_{l,\rho+1})\}, & \rho = 0, 1, \ldots, q_l - 1, \\ \{x_l : x_l \in [\lambda_{l,\rho}, \lambda_{l,\rho+1}]\}, & \rho = q_l. \end{cases} \tag{11}$$
(2) The Multivariable Basis Functions of BSNN. Let $\alpha(x_l)$ denote the $k$-order basis functions for $x_l$; its $\rho$th basis function $N_\rho^k(x_l)$ is defined on the nodes $\lambda_{l,\rho-k\eta}, \lambda_{l,\rho-k\eta+1}, \ldots, \lambda_{l,\rho}$, where $\eta$ is the expansion factor, and obeys the recurrence
$$N_\rho^k(x_l) = \frac{x_l - \lambda_{l,\rho-k\eta}}{\lambda_{l,\rho-\eta} - \lambda_{l,\rho-k\eta}}\, N_{\rho-\eta}^{k-1}(x_l) + \frac{\lambda_{l,\rho} - x_l}{\lambda_{l,\rho} - \lambda_{l,\rho-(k-1)\eta}}\, N_\rho^{k-1}(x_l),$$
$$N_\rho^1(x_l) = \begin{cases} 1, & x_l \in [\lambda_{l,\rho-\eta}, \lambda_{l,\rho}), \\ 0, & x_l \notin [\lambda_{l,\rho-\eta}, \lambda_{l,\rho}). \end{cases} \tag{12}$$
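As a sketch, the $\eta = 1$ special case of the recurrence in Eq. (12) is the standard Cox–de Boor recursion; the indexing convention below (basis $\rho$ supported on knots $\lambda_\rho, \ldots, \lambda_{\rho+k}$) is one common choice, not necessarily the paper's:

```python
def bspline_basis(rho, k, x, knots):
    """k-order B-spline basis via the Cox-de Boor recursion (eta = 1 case
    of Eq. (12)); `knots` is the full non-decreasing knot sequence."""
    if k == 1:  # order-1 basis: indicator of one knot span
        return 1.0 if knots[rho] <= x < knots[rho + 1] else 0.0
    left = right = 0.0
    if knots[rho + k - 1] != knots[rho]:
        left = (x - knots[rho]) / (knots[rho + k - 1] - knots[rho]) \
               * bspline_basis(rho, k - 1, x, knots)
    if knots[rho + k] != knots[rho + 1]:
        right = (knots[rho + k] - x) / (knots[rho + k] - knots[rho + 1]) \
                * bspline_basis(rho + 1, k - 1, x, knots)
    return left + right
```

On the valid interval the basis functions are nonnegative and sum to one, which is the positivity and compact-support property used in Section 3.2.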
If the number of internal nodes is 2 and the expansion factor $\eta = 1$, the basis functions of a 3-order BSNN for the single variable $x_l$ are as shown in Figure 6.
The basis function of BSNN.
The number of basis functions for $X$ is
$$q = \prod_{l=0}^{m-1} q_l. \tag{13}$$
The basis functions of $X$ are obtained by combining the single-variable basis functions with each other:
$$\alpha_{0,1,\ldots,m-1}(X) = \prod_{l=0}^{m-1}\alpha(x_l) = \prod_{l=0}^{m-1}\prod_{\rho=1}^{q_l} N_\rho^k(x_l). \tag{14}$$
3.2. The Mapping of BSNN
The network from input to output can be divided into two parts.
(1) $X \to \alpha(X)$.
This part realizes the nonlinear mapping from the input layer to the hidden layer. Let $\alpha(X) = (\alpha_1(X), \ldots, \alpha_r(X), \ldots, \alpha_q(X))^T \in \mathbb{R}^q$, where $\alpha_r(X)$ is the output value of the $r$th basis function for $X$. Owing to the positivity and compact support of the basis functions, the number of nonzero outputs is $b = (k\eta)^m$, each lying in the interval $[0,1]$, and the other outputs are zero. Letting $\eta = 1$ gives $b = k^m$.
(2) $\alpha(X) \to y$.
This part realizes the linear mapping from the hidden layer to the output layer:
$$y = \omega(t)\,\alpha(X), \tag{15}$$
where $\omega(t) = (\omega_0(t), \ldots, \omega_r(t), \ldots, \omega_q(t))$ are the network weights. The weights are updated as
$$\omega(t+1) = \omega(t) + \beta\,\frac{(y_d - y)\,\alpha^T(X)}{\alpha^T(X)\,\alpha(X)}, \tag{16}$$
where $t$ is the iteration number, $y_d$ is the expected value, $y$ is the actual value, and $\beta$ ($0 < \beta < 2$) is the learning rate, which ensures that the iterative algorithm converges. The verification is as follows.
Set $\varepsilon(t) = y_d - y = y_d - \omega(t)\alpha(X)$; then
$$\Delta\varepsilon(t) = \varepsilon(t+1) - \varepsilon(t) = -\big(\omega(t+1) - \omega(t)\big)\alpha(X) = -\Delta\omega(t)\,\alpha(X). \tag{17}$$
Set the Lyapunov function $V(t) = \varepsilon^2(t)$; then
$$\begin{aligned}\Delta V(t) &= V(t+1) - V(t) = \varepsilon^2(t+1) - \varepsilon^2(t) \\ &= \big(\varepsilon(t+1) - \varepsilon(t)\big)\big(2\varepsilon(t) + \varepsilon(t+1) - \varepsilon(t)\big) = \Delta\varepsilon(t)\big(2\varepsilon(t) + \Delta\varepsilon(t)\big) \\ &= -\Delta\omega(t)\alpha(X)\big(2\varepsilon(t) - \Delta\omega(t)\alpha(X)\big) = -\beta\varepsilon(t)\big(2\varepsilon(t) - \beta\varepsilon(t)\big) = -\beta(2-\beta)\varepsilon^2(t). \end{aligned} \tag{18}$$
Since 0<β<2, then ΔVt<0, and the parameters can converge.
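As a numerical sketch of the update rule in Eq. (16): with $\beta = 1$ the normalized step cancels the output error in a single iteration, consistent with the Lyapunov argument above. The weight and basis values here are arbitrary illustrations:

```python
import numpy as np

def update_weights(w, alpha, y_d, beta=1.0):
    """One step of the normalized weight update of Eq. (16).

    w     : hidden-to-output weights omega(t)
    alpha : basis-function outputs alpha(X) for the current input
    y_d   : desired output; beta in (0, 2) guarantees convergence.
    """
    alpha = np.asarray(alpha, dtype=float)
    y = w @ alpha                                  # network output, Eq. (15)
    return w + beta * (y_d - y) * alpha / (alpha @ alpha)
```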
4. B-Spline Neural Network Optimized by PSO
The internal nodes $\lambda_{l,\rho}$ ($\rho = 1, \ldots, q_l$) of the BSNN strongly affect the prediction accuracy. The traditional method, which divides the input space into uniform (linearly spaced) intervals, is crude and makes it difficult to reach high approximation accuracy. PSO is used to optimize these parameters, which improves the accuracy and avoids falling into local minima.
4.1. The Basic Principle of PSO
PSO stems from the foraging behavior of birds and is mainly used to solve optimization problems. Each particle is a point moving with a certain velocity in the solution space, with an individual fitness corresponding to the objective function; the effectiveness of a solution is judged by a fitness function defined from the optimization goals. Particles follow the current optimum particles through the solution space and find the optimal solution by iteration.
Set the current position as $p_s = (p_{s1}, \ldots, p_{sd}, \ldots, p_{sD})$, the current velocity as $v_s = (v_{s1}, \ldots, v_{sd}, \ldots, v_{sD})$, and the best position the particle has experienced as $p_{best} = (p_{best,1}, \ldots, p_{best,d}, \ldots, p_{best,D})$ in the $D$-dimensional search space. With $N$ particles in total, the best position experienced by the whole swarm is $g_{best} = (g_{best,1}, \ldots, g_{best,d}, \ldots, g_{best,D})$. The particles update themselves by tracking the individual extremum $p_{best}$ and the global extremum $g_{best}$ during each iteration, adjusting their velocities and positions as
$$v_{sd}(t+1) = w\,v_{sd}(t) + c_1 r_1\big(p_{best,d}(t) - p_{sd}(t)\big) + c_2 r_2\big(g_{best,d}(t) - p_{sd}(t)\big), \qquad p_{sd}(t+1) = p_{sd}(t) + v_{sd}(t+1), \tag{19}$$
where $c_1, c_2$ are acceleration constants, $v_{sd}(t)$ and $p_{sd}(t)$ are the velocity and position of particle $s$ in dimension $d$, and $r_1, r_2$ are random numbers in the interval $(0,1)$. The inertia weight $w$ balances global and local search and is updated as
$$w = w_{\max} - \frac{t}{t_{\max}}\big(w_{\max} - w_{\min}\big), \tag{20}$$
where $w_{\max}$ and $w_{\min}$ are the maximum and minimum of $w$, $t$ is the current iteration, and $t_{\max}$ is the maximum number of iterations.
To prevent a particle from leaving the search space, the speed of each particle is limited to a maximum $v_{\max}$ ($v_{\max} > 0$); if the updated speed exceeds this limit, the speed is set to $v_{\max}$, where $v_{\max} = \delta p_{\max}$, $\delta$ belongs to the interval $(0,1]$, and $p_{\max}$ is the maximum of $p_{sd}$.
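One PSO iteration, i.e. Eq. (19) with the velocity clamp just described and the inertia schedule of Eq. (20), can be sketched as follows; the default parameter values are illustrative assumptions:

```python
import numpy as np

def pso_step(p, v, pbest, gbest, w, c1=2.0, c2=2.0, v_max=0.5, rng=None):
    """Velocity and position update of Eq. (19), with |v| clamped to v_max."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(p.shape), rng.random(p.shape)
    v = w * v + c1 * r1 * (pbest - p) + c2 * r2 * (gbest - p)
    v = np.clip(v, -v_max, v_max)   # keep particles inside the search space
    return p + v, v

def inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, Eq. (20)."""
    return w_max - t / t_max * (w_max - w_min)
```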
4.2. PSO-BSNN Algorithm Implementation
The traditional method by which BSNN divides the internal nodes of the input space is crude. If the internal nodes are not placed accurately, the BSNN easily falls into a local minimum and the prediction accuracy suffers. PSO is an intelligent search method; the internal nodes can be optimized with the PSO algorithm to find their optimal locations. The dimension of each particle vector is $D$, and each vector is composed of the internal and external nodes $\lambda_{l,\rho}$ of the BSNN. For a single input variable $x_{il}$ with 2 internal nodes and 4 external nodes, the vector dimension is $D = 6$. The optimization process is as follows.
(1) Initialization: the basic initial parameters include the position $p_s = (p_{s1}, \ldots, p_{s6}) = (\lambda_{il,-1}, \lambda_{il,0}, \lambda_{il,1}, \lambda_{il,2}, \lambda_{il,3}, \lambda_{il,4})$, the velocity $v_s$ of each particle, the population size $N$, the largest iteration number $t_{\max}$, and the acceleration constants $c_1, c_2$.
(2) Fitness evaluation: the error between expected and actual values is selected as the objective function used to measure the effectiveness of the optimization. The mean square error $e_{MSE}$ is adopted as the fitness function:
$$f = e_{MSE} = \frac{1}{N}\sum_{i=1}^{N}\big(y_i^d - y_i\big)^2, \tag{21}$$
where $N$ is the number of samples, $y_i^d$ is the expected value for the $i$th particle, and $y_i$ is the actual output value for the $i$th particle.
(3) Update the velocities and positions according to formula (19) and limit the speed according to $v_{\max}$.
(4) Calculate the fitness value and compare it with the best fitness so far; if it is better, update the particle's best position; otherwise leave it unchanged.
(5) Check the termination condition: the maximum number of iterations is reached or the mean square error target is satisfied. If so, stop; otherwise return to step (2).
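Steps (1)–(5) above can be sketched as a generic PSO minimizer. The `fitness` callable stands in for the BSNN training error of Eq. (21) so the sketch stays self-contained; all parameter values, bounds, and the error target below are illustrative assumptions, not the paper's settings, and candidate node vectors would additionally need to be kept sorted in a real BSNN run:

```python
import numpy as np

def pso_optimize(fitness, dim, n_particles=30, t_max=100, c1=2.0, c2=2.0,
                 w_max=0.9, w_min=0.4, bounds=(0.0, 1.0), seed=0):
    """Steps (1)-(5) of Section 4.2 as a generic minimizer of `fitness`."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # (1) Initialization: random positions (candidate node vectors) and speeds
    p = rng.uniform(lo, hi, (n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim)) * 0.1
    v_max = 0.2 * (hi - lo)
    # (2) Fitness evaluation
    f = np.apply_along_axis(fitness, 1, p)
    pbest, f_pbest = p.copy(), f.copy()
    gbest = p[np.argmin(f)].copy()
    for t in range(t_max):
        w = w_max - t / t_max * (w_max - w_min)          # Eq. (20)
        r1, r2 = rng.random(p.shape), rng.random(p.shape)
        # (3) Velocity/position update, Eq. (19), with speed limit v_max
        v = np.clip(w * v + c1 * r1 * (pbest - p) + c2 * r2 * (gbest - p),
                    -v_max, v_max)
        p = p + v
        # (4) Compare fitness with each particle's best so far
        f = np.apply_along_axis(fitness, 1, p)
        improved = f < f_pbest
        pbest[improved], f_pbest[improved] = p[improved], f[improved]
        gbest = pbest[np.argmin(f_pbest)].copy()
        # (5) Terminate early once the error target is met
        if f_pbest.min() < 1e-8:
            break
    return gbest, f_pbest.min()
```

A toy check on a separable quadratic, `pso_optimize(lambda x: float(np.sum((x - 0.3) ** 2)), dim=4)`, recovers a point near 0.3 in every dimension.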
5. Simulation Results and Analysis
Let the largest iteration number $t_{\max} = 100$, the acceleration constants $c_1 = c_2 = 3$, the population size $N = 30$, $r_1 = 0.3$, and $r_2 = 0.6$; the initial position and velocity of each particle are generated randomly in the defined interval. A 3-order B-spline neural network with 5 inputs is adopted. With 2 internal nodes per input variable and an expansion factor of 1, the number of hidden-layer nodes is $3^5$ and the number of nonzero hidden-layer outputs is $2^5$. There is only one output unit.
Three prediction errors are adopted as evaluation criteria:
$$e_{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(y_i - y_i^*\big)^2}, \qquad e_{MAE} = \frac{1}{n}\sum_{i=1}^{n}\big|y_i - y_i^*\big|, \qquad e_{MRE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - y_i^*}{y_i}\right|, \tag{22}$$
where $e_{RMSE}$ is the root mean square error, measuring the deviation between predicted and real values; $e_{MAE}$ is the mean absolute error, reflecting how far predictions deviate from real values; and $e_{MRE}$ is the mean relative error, reflecting the reliability of the mean absolute error.
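The three criteria of Eq. (22) translate directly to code; the helper name `prediction_errors` is illustrative:

```python
import numpy as np

def prediction_errors(y, y_pred):
    """RMSE, MAE, and MRE of Eq. (22) for measured y and predicted y*."""
    y, y_pred = np.asarray(y, dtype=float), np.asarray(y_pred, dtype=float)
    e = y - y_pred
    rmse = np.sqrt(np.mean(e ** 2))
    mae = np.mean(np.abs(e))
    mre = np.mean(np.abs(e / y))   # assumes no zero wind speeds in y
    return rmse, mae, mre
```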
According to the 10 days of wind speed data in Figure 1, provided by the wind farm in Colorado, the data of the first 7 days are used as the training sample and the data of the last 3 days as the test sample. Simulation results based on PSO-BSNN, BSNN, and BPNN are displayed in Figure 7. All three adopt 3-layer network structures.
Prediction result of 3 days.
The prediction error is listed in Table 1.
Table 1: Forecasting error of each prediction model.

Prediction model    eRMSE/(m/s)    eMAE/(m/s)    eMRE/%
PSO-BSNN            0.0319         0.0348         8.26
BSNN                0.0700         0.0664        15.55
BPNN                0.1060         0.1017        22.67
It can be seen from Table 1 that the prediction model based on BSNN performs better than the model based on BPNN: eRMSE and eMAE fall by 0.0360 m/s and 0.0353 m/s, respectively, eMRE falls by 7.12%, and the prediction effects show some improvement. Compared with BSNN, the PSO-BSNN model enhances the prediction performance considerably, because the input space of the BSNN has been optimized by the particle swarm. Compared with BPNN, its eRMSE and eMAE fall by 0.0741 m/s and 0.0669 m/s, respectively, and eMRE falls by 14.41%.
Now BPNN is given a 4-layer network structure, and the simulation result is shown in Figure 8.
The prediction result of 3 days with 4-layer BPNN.
The prediction error is listed in Table 2.
Table 2: Forecasting error of the BPNN prediction model.

Prediction model    eRMSE/(m/s)    eMAE/(m/s)    eMRE/%
BPNN                0.0810         0.0697        16.03
It can be seen from Table 2 that the result of the 4-layer BPNN is very close to that of BSNN but still poorer than the prediction based on PSO-BSNN.
The 3-day prediction above is short-term wind speed prediction. Now 10 hours of wind speed data from Sinkiang in China are used. The data of the first 7 hours serve as the training sample and the data of the last 3 hours as the test sample; this is super short-term wind speed prediction. The sampling interval is still 10 min, so there are 6 samples per hour and 18 samples in the 3-hour test set. The simulation result is shown in Figure 9.
Prediction result of 3 hours.
The prediction error is listed in Table 3.
Table 3: Forecasting error of each prediction model.

Prediction model    eRMSE/(m/s)    eMAE/(m/s)    eMRE/%
PSO-BSNN            0.0419         0.0448         9.26
BSNN                0.0912         0.1064        15.55
BPNN                0.1377         0.1410        24.31
It can be seen from Table 3 that, compared with BPNN, the simulation based on BSNN reduces eRMSE and eMAE by 0.0465 m/s and 0.0346 m/s, respectively, and eMRE by 8.76%. Compared with BSNN, PSO-BSNN reduces eRMSE and eMAE by 0.0493 m/s and 0.0616 m/s, respectively, and eMRE by 6.29%.
Super short-term wind speed prediction results based on the 4-layer BP neural network structure are shown in Figure 10.
Prediction result of 3 hours with 4-layer BPNN.
The prediction error is listed in Table 4.
Table 4: Forecasting error of the BPNN prediction model.

Prediction model    eRMSE/(m/s)    eMAE/(m/s)    eMRE/%
BPNN                0.1023         0.1152        18.25
It can be seen from Table 4 that the super short-term prediction based on the 4-layer BPNN is very close to that of BSNN but still poorer than the prediction based on PSO-BSNN.
According to the simulation results of short-term and super short-term wind speed prediction, the prediction model based on PSO-BSNN always gets better results than BSNN and BPNN.
6. Conclusions
A prediction model based on a BSNN optimized by PSO is presented in this paper. The BSNN, designed on B-spline interpolation theory, can change the division of the input space flexibly, but the traditional method of dividing the input space into uniform intervals is crude and degrades the final prediction accuracy. PSO is adopted to solve this problem. Simulation results show that PSO-BSNN performs better than BSNN and BPNN, and the forecasting accuracy is improved effectively.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] R. Belu and D. Koracin, "Wind characteristics and wind energy potential in western Nevada," 2009, vol. 34, no. 10, pp. 2246–2251. doi:10.1016/j.renene.2009.02.024.
[2] K. Xie and R. Billinton, "Energy and reliability benefits of wind energy conversion systems," 2011, vol. 36, no. 7, pp. 1983–1988. doi:10.1016/j.renene.2010.12.011.
[3] M. C. Alexiadis, P. S. Dokopoulos, H. S. Sahsamanoglou, and I. M. Manousaridis, "Short-term forecasting of wind speed and related electrical power," 1998, vol. 63, no. 1, pp. 61–68. doi:10.1016/s0038-092x(98)00032-2.
[4] A. Yona, T. Senjyu, T. Funabashi, and C. Kim, "Very short-term generating power forecasting for wind power generators based on time series analysis," 2013, vol. 4, no. 2, pp. 181–186. doi:10.4236/sgre.2013.42022.
[5] E. A. Bossanyi, "Short-term wind prediction using Kalman filters," 1985, vol. 9, no. 1, pp. 1–8.
[6] M. A. Mohandes, T. O. Halawani, S. Rehman, and A. A. Hussain, "Support vector machines for wind speed prediction," 2004, vol. 29, no. 6, pp. 939–947. doi:10.1016/j.renene.2003.11.009.
[7] S. Salcedo-Sanz, E. G. Ortiz-García, Á. M. Pérez-Bellido, A. Portilla-Figueras, and L. Prieto, "Short term wind speed prediction based on evolutionary support vector regression algorithms," 2011, vol. 38, no. 4, pp. 4052–4057. doi:10.1016/j.eswa.2010.09.067.
[8] O. Kisi, J. Shiri, and O. Makarynskyy, "Wind speed prediction by using different wavelet conjunction models," 2011, vol. 2, no. 3, pp. 189–208. doi:10.1260/1759-3131.2.3.189.
[9] H. Leung, T. Lo, and S. Wang, "Prediction of noisy chaotic time series using an optimal radial basis function neural network," 2001, vol. 12, no. 5, pp. 1163–1172. doi:10.1109/72.950144.
[10] D. S. K. Karunasinghe and S.-Y. Liong, "Chaotic time series prediction with a global model: artificial neural network," 2006, vol. 323, no. 1–4, pp. 92–105. doi:10.1016/j.jhydrol.2005.07.048.
[11] J. Zhang, K. C. Lam, W. J. Yan, H. Gao, and Y. Li, "Time series prediction using Lyapunov exponents in embedding phase space," 2004, vol. 30, no. 1, pp. 1–15. doi:10.1016/s0045-7906(03)00015-6.
[12] H. Ma and C. Han, "Selection of embedding dimension and delay time in phase space reconstruction," 2006, vol. 11, no. 1, pp. 111–114.
[13] A. Kechroud, J. J. H. Paulides, and E. A. Lomonova, "B-spline neural network approach to inverse problems in switched reluctance motor optimal design," 2011, vol. 47, no. 10, pp. 4179–4182. doi:10.1109/TMAG.2011.2151183.
[14] L. d. S. Coelho and M. W. Pessôa, "Nonlinear identification using a B-spline neural network and chaotic immune approaches," 2009, vol. 23, no. 8, pp. 2418–2434. doi:10.1016/j.ymssp.2009.01.013.
[15] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," 2007, vol. 1, no. 1, pp. 33–57. doi:10.1007/s11721-007-0002-0.
[16] Y. Marinakis, M. Marinaki, and G. Dounias, "A hybrid particle swarm optimization algorithm for the vehicle routing problem," 2010, vol. 23, no. 4, pp. 463–472. doi:10.1016/j.engappai.2010.02.002.
[17] B. Brandstätter and U. Baumgartner, "Particle swarm optimization—mass-spring system analogon," 2002, vol. 38, no. 2, pp. 997–1000. doi:10.1109/20.996256.
[18] J. Botzheim, C. Cabrita, and L. T. Kóczy, "Genetic and bacterial programming for B-spline neural networks design," 2007, vol. 11, no. 2, pp. 220–231.
[19] R. C. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 1, Seoul, South Korea, May 2001, pp. 81–86. doi:10.1109/CEC.2001.934374.
[20] V. G. Gudise and G. K. Venayagamoorthy, "Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '03), Indianapolis, Ind, USA, April 2003, pp. 110–117. doi:10.1109/SIS.2003.1202255.