This paper proposes a radial basis function (RBF) neural network model optimized by particle swarm optimization with an exponentially decreasing inertia weight (EDIW-PSO). Building on existing inertia weight decreasing strategies, we propose a new Exponential Decreasing Inertia Weight (EDIW) to improve the PSO algorithm, and we use the resulting EDIW-PSO algorithm to determine the centers, widths, and connection weights of the RBF neural network. To assess the performance of the proposed EDIW-PSO-RBF model, we predict the daily air quality index (AQI) of Xi’an and obtain improved results.
1. Introduction
The radial basis function (RBF) neural network is a novel and effective feed-forward neural network [1] with good best-approximation and global-optimum properties. It has been widely used in applications such as function approximation, classification, regression, prediction, and signal processing [2–6]. The RBF neural network has a three-layer architecture: an input layer, a hidden layer, and an output layer. The input layer receives the input vectors. Before the input vectors enter the network, preprocessing such as normalization should be performed; this processing can also be done in the input layer.
The hidden layer is composed of hidden neurons, the number of which is determined by the problem at hand. RBF networks differ from other types of neural networks mainly in the hidden neurons [7, 8]. Each hidden neuron has a radial basis function, a centrally symmetric nonlinear function with a local distribution, characterized by a center position and a width parameter. Once the centers and widths are determined, the input vectors are mapped to the hidden space by the mapping f:RI→RJ. Suppose the input layer has I input units and the hidden layer has J radial basis functions. The output of the jth hidden neuron is expressed as
(1) $h_j(X)=\varphi\left(\frac{\lVert X-c_j\rVert}{d_j}\right),\quad j=1,2,\ldots,J,$
where X is the overall input sample, X=(X1,X2,…,XN)T, and each Xn is an I-dimensional input vector Xn=(xn1,xn2,…,xnI). cj and dj are, respectively, the center and the width of the jth hidden neuron, and cj is an I-dimensional vector cj=(cj1,cj2,…,cji,…,cjI). ∥·∥ is the Euclidean norm, usually the 2-norm. φ(·) is the radial basis function. It can take various forms, such as the B-spline RBF [9], the thin-plate spline RBF [10], the Cauchy RBF [11], and the Gaussian RBF [12]. Among them the Gaussian function is the most widely used, as in
(2) $\varphi(r)=\exp\left(-\frac{r^2}{2}\right),$
where r is the variable of radial basis function φ(·).
The output layer implements the mapping f:RJ→RK. K is the number of output neurons. The output function is a linear combination of the outputs of the radial basis functions through connection weights which connect the hidden layer and the output layer, which is shown as
(3) $y_k=\sum_{j=1}^{J}\omega_{jk}h_j(X),\quad k=1,2,\ldots,K,$
where ωjk is the connection weight between the jth hidden neuron and the kth output neuron of the network.
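As a concrete illustration of Eqs. (1)–(3) with the Gaussian basis of Eq. (2), a minimal forward pass can be sketched in Python; the function name `rbf_forward` and the array layout are our own assumptions, not part of the original model description.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of the RBF network of Eqs. (1)-(3).

    x       : (I,)   input vector
    centers : (J, I) hidden-neuron centers c_j
    widths  : (J,)   hidden-neuron widths d_j
    weights : (J, K) connection weights w_jk

    Returns the (K,) network output y.
    """
    # Eq. (1): r_j = ||x - c_j|| / d_j, fed into the Gaussian basis of Eq. (2)
    r = np.linalg.norm(x - centers, axis=1) / widths
    h = np.exp(-r**2 / 2.0)          # Eq. (2): phi(r) = exp(-r^2 / 2)
    return h @ weights               # Eq. (3): y_k = sum_j w_jk h_j
```

When the input coincides with a center, that hidden neuron outputs exactly 1, so the network output reduces to a weighted sum of the weights of the activated neurons.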
The RBF neural network thus contains three groups of parameters: centers cji, widths dj, and connection weights ωjk. Many optimization algorithms have been proposed to optimize them, such as the orthogonal least squares (OLS) algorithm [13], the Expectation-Maximization (EM) algorithm [14], gradient descent [15], K-means clustering [16], the genetic algorithm (GA) [17], ant colony optimization (ACO) [18], and particle swarm optimization (PSO) [19, 20], as well as the support vector machine (SVM) and the extreme learning machine (ELM) [21]. Compared with other algorithms, PSO has several advantages: stable convergence, few parameters, and fast convergence speed. Many researchers have successfully applied PSO to the learning and structural improvement of RBF neural networks for application problems.
The particle swarm optimization (PSO) algorithm was put forward by Eberhart and Kennedy in 1995 [22], initially motivated by the intelligent collective behavior of birds during foraging. In the PSO algorithm, each bird, also called a particle, has a position and a velocity and searches for the optimal solution by updating both. In the following years, many researchers introduced the "inertia weight" and proposed dynamic variants of PSO based on it [23–26]. Different inertia weight strategies imply different incremental changes in velocity per time step, and hence different degrees of exploration of new search areas in pursuit of a better solution. In this paper, we propose an Exponential Decreasing Inertia Weight PSO (EDIW-PSO) algorithm to obtain the optimal parameters of the RBF network.
This paper establishes an RBF network model based on the EDIW-PSO algorithm. Section 2 introduces the basic PSO algorithm and several variants of inertia weight. Section 2.3 gives the improved EDIW-PSO algorithm based on the Exponential Decreasing Inertia Weight strategy. Section 3 presents the methodology of the proposed EDIW-PSO-RBF structure. Section 4 reports an experiment comparing this methodology with three other models. The last section summarizes the conclusions of this study.
2. The PSO Algorithm and Inertia Weight Variants
2.1. The Basic PSO Algorithm
The PSO algorithm is a parallel evolutionary computation algorithm. In PSO, each potential solution to an optimization problem is treated as a bird, also called a particle. The set of particles, known as a swarm, is flown through the D-dimensional search space of the problem. Each particle changes its own position and velocity based on its own experience and that of its neighbors. During the search, every particle is connected to and able to share information with every other particle in the swarm; this swarm communication topology is known as a global neighborhood [27]. This information-sharing mechanism keeps the swarm consistent in pursuing the global solution.
The position and velocity of the ith particle in D-dimensional solution space are denoted as Li=(li1,li2,…,liD) and Vi=(vi1,vi2,…,viD), respectively, where i=1,2,…,m is the number of the swarm, lid∈[ld,ud], d=1,2,…,D, and ld and ud are the lower and upper bounds of the dth dimension.
According to a preset fitness function, we obtain the personal best position (also called the local best fitness) of the ith particle, denoted pi=(pi1,pi2,…,piD), and the global best position (also called the global best fitness) found so far by all particles of the swarm, denoted pg=(pg1,pg2,…,pgD). At each iteration, the ith particle updates its velocity and position as follows:
(4) $V_i^{t+1}=\hat{\omega}V_i^t+c_1r_1\left(p_i^t-L_i^t\right)+c_2r_2\left(p_g^t-L_i^t\right),\qquad L_i^{t+1}=L_i^t+V_i^{t+1},$
where c1 and c2 are the acceleration factors, both positive constants, r1 and r2 are random numbers in [0, 1], and ω^ is the inertia weight on the interval [0, 1], which keeps the memory of the particle's previous velocity vector. When ω^ is a constant [22], the result is a static PSO; when ω^ varies over the iterations, the result is a dynamic PSO.
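A single update of Eq. (4) can be sketched in Python as follows. This is an illustrative helper (the name `pso_step` is our own), and we assume the common convention of drawing fresh random numbers r1, r2 per particle and per dimension.

```python
import numpy as np

def pso_step(L, V, p_best, g_best, w, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update per Eq. (4).

    L, V   : (m, D) positions and velocities of the swarm
    p_best : (m, D) personal best positions p_i
    g_best : (D,)   global best position p_g
    w      : inertia weight (constant -> static PSO, scheduled -> dynamic PSO)
    """
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(L.shape)   # fresh random numbers in [0, 1) per dimension
    r2 = rng.random(L.shape)
    V = w * V + c1 * r1 * (p_best - L) + c2 * r2 * (g_best - L)
    return L + V, V
```

Note that a particle already sitting at both its personal best and the global best with zero velocity stays put, since every term of Eq. (4) vanishes.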
2.2. Several Variants of Inertia Weight
The inertia weight determines the proportion of the current particle velocity that is retained. A large inertia weight leads to large velocities and a strong global search ability, but the global optimal solution may be missed. In contrast, a small inertia weight gives the particle a strong exploitation capability, but a long search time is needed to fine-tune toward the local optimal solution.
By changing the inertia weight dynamically, the search capability is dynamically adjusted. Many researchers have proposed variants of PSO according to the impact of ω^. Most of these variants use time-varying inertia weight strategies, in which the value of the inertia weight varies with the iteration number; the variation can be either linear or nonlinear. In 1998, Shi and Eberhart introduced a Linearly Decreasing Inertia Weight (LDIW) strategy to obtain a better inertia weight ω^, given by the following formula [23]:
(5) $\hat{\omega}(t)=\hat{\omega}_{\min}+\left(\hat{\omega}_{\max}-\hat{\omega}_{\min}\right)\frac{T-t}{T},$
where t is the current iteration step, T is the maximum number of iterations the PSO is allowed to run, ω^max is the initial inertia weight, and ω^min is the final inertia weight.
Chatterjee and Siarry proposed a nonlinear decreasing inertia weight (NDIW) strategy that modulates the inertia weight adaptation over time for improved PSO performance [24]. The proposed adaptation of ω^(t) is given as
(6) $\hat{\omega}(t)=\hat{\omega}_{\min}+\left(\hat{\omega}_{\max}-\hat{\omega}_{\min}\right)\left(\frac{T-t}{T}\right)^{n},$
where n is the nonlinear modulation index. With n=1, the system reduces to the linearly decreasing inertia weight proposed by Shi and Eberhart [23]. After evaluating this strategy on the well-known Sphere function, they concluded that n=1.2 is a typical value and that n can be tuned for each case individually. With n=1.2, ω^ starts high, facilitating larger steps in the solution space during the early iterations, decreases faster than in the linear case early on, and flattens in the later iterations, which is well suited to determining the optimal region among the already discovered promising suboptimal regions.
In [25], Arumugam and Rao propose a global-local best inertia weight (GLbestIW) method in which the inertia weight is neither a constant nor a linearly decreasing function of time. Instead, it is determined by the ratio of the global best fitness to the local best fitness at each iteration. The GLbestIW is given by the following equation:
(7) $\hat{\omega}_i(t)=1.1-\frac{p_g^t}{p_i^t},$
where pgt is the global best fitness at the tth iteration and pit is the local best fitness of the ith particle at the tth iteration.
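For concreteness, the three baseline inertia weight schedules of Eqs. (5)–(7) can be sketched in Python as follows. This is a hedged illustration: the function names and the default values ω^max = 0.9, ω^min = 0.2 are our own choices, and Eq. (5) is written in its decreasing form.

```python
def ldiw(t, T, w_max=0.9, w_min=0.2):
    """Eq. (5): Linearly Decreasing Inertia Weight (Shi & Eberhart)."""
    return w_min + (w_max - w_min) * (T - t) / T

def ndiw(t, T, n=1.2, w_max=0.9, w_min=0.2):
    """Eq. (6): Nonlinear Decreasing Inertia Weight (Chatterjee & Siarry);
    n is the nonlinear modulation index, n = 1 recovers ldiw."""
    return w_min + (w_max - w_min) * ((T - t) / T) ** n

def glbest_iw(p_g, p_i):
    """Eq. (7): global-local best inertia weight (Arumugam & Rao);
    p_g and p_i are the global and personal best fitness values."""
    return 1.1 - p_g / p_i
```

Both decreasing schedules start at ω^max when t = 0 and reach ω^min at t = T, while the GLbestIW value depends only on the fitness ratio, not on the iteration count.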
2.3. The Proposed Inertia Weight Variant
A larger ω^ is conducive to finding the global best solution quickly in the early iterations but may cause the global best solution to be missed in the later iterations, whereas a smaller ω^ provides the slower updating needed to fine-tune a local exploration, at the cost of a longer search. Hence a larger ω^ is needed in the early iterations for coarse global exploration, and ω^ should decrease in later iterations to fine-tune the local exploration. An appropriate inertia weight helps find the best solution in the fewest iterations.
In short, a larger inertia weight facilitates global exploration and a smaller inertia weight facilitates local exploration to fine-tune the current search area. To balance global and local exploration, we present a new Exponential Decreasing Inertia Weight (EDIW) strategy, in which the inertia weight decreases exponentially with the iteration step t. The proposed adaptation of ω^(t) is given as
(8) $\hat{\omega}(t)=\hat{\omega}_{\min}+\left(\hat{\omega}_{\max}-\hat{\omega}_{\min}\right)\exp\left(-\frac{ct}{T}\right),$
where c > 0 is a controlling parameter that sets the convergence rate of the inertia weight. When t = 0, ω^(t) = ω^max. When t = T, ω^(t) is given by the following equation:
(9) $\hat{\omega}(T)=\hat{\omega}_{\min}+\left(\hat{\omega}_{\max}-\hat{\omega}_{\min}\right)\exp(-c).$
Setting t=T, we obtain different final values of the inertia weight ω^ from the proposed adaptation for varying sets of {c,ω^max,ω^min}; the results are shown in Table 1. According to Table 1, for the same ω^max and ω^min, a small c leaves ω^ still far from the final inertia weight ω^min, and ω^ approaches ω^min as c increases. When c≥6, ω^ is nearly equal to ω^min, so the condition c≥6 ensures that ω^(t) sweeps essentially the full range [ω^min, ω^max] as t goes from 1 to T.
Table 1: Values of inertia weight ω^ by the proposed adaptation when t = T, for varying sets of {c, ω^max, ω^min}.

      (ω^max, ω^min)
c     (0.7, 0.2)  (0.9, 0.2)  (0.7, 0.3)  (0.9, 0.3)  (0.7, 0.4)  (0.9, 0.4)
1     0.3839      0.4575      0.4472      0.5207      0.5104      0.5839
2     0.2677      0.2947      0.3541      0.3812      0.4406      0.4677
3     0.2249      0.2349      0.3199      0.3299      0.4149      0.4249
4     0.2092      0.2128      0.3073      0.3110      0.4055      0.4092
5     0.2034      0.2047      0.3027      0.3040      0.4020      0.4034
6     0.2012      0.2017      0.3010      0.3015      0.4007      0.4012
7     0.2005      0.2006      0.3004      0.3005      0.4003      0.4005
8     0.2002      0.2002      0.3001      0.3002      0.4001      0.4002
9     0.2001      0.2001      0.3000      0.3001      0.4000      0.4001
10    0.2000      0.2000      0.3000      0.3000      0.4000      0.4000
Different values of c yield different decreasing profiles. For ease of comparison, we set ω^max=0.9, ω^min=0.2, and T=1000. The decreasing curves of the inertia weight ω^ obtained by the proposed EDIW strategy for varying c are shown in Figure 1.
Figure 1: The decreasing curves of inertia weight ω^ attained by the proposed EDIW strategy for varying c.
According to Figure 1, for a given c, the rate of descent of ω^ gradually declines as the iteration step increases and flattens in the later iterations. Across different c, the early rate of descent of ω^ increases with c. A smaller c keeps ω^ from decreasing too fast in the early iterations, but ω^ then remains far from ω^min in the final iteration; for example, when c=1 and t=1000, the inertia weight ω^ is 0.4575, far from ω^min (0.2). A larger c makes ω^ decrease quickly, which helps discover promising suboptimal regions, but it may cause the algorithm to begin the local search prematurely; for example, when c=10 and T=1000, ω^ begins to flatten around iteration t=400. The parameter c should therefore be chosen appropriately in the EDIW-PSO algorithm; based on the analysis above, we choose c in the interval [6, 8]. During the early iterations ω^ then decreases more rapidly than in the linear case, which helps the algorithm discover promising suboptimal regions, and during the later iterations ω^ changes slowly, which is conducive to determining the optimal region among the already discovered suboptimal regions.
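The EDIW schedule of Eq. (8) is a one-line function; the sketch below (our own helper, `ediw`) reproduces entries of Table 1 when evaluated at t = T.

```python
import math

def ediw(t, T, c, w_max, w_min):
    """Eq. (8): Exponential Decreasing Inertia Weight.
    c > 0 controls how quickly the weight decays from w_max toward w_min."""
    return w_min + (w_max - w_min) * math.exp(-c * t / T)
```

For instance, ediw(1000, 1000, 6, 0.7, 0.2) ≈ 0.2012 and ediw(1000, 1000, 1, 0.9, 0.2) ≈ 0.4575, matching the corresponding entries of Table 1, while ediw(0, T, c, w_max, w_min) returns ω^max for any c.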
3. The Proposed RBF Model by EDIW-PSO Algorithm
In this section we use the proposed EDIW-PSO algorithm to determine the optimal structure of the RBFNN and establish the EDIW-PSO-RBF network model. The position vector L of each particle encodes the parameters to be optimized: the RBF centers cji, widths dj, and connection weights ωjk, where i=1,2,…,I, j=1,2,…,J, and k=1,2,…,K. Therefore the dimension of L for each particle in the EDIW-PSO algorithm is D=I·J+J+J·K. We map each L to the RBFNN and obtain the prediction output. The fitness function of the EDIW-PSO algorithm is defined as the relative mean square error (RMSE) between the predicted outputs and the actual values during network training. We can thus minimize the fitness value of the network using the powerful search performance of the EDIW-PSO algorithm. The fitness function is given by
(10) $f=e=\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\left(\frac{y_{nk}-\hat{y}_{nk}}{\hat{y}_{nk}}\right)^{2},$
where y^nk and ynk are, respectively, the actual value and the predicted value of the kth output neuron for the nth sample, N is the number of training samples, and K is the number of output neurons.
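The fitness of Eq. (10) can be sketched as follows. The helper name `fitness_rmse` is our own; inputs are (N, K) arrays, and the actual values are assumed nonzero since Eq. (10) divides by them.

```python
import numpy as np

def fitness_rmse(y_pred, y_actual):
    """Eq. (10): relative mean square error over N samples and K outputs.

    y_pred   : (N, K) network outputs y_nk
    y_actual : (N, K) actual (target) values y^_nk, assumed nonzero
    """
    # Sum the squared relative errors over the K outputs, then average over N.
    return np.mean(np.sum(((y_pred - y_actual) / y_actual) ** 2, axis=1))
```

With K = 1 this reduces to the mean of the squared relative errors, which is the form used for the AQI experiment in Section 4.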
The iteration process of the improved EDIW-PSO-RBF learning algorithm can be described as follows.
Step 1.
Initialize the relevant parameters, including the swarm size M, the position bound Lmax and velocity bound Vmax, the acceleration factors c1 and c2, and the maximum number of iterations T. Set t=1; for each particle, randomly select two D-dimensional vectors to initialize its position and velocity, respectively.
Step 2.
Map the position vector Li of each particle to the parameters of RBFNN.
Step 3.
Calculate the fitness value of each particle according to formula (10). Set the current position of each particle as its personal best pi. Then take the position with the minimum fitness value as the global best pg of the whole swarm.
Step 4.
Update the inertia weight ω^ according to formula (8). Modify the particle velocity Vi and position Li according to formula (4).
Step 5.
Map the new position vector Li of each particle to the parameters of RBFNN, input training data, and train RBFNN.
Step 6.
Recalculate the fitness values of the new particles and update pit and pgt. For each particle, if the current fitness value is better than the previous local best, set the current value as the local best; otherwise keep the previous local best. For the whole swarm, if the best of all current local bests is better than the previous global best, update the global best; otherwise keep the previous global best.
Step 7.
Check whether the termination condition t=T is satisfied. If it is, go to Step 8; otherwise set t=t+1 and go back to Step 4.
Step 8.
Record the global best value pg; exit the iteration.
Step 9.
Use the optimal RBFNN structure to perform the prediction task.
Apply the above nine steps until the termination condition holds. The flowchart is shown in Figure 2.
Figure 2: The flowchart of the EDIW-PSO-RBF learning algorithm.
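The nine steps above can be condensed into a single training routine. The following is a compact sketch in our own Python rendition, not the authors' code: the name `train_ediw_pso_rbf`, the parameter-vector layout in `decode`, and the small numerical guard on the Gaussian widths are all our assumptions, and the targets are assumed nonzero because Eq. (10) divides by them.

```python
import numpy as np

def train_ediw_pso_rbf(X, Y, J=10, M=50, T=1000, c=8.0, c1=2.0, c2=2.0,
                       w_max=0.9, w_min=0.2, bound=1.0, seed=0):
    """Steps 1-9: optimize RBF centers, widths, and weights with EDIW-PSO.
    X is (N, I), Y is (N, K) with nonzero entries; returns the global best
    parameter vector and its fitness."""
    rng = np.random.default_rng(seed)
    N, I = X.shape
    K = Y.shape[1]
    D = I * J + J + J * K                      # particle dimension (Section 3)

    def decode(L):                             # Steps 2/5: position -> RBF params
        centers = L[:I * J].reshape(J, I)
        widths = L[I * J:I * J + J]
        weights = L[I * J + J:].reshape(J, K)
        return centers, widths, weights

    def fitness(L):                            # Eq. (10): relative MSE
        centers, widths, weights = decode(L)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        h = np.exp(-d2 / (2.0 * widths ** 2 + 1e-12))   # Gaussian basis, Eqs. (1)-(2)
        pred = h @ weights                              # Eq. (3)
        return np.mean(np.sum(((pred - Y) / Y) ** 2, axis=1))

    # Step 1: random positions and velocities in [-bound, bound]
    L = rng.uniform(-bound, bound, (M, D))
    V = rng.uniform(-bound, bound, (M, D))
    # Step 3: initial personal and global bests
    f = np.array([fitness(l) for l in L])
    p_best, p_fit = L.copy(), f.copy()
    g = np.argmin(f)
    g_best, g_fit = L[g].copy(), f[g]

    for t in range(1, T + 1):                  # Steps 4-7
        w = w_min + (w_max - w_min) * np.exp(-c * t / T)   # Eq. (8)
        r1, r2 = rng.random((M, D)), rng.random((M, D))
        V = w * V + c1 * r1 * (p_best - L) + c2 * r2 * (g_best - L)  # Eq. (4)
        V = np.clip(V, -bound, bound)
        L = np.clip(L + V, -bound, bound)
        f = np.array([fitness(l) for l in L])
        improved = f < p_fit                   # Step 6: update local bests
        p_best[improved], p_fit[improved] = L[improved], f[improved]
        g = np.argmin(p_fit)
        if p_fit[g] < g_fit:                   # ... and the global best
            g_best, g_fit = p_best[g].copy(), p_fit[g]
    return g_best, g_fit                       # Steps 8-9
```

The clipping of both position and velocity to [-bound, bound] mirrors the experimental settings of Section 4, where Lmin = Vmin = −1 and Lmax = Vmax = 1.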
4. Experiment
To assess the prediction accuracy of the proposed EDIW-PSO-RBF model, we choose the daily air quality index (AQI) of Xi’an [28] for time series prediction. In recent years, air pollution has affected people's travel and daily life. The daily AQI is a dimensionless index that quantitatively describes air quality, calculated from the following six indicators: sulfur dioxide (SO2), nitrogen dioxide (NO2), particulate matter with particle size at most 10 microns (PM10), particulate matter with particle size at most 2.5 microns (PM2.5), carbon monoxide (CO), and ozone (O3). Among them, SO2, NO2, and CO are 24-hour average concentrations, and O3 is the 8-hour moving average concentration. We choose 400 data sets from 2013.1.1 to 2014.2.5 as training data and 5 data sets from 2014.2.6 to 2014.2.10 as test data. All data are normalized before being used in the model; we adopt the mapminmax function to normalize the data to the range [-1,1] according to the following formula:
(11) $y=\left(y_{\max}-y_{\min}\right)\frac{x-x_{\min}}{x_{\max}-x_{\min}}+y_{\min},$
where x is the original data before normalization, xmin and xmax are the minimum and maximum values before normalization, y is the data after normalization, and ymin and ymax are the minimum and maximum values after normalization, which are −1 and 1, respectively.
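Eq. (11) with ymin = −1 and ymax = 1 is the scaling performed by MATLAB's mapminmax; a Python equivalent can be sketched as follows (our own re-implementation of the formula, not MATLAB's function).

```python
import numpy as np

def mapminmax(x, y_min=-1.0, y_max=1.0):
    """Eq. (11): linearly rescale x to [y_min, y_max].
    Assumes x contains at least two distinct values (x_max > x_min)."""
    x = np.asarray(x, dtype=float)
    return (y_max - y_min) * (x - x.min()) / (x.max() - x.min()) + y_min
```

The minimum of x maps to −1, the maximum to 1, and intermediate values scale linearly in between.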
In this section, we assess the effectiveness of the proposed EDIW-PSO-RBF model by comparing it with three other inertia weight variants: the LDIW-PSO-RBF model [23], the NDIW-PSO-RBF model [24], and the GLbestIW-PSO-RBF model [25]. First, we build an RBF network consisting of 7 input neurons and 1 output neuron. The 7 input neurons take the six indicators and the current day's air quality index, and the output neuron gives the next day's air quality index. The number of hidden neurons is set to 10. Therefore, the dimension of each particle in the modified PSO algorithm is D=7×10+10+10×1=90.
Second, several parameters of the PSO simulation must be specified. In the proposed EDIW-PSO-RBF model, c is set to 8. In the NDIW-PSO-RBF model, the nonlinear modulation index n is set to 1.2. In all four models, the acceleration factors c1 and c2 are fixed at 2, the minimum velocity Vmin and minimum position Lmin of every particle are set to −1, and the maximum velocity Vmax and maximum position Lmax are set to 1. The maximum number of iterations is 1000 and the population size is 50.
To assess the performance of the four different models, mean square error (MSE), relative mean square error (RMSE), and mean absolute percentage error (MAPE) are used as criteria, defined as
(12) $\mathrm{MSE}=\frac{1}{N}\sum_{n=1}^{N}\left(y_n-\hat{y}_n\right)^{2},\quad \mathrm{RMSE}=\frac{1}{N}\sum_{n=1}^{N}\left(\frac{y_n-\hat{y}_n}{\hat{y}_n}\right)^{2},\quad \mathrm{MAPE}=\frac{1}{N}\sum_{n=1}^{N}\left|\frac{y_n-\hat{y}_n}{\hat{y}_n}\right|\times 100\%,$
where y^n and yn denote the actual value and the network output value at the nth day, respectively.
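The three criteria of Eq. (12) can be sketched as follows (hypothetical helper names; note that, following the text's definition, RMSE here is a relative mean square error, not a root mean square error).

```python
import numpy as np

def mse(y, y_hat):
    """Eq. (12): mean square error; y = network outputs, y_hat = actual values."""
    return np.mean((y - y_hat) ** 2)

def rmse(y, y_hat):
    """Eq. (12): relative mean square error (division by the actual values)."""
    return np.mean(((y - y_hat) / y_hat) ** 2)

def mape(y, y_hat):
    """Eq. (12): mean absolute percentage error, in percent."""
    return np.mean(np.abs((y - y_hat) / y_hat)) * 100.0
```

All three divide (or compare) against the actual values y_hat, which are therefore assumed nonzero, as is the case for the AQI series.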
The actual daily air quality index (AQI) of Xi’an from January 2, 2013 to February 5, 2014 is shown in Figure 3. In this experiment, the proposed EDIW-PSO method, the LDIW-PSO method, the NDIW-PSO method, and the GLbestIW-PSO method are used to train RBFNNs. The global best fitness values of the four methods during training are given in Figure 4, which shows that the proposed EDIW-PSO method has the fastest convergence rate and the lowest fitness value among the four PSO methods.
Figure 3: The actual daily air quality index (AQI) of Xi’an from January 2, 2013 to February 5, 2014.
Figure 4: The best global fitness value for the AQI of Xi’an. The blue short-dashed line shows the best global fitness value of the improved EDIW-PSO-RBF model; the red dash-dotted line, the GLbestIW-PSO-RBF model; the green solid line, the LDIW-PSO-RBF model; the black solid line, the NDIW-PSO-RBF model.
When the fitness of EDIW-PSO reaches its minimum, the optimal RBFNN parameters, namely the centers cji, widths dj, and connection weights ωjk, are as shown in Table 2. Figure 5 shows the trained output curves of the four methods for the daily air quality index (AQI) of Xi’an, and Figure 6 shows the absolute errors between the trained outputs and the actual values for the four methods. The MSE, RMSE, and MAPE (%) values of the trained output by the four methods are given in Table 3. From Table 3, the MSE, RMSE, and MAPE (%) of the trained output by EDIW-PSO are 2.5389×10², 0.1350, and 2.8644, respectively, all the smallest among the four methods.
Table 2: The parameters of the RBFNN by EDIW-PSO: centers cji, widths dj, and connection weights ωjk.

j      1        2        3        4        5        6        7        8        9        10
cj1    1.0000  −0.9849  −0.9569   0.7735   0.5874   1.0000  −0.4653  −1.0000   0.7688  −0.3863
cj2    0.5752  −0.1652  −0.9023  −0.9643   0.6898   0.4439   0.4378   0.4779   0.0860  −0.8635
cj3    0.4994   0.2723  −1.0000  −0.1271   0.7500   0.5629   0.2198  −0.9292   0.7286  −0.9566
cj4    0.5250  −0.2811   0.7298   0.9254  −0.1321   0.3404  −0.3397  −0.1867   0.5373  −0.6491
cj5   −0.0776  −1.0000   0.1190   0.4846   0.2787   0.1364  −0.8673   0.9034   1.0000   0.0959
cj6   −0.4532   0.9372   1.0000   0.5370   0.2975   0.9214   0.4095  −1.0000  −0.0813  −0.2635
cj7   −0.0124   1.0000  −0.4612   0.0190  −0.1061  −0.2451   0.7454   0.1273   0.1925   0.0609
dj    −0.0156  −0.9724   0.0728   0.0588  −0.5133   0.8992   1.0000  −0.0857   0.7017  −0.9959
ωj    −0.0289  −0.7936  −0.5968   0.8891  −0.5265  −0.0812   0.3979   0.2548  −0.9623   0.8871
Table 3: The values of MSE, RMSE, and MAPE (%) of the trained output by the four models for the daily air quality index (AQI) of Xi’an.

Error         LDIW-PSO   NDIW-PSO   GLbestIW-PSO   EDIW-PSO
MSE (×10²)    3.6565     2.6560     3.5800         2.5389
RMSE          0.2239     0.1846     0.1935         0.1350
MAPE (%)      3.2143     2.9898     3.1684         2.8644
Figure 5: The trained output curves of the four models for the daily air quality index (AQI) of Xi’an (panels: trained output by LDIW-PSO-RBF, NDIW-PSO-RBF, GLbestIW-PSO-RBF, and EDIW-PSO-RBF).
Figure 6: Plots of the absolute errors between the trained outputs and the actual values for the four models for the daily air quality index (AQI) of Xi’an (panels: trained error by LDIW-PSO-RBF, NDIW-PSO-RBF, GLbestIW-PSO-RBF, and EDIW-PSO-RBF).
Based on the optimal RBFNN parameters trained by each of the four methods, we predict the daily air quality index (AQI) of Xi’an for the following five days, from 2014.2.6 to 2014.2.10. Table 4 shows the predicted outputs of the LDIW-PSO-RBFNN, NDIW-PSO-RBFNN, GLbestIW-PSO-RBFNN, and EDIW-PSO-RBFNN models. Table 5 shows the MSE, RMSE, and MAPE (%) of the predicted outputs of the four models according to formula (12). For the EDIW-PSO-RBF model, the MSE, RMSE, and MAPE (%) are 4.9985×10², 0.0080, and 7.5166, respectively, all the smallest among the four models.
Table 4: The predicted output of the four models for the daily air quality index (AQI) of Xi’an for the following five days (2014.2.6–2014.2.10).

Day   Actual     LDIW-PSO   NDIW-PSO   GLbestIW-PSO   EDIW-PSO
1     231.2500   224.3392   222.8994   223.3160       234.1575
2     222.0000   247.3886   245.6876   243.5894       236.7476
3     280.0000   240.9040   250.5690   249.1804       241.5187
4     215.0000   246.8951   248.3747   254.2518       241.8096
5     247.5000   249.7290   246.5483   243.1647       238.9121
Table 5: The values of MSE, RMSE, and MAPE (%) of the predicted output by the four models for the daily air quality index (AQI) of Xi’an.

Error         LDIW-PSO   NDIW-PSO   GLbestIW-PSO   EDIW-PSO
MSE (×10²)    6.4862     5.2236     6.0768         4.9985
RMSE          0.0111     0.0096     0.0113         0.0080
MAPE (%)      8.8246     8.1400     8.8342         7.5166
5. Conclusion
In this paper, we present and discuss an improved EDIW-PSO-RBF model for prediction problems. Based on the EDIW-PSO algorithm, we optimize the centers, widths, and connection weights of a radial basis function (RBF) neural network. The EDIW-PSO-RBF model is then applied to daily air quality index (AQI) prediction and compared with the LDIW-PSO-RBF, NDIW-PSO-RBF, and GLbestIW-PSO-RBF models. In the process of optimizing the RBF network, the fitness value obtained by the proposed EDIW-PSO method is smaller than those obtained by the other three methods, and for both the training data and the prediction test data, the MSE, RMSE, and MAPE (%) obtained by the improved EDIW-PSO-RBF model are all smaller than those of the other three models. The simulation results show that PSO with the proposed EDIW inertia weight provides better results than PSO with LDIW, NDIW, or GLbestIW. The proposed EDIW-PSO-RBF model can therefore be satisfactorily applied to other prediction problems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was fully supported by the National Natural Science Foundation of China under Grant no. 61275120.
References

[1] Broomhead D. S., Lowe D., "Radial basis functions, multi-variable functional interpolation and adaptive networks," Royal Signals and Radar Establishment, Malvern, UK, 1988.
[2] Huang G., Saratchandran P., Sundararajan N., "A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation."
[3] Albrecht S., Busch J., Kloppenburg M., Metze F., Tavan P., "Generalized radial basis function networks for classification and novelty detection: self-organization of optimal Bayesian decision."
[4] Han H., Chen Q., Qiao J., "An efficient self-organizing RBF neural network for water quality prediction."
[5] Li M. B., Huang G. B., Saratchandran P., Sundararajan N., "Performance evaluation of GAP-RBF network in channel equalization."
[6] Zhang A., Chen C., Karimi H. R., "A new adaptive LSSVR with on-line multikernel RBF tuning to evaluate analog circuit performance."
[7] Walczak B., Massart D. L., "Local modelling with radial basis function networks."
[8] Herrera L. J., Pomares H., Rojas I., Guillén A., Rubio G., Urquiza J., "Global and local modelling in RBF networks."
[9] Saranli A., Baykal B., "Complexity reduction in radial basis function (RBF) networks by using radial B-spline functions."
[10] Punzi A., Sommariva A., Vianello M., "Meshless cubature over the disk using thin-plate splines."
[11] Zhang J. Z. H., Li H. Y., "A reconstruction approach to CT with Cauchy RBFs network."
[12] Figueiredo M. A. T., "On Gaussian radial basis function approximations: interpretation, extensions, and learning strategies," in Proceedings of the 15th IEEE International Conference on Pattern Recognition, vol. 2, Barcelona, Spain, 2000, pp. 618–621.
[13] Chen S., Cowan C. F. N., Grant P. M., "Orthogonal least squares learning algorithm for radial basis function networks."
[14] Langari R., Wang L., Yen J., "Radial basis function networks, regression weights, and the expectation-maximization algorithm."
[15] Karayiannis N. B., "Reformulated radial basis neural networks trained by gradient descent."
[16] Grabusts P. S., "A study of clustering algorithm application in RBF neural networks."
[17] Billings S. A., Zheng G. L., "Radial basis function network configuration using genetic algorithms."
[18] Man C., Li X., Zhang L., "Radial basis function neural network based on ant colony optimization," in Proceedings of the International Conference on Computational Intelligence and Security Workshops (CIS '07), Harbin, China, December 2007, pp. 59–62.
[19] Chen S., Hong X., Luk B. L., Harris C. J., "Non-linear system identification using particle swarm optimisation tuned radial basis function models."
[20] Wu D., Warwick K., Ma Z., Gasson M. N., Burgess J. G., Pan S., Aziz T. Z., "Prediction of Parkinson's disease tremor onset using a radial basis function neural network based on particle swarm optimization."
[21] Huang G. B., Zhou H., Ding X., Zhang R., "Extreme learning machine for regression and multiclass classification."
[22] Eberhart R., Kennedy J., "A new optimizer using particle swarm theory," in Proceedings of the 6th International Symposium on Micro Machine and Human Science, Nagoya, Japan, October 1995, pp. 39–43.
[23] Shi Y., Eberhart R., "A modified particle swarm optimizer," in Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (ICEC '98), May 1998, pp. 69–73.
[24] Chatterjee A., Siarry P., "Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization."
[25] Arumugam M. S., Rao M. V. C., "On the performance of the particle swarm optimization algorithm with various inertia weight variants for computing optimal control of a class of hybrid systems."
[26] Kaiyou L., Yuhui Q., Yi H., "A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization," in Proceedings of the 1st International Symposium on Systems and Control in Aerospace and Astronautics (ISSCAA '06), January 2006, pp. 977–980.
[27] Chen R., Wang C., "Project scheduling heuristics-based standard PSO for task-resource assignment in heterogeneous grid."
[28] http://www.xianemc.gov.cn/