Electrical load forecasting plays a key role in power system planning and operation procedures. A variety of techniques have been employed for electrical load forecasting so far. Among them, neural-network-based methods yield lower prediction errors because they can adapt to the hidden characteristics of the consumed load, and they have therefore been widely adopted by researchers. Since the parameters of a neural network have a significant impact on its performance, this paper proposes a short-term electrical load forecasting method based on a neural network and the particle swarm optimization (PSO) algorithm, in which the PSO algorithm determines neural network parameters, namely the learning rate and the number of hidden-layer neurons, so that the electrical load can be forecast precisely. The neural network with these optimized parameters is then used to predict the short-term electrical load. The method employs a three-layer feedforward neural network trained by the backpropagation algorithm together with an improved gbest PSO algorithm, and the neural network's prediction error is defined as the PSO algorithm's cost function. The proposed approach has been tested on the Iranian power grid using MATLAB software. The averages of three indices, together with graphical results, are used to evaluate the performance of the proposed method. The simulation results reflect the capability of the proposed method to predict the electrical load accurately.

Load forecasting is a crucial process in the management and operation of power systems, and accurate forecasts can lead to significant cost savings. Moreover, very important decisions are made based on the forecasted load, and their economic consequences are notable [

Load forecasting can be divided into three categories: short-term, mid-term, and long-term forecasting [

Owing to their great ability to model nonlinear relationships between inputs and outputs, artificial neural networks are increasingly used in load forecasting [

Optimizing neural network architecture design, including determining the number of input variables, the number of input nodes, and the number of hidden neurons to enhance prediction performance, is an important issue in intelligent systems [

Reference [

The rest of this paper is organized as follows: A brief overview of neural networks and PSO algorithms is presented in Sections

An artificial neural network is inspired by the way information is processed in biological nervous systems and consists of an interconnected group of elements called neurons [

Feedforward neural network with a hidden layer.

An artificial neural network with an input layer, one or more hidden layers, and an output layer is called a multilayer perceptron. Each layer consists of several neurons, and the neurons of each layer are connected to those of the adjacent layers through weights. Weights and biases are the two adjustable parameters of a neural network, and tuning them is performed by a training algorithm [
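A single forward pass through such a network can be sketched in a few lines (a minimal illustration; the layer sizes and random weights here are arbitrary, and `tanh` stands in for a sigmoid-type transfer function):

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One pass through a three-layer (input-hidden-output) perceptron."""
    h = np.tanh(W1 @ x + b1)      # hidden-layer activations
    return np.tanh(W2 @ h + b2)   # output-layer activations

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2   # illustrative sizes only
W1 = rng.standard_normal((n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_out, n_hidden))
b2 = np.zeros(n_out)

y = forward(rng.standard_normal(n_in), W1, b1, W2, b2)
```

Training adjusts `W1`, `b1`, `W2`, and `b2` so that the outputs match the targets.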

Swarm intelligence algorithms include several classical methods such as the ant colony optimizer (ACO) [

PSO is an iterative optimization method in which a population of candidate solutions, called particles, is first generated for the search process. These particles then travel through a multidimensional search space [

The benefits of PSO over other metaheuristic approaches are its computational feasibility and effectiveness. PSO is distinguished by its easy implementation and consistent performance [

Only a fitness function measuring the "quality" of a solution is required, instead of complex mathematical operations such as the gradient, Hessian, or matrix inversion. This reduces the computational complexity and relieves some of the restrictions usually imposed on the objective function, such as differentiability, continuity, or convexity.

As a population-based algorithm, it is less dependent on a good initial solution.

It easily incorporates other optimization tools to form hybrid methods.

It has the ability to escape local minima, since it follows probabilistic transition rules.

Further advantages of PSO become evident when it is compared with other evolutionary methods such as GA, HHO, GWO, and so forth:

Easily programmed and modified with basic mathematical and logic operations.

Inexpensive in terms of computation time and memory.

Less parameter tuning is required.

It works directly with real-valued numbers, eliminating the binary encoding required by the classical canonical genetic algorithm [

Several PSO variants have been proposed so far, among which the gbest algorithm is the most popular. In this approach, the whole population is treated as a single neighborhood for every particle during the experience-gaining process, and to guide the search the best particle shares its coordinate information with the other particles [

In this algorithm, the velocity of the i-th particle is updated as

v_i^(k+1) = w · v_i^k + c_1 r_1 (pbest_i^k − x_i^k) + c_2 r_2 (gbest^k − x_i^k),  (1)

where v_i^k and x_i^k are the velocity and position of the i-th particle in the k-th iteration, respectively; c_1 and c_2 are constants called the personal learning factor and the social learning factor, respectively; r_1 and r_2 are random numbers in [0, 1]; pbest_i^k is the best position found so far by particle i; and gbest^k is the best position found so far by the whole swarm.

The particle position is updated according to the following equation:

x_i^(k+1) = x_i^k + v_i^(k+1),  (2)

where x_i^(k+1) is the new particle position, x_i^k is the previous particle position, and v_i^(k+1) is the updated velocity.

Early variants of the PSO algorithm exhibited undesirable dynamic characteristics and required explicit velocity limits to control the particle trajectories. In this paper, by constraining the factors according to (3) and (4), the dynamic behaviour of the particle swarm can be controlled and a balance between local search and global search is achieved [
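The gbest update rules above can be sketched as follows (a minimal illustration in which position clipping stands in for the factor limits; the sphere test function and all parameter values are assumptions, not the paper's settings):

```python
import numpy as np

def gbest_pso(cost, bounds, n_particles=20, iters=50,
              w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal gbest PSO: each particle learns from its own best position
    (pbest) and the single swarm-wide best position (gbest)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # initial positions
    v = np.zeros_like(x)                          # initial velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()         # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive + social terms, as in (1)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # position update, as in (2)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

# sphere function: minimum of 0 at the origin
best, best_cost = gbest_pso(lambda p: np.sum(p ** 2),
                            (np.full(3, -5.0), np.full(3, 5.0)))
```

In the proposed method the sphere function would be replaced by the neural network's forecasting error.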

In this paper, a feedforward neural network trained by the backpropagation algorithm has been chosen to predict the short-term electrical load. The network contains one hidden layer, and the number of neurons in this layer is treated as an optimization parameter. In designing the neural network architecture, the number of hidden-layer neurons has an important effect on network performance, so it must be chosen with care: if it is too low, the network struggles in the training step, and if it is too high, the network overfits. The network learning rate is taken as the second optimization parameter. Suitable values for these two parameters are found using the improved PSO algorithm introduced in Section
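The mapping from a particle's position to the two tuned hyperparameters could look like the following (a hypothetical encoding; the bounds are illustrative assumptions, not the paper's values):

```python
import numpy as np

def decode_particle(p):
    """Map a 2-D particle position to the two tuned hyperparameters.
    The bounds below are illustrative assumptions only."""
    lr = float(np.clip(p[0], 1e-4, 0.5))           # learning rate (continuous)
    n_hidden = int(np.clip(round(p[1]), 2, 50))    # hidden neurons (integer)
    return lr, n_hidden

lr, n_hidden = decode_particle(np.array([0.03, 17.6]))
```

Rounding the second coordinate lets a continuous PSO search over the integer-valued neuron count; each decoded pair would then be scored by training the network and measuring its forecast error.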

The flowchart of the proposed method is reflected in Figure

The flowchart of the proposed method (n is the total number of pieces of test data).

This section provides more details about the proposed method that was presented in the previous section. As stated in Section

In order to improve the performance of the neural network and prevent neuron saturation, all data used in the neural network are normalized using the following formula:
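A common choice for such normalization is min-max scaling; since the exact formula is not reproduced here, the sketch below assumes scaling to [0, 1]:

```python
import numpy as np

def normalize(x, lo=None, hi=None):
    """Min-max scaling to [0, 1]; one common normalization
    (the paper's exact formula may differ)."""
    lo = x.min() if lo is None else lo
    hi = x.max() if hi is None else hi
    return (x - lo) / (hi - lo)

load = np.array([12000.0, 15000.0, 18000.0])  # illustrative load values
scaled = normalize(load)
```

The same `lo` and `hi` computed on the training data would be reused to scale the test data, so the network never sees the test range during training.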

The first 900 days of the 1093 days studied are used as training data, and the remaining 193 days serve as the neural network's test data. The number of neurons in the output layer is 24 and, owing to the crucial role of the number of hidden-layer neurons, that number is treated as an optimization parameter. The transfer function for both the hidden layer and the output layer is tansig, and the training algorithm is Levenberg-Marquardt [

The neural network's forecasting error is taken as the cost function and, as stated before, two network variables, the learning rate and the number of hidden-layer neurons, serve as the optimization variables for the PSO algorithm. The improved type of PSO is used, the parameters of which are as in (

This section demonstrates the effectiveness of the proposed method, which has been simulated in MATLAB software. The computing system is a Core i5 machine with a 1.6 GHz CPU and 4 GB of memory.

The chosen test data are used to evaluate the network's performance; the results for some days are shown in Figure

Load forecasting for 7 September 2012.

Load forecasting for 24 December 2012.

Load forecasting for 17 September 2012.

Load forecasting for 26 September 2012.

Load forecasting for 1 October 2012.

Load forecasting for 30 September 2012.

The obtained results clearly show the appropriate performance of the introduced approach.

For further evaluation of the forecasting model, the MSE, MAPE, and MAE indices are calculated according to the table of error indices below, where P_m(i) is the actual load value of the i-th data point and P_p(i) is its predicted load value.

Error indices.

Index | Formula
---|---
MAPE | (1/n) Σ_{i=1}^{n} |P_m(i) − P_p(i)| / P_m(i) × 100
MSE | (1/n) Σ_{i=1}^{n} (P_m(i) − P_p(i))^2
MAE | (1/n) Σ_{i=1}^{n} |P_m(i) − P_p(i)|
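These three indices can be computed directly from the actual and predicted series (a straightforward sketch using the standard definitions; the sample values are illustrative only):

```python
import numpy as np

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((actual - pred) / actual)) * 100

def mse(actual, pred):
    """Mean squared error."""
    return np.mean((actual - pred) ** 2)

def mae(actual, pred):
    """Mean absolute error."""
    return np.mean(np.abs(actual - pred))

actual = np.array([100.0, 200.0, 400.0])  # illustrative loads
pred   = np.array([110.0, 190.0, 400.0])
```

For this toy data, `mape` is 5.0, `mse` is 200/3, and `mae` is 20/3.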

Error calculation results.

MAPE | MSE | MAE
---|---|---
0.03388 | 1.26268 | 0.02191

Although these results are acceptable, this paper uses only historical load data; incorporating other factors that influence load behaviour could further improve the method's performance.

Electrical load forecasting affects power system operation and planning processes to such an extent that the correct operation of the power system depends on the precision of this prediction. The behaviour of the power system, especially its generation units at small or large scale, is also affected by this prediction, and its deviation from the actual value can impose additional costs on the system. Numerous load forecasting methods have been proposed so far, among which neural-network-based methods are prominent. Because the relationships between load pattern changes and the parameters that influence them are nonlinear and complex, and neural networks are able to discover such relationships, researchers have favoured them over other methods. At the same time, the numerical values of the neural network parameters have a clear effect on performance, so optimization algorithms such as PSO can help. This paper proposes an approach for electrical load forecasting using the PSO algorithm and a neural network trained with backpropagation. First, the PSO algorithm tunes selected neural network parameters to obtain an optimized and appropriate model; then the neural network with these optimized parameters is used for short-term electrical load forecasting. The simulation results indicate the precision and power of the proposed method in short-term electrical load forecasting. As a future direction, we will develop a model based on deep learning techniques [

The data used to support the findings of this study are included within the article.

The authors declare that there are no conflicts of interest regarding the publication of this paper.