Performance Analysis of a Wind Turbine Pitch Neurocontroller with Unsupervised Learning

In this work, a neural controller for wind turbine pitch control is presented. The controller is based on a radial basis function (RBF) network with an unsupervised learning algorithm. The RBF network uses the error between the output power and the rated power and its derivative as inputs, while the integral of the error feeds the learning algorithm. A performance analysis of this neurocontrol strategy is carried out, showing the influence of the RBF parameters, the wind speed, the learning parameters, and the control period on the system response. The neurocontroller has been compared with a proportional-integral-derivative (PID) regulator for the same small wind turbine, obtaining better results. Simulation results show how the learning algorithm allows the neural network to adjust the proper control law to stabilize the output power around the rated power and to reduce the mean squared error (MSE) over time.


Introduction
Green directives in many countries promote the use of renewable energies to improve the sustainability of worldwide energy systems. Indeed, the number of terawatts produced by clean energies grows each year [1]. Among clean energies, wind is the second most used natural resource after hydropower, due to its high efficiency. Although wind power is a mature technology, there are still many engineering challenges related to wind turbines (WTs) that must be addressed [2].
Depending on the type of wind turbine, different control actions can be applied, namely: the pitch angle of the blades or rotor control, which is used as a brake to maintain the rated power of the turbine once the wind surpasses a certain threshold; the yaw angle, which is used to change the attitude of the nacelle to match the wind stream direction; and finally, the generator speed control, which seeks to reach the optimal rotor velocity when the wind is below the rated output speed. The WT controller is in charge of managing all of these mechanisms to optimize the efficiency of the system while guaranteeing safety under all possible wind conditions. This fact may be even more critical for floating offshore wind turbines (FOWTs), as it has been proved that the control system can affect the stability of the floating device [3,4]. The pitch control of a wind turbine is a complex task in itself due to the highly nonlinear behaviour of these devices, the coupling between the internal variables, and because they are subject to uncertain and varying parameters caused by external loads, mainly wind, and, in the case of FOWTs, also waves and currents. These reasons have led to the exploration of intelligent control techniques to tackle these challenges [5]. Among traditional control solutions, sliding mode control has recently been applied with successful results, such as in [6], where a PI-type sliding mode control (SMC) strategy for permanent magnet synchronous generator- (PMSG-) based wind energy conversion system (WECS) uncertainties is presented. Nasiri et al. [7] proposed a supertwisting sliding mode control for a gearless wind turbine driven by a permanent magnet synchronous generator. A robust SMC approach is also proposed in [8], where the authors use the blade pitch as control input in order to regulate the rotor speed to a fixed rated value.
In [9], an adaptive robust integral SMC pitch angle controller and a projection-type adaptation law are synthesized to accurately track the desired pitch angle trajectory while compensating for model uncertainties and disturbances.
Regarding intelligent control, fuzzy logic has been widely applied to wind turbine pitch control. For example, in [10], pitch angle fuzzy control is proposed and compared to a PI controller for real weather characteristics and load variations. Rocha et al. [11] applied a fuzzy controller to a variable speed wind turbine and compared the results with a classical proportional controller in terms of system response characteristics. Rubio et al. [12] presented a fuzzy logic-based control system for the control of a wind turbine installed on a semisubmersible platform. The application of neural networks to turbine pitch control is scarcer, perhaps due to the lack of real data to train the networks [13]. However, Asghar and Liu [14] designed a neurofuzzy algorithm for the optimal rotor speed of a wind turbine. In [15], artificial neural network-based reinforcement learning for WT yaw control is presented. In [5], a passive reinforcement learning algorithm solved by particle swarm optimization is used to handle an adaptive neurofuzzy type-2 inference system for controlling the pitch angle of a real wind turbine. In [16], a robust H∞ observer-based fuzzy controller is designed to control the turbine using the estimated wind speed; two artificial neural networks are used to accurately model the aerodynamic curves. From a different point of view, in [17], the authors proposed an information management system based on mixed integer linear programming (MILP) for a wind power producer that has an energy storage system and participates in a day-ahead electricity market.
In this work, we have focused on the pitch control of a small wind turbine. Based on the neural control strategy proposed in [18], we have extended it to deal with the dynamics of the pitch actuator. Besides, the derivative and the integral of the power error have been added as inputs to the learning algorithm. This way, the error variation and the past error values are considered and used to update the weights of the neural network, which helps to accelerate the learning process. The main contribution of this paper is twofold. On the one hand, a radial basis function (RBF) network wind turbine pitch controller is designed and implemented. This controller uses the output power to update the weights of the neural network in an unsupervised way. On the other hand, a detailed analysis has been carried out on how the configuration of the neural network, the learning algorithm, and the controller parameters affect the control performance and the evolution of the error. Another advantage of the approach presented here is that, in contrast to traditional controllers, which have different control schemes for different wind speed regions, only one controller is used for all operational regions of the wind turbine. The rest of the paper is organized as follows. Section 2 describes the model of the small wind turbine used. Section 3 explains the neural controller architecture and the unsupervised learning strategy. The results for different neural network configurations and learning parameters are analysed and discussed in Section 4. The paper ends with the conclusions and future works.

Wind Turbine Model Description
The model of a small 7 kW wind turbine is developed. The ratio of the gear box is set to 1, so the rotor torque is the same as the mechanical torque of the generator, T_m (Nm), given by the following equation [19]:

T_m = (1/2) ρ A C_p(λ, θ) v^3 / w,

where C_p is the power coefficient; ρ is the air density (kg/m^3); A is the area swept by the turbine blades (m^2); v is the wind speed (m/s); and w is the angular rotor speed (rad/s). The blade swept area can be approximated by A = πR^2, where R is the radius or blade length. The power coefficient is usually determined experimentally for each turbine. There are different expressions to approximate C_p; in this case, it has been calculated as a function of the tip speed ratio λ and the blade pitch angle θ (rad), using an exponential expression whose coefficients c_1 to c_9 depend on the characteristics of the wind turbine. The pitch angle θ is defined as the angle between the rotation plane and the blade cross-section chord, and the tip speed ratio is given by the following equation:

λ = wR/v.

From equation (3), it is possible to observe how C_p decreases with the pitch angle. Indeed, when θ = 0 (rad), the blades fully face the wind and the turbine produces at its full potential, but with θ = π/2 (rad), the blades are out of the wind. The pitch actuator is modelled as a second-order system; this assumption is widely used to model pitch systems in wind turbines and other mechanical actuators [20]. In this case, θ_ref is the input of the pitch actuator and θ is its output. Thus far, the model is focused on the mechanical aspects of the system, but the dynamics of the generator combine the mechanical and electrical domains.
The relation between the rotor angular speed w and the mechanical torque T_m in a continuous current generator is given by the following expressions [21]:

J (dw/dt) = T_m − T_em − K_f w,    T_em = K_g K_ϕ I_a,

where T_em is the electromagnetic torque (Nm), J is the rotational inertia (kg·m^2), K_f is the friction coefficient (N·m·s/rad), K_g is a dimensionless constant of the generator, K_ϕ is the magnetic flow coupling constant (V·s/rad), and I_a is the armature current (A). The armature current of the generator is then given by the following equations:

L_a (dI_a/dt) = E_a − V − R_a I_a,    E_a = K_g K_ϕ w,

where L_a is the armature inductance (H), E_a is the induced electromotive force (V), V is the generator output voltage (V), and R_a is the armature resistance (Ω). For simplicity, it is commonly assumed that the load is purely resistive, given by R_L. Thus, V = R_L I_a, and the output power (W) is P_out = R_L I_a^2. By combining the previous equations (1)–(9), the following expressions summarize the dynamics of the wind turbine.
In this work, we have focused on controlling the output power by means of the pitch angle, so the input control variable is θ_ref and the controlled output variable is P_out (boldfaced in equations (10)–(15)). The state variables are I_a, w, θ, and dθ/dt. The wind turbine parameters used during the simulations are shown in Table 1 [19].
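The four-state model described above (armature current, rotor speed, pitch angle, and pitch rate) can be sketched as a forward-Euler simulation step. This is a minimal illustration, not the paper's implementation: the numerical parameter values, the actuator natural frequency and damping, and the toy C_p surface are placeholders, not the values of Table 1 or the c_1–c_9 expression.

```python
import math

# Illustrative parameters (placeholders, NOT the paper's Table 1 values)
RHO, R_BLADE = 1.225, 3.0          # air density (kg/m^3), blade length (m)
J, KF = 1.5, 0.025                 # inertia (kg*m^2), friction (N*m*s/rad)
KG_KPHI = 0.9                      # generator constant K_g*K_phi (V*s/rad)
LA, RA, RL = 0.05, 0.3, 5.0        # inductance (H), armature/load resistance (ohm)
WN, ZETA = 11.1, 0.7               # assumed pitch-actuator natural freq / damping

A_SWEPT = math.pi * R_BLADE ** 2   # A = pi * R^2

def cp(lmbda, theta):
    """Toy power-coefficient surface that decreases with pitch angle
    (a stand-in for the turbine-specific c1..c9 expression)."""
    return max(0.0, 0.45 * math.cos(theta) * math.exp(-3.0 / max(lmbda, 1e-6)))

def step(state, theta_ref, v, dt):
    """One forward-Euler step of the 4-state model (I_a, w, theta, theta_dot)."""
    ia, w, th, thd = state
    lmbda = w * R_BLADE / v                          # tip speed ratio
    tm = 0.5 * RHO * A_SWEPT * cp(lmbda, th) * v ** 3 / max(w, 1e-3)
    tem = KG_KPHI * ia                               # electromagnetic torque
    ea = KG_KPHI * w                                 # induced EMF
    dia = (ea - (RA + RL) * ia) / LA                 # armature loop, resistive load
    dw = (tm - tem - KF * w) / J                     # rotor dynamics
    thdd = WN ** 2 * (theta_ref - th) - 2 * ZETA * WN * thd  # 2nd-order actuator
    return (ia + dt * dia, w + dt * dw, th + dt * thd, thd + dt * thdd)
```

The output power is then recovered at each step as P_out = R_L * I_a**2, consistent with the resistive-load assumption above.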

Neural Controller Architecture.
The architecture of the proposed wind turbine neural controller is shown in Figure 1. The error P_err is the difference between the power reference signal P_ref (rated power) and the power output. The nominal power of this wind turbine is 7 kW. The power error, P_err, and its derivative, Ṗ_err, are saturated to keep their values within a suitable range; the saturated signals are P_err_S and Ṗ_err_S, respectively. They are the inputs of the radial basis neural network that implements the controller. The output of the neural network, RBF_o, is biased by π/4 and goes through a saturation block that adapts it to the range [0, π/2] (rad). The result of this process is the signal θ_ref, which is used as the pitch reference of the wind turbine control.
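The signal path just described can be sketched as follows. The saturation limits are the ones used later in the experiments of Section 4, and the sign of the π/4 bias (a larger network output reducing θ_ref, as suggested by the learning discussion in Section 3) is an assumption of this sketch.

```python
import math

def clamp(x, lo, hi):
    """MIN(hi, MAX(lo, x)) -- the saturation operator used in the paper."""
    return min(hi, max(lo, x))

# Saturation limits taken from the experimental setup in Section 4
P_ERR_MIN, P_ERR_MAX = -1000.0, 1000.0
DP_ERR_MIN, DP_ERR_MAX = -400.0, 400.0

def pitch_reference(p_err, dp_err, rbf):
    """P_err and its derivative are saturated and fed to the RBF; the RBF
    output is biased by pi/4 and clipped to [0, pi/2] to give theta_ref.
    The minus sign is an assumption: larger RBF output -> smaller pitch."""
    p_err_s = clamp(p_err, P_ERR_MIN, P_ERR_MAX)
    dp_err_s = clamp(dp_err, DP_ERR_MIN, DP_ERR_MAX)
    rbf_o = rbf(p_err_s, dp_err_s)
    return clamp(math.pi / 4 - rbf_o, 0.0, math.pi / 2)
```

With an untrained network whose output is zero, the pitch reference starts at the π/4 midpoint of its range.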
The neural network must learn the control law f_c: R^2 → R that is able to stabilize the wind turbine output power around its nominal value. This function is not known beforehand. In other control schemes, the weights of the RBF network are updated using supervised learning, which requires a known input/output dataset to train the neural network; the network then generates the expected output when it receives an input similar to the ones used for training. In our case, however, there are no labelled output data to train the network.
If we knew the correct pitch control signal for each pair (P_err, Ṗ_err), we would already know the appropriate control law, and we would not need a neural network to learn it. For this reason, it is not possible to use supervised learning. That is why, in this approach, the learning algorithm receives the error signal P_err, its derivative, and its integral and combines them to generate the new weights of the neural network. The equations of this neurocontrol strategy are the following: where T_c is the control period (s); the maximum and minimum values of the variables, [P_err_MIN, P_err_MAX, Ṗ_err_MIN, Ṗ_err_MAX] ∈ R^4, are the constants that allow the range of the controller to be adjusted, with the constraints P_err_MIN < P_err_MAX and Ṗ_err_MIN < Ṗ_err_MAX; f_RBF is the RBF function; and f_learn denotes the function of the learning algorithm.
The MIN and MAX operators in equations (18), (19), and (22) are applied to keep the signal value within its boundaries. In the expression MIN(v_1, MAX(v_2, v_3)), v_1 is the upper bound, v_2 is the lower bound, and v_3 is the signal to be saturated: the MAX operator keeps v_3 above the lower bound, and the MIN operator keeps the result below the upper bound.

Setting Up RBF.
The aim of the RBF is to compute the bidimensional function f_c(P_err, Ṗ_err) → RBF_o, which implements the control law able to stabilize P_out around P_ref. As is well known, any continuous function can be approximated by a sum of exponential (Gaussian) functions. In this work, we take advantage of this property to approximate the control law by the RBF neural network. In order to map the input space to the output space, we discretize the bidimensional input space of the neural network (P_err, Ṗ_err) by applying a gridding. Figure 2 shows the ΔX × ΔY grid. The centres of the neurons are initialized to the intersection points of the grid lines; this sets the precision of the error. The number of rows and columns, the horizontal and vertical lengths of the cells (ΔX and ΔY, respectively), and the number of neurons M are related by the following expressions, where the number of horizontal lines is N_x (i.e., N_x − 1 rows), the number of vertical lines is N_y, and the grid has M = N_x · N_y intersection points. In order to ensure that a horizontal line and a vertical line intersect the point (0, 0), N_x and N_y must be odd and greater than 1.
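The gridding of the saturated input space can be sketched as follows. The placement of one centre per grid-line intersection and the M = nx·ny count follow the description above; tying ΔX and ΔY to the saturation limits divided by the number of cells is an assumption of this sketch.

```python
def rbf_centres(nx, ny, x_min, x_max, y_min, y_max):
    """Place the M = nx*ny neuron centres at the grid-line intersections of
    an nx-by-ny gridding of the (P_err, dP_err) input space. nx and ny must
    be odd and > 1 so that one line of each direction crosses (0, 0)."""
    assert nx % 2 == 1 and ny % 2 == 1 and nx > 1 and ny > 1
    dx = (x_max - x_min) / (nx - 1)     # horizontal cell length (Delta X)
    dy = (y_max - y_min) / (ny - 1)     # vertical cell length (Delta Y)
    centres = [(x_min + i * dx, y_min + j * dy)
               for i in range(nx) for j in range(ny)]
    return centres, dx, dy
```

For example, nx = ny = 5 over the limits [−1000, 1000] × [−400, 400] yields the 25-neuron configuration used in the first experiment of Section 4, with one centre at the origin.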
Once ΔX and ΔY are determined, the centre of the i-th neuron, (c_i1, c_i2), is obtained by the corresponding grid equation. The output of the RBF neural network (20) is then given by the following expressions (where the time variable has been omitted for the sake of clarity): a weighted sum of Gaussian activations, where dist is a normalized distance measure, M is the number of neurons in the hidden layer, W_i is the weight of the i-th neuron, and σ_i is the width of the i-th neuron activation function, which is normally the same for all neurons. The width of the neuron is also related to the error accuracy. The normalized distance (29) is calculated as the 2-D Euclidean distance once each 1-D distance has been normalized to a common range.
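The forward pass described above can be sketched as follows. Normalizing each 1-D distance by its full input range and using exp(−dist²/σ²) as the activation are assumptions of this sketch, since the exact normalization range and width convention are not spelled out in the text.

```python
import math

def rbf_output(x, y, centres, weights, sigma, x_range, y_range):
    """Weighted sum of Gaussian activations over all M hidden neurons.
    Each 1-D distance is divided by its input range before taking the
    2-D Euclidean distance (assumed normalization); sigma is shared by
    all neurons, as in the paper."""
    out = 0.0
    for (c1, c2), w in zip(centres, weights):
        d = math.hypot((x - c1) / x_range, (y - c2) / y_range)
        out += w * math.exp(-d ** 2 / sigma ** 2)
    return out
```

At a neuron centre the exponential equals 1, so that neuron's weight dominates the output when σ is small, which is exactly the property the learning procedure of the next subsection relies on.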

Unsupervised Learning Algorithm.
The parameters to be updated by a learning algorithm in an RBF neural network are the centres of the RBF neurons, the σ_i parameters, and the output weights. As explained before, the centres of the neurons are equally distributed over the whole input space. In addition, in this work, as is common, it is assumed that the entire input space is equally important when obtaining the output of the network; therefore, the σ_i parameters are set in advance to the same value for all the neurons. Thus, the learning algorithm only has to update the weights.
As said before, many control schemes with RBF neural networks use supervised learning to update the weights, but this is not the case here. There are no labelled output data to train the network, so the neural network must learn a previously unknown control law in an unsupervised way. This learning procedure is as follows.
The input space has been pseudodiscretized, placing the RBF neurons at the centres of the grid. Given a network input pair (P_err, Ṗ_err), the neuron closest to this pair makes the biggest contribution to the output value of the mapping. Although it is not the only neuron that influences the output, the contribution decreases with the distance and increases with the width of the activation function.
If the centres of the neurons are separated enough and the width of the activation function is correctly selected, the contribution of the surrounding neurons may be neglected, and all points in the input space are discretized to the centre of their closest neuron. Therefore, by updating the weight W_i of the i-th neuron, it is possible to adjust the output value for the input pair (c_i1, c_i2), due to the fact that at these points the value of the exponential function is 1. Thus, the closer the input pair is to the centre of some neuron, the better the approximation of the RBF function f_c. The learning algorithm is in charge of updating the weights of the network based on the output power errors, adjusting the mapping of the f_c function. In the output layer of the neural network, all the partial contributions of the neurons are linearly combined to obtain the output value (28).
In order to illustrate this unsupervised learning procedure, Figure 3(a) shows an example of the initial surface of the weights of the neural network, with all weights set to 1. Figure 3(b) shows the corresponding pitch control law at the output of the π/4 bias before learning, and Figure 4(a) presents the control surface after applying the learning strategy, with the final values of the weights. The pitch control law, before and after learning, is shown in Figures 3(b) and 4(b), respectively. It is possible to see, as expected, that positive errors increase the weights, bending the surface upwards and thus incrementing the output value of the neural network. This means reducing the pitch angle reference, θ_ref, and enlarging the output power.
In this work, we take as the starting point the typical supervised learning strategy of an RBF network to reduce the error at each iteration, given by equation (27), where T is the expected output value and RBF_o is the current output value. As T is not available, in order to drive the power error to zero, the term (T − RBF_o) is replaced by P_err_S. Equation (28) details how the function f_learn of equation (21) is then calculated, that is, how the weights of the RBF neural network are modified,
where μ is the learning rate and [K_pL, K_dL, K_iL] are positive constants. As may be observed, the exponential term is the same as in equations (25) and (26). The following pseudocode details the unsupervised algorithm which updates the weights of the RBF network (Algorithm 1). Here, M is the number of neurons, W is an array with the weights, Nx is the number of neurons along the x-axis of the gridded input space, and Ny is the number of neurons along the y-axis. A learning threshold, minErr, is defined so that errors below that value are discarded. The centres of the neurons are represented in the array cNet. The tuning parameters of the learning rate are mu, KP, KD, and KI; the control sampling time is Tc; and F is an array with the outputs of the Gaussian activation functions of the neurons. At the beginning of the procedure, all variables are initialized, and the centres of the RBF are calculated. Then, the simulation is run every Ts seconds. The controller is updated every Tc seconds; therefore, Tc must be larger than Ts. At each control sample time, Tc, the output of the RBF, RBFout, and the WT pitch reference, pitchCon, are obtained. If the error is above the threshold, a combination of the error, its derivative, and its integral is calculated (variable errM). Then, the array with the increments of the weights, Winc, is obtained from the previous F array, Fold, and the current error measurement, errM.
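The weight update of Algorithm 1 can be sketched as follows. This is a simplified reading of the algorithm: the supervised target term is replaced by a PID-like combination of the saturated error signals, and each neuron's increment is scaled by its own Gaussian activation (the F array); the default gain values shown are placeholders, not the paper's tuned values.

```python
def update_weights(weights, acts, p_err_s, dp_err_s, int_p_err,
                   mu=0.0001, kp=1.0, kd=0.1, ki=0.0, min_err=15.0):
    """Unsupervised RBF weight update: errM = kp*err + kd*d_err + ki*int_err
    replaces the unavailable supervised term (T - RBF_o); acts holds the
    Gaussian activation of each neuron for the current input pair."""
    if abs(p_err_s) < min_err:           # learning threshold: small errors ignored
        return weights
    err_m = kp * p_err_s + kd * dp_err_s + ki * int_p_err
    return [w + mu * err_m * f for w, f in zip(weights, acts)]
```

A positive power error thus raises the weights of the neurons near the current operating point, bending the control surface upwards exactly as described for Figures 3 and 4.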

Performance Analysis of the Neurocontrol Strategy
A performance analysis of this unsupervised neurocontrol strategy has been carried out under different network configurations and varying some parameters of the learning algorithm and of the pitch control law. The software Matlab/Simulink has been used. The duration of each simulation is 100 s. In order to reduce the discretization error, a variable step size has been used for the simulation experiments, with the maximum step size set to 10 ms. The control sample time T_c has been fixed to 100 ms. The neurocontroller performance is compared with a PID regulator. In order to make a fair comparison, the PID output has been scaled to adjust its range to [0, π/2], and it has also been biased by π/4. The wind turbine nominal power is 7 kW, and thus the reference is P_ref = 7000 W. The tuning parameters [K_P, K_D, K_I] have been determined by trial and error, and their values are [1, 0.2, 0.9], respectively. The parameter minErr of the learning algorithm is set to 15. The performance of the controllers has been evaluated with the MSE, the mean value, and the variance, each sample weighted by its sampling time T_s_i (necessary because of the variable step size) and divided by the simulation time T_sim. Figure 5(a) shows the output power when the different strategies are applied. The blue line represents the output when the pitch is permanently set to zero, the red one the output when the pitch angle is set to the feather position (90°), the yellow line the response with the PID, the purple one the response with the neural controller, and finally the green line the rated power. Figure 5(b) shows a zoomed view of the previous figure to better appreciate the variations of the signals.
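The time-weighted statistics described above can be computed as follows; this is a minimal sketch assuming each sample is weighted by its own step length T_s_i and the sums are divided by the total simulation time.

```python
def weighted_metrics(p_out, p_ref, dt):
    """Time-weighted MSE, mean, and variance of the output power for a
    variable-step simulation: p_out[i] is the power sample, dt[i] the
    corresponding step length T_s_i."""
    t_sim = sum(dt)
    mse = sum(d * (p - p_ref) ** 2 for p, d in zip(p_out, dt)) / t_sim
    mean = sum(d * p for p, d in zip(p_out, dt)) / t_sim
    var = sum(d * (p - mean) ** 2 for p, d in zip(p_out, dt)) / t_sim
    return mse, mean, var
```

With equal step lengths this reduces to the ordinary sample MSE, mean, and variance.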
In this experiment, the wind is randomly generated with a speed between 11.5 and 14 m/s, the RBF has 25 neurons in the hidden layer, σ is set to 0.1, the saturation limits [P_err_MIN, P_err_MAX, Ṗ_err_MIN, Ṗ_err_MAX] are set to [−1000, 1000, −400, 400], and the learning rate μ is 0.0001 × 1.5.
As shown in Figure 5, when the pitch is set to zero, the output power is always greater than the rated power because the blades harness the maximum power of the wind. As expected, when the pitch is fixed to feather, the opposite happens, as the surface facing the wind is minimal. Another interesting outcome is that the proposed neurocontroller is not only able to stabilize the output power around the nominal value, but its performance is also better than that of the PID, particularly up to 50 s, and it is less oscillatory.
As the output power depends on the wind, different simulations were carried out varying the wind speed. The configuration of the neural network and the learning algorithm is the same as in the previous experiment. Figure 6 shows the influence of the wind speed on the mean squared power error (MSE). The red bar is the MSE with the neural controller, and the blue one is the MSE with the PID. As expected, the higher the wind, the larger the error. For all the wind speed ranges, the neurocontrol strategy proved better than the PID. Table 2 summarizes the detailed results of the simulation experiments with different wind speeds between 12.2 and 12.8 m/s. At a wind speed below 12.2 m/s, the stabilized output power is always lower than 7 kW, even with the pitch angle set to 0. With a wind speed over 12.8 m/s, the steady output power is always higher than 7 kW, even when the pitch is set to 90°. In all cases, the error is smaller with the neural controller than with the PID, but for 12.5, 12.7, and 12.8 m/s, the mean obtained with the PID is slightly smaller.
A sinusoidal wind signal has also been tested. The average wind speed is 12.5 m/s, with an amplitude of 0.6 m/s and a period of 50 s. The result of the experiment is shown in Figure 7. The output power is represented with the same colour code as in Figure 5. To show how the RBF learns, Figure 7(a) shows the response for iterations 1 to 175. Figure 7(b) represents the output power for the different control strategies described before, once the system has already learned. The learning capability of the neurocontroller is shown in Figure 8. The MSE quickly converges in a few iterations. It is also possible to observe an inflection point around iteration 30; from this point on, the learning speed decreases, and indeed the MSE hardly varies. The frequency of the sinusoidal wind speed signal also influences the results. Figure 9 shows the results for different periods (blue, PID; red, neurocontrol). The minimum MSE is reached for the minimum period; from this value, the MSE grows. At a period of 20 s, a local maximum for the neural controller appears, and the same happens at 35 s for the PID. From then on, the error decreases for both controllers. In all cases, the error is much smaller with the neurocontroller than with the PID. Table 3 summarizes the results obtained in this experiment. In all cases, the MSE is much smaller with the neural controller. Moreover, it is possible to observe how the response with the neural controller slightly improves when the period is larger than 20 s: the MSE and the variance decrease, and the mean value remains almost unchanged. Meanwhile, for the PID, there are several local minima and maxima in the MSE and the variance. Nevertheless, the influence of the wind period is not so relevant.
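The sinusoidal wind profile of this experiment can be generated as follows; the zero initial phase is an assumption, as only the mean, amplitude, and period are stated.

```python
import math

def sinusoidal_wind(t, mean=12.5, amp=0.6, period=50.0):
    """Sinusoidal wind profile of the Figure 7 experiment: 12.5 m/s mean,
    0.6 m/s amplitude, 50 s period (phase assumed zero)."""
    return mean + amp * math.sin(2 * math.pi * t / period)
```

Varying the `period` argument reproduces the frequency sweep of Figure 9 and Table 3.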

Influence of the RBF.
The influence of the configuration of the RBF neural network on the performance of the controller has also been evaluated. Different numbers of neurons, values of the σ parameter, and several saturation limits have been tested. The wind turbine is subjected to a random wind with a speed between 11.5 and 14 m/s; the learning rate μ is 0.0001 × 1.5; σ is set to 0.1 in this experiment; the lower and upper limits [P_err_MIN, P_err_MAX, Ṗ_err_MIN, Ṗ_err_MAX] are set to [−1000, 1000, −400, 400]; and the number of neurons varies. Figure 10 shows the influence of the number of neurons, M, on the evolution of the MSE. The colour is associated with the number of neurons (see the legends). All the curves in this figure have a similar shape, and the main difference is the slope before the inflection point. It is possible to see that the more neurons, the steeper the slope. In general, the error decreases with the number of neurons until the number is so large that the network does not learn. For example, the MSE with 441 neurons is bigger than that with 121.
To evaluate the influence of the width σ of the activation function, the configuration of the RBF network is set to the previous values, with the number of neurons M = 9, 25, and 121 (Figure 11, from left to right). The σ parameter varies from 0.05 to 0.75. In all cases, the MSE tends to decrease as σ increases. There is a sharp drop in the MSE during the first iterations for values of σ greater than 0.25. The descent rate also grows with the number of neurons. Figure 12 shows another perspective on the influence of σ on the error. It represents the MSE at iteration 200 for different σ values and different numbers of neurons. In this figure, it is also possible to see how the MSE decreases with σ until this parameter is around 0.25, where it starts to grow.
This inflection point does not depend on the number of neurons, but the fall before that minimum does (the bigger the number of neurons, the larger the descent rate). The performance of the neurocontroller can also be adjusted by modifying the saturation limits of the input space. Different sets of values of [P_err_MIN, P_err_MAX, Ṗ_err_MIN, Ṗ_err_MAX] have been tested, one varying P_err_MAX and another changing Ṗ_err_MAX. In both experiments, the number of neurons M is set to 121, σ is 0.25, and 5 iterations are run. When P_err_MAX is changed, the value of Ṗ_err_MAX is kept constant at 400, and when Ṗ_err_MAX varies, the limit P_err_MAX is fixed at 1000. The corresponding negative boundaries have the same absolute value. Table 4 shows the variation of the MSE, the output power mean, and its variance when P_err_MAX is modified from 100 to 1500. The MSE and the mean value decrease with P_err_MAX; however, the variance grows. This may be due to the fact that bigger values of P_err_MAX mean bigger variations in the output power and thus a larger variance. The influence on the MSE is explained by the fact that wider boundaries produce fewer saturated values and thus more information available for the learning process. But if the saturation is not reached, too high a value of P_err_MAX may be counterproductive, because the spatial distribution of the neurons leaves more neurons unused. Table 5 summarizes the variation of the MSE, the output power mean, and its variance when Ṗ_err_MAX changes from 50 to 7050. Similar to Table 4, the MSE and the mean value are reduced when Ṗ_err_MAX increases until a local minimum is reached. This may also be explained by the reduction of the saturated values. However, in this case, the variance also decreases with Ṗ_err_MAX. Figure 13 shows the results for different learning rates μ. Again, the MSE decreases at each iteration. As expected, the descent rate grows with the learning rate.
These results may also be seen in Table 6 (at iteration 5). The output power mean value also decreases with the learning rate. However, the variance grows: larger values of μ produce bigger increments in the weights of the neural network (28) and thus bigger variations in the pitch reference, and hence greater changes in the output power.

Influence of the Learning Parameters.
In the next experiment, K_dL and K_iL are set to 0 and K_pL is varied. The effect of varying K_pL is the same as modifying μ, since both are constants that multiply P_err_S (although the results differ because K_dL = 0.1 in the previous experiment). The results are shown in Table 7. The MSE and the output power mean value decrease with K_pL, and the variance grows. Now, K_pL and K_iL are set to 0 and K_dL is varied. The results are shown in Table 8. Initially, the MSE and the output power mean decrease, but from K_dL = 5 on, these values grow continuously. An increment of K_dL makes the system learn faster. It also reacts faster to changes, so that the MSE can be reduced. However, a very high value amplifies the first ramp that moves the pitch reference to 0. After this point, the system takes a long time to learn, as it would need a big downward ramp to recover the initial weight values of the neural network.
This also explains why the variance decreases with K_dL: after an initial ramp, the values are almost stable, producing only small variations in the weights and thus a small variance.
Finally, K_pL and K_dL are set to 0 and K_iL is varied to test its influence. The results are shown in Table 9. Initially, the MSE and the output power mean decrease with K_iL until K_iL is equal to 0.1; from this value on, they grow continuously. K_iL helps the controller learn how to reduce the steady-state error, so the MSE decreases when K_iL increases. However, if this parameter is too high, the controller becomes sluggish and the MSE grows. The variance also increases with K_iL, since it makes the controller slower, so higher output values are reached; keeping these high outputs longer generates a greater variance. Table 10 shows the results at iteration 5 when the control period varies from 10 to 100 ms. If the control sample time is too small, the neural controller reacts to the noisy component of the wind, and this increases the MSE and the variance. On the other hand, a very large control period makes the system too slow and also increases the MSE. Therefore, an intermediate value is the best option. In any case, the performance of the neural controller is much better than the PID response for all the control periods tested.

Conclusions and Future Works
In this work, an intelligent wind turbine pitch control strategy is presented, and the influence of the parameters of the neurocontrol system is analysed. The pitch controller is based on an RBF neural network that learns in an unsupervised way. The control goal is to maintain the output power around its rated value by obtaining the appropriate pitch angle reference. The output power errors are introduced both in the neurocontroller and in the learning algorithm. Extensive simulation tests have been carried out on a 7 kW wind turbine, varying different network configuration parameters as well as the wind speed. The performance of the neurocontroller is compared with a tuned PID, obtaining better results in all cases. These experiments have led to some interesting conclusions. Among them, we can highlight the small influence of the wind frequency. In contrast, the learning speed grows significantly with the number of neurons. There exists an optimum sigma value, between 0.2 and 0.4, for each number of neurons. Another interesting result is how the gains K_pL, K_dL, and K_iL accelerate the learning and how, in general, low values of these tuning parameters improve the stability. The control sample time has a clear effect on the system response, making it slower or faster.
In the future, it would be desirable to test the proposal on a real prototype of a wind turbine. In addition, it would be interesting to apply this control strategy to a bigger turbine and to study whether this control action affects the stability of a floating offshore wind turbine.
Data Availability

The findings of this study have been generated by the equations and parameters cited in the article.

Disclosure
An earlier version of this paper was presented at the 15th International Conference on Soft Computing Models in Industrial and Environmental Applications, 2020 [18].

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.