This paper presents a neural network for designing a PID controller for a suspension system. The suspension system, modeled as a quarter-car, simplifies the problem to a one-dimensional spring-damper system. A back-propagation neural network (BPN) is used to determine the gain parameters of a PID controller for an automotive suspension system. The BPN method is found to be the most accurate and the quickest. The best results were obtained by the BPN trained with the Levenberg-Marquardt algorithm using 10 neurons in a single hidden layer. Training was continued until the mean squared error fell below the specified error goal.
The basic function of a vehicle suspension is to isolate the passengers and the chassis from road roughness and provide a more comfortable ride; in other words, ride control is a very important role of the suspension system. Owing to developments in control technology, electronically controlled suspensions have gained increasing interest. These suspensions contain active components controlled by a microprocessor, and with this arrangement significant improvements in vehicle response can be achieved. Selection of the control method is also important during the design process. The design of vehicle suspension systems is an active research field in which one objective is to improve passenger comfort by reducing vibrations from the internal engine and external road disturbances [
As seen from previous studies, researchers have used NNs to control suspension systems, but in this study we use a BP neural network to estimate the parameters of a PID controller. Moreover, constraints such as overshoot, settling time, and road condition have not been examined by other authors when designing a PID controller for a suspension system with an NN. To the authors' best knowledge, no previous study covers all of these issues.
In this paper, a BP neural network is investigated to estimate the gain parameters of a PID controller for an automotive suspension system. The paper is organized in the following manner. Section
A quarter-car suspension system shown in Figure
A quarter-car model of the suspension system.
The system is also equipped with a hydraulic actuator placed between the sprung and unsprung masses to exert a force
The above equations are the dynamic equations linearized about the equilibrium point, under the assumption of constant vehicle speed.
Variables
Linearizing the dynamic behavior of the tire through its interaction with the road is justified as long as the tire remains in contact with the road.
The force applied to the tire can be considered a disturbance force in the system.
Therefore,
Equation (
Assuming that each amplitude is completely decoupled and controlled independently of the other amplitudes, the control input signal is given by
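To make the control loop concrete, the quarter-car reduced to a one-DOF spring-damper with a PID-controlled actuator force can be simulated numerically. The sketch below uses forward-Euler integration; the mass, stiffness, damping, and gain values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def simulate_pid_suspension(kp, ki, kd, m=300.0, k=15000.0, c=1000.0,
                            x0=0.05, dt=1e-3, steps=8000):
    """Forward-Euler simulation of a one-DOF spring-damper sprung mass
    (mass m, stiffness k, damping c -- illustrative values) with a
    PID-controlled actuator force regulating displacement to zero."""
    x, v = x0, 0.0                 # initial displacement (m), velocity (m/s)
    integral, prev_err = 0.0, -x0
    history = []
    for _ in range(steps):
        err = 0.0 - x                              # displacement error
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv  # PID actuator force
        a = (-k * x - c * v + u) / m               # Newton's second law
        v += a * dt
        x += v * dt
        prev_err = err
        history.append(x)
    return np.array(history)

# Gains of the same order of magnitude as those reported later in the paper
resp = simulate_pid_suspension(kp=15000.0, ki=16000.0, kd=28000.0)
```

With gains of this magnitude the closed loop is heavily damped, and the initial displacement decays toward zero over a few seconds.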
Artificial NNs are nonlinear mapping systems whose structure is loosely based on principles observed in biological nervous systems. In greatly simplified terms, as can be seen from Figure
(a) A biological nervous system and (b) an artificial neuron model.
An ANN shown in Figure
A layered feedforward artificial NN.
Each node output depends only on information that is locally available at the node, either stored internally or arriving via the weighted connections. Each unit receives inputs from many other nodes and transmits its output to other nodes. By itself, a single processing element is not very powerful; it generates a scalar output with a single numerical value, which is a simple nonlinear function of its inputs. The power of the system emerges from the combination of many units in an appropriate way.
A network is specialized to implement different functions by varying the connection topology and the values of the connecting weights. Complex functions can be implemented by connecting units together with appropriate weights. In fact, it has been shown that a sufficiently large network with an appropriate structure and properly chosen weights can approximate, with arbitrary accuracy, any function satisfying certain broad constraints. Usually, the processing units have responses like (see Figure
First, the dataset of the system, including input and output values, is established.
The dataset is normalized according to the algorithm.
Then, the algorithm is run.
Finally, the desired output values corresponding to the inputs are obtained in the test phase.
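These steps can be sketched end to end with NumPy. The dataset below is synthetic and the plain-gradient-descent training loop is a deliberate simplification standing in for the toolbox training algorithms discussed later; the 4-10-3 layout with a logsig hidden layer and linear output follows the network described in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: establish the dataset -- inputs [overshoot, settling time,
# steady-state error, velocity], outputs: three PID gains (synthetic here).
X = rng.uniform([1, 0.3, 0, 10], [10, 1.5, 2, 55], size=(40, 4))
Y = X @ rng.uniform(100, 500, size=(4, 3)) + 10000   # placeholder mapping

# Step 2: normalize inputs to [-1, 1] and scale outputs to unit range.
lo, hi = X.min(axis=0), X.max(axis=0)
Xn = 2 * (X - lo) / (hi - lo) - 1
scale = np.abs(Y).max(axis=0)
Ys = Y / scale

# Step 3: run back propagation on a 4-10-3 network (logsig hidden layer,
# linear output layer), using plain gradient descent for simplicity.
W1 = rng.normal(0, 0.5, (4, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (10, 3)); b2 = np.zeros(3)
logsig = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, losses = 0.1, []
for epoch in range(2000):
    H = logsig(Xn @ W1 + b1)              # hidden-layer activations
    out = H @ W2 + b2                     # linear output layer
    err = out - Ys
    losses.append((err ** 2).mean())      # mean squared error
    dOut = 2 * err / err.size             # backward pass
    dW2 = H.T @ dOut; db2 = dOut.sum(axis=0)
    dH = dOut @ W2.T * H * (1 - H)        # logsig derivative
    dW1 = Xn.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Step 4: obtain the output (gain estimates) for a test input.
x_test = np.array([[2.0, 0.5, 1.0, 30.0]])
xn = 2 * (x_test - lo) / (hi - lo) - 1
gains = (logsig(xn @ W1 + b1) @ W2 + b2) * scale
```

The loss falls steadily over the epochs, and the trained network maps a normalized test input to three gain estimates.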
Back propagation neural network (BPN), developed by Rumelhart et al. [
Hidden layer calculation results are
where
Output layer calculation results are
where
Activation functions used in layers are logsig, tansig, and linear as
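Following MATLAB's naming conventions for these transfer functions, the three activations can be written as a small sketch:

```python
import numpy as np

def logsig(n):
    """Log-sigmoid: squashes its input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

def tansig(n):
    """Hyperbolic tangent sigmoid: squashes its input into (-1, 1);
    numerically equivalent to tanh(n)."""
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def purelin(n):
    """Linear transfer function: the identity."""
    return n
```

`logsig` is used in the hidden layer and `purelin` in the output layer of the network described later in this paper.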
Errors made at the end of one cycle are
where
Weights can be changed using these calculated error values according to (
where
The squared error incurred in one cycle can be found by (
Upon completion of training of the BPN, the relative error (RE) for each data point and the mean relative error (MRE) over all data are calculated according to (
where
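Taking RE as the percentage deviation from the actual value (an assumption consistent with the "Percentage of error" columns in the comparison tables later in the paper), RE and MRE can be sketched as:

```python
import numpy as np

def relative_errors(actual, predicted):
    """RE per sample: |actual - predicted| / |actual| * 100, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.abs(actual - predicted) / np.abs(actual) * 100.0

def mean_relative_error(actual, predicted):
    """MRE: the average of the per-sample relative errors, in percent."""
    return relative_errors(actual, predicted).mean()

# Example using two actual/predicted gain pairs from the comparison tables
re = relative_errors([12614.31, 16743.32], [13017.95, 16949.26])
mre = mean_relative_error([12614.31, 16743.32], [13017.95, 16949.26])
```

The first pair yields an RE of roughly 3.2% and the second roughly 1.2%, giving an MRE near 2.2%.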
The vehicle speed varies between 10 and 55 m/s. The overshoot, settling time, and steady-state error of the system response are assumed to lie between 1% and 10%, 0.3 and 1.5 s, and 0% and 2%, respectively. These parameters are the input values of the network. Finally, the outputs of the network are the gain parameters of the PID controller.
The numbers of nodes in the input and output layers are determined by the numbers of predictor and predicted variables. In this research, there are 4 nodes in the input layer, corresponding to the number of input variables, and 3 nodes in the output layer, for similar reasons. There are no fixed rules for determining the exact number of hidden layers or the number of nodes in each hidden layer. A large number of hidden-layer nodes leads to overfitting at intermediate points, which can slow down the operation of the NN. On the other hand, an accurate output may not be achieved if too few hidden-layer nodes are included in the neural network. The results show that the best configuration of the network is achieved with one hidden layer. The numbers of nodes in the input, hidden, and output layers are chosen as 4–10–3, respectively. The activation function in the input and hidden layers is a sigmoid function, and a linear function is used in the output layer.
For proper operation of the neural network, the input and output data are preprocessed. The input values are normalized between −1 and 1, since the activation function in the input layer is a sigmoid function. Normalization is made by the following function:
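A standard way to scale values into [−1, 1] (assumed here as the form of the normalization) is min-max scaling, x_n = 2(x − x_min)/(x_max − x_min) − 1, sketched below with the paper's velocity range as the example:

```python
import numpy as np

def normalize(x, x_min, x_max):
    """Min-max scale x into [-1, 1] (assumed normalization form)."""
    x = np.asarray(x, dtype=float)
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# Velocities span 10 to 55 m/s in this study
speeds = normalize([10, 32.5, 55], x_min=10, x_max=55)
```

The minimum maps to −1, the maximum to 1, and the midpoint to 0.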
Once a network is structured for a particular application, it is ready to be trained. To start this process, the initial weights are chosen randomly. During training, the weights are iteratively adjusted to minimize the network performance function. The mean squared error, that is, the average squared difference between the network output and the target output, is used as the performance function. The MATLAB Neural Network Toolbox is used for training the network [
Specifications of the suspension system used for the simulation are given in Table
System specifications.
In this study, the back-propagation learning algorithm is used in a feed-forward network with a single hidden layer. A variable transfer function is used as the activation function for both the hidden layer and the output layer. Several back-propagation training algorithms were applied repeatedly until satisfactory training was achieved. The test data values used in the BPN are shown in Table
The test data values set used in the BPN.
Number  Overshoot (%)  Settling time (s)  Steady-state error (%)  Velocity (m/s)
1  1  0.35  1  10 
2  2  0.44  2  15 
3  0.5  0.52  1  20 
4  1.5  1  0.09  25 
5  4  1.2  0.085  30 
6  5.5  1.3  2.5  35 
7  6  1.4  0.05  40 
8  7  1.5  0.35  45 
9  2.5  0.65  0.15  50 
10  10  0.73  0.1  55 
11  3.5  0.84  0.01  52 
12  4.3  1.25  0.012  32 
13  1.52  0.55  0.001  23 
14  4.5  1.27  0  42 
The variable training methods.
Acronym  Description 

LM  Levenberg-Marquardt
BFG  BFGS quasi-Newton
RP  Resilient back propagation
SCG  Scaled conjugate gradient
CGB  Conjugate gradient with Powell/Beale restarts
CGF  Fletcher-Powell conjugate gradient
CGP  Polak-Ribiére conjugate gradient
OSS  One-step secant
GDX  Variable learning rate back propagation
The variable activation functions in the layers.
Number  Hidden-layer activation  Output-layer activation
1  logsig  logsig 
2  logsig  tansig 
3  logsig  purelin 
4  tansig  tansig 
5  tansig  logsig 
6  tansig  purelin 
The best activation-function combination for all methods used in this paper is logsig for the hidden layer and purelin for the output layer. The number of neurons in the hidden layer is varied from 7 to 29. The data set available for
The (
Number of neurons  LM  BFG  RP  SCG  CGB  CGF  CGP  OSS  GDX
7  0.9977  0.9888  0.9819  0.9957  0.9738  0.9918  0.9905  0.9913  0.9891 
8  0.9981  0.9676  0.992  0.9909  0.9942  0.9864  0.9846  0.9969  0.9916 
10  0.9999  0.9981  0.9942  0.9578  0.9986  0.9978  0.9681  0.9963  0.9945 
15  0.9985  0.9998  0.9967  0.9996  0.9996  0.9995  0.9595  0.9890  0.9856 
18  0.9967  0.9928  0.9994  0.9998  0.9998  0.9698  0.9898  0.9898  0.9881 
20  0.9998  0.9499  0.999  0.9699  0.9997  0.9698  0.9699  0.9797  0.9950 
25  0.9996  0.9699  0.9994  0.9699  0.9699  0.9599  0.9899  0.9479  0.9983 
26  0.9944  0.9916  0.9996  0.9797  0.9694  0.9792  0.9799  0.9688  0.9782 
27  0.9989  0.9599  0.9995  0.9899  0.9899  0.9899  0.9799  0.9896  0.9870 
28  0.9998  0.9939  0.9994  0.9799  0.9999  0.9599  0.9799  0.9889  0.9984 
29  0.9997  0.9419  0.9997  0.9799  0.9599  0.9699  0.9799  0.9889  0.9985 
In Tables
The results of the variable training methods in the BPN with
Acronym  Epochs  Error goal  Train time (s)  Test time (s)
LM  153  Met  28.724082  0.054526
BFG  1528  Met  77.482451  0.046583
RP  3000  Not met  67.677318  0.046466 
SCG  3000  Not met  101.328176  0.044047 
CGB  2932  Not met  118.102075  0.052230 
CGF  2125  Not met  88.894782  0.046125 
CGP  2302  Not met  94.665335  0.046852 
OSS  3000  Not met  112.071065  0.046677 
GDX  3000  Not met  61.243551  0.046046 
The results of the variable training methods in the BPN with
Acronym  Epochs  Error goal  Train time (s)  Test time (s)
LM  251  Met  38.726182  0.057241
BFG  1018  Met  97.481454  0.042678
RP  2439  Not met  49.16278  0.048146 
SCG  1980  Not met  33.19076  0.042047 
CGB  3232  Not met  128.15905  0.055130 
CGF  3025  Not met  108.15282  0.056985 
CGP  2100  Not met  67.150335  0.044152 
OSS  1000  Met  59.011265  0.058677
GDX  3230  Not met  67.120551  0.056046 
The results of the variable training methods in the BPN with
Acronym  Epochs  Error goal  Train time (s)  Test time (s)
LM  293  Met  40.145082  0.054789
BFG  528  Met  88.102451  0.056673
RP  1210  Met  100.677318  0.056906
SCG  4000  Not met  97.949761  0.065237 
CGB  3745  Not met  138.78905  0.062230 
CGF  3450  Not met  112.89642  0.053565 
CGP  3162  Not met  100.13335  0.056907 
OSS  3400  Not met  160.14895  0.066677 
GDX  2000  Not met  43.446511  0.056044 
The fastest algorithm for this problem is LM. On average, it is over two times faster than the next fastest algorithm. This is the type of problem for which the LM algorithm is best suited.
In Tables
Comparison between actual gain parameters of PID and the BPN model (with
Number  Actual  Predicted  Percentage of error  Actual  Predicted  Percentage of error  Actual  Predicted  Percentage of error
1  12614.31  13017.95  3.25  16743.32  16949.26  1.23  28345.20  30091.26  6.16 
2  14590.41  15830.59  8.50  15234.90  16011.87  5.10  27670.82  28365.35  2.51 
3  12090.59  12634.66  4.50  16045.61  16474.02  2.67  26674.96  27013.73  1.27 
4  14278.66  14649.90  2.60  16396.63  16955.75  3.41  27869.42  28752.88  3.17 
5  14797.23  15674.70  5.93  16853.16  17341.83  2.90  27001.56  27322.87  1.19 
6  14856.34  15683.83  5.57  16166.90  16186.30  0.12  25438.94  25487.28  0.17 
7  14879.61  15821.48  6.33  15879.61  15965.35  0.54  26730.38  27075.20  1.29 
8  14898.49  14916.36  0.12  15990.53  16204.80  1.34  28593.77  30134.97  5.39 
9  14478.19  15394.65  6.33  15069.52  15841.07  5.12  27433.71  28720.35  4.69 
10  13967.83  14526.54  4.00  16649.37  16919.08  1.62  28082.82  28695.02  2.18 
11  12889.63  13119.06  1.78  16298.48  16818.40  3.19  29782.51  29830.16  0.16 
12  12794.88  13164.65  2.89  15589.92  15903.27  2.01  25967.69  25970.28  0.01 
13  12797.34  12799.25  0.015  15590.29  16446.19  5.49  27420.55  28026.54  2.21 
14  13312.90  14244.80  7.00  15690.33  16189.28  3.18  28890.69  29269.15  1.31 
Comparison between actual gain parameters of PID and the BPN model (with
Number  Actual  Predicted  Percentage of error  Actual  Predicted  Percentage of error  Actual  Predicted  Percentage of error
1  12614.31  12811.09  1.56  16743.32  18136.36  8.32  28345.20  29921.19  5.56 
2  14590.41  14996.02  2.78  15234.90  15496.94  1.72  27670.82  29635.44  7.10 
3  12090.59  12562.12  3.90  16045.61  17080.55  6.45  26674.96  27019.06  1.29 
4  14278.66  14305.78  0.19  16396.63  16832.78  2.66  27869.42  29062.23  4.28 
5  14797.23  15872.98  7.27  16853.16  17139.66  1.70  27001.56  29674.71  9.90 
6  14856.34  15869.54  6.82  16166.90  16186.30  0.12  25438.94  26995.80  6.12 
7  14879.61  15498.60  4.16  15879.61  15881.19  0.01  26730.38  28144.41  5.29 
8  14898.49  15090.68  1.29  15990.53  17563.99  9.84  28593.77  28633.80  0.14 
9  14478.19  15029.80  3.81  15069.52  16107.80  6.89  27433.71  29831.41  8.74 
10  13967.83  14997.25  7.37  16649.37  17528.45  5.28  28082.82  31056.79  10.59 
11  12889.63  13584.38  5.39  16298.48  17493.15  7.33  29782.51  32376.56  8.71 
12  12794.88  13714.83  7.19  15589.92  16480.10  5.71  25967.69  26996.01  3.96 
13  12797.34  14129.54  10.41  15590.29  16605.21  6.51  27420.55  28363.81  3.44 
14  13312.90  14621.55  9.83  15690.33  16237.92  3.49  28890.69  30566.35  5.80 
Comparison between actual gain parameters of PID and the BPN model (with
Number  Actual  Predicted  Percentage of error  Actual  Predicted  Percentage of error  Actual  Predicted  Percentage of error
1  12614.31  12775.77  1.28  16743.32  16818.66  0.45  28345.20  28827.06  1.70 
2  14590.41  16301.86  11.73  15234.90  15516.74  1.85  27670.82  29992.40  8.39 
3  12090.59  13617.63  12.63  16045.61  16457.98  2.57  26674.96  29507.84  10.62 
4  14278.66  14685.60  2.85  16396.63  17701.80  7.96  27869.42  31205.38  11.97 
5  14797.23  15801.96  6.79  16853.16  17773.34  5.46  27001.56  31483.81  16.60 
6  14856.34  16102.78  8.39  16166.90  18824.73  16.44  25438.94  27245.10  7.10 
7  14879.61  15251.60  2.50  15879.61  17230.96  8.51  26730.38  27521.59  2.96 
8  14898.49  15485.49  3.94  15990.53  17490.44  9.38  28593.77  29820.44  4.29 
9  14478.19  15242.63  5.28  15069.52  16379.06  8.69  27433.71  30064.60  9.59 
10  13967.83  15138.33  8.38  16649.37  16922.41  1.64  28082.82  30127.24  7.28 
11  12889.63  13516.06  4.86  16298.48  18495.51  13.48  29782.51  30300.72  1.74 
12  12794.88  13740.42  7.39  15589.92  16004.61  2.66  25967.69  26557.15  2.27 
13  12797.34  14010.52  9.48  15590.29  16329.26  4.74  27420.55  29200.14  6.49 
14  13312.90  14645.52  10.01  15690.33  16459.15  4.90  28890.69  29147.81  0.89 
The controlled and uncontrolled responses of the sprung mass of the vehicle are compared in terms of displacement and acceleration, as shown in Figures
The body displacement.
The body acceleration.
The present study shows that the BPN is a suitable method for the analysis of a PID controller for a suspension system. The BPN was successfully applied to determine the gain parameters of a PID controller for a suspension system. Data for developing the ANN model were obtained from code written in MATLAB. Results from the ANN model are compared with the results from the classical model. The best regression value for the simulation is 0.9999, obtained with the newelm function. The MRE value of the BPN model is 4.2%. The results show that the newelm function is more accurate than the newff and newcf functions. The Levenberg-Marquardt training is also faster than the other training methods. The BPN method additionally offers computational speed, low cost, and ease of use by people with little technical experience.