Feedforward Chaotic Neural Network Model for Rotor Rub-Impact Fault Recognition Using Acoustic Emission Method

The rubbing faults caused by dynamic and static components in large rotatory machines are dangerous in the manufacturing process. This paper applies a feedforward chaotic neural network (FCNN) to recognize the acoustic emission (AE) source in rotor rubbing and diagnose the rotor operational condition. The method adds dynamic chaotic neurons based on logistic mapping into the multilayer perceptron (MLP) model, using a delayed and feedback structure to keep the network from falling into a local minimum and to maximize recognition performance. The AE data were sampled from the rotor rubbing process on a rotatory-machine test rig and classified by fault degree. The experimental results indicate that the recognition rate is superior to that of traditional BP network models, making the method effective for recognizing rubbing faults during normal machine operation.


Introduction
Rotor condition monitoring has received considerable attention, as the majority of rotating machinery problems are caused by the surfaces of dynamic and static components in relative motion [1][2][3][4][5]. Therefore, there is a need in industry for incipient rotor fatigue detection.
So far, most research has focused on the different rubbing conditions caused by various damages of the rotor system, using vibration and acoustic emission (AE) methods [6][7][8][9]. The vibration method monitors the rubbing fault status based on changes in stiffness, damping, mode shape, and other parameters of the rotor system. Since the vibration response of rotor-stator rubbing is strongly nonlinear and highly dependent on the rubbing conditions, it is not sensitive to incipient faults, and the faults are usually masked by background noise from the mechanical vibration signals of the rotating machinery [10][11][12]. Hence, the vibration method is not effective for rotor-stator rubbing fault diagnosis.
AE, which serves as a significant condition monitoring technology offering earlier fault detection than other more established techniques, is the phenomenon of transient elastic wave generation due to a rapid release of strain energy caused by the relative motion of small particles under mechanical stress. At present, some scholars have investigated rubbing fault features through AE signal waveform analysis, usually described by characteristic parameters such as hit accumulation, amplitude distribution [13], frequency distribution, and power spectral density (PSD) [14,15]. Deng et al. [16] researched a waveform fractal dimension algorithm and further used the support vector machine (SVM) to recognize the rubbing fault in rotatory machines. This technology has been demonstrated and has promising prospects for application in the rubbing fault diagnosis field [17][18][19].
The hysteretic characteristic can enhance the memory capacity and the steadiness of the original states of a neural network, and the chaotic characteristic can reflect certain perceptual phenomena or cognitive processes of humans. Therefore, many neural networks with nonlinear characteristics such as chaos and hysteresis have been proposed to improve the performance of the conventional neural network. However, although the HHNN applied the hysteretic neuron to the Hopfield neural network, its gradient descent mechanism lets the network be easily dragged into local minima when the initial condition is not ideal.
CNN originated from research on the dynamic characteristics of nonlinear systems with artificial neural networks (ANNs) [26][27][28][29]. For nonlinear systems, the stability of the neural network becomes an important characteristic of the whole system, and it is usually necessary to use a statistical neural network model instead of the identified model for dynamic reconfiguration. Chaos theory is the mathematical framework for analysing unordered, unstable, and unbalanced phenomena. Unlike the back propagation (BP) network, a CNN searches the phase space of chaotic attractors and iterates through all the states by its rules without repetition, so as to avoid falling into a local minimum effectively [30]. Because the chaotic neural network has complex dynamic characteristics, the study of chaos control is the basis for applying chaotic neural networks to practical engineering problems [2][3][4].
In order to achieve high performance in the rubbing fault recognition algorithm using AE technology, this paper mainly focuses on the design of an improved novel FCNN algorithm, which adds chaotic neurons based on logistic mapping into the MLP model. FCNN is a kind of dynamic network with a delayed and feedback structure, obtained by adding a self-feedback gain α to the conventional neuron so as to obtain an associative effect [31]. In this way, the hysteretic characteristic and the chaotic characteristic are brought into the neuron simultaneously. Besides, the context layer with rich chaotic dynamics keeps the network parameters away from local minima.
Generally, the function minima problem can be resolved by the feedforward chaotic neural network. In this paper, the uncertain neuron and neural networks are innovatively used to resolve the function optimization problem through logistic mapping control. The rest of the paper is organized as follows. In Section 2, the logistic mapping is introduced and the relationships between the main parameters are given by numerical simulation. Section 3 presents the chaos control and learning algorithm for the FCNN. Section 4 shows the recognition experiments and results for the AE source in rotor rubbing and verifies the application performance of the proposed FCNN, and Section 5 gives the final conclusion.

Logistic Mapping
In CNNs, period-doubling bifurcation is the common route to chaotic states, and logistic mapping is the typical structure exhibiting multiperiod bifurcation. The logistic mapping, also known as the insect population model, is a polynomial mapping (equivalently, a recurrence relation) of degree 2, often cited as an archetypal example of how complex chaotic behaviour can arise from a very simple nonlinear dynamical equation [32][33][34], defined as x_{n+1} = μx_n(1 − x_n), where x_n ∈ (0, 1), serving as each year's insect population, represents the ratio of the existing population to the maximum possible population. The values of interest for the parameter μ are those in the interval (0, 4].
x_n shows different trends for different values of μ: when 0 < μ ≤ 1, the mapping tends to reach a stationary state independent of the initial population, and x = 0 is the only fixed point, with no other periodic points. When 1 < μ < 3, there are only two fixed points, x = 0 and x = 1 − 1/μ, and the population quickly approaches the second fixed point.
The system fluctuates at the beginning and returns to the stable state, as shown in Figure 1(a). When 3 ≤ μ ≤ 4, the system proceeds through multiperiod states toward chaos. Concretely, when 3 < μ < 1 + √6, from almost all initial conditions the population approaches permanent oscillation between two values, and these two values depend on μ; the system thus begins to go to chaos with period T = 2, as shown in Figure 1(b). When 1 + √6 ≤ μ < 3.544, from almost all initial conditions the population approaches permanent oscillation among four values, as shown in Figure 1(c).
As μ increases further, the orbit from almost all initial conditions bifurcates to 8 values, then 16, 32, etc. When μ = μ∞ ≈ 3.56994, the onset of chaos occurs at the end of the period-doubling cascade with 2^∞ period states, and the logistic mapping is in chaos, as shown in Figure 1(d). When μ > μ∞, there are more bifurcations from almost all initial conditions; the system exhibits chaotic behaviour, and slight variations in the initial population yield dramatically different results over time.
The system behaves still more complexly, with periodic "blank band" windows, as shown in Figure 1(e).
Above all, as μ rises, the system constantly undergoes new bifurcations, each period becoming unstable and producing the next: period 1 splits into period 2, period 2 splits into period 4, ..., and period 2^(n−1) splits into period 2^n. This is the process of multiperiod bifurcation [35]. Every bifurcation destabilizes the system and produces two new stable periods, continuing until the chaotic state, in which no stable multiperiods remain. The countless unstable periods together indicate the evident chaotic characteristics.
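The period-doubling route described above can be checked numerically. The following sketch (the helper name and parameter choices are illustrative, not from the paper) iterates the logistic mapping past its transient and counts the distinct values the orbit settles onto:

```python
# Iterate the logistic map x_{n+1} = mu * x_n * (1 - x_n) and report the
# long-run attractor for a given mu, illustrating the period-doubling route.
def logistic_attractor(mu, x0=0.3, transient=1000, keep=64, tol=1e-6):
    """Return the (approximate) set of values the orbit settles onto."""
    x = x0
    for _ in range(transient):              # discard the transient
        x = mu * x * (1.0 - x)
    orbit = []
    for _ in range(keep):                   # record the settled orbit
        x = mu * x * (1.0 - x)
        orbit.append(round(x / tol) * tol)  # quantize to merge near-equal values
    return sorted(set(orbit))

# 0 < mu <= 1: orbit dies out to the fixed point x = 0
# 1 < mu < 3: single fixed point x = 1 - 1/mu
# 3 < mu < 1 + sqrt(6): period-2 oscillation
# near mu = 3.56994 and beyond: chaos (many distinct values)
print(len(logistic_attractor(2.5)))  # 1 value: the fixed point 0.6
print(len(logistic_attractor(3.2)))  # 2 values: a period-2 cycle
print(len(logistic_attractor(3.5)))  # 4 values: a period-4 cycle
print(len(logistic_attractor(3.9)))  # many values: a chaotic band
```

At μ = 2.5 the orbit collapses to the single fixed point 1 − 1/μ = 0.6; at μ = 3.2 and μ = 3.5 it settles onto 2- and 4-cycles; at μ = 3.9 the quantized orbit retains many distinct values, consistent with chaos.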

FCNN Algorithm
The FCNN is based on the MLP model with chaotic neurons built on the logistic mapping, as shown in Figure 2. The network includes front and rear parts. The front hidden layer is composed of neurons F and neurons B in pairs. Neurons F receive the weighted sum of outputs from the previous layer, and neurons B receive chaotic outputs from themselves. Neurons F are connected to all neurons in the previous layer, while neurons B are independent of the previous part [36]; the latter are called the parameter-modulated chaos controller.
The rear hidden layer is composed of neurons H, which receive the outputs of the corresponding neurons F and neurons B; these are integrated and calculated together using a weighting function, which serves as the final hidden output.
The chaotic characteristics of the network in this model are mainly embodied by the neurons B in the front layer, known as the self-feedback logistic mapping unit. The chaotic characteristic output h_i^1(k), the previous-layer output h_i^2(k), the hidden layer output h_i(k), and the model output y(k) can be used to explain the dynamics of the FCNN model in vector form, where x(k) and y(k) are the input and output of the structure, h_i(k) is the rear hidden layer output, h_i^1(k − 1) is the chaos controller output, and h_i^2(k − 1) is the front hidden layer output. f(·) is the activation function of sigmoid type. The weight matrices of the input layer, the neurons H layer, and the neuron thresholds are denoted ω^I, ω^O, and ω^T, respectively. ω^D is the chaotic coefficient given by the logistic mapping to control the chaotic characteristics of the model.
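A minimal forward-pass sketch consistent with this description follows. The class and variable names are illustrative, and since the text does not state exactly how a neuron H combines its paired F and B inputs, the element-wise product used below is an assumption of this sketch; the initial values match the experimental settings given later (ω^D = 3.7, input weights uniform in [−1, 1], thresholds 0, output weights ±1).

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class FCNNForward:
    """Sketch of one FCNN forward step (H-layer combination is an assumption)."""

    def __init__(self, n_in, n_hidden, n_out, w_d=3.7, seed=0):
        rng = random.Random(seed)
        # neuron F input weights: uniform in [-1, 1]
        self.w_i = [[rng.uniform(-1.0, 1.0) for _ in range(n_in)]
                    for _ in range(n_hidden)]
        self.w_t = [0.0] * n_hidden                  # neuron thresholds, init 0
        # output weights: +1 or -1 with equal probability
        self.w_o = [[rng.choice([-1.0, 1.0]) for _ in range(n_hidden)]
                    for _ in range(n_out)]
        self.w_d = w_d                               # chaotic (logistic) coefficient
        self.h1 = [0.3] * n_hidden                   # neuron B states in (0, 1)

    def step(self, x):
        # neurons B: self-feedback logistic map, h1 <- w_d * h1 * (1 - h1)
        self.h1 = [self.w_d * h * (1.0 - h) for h in self.h1]
        # neurons F: sigmoid of the weighted input sum minus the threshold
        h2 = [sigmoid(sum(w * xi for w, xi in zip(row, x)) - t)
              for row, t in zip(self.w_i, self.w_t)]
        # neurons H: combine paired B and F outputs (product is an assumption)
        h = [a * b for a, b in zip(self.h1, h2)]
        # output layer
        return [sigmoid(sum(w * hi for w, hi in zip(row, h)))
                for row in self.w_o]

net = FCNNForward(n_in=14, n_hidden=8, n_out=3)  # sizes used in the experiments
print(net.step([0.1] * 14))                      # three outputs in (0, 1)
```

Because the neuron B state is updated on every step, the same input produces a different output on consecutive calls, which is the delayed self-feedback behaviour the model relies on.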
When training the FCNN model, it is first necessary to define the error function E(k) = (1/2)(d(k) − y(k))^2 between the actual output y(k) and the desired output d(k). The adjusting derivative equations for all the weights in the input layer, the chaos controller, the neurons H layer, and the neuron thresholds are defined and described in detail below.

Output Weights ω^O
From the error function above, the output weights ω_i^O can be obtained by calculating the differentials involving the current output and the deviation between the current and previous output weights in the neurons H layer.


Neuron B Feedback Weights ω^D
The feedback weights ω_i^D can be solved using the output weights ω_i^O, the deviation between the current and previous neuron B feedback weights, and p_i(k), the derivative of the chaotic characteristic output h_i^1(k).

Neuron Threshold ω^T
The neuron threshold can be calculated from three parameters: the output weights ω_i^O, the front hidden layer output h_i^2(k), and the deviation between the current and previous neuron thresholds.

Neuron F and Input Weights ω^1
It follows that the neuron F and input weights can be updated from four parameters: the output weights ω_i^O, the front hidden layer output h_i^2(k), the input of the model, and the deviation between the current and previous neuron F and input weights.
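All of the update rules above share the same gradient-descent shape, with a learning rate and a momentum ("correcting-weight impulse") term built from the deviation between the current and previous weight values. A generic sketch follows; the per-layer gradient expressions are described only verbally in the text, so `grad` here is a placeholder, and the function name is illustrative:

```python
def update_weight(w_curr, w_prev, grad, eta=0.1, alpha=0.05):
    """One gradient-descent step with a momentum (impulse) term.

    w_curr, w_prev: current and previous values of the weight being updated
    grad: dE/dw, the error gradient for this weight (derived per layer above)
    eta: learning rate; alpha: correcting-weight impulse coefficient
    """
    return w_curr - eta * grad + alpha * (w_curr - w_prev)

# A positive gradient moves the weight downhill, plus a small
# momentum kick in the direction of the previous change:
w_new = update_weight(w_curr=0.5, w_prev=0.4, grad=2.0)
print(w_new)  # approximately 0.305 = 0.5 - 0.1*2.0 + 0.05*(0.5 - 0.4)
```

The default η = 0.1 and α = 0.05 are the values used in the experiments later in the paper.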
Finally, from all the above derivation results, the overall weights for each layer of the neural network can be solved by training the FCNN model on training vector sets. Summing up the above deduction, the weight update process of the FCNN model, compared with the BP network learning algorithm, is provided in Table 1. The experimental rig for rub-impact AE acquisition in rotatory machinery is shown in Figure 3. The rotor speed controller is used to regulate the rotational speed of the rotor, the friction test bed is applied to generate the AE signal source at the rubbing location, and the AE acquisition system is used to record the AE data in accordance with the various damage degrees.

Experiments and Result Analysis
The friction test bed of the rotor system emitting acoustic emission is shown in Figure 4. The input voltage of the motor is used to regulate the rotational speed. The semiflexible shaft connects the electric motor with the shaft section, and the sliding bearing chock supports the rotor. A mobile friction device is installed at the base of the test bed, located in the space between shaft blocks 1 and 2. A retractable bolt is installed on the side of the screw along the centre of the radial axis, and the acoustic emission signals are excited by the friction against the rotor as the bolt is adjusted.
The AE signal source in rub impact is recorded by two R150 sensors: one placed on the side edge of the rub-impact block and the other on the shaft block, to receive the AE signal from different propagation paths. AE signals generated by the rubbing source are coupled into the propagation path through the rubbing screw and then propagate to the AE sensors. The AE acquisition system used in this experiment, made by PAC Corporation, includes sensors with a frequency range covering 20 kHz to 300 kHz, followed by a preamplifier with 60 dB gain and 18-bit A/D resolution, and a two-channel acquisition card with 1 MHz sampling frequency, into which the two AE signals are gathered, respectively.
Most rotary machinery rubbing faults appear as local rubbing, which periodically generates clusters of high-energy acoustic emission signals; the energy between two adjacent clusters of rubbing acoustic emission signals is much smaller and is mainly caused by mechanical, environmental, and electromagnetic noise. Figures 5-7 show the time-domain waveforms of the continuous rubbing acoustic emission signal for no rub impact, slight rub impact, and serious rub impact, respectively, at a rotational speed of 350 r/min.
According to the three different damage degrees from rubbing (no rub, light rub, and heavy rub), we rubbed the rotors 50 times for each condition, so the AE records can be divided into three classes with 100 items each. From the two-way AE data, we randomly select 40 groups of samples in each class as the input to train the FCNN model and use the remaining 10 groups to test the recognition rate of the trained model.

Table 1: FCNN model parameter learning.
Step 1: Set the initialization of the network weights, the numbers of layers and nodes for each layer, and the initial weight coefficients to ensure the convergence conditions and activation functions.
Step 2: In the forward phase, calculate the outputs y(k) of the training instance from the signals through the network and calculate the error function between the actual output y(k) and the desired output d(k). Then update the output weights ω_i^O, the neuron B feedback weights ω_i^D, the neuron F and input weights ω_ji^1, and the neuron threshold ω_i^T.
Step 3: If the deviation is less than the threshold or the iteration limit has been reached, the training is over; otherwise, go to the backward phase, building the correcting terms from the deviation, output weights, backward weights, and learning rate in order to revise each weight, respectively.
Step 4: Repeat the forward and backward phases until the convergence conditions are satisfied.

The feature vector, as shown in Table 2, is composed of 12-dimensional cepstral coefficients, the Hurst index, and the approximate entropy [37]. Before extracting the acoustic features, the AE signals are segmented into frames of 20 ms length with 1/2 overlap, and a Hamming window is applied to reduce the cutoff effect for each frame. In the experiment, the parameter settings are learning rate η = 0.1 and correcting-weight impulse coefficient α = 0.05. The training deviation is set to 0.0001, and the maximum number of training iterations is J = 1000.
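The framing step described above (20 ms frames, 1/2 overlap, Hamming window) can be sketched as follows; the function name is illustrative, and the signal values are dummy data:

```python
import math

def frame_signal(signal, fs, frame_ms=20, overlap=0.5):
    """Split a signal into Hamming-windowed frames (20 ms, 50% overlap)."""
    frame_len = int(fs * frame_ms / 1000)
    hop = int(frame_len * (1.0 - overlap))
    # Hamming window to reduce the cutoff (spectral leakage) effect per frame
    window = [0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frames.append([s * w for s, w in
                       zip(signal[start:start + frame_len], window)])
    return frames

# At the 1 MHz sampling rate used here, a 20 ms frame is 20000 samples
# and consecutive frames advance by 10000 samples (50% overlap).
frames = frame_signal([0.0] * 50000, fs=1_000_000)
print(len(frames), len(frames[0]))  # 4 frames of 20000 samples
```

Each windowed frame would then feed the cepstral, Hurst index, and approximate entropy feature extraction to build the 14-dimensional feature vector.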
Set the structure of the FCNN model: it is a single-hidden-layer FCNN, including input, hidden, and output layers. The FCNN has a 14-node input layer, a 3-node output layer, and a hidden layer with 7, 8, 9, 10, or 11 nodes, respectively.
Set the initial state of the network. The self-feedback structure of the logistic mapping unit in neuron B reflects the chaotic characteristics of the system, so the feedback weight is set as W^D = 3.7.
The input weights of neuron F are random numbers evenly distributed in [−1, 1]. The threshold of neuron H is W^T = 0, and the output weights are W^O = ±1 with equal probability [38].

Computational Performance on Different Nodes in the Hidden Layer

We use the above network to carry out experiments to detect fault conditions of the rubbing caused by contact between dynamic and stationary components. Table 3 and Figures 8 and 9 give the computational performance for different numbers of nodes in the hidden layer, leading to the conclusion that the FCNN algorithm has excellent computational convergence.
The deviation goal can be achieved with node counts from 7 to 11 in the hidden layer. As the number of nodes rises, the computation time and iteration steps increase: with 9 nodes in the hidden layer, the computation takes 3.19 s and 27 steps; with 10 nodes, 17.40 s and 100 steps; with 11 nodes, 25.66 s and 232 steps. In addition, with 7 nodes the computation takes 10.32 s and 55 steps, more than with 8 nodes. Above all, the node number proves to be an important factor in the computational complexity of the FCNN model; notably, the 8-node hidden layer consumes the shortest computation time.
Therefore, a smaller deviation can be obtained by choosing an appropriate number of nodes and training time.

Recognition Performance on Different BP Models.
We use the above model settings to carry out experiments comparing the recognition performance of the FCNN with 8 nodes in the hidden layer against the BP network. The BP algorithm is also set up with one hidden layer with different numbers of nodes and is analysed using the same training and test data. The experiments show that adding nodes to the hidden layer appropriately decreases the deviation of the network and improves the recognition accuracy, but too many nodes complicate the network and raise the probability of overfitting. Therefore, a suitable number of nodes in the hidden layer plays an important role in FCNN's performance in detecting the rotor rub-impact fault in rotary machines.

Conclusions
This paper researches fault degree recognition from the AE signal source in rotor rubbing based on the CNN model, adding a self-feedback neural network that simulates nonlinear chaotic action, in contrast with the traditional BP network. To overcome the BP network's defect of falling into local minima, the FCNN uses chaotic characteristics to improve the capability of global optimization search effectively. The system gives a more essential description of the AE rub features and therefore has superiority in fault diagnosis applications.

Figure 2: The structure of the FCNN model.

Figure 5: AE signal with no rub impact. (a) AE waveform in the time domain. (b) AE waveform in the frequency domain.

Figure 6: AE signal with slight rub impact. (a) AE waveform in the time domain. (b) AE waveform in the frequency domain.
Figure 7: AE signal with serious rub impact. (a) AE waveform in the time domain. (b) AE waveform in the frequency domain.

Figure 8: Computation time for different numbers of nodes in the hidden layer.

Table 4 and Figures 10 and 11 present the comparison of recognition performance between the BP and FCNN algorithms. Using the FCNN model, the training time and error ... hidden layer; using the BP1 model, the training time decreases to 1.56 s and the error rate increases to 8.9% with 15 nodes in the hidden layer. It can be seen that FCNN needs fewer nodes and shorter time, with similar theory and the same number of layers, to achieve better performance.

Table 3: Comparison of different numbers of nodes in the hidden layer.

Table 4: Comparison of FCNN and BP.