Prediction of the Loss of Feed Water Fault Signatures Using Machine Learning Techniques

The occurrence of faults in nuclear power plants and their precise prediction are extremely important in avoiding disastrous consequences. The inherent limitations of the current fault diagnosis methods make machine learning techniques and their hybrid methodologies possible solutions to this challenge. This study sought to develop, examine, compare, and contrast three robust machine learning methodologies, the adaptive neurofuzzy inference system, long short-term memory, and radial basis function network, by modeling the loss of feed water event using RELAP5. The performance indices of residual plots, mean absolute percentage error, root mean squared error, and coefficient of determination were used to determine the most suitable algorithms for accurately diagnosing the loss of feed water transient signatures. The study found that the adaptive neurofuzzy inference system model outperformed the other schemes when predicting the temperature of the steam generator tubes, the radial basis function network scheme was best suited to forecasting the mass flow rate at the core inlet, while the long short-term memory algorithm was best suited for estimating the severities of the loss of feed water fault.


Introduction
Advances in digital and computer technologies have increased the reliability and efficiency of systems and processes in most industries, including the nuclear industry. Nuclear power plants (NPPs), whose main objective is to generate reliable and safe energy, have adopted automated systems as a result of these digital and computer revolutions. The automated systems have been adopted to increase the plants' safety and efficiency as well as to reduce human errors in fault diagnostics. Intelligent fault diagnostic schemes are capable of enhancing the safety standards in these plants due to their capabilities of timely detection, isolation, and identification of faults in a system. The current fault diagnosis systems being used in NPPs have inherent limitations [1], which have necessitated the development of intelligent methods for diagnosing faults. In this regard, active research in the field of machine learning algorithms and their hybrid techniques is proving to be a promising solution to these challenges [2,3].
Various machine learning technologies that have been proposed include the artificial neural network (ANN), deep learning (DL), adaptive neurofuzzy inference system (ANFIS), and support vector regression (SVR). Examples of these studies include a study by [4] where two deep learning networks, a convolutional neural network (CNN) and a long short-term memory (LSTM) network, were used simultaneously in the diagnosis of faults. Moreover, a deep convolutional neural network (DCNN) was optimized by a sliding window technique to diagnose faults [5]. An LSTM model was applied in an NPP to predict abnormal conditions [6]. Works by [7] used an LSTM network and a Function-Based Hierarchical Framework (FHF) in the autonomous operation of the safety systems of a reference three-loop pressurized water reactor NPP. The fault identification capability of the deep belief network (DBN) was exploited to formulate a fault diagnosis scheme [8]. A study by [9] developed two ANN techniques, the radial basis function network (RBFN) and the Elman neural network (ENN), to diagnose faults. The robustness of four different neural networks was tested by [10] by evaluating different break sizes of loss of coolant accidents. Several machine learning algorithms such as SVR and cascaded fuzzy neural networks were created by [3] to diagnose severe accidents in an NPP, while SVR schemes were employed in the diagnosis of incipient cracks at the steam generator [11]. An LSTM algorithm was used in accident diagnostics [12], whereas works by [13] applied ANFIS models in neutron noise identification inside reactor cores.
Different studies have been conducted to explore the huge potential presented by these machine learning technologies for use in fault diagnostic schemes. However, there is a need to perform comparative studies to show how these algorithms compare and contrast with each other. The comparative studies that have so far been done to evaluate these schemes are few and inconclusive. These studies include [5], which compared the competencies of a deep learning scheme with those of SVR and ANN networks, while [8] compared the performances of deep belief networks (DBN), the support vector machine (SVM), and the backpropagation neural network (BPNN). A comparative study was done between ANFIS and ANN as presented by [14]. A neutron noise source reconstruction process was used to compare RBFN, ANFIS, and other algorithms [15]. These respective studies showed that ANN, DL, and ANFIS methodologies are superior to other machine learning methodologies. Therefore, there is a need for a study that examines these three robust algorithms in detail.
ANFIS is a robust approach that combines the knowledge representation of fuzzy systems with the adaptive learning capabilities of ANNs to form a hybrid intelligent algorithm [16,17]. RBFN, being a variant of ANN, has shown superior prediction competencies among ANN methodologies [9]. The ability of DL to disentangle abstractions makes it suitable for use in intelligent systems. More particularly, the recurrent neural network (RNN), a type of DL network, is the most appropriate for processing sequential data. The most successful RNN architecture is LSTM due to the presence of a memory cell that overcomes the vanishing gradient and blowing-up limitations experienced by the basic RNN. Therefore, LSTM algorithms have high prediction competencies due to their ability to grasp the structure of data dynamically over time [18].
This study modeled a risk-significant initiating event, the Loss of Feed Water (LOFW), focusing on the reactor coolant system (RCS) of the Chinese-built Qinshan I NPP using RELAP5/SCDAP Mod4.0, and then compared and contrasted the performance of the robust ANFIS, RBFN, and LSTM methodologies. The main contributions of this study were the development of the RBFN, ANFIS, and LSTM models for diagnosing the LOFW transient signatures, the evaluation of the prediction capabilities of the ANFIS, RBFN, and LSTM approaches on various critical parameters affected by the LOFW event by analyzing several performance indices, and the proposal of the most suitable algorithms for accurately diagnosing LOFW signatures. Section 2 presents the methodology of the study, where a description of the code and system is given; the proposed ANFIS, RBFN, and LSTM models are expounded in detail, and the implementation of the three models for fault diagnosis is presented. The results and their discussion, involving the analysis of different parameters under LOFW event fault signatures, are elaborated in Section 3. Section 4 spells out the concluding remarks of this study.

Methodology
2.1. Description of the System and Code. The RELAP5 simulation code is a system code used in analyzing thermal-hydraulic components during transients, postulated accidents, and steady-state conditions in light water reactors [19]. In this study, the code was utilized in simulating the variations of the thermal-hydraulic features of the RCS of Qinshan I NPP. Based in China, Qinshan I NPP is a two-loop pressurized water reactor (PWR) with a 966 MW thermal output and a 300 MW electric capacity.
The modeled components of the Qinshan I NPP's RCS include the pressurizer, cold legs, steam generators, hot legs, reactor vessel, reactor coolant pumps, and reactor core. Figure 1 shows the Qinshan I RCS nodalization diagram. A RELAP5 simulation of the steady-state condition of the Qinshan I plant was performed and verified against the measured values from the actual plant. Table 1 shows the verification of the simulated values from the RELAP5 code against the corresponding measured values of Qinshan I NPP. The deviations detected were within permissible limits, making the simulation code suitable for modeling transients in the plant.
The LOFW transient event was modeled in RELAP5 in order to determine the mass flow rate, pressure, and temperature within the RCS. LOFW is a high-risk initiating event involving the malfunction of the systems that provide and circulate water through the secondary loop of an NPP. This transient is characterized by the start-up failure of the auxiliary feed water system, the loss of main feed water, and the inability to depressurize the RCS [20]. The inability of the systems in both the primary and secondary loops to remove heat may result in heat-up or even core damage; hence, LOFW is considered a high-risk initiating event that could lead to core damage [21]. The LOFW transient event was the case study for this work.
Initially, the steady-state conditions of the plant were modeled for 100 seconds to establish the benchmark state of the parameters. Thereafter, the modeling of various LOFW scenarios was initiated, and the changes in the respective critical parameters were tracked. Several scenarios of the loss of feed water event were modeled, and the severities of the LOFW events and the steady-state conditions were plotted as a function of time for different parameters. Figure 2 displays the heat transfer rate variation with increasing time, while Figure 3 shows the fluctuation of the water level at the pressurizer for the different LOFW scenarios. The core inlet temperature changes for the varying severities of the fault are presented in Figure 4. As illustrated in Figures 2-4, there is a link between the severity of LOFW and the deviation of the parameters.

ANFIS
2.2.1. ANFIS Overview. The adaptive neurofuzzy inference system (ANFIS) scheme is a feed-forward, multilayer network that employs the learning algorithms of neural networks and the human-like reasoning of fuzzy logic to map from an input space to an output space. The ANFIS approach uses the Takagi-Sugeno fuzzy inference to construct fuzzy rules with the proper membership functions, combined with the excellent artificial neural network features of nonlinear function approximation and learning, to generate the stipulated input-output pairs [16,17]. Besides its excellent generalization capabilities, the ANFIS algorithm also has a high sensitivity in detecting patterns, transparency in reasoning, speedy training and processing ability, and adaptive learning. In addition, it is considered a universal estimator due to its low error output [22]. The ANFIS algorithm has been applied in many fields due to its ability to model complex conditions. These applications include a study by [23] where ANFIS was employed in optimizing energy systems; the unfolding of the neutron noise source in a nuclear reactor core by ANFIS algorithms [13]; multiclass event classification using the ANFIS approach in a prototype fast breeder reactor [24]; the estimation of turbine-generator output by the ANFIS approach [25]; and, during radiology image analysis, a combination of ANFIS and Granger Causality utilized in the detection of both linear and nonlinear causal information [26].

ANFIS Architecture.
The architecture of ANFIS has five layers, as shown in Figure 5.
The first layer is known as the fuzzification layer. Using several parameters, this layer generates membership functions, and its nodes are adaptive with node functions. The rule layer is the second layer, where multiplication implementing the logical AND is carried out. Each fixed node labeled Π in this layer produces an output as the product of all its incoming inputs, as expressed by the following equation:

$$w_i = \mu_{A_i}(x) \times \mu_{B_i}(y), \quad i = 1, 2. \tag{1}$$

Layer 3, with fixed nodes, is known as the normalization layer. As the name suggests, this layer implements the normalization of the firing strengths. The output of its nodes, also referred to as the normalized firing strength, is calculated as the quotient of the respective $w_i$ and the sum of all $w_k$, as expressed in the following equation:

$$\bar{w}_i = \frac{w_i}{\sum_k w_k}. \tag{2}$$

The defuzzification layer is the fourth layer, with adaptive nodes. The operation done in this layer is linear regression, and it results in the output shown by the following equation:

$$\bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i), \tag{3}$$

where $p_i$, $q_i$, and $r_i$ are parameters computed during the learning process while $\bar{w}_i$ is the result of the previous layer. Layer 5 is a single summation layer with a single fixed node labeled Σ. The summation of the results of layer 4 generates the network's overall output, given as follows:

$$f = \sum_i \bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i}. \tag{4}$$
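The five-layer computation above can be sketched numerically. The following is a minimal sketch assuming a two-input, two-rule first-order Takagi-Sugeno system with Gaussian membership functions; all parameter names and values are hypothetical illustrations, not the trained model of this study.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Layer 1: Gaussian membership function with center c and width sigma."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def anfis_forward(x, y, mf_params, consequents):
    """Forward pass of a two-input, two-rule Takagi-Sugeno ANFIS."""
    # Layer 1: fuzzification of each input
    mu_A = [gaussian_mf(x, c, s) for c, s in mf_params["A"]]
    mu_B = [gaussian_mf(y, c, s) for c, s in mf_params["B"]]
    # Layer 2: rule firing strengths via the product T-norm (logical AND)
    w = np.array([a * b for a, b in zip(mu_A, mu_B)])
    # Layer 3: normalized firing strengths
    w_bar = w / w.sum()
    # Layer 4: weighted first-order consequents f_i = p_i*x + q_i*y + r_i
    f = np.array([p * x + q * y + r for p, q, r in consequents])
    # Layer 5: overall output is the sum of the weighted consequents
    return float(np.dot(w_bar, f))

# Illustrative membership and consequent parameters
mf_params = {"A": [(0.0, 1.0), (1.0, 1.0)], "B": [(0.0, 1.0), (1.0, 1.0)]}
consequents = [(1.0, 1.0, 0.0), (2.0, -1.0, 0.5)]
out = anfis_forward(0.5, 0.5, mf_params, consequents)
```

In hybrid training, the consequent parameters (p, q, r) would be fitted by least squares in the forward pass and the membership parameters (c, sigma) by gradient descent in the backward pass.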

ANFIS Parameters.
The development of the ANFIS algorithm involves three processes: the formulation of the model structure or Fuzzy Inference System (FIS), the optimization of the model structure, and the evaluation of the ANFIS model structure. The formation and optimization of the model structure make use of the training dataset, while the checking dataset prevents the predictor model structure from overfitting; overfitting starts as soon as the checking error increases during model optimization. For model validation, a completely different testing dataset is used to appraise the precision of the estimated values [27]. The optimized parameters of the ANFIS algorithms were derived by a trial-and-error method.
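The overfitting criterion above (stop optimizing once the checking-set error starts to rise) can be sketched as follows; the error values are synthetic and purely illustrative.

```python
def select_best_epoch(checking_errors):
    """Return the epoch at which the checking error first starts to increase,
    i.e. where overfitting begins; training should stop at that point."""
    for epoch in range(1, len(checking_errors)):
        if checking_errors[epoch] > checking_errors[epoch - 1]:
            return epoch - 1
    return len(checking_errors) - 1

# Checking-set error typically falls, then rises once the model overfits:
errors = [0.50, 0.31, 0.22, 0.18, 0.17, 0.19, 0.24]
best = select_best_epoch(errors)  # epoch 4, the minimum before the rise
```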
The best model to predict the LOFW transient event had its model structure formulated by subtractive clustering and tuned by hybrid learning algorithms, with 1000 epochs selected for each optimization cycle. The hybrid learning algorithms used the least squares approach to tune the consequent parameters during the forward pass and the gradient descent mechanism to update the premise parameters during the backward pass.

LSTM
The LSTM algorithm is a variant of Recurrent Neural Networks (RNNs). It was introduced by [18] to address the backpropagation problems of long-term sequences. RNN is one of the powerful networks of deep learning, with the ability to extract information features in dynamic and nonlinear systems. However, two well-known limitations hinder the basic RNN from effectively learning the information in long-timeline sequential data: the vanishing gradient and blowing-up. The vanishing gradient is a phenomenon arising from backpropagation optimization, and it may lead to weights in most areas becoming zero. On the other hand, the blowing-up scenario may result in the oscillation of weights. These issues undermine the learning process of the basic RNN. The LSTM algorithm overcomes these limitations by introducing a memory cell and a gate mechanism. This gives the LSTM model the ability to extract long-term dependencies among elements in sequential data, resulting in superior prediction outcomes. Due to its superior prediction capabilities, the LSTM algorithm has found many applications in various industries. Examples include a study by [28] that applied an LSTM network in the prediction of the water level of the pressurizer of a nuclear reactor, the prediction of abnormal conditions in an NPP using an LSTM model [6], the use of an LSTM scheme in the prediction of bearing performance degradation [29], and the forecasting of wind power employing an LSTM approach [30].

LSTM Architecture.
The LSTM architecture comprises three main parts, namely, the input layer, multiple hidden layers, and the output layer. The hidden layers contain memory cells. The memory cell stores the previous state for a certain number of time slices and optimizes the information flow into and out of the cell using its nonlinear gating mechanism. Each memory cell has an input gate, an output gate, and a forget gate. Inside every LSTM cell, the inputs of these gates are calculated using the input vector X and the related weights. The forget gate filters out redundant parameters and allows long-term dependencies to pass through the central memory unit (C) unaffected. This enables the vanishing gradient and blowing-up limitations to be avoided and the required memory state to be transmitted to the next cell. The information required in long-term dependencies for prediction purposes resides in the central memory unit (C) [18]. Figure 6 illustrates an example of an LSTM cell.
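A single step of the gating mechanism described above can be sketched with NumPy; the dimensions and randomly initialized weights are illustrative assumptions, not the trained network of this study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step. W, U, b hold the parameters of the forget (f),
    input (i), and output (o) gates and the candidate state (g)."""
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])   # forget gate
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])   # input gate
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])   # candidate memory
    c = f * c_prev + i * g   # central memory unit C carries long-term state
    h = o * np.tanh(c)       # hidden state passed on to the next cell
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4  # illustrative sizes, not the study's 180 hidden units
W = {k: rng.standard_normal((n_hid, n_in)) * 0.1 for k in "fiog"}
U = {k: rng.standard_normal((n_hid, n_hid)) * 0.1 for k in "fiog"}
b = {k: np.zeros(n_hid) for k in "fiog"}
h, c = lstm_cell(rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
```

Because c is an additive blend of the previous state and the new candidate, gradients can flow through it across many time steps, which is what mitigates the vanishing-gradient problem.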

LSTM Parameters.
LSTM development involves two main steps: the generation of the model structure using the training and checking datasets and the evaluation of the LSTM model predictor against the testing dataset.
The LSTM network characteristics were chosen through a process of trial and error. The adaptive moment estimation (Adam) optimizer was selected as the training algorithm due to its superior optimization qualities over the others [31]. The other optimized hyperparameters of the model were 180 hidden units, 1000 epochs for each optimization cycle, and a learning rate of 0.00003.
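The adaptive moment estimation update can be sketched as follows, using the study's learning rate of 0.00003; the quadratic objective minimized here is purely illustrative.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=3e-5, b1=0.9, b2=0.999, eps=1e-8):
    """One update of the adaptive moment estimation (Adam) optimizer."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2 from theta = 1.0 with the study's learning rate
theta, m, v = np.array(1.0), 0.0, 0.0
for t in range(1, 1001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

Because Adam normalizes each step by the running gradient variance, the effective step size stays close to the learning rate, which makes small rates such as 3e-5 behave predictably during long training runs.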

RBFN Outline.
The Radial Basis Function Network (RBFN) algorithm is a variant of the artificial neural network (ANN) that uses a single hidden layer to map the input space to the output space. The RBFN algorithm is superior to the multilayer perceptron variants of ANN due to its lower computational requirement, the automatic determination of its hidden layer size, and its excellent prediction capabilities. Introduced by Broomhead and Lowe [32], the RBFN algorithm is an excellent model used for data classification, function approximation, and control of systems.
RBFN has been applied in many fields. Examples include a comparison of RBFN and the Elman Neural Network (ENN) in the fault diagnostics of an NPP [9] and the identification of an accidental control rod drop using the RBFN algorithm [33]. RBFN was compared against other soft computing methods in a neutron noise source reconstruction study [15]. A comparative study was done between RBFN and other multilayer perceptron neural networks in the prediction of critical heat flux [34]. Work by [35] utilized RBFN in identifying an Accelerator Driven System, and RBFN was used in the calculation of the departure from nucleate boiling ratio (DNBR) [36].
Figure 5: The ANFIS architecture [17].

RBFN Architecture.
The RBFN architecture is made up of three distinct parts: the input layer, a single hidden layer, and the output layer. Both the input and output layers are linear in nature while the hidden layer is nonlinear.
The Gaussian transfer function present in the hidden layer is defined by the biases and weights of the neurons. The Gaussian function, shown in the following equation, activates the RBF neurons:

$$y_j = \exp\left(-\frac{\lVert \vec{x} - \vec{\mu}_j \rVert^2}{2\sigma^2}\right), \tag{5}$$

where $y_j$ represents the output of the hidden neuron at position $j$, $\vec{\mu}_j$ represents the midpoint of the hidden neuron at position $j$, $\vec{x}$ represents the input vector, and $\sigma^2$ represents the variance of the function [32]. The outputs of the hidden-layer neurons are then fed to the output layer. Figure 7 illustrates the RBFN structure.

RBFN Parameters.
Model development using the RBFN algorithm involves two main steps: the generation of the model structure using the training dataset and the evaluation of the RBFN model predictor against the testing dataset. The RBFN network characteristics were chosen through a process of trial and error. The selected network properties were the radial basis with exact fit and a spread constant of 1.0. Subsequently, the respective input and output data were fed in to yield the optimized model structures.
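An exact-fit radial basis network of the kind described above can be sketched in NumPy: one Gaussian neuron is centered on every training sample, and the output weights are obtained by solving the resulting square linear system. The Gaussian width parameterization via the spread constant is an assumption for illustration.

```python
import numpy as np

def rbf_design(X, centers, spread=1.0):
    """Gaussian activations y_j = exp(-||x - mu_j||^2 / (2*spread^2)),
    following equation (5)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * spread ** 2))

def rbfn_exact_fit(X, y, spread=1.0):
    """Exact-fit RBFN: one hidden neuron per training sample; output weights
    are obtained by solving the square interpolation system Phi w = y."""
    Phi = rbf_design(X, X, spread)
    w = np.linalg.solve(Phi, y)
    return lambda Xq: rbf_design(Xq, X, spread) @ w

# Illustrative 1-D function approximation with spread constant 1.0
X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.sin(X).ravel()
predict = rbfn_exact_fit(X, y, spread=1.0)
```

The Gaussian kernel matrix over distinct centers is positive definite, so the solve is well posed and the network reproduces the training targets exactly; the spread constant then controls how smoothly it interpolates between them.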

LOFW Diagnosis Process.
Modeling of the steady-state conditions and various LOFW scenarios (10%, 25%, 50%, 60%, 75%, 90%, and 100% fault severities) of the nuclear plant was performed, and the corresponding signatures were retrieved for further analyses. These signatures were collated to form a dataset with 17,718 samples. The input parameters were the temperature at the reactor core inlet, the water level at the pressurizer, and the secondary pressure at the steam generator (SG) U-tube, while the mass flow rate at the core inlet, the temperature of the SG U-tube, and the severity of the fault estimator served as the outputs of the models. The dataset was then normalized.
Normalization is a data preprocessing procedure that preserves the differences in the range of values by scaling them to a common scale. With a normalized dataset, the applied algorithm can more easily extract long-term dependencies among elements in sequential entries, resulting in superior prediction outcomes. Subsequently, the 17,718 input-output pairs were randomly split into the training dataset, which comprised the majority of the samples, and the checking dataset. The ANFIS, LSTM, and RBFN networks were generated, trained, and optimized using an identical set of training and checking datasets. Two categories of models were generated, that is, models with a single output only and multiple-output schemes. Both the RBFN and LSTM schemes generated single- and multiple-output algorithms, whereas the ANFIS methodology had single-output models only since the ANFIS structure has a single output. Three model structures of single-output ANFIS schemes, three model structures of single-output LSTM algorithms, one multiple-output LSTM network, three single-output model structures of the RBFN methodology, and one multiple-output RBFN algorithm were generated as a result of the training process. For the single-output models, each model had three input parameters and one output for each of the critical parameters (core inlet mass flow rate, SG temperature, and fault severity). Multiple-output algorithms of RBFN and LSTM with the same inputs and outputs were also generated for comparison with the single-output models.
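The normalization and random-split preprocessing described above can be sketched as follows; the min-max scaling, the 0.9 training fraction, and the synthetic stand-in data are assumptions for illustration, since the exact split proportion and scaling scheme are not fully specified in the text.

```python
import numpy as np

def minmax_normalize(data):
    """Scale each column to [0, 1] while preserving relative differences."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo), (lo, hi)

def random_split(data, train_frac, seed=42):
    """Randomly split samples into training and checking subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    cut = int(train_frac * len(data))
    return data[idx[:cut]], data[idx[cut:]]

# Illustrative stand-in for the 17,718-sample LOFW dataset
# (6 columns: 3 input parameters, 3 output parameters)
raw = np.random.default_rng(0).uniform(280.0, 600.0, size=(17718, 6))
scaled, (lo, hi) = minmax_normalize(raw)
train, check = random_split(scaled, train_frac=0.9)
```

Keeping the (lo, hi) pair allows the same scaling to be applied to the separately simulated testing dataset and the predictions to be mapped back to physical units.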
Five different LOFW events were then simulated to obtain the testing dataset. These LOFW events had fault severities of 15%, 30%, 40%, 70%, and 85%, respectively. The testing dataset had a sample size of 1,766, with the 15% fault severity accounting for 736 entries, 397 samples from the 30% fault severity, 302 samples from the 40% fault severity, 179 entries from the 70% fault severity, and 152 samples from the 85% fault severity. The inputs of the testing dataset derived from these critical parameters were subsequently projected onto the optimized ANFIS, LSTM, and RBFN approaches, and the predicted outcomes were calculated. The efficacy of the fault diagnostic schemes was evaluated by their ability to recognize patterns or signatures in the different critical parameters. Figure 8 displays the framework of the proposed fault diagnosis system. The predicted values of the optimized models and the actual values of the testing dataset were compared and contrasted to evaluate the prediction competencies of these models.

Results and Discussion
Optimized models of ANFIS, LSTM, and RBFN were selected for evaluation against the testing dataset. The performances of the outputs of all the models were assessed through both statistical and graphical approaches. Statistical approaches give a succinct method of comparing models numerically, while graphical assessments provide insightful observations about the estimated outputs derived from the optimized models in comparison to the actual values. The mean absolute percentage error (MAPE), the root mean squared error (RMSE), and the coefficient of determination (R²) were employed as statistical appraisers of performance. MAPE is a dimension-independent statistical performance indicator that expresses the summation of the residuals as a function of the actual values in percentage form; due to its unit independence, it is suitable for model comparison. The RMSE index is dimension sensitive and is used to estimate the residuals between the actual and predicted values. The coefficient of determination (R²) shows the proportion of the variance of the predicted value explained by the true value. MAPE, RMSE, and R² are expressed by equations (6)-(8), respectively:

$$\text{MAPE} = \frac{100}{N} \sum_{i=1}^{N} \left| \frac{a_i - p_i}{a_i} \right|, \tag{6}$$

$$\text{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (a_i - p_i)^2}, \tag{7}$$

$$R^2 = 1 - \frac{\sum_{i=1}^{N} (a_i - p_i)^2}{\sum_{i=1}^{N} (a_i - \bar{a})^2}, \tag{8}$$

where N is the sample size, $p_i$ symbolizes the predicted values, $a_i$ symbolizes the actual values, and $\bar{a}$ is the mean of the actual values. Table 2 shows the performance indicators of the critical parameters used in the prediction of LOFW transients. These critical parameters were the mass flow rate at the core inlet, the temperature of the SG U-tube, and the severity of the fault estimator. The statistical values of MAPE, RMSE, and R² were computed from the actual outputs of the testing dataset and the corresponding predicted output values estimated by the ANFIS, LSTM, and RBFN algorithms. Table 2 shows that, for the same parameter, a difference in the MAPE, RMSE, and R² values was observed between the ANFIS, RBFN, and LSTM models.
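Equations (6)-(8) translate directly into short NumPy functions; the small actual/predicted arrays below are purely illustrative.

```python
import numpy as np

def mape(actual, pred):
    """Mean absolute percentage error (%), equation (6)."""
    return 100.0 * np.mean(np.abs((actual - pred) / actual))

def rmse(actual, pred):
    """Root mean squared error in the units of the parameter, equation (7)."""
    return np.sqrt(np.mean((actual - pred) ** 2))

def r2(actual, pred):
    """Coefficient of determination, equation (8)."""
    ss_res = np.sum((actual - pred) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative values only
a = np.array([100.0, 200.0, 300.0, 400.0])
p = np.array([110.0, 190.0, 310.0, 390.0])
```

Note that MAPE is unit-free and therefore comparable across parameters, whereas RMSE must be read against the range of each parameter's values, as is done in the discussion below.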
This difference in values can be attributed to the different model structures used in computing the outputs of the testing dataset, since different model structures produce different outputs.

Statistical Analyses.
Additionally, it was observed that the RBFN single-output models had the same output values as the RBFN multiple-output model, whereas the LSTM single-output algorithms outperformed the LSTM multiple-output model in predicting all the critical parameters. This is a result of the input-output mapping during model generation. In RBFN, inputs were mapped to each output independently, which resulted in the same output values irrespective of the number of outputs, whereas during LSTM model structure generation, inputs were collectively mapped to outputs, hence the difference in the model prediction performance with varying outputs. In line with these observations, this study focused on single-output models only due to their superior prediction capabilities over multiple-output algorithms.

Mass Flow of the Core Inlet.
The MAPE values for all the single-output models were less than 1%, with the RBFN model having the lowest value (0.3238%) and the LSTM network the highest at 0.9645%. This implies that all the algorithms are highly accurate at forecasting the core inlet mass flow rate, since MAPE values below 10% are classified as highly accurate forecasting [37].
The RMSE values for all the models were two orders of magnitude smaller than the range of values. The RBFN algorithm (20.1910) outperformed both the ANFIS approach and the LSTM network, with the ANFIS network following closely with an RMSE value of 20.8193 and the LSTM scheme at 57.1781. Generally, all the models had low RMSE values compared to the range of values, inferring that all the algorithms have good prediction competencies and hence are well suited for system fault diagnosis. The closer the R² value is to 1, the less the variance between the predicted values and the true values. The ANFIS scheme, with an R² value of 0.9869, was the closest to 1 compared to the other models of RBFN (0.9800) and LSTM (0.8104). This infers that all the models were excellent in predicting the core inlet mass flow.

Temperature of the SG U-Tube.
Similar to the core inlet mass flow, the MAPE values for all the models were less than 1%, with the ANFIS model outperforming the other algorithms.
This implies that all the algorithms have good prediction competencies [37]. The ANFIS algorithm had the smallest RMSE value of 1.4038, followed by the RBFN network at 1.5662 and the LSTM scheme with an RMSE value of 5.2899. The RMSE values for all the models were two orders of magnitude smaller than the range of values. This implies that all the algorithms are highly accurate at forecasting the temperature of the SG U-tube; hence, the models are well suited for estimating this parameter. The closer the R² value is to 1, the less the variance of the predicted values measured against the true values. The ANFIS scheme (0.9843) had the R² value closest to 1 compared to the other models of RBFN (0.9834) and LSTM (0.8162). This signifies that all the models were superb predictors of the SG U-tube temperature.

Severity of the Fault.
The LSTM scheme is considered to have a reasonable forecasting ability since its MAPE value fell within the 30-50% range [37]. Both the RBFN and ANFIS networks had MAPE values greater than 50%, signifying that the residuals were sparsely scattered from the regression line; hence, they were not good at predicting the severities of the fault. This could be attributed to the nature of the output data: mapping continuous input data to discrete output data results in a compatibility error for both the RBFN and ANFIS networks, leading to dismal prediction performance.
The RMSE values for all the models were of the same order of magnitude as the range of values. The LSTM algorithm had the smallest RMSE value (18.3586) compared to the other models. The dismal performance of both the RBFN and ANFIS networks in predicting the severities of the fault was attributed to the discrete nature of the output data. This shows that the LSTM network has superior fault-severity estimation competencies compared to the other models.
Consistent with the MAPE and RMSE performance, the R² value of the LSTM algorithm (0.8412) was closer to 1 than those of the other networks. Since R² values closer to 1 show less variance between the predicted values and the true values, this indicates that the LSTM scheme has a superior capability to the ANFIS and RBFN schemes in predicting the different severities of the fault.

Graphical Assessments.
Graphical assessments of the outputs of the three single-output models for the different parameters were also performed to comprehensively understand the performance of the models. Scatter plots of the residuals were employed for this purpose. Visualizing the predicted data as a plot of the residuals against the actual values gives an excellent way of illustrating the variations of the predicted outputs from the actual values: deviations of the data points from the horizontal regression line are magnified, making unusual patterns and observations easier to spot [38]. Figures 9-11 show the residual scatter plots of the core inlet mass flow rate, the temperature of the SG U-tube, and the severity of the fault estimator, respectively. The residuals were normally distributed, devoid of any distinct pattern, for the ANFIS model (Figure 9(a)) and the RBFN model (Figure 9(b)). Residuals of the LSTM network (Figure 9(c)) were also normally dispersed. In keeping with the pattern exhibited by the RMSE, the residuals of the RBFN model (Figure 9(b)) are more clustered around the zero residual line than those of the ANFIS and LSTM schemes. This signifies that the RBFN model has a superior precision capability to both the ANFIS and LSTM algorithms for the prediction of the mass flow rate at the core inlet. Figure 10 illustrates that the residuals were normally and randomly dispersed, with the majority of the residuals observed around the zero residual line. The random dispersion of residuals implies that there was no systematic error present in the predicted outputs of the temperature at the SG U-tube. Additionally, the residuals for the ANFIS, RBFN, and LSTM networks were two orders of magnitude smaller than the magnitude of the range of values, indicating that the residuals were too inconsequential to cause any significant deviation in the values of the temperature at the SG U-tube. Hence, all the networks were suitable for the prediction of the temperature of the SG U-tube.
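A residual scatter plot of the kind used in Figures 9-11 can be produced as follows; the synthetic actual and predicted values stand in for the study's testing data.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: render straight to a file
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
actual = np.linspace(4000.0, 5000.0, 300)        # stand-in actual values
predicted = actual + rng.normal(0.0, 20.0, 300)  # stand-in model output
residuals = actual - predicted

fig, ax = plt.subplots()
ax.scatter(actual, residuals, s=8)
ax.axhline(0.0, linestyle="--")  # zero-residual reference line
ax.set_xlabel("Actual value")
ax.set_ylabel("Residual (actual - predicted)")
fig.savefig("residuals.png")
```

A healthy model shows residuals scattered randomly and symmetrically about the zero line; any funnel shape or trend with the actual value would indicate a systematic error.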
However, the ANFIS model has a superior forecasting competency to the LSTM and RBFN algorithms in this case, as the majority of its residuals are clustered around the regression line.
With the discrete-natured severities of the fault dataset, the residuals were centered at the modeled severities, as demonstrated in Figure 11. For all the networks, the magnitude of the residuals and that of the range of values were of the same order. The size of the residuals of the ANFIS and RBFN schemes (Figures 11(a) and 11(b)) increased with every increase in the severity of LOFW.
This suggests that the forecasting ability of the ANFIS and RBFN networks diminished at higher values of the severity of the fault. On the other hand, the highest magnitude of the residuals of the LSTM algorithm (Figure 11(c)) was below |10%| for all the modeled cases of LOFW events. This infers that, of the three models, the LSTM model performed best at predicting the severities of the LOFW fault.
After carefully assessing the residual plots and the MAPE, RMSE, and R² values of all the networks, it can be inferred that the RBFN, ANFIS, and LSTM models have the ability to diagnose fault signatures of parameters in an NPP. Therefore, these algorithms are well suited for the development of intelligent fault diagnostic systems. For LOFW fault diagnosis, the RBFN scheme is best suited to predicting the mass flow rate at the reactor core inlet, the ANFIS network is best suited to predicting the temperature at the SG U-tube, and the LSTM network outperforms the ANFIS and RBFN schemes in forecasting the severities of the LOFW transient.

Conclusions
In this study, various machine learning techniques have been evaluated by estimating various critical parameters during a risk-significant LOFW event. The three methodologies exhibited superior prediction capabilities, speedy training and processing abilities, adaptive learning, and excellent generalization. Generally, the results showed that the RBFN, LSTM, and ANFIS algorithms performed well in the prediction of the parameter outputs, and therefore these algorithms can be applied in developing precise and robust fault diagnostic systems. The single-output LSTM models outperformed the multiple-output LSTM algorithm in predicting the critical parameters, whereas the multiple-output RBFN network and the single-output RBFN schemes had similar performance accuracies. Additionally, it was demonstrated that the LSTM network performed best at predicting the severities of the LOFW fault, the RBFN scheme best estimated the mass flow rate at the reactor core inlet, and the ANFIS algorithm outshone the other schemes in forecasting the temperature at the SG U-tube. The novelties of this study were the development of the RBFN, ANFIS, and LSTM models to diagnose LOFW transient signatures, the comparative evaluation of the prediction capabilities of the ANFIS, RBFN, and LSTM algorithms on various critical parameters affected by the LOFW event, and the determination of the most suitable algorithms for estimating the critical parameters during the LOFW transient event. For future work, the authors will develop an intelligent operator support system based on the ANFIS, LSTM, and RBFN algorithms.

Data Availability
The simulation data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.