A Fuzzy Clustering Logic Life Loss Risk Evaluation Model for Dam-Break Floods

State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Hohai University, Nanjing, China
National Engineering Research Center of Water Resources Efficient Utilization and Engineering Safety, Hohai University, Nanjing, China
College of Water Conservancy and Hydropower Engineering, Hohai University, Nanjing, China
Changjiang Institute of Survey, Planning, Design and Research, Wuhan, China
National Dam Safety Research Center, Wuhan, China
Materials Science and Engineering, University of California-Los Angeles, Los Angeles, CA, USA


Introduction
Dam-break floods can pose great threats to people's lives and properties. In 1889, the South Fork Dam in Pennsylvania failed, killing more than 2,200 people in the Johnstown flood [1-3]. In 2017, the spillway of the Oroville Dam in the USA was damaged, and about 188,000 people were ordered to evacuate the area [4]. Consequently, dam safety risk analysis had attracted the attention of researchers by the late 20th century. Beyond the safety of the dam structure and construction itself, dam safety risk analysis also evaluates the impact of dam failure on downstream areas, which responds to people's increasing consciousness of self-protection and self-rescue [5-8].
A comprehensive consequence assessment is an indispensable part of dam risk analysis. As a complex assessment system, it involves multiple factors such as loss of life (LOL), economic loss, and social and environmental impacts [9,10].
Life loss due to dam failure refers to the fatalities caused by floods in the downstream areas after the dam fails.
The two most direct indicators of LOL are the number of fatalities and the mortality rate of the population at risk downstream of the failed dam. The life loss evaluation model and the outburst flood model [11,12] are the main focuses of research on life safety in dam failures.
Brown and Graham [13] used mathematical statistics to analyze the data of historical dam-break life losses in the United States and other countries and established a simple empirical estimation formula for dam-break life loss (Brown & Graham method). Assaf et al. [14] of BC Hydro, Canada, building on earlier empirical statistics and regression analyses, applied dam-break simulation techniques and probability theory to estimate the life loss of dam breaks (Assaf method). DeKay of the University of Colorado and McClelland of the United States Bureau of Reclamation (USBR) [15] proposed an empirical estimation formula (DeKay & McClelland method) to describe the nonlinear relationship between life loss and the population at risk, considering differences in the severity of dam-break floods [16]. Graham [17] suggested that the life loss of dam breaks should be estimated based on the severity of dam-break floods and built a mortality rate table for dam-break risk with regard to the release time of flood forecasts (Graham method). McClelland and Bowles [18] proposed a method to estimate the life loss based on the division of flooded areas and risk analysis downstream of the dam break (McClelland & Bowles method). Reiter [19] in Finland derived a simplified life loss estimation method (RESCDAM method) from Graham's method. The United States Army Corps of Engineers (USACE), USBR, and the Australian National Committee on Large Dams (ANCOLD) also supported the construction of LIFESim and other life loss calculation models [20]. Based on the above studies, Jonkman et al. [21,22] constructed life loss and economic loss models for low-probability high-loss accidents by analyzing the key parameters of accidents and the loss rates of damaged areas.
Fan and Jiang [23] proposed a quantitative calculation method for the flood control risk rate according to the logical process of flood control project overtopping failure and introduced empirical formulas to predict fatalities. Based on an analysis of the Graham and RESCDAM methods, Zhou et al. [24] analyzed the loss rates and correction coefficients under various working conditions using the data of eight dam breaks and proposed a corresponding empirical model (Li-Zhou method). Song and He [25] studied estimation methods for individual life loss and societal life loss, analyzed the corresponding uncertainty factors, and improved the traditional estimation methods. Sun et al. [26] used the Graham method to calculate the loss of life in dam-break flood simulations based on Monte Carlo and Latin hypercube sampling. Peng and Zhang [27,28] built a life loss analysis model based on Bayesian networks by analyzing flood paths and other uncertain factors. Ge [29] proposed a fast evaluation method for potential consequences based on catastrophe theory.
In addition, many scholars have adopted methods from fuzzy mathematics. Wang et al. [30] established a life loss model for dam breaks based on fuzzy matter elements; Wang et al. [31] introduced indirect influencing factors of dam-break life loss and built a prediction model of life loss; Wang et al. [32] introduced the fuzzy clustering iterative model into the dam-break life loss model; Li et al. [33] established a risk grade evaluation model for dam-break life loss in China based on the theory of variable fuzzy sets. These fuzzy-mathematics-based models are independent of simulation methods and determine the weighting matrix of dam-break life loss factors using either the expert-assigned AHP method or the adjacent fuzzy scale method [34,35]. Li [36] built a comprehensive dam-break evaluation function that uses linear weighting to integrate the life, economic, social, and environmental consequences of dam failures.

Influential Factors of Life Loss
According to the degree of influence, the influential factors of dam-break life loss of the population at risk can be divided into direct and indirect categories, as shown in Figure 1. Direct factors include the warning time (WT), flood severity (FS), rescue capability (RC), dam-break time (BT), and the understanding degree of the flood severity by the population at risk (UD). Indirect factors include the temperature at the time of dam break, the distance between the population at risk and the dam site (L), and the downstream terrain conditions [37,38], as shown in Figure 2. The influential factors are analyzed below.

Direct Factors
Flood Severity FS.
The flood severity is a parameter that indicates the severity of damage to downstream lives and buildings. The severity of a dam-break flood is influenced by the dam type, water discharge, reservoir capacity, and downstream topographic and geomorphic conditions. In general, flood severity can be defined as the flood flow per unit section width, FS = Q/W = DV, where Q is the flood discharge through the cross section (m^3/s), W is the width of the section (m), D is the depth of the flood (m), and V is the flow velocity (m/s). The flood severity and the life loss rate are positively correlated, so the membership function follows an ascending half-ridge distribution [39]. From literature studies, when FS < 0.1 m^2/s, the flood will not destroy buildings, so μ(FS) = 0; when FS > 11.0 m^2/s, the flood will destroy all the buildings in downstream areas, so μ(FS) = 1. Therefore, a_1 = 0.1 and a_2 = 11.0.
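The ascending half-ridge membership function can be sketched as follows. The thresholds a_1 = 0.1 and a_2 = 11.0 come from the text; the sine-type transition between them is one common form of the half-ridge distribution and is an assumption here, since the exact expression is not reproduced in the text.

```python
import math

def mu_ascending_half_ridge(x, a1, a2):
    """Ascending half-ridge membership: 0 below a1, 1 above a2, with a
    smooth sine transition in between (a common form; assumed here)."""
    if x <= a1:
        return 0.0
    if x >= a2:
        return 1.0
    mid = (a1 + a2) / 2.0
    return 0.5 + 0.5 * math.sin(math.pi / (a2 - a1) * (x - mid))

# Thresholds for flood severity FS from the text: a1 = 0.1, a2 = 11.0 (m^2/s)
print(mu_ascending_half_ridge(0.05, 0.1, 11.0))  # below a1: no buildings destroyed
print(mu_ascending_half_ridge(12.0, 0.1, 11.0))  # above a2: all buildings destroyed
```

The same shape, with a_1 = 0.25 and a_2 = 1 h, applies to the warning-time membership discussed below.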

Rescue Capability RC.
The rescue capability RC considers the evacuation conditions of the disaster area, the rescue ability of the emergency department, the self-rescue ability of the individuals affected by the disaster, and so forth. The rescue ability is the success rate obtained by people who use all possible means to escape, and the rescue consequence is the survival rate of the population at risk after the disaster. The membership function of the emergency rescue ability is expressed accordingly.

Warning Time WT.
From the studies of BRUS [40], when WT is less than 0.25 h, the alarm can be considered invalid, since the people at risk cannot evacuate in time; when WT is longer than 1 h, it can be considered a state of "full warning," since it provides enough time for people to evacuate; when WT is between 0.25 h and 1 h, the state is considered "partial warning." Therefore, a_1 = 0.25 and a_2 = 1.

Dam-Break Time BT.
The dam-break time has a great impact on the life loss. For example, when BT falls in the middle of the night, such as 3 a.m., the probability that downstream residents receive the flood warning and evacuate in time is low. Consequently, BT influences the effectiveness of warning message transmission. In the same flooded area, different dam-break times will produce different life loss distributions. Regarding the daily segments, BT can be divided into working time (7:00-17:00), rest time (17:00-22:00), and sleep time (22:00-7:00), with a corresponding membership function. Regarding the seasons, BT can be divided into spring (March-May), summer (June-August), autumn (September-November), and winter (December-February), with its corresponding membership function. The membership function of the dam-break time BT is then obtained by a fuzzy operation on the two.
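The combination of the daily-period and seasonal memberships can be sketched as follows. The membership values and the combining operation (an algebraic product) are illustrative assumptions, since the text does not reproduce the membership functions or the fuzzy operation.

```python
def mu_daily(hour):
    """Illustrative daily-period membership for the dam-break time BT
    (values are assumptions, not the paper's): sleep time is riskiest."""
    if 7 <= hour < 17:       # working time
        return 0.3
    if 17 <= hour < 22:      # rest time
        return 0.6
    return 1.0               # sleep time (22:00-7:00)

def mu_season(month):
    """Illustrative seasonal membership (assumed values)."""
    if month in (6, 7, 8):               # summer: main flood season
        return 1.0
    if month in (3, 4, 5, 9, 10, 11):    # spring and autumn
        return 0.6
    return 0.3                           # winter

def mu_bt(hour, month):
    """Combine the two factors; the paper's fuzzy operation is not
    reproduced, so a simple algebraic product is used here."""
    return mu_daily(hour) * mu_season(month)

print(mu_bt(3, 7))   # a 3 a.m. dam break in July: highest-risk combination
```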

Indirect Factors

Temperature at the Dam-Break Time.
The temperature at the dam-break time affects the life loss of the population at risk in a dam-break accident. Freezing conditions can hinder the evacuation process, while extremely hot conditions may lead to outbreaks of disease after the evacuation. Temperature varies with geographical conditions, climate, season, and so forth.

Distance between Population at Risk and the Dam Site (L).
L refers to the distance between the area where the population at risk is located and the damaged dam. The farther away from the dam, the longer the time available for people to receive the warning and prepare to escape, so the life loss rate decreases with increasing L. Conversely, a smaller L results in a higher life loss rate.

Characteristics of the Dam.
Dam characteristics include the dam type, dam height, storage capacity, service life, dam materials, and other structural parameters.

Terrain Conditions around the Downstream Areas.
When downstream areas have high terrain or some geographic/man-made barriers, these terrain conditions raise the chance of a successful evacuation and thus dramatically reduce the life loss rate.

Risk Evaluation Model of Life Loss Caused by Dam-Break Floods
To evaluate the dam-break life loss risk, various influential factors are selected to classify and similarity-rank the target dam against known dam-break cases, using a fuzzy clustering iteration model optimized by adaptive differential evolution (ADE): the three known dams sharing the most parameters with the target dam are selected. The exponential smoothing method is then used to evaluate the risk of the target dam's life loss rate.

Similarity Assessment of Life Loss.
In this paper, a fuzzy clustering iterative model optimized by the ADE method is used to evaluate the similarity of life loss in dam accidents in China. The following parts study the application of the fuzzy clustering iterative model, the adaptive differential evolution algorithm, and their fusion to the similarity evaluation of life loss in dam accidents.

Fuzzy Clustering Iterative Model.
The fuzzy clustering model (FCM) is a fuzzy clustering method based on traditional clustering cognition and the concept of fuzzy sets. In this method, each object has a membership degree with respect to each class. The objects (u_i) in the clustering object set (U) are assigned to a class according to certain characteristics with a certain membership degree. The core idea of the fuzzy clustering iterative method is to cluster the samples in the sample space around several clustering centers and to identify the category of each sample from the relative membership degrees in the generated fuzzy clustering matrix. The fuzzy clustering iterative model can be used to rank the similarity of life loss samples from multiple dam accidents. Suppose that the n samples of life loss in dam accidents form the matrix X = (x_ij)_{n×m}.
In the formula, x_ij represents the eigenvalue of the j-th index of the i-th sample, i = 1, 2, ..., n and j = 1, 2, ..., m, where n and m are the numbers of samples and indicators, respectively. Because the indexes have different data dimensions, the eigenvalues in the matrix X need to be normalized:

r_ij = (x_ij − x_min(j)) / (x_max(j) − x_min(j)),   (9)

where x_max(j) and x_min(j) are the maximum and minimum values of index j, respectively, and r_ij is the normalized value of the j-th index of the i-th sample.
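The min-max normalization of equation (9) can be sketched column by column (one column per index); the sample values below are illustrative.

```python
import numpy as np

def normalize(X):
    """Min-max normalize each index (column) j of the sample matrix X:
    r_ij = (x_ij - x_min(j)) / (x_max(j) - x_min(j))."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

# three samples, two indexes with very different scales
X = [[0.2, 5.0],
     [0.6, 15.0],
     [1.0, 25.0]]
print(normalize(X))  # every column now spans [0, 1]
```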
According to equation (9), the normalized matrix R = (r_ij)_{n×m} is obtained. For the n life loss samples with m indexes, clustering analysis is conducted for c categories, and the fuzzy clustering membership matrix U = (u_hi)_{c×n} is obtained, subject to

Σ_{h=1}^{c} u_hi = 1,  0 ≤ u_hi ≤ 1,  Σ_{i=1}^{n} u_hi ≥ 1.

In the formula, u_hi is the relative membership degree of the i-th life loss sample to the h-th category, i = 1, 2, ..., n and h = 1, 2, ..., c; n is the number of samples, and c is the number of categories.
Let the fuzzy clustering centers of the c categories be S = (s_jh)_{m×c}, where s_jh is the relative membership degree of the j-th indicator for the h-th category. In this paper, the generalized Euclidean distance is used to represent the difference between the i-th sample and the h-th category:

d(r_i, s_h) = [Σ_{j=1}^{m} (r_ij − s_jh)^2]^{1/2},   (13)

where r_i = (r_i1, r_i2, ..., r_im)^T is the eigenvalue vector of the i-th sample and s_h = (s_1h, s_2h, ..., s_mh)^T is the clustering center vector of category h. Since the evaluation indexes are of different importance in clustering, index weights are introduced, and equation (13) is extended to the weighted generalized Euclidean distance

D(r_i − s_h) = [Σ_{j=1}^{m} (ω_j (r_ij − s_jh))^2]^{1/2},   (14)

where ω_j is the weight of the j-th evaluation index. The objective function is established to solve the optimal fuzzy clustering matrix, fuzzy clustering center matrix, and weights; the goal is to minimize the sum of squared weighted generalized Euclidean distances of all life loss samples over all categories:

min F(ω_j, s_jh, u_hi) = Σ_{i=1}^{n} Σ_{h=1}^{c} u_hi^2 D^2(r_i − s_h).   (15)

By constructing the Lagrangian function of this objective, the iterative formulas of the fuzzy clustering cycle for u_hi, s_jh, and ω_j are obtained. After iterative calculation, the optimal fuzzy membership matrix, the optimal fuzzy clustering centers, and the optimal weight vector are obtained.
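The iterative cycle can be sketched as follows. The update formulas for the memberships, centers, and weights follow the standard fuzzy clustering iteration model and are assumptions here, since the paper's iterative equations are not reproduced; parameter names are ours.

```python
import numpy as np

def fuzzy_cluster_iterate(R, c, n_iter=100, eps=1e-6, seed=0):
    """Alternate updates of memberships u_hi, centers s_jh, and index
    weights w_j until the objective of equation (15) stabilizes."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    S = rng.random((c, m))            # cluster centers s_jh
    w = np.full(m, 1.0 / m)           # index weights, summing to 1
    prev = np.inf
    for _ in range(n_iter):
        # squared weighted generalized Euclidean distances D^2(r_i, s_h)
        D2 = ((w * (R[:, None, :] - S[None, :, :])) ** 2).sum(axis=2)
        D2 = np.maximum(D2, 1e-12)
        U = (1.0 / D2) / (1.0 / D2).sum(axis=1, keepdims=True)  # memberships
        U2 = U ** 2
        S = (U2.T @ R) / U2.sum(axis=0)[:, None]                # new centers
        err = (U2[:, :, None] * (R[:, None, :] - S[None, :, :]) ** 2).sum(axis=(0, 1))
        w = 1.0 / np.maximum(err, 1e-12)
        w /= w.sum()                                            # new weights
        obj = (U2 * D2).sum()          # objective of equation (15)
        if abs(prev - obj) < eps:
            break
        prev = obj
    return U, S, w
```

Each row of `U` sums to 1, matching the constraint on u_hi above.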

Adaptive Differential Evolution (ADE).
(1) Differential Evolution Algorithm

The differential evolution (DE) algorithm is an optimization algorithm based on the greedy idea.

Table 1: Degree of understanding.

Level | Degree | Description
I | Fuzzy | When people receive the flood warning, they cannot correctly understand the severity of the flood; they fail to thoroughly understand and properly estimate the downstream flood propagation extent and its resulting damage, are unable to estimate the effective escape time before the flood arrives, fail to respond properly to escape successfully, and fail to determine possible escape routes and measures
II | Half-fuzzy | People have a vague understanding of the flood severity by the time they receive the warning messages; they are vaguely aware of the damage the flood will do to the downstream drainage systems but cannot accurately estimate its extent, and they know they should take timely measures to escape and choose an escape route but cannot manage to choose the best path and the best measures

Another important characteristic of the DE algorithm is its mutation process. In the DE algorithm, the mutation values are not generated randomly; instead, they evolve from the differences among randomly selected individuals of the population in the feasible solution space. The global search ability of the algorithm is ensured by a search strategy that exchanges information among individuals. The DE algorithm, with its simple structure, few control parameters, and small population size, is well suited to computer implementation. It proved to be the fastest evolutionary algorithm among the participants in the first IEEE evolutionary algorithm competition, and its convergence speed and stability were also affirmed. The DE algorithm comprises random initialization, mutation, crossover, and selection. Its optimization process is realized through a population containing several individuals. This population is initialized to determine the individuals within it; each individual is a feasible solution, and the population of feasible solutions constitutes the feasible solution space of the optimization problem.
The main steps of the differential evolution method are as follows.
(1) Population Initialization
Suppose that the population size of differential evolution is N_P and the initial population is obtained by a random method. An individual in the population can be expressed as x_i^G = (x_i1^G, x_i2^G, ..., x_iD^G), where G represents the current evolutionary generation, i = 1, 2, ..., N_P is the index of the individual within the population, and D is the number of genes contained in each individual. Each component x_ij^0 of the initial population is first initialized as

x_ij^0 = L(j) + rand(0, 1) · (U(j) − L(j)),   (18)

where U(j) and L(j) represent the maximum and minimum values of the j-th component, respectively, which ensures that the initial component does not exceed its limits.
(2) Mutation
The mutation operation of the differential evolution method differs from that of other genetic algorithms: a random disturbance is superimposed on individual components, and this disturbance is based on two randomly selected individuals from the set of noninferior solutions. The scaled difference between the two random individuals is added to a third to obtain the mutation vector v_i^G:

v_i^G = x_r1^G + F · (x_r2^G − x_r3^G).   (19)
In the formula, v_i^G is the mutation vector, x_r1^G is an individual in the current population, and r1, r2, r3 are distinct random integers in [1, N_P] different from i. G represents the evolutionary generation, and F is the mutation factor, taken in [0, 2]. The mutation factor F controls the degree of mutation of the random parameter vector: a larger F means stronger global search ability, while a smaller F indicates stronger local search ability.
(3) Crossover
The crossover operation simulates the propagation of mutations during evolution, generating regular random disturbances in the population. To enrich the diversity of the population, individual information within the population is exchanged, and the mutant is combined with the parent according to the following formula to generate the trial individual u_ij^G:

u_ij^G = v_ij^G if Rand(j) ≤ CR or j = Rnb(i), and u_ij^G = x_ij^G otherwise,   (20)

where i is the index of the individual, j is the gene index, Rand(j) ∈ [0, 1] is a uniformly distributed random quantity, Rnb(i) ∈ [1, D] is a random integer, and CR ∈ [0, 1] is the crossover factor. The advantage of the crossover operation in formula (20) is that the variability of the trial individual u_ij^G is guaranteed: no matter how the random quantity falls, at least one mutated gene v_ij^G is retained, so the effectiveness of evolution is maintained.
(4) Selection
The selection operation simulates the survival of the fittest in the evolution process: parent individuals and trial individuals are compared, and the better ones enter the next generation.
This selection process can be called a "greedy" selection strategy: every individual in the population is filtered by the fitness function. When the trial individual u_i^G scores better under the fitness function f(·) than the current individual x_i^G, u_i^G replaces x_i^G in the offspring; otherwise, the current individual is kept. The fitness function f(·) represents the target problem to be optimized. If the goal of the optimization is to take the minimum value, the selection operation is performed as shown in equation (21):

x_i^{G+1} = u_i^G if f(u_i^G) ≤ f(x_i^G), and x_i^{G+1} = x_i^G otherwise.   (21)

(2) Construction Process of the ADE Algorithm
The standard DE method has excellent global optimization ability and fast convergence, with mutation and crossover as the main sources of offspring. However, analysis of the DE algorithm's parameters shows that its sensitivity to the mutation factor F and crossover factor CR is very high, much higher than its sensitivity to the mathematical model being optimized. The efficiency, effectiveness, and robustness of the DE algorithm therefore depend to a great extent on these evolutionary parameters. The following describes the structure and flow of the adaptive differential evolution algorithm.

(1) Construction of the ADE Algorithm
In the standard DE method, the difference vector is related to the speed of population convergence, and the mutation factor F controls the modulus of the difference vector. At the early stage of evolution, the individuals are relatively dispersed through random initialization; the step size of the difference vector is then relatively large and the evolution speed is fast. As the algorithm runs, each individual in the population constantly evolves towards the direction of higher fitness, and the individuals gather together. The difference vector step size in the mutation process then becomes small, the degree of population mutation is greatly reduced, and the population begins to homogenize. Consequently, the fitness values of the offspring cannot exceed those of the parent generation, the rate of population evolution drops sharply, and the search may be trapped in a locally optimal solution. On the other hand, when the mutation factor F is too large, the population evolution oscillates heavily, and the optimal solution cannot be found for a long time.
When the mutation factor F is too small, the search step size decreases, which easily leads to homogenization of the individuals in the population. Clearly, a constant mutation factor is not an effective strategy for the optimization algorithm. Therefore, for the mutation factor F, this paper adopts an adaptive mutation factor operator that changes with the evolutionary generation:

F = F_0 · 2^λ,  λ = e^{1 − G_max/(G_max + 1 − G)},   (22)

where F_0 is the initial mutation factor, G_max is the maximum evolutionary generation, and G is the current evolutionary generation. Formula (22) shows that the mutation factor F is negatively correlated with the evolutionary generation: in the early stage of operation the global search ability is high and the diversity of the population is preserved, while in the later stage the local search ability and search accuracy improve and the operation accelerates. The crossover factor CR is likewise an essential parameter that strongly affects the optimization of the population, because CR coordinates the local search speed and the global optimization ability. When CR is larger, the mutation vector contributes more to the trial individuals, which helps to improve population diversity; when CR is smaller, more of the trial individual comes from the parent, which narrows the search range and improves local optimization accuracy. CR determines the degree of mutation entering the crossover operation, so the adaptive search strategy for CR is

CR = CR_0 + (1 − CR_0) · (G/G_max),   (23)

where CR_0 is the initial crossover factor and G is the evolutionary generation. The crossover factor CR is positively correlated with the evolutionary generation.
That is, as the generations accumulate, the population diversity gradually decreases, but the participation of the mutant individuals gradually increases, so the mutation rate does not slow down. When the maximum evolutionary generation is reached, the convergence rate also reaches its peak.
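The adaptive operators can be sketched as follows. The exponential decay for F is a widely used form, and the linear ramp for CR is an assumption; both are illustrative where the text does not reproduce the exact expressions.

```python
import math

def adaptive_factors(G, G_max, F0=0.45, CR0=0.45):
    """Adaptive operators: F shrinks with the generation G (strong
    global search early, fine local search late), while CR grows
    with G (more parent material early, more mutant material late)."""
    lam = math.exp(1.0 - G_max / (G_max + 1.0 - G))
    F = F0 * 2.0 ** lam                    # 2*F0 at G = 1, tending to F0
    CR = CR0 + (1.0 - CR0) * G / G_max     # CR0 at the start, 1 at G_max
    return F, CR

F_start, CR_start = adaptive_factors(1, 10000)
F_end, CR_end = adaptive_factors(10000, 10000)
print(F_start, F_end)   # F decreases across the run; CR increases
```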
(2) Adaptive Differential Evolution Algorithm Steps
Step 1. Initialization: initialize according to equation (18). Set the spatial dimension D, population size N_P, initial crossover factor CR_0, initial mutation factor F_0, and initial evolutionary generation G, and randomly initialize within the value ranges.
Step 2. Generate the operation factor: equation (22) is applied to calculate the parameters.
Step 3. Evolution operation: use equations (19), (20), and (21) to carry out the mutation operation, cross operation, and selection, respectively, to complete the evolution of the present generation.
Step 4. Individual evaluation: the fitness is calculated for each individual in the population of the current generation. The best fitness is the best value of this generation.
Step 5. Termination judgment: judge the optimal value of this generation. If the fitness function value meets the accuracy requirement or the number of iterations reaches the upper limit, the operation terminates; otherwise, the evolution counter increases by one and iteration continues from Step 2.
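The mutation, crossover, and selection operations of equations (19)-(21) can be sketched as a single DE generation. The function name, the sphere test fitness, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def de_generation(pop, fitness, F=0.5, CR=0.9, rng=None):
    """One generation of standard DE: mutation v = x_r1 + F*(x_r2 - x_r3),
    binomial crossover with one guaranteed mutant gene, then greedy
    (minimizing) selection."""
    rng = rng if rng is not None else np.random.default_rng(0)
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # three distinct random individuals, all different from i
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])     # mutation, eq. (19)
        j_rand = rng.integers(D)                  # guaranteed crossover gene
        mask = rng.random(D) < CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])             # trial individual, eq. (20)
        if fitness(u) <= fitness(pop[i]):         # greedy selection, eq. (21)
            new_pop[i] = u
    return new_pop

# toy run: minimize the sphere function over 30 generations
rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, (10, 3))
sphere = lambda x: float((x ** 2).sum())
for _ in range(30):
    pop = de_generation(pop, sphere, rng=rng)
```

Because selection is greedy, the best fitness in the population never worsens from one generation to the next.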

Similarity Evaluation Process.
In the following, the ADE method is introduced into the traditional fuzzy clustering iterative model (FCM) to analyze, identify, and rank multiple life loss samples caused by dam-break floods, so that ADE optimizes the clustering efficiency of the traditional FCM model and an efficient, concise ADE-FCM similarity evaluation method for dam-failure life loss is constructed.
(1) Search Variable Encoding
To fuse the ADE algorithm with the fuzzy clustering algorithm, the primary task is to determine the encoding of the optimization object and the search variables. Since ADE is a real-argument evolutionary algorithm, the search variables must be encoded with real numbers. The purpose of fuzzy clustering iteration is to find the optimal clustering centers, the fuzzy membership matrix, and the optimal weight matrix; the chosen optimization object must allow both optimization and classification through the clustering centers. Therefore, the clustering center matrix (s_jh)_{m×c} is selected as the optimization object and coded as a population individual, which means that there are m × c optimization variables that need to be coded. The ADE vector can be expressed as

s_j = (s_{j,1}, s_{j,2}, ..., s_{j,c(m−1)+1}, s_{j,c(m−1)+2}, ..., s_{j,cm}).   (24)

In other words, the m × c variables are divided into c groups; each group represents a clustering center with m indicators. The vectors are numbered and the calculation starts.
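The flattening of the m × c center matrix into one real-valued search vector can be sketched as follows; the values of m, c, and the centers are illustrative.

```python
import numpy as np

# Encoding sketch for equation (24): the m x c cluster-center matrix
# (s_jh) is flattened into one real-valued DE individual and decoded
# again before the fitness is evaluated.
m, c = 6, 3
S = np.arange(m * c, dtype=float).reshape(m, c)   # centers s_jh

individual = S.flatten()              # length m*c search vector for ADE
S_decoded = individual.reshape(m, c)  # recover the centers for clustering

print(individual.shape)
```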
(2) Determine the Fitness Function
The objective function of the FCM iteration is a generalized Euclidean function, so it is used as the fitness function. The goal is to minimize the fitness value (the weighted Euclidean distance), which yields the optimal partition of the data set. The process of risk evaluation of life loss caused by a dam-break flood with the ADE-FCM method is as follows:
Step 1. Initialize the FCM parameters: initialize the cluster centers s_jh, set the stopping precision ε, set a large constant θ, and let the objective function F(ω_j, s_jh, u_hi) = θ.
Step 2. Initialize the ADE parameters: let the initial evolutionary generation G = 1, and set the population size N_P, the maximum evolutionary generation G_max, the initial crossover factor CR_0, and the initial mutation factor F_0.
Step 3. Optimize the clustering centers: use the adaptive differential optimization method to carry out mutation, crossover, and selection, and select the optimal clustering centers of this generation.
Step 4. Calculate the fuzzy membership matrix u_hi^G of this generation.
Step 5. Calculate the weights ω_j^G of this generation.
Step 6. Calculate the fitness function f^G of this generation.
Step 7. Compare the fitness value f^G of the current generation with f^{G−1} of the previous generation. If |f^G − f^{G−1}| < ε or G > G_max, the operation stops; otherwise, go to Step 3 and continue iterating.

Estimation of Life Loss Rate.
After the similarity evaluation of the life loss, the exponential smoothing method is used to estimate the sample to be evaluated. The principle of the exponential smoothing method is to select the known samples most similar to the sample to be estimated, giving more weight to those with higher similarity and less weight to those with lower similarity. The calculation result reflects the comprehensive influence of the similar samples and is therefore closer to the actual situation. The exponential smoothing formula fitting the sample to be evaluated is

N = αN_1 + α(1 − α)N_2 + (1 − α)^2 N_3,   (26)

where N is the estimated value of the sample to be estimated, N_1, N_2, N_3 are the three known samples most similar to the sample to be estimated (in decreasing order of similarity), α ∈ [0, 1] is the smoothing coefficient, and α is related to the degree of stationarity of the known values. The data of the three dams most similar to the case to be estimated are substituted into equation (26) to estimate the risk of life loss due to the dam-break flood. The calculation process is shown in Figure 3.
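The three-sample combination of equation (26) can be sketched as follows. The weights α, α(1 − α), and (1 − α)^2, which sum exactly to 1, are a standard three-term exponential smoothing form and are assumed here where the text does not reproduce the equation; the sample values and α are illustrative.

```python
def smooth_estimate(N1, N2, N3, alpha=0.7):
    """Exponential-smoothing estimate from the three most similar known
    samples, ordered from most to least similar. The weights
    alpha, alpha*(1-alpha), (1-alpha)**2 sum to 1."""
    w1 = alpha
    w2 = alpha * (1.0 - alpha)
    w3 = (1.0 - alpha) ** 2
    return w1 * N1 + w2 * N2 + w3 * N3

# e.g. mortality rates of the three most similar dam-break cases
print(smooth_estimate(0.02, 0.05, 0.10))
```

A larger α puts more trust in the single most similar case; a smaller α spreads the estimate across all three.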

Case Study
To investigate the situation of dam breaches in China [41], the case study selects the historical life loss data of eight dams (16 sets of data in total) as the research objects; see Table 2. The locations of the eight dams are shown in Figure 4. Six influential factors of the population mortality risk f, namely BT, FS, RC, WT, UD, and L, are set as the characteristic values. One set of samples is selected for estimation using cross validation, while the remaining samples serve as known references. The estimated results are compared with the field data, and the feasibility of the estimation method is thereby verified.

Membership Degree Calculation and Normalization Processing.
According to the analysis of life loss factors in Section 2, the membership degree of each index is calculated, and the membership degree matrix is obtained as in equation (30). In that matrix, I_1, I_2, ..., I_16 are the 16 samples, one of which is set as the sample to be evaluated while the other 15 are known samples; J_1, J_2, ..., J_6 are the 6 factors affecting the severity of life loss caused by a dam-break flood, namely the dam-break time BT, flood severity FS, rescue capability RC, warning time WT, understanding degree of the flood severity by the population at risk UD, and the distance between the population at risk and the dam site L. Among these six factors, only UD and L did not undergo membership degree calculation, while the others had membership degrees between 0 and 1. Therefore, the values of UD and L needed to be normalized. Matrix (30) was normalized according to equation (9), yielding its normalized matrix.

Parameter Settings.
Firstly, the parameters of the fuzzy clustering algorithm were set: the number of samples was n = 16, the index dimension was m = 6, and the number of categories was c = 3. Given the low dimensionality of the data in this clustering problem, the parameters of the ADE part were set as follows: population size N_P = 10, maximum evolutionary generation G_max = 10000, initial crossover factor CR_0 = 0.45, and initial mutation factor F_0 = 0.45.

Similarity Evaluation.
After setting the parameters, the optimal fuzzy clustering centers (s_jh), the optimal fuzzy membership matrix (u_hi), and the index weights (ω_j) were obtained. In this paper, a MATLAB program was used to compute the following results:

According to the level characteristic values of variable fuzzy set theory, the category to which each sample belongs was calculated.
According to the fuzzy membership matrix, the samples were sorted by similarity to the sample to be evaluated, and Figure 5 was obtained.

Estimation of Life Loss Rate.
According to equation (26), the 16 known samples were each estimated with the exponential smoothing method, and the differences between the estimated results and the actual field data were calculated. The calculation results of this paper were evaluated by comparison with Fuzzy Matter-Element Analysis [11], as shown in Table 3.
From the comparisons shown in Figures 6 and 7, the calculation accuracy of the method presented in this paper is clearly better. The relative errors between the estimated values and the actual values lie within [0, 0.37], and most are within [0, 0.01].
Therefore, the life loss estimation method based on ADE-FCM similarity evaluation is feasible and accurate for estimating the life loss rate caused by dam-break floods. In theory, this results from the following: (1) By optimizing the fuzzy clustering center through the ADE method, a better fuzzy clustering center is obtained, so the similarity evaluation is more scientific and reasonable, and the relationship between the known samples and the sample to be estimated is grasped more accurately.
(2) Through the ADE-FCM model, weights based on objective facts are obtained. The weight analysis shows that the weights for j = 2, 3, 4 (flood severity, rescue capability, and warning time) are larger, which is consistent with the weight results calculated by the adjacent fuzzy scale method in the literature [28]. The life loss estimation method proposed in this paper is therefore scientific and feasible.

Summary and Conclusion
This study analyzes the influential factors of dam-break flood life loss through the ADE-FCM method. The similarity of eight dam-break cases and their life loss data is analyzed, the life loss at the target dam is estimated with the exponential smoothing method, and the results are compared with the actual values. The main research contents are as follows:
(1) The direct and indirect factors affecting the risk of life loss due to dam failure are analyzed, and the characteristics, distribution rules, and membership functions of the factors are studied. Based on the research on the fuzzy clustering iterative method, the FCM method is introduced into the similarity evaluation of life loss caused by dam-break floods.
(2) An adaptive differential evolution method is constructed by improving the mutation and crossover factors of the differential evolution method. By fusing ADE and FCM, a fuzzy clustering iterative method based on adaptive differential evolution (ADE-FCM) is established, together with a similarity evaluation method for life loss caused by dam-break floods.
(3) The principle of the exponential smoothing method is studied. On the basis of the similarity evaluation, the method is used to estimate the target sample, and the results are compared with the actual data. In the case studies, ADE-FCM is used to rank the similarity of life loss in dam accidents, and the mortality of the population at risk at the target dam is estimated with the exponential smoothing method.

Data Availability
The raw data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.