A Closed-Loop Method for Multiperiod Intelligent Information Processing with Cost Constraints under the Fuzzy Environment

From trivial matters in daily life to major scientific projects concerning the fate of mankind, decision-making is everywhere. Whether high-quality decisions can be made often directly affects how events develop, especially when sudden disasters occur. As the basis of decision-making, data are crucial. After careful comparison, the continuously probabilistic linguistic set, a data structure from fuzzy mathematics, is selected in this paper to collect the original data, because it can fully capture the hesitation of decision-makers and the fuzziness of complex problems. Although all alternatives are costly, the costs of different alternatives still vary greatly; obviously, a low-cost alternative is preferable when the same predetermined goal can be achieved, which is one of the research objectives and characteristics of this paper. Unlike other researchers who treat the cost merely as one of the decision-making indicators, the algorithm proposed in this paper pays much more attention to cost reduction. When dealing with an emergency, it is often impossible to solve the problem by taking measures only once; usually, multiple rounds of measures are needed. Each round of decision-making has both connections with and differences from the others, so a multiround decision-making model is proposed and built in this paper. Unlike traditional linear structures, the model mainly adopts a closed-loop structure, which divides the whole process into multiple sub-decision-making points: the severities measured at the current time point are compared with the values estimated at the previous time point, the differences are fed back into the system, and the corresponding automatic adjustment modules are activated immediately according to these values. The accuracy of the system can thus be verified and adjusted in time by the closed-loop control module.
Finally, several experiments are carried out, and the results show that the algorithm proposed in the paper is more effective and less costly.


Introduction
People are always faced with all kinds of decision-making problems; how to make an appropriate decision in time is a scientific problem and has become one of the research hotspots in academic circles.
There are several different descriptions of the definition of decision-making. Simon believes that decision-making is essentially management [1]; Mikesell and Griffin, management professors, point out that decision-making is a process in which an appropriate alternative is selected from multiple alternatives [2]; the American scholars Ebers and Maurer believe that decision-making should also include all the activities that must be carried out before making the final decision [3]. Generally speaking, decision-making is regarded as the process in which individuals or groups make appropriate decisions for specific goals.
Decision-making problems can be roughly divided into three categories from the perspective of known conditions: (1) the deterministic decision-making problem, which has clear alternatives and expected results; (2) the risky decision-making problem, in which the predetermined goal is clear, but there are many paths to the goal, each carrying certain risks and uncertainties whose probabilities can fortunately be roughly calculated; (3) the uncertain decision-making problem, which is similar to the risky decision-making problem except that the probabilities can only be estimated, and even worse, there may be certain deviations in the estimated values. The problem studied in this paper belongs to the third category, which has many uncertainties and is the most complex of the three.
Information collection is a basic and key step of decision-making; however, most information provided by interviewees is uncertain, vague, and hard to denote mathematically, so how to scientifically record uncertain information is the first problem to be solved. In 1965, Professor Zadeh put forward the concept of the fuzzy set, which provided a new idea for solving such problems [4]; its main contribution is the concept of the membership degree. Subsequently, the theory was widely recognized, developed rapidly, and was extended into various forms, such as the interval-valued fuzzy set [5], the n-type fuzzy set [6], the intuitionistic fuzzy set [7], the interval intuitionistic fuzzy set [8], the hesitant fuzzy set [9], and the probabilistic linguistic set [10]. The main features of these fuzzy sets can be briefly summarized as follows: the membership degrees are described by interval values in the interval-valued fuzzy set; the membership degrees are represented by sets in the n-type fuzzy set; both the membership degree and the nonmembership degree are considered in the intuitionistic fuzzy set; beyond that, hesitation degrees, denoted by interval values, are included in the interval intuitionistic fuzzy set. The hesitations of decision-makers can be described in the hesitant fuzzy set; in addition, its structure is concise and efficient, and therefore the theory of the hesitant fuzzy set has become one of the research hotspots in recent years. The probabilistic linguistic set is developed on the basis of the hesitant fuzzy set; it attaches occurrence probabilities to the membership degrees, so as to describe them further.
Mathematics is recognized as one of the best analytical tools. In order to use mathematical tools to carry out research, scholars have put forward several basic mathematical concepts for fuzzy sets. Xia and Xu first gave the mathematical definition of the hesitant fuzzy set [11], and Liao and Xu defined some special hesitant fuzzy sets from the perspective of solving practical problems [12], such as the empty set O*, the complete set E*, and the meaningless set Θ*. Unfortunately, fuzzy sets cannot be added, subtracted, multiplied, or divided directly; for this reason, several basic operation methods for fuzzy sets have been proposed. Torra defined the complement, union, and intersection operations for hesitant fuzzy elements [13]. Xu and Xia conducted further research and proposed the addition, multiplication, number multiplication, and power operations for hesitant fuzzy elements [14]; on this basis, Liao and Xu proposed the definitions of subtraction and division [15].
In addition, fuzzy elements cannot be compared directly like real numbers. Therefore, Xia and Xu put forward the concept of the score value, which provides a method for comparing different fuzzy elements; however, when the score values are equal, a further judgment must be made with the help of the variance values [16], which were proposed by Liao et al.
Unfortunately, the basic operation methods mentioned above can only meet simple aggregation requirements and cannot finish the calculation when a large number of fuzzy elements are involved. Therefore, researchers have proposed several effective fuzzy aggregation operators. Xia and Xu proposed the hesitant fuzzy-weighted averaging (HFWA) operator and the hesitant fuzzy hybrid averaging (HFHA) operator in [11], considering the importance of location and data simultaneously. Liao and Xu defined a series of new hesitant fuzzy mixed integration operators and studied their bounds and relationships [17]. Zhu and Xu proposed the hesitant fuzzy Bonferroni average operator and the weighted hesitant fuzzy Bonferroni average operator from the perspective of logical relationships, and studied their monotonicity, commutativity, and boundedness [18].
In particular, owing to its outstanding structure, the theory of the probabilistic hesitant fuzzy set has been developing rapidly. Zhang et al. studied the preference relationships, ranking methods, basic operation rules, and aggregation operators [19]. Hao et al. studied the basic properties of probabilistic dual hesitant fuzzy sets and proposed entropy measurement methods, comparison methods, and aggregation operators [20], such as the weighted average operator and the geometric average operator. On this basis, Garg and Kaur studied the distance measurement methods of probabilistic dual hesitant fuzzy sets [21]. Ye proposed the correlation coefficients of probabilistic hesitant fuzzy sets in the discrete and continuous cases, respectively [22]. Li and Wang proposed the concept of the probabilistic hesitant fuzzy likelihood [23]. These theories have built a solid foundation for the probabilistic hesitant fuzzy theory.
Scholars have also conducted in-depth discussions on decision-making methods. The main idea can be simply summarized as using operators to aggregate estimation data and then ranking the alternatives according to their score values. These methods can be roughly divided into two categories: (1) optimizing the aggregation operators and (2) innovating decision-making methods. For the first category, Jiang and Ma proposed the probabilistic hesitant fuzzy Frank-weighted average operator and the probabilistic hesitant fuzzy Frank-weighted geometric operator, and then discussed the relationships between them [24]. Zhao et al. considered the psychological preferences of decision-makers and proposed the probabilistic hesitant fuzzy Einstein aggregation operator [25]. Shao et al. proposed the probabilistic hesitant fuzzy priority integration operator after considering the internal correlations of indicators [26]. Li et al. proposed a new probabilistic hesitant fuzzy priority aggregation operator, which can make full use of the priority relationships among indicators [27]. For the second category, on the one hand, several methods commonly used in the decision-making field have been extended to the probabilistic hesitant fuzzy environment, such as the TOPSIS method, the QUALIFLEX method, and the LINMAP method; on the other hand, other theories or methods have been introduced into the probabilistic hesitant fuzzy environment, making the theory more diversified. Zhou and Xu introduced several financial concepts into fuzzy sets and then applied the hybrid algorithm to the practice of stock investment decision-making [28]. Tian et al. established a consensus process based on probabilistic hesitant fuzzy preference relationships and the prospect theory, and then applied it to financial venture investment [29]. Wu et al. introduced the GM(1,1) model of grey theory and applied it to coal mine safety production [30]. Guo et al. introduced time series analysis and established a time series prediction model based on hesitant probability fuzzy sets [31]. In this article, we not only optimize the aggregation operators but also innovate the decision-making methods; by comparison, the main work of the paper is to innovate the decision-making methods, and especially, the closed-loop control model is combined with the fuzzy decision-making algorithm.

The Basic Theories
This section briefly introduces some important basic theories, which will be used in the following chapters; this is helpful for other researchers to better understand the algorithm proposed in this paper.

2.1. The Continuously Probabilistic Linguistic Set.
The continuously probabilistic linguistic set is an extended form of the probabilistic linguistic set, which overcomes the disadvantage of the limited number of possible values in the probabilistic linguistic set. The definition of the continuously probabilistic linguistic set (CPLS) can be mathematically described by the following equation:

L = { c_l|p_l : c_l ∈ [0, 1], p_l ∈ [0, 1], l = 1, 2, ..., m, Σ_{l=1}^{m} p_l = 1 }.    (1)

In the above definition, the evaluation value is recorded by the symbol c_l and its corresponding probability by the symbol p_l; the constraint c_l ∈ [0, 1] gives the range of the evaluation values, and the greater the value of c_l, the higher the evaluation acquired from the experts; similarly, the constraint p_l ∈ [0, 1] gives the range of the probability values, and the greater the value of p_l, the greater the occurrence probability of the corresponding evaluation value. The pair c_l|p_l is called a continuously probabilistic linguistic element (CPLE); the constraint l = 1, 2, ..., m indicates the value range of l, where m is the total number of evaluation values in the CPLS; and the constraint Σ_{l=1}^{m} p_l = 1 indicates that the probability values in any CPLS must sum to 1.
Unlike real numbers, CPLSs cannot be directly compared with each other; how to compare CPLSs is a difficult problem for researchers. The score function, first proposed by Farhadinia, handles this problem effectively [32]; the calculation results are real numbers and therefore easy to compare with each other. The definition of the score function can be mathematically described as

S(L) = Σ_{l=1}^{m} c_l p_l.    (2)

Generally, the score value of the CPLS represents the final evaluation result.
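As a concrete illustration, the CPLS structure and the score function of equation (2) can be sketched in Python; the list-of-(value, probability)-pairs representation, the function names, and the example set are our own illustrative choices, not part of the original formulation.

```python
# A CPLS is represented here as a list of (c, p) pairs, where c is an
# evaluation value and p its occurrence probability.

def validate_cpls(L, tol=1e-9):
    """Check the CPLS constraints of equation (1): c_l, p_l in [0, 1]
    and the probabilities sum to 1."""
    assert all(0.0 <= c <= 1.0 and 0.0 <= p <= 1.0 for c, p in L)
    assert abs(sum(p for _, p in L) - 1.0) < tol

def score(L):
    """Score function of equation (2): the probability-weighted sum of
    the evaluation values, S(L) = sum_l c_l * p_l."""
    return sum(c * p for c, p in L)

L_example = [(0.6, 0.3), (0.8, 0.7)]   # hypothetical expert evaluation
validate_cpls(L_example)
print(score(L_example))                # 0.6*0.3 + 0.8*0.7 = 0.74
```

The score value is a plain real number, so the comparisons and algebraic operations used later in the paper apply directly to it.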
It is also necessary to briefly introduce several other commonly used calculation formulas of CPLSs, which are listed as follows:

λL = { 1 − (1 − c_l)^λ | p_l },
L^λ = { c_l^λ | p_l },
L_1 ⊕ L_2 = { c_{l_1} + c_{l_2} − c_{l_1} c_{l_2} | p_{l_1} p_{l_2} },
L_1 ⊗ L_2 = { c_{l_1} c_{l_2} | p_{l_1} p_{l_2} }.

We can find that only one CPLS is involved in the first and the second calculation formulas, while two CPLSs are involved in the third and the fourth; more calculation formulas can be obtained from these four basic ones.
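A minimal Python sketch of these four operations, assuming the standard probabilistic hesitant fuzzy rules (scalar multiple, power, sum, and product) with pairwise combination of the elements of the two operands; the representation as (value, probability) pairs is illustrative.

```python
def scalar_mul(lam, L):
    """lambda * L = {1 - (1 - c)^lambda | p} (unary operation)."""
    return [(1 - (1 - c) ** lam, p) for c, p in L]

def power(L, lam):
    """L^lambda = {c^lambda | p} (unary operation)."""
    return [(c ** lam, p) for c, p in L]

def cpls_add(L1, L2):
    """L1 (+) L2 = {c1 + c2 - c1*c2 | p1*p2} over all element pairs."""
    return [(c1 + c2 - c1 * c2, p1 * p2) for c1, p1 in L1 for c2, p2 in L2]

def cpls_mul(L1, L2):
    """L1 (x) L2 = {c1*c2 | p1*p2} over all element pairs."""
    return [(c1 * c2, p1 * p2) for c1, p1 in L1 for c2, p2 in L2]
```

Note that the binary operations preserve the sum-to-one probability constraint, since the products p1*p2 over all pairs sum to (Σp1)(Σp2) = 1.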

2.2. The Collaborative Decision-Making Problem.
Collaborative decision-making can be simply described as a process in which several experts try to find the most appropriate alternative from multiple alternatives according to the values of key indicators [33]. The experts can be denoted as E = {E_1, E_2, ..., E_m}, and the alternatives can be denoted as A = {A_1, A_2, ..., A_n}. The emergency decision-making problem is an important branch of collaborative decision-making problems, and the two have many similarities [34], but there are great differences in complexity between them. The main difference is that the emergency decision-making problem has strict restrictions on time, and the information acquired by the experts is limited; even worse, it is always difficult for experts to evaluate alternatives with single values, and they often hesitate among multiple values. Fortunately, the introduction of the continuously probabilistic linguistic set can handle this problem efficiently [35]: all the possible evaluation information for an alternative given by the experts can be recorded, which avoids the loss of the original information.
A simple example is given to illustrate the above theory. Suppose dangerous chemicals suddenly leak on a highway; the emergency threatens the safety of the people around and causes damage to the surrounding environment. Several experts are urgently summoned to find solutions for the incident, and they are then asked to assess each solution within a limited time. It is assumed that there are three experts and four alternatives available to handle this incident, which can be denoted as E = {E_1, E_2, E_3} and A = {A_1, A_2, A_3, A_4}, respectively. The situation of an emergency always changes dynamically over time [36]; therefore, decisions need to be made according to the actual situations at different stages, and these problems will be discussed in detail in the next chapter of this paper.

2.3. The Information Aggregation Operators.
The scattered information given separately by the experts must be aggregated to obtain the final evaluation value for each alternative [37]. At present, there are several different aggregation methods [38]; after comparisons, the dynamic hesitant probability fuzzy weighted arithmetic (DHPFWA) operator is selected in this paper because of its simple and intuitive characteristics.
Suppose a total of k experts have respectively given their evaluation information for the alternative A_r, which can be denoted mathematically as L_r = {L_r1, L_r2, ..., L_rk}. The weights of the experts can be denoted as ω = (ω_1, ω_2, ..., ω_k), which can be obtained according to their past experience and authority in this field; the greater the value, the more important the evaluation information given by the expert [39]; and the weights satisfy the constraints ω_i ∈ (0, 1) and Σ_{i=1}^{k} ω_i = 1. Equation (3) gives the specific calculation method of the DHPFWA operator:

DHPFWA(L_r1, L_r2, ..., L_rk) = ⊕_{i=1}^{k} ω_i L_ri = { 1 − ∏_{i=1}^{k} (1 − c_{l_i})^{ω_i} | ∏_{i=1}^{k} p_{l_i} },    (3)
where l_1 = 1, 2, ..., m_1, l_2 = 1, 2, ..., m_2, ..., l_k = 1, 2, ..., m_k; we must point out that the values of m_1, m_2, ..., m_k are not necessarily equal to each other, which means that the total numbers of elements in different CPLSs can be completely unequal. Let us give a simple example to illustrate the above theories: suppose three CPLSs are the evaluation information for the alternative A_r given by three experts, respectively, and the total numbers of elements in the three CPLSs are m_1 = 3, m_2 = 1, and m_3 = 2, respectively, all different from each other. Now further assume that the weights of the three experts are ω = (0.32, 0.27, 0.41); the aggregated value of the three CPLSs can then be calculated according to equation (3). We can find that the aggregated value is also in the form of a CPLS and cannot be compared with other values directly [40]; the score value can be further calculated according to equation (2) mentioned in Section 2.1. The form of the score value is very simple: it is a real number, which is easy to compare with other values and to use in algebraic operations.
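A sketch of the DHPFWA aggregation in Python, assuming the weighted-averaging form 1 − ∏(1 − c_i)^{ω_i} with multiplied probabilities over all element combinations stated in equation (3); the three example CPLSs follow the sizes m_1 = 3, m_2 = 1, m_3 = 2 and the weights ω = (0.32, 0.27, 0.41) used in the text, but their element values are hypothetical.

```python
from itertools import product
from math import prod

def dhpfwa(cpls_list, weights):
    """DHPFWA operator: for every combination of elements (one per expert's
    CPLS), aggregate the values as 1 - prod((1 - c_i)^w_i) and the
    probabilities as prod(p_i). The result is again a CPLS."""
    out = []
    for combo in product(*cpls_list):
        c = 1 - prod((1 - ci) ** w for (ci, _), w in zip(combo, weights))
        p = prod(pi for _, pi in combo)
        out.append((c, p))
    return out

def score(L):
    """Score function of equation (2)."""
    return sum(c * p for c, p in L)

# Hypothetical evaluations from three experts (m1 = 3, m2 = 1, m3 = 2):
L1 = [(0.5, 0.2), (0.6, 0.3), (0.7, 0.5)]
L2 = [(0.4, 1.0)]
L3 = [(0.8, 0.6), (0.9, 0.4)]
agg = dhpfwa([L1, L2, L3], (0.32, 0.27, 0.41))
assert abs(sum(p for _, p in agg) - 1.0) < 1e-9  # probabilities still sum to 1
print(round(score(agg), 4))                      # final real-valued evaluation
```

The aggregated set has m_1 × m_2 × m_3 = 6 elements here, which illustrates why the score value, rather than the raw CPLS, is used for comparison.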

2.4.
The Decision-Making Problem with Cost Constraints. Obviously, the cost is one of the most important constraints in the decision-making process and cannot be ignored [41]. Although every alternative for dealing with emergencies is costly, there are still wide gaps among different alternatives. The more rigorously an alternative is designed, the better the effect that can usually be acquired; the disadvantage is also obvious: such alternatives often have a great adverse impact on the local economy and increase burdens on the people and the government [42]. The costs include not only economic costs but also casualties, labour costs, environmental pollution, expected income loss, and so on; in particular, casualties are the most important cost and must be seriously considered in the decision-making process [43].
Through the above analysis, we believe that the most appropriate alternative is not necessarily the one with the best effect; the cost and the effect must be considered comprehensively, which is more in line with the actual situation [44]. The main idea of dealing with the decision-making problem with cost constraints can be briefly described as follows: first, we reorder all the alternatives according to their costs, which can be denoted as A = {A_1, A_2, ..., A_n}; the estimated costs of these alternatives can be denoted as Δη = {Δη_01, Δη_12, ..., Δη_{k−1,k}}, in which the symbol Δη_{i−1,i} indicates the estimated cost from the time point t_{i−1} to the time point t_i; the estimated effects acquired by implementing these alternatives can be denoted as Δτ = {Δτ_01, Δτ_12, ..., Δτ_{k−1,k}}, and similarly, the symbol Δτ_{i−1,i} indicates the estimated effect acquired from the time point t_{i−1} to the time point t_i. We give the definition of the effect per cost (EPC), which can be described as ψ = {ψ_{i−1,i} | i = 1, 2, ..., k}, ψ_{i−1,i} = Δτ_{i−1,i}/Δη_{i−1,i}. The definition of the EPC, first proposed in this paper, considers the cost and the effect comprehensively, and we believe that the most appropriate alternative at the current time point is the one that has the lowest EPC.
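The EPC computation and the selection rule can be sketched as follows; the effect and cost figures are hypothetical, and the selection follows the lowest-EPC rule stated in the text.

```python
def effect_per_cost(effects, costs):
    """EPC: psi_i = delta_tau_i / delta_eta_i for each alternative."""
    return [e / c for e, c in zip(effects, costs)]

def select_alternative(effects, costs):
    """Return the index of the alternative with the lowest EPC,
    following the selection rule stated in the text."""
    psi = effect_per_cost(effects, costs)
    return min(range(len(psi)), key=lambda i: psi[i])

effects = [0.30, 0.45, 0.25]   # hypothetical estimated effects
costs   = [2.0, 5.0, 1.5]      # hypothetical estimated costs
print(select_alternative(effects, costs))  # index of the lowest-EPC alternative
```

For these hypothetical figures, the EPC values are 0.15, 0.09, and about 0.167, so the second alternative is selected.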

2.5. The Closed-Loop Control System.
The closed-loop control system is a concept from automatic control theory in engineering. Its principle can be briefly described as follows: part or all of the output signals are sent back to the input of the system, the differential signals between the original input signals and the feedback signals are calculated, and these differences are then input into the system to automatically adjust the relevant parameters [45], which helps to keep the system from deviating from the predetermined goal.
We find that there are always differences between the values estimated at the previous time point and the values measured currently; the closed-loop control system provides a way to handle this problem, and we try to construct a closed-loop control system in the decision-making field [46]. Specifically, we calculate the differences between the values estimated at the previous time point and the values measured currently and then input these differences into the decision-making system; thus, the relevant parameters of the system are automatically adjusted in time according to the differences, which helps to improve the evaluation accuracy of the system [47]. This is also one of the important improvements of the algorithm proposed in this paper over other decision-making methods.

The Closed-Loop Method of Collaborative Decision-Making
In this section, we will introduce the algorithm proposed in this paper in detail and build the mathematical model.

3.1. Mathematicize the Decision-Making Problem.
Usually, it is impossible to achieve the expected goal by taking measures only once when dealing with emergencies; we need to adjust the measures in time as the situation develops. First of all, we make the following assumptions: the initial time point is denoted as T_0, the time point of achieving the expected goal is denoted as T_k, and all the time points are recorded in the set T = {T_0, T_1, ..., T_k}. All the time intervals are recorded in the set ΔT = {ΔT_01, ΔT_12, ..., ΔT_{k−1,k}}, and they can also be called periods. Generally, the periods are equal to each other, while in some special cases, such as when a major unexpected event occurs suddenly, a new time point must be inserted immediately.
The experts invited to deal with the emergency are denoted as E = {E_1, E_2, ..., E_m}, and their corresponding weights are denoted as ω = (ω_1, ω_2, ..., ω_m); the alternatives proposed by the experts at the time point T_i are denoted as A_i = {A_i^1, A_i^2, ..., A_i^{n_i}}; the values of the parameter i (i = 0, 1, ..., k) indicate different time points, and the values of n_i (i = 1, 2, ..., k) are not necessarily equal to each other. The experts will measure the current severity of the emergency according to the information acquired at each time point; these measurements are denoted as τ = {τ_0, τ_1, ..., τ_k}, and each value τ_i in the set τ is in the form of a CPLS.

3.2. The Subtraction between Any Two CPLSs.
In order to build the feedback network, first of all, we need to calculate the differences between the estimated values made at the previous time point and the values measured at the current time point. Both data are in the form of CPLSs, and therefore a subtraction between any two CPLSs is required [48]; however, this operation is rarely mentioned by other researchers. For this reason, the paper proposes a subtraction method between any two continuously probabilistic linguistic sets, which is shown as equation (4), where L_rs and L_pq are two ordinary continuously probabilistic linguistic sets:

L_rs ⊖ L_pq = { c_{l_1} − c_{l_2} | p_{l_1} p_{l_2} : c_{l_1}|p_{l_1} ∈ L_rs, c_{l_2}|p_{l_2} ∈ L_pq }.    (4)
We find that the calculation result obtained by equation (4) is also a set, which can be called a special continuously probabilistic linguistic set. The main difference is that the values in the subtraction set satisfy the constraint −1 ≤ c_{l_1} − c_{l_2} ≤ 1, while the values in any ordinary continuously probabilistic linguistic set satisfy the constraint 0 ≤ c_l ≤ 1. It can be further illustrated by a simple example: supposing that there are two ordinary CPLSs, recorded as L_rs = {0.4|0.2, 0.41|0.8} and L_pq, respectively, the subtraction result can be calculated according to equation (4). We can find that some of the resulting values are greater than zero while others are less than zero, which is different from the definition of the ordinary continuously probabilistic linguistic set; the sum of the probabilities is still equal to one, the same as for the ordinary continuously probabilistic linguistic set. However, such a result is still not intuitive enough to reflect the differences; therefore, the score value of the special continuously probabilistic linguistic set needs to be further calculated. We must point out that the method of equation (2) is still applicable to the special continuously probabilistic linguistic set, and the result is called the special score value. The only difference is that the value range is 0 ≤ S(L) ≤ 1 for any ordinary CPLS, while the value range is −1 ≤ S(L_d) ≤ 1 for the special CPLS.
For example, the special score value of the above example can be calculated according to equation (2). When the score value is less than zero, it indicates that the value measured currently is better than the value estimated at the previous time point; when the score value is greater than zero, it indicates that the value measured currently is worse than the value estimated at the previous time point; and when the score value is equal to zero, it indicates that the value measured currently is exactly equal to the value estimated at the previous time point; however, this ideal situation is almost impossible in practice.
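The subtraction of equation (4) and the special score value can be sketched in Python; the first CPLS follows the example values in the text, while the second CPLS and the function names are hypothetical.

```python
def cpls_sub(L1, L2):
    """Equation (4): element-wise differences c1 - c2 with probabilities
    p1*p2; the result is a 'special' CPLS whose values lie in [-1, 1]."""
    return [(c1 - c2, p1 * p2) for c1, p1 in L1 for c2, p2 in L2]

def score(L):
    """Equation (2) applies unchanged; for a special CPLS the result
    lies in [-1, 1] instead of [0, 1]."""
    return sum(c * p for c, p in L)

L_meas = [(0.40, 0.2), (0.41, 0.8)]    # measured at the current time point
L_est  = [(0.50, 0.6), (0.30, 0.4)]    # hypothetical earlier estimate
d = cpls_sub(L_meas, L_est)
assert abs(sum(p for _, p in d) - 1.0) < 1e-9  # probabilities still sum to 1
print(round(score(d), 4))                      # negative: measured better than estimated
```

Note that the special score value equals the difference of the two ordinary score values, since Σ(c1 − c2)p1p2 = Σc1p1 − Σc2p2 when each probability set sums to one.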

3.3.
The Method of Obtaining the Most Appropriate Alternative. Let us illustrate the algorithm proposed in the paper in chronological order. At the initial time point T_0, the current severity of the emergency measured by the experts is denoted as τ_0, with the specific form τ_0 = {τ_0^1, τ_0^2, ..., τ_0^m}; each value τ_0^i, given by the corresponding expert E_i, is in the form of a continuously probabilistic linguistic set, and the specific form of the τ_0 is further described in Table 1. The alternatives proposed at the time point T_0 are denoted as A_0 = {A_0^1, A_0^2, ..., A_0^{n_0}}, and the severities estimated for the next time point T_1 when using the different alternatives are denoted as τ′_1 = {τ′_1^1, τ′_1^2, ..., τ′_1^{n_0}}. Each value τ′_1^i in the set τ′_1 is also a set, which can be denoted as τ′_1^i = {τ′_1^{i1}, τ′_1^{i2}, ..., τ′_1^{im}}, where τ′_1^{ij} indicates the severity that the expert E_j estimates for the time point T_1 when using the alternative A_0^i; the specific form of the τ′_1 is further described in Table 2. We must point out that all the elements in Table 2 are also in the form of the continuously probabilistic linguistic set. Each value τ′_1^i (i = 1, 2, ..., n_0) consists of the elements in the corresponding row of Table 2. For the sake of simplicity, the specific forms of the elements in Table 2 are not given; they are similar to the elements in Table 1.
All the scattered information provided by the experts can be aggregated by the DHPFWA operator, and the score value can then be further calculated; these theories have already been introduced in Section 2.3. Equations (5) and (6) are the specific expansion forms for this problem:

DHPFWA(τ_0) = ⊕_{i=1}^{m} ω_i τ_0^i = { 1 − ∏_{i=1}^{m} (1 − c_{l_i})^{ω_i} | ∏_{i=1}^{m} p_{l_i} },    (5)

S(DHPFWA(τ_0)) = Σ_{l=1}^{m} c_l p_l.    (6)

The calculation result of the DHPFWA(τ_0) is in the form of a continuously probabilistic linguistic set, and the symbol m in equation (6) indicates the total number of elements in the DHPFWA(τ_0).
Similarly, the score values of the severities estimated for the time point T_1 can also be calculated, which can be denoted as S(τ′_1) = {S(τ′_1^1), S(τ′_1^2), ..., S(τ′_1^{n_0})}; then, all the estimated effects can be calculated according to equation (7):

Δτ_01^i = S(DHPFWA(τ_0)) − S(DHPFWA(τ′_1^i)),  i = 1, 2, ..., n_0.    (7)
Each value in the set Δτ_01 satisfies the constraint −1 ≤ Δτ_01^i ≤ 1; when the value is negative, it indicates that the emergency is expected to become worse after using the corresponding alternative A_0^i; when the value is positive, it indicates that the emergency is expected to be alleviated after using the corresponding alternative A_0^i; and when the value is zero, it indicates that the emergency is expected to remain unchanged after using the corresponding alternative A_0^i. The cost of each period is recorded in the set Δη = {Δη_01, Δη_12, ..., Δη_{k−1,k}}; the symbol Δη_{i,i+1} indicates the estimated cost from the time point T_i to the time point T_{i+1}. Different alternatives for dealing with the emergency will produce different costs, so the Δη_{i,i+1} is also a set; for the first period it can be denoted as Δη_01 = {Δη_01^1, Δη_01^2, ..., Δη_01^{n_0}}. For the first period, the effect per cost ψ_01 of using the different alternatives can be calculated according to equation (8); obviously, the result is a set:

ψ_01^i = Δτ_01^i / Δη_01^i,  i = 1, 2, ..., n_0.    (8)
The most appropriate alternative at this time point is the one that has the lowest EPC, which is shown as follows:

A_0^j = arg min_{i = 1, 2, ..., n_0} ψ_01^i.    (9)

Similarly, the most appropriate alternatives at the other time points can be obtained by this method.

3.4. The Construction of the Closed-Loop System.
The most appropriate alternative A_0^j found in the previous step will be implemented immediately. The current severity of the emergency will be measured again at the time point T_1, which can be denoted as τ_1. The τ_1 is a set that contains the values τ_1^1, τ_1^2, ..., τ_1^m given by the different experts, respectively, according to the information acquired at the time point T_1, and the specific form of the τ_1 is further described in Table 3.

Table 1: The current severity of the emergency at the initial time point.
The differences between the values estimated at the initial time point T_0 and the values measured at the first time point T_1 will be calculated; the calculation method is shown in equation (10), and its specific form is further described in Table 4:

d_1^i = τ_1^i ⊖ τ′_1^{ji},  i = 1, 2, ..., m.    (10)
We must point out that all the τ_1^i (i = 1, 2, ..., m) and the τ′_1^{ji} (i = 1, 2, ..., m) are in the form of CPLSs; therefore, each calculation in equation (10) is a subtraction between CPLSs, and they must be calculated according to equation (4) mentioned in Section 3.2. All the differences d_1^i (i = 1, 2, ..., m) are also in the form of CPLSs, and they will be aggregated according to equations (5) and (6) to obtain the total difference of the first period, which can be denoted as S(d_1).
The flow chart of the closed-loop submodule is shown in Figure 1. At this stage, the system enters the automatic adjustment stage. Four parameters, denoted as λ_1, λ_2, ε, and ς, are set in advance, and the inequalities −1 ≤ λ_1 ≤ −ε ≤ 0 ≤ ε ≤ λ_2 ≤ 1 and 0 ≤ ς ≤ 1 hold. The smaller the value of ε, the higher the accuracy required of the system; the larger the value of λ_1, the easier it is for the system to make a conservative evaluation; the larger the value of λ_2, the easier it is for the system to make an optimistic evaluation; and the greater the value of ς, the easier it is to achieve the predetermined goal. If the inequality |S(d_1)| ≤ ε holds, the system works well and no adjustment is required. If the inequalities |S(d_1)| > ε and λ_1 ≤ S(d_1) ≤ λ_2 hold, only minor adjustments are needed and the automatic adjustment method is activated immediately. If the inequality λ_2 < S(d_1) ≤ 1 holds, the system is too optimistic: the experts are not fully aware of the severity and the development trend of the accident, and the system can be adjusted from two aspects: the first suggestion is that the experts propose more stringent alternatives, and the other is that the experts reduce the estimated values. If the inequality −1 ≤ S(d_1) < λ_1 holds, the system is too pessimistic: the alternative used has achieved better results than expected. Similarly, the system can be adjusted from two aspects: the first suggestion is that the experts propose looser alternatives with lower costs, and the other is that the experts appropriately raise the estimated values.
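The four-way branching on S(d_1) described above can be sketched as a small Python function; the parameter values are illustrative, and the goal-related parameter ς is omitted because it does not affect this particular branching.

```python
def feedback_action(s_d, lam1, lam2, eps):
    """Classify the total difference S(d) against the preset thresholds
    -1 <= lam1 <= -eps <= 0 <= eps <= lam2 <= 1 and return the required action."""
    if abs(s_d) <= eps:
        return "no adjustment"            # system works well
    if lam2 < s_d <= 1:
        return "too optimistic"           # stricter alternatives / lower estimates
    if -1 <= s_d < lam1:
        return "too pessimistic"          # looser, cheaper alternatives / raise estimates
    return "automatic minor adjustment"   # lam1 <= S(d) <= lam2 and |S(d)| > eps

# Illustrative parameter values:
lam1, lam2, eps = -0.4, 0.4, 0.05
print(feedback_action(0.02, lam1, lam2, eps))  # no adjustment
print(feedback_action(0.60, lam1, lam2, eps))  # too optimistic
```

The ordering of the checks matters: the |S(d)| ≤ ε case must be tested first, so that the minor-adjustment branch only fires when the difference exceeds the acceptable threshold.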

3.5. The Automatic Adjustment Algorithm.
The symbol ε mentioned above is called the acceptable threshold. In this section, we propose an automatic adjustment algorithm for the estimated values; its specific steps are listed as follows:

Step 1. Appropriate values are set for the system parameters λ_1, λ_2, and ε according to the actual situation of the emergency.
Step 2. Calculate the total differences of the current period, S(d) = {S(d_i) | i = 1, 2, ..., k}, by using the method mentioned in Section 3.4.
Step 3. Let us take the first period as an example to illustrate the algorithm. Suppose that the inequality λ_1 ≤ S(d_1) ≤ λ_2 holds and the inequality |S(d_1)| ≤ ε does not hold.
Step 4. The adjustment can be divided into two categories according to the value of S(d_1). When the inequality λ_1 ≤ S(d_1) < −ε holds, the maximum value must first be found among all the estimated values; supposing that the symbol c_i represents this maximum value, m × |S(d_1)| is added to it, where the symbol m represents the total number of experts. On the other hand, when the inequality ε < S(d_1) ≤ λ_2 holds, the maximum value is similarly decreased by m × S(d_1). After the above analysis, we can summarize that the adjustment method can be unified for both categories: the maximum value is decreased by m × S(d_1) in either case. Step 5. Similarly, the total difference S′(d_1) can be calculated again from the updated estimated values, and Step 3 and Step 4 will be repeated until the inequality |S′(d_1)| ≤ ε holds.
Step 6. The qualified estimated values will be obtained after several rounds of automatic adjustment. The automatic adjustment algorithm has two advantages: first, the algorithm is efficient and highly automated; second, the original estimated information given by the experts is modified minimally compared with other algorithms. The flow chart of the automatic adjustment submodule is shown in Figure 2.
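Steps 3–5 can be sketched as an iterative loop. The helper names are ours, and the recomputation of S(d) (the aggregation of Section 3.4) is abstracted behind a caller-supplied function.

```python
def auto_adjust(estimates, total_diff, m, eps, lam1, lam2, max_rounds=1000):
    """Sketch of the automatic adjustment of the estimated values (Steps 3-5).

    estimates  -- list of estimated severity scores
    total_diff -- callable recomputing S(d) from the current estimates
                  (the aggregation method of Section 3.4 is not reproduced here)
    m          -- total number of experts
    """
    vals = list(estimates)
    for _ in range(max_rounds):
        s_d = total_diff(vals)
        if abs(s_d) <= eps:
            return vals                # qualified estimated values obtained
        if not (lam1 <= s_d <= lam2):
            raise ValueError("outside the minor-adjustment band; experts must intervene")
        i = max(range(len(vals)), key=lambda j: vals[j])
        # unified rule: subtract m * S(d) from the maximum estimated value
        # (this adds m * |S(d)| when S(d) < -eps and subtracts when S(d) > eps)
        vals[i] -= m * s_d
    raise RuntimeError("did not converge within max_rounds")
```

With a toy difference function such as `total_diff = lambda v: (sum(v) - 0.9) / 10` (a stand-in, not the paper's formula), the loop converges in a few rounds to values whose total difference lies within ±ε.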

A Brief Summary of the Algorithm Proposed in the Paper.
The overall flow chart of the algorithm proposed in the paper is shown in Figure 3. The whole algorithm is divided into multiple time points, denoted as T_0, T_1, ..., T_k, and the span between any two adjacent time points is called a period, such as ΔT_0 = [T_0, T_1].

Computational Intelligence and Neuroscience
At the time point T_0, the current severity of the emergency will be measured by the experts, and the data can be called the measured values for short; then, the algorithm will judge whether the predetermined goal has been achieved according to the measured values. If the goal has been achieved, the algorithm will be terminated immediately; if not, the experts will estimate the severities at the next time point under each alternative, and the data obtained can be called the estimated values for short. The estimated effects of the different alternatives can be calculated from the measured values and the estimated values, and the cost of each alternative can be estimated according to its specific measures. After the above preparation, the effect per cost (EPC) of each alternative can be calculated. Finally, the most appropriate alternative, the one with the lowest EPC, will be found and implemented immediately.
Similarly, at the time point T_1, the experts will measure the current severity of the emergency and judge again whether the predetermined goal has been achieved. If the goal has been achieved, the algorithm will be terminated; if not, the total differences between the values estimated at the previous time point and the values measured currently will be calculated, and the corresponding automatic adjustment submodules will be activated according to the differences. The subsequent processing is similar to the above steps, and the most appropriate alternative at this time point will be found and implemented.
From the time point T_2 to the time point T_{k−1}, the algorithm repeats the above processes, and the severity of the emergency gradually decreases. The emergency will be effectively controlled after several rounds of treatment.
At the time point T_k, the experts will measure the current severity of the emergency and find that the inequality |1 − S(T_k)| ≤ ς holds, which indicates that the predetermined goal has been achieved, and the algorithm will be terminated immediately.
The parameter ς is called the completion threshold. The emergency has been handled effectively with the lowest cost.
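The timeline T_0, ..., T_k described above can be condensed into a skeleton loop. Here `measure` and `plan_and_act` are placeholders for the expert measurements and the per-period estimation, adjustment, and selection work, which are abstracted away.

```python
def run_closed_loop(measure, plan_and_act, varsigma=0.04, max_periods=20):
    """Skeleton of the multiperiod closed-loop procedure.

    measure(t)      -- severity score S(T_t) in [0, 1] (1 means perfect)
    plan_and_act(t) -- estimate severities per alternative, compare with the
                       previous estimates, adjust, then pick and implement the
                       most appropriate alternative (details abstracted away)
    """
    history = []
    for t in range(max_periods):
        s = measure(t)
        history.append(s)
        if abs(1.0 - s) <= varsigma:     # completion threshold reached
            return t, history            # goal achieved at T_t; terminate
        plan_and_act(t)
    return None, history                 # goal not reached within max_periods
```

With ς = 0.04, a hypothetical severity trajectory such as 0.1554, 0.61, 0.90, 0.97 (only the first value comes from the case study) terminates at T_3, since |1 − 0.97| = 0.03 ≤ 0.04.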

The General Description of the Emergency.
The whole world is facing the severe challenge of COVID-19 (coronavirus disease 2019), and the latest predictions show that the epidemic will lead to a global economic recession and large-scale unemployment. It has caused a large number of infections; even worse, the various prevention and control methods are not yet mature enough to fundamentally eradicate the infectious disease.
At present, COVID-19 has been basically controlled in China; however, we find that the epidemic still breaks out occasionally in some areas of China and shows a trend of further expansion, adding great resistance to employment and economic development in China.
The Chinese government has taken various measures to deal with the epidemic over the years; however, the epidemic situation changes continuously over time. Obviously, this problem belongs to the class of dynamic decision-making problems; in addition, we can hardly hope to solve it through only one round of measures, and therefore, the multiround decision-making algorithm discussed in the paper is suitable for dealing with it. The specific steps of the proposed algorithm will be introduced in this section in chronological order.

Figure 1: The flow chart of the closed-loop submodule.

The Processing Methods at the Time Point T_0.
Let us take one of the universities in the high-risk areas as an example to illustrate the algorithm; the university is facing the threat of the epidemic, and appropriate alternatives must be found at different time points to prevent and control it. Suppose that a total of three experts are summoned to deal with this emergency and that they have put forward four response alternatives, which can be denoted as A_1, A_2, A_3, and A_4. The predetermined goal is to minimize the adverse impact of COVID-19 on normal teaching and student activities. Table 5 lists the alternatives proposed by the experts for handling the emergency at the initial time point (T_0). We can find that the measures in the table become gradually more stringent from top to bottom, and we must admit that each latter alternative is indeed better than the former in controlling the epidemic situation; however, its cost is also higher. Once again, we point out that the most appropriate alternative is not necessarily the most stringent one. The current severities of the emergency, measured separately by the experts according to the available information, are listed in Table 6, together with the weights of the experts. Obviously, the predetermined goal has not been achieved. The scattered information can be aggregated according to equations (5) and (6). The score value obtained ranges from 0 to 1, where "0" indicates that the situation is extremely bad and "1" indicates that the situation is perfect. The current severity is 0.1554. The values of the severities estimated at the time point T_1 when using different alternatives are listed in Table 7.
Similarly, the score values are calculated. The score values of the severities estimated at the time point T_1 can be recorded as S′(T_1) = {0.2333, 0.3031, 0.3587, 0.3908}. Subsequently, the estimated effects over the period ΔT_01 can be calculated according to equation (7), and the corresponding costs of the alternatives are (…, 1.2, 1.7, 2).
Obviously, the alternative A_4 has the best effect; however, its cost is also the highest. Therefore, the most appropriate alternative cannot be determined directly, and the effects per cost of all the alternatives need to be calculated further according to equation (8). Similarly, the experts will measure the severities again at the time point T_1, and their values are listed in Table 8.
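The numbers above can be checked with a short script. Two assumptions are made explicit here: equation (7) is taken to be the score improvement over the period, and the first period cost (garbled in the extracted text) is recovered as 1.0 from the totals η = 6 and η′ = 3.4 given in the cost-comparison section, together with the statement that the costs remain unchanged across periods.

```python
s_t0 = 0.1554                                  # severity measured at T_0
s_t1_est = [0.2333, 0.3031, 0.3587, 0.3908]    # estimates for A_1..A_4 at T_1

# estimated effect of each alternative over dT_01 (assumed form of eq. (7))
effects = [round(s - s_t0, 4) for s in s_t1_est]

# period costs for A_1..A_4; the leading 1.0 is inferred from
# eta' = 1.2 + 1.2 + 1.0 = 3.4 in the cost-comparison section
costs = [1.0, 1.2, 1.7, 2.0]

# effect per cost, reading equation (8) literally as effect / cost (assumption)
epc = [e / c for e, c in zip(effects, costs)]
```

As the text notes, A_4 has the largest effect but also the largest cost, which is why the EPC comparison is needed at all.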
Obviously, the predetermined goal has still not been achieved. In order to test and improve the accuracy of the system, the differences between the values estimated at T_0 and the values measured at T_1 will be calculated; their values are listed in Table 9.
The system parameters are set as λ_1 = −0.001, λ_2 = 0.001, ε = 0.0005, and ς = 0.04. We can find that the inequality λ_1 < S(d_1) < λ_2 holds; therefore, major adjustments are not required. However, the inequality −ε ≤ S(d_1) ≤ ε does not hold, which indicates that minor adjustments are still required, and the automatic adjustment module will be activated immediately. According to the algorithm, the maximum estimated value of the alternative A_0^2 in Table 7 can be found; the value 0.32 will be increased to 0.3216081827 according to equation (11), and the other values remain unchanged. The updated severities are shown in Table 10. The total difference will be calculated again according to the data in Table 11. We can find that the inequality −ε < S′(d_1) < ε now holds, which indicates that the automatic adjustment module works well.
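The adjustment in this paragraph can be verified arithmetically: under the adjustment rule the maximum estimated value changes by m × |S(d_1)|, so the magnitude of the total difference can be recovered from the update itself.

```python
m = 3                         # three experts
eps = 0.0005                  # acceptable threshold
old_val, new_val = 0.32, 0.3216081827

# the maximum value was increased by m * |S(d_1)| (pessimistic branch,
# lam1 <= S(d_1) < -eps), so the implied magnitude of S(d_1) is:
implied = (new_val - old_val) / m   # about 0.000536, which exceeds eps
```

The implied |S(d_1)| ≈ 0.000536 is larger than ε = 0.0005, confirming that a minor adjustment was indeed required before the update.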
The updated values in Table 10 can provide references for the experts in the next estimation.
Since the inequality λ 1 < S(d 1 ) < λ 2 holds, the most appropriate alternative at this time point is the same as the one at the previous time point; therefore, the alternative A 2 is still the most appropriate alternative at the time point T 1 and it will be implemented immediately. Table 12 lists the estimated severities at the time point T 2 when using different alternatives. Since all the alternatives proposed by experts have not changed, the costs remain unchanged.

The Processing Methods at the Time Point T_2.
In the same way, the experts will measure the severities again at the time point T 2 and their values are listed in Table 13.
Obviously, the predetermined goal has not been achieved. The differences between the values estimated at the time point T_1 and the values measured at the time point T_2 will be calculated, which are shown in Table 14. The total difference will be aggregated according to the data in Table 14.
We can find that the inequality S(d_2) < λ_1 holds, which indicates that the actual effects of the alternative are much better than the estimated effects, and major adjustments are required. The experts need to check the system carefully to find out whether any important decision-making information is missing. An alternative with a lower cost should be adopted; if the alternative adopted in the last round of decision-making is already the cheapest, the experts should propose a new and cheaper alternative. Since the inequality Δη_{23}^1 < Δη_{23}^2 holds in this case, an alternative with a lower cost already exists; therefore, there is no need to propose a new alternative, and the alternative A_1 will be the most appropriate alternative at the time point T_2 and will be implemented immediately.
Due to the good effect of the alternative, the experts will give more optimistic estimated values in the next round of estimation, which are shown in Table 15.

Achieving the Predetermined Goal.
The experts will measure the severities of the emergency again at the time point T_3; their values are listed in Table 16, and then the score value will be calculated.
We can find that the inequality |1 − S(T_3)| ≤ ς holds, which indicates that the emergency has almost been eliminated; only routine inspections are required, and the algorithm will be terminated.

The Comparisons and Discussions
Many scholars have proposed outstanding algorithms in the field of decision-making from various perspectives, and these algorithms have their own characteristics and suitable application scopes [49]. The comparisons between the algorithm proposed in the paper and the others will be made in this section, which will help to identify the advantages and disadvantages of the proposed algorithm.

The Hesitant Fuzzy Set and Its Processing Methods.
The hesitant fuzzy set, a classic data structure, is one of the important definitions in fuzzy mathematics [50], and its information aggregation operators and comparison methods are quite mature; in particular, many complex data structures have been developed from it. Unfortunately, the probability information of the evaluation values cannot be recorded in the hesitant fuzzy set. Table 17 lists the conversion values of Table 7 when the data are recorded in the form of hesitant fuzzy sets.
We find that only the evaluation values can be recorded, and all the corresponding probability information is missing. From another point of view, it can be considered that all the probability values are equal to each other in any hesitant fuzzy set. Therefore, the hesitant fuzzy set is a special case of the continuously probabilistic linguistic set, and the continuously probabilistic linguistic set can record more detailed information, which makes the algorithm fundamentally more accurate.
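The observation that a hesitant fuzzy set is the equal-probability special case can be made concrete. The function below is an illustrative helper, not from the paper: it lifts a hesitant fuzzy element into probabilistic form by attaching a uniform probability to each possible value.

```python
def hesitant_to_probabilistic(values):
    """Lift a hesitant fuzzy element to probabilistic form by attaching
    equal probabilities to every possible value (the special case noted above)."""
    if not values:
        raise ValueError("a hesitant fuzzy element must be non-empty")
    p = 1.0 / len(values)
    return [(v, p) for v in values]
```

For instance, the hesitant element {0.5, 0.7} becomes {0.5(0.5), 0.7(0.5)}, whereas a genuine probabilistic element could carry unequal probabilities such as 0.5(0.3), 0.7(0.7), which the hesitant form cannot express.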

The Probabilistic Linguistic Set and Its Processing Methods.
The probabilistic linguistic set (PLS) is also an efficient data structure, and it is widely used for dealing with fuzzy problems, especially the collection and storage of fuzzy data [51]. The total number of possible evaluation values in the PLS is finite [52], and all the possible evaluation values are contained in the additive linguistic term set, denoted as S = {s_α | α = 0, 1, ..., 2τ}, where the symbol τ indicates a positive integer. The definition of the probabilistic linguistic set can be described mathematically as L(p) = {s_k(p_k) | s_k ∈ S, p_k ≥ 0, k = 1, 2, ..., #L(p), Σ_k p_k ≤ 1}, where #L(p) denotes the number of linguistic terms in L(p). Obviously, the data structure CPLS proposed in the paper is developed from the probabilistic linguistic set; it not only inherits the advantages of the PLS but also overcomes its disadvantages, expanding the number of possible evaluation values from finite to infinite.
For the case mentioned above, the additive linguistic term set can be set as S = {s_α | α = 0, 1, 2, 3, 4}, where the symbol s_0 indicates "terrible," s_1 indicates "bad," s_2 indicates "moderate," s_3 indicates "good," and s_4 indicates "perfect." Let us again take the data in Table 7 as an example to illustrate the data structure. The estimated values cannot be directly converted into the additive linguistic term set; therefore, we should first establish the transformation rules, which can be described as follows: the value will be set as s_0 if the inequality 0 ≤ τ′ < 0.2 holds; as s_1 if 0.2 ≤ τ′ < 0.4 holds; as s_2 if 0.4 ≤ τ′ < 0.6 holds; as s_3 if 0.6 ≤ τ′ < 0.8 holds; and as s_4 if 0.8 ≤ τ′ ≤ 1 holds. Table 18 lists the transformed values when the data are recorded in the form of probabilistic linguistic sets. We find that the values in A_1^1, A_1^2, and A_1^3 are equal to each other, and all the evaluation values given by the different experts are s_1 and s_2; obviously, the discrimination ability of this method is poorer than that of the algorithm proposed in the paper.
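Reading the transformation rules as five equal-width intervals on [0, 1], the mapping to the additive linguistic term set can be sketched as follows; the interval boundaries for s_1 and s_2 are interpolated from the surviving s_0, s_3, and s_4 rules and are therefore an assumption.

```python
def to_linguistic_term(tau):
    """Map a score tau in [0, 1] to a term in S = {s_0, ..., s_4}.

    Boundaries assume equal-width intervals of 0.2; the rules for s_1 and
    s_2 are interpolated from the surviving s_0, s_3, and s_4 rules.
    """
    if not 0.0 <= tau <= 1.0:
        raise ValueError("tau must lie in [0, 1]")
    # count how many lower bounds tau has passed (avoids float division issues)
    alpha = sum(tau >= b for b in (0.2, 0.4, 0.6, 0.8))
    return f"s_{alpha}"
```

For example, a score of 0.55 maps to s_2 ("moderate"), while 0.6 already maps to s_3 ("good"), illustrating how close scores collapse onto the same coarse term and thus lose discrimination.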

The Decision-Making Algorithms without the Cost Limitation.
The cost limitation in the decision-making process is one of the characteristics of the algorithm proposed in the paper. Although many other algorithms have considered costs, they only take the cost as one of the decision-making indicators and do not treat it separately [53]. In some cases, we found that an increase in cost does not improve the effect at all. For the case discussed in the paper, the most appropriate alternatives would be A_4 ∼ A_4 ∼ A_4 if only the effects were considered, and the total cost would be η = Δη_{01}^4 + Δη_{12}^4 + Δη_{23}^4 = 6. The final result obtained by the algorithm proposed in the paper is A_2 ∼ A_2 ∼ A_1, with a total cost of η′ = Δη_{01}^2 + Δη_{12}^2 + Δη_{23}^1 = 3.4. We can find that the same goal is achieved, but the cost is reduced by 43.3%, which verifies the superiority of the algorithm proposed in the paper from the perspective of cost.
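The cost comparison is easy to reproduce from the two totals given in the text:

```python
eta = 6.0        # total cost when A_4 is chosen in every period
eta_prime = 3.4  # total cost under the proposed algorithm (A_2, A_2, A_1)

saving = (eta - eta_prime) / eta
print(f"cost saved: {saving:.1%}")   # prints: cost saved: 43.3%
```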

The Open-Loop Decision-Making Algorithms.
At present, most decision-making algorithms adopt the open-loop mode; in other words, they fail to establish a set of feedback mechanisms [54]. We now demonstrate how a method without feedback mechanisms would solve the above case and point out the differences between that method and the algorithm proposed in the paper. The alternative A_2 will still be the most appropriate alternative at the time point T_0. However, the estimated values cannot be compared with the measured values at the time point T_1; therefore, the accuracy of the system cannot be verified, the automatic adjustment module proposed in the paper cannot be activated, the system cannot be adjusted in time, and the error will grow with each successive processing cycle (T_0 and T_1 constitute one complete processing cycle, and the process advances cycle by cycle). One noticeable difference occurs at the time point T_2: if the feedback mechanism fails to work, A_2 instead of A_1 will be chosen as the most appropriate alternative, the conclusion that only an alternative with a lower cost is needed and that the estimated values must be raised in the next estimation cannot be drawn, and this will directly lead to an increase in costs and processing cycles.
Table 6: The current severities measured at the initial time point.
In short, the feedback mechanism is effective for verifying the correctness of the system in time, and it can save the total cost and reduce the processing time effectively [55], which verifies the superiority of the algorithm proposed in the paper from the perspective of accuracy.

Conclusions
When faced with emergencies, especially disasters, it is crucial to make timely and appropriate decisions; however, it is not easy to achieve this goal because of the limited time for making decisions and the fuzzy information that can be acquired.
The accuracy of the data can directly affect the quality of the final decision, while we find that it is hard to record data accurately and scientifically. How to improve the accuracy of the collected data is the first problem to be solved. The data structure adopted to store the original data, chosen after careful comparisons, is the continuously probabilistic linguistic set. This data structure allows multiple possible values to be stored together in one record, and the probability information of each possible value can also be stored with it. These characteristics can overcome the uncertainty and fuzziness in the process of data acquisition, which improves the data quality to the greatest extent and lays a solid foundation for the later decision-making.
At present, most decision-making models adopt the linear structure and the single-round mode. Although these models have been elaborately designed, an important defect cannot be ignored: it is impossible to verify the accuracy of the estimated results given by the system in time. In order to solve this problem, a new structure is proposed in the paper. The whole decision-making process is divided into multiple sub-decision-making stages, and each estimated result can be verified at the next decision-making time point. The estimated values and the currently measured values are two different types of signals used in the system, and the differences between the values estimated at the previous time point and the values measured currently are calculated by the fuzzy subtraction proposed in the paper. In general, there are certain differences between them, and the greater the difference, the lower the accuracy of the system. Due to time constraints, it is almost impossible for experts to reevaluate the alternatives; fortunately, the paper proposes an automatic repair algorithm that solves this problem. The repair algorithm contains several submodules for different situations: when the inequality |S(d)| ≤ ε holds, the system works well and does not need any adjustment; when the inequalities λ_1 ≤ S(d) < −ε or ε < S(d) ≤ λ_2 hold, the system needs minor adjustments and the automatic adjustment algorithm will be activated immediately; when the inequality λ_2 < S(d) ≤ 1 holds, the system is too optimistic and the actual situation is more serious than estimated; and when the inequality −1 ≤ S(d) < λ_1 holds, the system is too prudent and the actual effect is much better than estimated. The closed-loop decision-making system is constructed through the establishment of these feedback mechanisms, and the accuracy of the whole model is thereby improved effectively.
The cost is one of the most important factors in the decision-making process, and we must point out again that the cost mentioned in the paper refers to the generalized cost, not just the economic cost. The effectiveness of each alternative is evaluated separately in each round of decision-making. Generally, a rigorous alternative can achieve better results, but it may also cause many losses; thus, it is not necessarily the most appropriate alternative. Based on these considerations, the paper proposes the definition and calculation method of the effect per cost; when the predetermined goal can be achieved, we believe that the most appropriate alternative must be the one with the lowest cost. The establishment of the above theory is also one of the innovations of this paper.
We have to point out some limitations of the paper. As one of the initial conditions, the estimated cost is essentially a fuzzy value, which is difficult to describe accurately with a single number. Thus, the problem discussed in the paper is actually a doubly fuzzy problem, and more fuzzy variables need to be considered. Further research on this problem will be conducted by our team in the near future.

Data Availability
The data used to support the findings of this study are included within the article, and they are obtained through practical investigations.

Conflicts of Interest
The authors declare that there are no conflicts of interest.

Authors' Contributions
M.F. conceptualized the data; L.F.W. supervised the study; X.N.C. involved in project administration; B.Y.Z. validated the study; X.X.Z. investigated the study; and S.S.Y. wrote original draft preparation. All authors have read and agreed to the published version of the manuscript.