Mathematical Problems in Engineering, Hindawi, vol. 2021, Article ID 9957279. https://doi.org/10.1155/2021/9957279

Research Article

Solving Multiobjective Game in Multiconflict Situation Based on Adaptive Differential Evolution Algorithm with Simulated Annealing

Huimin Li (1), Shuwen Xiang (1, 2), Wensheng Jia (1), Yanlong Yang (1), Shiguo Huang (1, 3), Hao Gao (1)

(1) College of Mathematics and Statistics, Guizhou University, Guiyang 550025, China
(2) College of Mathematics and Information Science, Guiyang University, Guiyang 550005, China
(3) Department of Mathematics and Information Science, Zhengzhou University of Light Industry, Zhengzhou 450002, China

Received 24 March 2021; Revised 18 May 2021; Accepted 4 June 2021; Published 22 July 2021

Copyright © 2021 Huimin Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, we study the multiobjective game in a multiconflict situation. First, the feasible strategy sets and the synthetic strategy space are constructed for the multiconflict situation, the payoff function values under each objective are determined, and an integrated multiobjective game model is established. Second, the multiobjective game model is transformed into a single-objective game model by the Entropy Weight Method. Then, in order to solve this game, an adaptive differential evolution algorithm based on simulated annealing (ADESA) is proposed, which adaptively improves the mutation factor and crossover operator of the differential evolution (DE) algorithm and adopts the Metropolis rule of the simulated annealing (SA) algorithm, with its ability to accept inferior solutions with a certain probability. Finally, the practicability and effectiveness of the algorithm are illustrated by a military example.

1. Introduction

As is well known, in many fields such as optimal control, engineering design, economics, and arms races, decision-makers usually do not consider merely one objective but integrate many objectives into their decision-making. Multiobjective decisions are therefore more consistent with reality than single-objective decisions. The multiobjective game is an effective method for modeling the interaction among many decision-makers in real society, so it is significant to study it. For example, in the military domain, armed forces fight the enemy on multiple battlefields at the same time. Because of constraints on military strength, equipment, and other resources, the situations on these battlefields are interrelated and mutually constrained. In game-theoretic terms, a conflict situation can be described as a game environment; a multiconflict situation can then be expressed as multiple interconnected games, with the constraints regarded as objective criteria of the game. In a game, the players' strategy choices and payoff functions are the keys to analyzing decisions. Therefore, studying the synthesis of strategies and the synthetic payoff function is of great significance for analyzing the multiobjective game in a multiconflict situation.

For the game in a multiconflict situation, Inohara et al. [8] discussed the relationship between strategy sets and studied the synthesis of finite strategies, but they did not provide a specific strategy-integration model. Under a single objective, Yanjie et al. [9] established an integrated game model for a conflict situation described by multiple bimatrix games [10, 11] and gave an application in the military field. Yexin [12] established an integrated game model for the multiobjective, multiconflict situation and solved the game by using the equivalence between quadratic programming and the equilibrium solution of the game. To solve the game models in the above literature, numerical calculation methods and Lingo software were generally used, and little attention has been paid to intelligent algorithms. This paper aims to fill this gap: the multiobjective game in a multiconflict situation is transformed into a simpler optimization problem, and an efficient intelligent algorithm is then proposed to solve it.

In this paper, in order to solve the integrated game model better, the integrated multiobjective game model is first transformed into a single-objective game model by the Entropy Weight Method [13, 14]. The game is then transformed into an equivalent constrained optimization problem. Finally, an adaptive differential evolution algorithm based on simulated annealing (ADESA) is proposed to solve this game. The main steps of this hybrid algorithm are as follows: first, the mutation and crossover operators of the differential evolution (DE) algorithm are adaptively improved [19, 20]; then, the Metropolis process of the simulated annealing (SA) algorithm is applied to the DE algorithm, which accepts not only good solutions but also inferior solutions with a certain probability. This improves the convergence speed and accuracy of DE as well as the diversity of the population, so the hybrid algorithm has strong global search capability and avoids premature convergence. At the end of this paper, a military game example demonstrates the practicability and effectiveness of the algorithm for solving the multiobjective game in a multiconflict situation.

2. Game Synthetic Model

2.1. Multiobjective Game Model in a Multiconflict Situation

Definition 1.

The $n$-person noncooperative game with a single objective is denoted by $\Gamma = \{N, S_i, U_i, X_i, f_i : i = 1, \dots, n\}$, where

$N = \{1, \dots, n\}$ is the set of players and $n$ is the number of players.

$S_i = \{s_i^1, \dots, s_i^{m_i}\}$, $i \in N$, is the pure strategy set of player $i$, where $m_i$ is the number of strategies available to player $i$; $S = \prod_{i=1}^{n} S_i$, and each pure strategy profile satisfies $(s_1^{k_1}, s_2^{k_2}, \dots, s_n^{k_n}) \in S$ with $s_i^{k_i} \in S_i$.

$U_i : S \to \mathbb{R}$, $i \in N$, represents the payoff function of player $i$.

$X_i = \{x_i = (x_i^1, \dots, x_i^k, \dots, x_i^{m_i}) : x_i^k \geq 0,\ k = 1, \dots, m_i,\ \sum_{k=1}^{m_i} x_i^k = 1\}$, $i \in N$, is the set of mixed strategies of player $i$; $X = \prod_{i=1}^{n} X_i$, and each mixed strategy profile satisfies $(x_1, x_2, \dots, x_n) \in X$.

$f_i : X \to \mathbb{R}$, $i \in N$, represents the expected payoff function of player $i$:

(1) $f_i(x_1, \dots, x_n) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_n=1}^{m_n} U_i(s_1^{k_1}, \dots, s_n^{k_n}) \prod_{i=1}^{n} x_i^{k_i}$,

where $f_i(x_1, \dots, x_n)$ is the expected payoff of player $i$ when each player chooses a mixed strategy $x_i = (x_i^1, \dots, x_i^{m_i}) \in X_i$, and $U_i(s_1^{k_1}, \dots, s_n^{k_n})$ is the payoff of player $i$ when each player chooses a pure strategy $s_i^{k_i} \in S_i$, $i = 1, \dots, n$.

Denote by

(2) $f_i(x \| s_i^{k_i}) \triangleq f_i(x_1, \dots, x_{i-1}, s_i^{k_i}, x_{i+1}, \dots, x_n) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_{i-1}=1}^{m_{i-1}} \sum_{k_{i+1}=1}^{m_{i+1}} \cdots \sum_{k_n=1}^{m_n} U_i(s_1^{k_1}, \dots, s_n^{k_n})\, x_1^{k_1} \cdots x_{i-1}^{k_{i-1}} x_{i+1}^{k_{i+1}} \cdots x_n^{k_n}$,

where $s_i^{k_i}$ $(1 \leq k_i \leq m_i)$ is a pure strategy of player $i$, and $x \| s_i^{k_i}$ means that player $i$ plays $s_i^{k_i}$ instead of $x_i$ while the other players keep their mixed strategies unchanged.

Definition 2.

If there is $x^* = (x_1^*, \dots, x_n^*) \in X$ such that $f_i(x_i^*, x_{-i}^*) = \max_{u_i \in X_i} f_i(u_i, x_{-i}^*)$ for all $i \in N$, then $x^*$ is a Nash equilibrium of the $n$-person finite noncooperative game, where $-i = N \setminus \{i\}$, $i \in N$.

Conclusion 1.

A mixed strategy profile $x^*$ is a Nash equilibrium of the game if and only if, for every pure strategy $s_i^{k_i}$ $(1 \leq k_i \leq m_i)$ of each player $i$, $f_i(x^*) \geq f_i(x^* \| s_i^{k_i})$.

Theorem 1 (see [18]).

A mixed strategy profile $x^* \in X$ is a Nash equilibrium of the game $\Gamma$ if and only if $x^*$ is an optimal solution of the following optimization problem with optimal value 0:

(3) $\min f(x) = \sum_{i=1}^{n} \max\big\{ \max_{1 \leq k_i \leq m_i} f_i(x \| s_i^{k_i}) - f_i(x),\ 0 \big\}$,

subject to $\sum_{k_i=1}^{m_i} x_i^{k_i} = 1$, $0 \leq x_i^{k_i} \leq 1$, $i = 1, \dots, n$; $k_i = 1, \dots, m_i$.

In particular, for the two-player matrix game, it follows from Theorem 1 that finding a Nash equilibrium $(x_1, x_2)$ of the game is equivalent to solving

(4) $\min f(x) = \max\big\{ \max_{1 \leq i \leq m_1} A_i x_2^{T} - x_1 A x_2^{T},\ 0 \big\} + \max\big\{ \max_{1 \leq j \leq m_2} x_1 B^{j} - x_1 B x_2^{T},\ 0 \big\}$,

subject to $\sum_{k_1=1}^{m_1} x_1^{k_1} = 1$, $\sum_{k_2=1}^{m_2} x_2^{k_2} = 1$, $0 \leq x_i^{k_i} \leq 1$, $i = 1, 2$, where $A$ and $B$ are the payoff matrices of the players, $A_i$ is the $i$th row of matrix $A$, and $B^{j}$ is the $j$th column of matrix $B$.
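To make (4) concrete, the objective can be evaluated directly for any pair of mixed strategies; the following sketch (the function name `nash_gap` is ours, not the paper's) returns the value of $f(x)$, which is zero exactly at a Nash equilibrium:

```python
import numpy as np

def nash_gap(x1, x2, A, B):
    """Objective of (4): zero iff (x1, x2) is a Nash equilibrium of the
    bimatrix game with payoff matrices A (row player) and B (column player)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    best_row = (A @ x2).max()   # max_i A_i x2^T: best pure-row reply to x2
    best_col = (x1 @ B).max()   # max_j x1 B^j: best pure-column reply to x1
    return max(best_row - x1 @ A @ x2, 0.0) + max(best_col - x1 @ B @ x2, 0.0)
```

For matching pennies ($B = -A$), the uniform mixed strategies give a gap of zero, while a pure strategy against the uniform mix gives a positive gap.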

Definition 3.

The multiconflict situation of two persons with multiple objectives refers to a situation in which player 1 and player 2 are in conflict under $L$ $(L \geq 2)$ objectives in $K$ $(K \geq 2)$ situations. Each conflict situation can be described by a multiobjective bimatrix game model. Suppose that in the $k$th $(k = 1, \dots, K)$ conflict situation, the payoff matrices of player 1 and player 2 under the $l$th $(l = 1, \dots, L)$ objective are $A^{lk}$ and $B^{lk}$, respectively:

(5) $A^{lk} = \begin{pmatrix} a_{11}^{lk} & a_{12}^{lk} & \cdots & a_{1 n_k}^{lk} \\ a_{21}^{lk} & a_{22}^{lk} & \cdots & a_{2 n_k}^{lk} \\ \vdots & \vdots & & \vdots \\ a_{m_k 1}^{lk} & a_{m_k 2}^{lk} & \cdots & a_{m_k n_k}^{lk} \end{pmatrix}, \quad B^{lk} = \begin{pmatrix} b_{11}^{lk} & b_{12}^{lk} & \cdots & b_{1 n_k}^{lk} \\ b_{21}^{lk} & b_{22}^{lk} & \cdots & b_{2 n_k}^{lk} \\ \vdots & \vdots & & \vdots \\ b_{m_k 1}^{lk} & b_{m_k 2}^{lk} & \cdots & b_{m_k n_k}^{lk} \end{pmatrix}$,

where $m_k$ and $n_k$ are the numbers of strategies of player 1 and player 2, respectively.

Definition 4.

If player 1 chooses a pure strategy $\alpha_{i_k}^{k}$ in the $k$th $(k = 1, \dots, K)$ game $G^k$ and combines the pure strategies chosen in all conflict situations, the feasible strategy string of player 1 in the multiconflict situation is obtained, recorded as $\alpha_i \triangleq (\alpha_{i_1}^{1}; \alpha_{i_2}^{2}; \dots; \alpha_{i_k}^{k}; \dots; \alpha_{i_K}^{K})$, $i_k \in \{1, 2, \dots, m_k\}$. Similarly, the feasible strategy string of player 2 in a multiconflict situation is $\beta_j \triangleq (\beta_{j_1}^{1}; \beta_{j_2}^{2}; \dots; \beta_{j_k}^{k}; \dots; \beta_{j_K}^{K})$, $j_k \in \{1, 2, \dots, n_k\}$.

Definition 5.

The feasible strategy sets of player 1 and player 2 in a multiconflict situation are recorded as $S_1 = \{\alpha_1, \alpha_2, \dots, \alpha_t\}$ and $S_2 = \{\beta_1, \beta_2, \dots, \beta_r\}$, where $t$ and $r$ are the numbers of feasible strategy strings of player 1 and player 2, respectively.

Definition 6.

A synthetic strategy in a multiconflict situation is a pair $(\alpha_i, \beta_j)$, where $\alpha_i$ is a feasible strategy string selected by player 1 from $S_1$ and $\beta_j$ is a feasible strategy string selected by player 2 from $S_2$. The synthetic strategy space is the set of all synthetic strategies of player 1 and player 2, recorded as $S = S_1 \times S_2$.

Definition 7.

In a synthetic strategy, the strategy combination of player 1 and player 2 in each game is called a substrategy of the synthetic strategy. For example, the substrategies of $(\alpha_i, \beta_j)$ are $(\alpha_{i_1}^{1}, \beta_{j_1}^{1}), (\alpha_{i_2}^{2}, \beta_{j_2}^{2}), \dots, (\alpha_{i_K}^{K}, \beta_{j_K}^{K})$.

The synthetic payoff of a player under a synthetic strategy is the sum of the payoffs of all its substrategies under the same objective. Let $c_{ij}^{l}$ and $d_{ij}^{l}$ be the synthetic payoff function values of players 1 and 2 under the $l$th $(l = 1, 2, \dots, L)$ objective when the players choose the synthetic strategy $(\alpha_i, \beta_j)$; then

(6) $c_{ij}^{l} = \sum_{k=1}^{K} a_{i_k j_k}^{lk}, \qquad d_{ij}^{l} = \sum_{k=1}^{K} b_{i_k j_k}^{lk}$.
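The construction of formula (6) can be sketched as follows (a hypothetical helper of ours, with 0-based indices; each feasible strategy string is given as a tuple of per-conflict pure-strategy indices):

```python
import numpy as np

def synthetic_payoff(payoffs, S1, S2):
    """Formula (6): payoffs[k] is a player's payoff matrix in conflict k under
    one fixed objective; S1[i] = (i_1, ..., i_K) and S2[j] = (j_1, ..., j_K)
    are feasible strategy strings. Returns the t-by-r synthetic matrix."""
    C = np.zeros((len(S1), len(S2)))
    for i, alpha in enumerate(S1):
        for j, beta in enumerate(S2):
            # sum the substrategy payoffs over all K conflict situations
            C[i, j] = sum(payoffs[k][alpha[k], beta[k]] for k in range(len(payoffs)))
    return C
```

With two toy 2x2 conflicts, the entry for strategy strings $(0,0)$ versus $(0,1)$ is simply the sum of the two corresponding substrategy payoffs.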

Definition 8.

The integrated model of the multiobjective bimatrix game in a multiconflict situation is $G = \{S_1, S_2, C^{l}, D^{l}\}$, where

(7) $C^{l} = \begin{pmatrix} c_{11}^{l} & c_{12}^{l} & \cdots & c_{1r}^{l} \\ c_{21}^{l} & c_{22}^{l} & \cdots & c_{2r}^{l} \\ \vdots & \vdots & & \vdots \\ c_{t1}^{l} & c_{t2}^{l} & \cdots & c_{tr}^{l} \end{pmatrix}, \quad D^{l} = \begin{pmatrix} d_{11}^{l} & d_{12}^{l} & \cdots & d_{1r}^{l} \\ d_{21}^{l} & d_{22}^{l} & \cdots & d_{2r}^{l} \\ \vdots & \vdots & & \vdots \\ d_{t1}^{l} & d_{t2}^{l} & \cdots & d_{tr}^{l} \end{pmatrix}, \quad l = 1, 2, \dots, L$.

In order to solve the integrated game model in a multiconflict situation, the Entropy Weight Method is introduced below.

2.2. Integrated Game Model Based on the Entropy Weight Method

The payoff matrix of each objective in the integrated model is transformed into a standardized matrix by the extreme difference (range) method. For the payoff matrices $C^{l}$ and $D^{l}$ of the $l$th $(l = 1, 2, \dots, L)$ objective, take $\bar{\theta}^{l} = \max_{1 \leq i \leq t,\, 1 \leq j \leq r} \{c_{ij}^{l}, d_{ij}^{l}\}$ and $\underline{\theta}^{l} = \min_{1 \leq i \leq t,\, 1 \leq j \leq r} \{c_{ij}^{l}, d_{ij}^{l}\}$.

For a negative objective,

(8) $e_{ij}^{l} = \dfrac{\bar{\theta}^{l} - c_{ij}^{l}}{\bar{\theta}^{l} - \underline{\theta}^{l}}, \qquad f_{ij}^{l} = \dfrac{\bar{\theta}^{l} - d_{ij}^{l}}{\bar{\theta}^{l} - \underline{\theta}^{l}}, \qquad 1 \leq i \leq t,\ 1 \leq j \leq r$.

For a positive objective,

(9) $e_{ij}^{l} = \dfrac{c_{ij}^{l} - \underline{\theta}^{l}}{\bar{\theta}^{l} - \underline{\theta}^{l}}, \qquad f_{ij}^{l} = \dfrac{d_{ij}^{l} - \underline{\theta}^{l}}{\bar{\theta}^{l} - \underline{\theta}^{l}}, \qquad 1 \leq i \leq t,\ 1 \leq j \leq r$.

Let $E^{l} = (e_{ij}^{l})_{t \times r}$ and $F^{l} = (f_{ij}^{l})_{t \times r}$ be the normalized matrices corresponding to $C^{l}$ and $D^{l}$, respectively.

For $E^{l}$ and $F^{l}$, the weight of each objective is calculated by the Entropy Weight Method [13, 14].

Calculate the entropy value of the $l$th $(l = 1, 2, \dots, L)$ objective:

(10) $h_l = -\lambda \sum_{i=1}^{t} \sum_{j=1}^{r} \left( \dfrac{e_{ij}^{l}}{R_l} \ln \dfrac{e_{ij}^{l}}{R_l} + \dfrac{f_{ij}^{l}}{R_l} \ln \dfrac{f_{ij}^{l}}{R_l} \right)$,

where $\lambda = 1/\ln(2tr)$ and $R_l = \sum_{i=1}^{t} \sum_{j=1}^{r} (e_{ij}^{l} + f_{ij}^{l})$. If $e_{ij}^{l} = 0$ or $f_{ij}^{l} = 0$, the corresponding term $(e_{ij}^{l}/R_l)\ln(e_{ij}^{l}/R_l)$ or $(f_{ij}^{l}/R_l)\ln(f_{ij}^{l}/R_l)$ is taken to be 0.

Calculate the difference index of the $l$th objective:

(11) $g_l = 1 - h_l, \quad l = 1, 2, \dots, L$.

Calculate the weight of the $l$th objective:

(12) $\omega_l = \dfrac{g_l}{\sum_{l=1}^{L} g_l}, \quad l = 1, 2, \dots, L$.

Weighting and summing the $E^{l}$ and $F^{l}$ of all objectives in the synthetic model gives

(13) $E = (e_{ij})_{t \times r} = \sum_{l=1}^{L} \omega_l E^{l}, \qquad F = (f_{ij})_{t \times r} = \sum_{l=1}^{L} \omega_l F^{l}$,

where $e_{ij} = \sum_{l=1}^{L} \omega_l e_{ij}^{l}$ and $f_{ij} = \sum_{l=1}^{L} \omega_l f_{ij}^{l}$.
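Formulas (10)-(12) amount to a short computation; the sketch below (our naming) returns the objective weights $\omega_l$ from the normalized matrices, using the convention that a zero entry contributes nothing to the entropy and a leading minus sign so that each $h_l$ lies in $[0, 1]$:

```python
import numpy as np

def entropy_weights(Es, Fs):
    """Formulas (10)-(12): Es[l], Fs[l] are the normalized t-by-r matrices
    E^l, F^l of objective l (entries in [0, 1]). Returns the weights omega_l."""
    t, r = Es[0].shape
    lam = 1.0 / np.log(2 * t * r)
    h = []
    for E, F in zip(Es, Fs):
        p = np.concatenate([E.ravel(), F.ravel()])
        p = p / p.sum()                                # division by R_l
        h.append(float(-lam * sum(q * np.log(q) for q in p if q > 0)))
    g = 1.0 - np.array(h)                              # difference indices (11)
    return g / g.sum()                                 # weights (12)
```

Two objectives with identical normalized matrices necessarily receive equal weights.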

By weighted summation, the above integrated model of the multiobjective bimatrix game in a multiconflict situation is transformed into a single-objective matrix game model $G = \{S_1, S_2, E, F\}$. In order to solve this game model more conveniently, the ADESA algorithm is proposed below.

3. The ADESA Algorithm

In this section, we outline a novel DE algorithm, the ADESA algorithm, and explain its steps in detail.

3.1. Differential Evolution (DE) Algorithm

The DE algorithm was first introduced by Storn and Price [22]. Thanks to its outstanding characteristics, such as a simple structure, robustness, speed, ease of understanding and implementation, and few control parameters, it has become more and more popular and has been extended to a variety of optimization problems. The DE algorithm mainly consists of four operations: population initialization, mutation, crossover, and selection.

3.1.1. Initialization

In the study of the DE algorithm, it is generally assumed that the initial population conforms to a uniform probability distribution. Each individual $X_i = (x_{i,1}, \dots, x_{i,j}, \dots, x_{i,D})$, $i = 1, \dots, N$, $j = 1, \dots, D$, is generated as

(14) $x_{i,j} = \mathrm{rand}(0, 1) \cdot (x_{i,j}^{U} - x_{i,j}^{L}) + x_{i,j}^{L}$,

where $N$ and $D$ denote the population size and the space dimension, respectively, and $x_{i,j}^{L}$ and $x_{i,j}^{U}$ are the lower and upper bounds of the search space.

3.1.2. Mutation

The mutation operation is the main feature distinguishing DE from other evolutionary algorithms. The mutant individual $V_i = (v_{i,1}, \dots, v_{i,j}, \dots, v_{i,D})$ is generated by

(15) $v_i^{t+1} = x_{r_1}^{t} + F (x_{r_2}^{t} - x_{r_3}^{t})$,

where $r_1$, $r_2$, and $r_3$ are randomly generated integers in $[1, N]$ with $r_1 \neq r_2 \neq r_3 \neq i$, $F$ is a scaling factor that controls the size of the difference between two individuals, and $t$ is the current generation.

3.1.3. Crossover

Crossover between the parent and the mutant with a given probability generates a new individual $U_i = (u_{i,1}, \dots, u_{i,j}, \dots, u_{i,D})$:

(16) $u_{i,j}^{t+1} = \begin{cases} v_{i,j}^{t+1}, & \text{if } \mathrm{rand}_j \leq CR \text{ or } j = \mathrm{rnbr}(i), \\ x_{i,j}^{t}, & \text{otherwise}, \end{cases}$

where $\mathrm{rand}_j \in [0, 1]$ is a random value, $CR \in [0, 1]$ is the crossover operator, and $\mathrm{rnbr}(i)$ is a randomly selected integer in $[1, D]$ that ensures at least one component of the new individual is inherited from the mutant vector.

3.1.4. Selection

In a problem with boundary constraints, it is necessary to ensure that the parameter values of the new individuals lie in the feasible region. A simple method is boundary treatment, in which new individuals beyond the bounds are replaced by parameter vectors randomly generated in the feasible region. The offspring $X_i^{t+1}$ is then generated by selecting between the trial individual and the parent according to the following formula (for a minimization problem):

(17) $X_i^{t+1} = \begin{cases} U_i^{t+1}, & \text{if } f(U_i^{t+1}) < f(X_i^{t}), \\ X_i^{t}, & \text{otherwise}, \end{cases}$

where $f$ is the fitness function. The pseudocode of the standard DE algorithm is shown in Algorithm 1.

Algorithm 1: DE.

Input: parameters $N$, $D$, $T$, $F$, $CR$, $\varepsilon$
Output: the best vector (solution) $\Delta$
  $t \leftarrow 0$ (initialization)
  for $i = 1$ to $N$ do
    for $j = 1$ to $D$ do
      $x_{i,j}^{t} = \mathrm{rand}(0,1)(x_{i,j}^{U} - x_{i,j}^{L}) + x_{i,j}^{L}$
    end for
  end for
  while $f(\Delta) > \varepsilon$ and $t \leq T$ do
    for $i = 1$ to $N$ do
      (mutation and crossover)
      for $j = 1$ to $D$ do
        $v_{i,j}^{t} = \mathrm{Mutation}(x_{i,j}^{t})$ (formula (15))
        $u_{i,j}^{t} = \mathrm{Crossover}(v_{i,j}^{t}, x_{i,j}^{t})$ (formula (16))
      end for
      (selection)
      if $f(u_i^{t}) < f(x_i^{t})$ then
        $x_i^{t} \leftarrow u_i^{t}$
      end if
      if $f(x_i^{t}) < f(\Delta)$ then
        $\Delta \leftarrow x_i^{t}$
      end if
    end for
    $t = t + 1$
  end while
  return the best vector $\Delta$
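Assuming a population of real vectors and a minimization problem, Algorithm 1 can be sketched in Python as follows (parameter names follow the text; implementation details such as clipping to the bounds instead of regenerating out-of-bounds vectors are our choices):

```python
import numpy as np

def de(f, bounds, N=20, T=200, F=0.6, CR=0.8, seed=0):
    """Minimal DE/rand/1/bin sketch of Algorithm 1 (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T                 # bounds: (low, high) per dimension
    D = len(lo)
    X = rng.random((N, D)) * (hi - lo) + lo            # initialization, formula (14)
    fit = np.array([f(x) for x in X])
    for _ in range(T):
        for i in range(N):
            r1, r2, r3 = rng.choice([k for k in range(N) if k != i], 3, replace=False)
            v = np.clip(X[r1] + F * (X[r2] - X[r3]), lo, hi)   # mutation, formula (15)
            cross = rng.random(D) <= CR
            cross[rng.integers(D)] = True              # crossover, formula (16)
            u = np.where(cross, v, X[i])
            fu = f(u)
            if fu < fit[i]:                            # greedy selection, formula (17)
                X[i], fit[i] = u, fu
    k = int(fit.argmin())
    return X[k], float(fit[k])
```

On a small sphere function, this sketch reaches a near-zero minimum within a few hundred generations.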

Although the DE algorithm is widely used in optimization problems, as the complexity of the problems to be solved increases, it shows some disadvantages, such as slow convergence, low accuracy, and weak stability. Therefore, in order to solve the multiobjective game better, the DE algorithm is improved as follows.

3.2. The Adaptive DE (ADE) Algorithm

Since the DE algorithm mainly performs the genetic operation through the differential mutation operator, its performance depends chiefly on the choice of mutation and crossover operations and the related parameters. Many scholars have verified that the mutation factor $F$ and crossover operator $CR$ directly affect the search capability and solving efficiency of DE. In order to give the algorithm better global search capability and convergence speed, adaptive mutation and crossover operators are adopted:

(18) $\lambda = e^{1 - T/(T + 1 - t)}, \qquad F = F_0 \cdot 2^{\lambda}, \qquad CR = CR_0 \cdot 2^{\lambda}$,

where $t$ is the current iteration, $T$ is the maximum number of iterations, and $F_0$ and $CR_0$ are the initial mutation factor and crossover operator, respectively.
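The schedule (18) starts at $F = 2F_0$ and $CR = 2CR_0$ when $t = 1$ and decays toward $F_0$ and $CR_0$ as $t \to T$; a minimal sketch (the helper name is ours):

```python
import math

def adaptive_params(t, T, F0=0.4, CR0=1.0):
    """Formula (18): lambda = e^(1 - T/(T + 1 - t)) falls from 1 to nearly 0,
    so F and CR shrink from 2*F0 and 2*CR0 toward F0 and CR0 over the run."""
    lam = math.exp(1.0 - T / (T + 1.0 - t))
    return F0 * 2.0 ** lam, CR0 * 2.0 ** lam
```

Early generations thus search broadly with large $F$ and $CR$, while late generations exploit with values near the initial settings.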

There are two main traditional mutation operations:

(19) (i) DE/rand/1/bin: $v_i^{t+1} = x_{r_1}^{t} + F (x_{r_2}^{t} - x_{r_3}^{t})$; (ii) DE/best/1/bin: $v_i^{t+1} = x_{\mathrm{best}}^{t} + F (x_{r_2}^{t} - x_{r_3}^{t})$,

where $x_{\mathrm{best}}^{t}$ is the best individual in the current generation, that is, the best position found so far. The first mutation method has strong global search capability but slow convergence; the second converges quickly but easily falls into local optima. To overcome these shortcomings, many researchers have improved the mutation strategy [28, 29], and a new mutation operation is proposed in this paper:

(20) $v_i^{t+1} = \lambda x_{r_1}^{t} + (1 - \lambda) x_{\mathrm{best}}^{t} + F (x_{r_2}^{t} - x_{r_3}^{t})$.

The new mutation strategy has strong global search ability and fast convergence speed and can find better solutions.
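A sketch of the new operator (20); with $\lambda = 1$ it reduces to DE/rand/1 and with $\lambda = 0$ to DE/best/1 (the helper signature is our choice):

```python
import numpy as np

def mutate(X, i, best, F, lam, rng):
    """Formula (20): a convex blend of DE/rand/1 (lam = 1) and DE/best/1 (lam = 0)."""
    r1, r2, r3 = rng.choice([k for k in range(len(X)) if k != i], 3, replace=False)
    return lam * X[r1] + (1.0 - lam) * best + F * (X[r2] - X[r3])
```

When all population members coincide, the difference term vanishes and the mutant is simply the blend of a population member and the best vector.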

3.3. Simulated Annealing Algorithm (SA)

SA is both a statistical method and a global optimization algorithm. It was first proposed by Metropolis et al. in 1953 [21], and Kirkpatrick et al. first used SA to solve combinatorial optimization problems in 1983 [30]. SA is derived from a simulation of the solid annealing and cooling process. Its main feature is accepting inferior solutions with a certain probability according to the Metropolis rule, which keeps the algorithm from falling into local optima and from premature convergence.

The Metropolis rule defines the probability $P_{ij}(M)$ that an object transfers from state $i$ to state $j$ at temperature $M$:

(21) $P_{ij}(M) = \begin{cases} 1, & E_j \leq E_i, \\ e^{-(E_j - E_i)/(K M)}, & \text{otherwise}, \end{cases}$

where $E_i$ and $E_j$ are the internal energies of the solid in states $i$ and $j$, respectively, $K$ is the attenuation parameter, and $\Delta E = E_j - E_i$ is the increment of internal energy.

When a combinatorial optimization problem is simulated by solid annealing, the internal energy $E$ is identified with the objective function value and the temperature $M$ becomes a control parameter. That is,

(22) $P_{i+1}(M) = e^{-(f(X_{i+1}) - f(X_i))/(K M)}$,

where $f(X_i)$ is the function value and $M$ is the temperature at the $i$th iteration. When $f(X_{i+1}) \leq f(X_i)$, we accept $X_{i+1}$ with probability 1; otherwise, we accept the inferior solution $X_{i+1}$ with probability $P_{i+1}(M)$. In this paper, SA is applied to DE to enhance its global optimization ability.
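The acceptance rule (21)-(22) in isolation can be written as follows (a sketch; passing in the random generator is our design choice, which makes the rule easy to test):

```python
import math

def metropolis_accept(f_new, f_old, M, K, rng):
    """Metropolis rule (21)/(22): always accept an improvement; accept a worse
    solution with probability exp(-(f_new - f_old) / (K * M))."""
    if f_new <= f_old:
        return True
    return rng.random() < math.exp(-(f_new - f_old) / (K * M))
```

At high temperature $M$ almost every inferior solution is accepted, which diversifies the population; as $M$ cools, the rule becomes effectively greedy.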

3.4. The ADESA Algorithm Experimental Steps

The pseudocode of the ADESA algorithm is shown in Algorithm 2, and the specific steps are described in detail as follows:

Step 1: set the parameters of the ADESA, such as N, D, F0, CR0, K, M, T, ε.

Step 2: initialize the population, where each individual satisfies

(23) $\sum_{k_i=1}^{m_i} x_i^{k_i} = 1, \quad x_i^{k_i} \geq 0, \quad i = 1, \dots, N;\ k_i = 1, \dots, m_i$.

Step 3: calculate the fitness function value fx of each individual in population Pt and determine xbestt and fxbestt.

Step 4: the next generation population P1(t) is generated by mutation of formula (20), and population P2(t) is generated by crossover of formula (16).

Step 5: the offspring population P(t+1) is selected from the P(t) and P2(t) populations according to formulas (21) and (22), and calculate the fitness function value of population P(t+1).

Step 6: determine whether to end this procedure according to the accuracy and the maximum number of iterations and output the optimal value; otherwise, turn to step 3.

Algorithm 2: ADESA.

Input: parameters $N$, $D$, $T$, $K$, $M$, $F_0$, $CR_0$, $\varepsilon$
Output: the best vector (solution) $\Delta$
  $t \leftarrow 0$ (initialization)
  for $i = 1$ to $N$ do
    for $j = 1$ to $D$ do
      $x_{i,j}^{t} = \mathrm{rand}(0,1)(x_{i,j}^{U} - x_{i,j}^{L}) + x_{i,j}^{L}$
    end for
  end for
  while $f(\Delta) > \varepsilon$ and $t \leq T$ do
    for $i = 1$ to $N$ do
      (mutation and crossover)
      for $j = 1$ to $D$ do
        $v_{i,j}^{t} = \mathrm{Mutation}(x_{i,j}^{t})$ (formula (20))
        $u_{i,j}^{t} = \mathrm{Crossover}(v_{i,j}^{t}, x_{i,j}^{t})$ (formula (16))
      end for
      (Metropolis selection)
      if $f(u_i^{t}) < f(x_i^{t})$ then
        $x_i^{t} \leftarrow u_i^{t}$ (accept the new solution)
      else if $P_1 > \mathrm{rand}$, where $P_1 = e^{-(f(u_i^{t}) - f(x_i^{t}))/(K M)}$, then
        $x_i^{t} \leftarrow u_i^{t}$ (accept the inferior solution)
      end if
      if $f(x_i^{t}) < f(\Delta)$ then
        $\Delta \leftarrow x_i^{t}$
      end if
    end for
    $M = K M$; $t = t + 1$
  end while
  return the best vector $\Delta$

According to Algorithm 1, the experimental steps of the ADE algorithm are the same as those of the DE algorithm, so its time complexity does not change. Comparing the implementations of Algorithms 1 and 2, the ADESA algorithm has only one more judgment in the selection operation than the DE algorithm, so the time complexity again remains unchanged. This computational complexity provides a guarantee for the performance of the proposed ADESA algorithm. In the next section, the superiority of the proposed algorithm is verified on an example of a multiobjective military game.
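Putting the pieces together, the following is a compact end-to-end sketch of Algorithm 2 under our own implementation choices (boundary clipping, a separately tracked best vector, and a fixed number of generations instead of the $\varepsilon$ stopping test):

```python
import math
import numpy as np

def adesa(f, bounds, N=20, T=60, F0=0.4, CR0=1.0, K=0.998, M=100.0, seed=1):
    """Sketch of Algorithm 2: adaptive parameters (18), blended mutation (20),
    binomial crossover (16), Metropolis selection (21)-(22)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    D = len(lo)
    X = rng.random((N, D)) * (hi - lo) + lo          # initialization (14)
    fit = np.array([f(x) for x in X])
    best, f_best = X[fit.argmin()].copy(), float(fit.min())
    for t in range(1, T + 1):
        lam = math.exp(1.0 - T / (T + 1.0 - t))      # formula (18)
        F, CR = F0 * 2.0 ** lam, CR0 * 2.0 ** lam
        for i in range(N):
            r1, r2, r3 = rng.choice([k for k in range(N) if k != i], 3, replace=False)
            v = lam * X[r1] + (1 - lam) * best + F * (X[r2] - X[r3])  # formula (20)
            v = np.clip(v, lo, hi)                   # boundary treatment (our choice)
            cross = rng.random(D) <= CR
            cross[rng.integers(D)] = True            # crossover (16)
            u = np.where(cross, v, X[i])
            fu = f(u)
            # Metropolis selection: keep improvements; keep an inferior trial
            # with probability exp(-(f(u) - f(x)) / (K * M))
            if fu < fit[i] or rng.random() < math.exp(-(fu - fit[i]) / (K * M)):
                X[i], fit[i] = u, fu
                if fu < f_best:
                    best, f_best = u.copy(), fu
        M *= K                                       # cooling schedule
    return best, f_best
```

With a near-zero starting temperature the Metropolis step is effectively greedy, and the sketch behaves like the ADE algorithm on a smooth test function.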

4. Experimental Design and Results

4.1. Military Example

Suppose that there are conflicts between the Red and Blue armies on islands A and B. Under the dual objectives of attack time and damage effectiveness, Red's strategies on each island are no-attack and attack, and Blue's strategies are retreat and defend. The two parties have different strategic purposes on the two islands, so their degrees of preference differ even under the same objective. In addition, owing to limited military equipment, it is assumed that the Red army cannot attack both islands at the same time and the Blue army cannot defend both islands at the same time. In this way, the available strategies on island A are $\{\alpha_1^A, \alpha_2^A\} = \{\text{no-attack}, \text{attack}\}$ for Red and $\{\beta_1^A, \beta_2^A\} = \{\text{retreat}, \text{defend}\}$ for Blue; on island B, they are $\{\alpha_1^B, \alpha_2^B\} = \{\text{no-attack}, \text{attack}\}$ for Red and $\{\beta_1^B, \beta_2^B\} = \{\text{retreat}, \text{defend}\}$ for Blue.

Aiming at the attack time objective $(l = 1)$, the payoff matrices of Red and Blue on islands A and B are, respectively (the payoff values are expressed in the preference order $-2, -1, 0, 1, 2$), as follows:

(24) $A^{11} = \begin{pmatrix} 1 & 0 \\ 2 & 2 \end{pmatrix}, \quad B^{11} = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \quad A^{12} = \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}, \quad B^{12} = \begin{pmatrix} 1 & 2 \\ 1 & 0 \end{pmatrix}$.

Aiming at the damage effectiveness objective $(l = 2)$, the payoff matrices of Red and Blue on islands A and B are, respectively, as follows:

(25) $A^{21} = \begin{pmatrix} 0 & 1 \\ 2 & 1 \end{pmatrix}, \quad B^{21} = \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}, \quad A^{22} = \begin{pmatrix} 1 & 1 \\ 2 & 0 \end{pmatrix}, \quad B^{22} = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}$.

Obviously, considering the real situations on islands A and B, the available strategy sets of Red and Blue are

(26) $S_1 = \{\alpha_1, \alpha_2, \alpha_3\} = \{(\alpha_1^A; \alpha_1^B), (\alpha_1^A; \alpha_2^B), (\alpha_2^A; \alpha_1^B)\}$, $S_2 = \{\beta_1, \beta_2, \beta_3\} = \{(\beta_1^A; \beta_1^B), (\beta_1^A; \beta_2^B), (\beta_2^A; \beta_1^B)\}$.

Using formula (6), the synthetic payoff matrices of Red and Blue under the two objectives are, respectively,

(27) $R^{1} = \begin{pmatrix} 3 & 0 & 2 \\ 2 & 1 & 1 \\ 4 & 1 & 0 \end{pmatrix}, \quad B^{1} = \begin{pmatrix} 2 & 3 & 1 \\ 0 & 1 & 1 \\ 1 & 2 & 0 \end{pmatrix}, \quad R^{2} = \begin{pmatrix} 1 & -1 & 0 \\ 2 & 0 & 1 \\ 3 & 1 & 2 \end{pmatrix}, \quad B^{2} = \begin{pmatrix} 4 & 2 & 3 \\ 3 & 1 & 2 \\ 1 & -1 & 2 \end{pmatrix}$.

Using formulas (8) and (9), the above synthetic payoff matrices are transformed into the standardized matrices

(28) $E^{1} = \begin{pmatrix} 0.25 & 1.00 & 0.50 \\ 0.50 & 0.75 & 0.75 \\ 0 & 0.75 & 1.00 \end{pmatrix}, \quad F^{1} = \begin{pmatrix} 0.50 & 0.25 & 0.75 \\ 1.00 & 0.75 & 0.75 \\ 0.75 & 0.50 & 1.00 \end{pmatrix}, \quad E^{2} = \begin{pmatrix} 0.40 & 0 & 0.20 \\ 0.60 & 0.20 & 0.40 \\ 0.80 & 0.40 & 0.60 \end{pmatrix}, \quad F^{2} = \begin{pmatrix} 1.00 & 0.60 & 0.80 \\ 0.80 & 0.40 & 0.60 \\ 0.40 & 0 & 0.60 \end{pmatrix}$.

By formulas (10)-(12), the entropy values of the two objectives are $h_1 = 0.9581$ and $h_2 = 0.9311$, the difference indices are $g_1 = 0.0419$ and $g_2 = 0.0689$, and the weights are $\omega_1 = 0.3779$ and $\omega_2 = 0.6221$. Finally, by formula (13), we can get

(29) $E = \begin{pmatrix} 0.3433 & 0.3779 & 0.3134 \\ 0.5622 & 0.4078 & 0.5323 \\ 0.4977 & 0.5323 & 0.7512 \end{pmatrix}, \quad F = \begin{pmatrix} 0.8111 & 0.4677 & 0.7811 \\ 0.8756 & 0.5323 & 0.6567 \\ 0.5323 & 0.1889 & 0.7512 \end{pmatrix}$.
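The numbers above can be reproduced directly from $E^1, F^1, E^2, F^2$ (rounded to four decimals, they match $h_1 = 0.9581$, $h_2 = 0.9311$, $\omega_1 = 0.3779$, and the first entry of $E$); a short verification sketch:

```python
import numpy as np

E1 = np.array([[0.25, 1.00, 0.50], [0.50, 0.75, 0.75], [0.00, 0.75, 1.00]])
F1 = np.array([[0.50, 0.25, 0.75], [1.00, 0.75, 0.75], [0.75, 0.50, 1.00]])
E2 = np.array([[0.40, 0.00, 0.20], [0.60, 0.20, 0.40], [0.80, 0.40, 0.60]])
F2 = np.array([[1.00, 0.60, 0.80], [0.80, 0.40, 0.60], [0.40, 0.00, 0.60]])

lam = 1.0 / np.log(2 * 3 * 3)            # lambda = 1 / ln(2tr), t = r = 3

def entropy(E, F):
    """h_l of formula (10); zero entries contribute nothing."""
    p = np.concatenate([E.ravel(), F.ravel()])
    p = p / p.sum()                       # division by R_l
    return float(-lam * sum(q * np.log(q) for q in p if q > 0))

h1, h2 = entropy(E1, F1), entropy(E2, F2)
g1, g2 = 1 - h1, 1 - h2                   # formula (11)
w1, w2 = g1 / (g1 + g2), g2 / (g1 + g2)   # formula (12)
E = w1 * E1 + w2 * E2                     # formula (13)
F = w1 * F1 + w2 * F2
```

Running this reproduces the reported entropies, weights, and weighted matrices to four decimal places.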

Using formula (4), we solve the following optimization problem:

(30) $\min f(x) = \max\{f_{11} - f_1, f_{12} - f_1, f_{13} - f_1, 0\} + \max\{f_{21} - f_2, f_{22} - f_2, f_{23} - f_2, 0\}$,

subject to $x_{11} + x_{12} + x_{13} = 1$, $x_{21} + x_{22} + x_{23} = 1$, $0 \leq x_{11}, x_{12}, x_{13}, x_{21}, x_{22}, x_{23} \leq 1$, where

(31) $f_1 = (0.3433 x_{11} + 0.5622 x_{12} + 0.4977 x_{13}) x_{21} + (0.3779 x_{11} + 0.4078 x_{12} + 0.5323 x_{13}) x_{22} + (0.3134 x_{11} + 0.5323 x_{12} + 0.7512 x_{13}) x_{23}$,
$f_{11} = 0.3433 x_{21} + 0.3779 x_{22} + 0.3134 x_{23}$, $f_{12} = 0.5622 x_{21} + 0.4078 x_{22} + 0.5323 x_{23}$, $f_{13} = 0.4977 x_{21} + 0.5323 x_{22} + 0.7512 x_{23}$,
$f_2 = (0.8111 x_{11} + 0.8756 x_{12} + 0.5323 x_{13}) x_{21} + (0.4677 x_{11} + 0.5323 x_{12} + 0.1889 x_{13}) x_{22} + (0.7811 x_{11} + 0.6567 x_{12} + 0.7512 x_{13}) x_{23}$,
$f_{21} = 0.8111 x_{11} + 0.8756 x_{12} + 0.5323 x_{13}$, $f_{22} = 0.4677 x_{11} + 0.5323 x_{12} + 0.1889 x_{13}$, $f_{23} = 0.7811 x_{11} + 0.6567 x_{12} + 0.7512 x_{13}$.
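As a sanity check, the two strategy profiles that appear as equilibria in Section 4.2 both make the objective in (30) vanish; the sketch below evaluates $f$ at $x_1 = x_2 = (0, 0, 1)$ and at $x_1 = (0, 1, 0)$, $x_2 = (1, 0, 0)$:

```python
import numpy as np

E = np.array([[0.3433, 0.3779, 0.3134],
              [0.5622, 0.4078, 0.5323],
              [0.4977, 0.5323, 0.7512]])
F = np.array([[0.8111, 0.4677, 0.7811],
              [0.8756, 0.5323, 0.6567],
              [0.5323, 0.1889, 0.7512]])

def f(x1, x2):
    """Objective of (30): zero exactly at a Nash equilibrium of G = {S1, S2, E, F}."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    v1, v2 = x1 @ E @ x2, x1 @ F @ x2
    return max((E @ x2).max() - v1, 0.0) + max((x1 @ F).max() - v2, 0.0)
```

Both reported equilibria give an objective value of zero, while, for instance, the pure profile $x_1 = x_2 = (1, 0, 0)$ does not.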

4.2. Results and Discussion

In order to solve the game, the DE, ADE, and ADESA algorithms are used on the above optimization problem. According to [24, 31, 32] and experimental experience, the parameters are set to $N = 20$, $D = 6$, $CR = 0.8$, $F = 0.6$, $CR_0 = 1$, $F_0 = 0.4$, $K = 0.998$, $M = 100$, $T = 60$, and $\varepsilon = 10^{-8}$. The calculation results and the corresponding Nash equilibria are shown in Table 1. Figure 1 compares the DE, ADE, and ADESA algorithms on this problem, and Figure 2 shows the first ten iterations of Figure 1.

Table 1: Results of the game $G = \{S_1, S_2, E, F\}$.

Number | Red | Blue | Synthetic strategy | Strategy string
Nash equilibrium 1 | (0, 0, 1) | (0, 0, 1) | $(\alpha_3, \beta_3)$ | $(\alpha_2^A; \alpha_1^B), (\beta_2^A; \beta_1^B)$
Nash equilibrium 2 | (0, 1, 0) | (1, 0, 0) | $(\alpha_2, \beta_1)$ | $(\alpha_1^A; \alpha_2^B), (\beta_1^A; \beta_1^B)$

Figure 1: Comparison of the DE, ADE, and ADESA algorithms on this problem.

Figure 2: The first ten iterations of Figure 1.

Table 1 shows that solving this military game with the three algorithms yields two Nash equilibria: $(\alpha_3, \beta_3)$ and $(\alpha_2, \beta_1)$, with corresponding feasible strategy choices $(\alpha_2^A; \alpha_1^B), (\beta_2^A; \beta_1^B)$ and $(\alpha_1^A; \alpha_2^B), (\beta_1^A; \beta_1^B)$. The feasible strategy $(\alpha_1^A; \alpha_2^B), (\beta_1^A; \beta_1^B)$ means that Red attacks island B and Blue retreats on both islands. Given that Blue is unlikely to retreat on both islands, this solution is abandoned. The other solution, $(\alpha_2^A; \alpha_1^B), (\beta_2^A; \beta_1^B)$, means that Red attacks island A and Blue defends it, which is consistent with the real battle situation, so the final result of this game is $(\alpha_3, \beta_3)$.

The results above show that the proposed ADESA algorithm finds the equilibrium solutions under the two objectives well. Comparing it with the DE and ADE algorithms, ADESA has a faster convergence speed and avoids falling into local optima. It provides a reference for solving more complex multiobjective games in the future.

5. Conclusions

In this paper, we study the multiobjective game in a multiconflict situation. First, an integrated multiobjective game model is established in a multiconflict situation; then the multiobjective game is transformed into a single-objective game by the Entropy Weight Method. Finally, using the equivalence theorem for Nash equilibria, finding the Nash equilibria of the game is equivalent to finding the optimal solution of an optimization problem. Given the excellent performance of the DE algorithm on optimization problems, we choose it to solve this problem. Because a game is itself a complex decision-making problem, and this paper studies the multiobjective game in a multiconflict situation, the DE algorithm is improved into the ADESA algorithm, which applies the Metropolis rule of the SA algorithm to the selection operation of the ADE algorithm; the ADE algorithm itself is a self-adaptive improvement of the control parameters of the DE algorithm. At the end of the paper, the computational results of a military example show that the proposed ADESA algorithm solves the multiobjective game quickly and effectively, which provides a reference for future research on related issues. In reality, people may face situations in which multiple objectives shift into one another, so we will further consider the more complex dynamic multiobjective game in a multiconflict situation. It would also be interesting to develop other algorithms and compare them with the ADESA algorithm.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (Grant nos. 71961003 and 12061020), the Qian Jiaohe YJSCXJH (029), and the Scientific Research Foundation of Guizhou University (49).

References

[1] M. Baumann, M. Weil, J. F. Peters, N. Chibeles-Martins, and A. B. Moniz, "A review of multi-criteria decision making approaches for evaluating energy storage systems for grid applications," Renewable and Sustainable Energy Reviews, vol. 107, pp. 516–534, 2019.
[2] Y. Peng and Y. Shi, "Editorial: multiple criteria decision making and operations research," Annals of Operations Research, vol. 197, no. 1, pp. 1–4, 2012.
[3] E. Rashidi, M. Jahandar, and M. Zandieh, "An improved hybrid multi-objective parallel genetic algorithm for hybrid flow shop scheduling with unrelated parallel machines," International Journal of Advanced Manufacturing Technology, vol. 49, no. 9–12, pp. 1129–1139, 2010.
[4] H. Shi, Y. Dong, and L. Yi, "Study on the route optimization of military logistics distribution in wartime based on the ant colony algorithm," Computer and Information Science, vol. 3, no. 1, pp. 139–143, 2010.
[5] V. Fragnelli and L. Pusillo, "Multiobjective games for detecting abnormally expressed genes," Mathematics, vol. 8, no. 3, p. 350, 2020.
[6] X. Cai, H. Zhao, and S. Shang, "An improved quantum-inspired cooperative co-evolution algorithm with multi-strategy and its application," Expert Systems with Applications, vol. 171, article 114629, 2021.
[7] C. S. Lee, "Multi-objective game-theory models for conflict analysis in reservoir watershed management," Chemosphere, vol. 87, no. 6, pp. 608–613, 2012.
[8] T. Inohara, S. Takahashi, and B. Nakano, "Integration of games and hypergames generated from a class of games," Journal of the Operational Research Society, vol. 48, no. 4, pp. 423–432, 1997.
[9] W. Yanjie, S. Yexin, and Z. Xianhai, "Integration model of bimatrix games in two-person multi-conflict situations," Journal of Naval University of Engineering, vol. 21, no. 1, pp. 22–25, 2009.
[10] I. Nishizaki and M. Sakawa, "Equilibrium solutions in multiobjective bimatrix games with fuzzy payoffs and fuzzy goals," Fuzzy Sets and Systems, vol. 111, no. 1, pp. 99–116, 2000.
[11] V. Vidyottama, S. Chandra, and C. R. Bector, "Bi-matrix games with fuzzy goals and fuzzy pay-offs," Fuzzy Optimization and Decision Making, vol. 3, no. 4, pp. 327–344, 2004.
[12] S. Yexin, "Integration model of multi-objective bimatrix games in multiconflict situations," Journal of Huazhong University of Science and Technology (Nature Science Edition), vol. 37, no. 6, pp. 32–35, 2009.
[13] J. Tian, L. Tao, and H. Jiao, "Entropy weight coefficient method for evaluating intrusion detection systems," in Proceedings of the International Symposium on Electronic Commerce and Security, pp. 592–598, Guangzhou, China, August 2008.
[14] S. Ding and Z. Shi, "Studies on incidence pattern recognition based on information entropy," Journal of Information Science, vol. 31, no. 6, pp. 497–502, 2005.
[15] N. G. Pavlidis, K. E. Parsopoulos, and M. N. Vrahatis, "Computing Nash equilibria through computational intelligence methods," Journal of Computational and Applied Mathematics, vol. 175, no. 1, pp. 113–136, 2005.
[16] A. S. Strekalovskii and R. Enkhbat, "Polymatrix games and optimization problems," Automation and Remote Control, vol. 75, no. 4, pp. 632–645, 2014.
[17] R. Enkhbat, N. Tungalag, A. Gornov, and A. Anikin, "The curvilinear search algorithm for solving three-person game," in Proceedings of DOOR 2016 (CEUR-WS), Vladivostok, Russia, 2016.
[18] L. Huimin, X. Shuwen, and Y. Yanlong, "Differential evolution particle swarm optimization algorithm based on good point set for computing Nash equilibrium of finite noncooperative game," AIMS Mathematics, vol. 6, no. 2, pp. 1309–1323, 2021.
[19] A. K. Qin, V. L. Huang, and P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical optimization," IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 398–417, 2009.
[20] F. Zhao, F. Xue, Y. Zhang, W. Ma, C. Zhang, and H. Song, "A hybrid algorithm based on self-adaptive gravitational search algorithm and differential evolution," Expert Systems with Applications, vol. 113, pp. 515–530, 2018.
[21] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of state calculations by fast computing machines," The Journal of Chemical Physics, vol. 21, no. 6, pp. 1087–1092, 1953.
[22] R. Storn and K. Price, "Minimizing the real functions of the ICEC'96 contest by differential evolution," in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 824–844, Nagoya, Japan, May 1996.
[23] D. Wu, X. Junjie, Y. Song, and H. Zhao, "Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem," Applied Soft Computing, vol. 100, article 106724, 2021.
[24] A. W. Mohamed and A. K. Mohamed, "Adaptive guided differential evolution algorithm with novel mutation for numerical optimization," International Journal of Machine Learning and Cybernetics, vol. 10, no. 2, pp. 253–277, 2019.
[25] Z. Yanping, "An adaptive differential evolution algorithm and its application," Computer Technology and Development, no. 7, pp. 119–123, 2019.
[26] Y. Wu, Y. Wu, and X. Liu, "Couple-based particle swarm optimization for short-term hydrothermal scheduling," Applied Soft Computing, vol. 74, pp. 440–450, 2019.
[27] H. Yi Chao, X. Z. Wang, K. Q. Liu, and Y. Q. Wang, "Convergent analysis and algorithmic improvement of differential evolution," Journal of Software, vol. 21, no. 5, pp. 875–885, 2010.
[28] A. W. Mohamed and P. N. Suganthan, "Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation," Soft Computing, vol. 22, no. 10, pp. 3215–3235, 2018.
[29] W. Deng, S. Shang, X. Cai, H. Zhao, Y. Song, and J. Xu, "An improved differential evolution algorithm and its application in optimization problem," Soft Computing, vol. 25, pp. 5277–5298, 2021.
[30] S. Kirkpatrick, C. D. Gelatt, and M. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671–680, 1983.
[31] R. Gamperle, S. D. Muller, and P. Koumoutsakos, "A parameter study for differential evolution," Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, vol. 10, pp. 293–298, 2002.
[32] L. Ingber, "Simulated annealing: practice versus theory," Mathematical and Computer Modelling, vol. 18, pp. 29–57, 1993.