A Novel Self-Adaptive Mixed-Variable Multiobjective Ant Colony Optimization Algorithm in Mobile Edge Computing

Mobile edge computing (MEC) provides physical resources closer to end users, making it a good complement to cloud computing. The boom in MEC brings many multiobjective optimization (MOO) problems. This paper proposes a MOO algorithm called SAMOACOMV, which provides a new choice for solving MOO problems in MEC. We improve the ACOMV algorithm, which is only suitable for solving mixed-variable single-objective optimization (SOO) problems, and propose a MOACOMV algorithm suitable for solving mixed-variable MOO problems. To address the dependence of the MOACOMV algorithm's performance on parameter settings, we further propose the SAMOACOMV algorithm, which uses a self-adaptive parameter-setting scheme. The paper also designs some mixed-variable MOO benchmark problems to test and compare the performance of the SAMOACOMV algorithm. The experiments indicate that SAMOACOMV has excellent comprehensive performance and is an ideal choice for solving mixed-variable MOO problems.


Introduction
In recent years, mobile edge computing (MEC), as a powerful computing paradigm, has provided sufficient computing resources for the internet of things (IoT) [1]. Edge computing extends traditional cloud services to the edge of the network, closer to users, and is suitable for network services with low latency requirements. There are many multiobjective optimization (MOO) problems in MEC, and research on MOO for MEC is a hot topic. Liu et al. [1] propose a multiobjective resource allocation method, named MRAM, which is leveraged to optimize the time cost of IoT applications, load balance, and energy consumption of MEC servers. Huang et al. [2] present a multiobjective whale optimization algorithm (MOWOA) based on time and energy consumption to solve the optimal offloading mechanism of computation offloading in MEC. Fan et al. [3] propose an algorithm based on particle swarm optimization (PSO) to solve the MOO of container-based microservice scheduling, aiming to optimize network latency among microservices, reliability of microservice applications, and load balancing of the cluster.
Xu et al. [4] present a multiobjective computation offloading method (MOC) for the internet of vehicles (IoV) in MEC, which jointly decreases the load balancing rate, reduces the energy consumption of edge computing devices (ECDs), and shortens the time spent processing computing tasks.
This paper studies multiobjective optimization algorithms, providing a new choice for MOO in MEC. The classic MOO approach converts the multiple objective function values into a single value according to certain rules and then applies single-objective optimization algorithms [5]. There are three common converting rules [6]: a weighted sum of the objective function values; the distance between the objective function vector and a given decision vector; and the maximum relative difference between the respective objective function values and their corresponding given values. The classic MOO approach is essentially single-objective optimization, which cannot really solve the MOO problem. Most modern MOO algorithms are heuristic algorithms that can find the Pareto solution set. Some famous algorithms are NSGA-II [7], SPEA2 [8], PAES [9], and NSGA-III [10], based on evolutionary algorithms; SMPSO [11] and OMOPSO [12], based on particle swarm optimization; GDE3 [13], MOEAD [14], and MOEA/D-IEpsilon [15], based on differential evolution; MOACO [16], P-ACO [17], MACS [18], Monaco [19], and SACO [20], based on ant colony optimization; and so on. Other heuristic MOO algorithms include those based on simulated annealing, tabu search, and immune algorithms, as well as new algorithms obtained by improving or hybridizing various algorithms. According to the no-free-lunch (NFL) theorems [21], the average performance of all algorithms over all MOO problems is the same, but algorithms show different performance on different optimization problems. Therefore, studying which algorithms suit a specific optimization problem, or which problems suit the characteristics of a given algorithm, is another hot topic.
Referring to the classification of optimization problems in [22], MOO problems can be divided into four categories according to whether their variable domains are continuous:
(i) Continuous-variable (CV) MOO: the range of every variable is a continuous domain. These continuous variables are usually mapped to real numbers.
(ii) Pseudo-discrete-variable (PDV) MOO: the range of every variable is an ordered discrete domain, meaning that the variable values can be arranged in ascending or descending order according to certain rules. Pseudo-discrete variables are usually mapped to integers.
(iii) Real-discrete-variable (RDV) MOO: the range of every variable is a disordered discrete domain, meaning that the variable values cannot be arranged according to certain rules. Such discrete variables are usually called categorical variables.
(iv) Mixed-variable MOO: the ranges of the variables include both continuous and discrete domains.
According to the NFL theorem, in order to obtain better optimization performance, different types of MOO problems should be solved with different types of optimization algorithms. Research on continuous-variable MOO and pseudo-discrete-variable MOO is relatively mature; most of the aforementioned heuristic algorithms or their variants are suitable for solving these two types of problems. There are only a few studies on mixed-variable MOO.
Manson et al. [23] present a novel Bayesian multiobjective algorithm (MVMOO) capable of simultaneously optimizing both discrete and continuous input variables. The algorithm utilizes Gaussian processes as surrogates in combination with a novel distance metric based upon Gower similarity. MVMOO was able to perform competitively when compared to NSGA-II with a substantially reduced experimental budget, providing a viable, efficient option when optimizing expensive mixed-variable multiobjective optimization problems.
Li et al. [24] propose an improved version of OLAR-PSO-d named OLAR-PSO-DE. The OLAR-PSO-DE utilizes a modified stagnation strategy and a dynamic hybridization strategy. It is employed to optimize the design of an engine hood, which is a high-dimensional, multiobjective, mixed-variable optimization problem. The comparative study and final hood optimization results prove that the proposed method can effectively solve complicated engineering problems.
Khokhar et al. [25] modify the continuous-variable version of the PSP algorithm to handle mixed variables. The performance of PSP was tested using a set of quality indicators with a benchmark test suite and compared with state-of-the-art multiobjective optimization algorithms. The modified PSP is found to be competitive when the total number of function evaluations is limited but faces an increased computational challenge as the number of design variables increases.
However, there are relatively few studies on discrete-variable MOO and mixed-variable MOO, yet such MOO problems are often encountered in engineering. Therefore, research on these two types of MOO algorithms is of great significance. This paper proposes the SAMOACOMV algorithm by improving the ACOMV algorithm [26]. The main contributions are as follows:
(i) Improve the ACOMV algorithm, originally used to solve mixed-variable SOO problems, to make it suitable for solving mixed-variable MOO problems.
(ii) Propose a self-adaptive parameter-setting scheme for the algorithm and verify its superiority over a manual parameter adjustment scheme by comparison.
(iii) Design some mixed-variable MOO benchmark problems to test and compare the performance of the SAMOACOMV algorithm.
(iv) Apply the SAMOACOMV algorithm to a spring design engineering problem and compare its performance with other well-known MOO algorithms.

ACOMV Algorithm.
The ACOMV algorithm [26] is an ant colony optimization algorithm proposed by K. Socha and M. Dorigo for solving mixed-variable problems. The algorithm has excellent comprehensive performance on mixed-variable optimization problems, but for purely continuous or purely discrete optimization it performs worse than some specialized algorithms. The basic process of the ACOMV algorithm is as follows. The first step is to initialize the solution archive by randomly creating some solutions and storing them in it. In the second step, the ants construct new solutions based on the solution archive; techniques such as local search or gradient descent can be used to improve the quality of the new solutions. The third step is to refresh the solution archive with the new solutions, keeping the best solutions in the archive. Steps 2 and 3 are repeated until the termination criteria are met.
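The three-step process above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the helper names `random_solution` and `construct_solution` are ours.

```python
import random

def acomv_sketch(objective, random_solution, construct_solution,
                 k=50, m=10, max_iter=100):
    """Minimal sketch of the ACOMV main loop (hypothetical helper names).

    random_solution() -> a random feasible solution
    construct_solution(archive) -> a new solution built from the archive
    """
    # Step 1: initialize the archive with k random solutions,
    # sorted by quality (smaller objective value = better).
    archive = sorted((random_solution() for _ in range(k)), key=objective)
    for _ in range(max_iter):
        # Step 2: m ants construct new solutions based on the archive.
        new_solutions = [construct_solution(archive) for _ in range(m)]
        # Step 3: keep only the best k of the k + m solutions.
        archive = sorted(archive + new_solutions, key=objective)[:k]
    return archive
```

With a trivial objective such as minimizing x² and ants that perturb the current best solution, the archive converges toward the optimum while always holding the k best solutions seen so far.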

The Structure, Initialization, and Refresh of the Solution Archive.
ACOMV maintains a solution archive T, whose size |T| = k can be set in advance. Assume an n-dimensional continuous optimization problem with k feasible solutions; ACOMV stores the n variable values of each feasible solution and its objective function value in the solution archive. Figure 1 depicts the structure of the solution archive, where s_j^i represents the value of the i-th variable of the j-th solution and w_j represents the weight of the j-th solution. The solutions in the archive are sorted by their quality (such as the value of the objective function), so the position of a solution in the archive reflects its preference (pheromone).
Before the algorithm starts, k solutions are randomly generated and stored in the solution archive T. In each iteration of the algorithm, m ants generate m new solutions. The new solutions and the solutions from the archive T form a set of k + m solutions, from which the k solutions with the best quality (such as objective function value) are taken to refresh the archive T. The solutions in the archive are always sorted by quality, with the best solution at the top. In this way, the search always tends toward better solutions, thereby solving the optimization problem.

Constructing New Solutions Probabilistically.
Each ant constructs a new solution incrementally, that is, it selects the values of the solution variables one by one. First, the ant selects a solution from the solution archive based on the selection probability. The selection probability of the j-th solution is given in (1), where ω_j can be calculated using various formulas. In this paper, the Gaussian function g(μ, σ) = g(1, qk) is selected, whose formula is given in (2). Here q is an algorithm parameter, and k is the number of solutions in the solution archive.
Then, a new solution is constructed based on the selected solution. According to the probability density function P(x) for each dimension of the solution, the ant probabilistically draws a new value in the neighborhood of that variable's value, and these new values form the new solution. The structure of the probability density function differs for different types of variables. The P(x) of continuous variables is given in (3), where g(x, μ, σ) is the Gaussian function of the variable x, μ is the mean, σ is the standard deviation, and ξ is an algorithm parameter. The P(x) of ordered discrete variables is the same as (3), but with two modifications: (i) the variable x is the index number of the ordered discrete value in its range; if the range of x is {large, medium, small}, then x = 1 when the value is "large", x = 2 when the value is "medium", and x = 3 when the value is "small"; (ii) the new value drawn according to P(x) is rounded to the closest index number in the domain; if the drawn value is 2.3, it is rounded to 2, which corresponds to "medium". The probability density function of disordered discrete variables is given in (4), where O_l^i represents the probability of selecting the l-th value from the domain D^i = {v_1^i, ..., v_{c_i}^i} of the i-th variable, and ω_l is the weight associated with the l-th available value, calculated as in (5). In (5), ω_jl is the weight corresponding to the best-quality solution in the archive whose value of this variable is not empty, calculated as in (2); in particular, if this variable is empty in all solutions, ω_jl is taken as 0. u_l^i is the number of solutions in the archive whose value of this variable is not empty, q is the same algorithm parameter as in (2), and η is the number of unused values in the domain D^i of this variable.
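The selection and sampling steps above can be illustrated with the standard ACOMV formulas from [26]: a rank-based Gaussian weight ω_j = g(1, qk) evaluated at rank j, a selection probability p_j = ω_j / Σ ω_r, and a Gaussian kernel around the chosen solution's value for continuous variables. The following is a hedged sketch; the function names and the σ scaling detail are ours.

```python
import math
import random

def solution_weights(k, q):
    # ω_j = Gaussian g(1, qk) evaluated at rank j (the best solution has
    # rank 1), so better-ranked solutions get larger weights.
    return [math.exp(-((j - 1) ** 2) / (2 * (q * k) ** 2)) /
            (q * k * math.sqrt(2 * math.pi)) for j in range(1, k + 1)]

def select_guide(archive, q):
    # Selection probability of the j-th solution: p_j = ω_j / Σ ω_r.
    w = solution_weights(len(archive), q)
    return random.choices(range(len(archive)), weights=w)[0]

def sample_continuous(archive, var, j, xi):
    # Draw a new value near archive[j][var] from a Gaussian whose standard
    # deviation is the ξ-scaled mean distance to the other archived values.
    mu = archive[j][var]
    k = len(archive)
    sigma = xi * sum(abs(s[var] - mu) for s in archive) / (k - 1)
    return random.gauss(mu, sigma)
```

For ordered discrete variables the same sampling would run over index numbers, with the result rounded to the nearest valid index, as described in the text.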

MOACOMV Algorithm.
The MOACOMV algorithm, suitable for solving MOO problems, is obtained by improving the single-objective optimization algorithm ACOMV. The main improvement is to introduce the Pareto set, i.e., the non-inferior solution set [27], into the solution archive. Specifically, the solutions in the archive are sorted according to their Pareto characteristics, and the best-quality solutions are placed at the top of the archive. After this improvement, good solutions are selected with higher probability, so the MOACOMV algorithm can find non-inferior solutions. The solutions in the archive are arranged according to the following two rules: (i) the solutions are sorted by non-inferior order, and solutions with smaller order values are placed at the top of the archive; following [28], the non-inferior order of a solution is given in Definition 1. (ii) Solutions with the same non-inferior order are sorted by their congestion degree, and solutions with a lower congestion degree are ranked higher in the archive; following [9], the congestion degree of a solution is given in Definition 2.
Of the two rules above, the first ensures that the algorithm can find non-inferior solutions, and the second ensures that the distribution of these non-inferior solutions is as uniform as possible. The MOACOMV algorithm designed according to these rules has excellent comprehensive performance.

Definition 1. Non-inferior order of a solution, NIO(S_j):
In the solution set T = {S_1, ..., S_j, ..., S_k}, take out its non-inferior solutions to form a solution set TU(z) with sequence number z = 0, and let the remaining solutions refresh the set T; repeat this process until T is empty, incrementing z by 1 at each repetition. Then NIO(S_j) is the sequence number z of the non-inferior solution set TU(z) that contains S_j.
Definition 2. Congestion degree of a solution, CD(S_x): Calculate the distances between F(S_x) of solution S_x and F(S_y) of the other solutions, and take the minimum distance d(x, T); its calculation is given in equation (1). Then CD(S_x) is d(x, T) multiplied by the adjustment coefficient α; its calculation is given in equation (2).
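Definition 1 can be implemented by repeatedly peeling off the current non-dominated set, as sketched below for a minimization problem; the helper names are ours.

```python
def dominates(fa, fb):
    # fa dominates fb if it is no worse in every objective
    # and strictly better in at least one (minimization).
    return (all(a <= b for a, b in zip(fa, fb)) and
            any(a < b for a, b in zip(fa, fb)))

def non_inferior_order(objs):
    # Definition 1: repeatedly extract the current non-dominated set;
    # solutions removed in round z receive NIO = z.
    remaining = set(range(len(objs)))
    nio, z = {}, 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i])
                            for j in remaining if j != i)}
        for i in front:
            nio[i] = z
        remaining -= front
        z += 1
    return [nio[i] for i in range(len(objs))]
```

For example, for the objective vectors (1, 1), (2, 2), and (0, 3), the first and third are mutually non-dominated (order 0), while (2, 2) is dominated by (1, 1) (order 1).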

SAMOACOMV Algorithm.
The ant colony algorithm needs several parameters, which have a huge impact on its performance. Since the convergence speed of the algorithm and the diversity of the solutions are always in tension, the purpose of studying parameter settings is to obtain a good compromise between them through proper parameter values.
This paper adopts a self-adaptive parameter control method to adjust the parameters of the MOACOMV algorithm according to the quality of the solution archive and the convergence speed of the algorithm. We call this MOO algorithm the SAMOACOMV algorithm.
The SAMOACOMV algorithm needs to set four parameters: the convergence speed ξ, the search-area size q, the number of ants m, and the solution archive size k. To balance the diversity and convergence abilities of SAMOACOMV, two setting methods for these four parameters are proposed.

Set Method for Parameters ξ and q.
The parameter ξ is used to adjust the convergence speed of the algorithm, and the parameter q is used to change the size of the search area. These two parameters are in conflict: when the search area increases or the convergence speed decreases, more Pareto solutions can be found with higher probability, but the calculation time becomes longer, and vice versa. To obtain a good Pareto solution archive within reasonable calculation time, we calculate a quality index of the solution archive and adjust the parameters ξ and q according to its value. The setting method for ξ and q is shown in Algorithm 1. In Algorithm 1, the quality index P_i(T) of the solution archive for the i-th iteration is calculated first; P_i(T) is the mean of the weighted sum of each objective function value and the congestion degree over all solutions in the archive. Next, the quality index's increment ΔP_i(T) and the parameters' increments Δξ_i and Δq_i are calculated. Finally, the new parameter values are obtained by subtracting the product of Δξ_i (respectively Δq_i), the step-size constant B, and a random number r from the old parameter values.
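Since the exact increment formulas of Algorithm 1 are not reproduced here, the following is only a hedged sketch of the update rule described in the text (new value = old value − B · r · Δ); treating the increments as proportional to ΔP_i(T), and the sign conventions and bounds, are our assumptions.

```python
import random

def adapt_xi_q(xi, q, p_prev, p_curr, B=0.05):
    """Hedged sketch of Algorithm 1: nudge ξ and q using the change in
    archive quality ΔP_i(T). The exact increment formulas are not given
    in the text, so Δξ = Δq = ΔP is an assumption."""
    dP = p_curr - p_prev          # quality index increment ΔP_i(T)
    d_xi, d_q = dP, dP            # assumed parameter increments
    r = random.random()           # random number r from the text
    xi_new = xi - B * r * d_xi    # new = old − B · r · Δ
    q_new = q - B * r * d_q
    # Keep parameters within sensible positive bounds (our assumption).
    return max(xi_new, 1e-3), max(q_new, 1e-3)
```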

Set Method for Parameters m and k.
The parameter m is the number of ants, and the parameter k is the solution archive size. The larger these two parameters are, the higher the probability of obtaining more Pareto solutions, but larger values also bring more computation and increase the time consumed. We set ENUM, the expected number of non-inferior solutions in the solution archive, according to the complexity of the problem, and then adjust these two parameters in real time according to the difference between ENUM and the actual number of Pareto solutions. The setting method for m and k is shown in Algorithm 2.
In Algorithm 2, the number of solutions in the archive whose non-inferior order NIO_i(S_j) is zero is counted first. Then the ratio factors rateArchive and rateAnt are calculated, which represent the archive size and the number of ants needed to produce one non-inferior solution, respectively. Finally, the new parameter values are set to the product of the old values, the ratio factors, the adjustment coefficient C, and the expected number ENUM.
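A sketch of Algorithm 2 as described above; the rounding and the lower bounds applied to the results are our assumptions.

```python
def adapt_m_k(m, k, n_noninferior, enum, C=1.0):
    """Sketch of Algorithm 2: scale the ant count m and archive size k by
    how many ants / archive slots were needed per non-inferior solution,
    times the expected number ENUM and adjustment coefficient C."""
    n = max(n_noninferior, 1)     # avoid division by zero
    rate_archive = k / n          # archive slots per non-inferior solution
    rate_ant = m / n              # ants per non-inferior solution
    k_new = max(int(C * enum * rate_archive), enum)
    m_new = max(int(C * enum * rate_ant), 1)
    return m_new, k_new
```

For example, if 25 of the 50 archived solutions are non-inferior while 100 are expected, the archive grows fourfold (2 slots per non-inferior solution × 100 expected).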

Experiment Results and Discussion
The application field and performance of an algorithm are usually studied by comparing the performance of different MOO algorithms on benchmark problems. Referring to some existing mixed-variable MOO algorithms [29-33], this paper designs some problems for the algorithm experiments and compares the proposed algorithm with other well-known MOO algorithms to verify its performance.

Experimental Environment.
The operating environment of the experiment is as follows: Thinkpad T470p computer; Core i7-7700HQ CPU (4 cores) * 2; 24 GB memory; 512 GB solid-state drive; Windows 10 operating system. The programming tool is Microsoft Visual Studio 2017, and the programming language is C#.
The variables of the eight benchmark problems are all continuous. In order to test MOO algorithms with mixed variables, we modify the problems to make some variables PDV and some RDV, so that the continuous problems become mixed problems. PDV and RDV values are calculated by the following equations, where N is the number of equal divisions of the value range. To allow the variable to take the value 0, N is a positive even number. RND(N) is a random nonnegative integer not greater than N. To make the distribution range of x_RDV larger, every number in {0, ..., N} is taken exactly once. The domain of x_PDV is a set of N + 1 ordered discrete values increasing from x_MIN to x_MAX, and the domain of x_RDV is a set of N + 1 disordered discrete values between x_MIN and x_MAX.
If N is large enough, the Pareto set of the mixed problems is similar to the Pareto set of the continuous problems.
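The construction of the PDV and RDV domains above can be sketched as follows; the helper names are ours, and the shuffle realizes "taking every number in {0, ..., N} exactly once" to disorder the domain.

```python
import random

def make_pdv_domain(x_min, x_max, N):
    # N + 1 ordered discrete values equally dividing [x_min, x_max];
    # N is a positive even number so that 0 can be a value when the
    # range is symmetric.
    step = (x_max - x_min) / N
    return [x_min + i * step for i in range(N + 1)]

def make_rdv_domain(x_min, x_max, N, seed=None):
    # Same N + 1 values, but shuffled so the domain is disordered:
    # every index in {0, ..., N} is used exactly once.
    values = make_pdv_domain(x_min, x_max, N)
    random.Random(seed).shuffle(values)
    return values
```

As N grows, both domains sample the original continuous range more densely, which is why the Pareto set of the mixed problem approaches that of the continuous one.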

Performance Metrics.
Convergence and diversity are usually the two most important criteria for the evaluation of MOO algorithms. Convergence refers to the distance from the non-dominated front generated by the optimization algorithm to the true Pareto front; diversity involves coverage area and uniformity; a front with wide coverage and good uniformity is always pursued.
We use generational distance (GD) [35] and inverted generational distance plus (IGD+) [36] to measure convergence, and generalized spread to measure coverage.
GD: let T* = {F*_1, ..., F*_i, ..., F*_|T*|} be a set of uniformly distributed Pareto-optimal points on the true Pareto front (TPF), and T = {F_1, ..., F_i, ..., F_|T|} be a non-dominated front of the problem. The GD of T is the average distance from each solution in T to the nearest reference point, where F_i is the objective vector corresponding to the solution S_i and d(F_j, F*_i) is the Euclidean distance between F_j and F*_i. IGD+: the IGD+ of T is the average distance from each reference point in T* to the nearest solution.
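Both metrics can be computed directly from their definitions; the dominance-aware IGD+ distance d+(F, F*) = sqrt(Σ_i max(f_i − f*_i, 0)²) for minimization follows the IGD+ proposal cited in the text. A minimal sketch:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def gd(front, reference):
    # GD: average distance from each point of the obtained front T
    # to its nearest reference point in T*.
    return sum(min(euclid(f, r) for r in reference)
               for f in front) / len(front)

def igd_plus(front, reference):
    # IGD+: average distance from each reference point in T* to the
    # nearest solution in T, using the dominance-aware distance
    # d+(F, F*) = sqrt(sum_i max(f_i - f*_i, 0)^2) for minimization.
    def d_plus(f, r):
        return math.sqrt(sum(max(fi - ri, 0.0) ** 2
                             for fi, ri in zip(f, r)))
    return sum(min(d_plus(f, r) for f in front)
               for r in reference) / len(reference)
```

Note the asymmetry: GD averages over the obtained front, while IGD+ averages over the reference set, so IGD+ also penalizes gaps in coverage.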
In IGD+, the distance between a reference point F* = (f*_1, ..., f*_v) and a solution F = (f_1, ..., f_v) is calculated in the objective space for a v-objective minimization problem. Generalized spread (see [36]): the generalized spread is an indicator that measures the distribution and spread of the obtained non-dominated front for problems with two or more objectives, where {e_1, e_2, ..., e_m} are m extreme solutions in T*.

Performance Improvement of SAMOACOMV.
In order to test the performance of SAMOACOMV, some experiments are carried out under the same conditions; for example, for the modified Fonseca problem, the maximum number of algorithm iterations is the same. Table 3 shows the performance of the 6 experiments for the Fonseca test problem. For each major cell of Table 3, the first column indicates the mean of 25 runs, the second the standard deviation, and the third the rank. Figure 2 plots the Pareto points obtained, with reference to the true Pareto front, using the results from 1 of the 25 runs. MOACOMV with setting scheme 5 generates only a few Pareto points, so it is not shown in the figure.
The following can be seen from the figure and the table: (i) the figure shows that the Pareto points generated by the SAMOACOMV algorithm lie right on the TPF, and the table shows that the overall rank value of the SAMOACOMV algorithm is the minimum; that is, the performance of the SAMOACOMV algorithm is the best among all experiments. (ii) When the MOACOMV algorithm adopts setting schemes 3 and 4, its performance is basically the same as that of SAMOACOMV, but with the other schemes its performance is very poor, which shows that the performance of the MOACOMV algorithm relies heavily on parameter settings.
Further experiments show that the performance of the SAMOACOMV algorithm is better than that of the MOACOMV algorithm; this advantage is especially obvious when the values of m and k are small.

Performance Comparison Using Benchmark Problems.
In order to test the performance of the algorithm, this paper compares SAMOACOMV with the well-known MOO algorithms NSGA-II, SPEA2, SMPSO, MOEAD, NSGA-III, and MOEA/D-IEpsilon. These algorithm implementations come from jMetal [30], and they can only be used to deal with CV MOO problems directly.
In order to compare the multiobjective optimization algorithms, each algorithm is allowed to run on the test problems for a constant number of function evaluations, and the performance metrics are calculated for each run. This procedure is repeated for 20 runs, and the mean and standard deviation of the performance metrics are recorded for each algorithm.

Results Based on Schaffer, Fonseca, and Kursawe Problems.
Tables 4-6 show the mean and standard deviation of generational distance, inverted generational distance plus, and generalized spread for the different algorithms, respectively. SAMOACOMV achieves good performance metric values on the Schaffer problem, while the other algorithms obtain no Pareto points or only a few. This may be because the only variable of the Schaffer problem is changed to a discrete variable, and the other algorithms cannot solve a purely discrete-variable problem. For the Fonseca and Kursawe problems, compared with the other techniques, SAMOACOMV obtains excellent GD and IGD+ values, only slightly weaker than MOEA/D-IEpsilon, but obtains a relatively poor generalized spread value. Figures 3-5 provide a graphical visualization of the Pareto points obtained for the Schaffer, Fonseca, and Kursawe problems, respectively. For the Schaffer problem, none of the algorithms apart from SAMOACOMV was able to produce any Pareto points close to the TPF. For the Fonseca problem, the performance of each algorithm is very good, and the generated Pareto points lie right on the TPF. For the Kursawe problem, the performance of each algorithm is also very good, except that some Pareto points generated by SMPSO and MOEAD deviate slightly from the TPF.

Results Based on ZDT (ZDT1-ZDT3) Problems.
From Tables 4 and 5, SAMOACOMV ranks 1 on the ZDT problems, which means that it outperforms the other algorithms on the performance metrics GD and IGD+. From Table 6, SAMOACOMV performs slightly worse on generalized spread for the ZDT problems, ranking 3.
From Figures 6-8, all the algorithms perform well, and the obtained Pareto fronts are basically consistent with the TPF. Some algorithms do not perform well on certain problems; for example, SPEA2 and MOEAD produce some points that deviate slightly from the TPF on the ZDT1 and ZDT2 problems.

Results Based on Viennet2 and Viennet3 Problems.
As shown in Table 4, the means of GD of SAMOACOMV for the Viennet2 and Viennet3 problems are about 0.000032 and 0.000021, respectively, which are only slightly worse than the mean values of NSGA-III but far better than the corresponding metric values of the other algorithms. Table 5 shows that, similarly to GD, SAMOACOMV has almost the best IGD+ mean for the Viennet2 and Viennet3 problems, around 0.0007 and 0.0005, respectively, only slightly worse than the mean of NSGA-III. From Table 6, SAMOACOMV performs worse on generalized spread for the Viennet2 and Viennet3 problems, ranking 4 and 3, respectively. Figures 9 and 10 show the approximated Viennet2 and Viennet3 fronts of each algorithm. It is clear that SAMOACOMV obtains many more Pareto points, that they converge well to the TPF, and that they are widely and uniformly distributed along it, which illustrates that it has better convergence and diversity than the other algorithms.
In summary, with GD, IGD+, and generalized spread taken into consideration, SAMOACOMV is quite a competitive algorithm in terms of the convergence of the generated Pareto solution set, with an overall rank of 1. However, SAMOACOMV is slightly weaker than the other algorithms in coverage performance, with an overall rank of 3.

Experiment Results on the Spring Design Problem.
The spring design problem is a common engineering practice problem and a widely used example for verifying the performance of MOO algorithms [37, 38]; it is a mixed-variable MOO problem containing continuous and discrete variables. We use the spring design problem to test the performance of the SAMOACOMV algorithm.

Problem Description.
The spring design problem consists of two discrete variables and one continuous variable. The objectives are to minimize the volume of the spring and to minimize the stress developed by applying a load. The variables are the diameter of the wire (d), the diameter of the spring (D), and the number of turns (N). Denoting the variable vector x = (x_1, x_2, x_3) = (N, d, D), the formulation of this problem with two objectives and eight constraints is as follows [38], where x_1 is an integer, x_2 is a discrete variable, and x_3 is a continuous variable. The parameters used are as follows:

Experiment Results
From Table 7, the means of GD, IGD+, and generalized spread of SAMOACOMV for the spring design problem are about 0.0014, 0.064, and 0.3532, respectively, much smaller than those of the other algorithms. The values of the three performance metrics of SAMOACOMV for the spring design problem all rank first, and its overall rank is also first, which shows that SAMOACOMV is optimal in both convergence and coverage.
As shown in Table 8, for the spring design problem, the number of archive points and the number of Pareto points of SAMOACOMV rank 2, and the percentage of Pareto points in the archive ranks 1, which means that SAMOACOMV has the highest comprehensive efficiency in finding Pareto points. The obtained Pareto front is plotted in Figure 11. The TPF represents the set of non-inferior solutions obtained by merging all experimental results from all independent runs of all algorithms and removing the inferior solutions. SMPSO obtains only a few Pareto points, so it is not shown in the figure. It can be seen from Figure 11 that many points of NSGA-II, SPEA2, and GDE3 do not converge to the TPF, and some points of SPEA2 and GDE3 are far away from it.
The Pareto points of NSGA-II, SPEA2, and GDE3 have poor distributions, and SPEA2 and GDE3 cover only part of the TPF. In contrast, the Pareto points obtained by SAMOACOMV are widely and uniformly distributed along the TPF, which illustrates that it has better convergence and diversity than the other algorithms.

Conclusion
In this work, we have modified the single-objective optimization algorithm ACOMV to handle mixed-variable MOO problems and proposed a self-adaptive parameter-setting scheme.
The performance of SAMOACOMV was then thoroughly tested using a set of performance metrics on a well-designed benchmark test suite, and compared with state-of-the-art multiobjective optimization algorithms. For all benchmark problems, the SAMOACOMV algorithm has good convergence performance, and its GD and IGD+ values are almost the best. However, the generalized spread of SAMOACOMV is slightly worse, which means that its coverage performance is slightly weaker than that of the other algorithms. For the spring design problem, the SAMOACOMV algorithm obtains a widely and uniformly distributed Pareto front and has the best convergence and coverage performance.
In general, the SAMOACOMV algorithm is an excellent MOO algorithm, which adds a new choice for solving MOO problems.

Data Availability
Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure

The approach proposed in this paper was published at the 2020 IEEE International Congress on Cybermatics (iThings/GreenCom/CPSCom/SmartData/Blockchain-2020) [39]. Based on the conference paper, this paper mainly expands as follows: a new congestion degree of the solution is defined to rank the solutions in the archive, the self-adaptive strategy for setting the parameters m and k of the SAMOACOMV algorithm is modified, and some new mixed-variable MOO benchmark problems are designed to test and compare the performance of the SAMOACOMV algorithm. New performance metrics, GD, IGD+, and generalized spread, are used to evaluate the performance of the algorithms. All experiments were redone, and the corresponding text, figures, and tables of experimental results were updated.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.