Analysis of Multitasking Evolutionary Algorithms under the Order of Solution Variables

Recently, it was demonstrated that the multitasking evolutionary algorithm (MTEA), a newly proposed paradigm, can solve multiple optimization problems simultaneously in a single run, breaking through a limitation of traditional evolutionary algorithms (EAs), with good convergence and exploration performance. As a novel algorithm, MTEA still leaves much space unexplored. Generally speaking, the order of solution variables has no significant influence on single-tasking EAs. To our knowledge, however, the effect of the order of variables in the multitasking scenario has not been explored. To fill this research gap, three orders of variables for the multitasking scenario are proposed in this paper: full reverse order, bisection reverse order, and trisection reverse order. An important feature of these orderings is that an individual recovers its original form after the reordering is applied twice. To verify our idea, these orderings are embedded into MTEA. The experimental results reveal that the effect of the different variable orders is universal but not significant enough for practical application. Furthermore, tasks with high similarity and a high degree of intersection are sensitive to the order of variables and exert a strong influence on each other.


Introduction
Optimization problems exist in all fields of science, engineering, and industry. In many cases, such optimization problems involve numerous decision variables, complex structured objectives, and multiform constraints [1,2]. In general, traditional mathematical optimization techniques encounter difficulties in solving such practical optimization problems in their original form. Inspired by the natural selection mechanism of "survival of the fittest" and the laws of genetic information transmission in biological evolution, evolutionary algorithms (EAs) were proposed to solve this kind of complex optimization problem. Evolutionary algorithms simulate the process of species reproduction through program iteration, treat the problem to be solved as the environment, and seek the optimal solution through natural evolution. EAs are powerful, efficient, flexible, and reliable, and research on evolutionary algorithms has been very rich [3][4][5][6].
Thus, evolutionary algorithms have been widely used in real-life applications, including battery health diagnosis and management [7][8][9][10]. The emergence of evolutionary algorithms has helped solve many tricky practical problems, but there is still room for improvement. In practice, real-world problems are rarely isolated; some of them are similar or related. Therefore, we can handle several similar or complementary problems at the same time. As a new paradigm in evolutionary computation, the multitasking evolutionary algorithm (MTEA) is very different from the traditional evolutionary algorithm, which utilizes experience (individual and global optimal guidance) and selective pressure to solve single-objective or multiobjective problems [11]. The biological basis of the EA is single-gene inheritance [12,13]. In contrast, MTEA incorporates the concept of coordinated evolution between genes and cultures. Intuitively, multitasking is the equivalent of a multigene environment: when multiple tasks are optimized at the same time, a task can exploit the similarity of different gene evolutions to learn information that benefits its own evolution. Knowledge drawn from past learning experiences can thus be constructively applied to more complex or unseen tasks. Multitasking optimization (MTO) significantly improves the efficiency of evolutionary optimization by solving multiple problems at once [11][12][13]. In the process of MTO, both gene migration and diversity play an important role when multiple tasks are optimized simultaneously, and positive or negative gene transfer reveals the nature of MTO [14]. In [13], MTEA is used to solve practical multiobjective problems. At the same time, MTEA has good universality and can be combined with operators with strong search ability from other evolutionary algorithms.
For example, both the classical evolutionary algorithm and particle swarm optimization (PSO) can be combined with the MTEA [15].
In traditional single-tasking optimization, the order of variables has no significant influence on the solution process [16][17][18]. Under the action of selective pressure, the solution will eventually approach the global optimum [12]. For both single-objective and multiobjective optimization problems, the order of variables plays no major role in EAs. In a sense, applying a new order of variables to a particular objective function can be thought of as creating a different objective function with the same optimal value. As a result, variable order has been intentionally or unintentionally ignored in the evolutionary algorithm and optimization community. On the contrary, the situation is significantly different for MTO problems: in MTO, the optimization process of one task can affect the optimization process and results of other tasks. At present, the influence of the order of variables on MTO has not been studied. Keeping this in mind, we demonstrate the effect of variable order on single-tasking and multitasking evolutionary algorithms in this paper.
In the experimental part, three transformation methods are designed to explore the influence of the order of variables on solution quality in multitasking scenarios. The experiments show that the variable order affects the multitasking optimization process, and that the degree of influence is related to the correlation between tasks, such as the task similarity and the degree of intersection of their optimal solutions. In practice, however, the actual impact is not significant. Furthermore, the experiments also show that this effect is universal in the multitasking scenario.
To summarize, the core contributions of the current work are outlined as follows. (1) The influence of the order of solution variables on single-tasking and multitasking optimization problems is analyzed from the viewpoint of the evolutionary mechanism: the order of variables has no impact on single-tasking evolutionary algorithms but does affect multitasking evolutionary algorithms. (2) Three orders of variables are proposed in this paper: full reverse order, bisection reverse order, and trisection reverse order. An important feature of these orderings is that an individual recovers its original form after the reordering is applied twice.
(3) To verify our idea, these orderings are embedded into MTEA. The experimental results reveal that the effect of the different variable orders is universal but not significant enough for practical application. Furthermore, tasks with high similarity and a high degree of intersection are sensitive to the order of variables and exert a strong influence on each other. The rest of this paper is organized as follows. Section 2 gives a brief overview of the original multifactorial evolutionary algorithm (MFEA) and introduces the MTEA based on the multipopulation evolution model; multifactorial differential evolution (MFDE) and related works on MTEA are also reviewed there. In Section 3, the influence of the order of solution variables on single-tasking and multitasking optimization algorithms is demonstrated, and the three orders of variables are introduced. Next, in Section 4, we carry out experiments to verify our conjectures, covering the design of the experiments and the discussion of the results. Finally, Section 5 summarizes the paper and looks forward to future research.

Multifactorial Evolutionary Algorithm.
In practice, many problems are related to each other to varying degrees. In other words, the universal similarity between problems is the motivation for multitasking optimization. By exploiting this similarity, a multitasking optimization algorithm can solve multiple problems at the same time. Assume that there are K tasks: T_1, T_2, ..., T_K. A task T_j can be represented by a function f_j(x), where j ∈ {1, 2, ..., K}. Mathematically, a multitasking optimization problem can be expressed as G(X) = min{f_1(x), f_2(x), ..., f_K(x)}. MFEA maps multiple problems into a unified space by the unified random-key scheme [13], and each individual in the search space has the following four characteristics.
Definition 1. Factorial cost: the factorial cost φ_i^k denotes the objective value of an individual p_i on a particular task T_k. Every individual has K factorial costs, one for each of the K tasks.
Definition 2. Factorial rank: the factorial rank r_i^k denotes the index of individual p_i in the list of population members sorted in ascending order of their factorial costs on task T_k. It should be noted that, in multiobjective multitasking optimization, the factorial rank is computed from nondominated sorting and crowding distance; this paper focuses only on single-objective multitasking optimization, so that case is not elaborated here.

Complexity
Definition 3. Skill factor: the skill factor τ_i of individual p_i is the index of the task on which the individual achieves its best factorial rank, i.e., τ_i = argmin_k {r_i^k}. The skill factor is regarded as the computational equivalent of a cultural trait; in the spirit of memetic computation, the cultural trait of one individual can be transmitted to another.
Definition 4. Scalar fitness: the scalar fitness of individual p_i is computed from its best factorial rank as c_i = 1/r_i^{τ_i}. As shown in Algorithm 1, assume that K optimization tasks are to be performed simultaneously. First, we initialize N individuals in the unified search space Y and evaluate the initial population current-pop by calculating the factorial cost φ_i^k of each individual p_i, where i ∈ {1, 2, ..., N}. Then, we calculate the skill factor of each p_i according to φ_i^k. After initialization, as shown in line 5 of Algorithm 1, the iteration begins: the offspring-pop containing N children is produced according to Algorithm 2, and Algorithm 3 assigns a skill factor to each child. Next, the offspring and the parents are merged to form the transitional-pop of 2N individuals, from which the N best individuals are selected as the population of the next generation. The crossover and mutation operators of the traditional genetic algorithm are also applied in the multitasking evolutionary algorithm. How the offspring are produced depends on the skill factors of the parents and the random mating probability (rmp); the details are given in Algorithm 2. The offspring are produced by crossover directly when both parents have the same skill factor. Otherwise, when the parents have different skill factors, the offspring are generated either through crossover, subject to the random mating probability, or by mutation. The parameter rmp permits cross-cultural mating between different tasks: a larger rmp means more knowledge exchange between two tasks, while a smaller value indicates the opposite. An appropriate rmp balances thorough scanning of small areas of the search space with exploration of the whole space. Unlike in traditional evolutionary algorithms, offspring are not evaluated on every task straight away after being generated.
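The four definitions above can be condensed into a short routine. The following is an illustrative sketch (the function and variable names are ours, not from the MFEA reference code) that computes factorial ranks, skill factors, and scalar fitness from a table of factorial costs:

```python
def skill_and_scalar_fitness(factorial_costs):
    """factorial_costs[i][k] is the cost of individual i on task k
    (lower is better). Returns (ranks, skill factors, scalar fitness)."""
    N = len(factorial_costs)
    K = len(factorial_costs[0])
    ranks = [[0] * K for _ in range(N)]
    for k in range(K):
        # Factorial rank: 1-based position after sorting by cost on task k.
        order = sorted(range(N), key=lambda i: factorial_costs[i][k])
        for pos, i in enumerate(order):
            ranks[i][k] = pos + 1
    # Skill factor: index of the task on which the individual ranks best.
    skill = [min(range(K), key=lambda k: ranks[i][k]) for i in range(N)]
    # Scalar fitness: reciprocal of the best factorial rank.
    scalar = [1.0 / ranks[i][skill[i]] for i in range(N)]
    return ranks, skill, scalar
```

For instance, an individual that ranks first on some task obtains scalar fitness 1.0 regardless of how poorly it ranks elsewhere, which is what lets task specialists survive environmental selection.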
After offspring generation, skill factors are assigned to the offspring according to the mating mode by which they were produced; this procedure is detailed in Algorithm 3.
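Assortative mating and vertical cultural transmission can be sketched together as follows. This is a simplified illustration, assuming caller-supplied crossover and mutate operators; the branching on skill factors and rmp follows the description above:

```python
import random

def assortative_mating(p1, p2, tau1, tau2, rmp, crossover, mutate):
    """Assortative mating (Algorithm 2) plus skill-factor assignment
    (Algorithm 3). Returns a list of (child, skill_factor) pairs."""
    if tau1 == tau2:
        # Same cultural background: mate directly, children keep the culture.
        c1, c2 = crossover(p1, p2)
        return [(c1, tau1), (c2, tau2)]
    if random.random() < rmp:
        # Cross-cultural mating: each child imitates one random parent.
        c1, c2 = crossover(p1, p2)
        return [(c, random.choice([tau1, tau2])) for c in (c1, c2)]
    # Otherwise each parent produces a mutant child of its own culture.
    return [(mutate(p1), tau1), (mutate(p2), tau2)]
```

A child is later evaluated only on the task given by its assigned skill factor, which is what keeps the evaluation cost of MFEA comparable to a single-task EA.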

MTEA as Multipopulation Evolution Model.
Different from the classical MFEA, the multipopulation multitasking evolutionary algorithm no longer adopts a single population but initializes K subpopulations according to the number of tasks. Individuals in a subpopulation evolve for a specific task throughout the optimization process [19]. Figure 1 illustrates a multipopulation optimization model with two optimization tasks. Clearly, task 1 and task 2 each have their own population. The dotted lines in Figure 1 represent possible scenarios: in reproductive selection, parents may come from the same task-specific subpopulation or from another one. In this way, knowledge sharing and optimization efficiency can be improved during reproduction. A core feature of the multipopulation evolution model is that the crossover and mutation operators are deemed to help exchange information and assist in finding promising solutions.
Another important feature is that parent and offspring individuals must belong to, and evolve within, the same subpopulation. One benefit of this is keeping each population as stable as possible. At the end of optimization, each task reports the best solution found in its corresponding subpopulation.
It should be noted here that the interpopulation crossover probability (icp) in MTEA, which controls the intensity of knowledge transfer between different tasks, differs from rmp in MFEA. The parameter icp in MTEA directly controls knowledge transfer between tasks, whereas rmp in MFEA controls knowledge sharing when parents have different skill factors.

Differential Evolution and Multifactorial Differential Evolution.
As a branch of stochastic EAs, differential evolution (DE), originally proposed by Price and Storn in 1995 [20], has proven to be an effective, robust, and reliable global optimizer. DE distinguishes itself from other EAs with its individual-difference-based mutation and crossover: a new candidate component is produced by adding the weighted difference of two randomly selected population individuals to a third individual. For the original DE, mutation, crossover, and selection are the three key components, described as follows.
The mutation operation enables DE to explore the search space and maintain diversity. In [21], five commonly used mutation strategies are given as follows:

DE/rand/1: V_{i,g} = X_{r1,g} + F(X_{r2,g} - X_{r3,g})
DE/best/1: V_{i,g} = X_{best,g} + F(X_{r1,g} - X_{r2,g})
DE/rand/2: V_{i,g} = X_{r1,g} + F(X_{r2,g} - X_{r3,g}) + F(X_{r4,g} - X_{r5,g})
DE/best/2: V_{i,g} = X_{best,g} + F(X_{r1,g} - X_{r2,g}) + F(X_{r3,g} - X_{r4,g})
DE/current-to-best/1: V_{i,g} = X_{i,g} + F(X_{best,g} - X_{i,g}) + F(X_{r1,g} - X_{r2,g})

where V_{i,g} denotes the mutant vector with respect to each individual X_{i,g} at generation g, D is the dimension of the problem, r1, r2, r3, r4, and r5 are random and mutually exclusive population indices that also differ from i, F is the scaling factor which controls the amplitude of the difference vector, and X_{best,g} is the best individual found so far at generation g. The goal of the crossover operator is to build trial vectors by recombining the current vector and the mutant one. The family of DE algorithms employs two crossover schemes: exponential crossover and binomial crossover. The binomial crossover is utilized in this paper and is briefly discussed below. In binomial crossover, the j-th component of the trial vector U_{i,g} is defined as

U_{i,g}^j = V_{i,g}^j if rand ≤ CR or j = rand_j, and U_{i,g}^j = X_{i,g}^j otherwise,

where CR ∈ [0, 1] is the predefined crossover rate, rand is a uniform random number within [0, 1], and rand_j ∈ {1, 2, ..., D} is a randomly selected index used to ensure that at least one dimension of the trial vector is changed.
After the crossover, a greedy selection mechanism chooses the better of the parent vector X_{i,g} and the trial vector U_{i,g} according to their fitness values: for minimization, X_{i,g+1} = U_{i,g} if f(U_{i,g}) ≤ f(X_{i,g}), and X_{i,g+1} = X_{i,g} otherwise.
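Putting DE/rand/1 mutation, binomial crossover, and greedy selection together, one generation of classic DE can be sketched as follows (an illustrative implementation, not the authors' code; the default F and CR match the experiment settings later in the paper):

```python
import random

def de_rand_1_step(pop, f_obj, F=0.5, CR=0.9):
    """One generation of DE/rand/1/bin minimizing f_obj.
    pop is a list of real-valued vectors (lists of floats)."""
    N, D = len(pop), len(pop[0])
    new_pop = []
    for i in range(N):
        # Mutation: three mutually distinct indices, all different from i.
        r1, r2, r3 = random.sample([j for j in range(N) if j != i], 3)
        v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
        # Binomial crossover: dimension j_rand is always taken from v.
        j_rand = random.randrange(D)
        u = [v[d] if (random.random() <= CR or d == j_rand) else pop[i][d]
             for d in range(D)]
        # Greedy selection between parent and trial vector.
        new_pop.append(u if f_obj(u) <= f_obj(pop[i]) else pop[i])
    return new_pop
```

Because selection is greedy per individual, the best objective value in the population can never deteriorate from one generation to the next.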
In recent research, MTO has been conducted with differential evolution, named multifactorial differential evolution (MFDE). MFDE is similar to the classic MFEA process, except for the operation of producing children.

(1) Generate N individuals in Y to form the initial population P_0 as the current-pop (C).
(2) Evaluate each individual on every task and compute its skill factor.
(3) while the stopping criterion is not met do
(4) Produce the offspring-pop (O) according to Algorithm 2 and assign skill factors according to Algorithm 3.
(5) Merge C and O into the transitional-pop (T). For each P_i in T, update the scalar fitness (c_i) and skill factor (τ_i) of P_i.
(6) Select the N fittest members from T to form C. Set gen = gen + 1.
end while
Algorithm 1: Basic framework of the MFEA.

The assortative mating of MFDE for producing children is summarized in Algorithm 4. Firstly, a random number rand is generated in the range [0, 1]. Then, if rand is less than rmp, two solutions x_{r2}^g and x_{r3}^g whose skill factors differ from that of x_{r1}^g are randomly selected to generate the new solution V_i^g, where x_{r1}^g is a randomly selected solution that shares a common skill factor with V_i^g. Otherwise, V_i^g is generated by the original DE/rand/1 strategy within the individual's own task.
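The rmp-gated choice of difference individuals described above can be sketched as follows; this is an illustrative reading of Algorithm 4 (names are ours), assuming pop and skill hold the merged population and its skill factors:

```python
import random

def mfde_trial_vector(pop, skill, i, rmp, F=0.5):
    """Build the MFDE mutant for individual i. With probability rmp the
    difference pair is borrowed from a different task; otherwise plain
    DE/rand/1 is applied within i's own task."""
    same = [j for j in range(len(pop)) if skill[j] == skill[i] and j != i]
    other = [j for j in range(len(pop)) if skill[j] != skill[i]]
    r1 = random.choice(same)  # base vector shares i's skill factor
    if other and random.random() < rmp:
        r2, r3 = random.sample(other, 2)  # cross-task difference vector
    else:
        r2, r3 = random.sample([j for j in same if j != r1], 2)
    D = len(pop[i])
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
```

The knowledge transfer thus rides on the difference vector: with rmp close to 1, search directions discovered by one task are frequently injected into the other.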

Related Works on MTEA.
Since MFEA was first proposed in 2016, research on multitasking EAs has gained wide attention. As a universal framework, the multitasking EA can be implemented in a variety of ways [22]. At present, the main research directions of this community are the algorithm framework, algorithm improvement, and typical applications. The algorithm framework is mainly divided into two types. One is the multitasking optimization framework based on dynamic subgroups, in which the task an individual is directed to optimize changes continuously during the optimization process [23]. The other directly assigns each individual a task based on the number of optimized tasks, and the individual follows the assigned task throughout the optimization process. Experimental results have shown that the multitasking evolutionary algorithm based on multiple populations is also very advantageous [22,[24][25][26].
Many improvements to the multitasking optimization algorithm have been proposed [27][28][29][30]. Generally speaking, the operation of knowledge transfer between different tasks has a crucial impact on algorithm performance [31]. The mode and frequency of migration are active directions of multitasking optimization research. The multitasking evolutionary algorithm can also be combined with other evolutionary algorithms to absorb their advantages. For example, differential evolution and particle swarm optimization combine well with MTEA and perform better than MFEA [32][33][34]. In addition, improved crossover operators and search mechanisms can be combined with MTEA to improve algorithm performance [35]. At the same time, ideas from machine learning can be well combined with multitasking optimization [36,37].
The multitasking optimization algorithm has not only been proved feasible in theory [38] but has also shown good efficiency on practical problems. Multitasking algorithms can be used to solve complex engineering design and expensive optimization problems. In the literature, many works apply MTEA to tackle real-world problems, such as the vehicle routing problem [39][40][41][42], optimization and control of photovoltaic systems [43], bilevel optimization problems [44], complex supply chain network management [45], the double-pole balancing problem [46], and composite manufacturing problems [47].

Single-Task Optimization Evolution Model.
We take the single-objective optimization problem as an example to investigate the effect of the order of variables. In general, an optimization problem can be formulated as

min f(x), subject to h_i(x) = 0, i = 1, ..., p, and g_j(x) ≤ 0, j = 1, ..., q,

where h_i(x) and g_j(x) are the equality and inequality constraints, respectively. When the order of variables of a candidate solution x is changed, the new solution x_new can be regarded as an individual of another function f_new(x). It is noteworthy that the two functions have the same search space and the same optimal solution (after the coordinate transformation). For example, as shown in Figure 2, under full reversal the individual x_A = (1, 2) becomes x_A^new = (2, 1) and x_B = (3, 4) becomes x_B^new = (4, 3). Even more important, as illustrated in Figure 2, no matter what genetic mechanism (crossover, mutation, etc.) a given evolutionary algorithm applies, the offspring are identical (up to the same reordering) because the inputs of the genetic mechanism are identical. From this, we can discuss the effect of variable order on single-task optimization. Although a change in the order of variables is formally a transformation of the objective function in the single-task scenario, there is virtually no change in the optimization problem itself. Consequently, the optimization result obtained by any evolutionary algorithm is not influenced by changing the order of variables.
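The argument above can be checked directly: reordering both parents consistently merely reorders the offspring, so a single-task EA is unaffected. A minimal sketch, using an arithmetic (blend) crossover as a stand-in for the paper's operators:

```python
def reorder(x):
    # Full reverse order as an example permutation of the variables.
    return x[::-1]

def blend_crossover(p1, p2, alpha=0.5):
    # Arithmetic crossover: componentwise weighted average of the parents.
    return [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]

a, b = [1.0, 2.0], [3.0, 4.0]
# Crossover commutes with the reordering: the child of the reordered
# parents is exactly the reordered child of the original parents.
assert blend_crossover(reorder(a), reorder(b)) == reorder(blend_crossover(a, b))
```

The same commutation holds for any componentwise operator, which is why the single-task search trajectory is invariant to the permutation.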

Multitasking Optimization Problem.
Mathematically, a multitasking optimization problem can be defined as follows:

{x_1*, x_2*, ..., x_K*} = argmin {f_1(x_1), f_2(x_2), ..., f_K(x_K)},

where f_k(x_k): X_k → R represents the k-th optimization task with search space X_k and x_k = (x_{k,1}, x_{k,2}, ..., x_{k,D_k}) is a feasible solution in the solution space, in which D_k is the dimensionality of X_k. In a multitasking optimization problem, the effect of changes in the order of variables is no longer insignificant. Before the analysis, we need to distinguish order-independent functions from order-dependent functions; according to the role of variable order, optimization functions can be divided into these two classes. For order-independent functions, the order of variables has no effect on the objective function. For instance, for the Sphere function f(x) = Σ_{i=1}^{D} x_i^2, no matter how the order of its variables changes, the objective function remains unchanged; thus, it cannot influence the performance of any multitasking optimization algorithm, including MFEA. Correspondingly, the order of variables does affect order-dependent functions, such as the Rosenbrock function f(x) = Σ_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2]. Here, we take a simple example to further illustrate the variable order change in multitasking. As shown in Figure 3, we take 2-task optimization as an example to further investigate the effect of the order of variables. The two objective functions are f_1(x) = x_1 + x_2^2 and f_2(x) = x_1^3 + x_2^4. Admittedly, the two functions could change their orders of variables in the same way, but this is a trivial case that is not of interest. Thus, only the order of variables of the first function f_1(x) is changed in the following discussion. Note that the order of variables affects both functions, and the first one is selected to change the order of its variables.
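The distinction between order-independent and order-dependent functions is easy to verify numerically. The following sketch evaluates the two benchmark functions named above on a point before and after a full reversal of its variables:

```python
def sphere(x):
    # Order-independent: a sum of squares ignores variable positions.
    return sum(v * v for v in x)

def rosenbrock(x):
    # Order-dependent: consecutive variables are coupled pairwise.
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

x = [0.5, -1.0, 2.0, 0.1]
x_rev = x[::-1]  # full reverse order
assert sphere(x) == sphere(x_rev)          # value unchanged
assert rosenbrock(x) != rosenbrock(x_rev)  # value changes
```

Any function that is symmetric in its arguments behaves like Sphere here; any function with positional coupling between variables behaves like Rosenbrock.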
Generally speaking, in any multitasking optimization algorithm, two parent candidates undergo intrapopulation and interpopulation reproduction processes, such as crossover in MFEA. Based on the analysis in Section 3.1, when individuals in a population undergo the intrapopulation genetic mechanism, they produce the same offspring even after changing the order of variables. At the same time, these individuals may undergo the interpopulation genetic mechanism, in which case the two parent candidates come from different populations. As shown in Figure 3, the individuals x_A and x_C undergo inter-crossover in the original situation, while x_A^new and x_C undergo inter-crossover after changing the order of variables. Because the order of variables of the first objective function f_1(x) has changed, they will produce different offspring, as in the dotted lines ① vs. ⑤, ② vs. ⑥, ③ vs. ⑦, and ④ vs. ⑧ in Figure 3. Thus, ① and ⑤ clearly have ample opportunity to produce different offspring through the crossover and mutation operators, and the same goes for the other three control groups. Therefore, changes in the order of variables in a multitasking scenario have a real impact on the optimization process.
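The inter-crossover case can likewise be checked with a toy example: once only one parent's variables are reordered, the child changes. Again an arithmetic crossover stands in for the actual MFEA operators, and the parent values are illustrative:

```python
def cross(p1, p2, w=0.5):
    # Arithmetic crossover as a simple stand-in for SBX in MFEA.
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]

x_A = [1.0, 2.0]   # parent evolved on task 1, whose variables we reorder
x_C = [3.0, 4.0]   # parent evolved on task 2, left unchanged
child_original = cross(x_A, x_C)
child_reordered = cross(x_A[::-1], x_C)  # x_A after full reversal
assert child_original != child_reordered  # inter-task offspring differ
```

Unlike the intrapopulation case, the permutation is applied to only one side of the crossover, so it no longer commutes with the operator; this asymmetry is exactly what makes multitasking optimization sensitive to variable order.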

Three Orders of Variables.
In this paper, our purpose is to study the influence of the order of solution variables on multitasking optimization. As analyzed before, reordering the variables turns the original function into a new one, so the chosen order transformation has an impact on the optimization process. In our work, we design and study three orders of variables, defined in equations (7)-(9). For the first order, named full reverse order (FuR), all variables of the function are coded in reverse order. Similarly, for the bisection/trisection reverse order (BiR/TrR), all variables are divided evenly into two/three parts and then coded in reverse order within each part:

FuR(x) = (x_D, x_{D-1}, ..., x_1), (7)
BiR(x) = (x_m, ..., x_1, x_D, ..., x_{m+1}), where m = ⌊D/2⌋, (8)
TrR(x) = (x_t, ..., x_1, x_{2t}, ..., x_{t+1}, x_D, ..., x_{2t+1}), where t = ⌊D/3⌋. (9)
Figure 2: Corresponding relationship between two individuals before and after changing the order of variables for a single-objective optimization problem (x_A^new = (2, 1), x_B^new = (4, 3)).
There are two advantages to these variable order transformations. On the one hand, an important feature of these orderings is that an individual recovers its original form after the reordering is applied twice: let x = (x_1, x_2, ..., x_D) be an arbitrary point in D-dimensional space; then, mathematically, x = FuR(FuR(x)), x = BiR(BiR(x)), and x = TrR(TrR(x)). On the other hand, designing transformations of increasing complexity lays a foundation for studying how the complexity of the order transformation influences the optimization process and facilitates the analysis. Specifically, the corresponding output of each order transformation can be obtained from equations (7)-(9).
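Following the prose description (split the vector evenly, then reverse within each part), the three transformations and their involution property can be sketched as:

```python
def full_reverse(x):
    # FuR: reverse the whole vector.
    return x[::-1]

def bisection_reverse(x):
    # BiR: split into two even parts, reverse within each part.
    m = len(x) // 2
    return x[:m][::-1] + x[m:][::-1]

def trisection_reverse(x):
    # TrR: split into three even parts, reverse within each part
    # (any remainder stays with the last part).
    t = len(x) // 3
    return x[:t][::-1] + x[t:2 * t][::-1] + x[2 * t:][::-1]

# Each transformation is an involution: applying it twice recovers x,
# because the split points are identical on the second application.
x = list(range(12))
for op in (full_reverse, bisection_reverse, trisection_reverse):
    assert op(op(x)) == x
```

The involution property matters in practice: decoding an individual back into the original variable order requires only reapplying the same transformation, with no inverse mapping to store.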
Here, we briefly distinguish our designed variable order transformation strategy from the strategy in [48]. First, the application scenarios are different: [48] addresses the multiobjective problem in a single task, while our work focuses on multiple tasks. Second, [48] studies the effect of principle selection and variable priority selection on convergence when different principles learned during innovization [49] involve different numbers of variables, or when different rules involve the same variables. The goal of the variable-related strategy in [48] is to explore how the combination of dimensional repair order and repair rules can best improve algorithm convergence in a multiobjective environment. In this paper, by contrast, three variable order change strategies are designed to explore the sensitivity of the whole multitasking optimization process to different variable orders, and the change of variables is not dynamic. Clearly, these three strategies are not intended to improve the convergence performance of MTEA.

Experiment Setup.
In this paper, the basic algorithm adopted is the multipopulation MFEA, so as to highlight the direct effect of solution order in the multitasking scenario. To eliminate randomness in individual generation, the initial seed of the random function is set to a fixed value so that it always provides the same individual settings; in other words, a given algorithm produces a reproducible result. In addition, MFDE is also used to verify the universality of the effect of variable orders.
Seven commonly used optimization functions serve as components of 9 synthetic MTO problems. Among them, two functions (Rosenbrock and Griewank) are order-dependent and affect four of the MTO problems. These problems can be divided into three categories: complete intersection (CI), partial intersection (PI), and no intersection (NI). Moreover, based on the similarity between their fitness landscapes, they can also be categorized into three groups: high similarity (HS), medium similarity (MS), and low similarity (LS). For more details about these benchmark problems, one can refer to the technical report [11]. For experimental convenience, task 1 and task 2 in the fifth task group are swapped so that the first task in every task group is an order-dependent function.
All parameter settings are listed as follows:
(1) Population size: N_p = 100
(2) Maximum number of function evaluations: MaxF = 10^5
(3) Parameter settings in MFDE:
(i) Differential amplification factor: F = 0.5
(ii) Crossover probability: CR = 0.9
(iii) Random mating probability: rmp = 1.0
(4) Parameter settings in MFEA:
(i) Index of simulated binary crossover: mu = 2
(ii) Index of polynomial mutation: mum = 5
(iii) Probability of mutation: p_m = 1
(iv) Interpopulation crossover probability: icp = 1.0
It is noted that, to minimize the effect of stochastic nature on each measured metric, the reported result is the average over 50 trials. Lastly, the empirical studies presented in this paper are conducted under Windows 10 using a computer with a 2.4 GHz Intel Core i5 processor and 4 GB RAM.

Parameter Sensitivity Analysis.
In order to test the effectiveness of interpopulation crossover probability, experiments on a suite of single-objective multitasking benchmark problems are carried out in this part. First, the importance of parameter icp in the multitasking scenario is analyzed. After that, the analysis and discussion of the empirical results are also presented.
In the process of simultaneous optimization of multiple tasks, the similarity and complementarity between tasks play an important role [4]. High similarity between tasks tends to produce positive mutual influence, while complementarity helps tasks skip unnecessary searching. In the multifactorial optimization algorithm based on multiple populations, the crossover probability among the populations controls the influence of knowledge transfer between the tasks. The closer the crossover probability is to 1, the greater the mutual influence; conversely, the smaller the interpopulation crossover probability, the smaller the interaction between tasks. When the similarity between task groups is low, a high interpopulation crossover probability can make the optimization result counterproductive. Of course, high similarity and a high icp are not necessarily conducive to optimization either: for example, when one task falls into a local optimum, it may drag the other tasks into local optima as well.
In this paper, we set the value of icp as a linear increment to explore its influence on MFEA. It should be noted that the intrapopulation crossover probability is set to 0.5 in the experiment. Table 1 shows the performance of MFEA with various interpopulation crossover probabilities; the best results are shown in bold. It can be seen that, in the fully intersecting multitasking groups, the optimal results of the two tasks always occur simultaneously at the same interpopulation crossover probability, indicating that a bigger icp enhances the interaction between tasks. However, performance is not ideal when the parameter icp reaches its maximum; the reason may be that one problem in the multitasking group stalls, which affects the optimization process of the other task. On the other hand, when the optimization problems in a multitasking group do not intersect at all, the effect between the tasks is the opposite, and the algorithm performs better when the mating rate between populations is relatively low. Therefore, in order to enhance the influence of variable order on multitasking optimization, we set the crossover probability between populations to 1 in the following experiments.

Comparison of Three Orders of Variables.
In this section, empirical studies are conducted to compare the solution quality of MFEA under the three orders of variables. The experimental results are presented and discussed, followed by an analysis of their causes. The optimal solutions obtained by MFEA on the 9 single-objective MTO benchmark problems are summarized in Table 2; the corresponding standard deviations are given in brackets, and the Wilcoxon rank-sum test is adopted at a significance level of 5%. T1 and T2 denote the two tasks contained in each MTO benchmark. Not surprisingly, as can be seen from Table 2, the MFEA with different variable orders obtains different optimal solutions, but the gaps between them are very small. Over the total of 18 optimization tasks, the third order of variables performed best seven times, while the first and second performed best three and five times, respectively. However, the differences between the results obtained with different variable orders show no regular pattern. Better results can be obtained under all three orders, but the third order wins relatively often, which suggests that the complexity of the variable order has some impact on the optimization algorithm and that the multitasking algorithm is sensitive to it.
On the other hand, surprisingly, the effect on algorithm performance in practice is not significant for any case. Possible reasons include the following. (1) Essentially, the optimal solution is fixed after changing the order of variables, so the algorithm can evolve in the same direction under any variable order. (2) At the operational level, when across-population crossover is executed in one generation, the offspring produced under the different order strategies are very similar because their parents come from individuals at fixed positions. Figures 4-12 show the convergence plots of the 9 groups of multitasking problems under different variable orders. "Original order" means the order of variables is unchanged, while order1, order2, and order3 indicate full reverse order, bisection reverse order, and trisection reverse order, respectively. It can be seen from Figures 4-12 that the convergence curves of the three variable orders are consistent: the order of variables has no significant effect on the convergence of the algorithm. As the generations increase, the convergence is almost the same as when the order of variables is unchanged, and after each generation the fitness values of the problem are the same under the different variable orders. Only on PI + LS does the convergence under different variable orders show an obvious difference in the later period. This may be because the PI + LS category has a low intertask similarity.
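The three order strategies and their involution property (an individual recovers itself after the reordering is applied twice) can be sketched as follows. The split points used for vectors whose length is not divisible by two or three are our assumption; the paper does not specify them:

```python
def full_reverse(x):
    """Order 1: reverse the whole variable vector."""
    return x[::-1]

def bisection_reverse(x):
    """Order 2: split the vector into two halves and reverse each half."""
    mid = len(x) // 2
    return x[:mid][::-1] + x[mid:][::-1]

def trisection_reverse(x):
    """Order 3: split the vector into three segments and reverse each one."""
    a, b = len(x) // 3, 2 * len(x) // 3
    return x[:a][::-1] + x[a:b][::-1] + x[b:][::-1]

# Because the segment boundaries are fixed, every strategy is an
# involution: applying it twice restores the original individual.
x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
for reorder in (full_reverse, bisection_reverse, trisection_reverse):
    assert reorder(reorder(x)) == x
```

Because the optimum of a task is merely permuted, not altered, by these reorderings, any of them leaves the fitness landscape equivalent up to a relabeling of the variables, which is consistent with the small differences observed above.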

Universality of the Effect of Variable Order.
In order to further verify the universal effect of variable order, we report the performance of the MFDE algorithm under the three different variable orders. The relevant experimental parameters for MFDE were introduced in Section 4.1. Its performance on the nine single-objective MTO benchmarks is summarized in Table 3, and the convergence of MFDE is illustrated in Figures 13-21.
It can be seen from the experimental results that the effect of variable order on MFDE is similar to that on MFEA. The relevant experimental data are shown in Table 3, which lists the optimal value, the mean value in brackets, and the result of the Wilcoxon rank sum test at a significance level of 5%. Notably, MFDE with different variable orders can also obtain different results, but the differences between them are small. Across the 18 optimization tasks in total, the third variable order performs best four times, while the first and second orders each do so three times. In general, the third variable order has an advantage over the other two, which is consistent with the performance of MFEA. Furthermore, the effect on algorithm performance is again not significant: in MFDE, there is no obvious difference in the trend of the convergence curves under different variable orders. Only on CI + LS and PI + LS does the convergence under different variable orders show a significant difference in the later stage, which is closely related to the similarity within the multitasking group itself. The effect of the different variable orders is thus similar in MFDE and MFEA, and it does not change significantly with the framework of the multitasking algorithm.

Conclusion
In this paper, we investigate the effect of the order of variables on algorithm performance when solving multitasking optimization problems. The corresponding relationship between two individuals before and after changing the order of variables is analyzed for single-task optimization problems and MTO problems, respectively. When the order of variables of one optimization function is changed, different offspring will be generated for the MTO problem. Therefore, we design and study three orders of variables, namely, full reverse order, bisection reverse order, and trisection reverse order. An important feature of these orders is that an individual recovers itself after the order of variables is changed twice.
The experimental results showed that the effect of the order of variables on MFEA is not significant for all MTO problems in practice. Keeping this in mind, we analyzed the differences in optimization results among the variable orders at different evolution stages and found that the convergence of the algorithm does not change significantly with the order of variables. To verify this further, we also applied the same order changes to MFDE, and the results obtained are basically consistent with those of MFEA. However, MTO problems with a high degree of similarity and intersection are more susceptible to the influence of variable order and are more sensitive to complex variable orders.
For future work, we would like to further study the effect of the order of variables in other situations, such as the position of the optimal solution and multitasking optimization problems that contain more than two tasks.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.