A Novel Fruit Fly Optimization Algorithm with Evolution Strategy for Magnetotelluric Data Inversion



Introduction
The magnetotelluric (MT) method is a passive electromagnetic exploration technique that uses the natural electromagnetic field as its source [1, 2]. Because electromagnetic waves of different frequencies have different skin depths, the MT method can be used to explore the electrical structure of the Earth's interior. The MT method offers a large detection depth (from the surface to hundreds of kilometers underground), is not affected by the shielding of high-resistivity layers, and is sensitive to low-resistivity layers, so it has become one of the main methods for studying the deep electrical structure of the Earth. It is widely used in solid mineral exploration, oil and gas exploration, and other fields [3]. The inversion of MT data infers the underground geoelectric structure from the observed data. The inversion process is essentially an optimization problem, and an optimization algorithm is needed to solve it.
Inversion methods can be divided into linear and nonlinear methods. Early MT inversion methods were based on linear inversion theory, but the MT inversion problem is nonlinear in nature. A linear inversion method must linearize the nonlinear problem and then use an optimization method to find the extremum. The linearization process inevitably produces errors, so a linear inversion algorithm depends on the initial model and easily falls into local minima. Compared with the linearization approach, nonlinear inversion methods have advantages: they treat the inversion problem as a direct solution of a nonlinear problem without systematic error and involve no matrix inversion, thus reducing computational complexity and improving inversion accuracy. Although the dependence of nonlinear inversion algorithms on the initial model is greatly weakened, the search space grows geometrically with the number of model parameters. Therefore, the development of efficient nonlinear inversion algorithms has important theoretical and practical significance.
The fruit fly optimization algorithm (FOA) is a novel swarm intelligence optimization algorithm proposed by Pan [6]. The algorithm is based on the foraging behavior of fruit flies, which use their senses of smell and vision to perceive their surroundings and food. The algorithm is easy to understand, easy to implement, and computationally efficient. Therefore, the FOA is widely used in many fields, such as power load forecasting [36], GRNN parameter optimization [37], joint replenishment problems [38], electricity consumption forecasting [5], the multidimensional knapsack problem [39], structural optimization of transmission line towers [40], and energy consumption prediction [41].
Although the FOA is widely used, it still has some problems, such as insufficient global optimization ability, low precision, and slow convergence. Therefore, many scholars have improved the FOA in their own ways. Pan et al. [42] introduced a new control parameter and an effective solution-generating method to improve the effectiveness of the FOA for continuous function optimization problems. Yuan et al. [43] employed multiswarm behavior to significantly improve the performance of the FOA, naming the result the multiswarm fruit fly optimization algorithm (MFOA): several subswarms move independently in the search space to explore the global optimum simultaneously, and the local behavior between subswarms is also considered. Marko et al. [44] improved the standard FOA by introducing a novel parameter integrated with chaos. To improve the convergence performance of the FOA, a normal cloud model-based FOA (CMFOA) was proposed by Wu et al. [45]. Lv et al. [46] proposed an improved FOA based on a hybrid location information exchange mechanism (HFOA), aiming to improve swarm diversity more efficiently and to balance the global and local search abilities. Du et al. [47] proposed an improved FOA based on a linearly diminishing step and logistic chaos mapping (DSLC-FOA) for solving constrained structural engineering design optimization problems. To solve both continuous function optimization and clustering parameter problems, Han et al. [48] proposed a novel FOA with trend search and coevolution (CEFOA): trend search enhances the local searching capability of the fruit fly swarm, and the coevolution mechanism avoids premature convergence and improves the global searching ability. Hu et al. [41] used the normal distribution function to improve the search mode of the FOA, naming it the normal distribution fruit fly optimization algorithm (NFOA); it enhances search accuracy in the central area and effectively expands the search scope. Experimental results show that the accuracy and stability of the algorithm were improved.
The structure of this paper is as follows. First, the basic FOA is introduced; then the improved strategy based on the DE algorithm is proposed, and test functions are used to evaluate the improved algorithm. The test results are compared with those of other algorithms and show that the optimization ability of the improved FOA is better. Finally, the algorithm is applied to the MT data inversion problem.

The Basic Theory of the Fruit Fly Optimization Algorithm
The FOA is a new kind of global optimization algorithm that simulates the foraging process of a fruit fly. According to the food-searching process of a fruit fly, shown in Figure 1, the basic idea of the FOA is as follows.
(1) Olfaction search stage: the fruit fly has a well-developed sense of smell; it first uses its olfaction to detect various odors in the surrounding environment. (2) Visual positioning stage: the fruit fly flies to the vicinity of the food by olfactory positioning; within its visible range, it accurately locates the food position through its vision and flies to the food. Therefore, the FOA can be summarized in the following steps.
Step 1. Initialization: set the fruit fly swarm size, the swarm location, and the maximum number of iterations:

InitX_axis, InitY_axis. (1)

Step 2. Give each fruit fly a random direction and distance for foraging, where i stands for the ith fruit fly:

X_i = X_axis + RandomValue, Y_i = Y_axis + RandomValue. (2)

Step 3. Since the location of the food is unknown, first calculate the distance Dist_i between the ith fruit fly and the origin and then calculate the smell concentration judgment value S_i as its reciprocal:

Dist_i = sqrt(X_i^2 + Y_i^2), S_i = 1/Dist_i. (3)

Journal of Mathematics
Step 4. Substitute the smell concentration judgment value S_i into the fitness function to calculate the smell concentration of each individual fruit fly:

Smell_i = Function(S_i). (4)

Step 5. Find the fruit fly with the best smell concentration in the swarm:

[bestSmell, bestIndex] = min(Smell). (5)

Step 6. Save the optimal smell concentration value together with the corresponding x, y coordinates, and let the swarm fly to that location using vision:

Smellbest = bestSmell, X_axis = X(bestIndex), Y_axis = Y(bestIndex). (6)

Step 7. Repeat Steps 2 to 5; at each iteration, judge whether the smell concentration is superior to the previous one, and if so, execute Step 6.
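The steps above can be sketched in a few lines of Python. This is a minimal illustration of the basic FOA for a scalar fitness function of the smell concentration S; the function and parameter names (foa, step, pop) are ours, not from the paper.

```python
import math
import random

def foa(fitness, iters=100, pop=30, step=1.0, seed=1):
    """Minimal sketch of the basic FOA (Steps 1-7) for a scalar
    fitness function of the smell concentration S."""
    rng = random.Random(seed)
    # Step 1: random initial swarm location (InitX_axis, InitY_axis).
    x_axis, y_axis = rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)
    smell_best = math.inf
    for _ in range(iters):
        smells, xs, ys = [], [], []
        for _ in range(pop):
            # Step 2: random direction and distance for each fruit fly.
            xi = x_axis + rng.uniform(-step, step)
            yi = y_axis + rng.uniform(-step, step)
            # Step 3: distance to the origin, then S = 1 / Dist.
            dist = math.hypot(xi, yi) or 1e-12
            s = 1.0 / dist
            # Step 4: smell concentration from the fitness function.
            smells.append(fitness(s))
            xs.append(xi)
            ys.append(yi)
        # Step 5: fruit fly with the best smell concentration.
        best = min(range(pop), key=smells.__getitem__)
        # Steps 6-7: if it improves, save it and fly there by vision.
        if smells[best] < smell_best:
            smell_best = smells[best]
            x_axis, y_axis = xs[best], ys[best]
    return smell_best
```

For example, minimizing f(S) = (S - 0.25)^2 drives the swarm toward the distance at which S = 0.25, and the best smell value approaches 0.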

The Improvement of Fruit Fly Optimization Algorithm
The FOA has a wide range of applications in the field of engineering due to its advantages of few control parameters and no constraints. However, the evolutionary process of the FOA learns only from the best fruit fly in each iteration of the entire swarm. Once the optimal individual of an iteration is found, all individuals gather towards it and search randomly only within that small area. If this optimal individual is only a local extremum, the algorithm falls into the local optimum and cannot leave it, so it converges prematurely. Therefore, the algorithm needs to be improved. The differential evolution (DE) algorithm is also a swarm intelligence optimization algorithm [16, 49, 50]. Its core idea is to use the differences between individuals in the population to carry out evolutionary optimization through crossover and mutation. Compared with the traditional genetic algorithm, it converges faster, has fewer control variables, and is easy to understand and implement, so it has a wide range of applications [19, 51, 52].
The performance of the DE algorithm is greatly affected by its control parameters. The DE algorithm has three main control parameters: population size, variation scale factor, and crossover probability. Population size is always an important control parameter of evolutionary algorithms, as it directly affects population diversity. If the population is large enough, population diversity is guaranteed, the optimization ability of the algorithm is enhanced, and the probability of obtaining the optimal solution increases. However, a larger population also reduces computational efficiency, so there is a trade-off between population diversity and computational efficiency.
The variation scale factor F and the crossover probability are important control parameters of the DE algorithm. Similar to a search step size, the magnitude of the scale factor directly affects the optimization ability of the DE algorithm, and adjusting it balances the global and local search abilities. The scale factor of the standard DE algorithm is fixed during optimization, and different scholars have proposed appropriate thresholds for some parameters according to the problem at hand. Because most parameter settings are problem-dependent, manual adjustment is extremely time-consuming. At present, adaptive parameter adjustment has become a hotspot in DE research, and many improvements to the DE algorithm are based on adaptive scale factors. Setting a large scale factor in the early stage of evolution helps the algorithm quickly search the neighborhood of the optimal solution, and reducing the scale factor in the later stage lets the algorithm quickly find the optimal solution.
In order to enhance the optimization ability of the FOA, this study introduces the crossover and mutation operations of the DE algorithm into the FOA and proposes a novel FOA based on the DE algorithm.
In the standard FOA, each search step of a fruit fly is a random value under a fixed step size, which gives the algorithm a poor ability to jump out of local extreme values. Therefore, we replaced the standard FOA step generation with a mutation operation. After comparing several mutation strategies, we chose the "DE/best/2" mutation strategy [53]. After the mutation operation, the obtained difference vector and the optimal individual of the parent generation were crossed again to obtain the final search step. As in the DE algorithm, the method of gradually decreasing the scale factor was adopted to improve the optimization ability. The improvement strategy is as follows.

Search Step Size Improvement Strategy.
The search step size of the standard fruit fly optimization algorithm is fixed; that is, each individual search step is randomly generated under a given step-size value. The mutation operation of the differential evolution algorithm was therefore introduced, and the step size was updated using the differences between population individuals, which enhances the algorithm's optimization capability. With the "DE/best/2" strategy, the mutant individual is generated as

V_i = X_best + F(X_r1 - X_r2) + F(X_r3 - X_r4),
where r_1, r_2, r_3, r_4 are mutually different random integers in the interval [1, NP] and NP is the swarm size. To further increase the diversity of the fruit fly swarm, a crossover operation is then applied to the mutant individual:

U_ij = V_ij, if rand <= CR or j = randn(j); otherwise U_ij = X_ij,
where rand is a random number uniformly distributed in the interval [0, 1], CR is the crossover probability, randn(j) is a random integer in [1, N], and N is the dimension of the unknown vector.
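The mutation and crossover described above can be sketched as follows. This is an illustrative Python helper assuming 0-based indices and the "DE/best/2" strategy named earlier; the function name and argument layout are ours, not the authors'.

```python
import random

def de_best_2_step(swarm, best, f, cr, rng=random):
    """Sketch of "DE/best/2" mutation plus binomial crossover.

    swarm: list of NP position vectors; best: best vector of the
    parent generation; f: scale factor F; cr: crossover probability CR.
    """
    np_, n = len(swarm), len(best)
    trial = []
    for i in range(np_):
        # Mutation: V_i = X_best + F*(X_r1 - X_r2) + F*(X_r3 - X_r4),
        # with r1..r4 distinct random indices into the swarm.
        r1, r2, r3, r4 = rng.sample(range(np_), 4)
        v = [best[j]
             + f * (swarm[r1][j] - swarm[r2][j])
             + f * (swarm[r3][j] - swarm[r4][j]) for j in range(n)]
        # Binomial crossover: take v_j when rand <= CR or j is the
        # forced dimension; otherwise keep the parent component.
        jrand = rng.randrange(n)
        u = [v[j] if (rng.random() <= cr or j == jrand) else swarm[i][j]
             for j in range(n)]
        trial.append(u)
    return trial
```

Note that with F = 0 and CR = 1 every trial vector collapses onto the best individual, which makes the roles of the two parameters easy to check.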

Variation Scale Factor Improvement Strategy.
In the early stage of the algorithm, since the swarm is far from the optimal solution, a larger step size improves the global search ability so that individuals can quickly reach the vicinity of the optimal solution; in the later stage, a smaller step size improves the local search ability so that the optimal solution can be found quickly. Based on this idea, the variation scale factor is made to decrease gradually over the iterations, which increases the local search ability of the algorithm while preserving the diversity of the population. In the scale-factor update, t is the current iteration number, t_max is the maximum number of iterations, and F_0 is the initial scale factor, generally taken in the range [0.4, 0.9].
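The exact update formula did not survive in this copy, so the sketch below assumes a simple linear decrease from F_0 toward 0 over the iterations. That matches the stated behavior (large F early, small F late) but is only one plausible choice, not necessarily the authors' formula.

```python
def scale_factor(t, t_max, f0=0.9):
    """Illustrative decreasing scale-factor schedule (assumed linear).

    t: current iteration, t_max: maximum iterations,
    f0: initial scale factor F_0, typically in [0.4, 0.9].
    """
    return f0 * (1.0 - t / t_max)
```

At t = 0 this returns F_0, and it shrinks monotonically as the iterations proceed.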

Te Steps of the Improved Fruit Fly Optimization Algorithm (IFOA)
Step 1. Initialization: it includes the fruit fly swarm size, swarm location, maximum number of iterations, and initial mutation scale factor.

Step 2. Generate a mutant individual for each fruit fly with the "DE/best/2" strategy, using the gradually decreasing scale factor.

Step 3. Apply the crossover operation between the mutant individuals and the parent individuals to obtain the candidate values.

Step 4. Update the positions of the fruit flies using the candidate values from Step 3.

Step 5. Repeat Steps 2 to 4, each time judging whether the smell concentration is superior to the previous one, until the maximum number of iterations is reached.
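Putting the pieces together, the IFOA loop (Steps 1-5) might look like the following sketch. The linear F schedule, the greedy per-individual update, and all names are our assumptions for illustration, not the authors' code.

```python
import random

def ifoa(fitness, dim, bounds, iters=200, pop=30, cr=0.9, f0=0.9, seed=2):
    """Sketch of the IFOA: DE/best/2 mutation with a gradually
    decreasing scale factor, binomial crossover, then greedy update."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Step 1: initialize swarm positions and evaluate them.
    swarm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    cost = [fitness(x) for x in swarm]
    for t in range(iters):
        f = f0 * (1.0 - t / iters)          # decreasing scale factor
        best = swarm[min(range(pop), key=cost.__getitem__)]
        for i in range(pop):
            # Step 2: DE/best/2 mutation around the current best.
            r1, r2, r3, r4 = rng.sample(range(pop), 4)
            v = [best[j]
                 + f * (swarm[r1][j] - swarm[r2][j])
                 + f * (swarm[r3][j] - swarm[r4][j]) for j in range(dim)]
            # Step 3: binomial crossover builds the candidate.
            jrand = rng.randrange(dim)
            u = [v[j] if (rng.random() <= cr or j == jrand) else swarm[i][j]
                 for j in range(dim)]
            # Step 4: keep the candidate only if its smell improves.
            cu = fitness(u)
            if cu < cost[i]:
                swarm[i], cost[i] = u, cu
        # Step 5: repeat until the maximum number of iterations.
    return min(cost)
```

On a 10-dimensional sphere function, ifoa(lambda x: sum(v * v for v in x), 10, (-5.0, 5.0)) drives the best cost close to the true optimum 0.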

Experiments and Numerical Analysis
To verify the effectiveness of the improved strategies, the standard FOA, the IFOA, and a comparison variant IFOA-1 (with F fixed as a constant) were tested. The population size is set to 100. Because F must decrease gradually, the initial F of IFOA is set to 1, the F of IFOA-1 is set to the constant 0.5, and the number of iterations is 100. The mean and standard deviation results are shown in Tables 2-4.
Through the above comparative tests, it is found that, compared with the FOA, the convergence speed of the improved algorithm is faster, which is mainly due to the crossover and mutation strategies. The improvement of F has little effect on the algorithm.

Comparison with Other Algorithms.
In this section, we verify the performance of the IFOA. The test functions above include both unimodal and multimodal functions, so the global and local search abilities of an algorithm are the key to finding the optimal solution quickly. For comparison, we choose three common intelligent optimization algorithms: DE, PSO, and GWO. To test performance in high-dimensional spaces, we set two schemes with parameter dimensions of 30 and 50, respectively. The parameter settings of each algorithm are shown in Table 5. The number of iterations is set to 5000, each algorithm is run independently 30 times, and the mean and standard deviation of the optimal solutions are then computed. The results are shown in Tables 6 and 7.
As shown in Table 6, when Dim = 30, for functions F1-F14 except F5 and F11, the IFOA performs well, and its mean and standard deviation are superior to those of the other algorithms. For most test functions, IFOA finds the true optimal solution directly. For F5, the DE algorithm is superior. The case of Dim = 50 is similar to that of Dim = 30.
To compare the performance of these algorithms more intuitively, we sort and score them according to their optimization ability (the smaller the optimal solution, the higher the score; equal optimal solutions receive the same score). The results for Dim = 30 are shown in Table 8. Except for a few functions, the scores of IFOA are all above 4 points, and IFOA scores 63 in total. The final ranking shows that IFOA is an excellent optimization algorithm that not only has good global and local search abilities but also good robustness and universality.
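The sort-and-score scheme can be made concrete with a small helper. The tie-handling (equal values share a score) follows the parenthetical rule above, while the exact n-down-to-1 point scale is our assumption.

```python
def score(results):
    """Sketch of a sort-and-score ranking: a smaller optimal value
    earns a higher score, and equal values share the same score.

    results maps algorithm name -> best value found on one function.
    """
    ordered = sorted(results, key=results.get)  # best (smallest) first
    n = len(ordered)
    scores = {}
    prev_val, prev_score = None, None
    for rank, name in enumerate(ordered):
        if prev_val is not None and results[name] == prev_val:
            scores[name] = prev_score            # tie: same score
        else:
            prev_score = n - rank                # better rank, more points
            scores[name] = prev_score
        prev_val = results[name]
    return scores
```

For instance, with results {"A": 0.0, "B": 0.0, "C": 1.0}, A and B share the top score and C is ranked last.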

MT Forward Modeling.
Maxwell's equations are the theoretical basis of all electromagnetic methods. According to Cagniard's classical magnetotelluric theory, the field source is assumed to be a plane electromagnetic wave incident vertically on the ground, the Earth is a uniform horizontally layered medium, and the electrical properties of each layer are uniform and isotropic. Consider a one-dimensional layered medium: assuming that the subsurface consists of n horizontal layers, there are 2n - 1 parameters in total (n resistivities and n - 1 thicknesses),
where ρ_m is the resistivity and h_m the thickness of layer m, with h_n = ∞ for the bottom half-space. For the 1D layered model, the apparent resistivity ρ_a(ω) and phase φ(ω) observed on the ground are

ρ_a(ω) = |Z(ω)|^2 / (ωμ), φ(ω) = arctan(Im Z(ω) / Re Z(ω)),

where ω = 2π/T is the angular frequency, μ is the magnetic permeability, and Z(ω) is the surface wave impedance, which can be calculated by the recursion

Z_m = Z_0m (Z_{m+1} + Z_0m tanh(k_m h_m)) / (Z_0m + Z_{m+1} tanh(k_m h_m)), Z_n = Z_0n,

where Z_0m = sqrt(iωμρ_m) is the characteristic impedance of layer m, k_m = sqrt(iωμ/ρ_m) is the complex propagation constant of layer m, and Z_m is the wave impedance at the top of layer m; the surface impedance is Z(ω) = Z_1.
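The layered-medium recursion above is standard and can be written directly in Python. The following is a textbook implementation of the 1D MT forward response, not the authors' code.

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # magnetic permeability of free space

def mt1d_forward(rho, h, period):
    """1D MT forward response via the standard impedance recursion.

    rho: n layer resistivities (ohm-m); h: n-1 layer thicknesses (m);
    period: T in seconds. Returns (apparent resistivity, phase in deg).
    """
    omega = 2.0 * math.pi / period
    n = len(rho)
    # Bottom half-space: Z_n equals its characteristic impedance Z_0n.
    z = cmath.sqrt(1j * omega * MU0 * rho[-1])
    # Recur upward through layers n-1 .. 1.
    for m in range(n - 2, -1, -1):
        k = cmath.sqrt(1j * omega * MU0 / rho[m])   # propagation constant
        z0 = cmath.sqrt(1j * omega * MU0 * rho[m])  # characteristic impedance
        t = cmath.tanh(k * h[m])
        z = z0 * (z + z0 * t) / (z0 + z * t)
    rho_a = abs(z) ** 2 / (omega * MU0)
    phase = math.degrees(math.atan2(z.imag, z.real))
    return rho_a, phase
```

A quick sanity check: a uniform half-space returns its own resistivity and a 45-degree phase at any period, e.g. mt1d_forward([100.0], [], 1.0).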

MT Inversion Theory.
The geophysical inversion problem studies the theory and methods of deducing the characteristics of an underground geophysical model from the obtained geophysical observation data. The purpose of the inversion problem is to find an appropriate geophysical model that fits the observation data; therefore, geophysical inversion is essentially an optimization problem. The inversion problem can be expressed as

d = F(m),

where d is the N-dimensional observation data vector, m is the M-dimensional model parameter vector, and F is the model forward response function. Geophysical inversion aims to find a reasonable underground model parameter vector m that fits the observation data d. The inversion problem is ill-posed, so a stable solution must be obtained through regularization. Following regularization theory, we adopted Occam-like regularization [54]. The objective function of the inversion problem is therefore

Φ(m) = Φ_d + λΦ_m = ||d_obs - F(m)||^2 + λ||∂m||^2,

where Φ_d is the data misfit, Φ_m is the model roughness constraint, and λ is the regularization factor, set as the trade-off between the data misfit and the model roughness. d_obs are the observation data, and F(m) are the predicted response data.
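As a sketch, the regularized objective can be evaluated as below. The first-difference roughness norm is our assumption for the Occam-like constraint Φ_m, and all names are illustrative.

```python
def objective(m, d_obs, forward, lam):
    """Sketch of the regularized objective Phi = Phi_d + lambda*Phi_m.

    m: model vector; d_obs: observed data; forward: the forward
    response F(m); lam: regularization factor lambda.
    """
    pred = forward(m)
    # Data misfit Phi_d = || d_obs - F(m) ||^2.
    phi_d = sum((o - p) ** 2 for o, p in zip(d_obs, pred))
    # Model roughness Phi_m: sum of squared first differences of m.
    phi_m = sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))
    return phi_d + lam * phi_m
```

A global optimizer such as the IFOA can then minimize this scalar objective directly, without any Jacobian calculation.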

Numerical Results of the 1D Inverse Modeling.
In order to test the new algorithm, we designed two theoretical models of layered media; the test environment was a personal computer. The model parameters are shown in Table 9. The model parameter search interval was set to [0.5m, 2m], where m represents the true value of the model parameters; the fruit fly swarm size was 10, the crossover probability CR = 0.9, and the variation scale factor F = 0.5. Model 1 was a three-layer geoelectric model, and the maximum number of iterations was 1000. To ensure the reliability of the results, the calculation was repeated 20 times, and the average value was taken as the result. The inversion results are shown in Figure 2. It can be seen from the figure that both the Bostick inversion and the IFOA obtained good results, and the IFOA results were closer to the real model than the Bostick results. The low-resistivity layer was well resolved, and the apparent resistivity and phase responses of the predicted model fit the observed data well. The parameters of model 2 are also given in Table 9; this model exhibits S-equivalence, and its maximum number of iterations was set to 5000. To illustrate the influence of data errors on the inversion results, 10% and 20% Gaussian random noise was added to the forward responses of the original model. We again repeated the calculation 20 times and took the average value as the result. The inversion results are shown in Table 10. It can be seen from the table that the IFOA recovered the model parameters well and had good robustness.

Field Data.
To further verify the feasibility of the algorithm, the IFOA was used to invert real MT data. The data came from the open-source US Array MT stations data set published by Oregon State University [55]. The inversion result is shown in Figure 3. It can be seen from the figure that the apparent resistivity and phase of the inversion model fit the measured data well, and the two low-resistivity layers were well resolved.

Conclusions
Through the inversion of a theoretical model and field data, the IFOA, as a swarm intelligence global optimization algorithm, proved fully applicable to MT data inversion. With the proposed improvements, the algorithm overcame the equivalence problem of the geoelectric model and accurately recovered the deep low-resistivity (high-conductivity) layers. This shows that the IFOA can be used for one-dimensional inversion of MT data. The IFOA avoids the defects of linearized optimization theory, which requires calculating the Jacobian matrix and easily falls into local extreme values, and it has better global optimization ability. The inversion results of the measured data confirmed the capability of the IFOA.
The following research will be continued in the future. First, research on intelligent optimization algorithms will continue. Owing to the progress of science and technology and the development of computer hardware, fully nonlinear global optimization algorithms have become a research hotspot in mathematics and engineering; how to improve the optimization ability of such algorithms, and how to combine different optimization algorithms so that they complement each other and yield a method better suited to geophysical inversion, are questions for the next step. Second, two-dimensional inversion with the fruit fly optimization algorithm will be realized. Because time was limited, this paper implements only the one-dimensional inversion, which is not enough; the next step will continue to exploit the potential of the FOA in the study of two-dimensional magnetotelluric data inversion. Given the data volume, the number of model parameters in the 2D case, and the need to consider spatial constraints on the parameters, this is a challenge for a global optimization algorithm.

Figure 1: The interactive food search process of the fruit fly swarm.

Figure 3: The IFOA inversion of the measured MT sounding. (a) The resistivity results from the IFOA and Bostick inversions. (b, c) The response results of the apparent resistivity and phase, respectively.
F was fixed as a constant and the comparison algorithm was named IFOA-1. Three classic benchmark functions (from those in Table 1) were used to test the standard FOA, IFOA, and IFOA-1 and to verify the effectiveness of the improved strategies.

Table 2: Optimization iteration curve of the Zakharov function.

Table 3: Optimization iteration curve of the Rastrigin function.

Table 4: Optimization iteration curve of the Griewank function.

Table 5: Parameter settings for the comparison algorithms.

Table 6: Performance on the benchmark functions with dimension = 30.

Table 7: Performance on the benchmark functions with dimension = 50.

Table 8: Ranking of FOA, DE, PSO, IFOA, and GWO on the 14 benchmark functions with Dim = 30 according to their performance.

Table 10: The IFOA optimization results for the synthetic data from model 2.