Application of Improved Manta Ray Foraging Optimization Algorithm in Coverage Optimization of Wireless Sensor Networks

To address the shortcomings of the manta ray foraging optimization (MRFO) algorithm, such as slow convergence and difficulty in escaping local optima, an improved manta ray foraging algorithm based on Latin hypercube sampling and group learning is proposed. Firstly, the Latin hypercube sampling (LHS) method is introduced to initialize the population. It divides the search space evenly so that the initial population covers the whole search space, maintaining the diversity of the initial population. Secondly, in the exploration stage of cyclone foraging, the Levy flight strategy is introduced to avoid premature convergence. Before the somersault foraging stage, an adaptive t-distribution mutation operator is introduced to update the population, increasing its diversity and avoiding local optima. Finally, the updated population is divided into a leader group and a follower group according to fitness. The follower group learns from the leader group, and the leaders learn from each other through differential evolution, further improving population quality and search accuracy. Fifteen standard test functions are selected for comparative tests in low and high dimensions. The test results show that the improved algorithm effectively improves the convergence speed and optimization accuracy of the original algorithm. Moreover, the improved algorithm is applied to wireless sensor network (WSN) coverage optimization. The experimental results show that the improved algorithm increases the network coverage by about 3% compared with the original algorithm and makes the optimized node distribution more reasonable.


Introduction
With the advancement of intelligent information technology, the scale and complexity of data are increasing. Traditional numerical optimization methods struggle to solve complex optimization problems, resulting in higher and higher computational costs. In recent years, swarm-based intelligent optimization algorithms have been favored by many researchers because of their simplicity and efficiency [1]. Swarm intelligence algorithms can effectively solve many complex optimization problems in engineering, and are mainly used in network optimization [2], feature selection [3], image processing [4], automatic control [5], and other fields. Recently proposed swarm intelligence optimization algorithms include the butterfly optimization algorithm (BOA) [6], whale optimization algorithm (WOA) [7], sine cosine algorithm (SCA) [8], sparrow search algorithm (SSA) [9], marine predator algorithm (MPA) [10], African vultures optimization algorithm (AVOA) [11], manta ray foraging optimization (MRFO) algorithm [12], and so on. The MRFO algorithm is a swarm intelligence optimization algorithm proposed by Weiguo Zhao et al. in 2020, inspired by the intelligent behaviors of manta rays. The foraging (optimization) process is divided into three stages: chain foraging, cyclone foraging, and somersault foraging. Compared with some classical intelligent algorithms and most of the above algorithms, it has higher convergence accuracy and faster optimization speed. Although MRFO has these advantages, it still suffers from premature convergence and falling into local optima. To solve these problems, many researchers have improved the basic MRFO algorithm. Davut Izci et al. [13] introduced the opposition-based learning strategy into the population initialization, which improves the quality of the population to a certain extent, but the convergence accuracy still needs improvement.
Biqi Sheng et al. [14] proposed a balanced manta ray foraging optimization (BMRFO) algorithm, which introduces the Levy flight strategy in the cyclone foraging stage and improves the flip factor. Although the algorithm's ability to jump out of local optima is improved, the convergence speed is not significantly better. Oguz [15] introduced chaotic maps into the foraging behavior of MRFO, which improves the optimization performance of the algorithm, but the improvement is limited.
In order to better solve these problems and improve the optimization accuracy and convergence speed of the MRFO algorithm, this paper combines the Latin hypercube sampling (LHS) method with a group learning strategy, and introduces the Levy flight and adaptive t-distribution disturbance strategies. Therefore, an improved MRFO algorithm based on LHS and group learning (LGMRFO) is proposed. To verify the performance of the LGMRFO algorithm, 15 general test functions and 9 CEC2017 test suite functions are selected for low-dimensional and high-dimensional comparison tests.
Adaptive adjustment and deployment of sensor nodes in a WSN can distribute them more evenly in the detection area and achieve higher coverage, so as to rationally allocate network space resources and better complete the tasks of environmental awareness and information acquisition. This is of great significance for improving network viability and reliability and for saving network construction costs. Generally, area coverage is the main evaluation criterion. Sensor node coordinates are deployed through an optimization algorithm, using as few sensor nodes as possible to meet the area coverage requirement and reduce node redundancy. Therefore, in order to remedy the poor coverage caused by unreasonable deployment of WSN nodes, LGMRFO is applied to the coverage optimization problem of WSN. The experimental results further verify the effectiveness of the algorithm. The rest of this paper is organized as follows. The MRFO algorithm is described in detail in section "MRFO". Section "Related Works" introduces some intelligent optimization algorithms. Section "LGMRFO" describes the improved strategies for MRFO in this work. The performance of LGMRFO is evaluated by optimizing 24 test functions in section "Numerical Simulation Analysis". Section "Coverage Optimization of WSN Based on LGMRFO" presents the simulations and performance evaluation of LGMRFO for WSN coverage. At last, section "Conclusion" summarizes this paper.

Related Works
Based on their source of inspiration, intelligent optimization algorithms can be divided into four classes [16]: (a) physics-based, (b) math-based, (c) human-based, and (d) swarm-based. Physics-based methods tend to perceive the landscape as a physical phenomenon and move the search agents using formulae borrowed from physical rules or theories.
The Archimedes optimization algorithm [17] is devised with inspiration from a law of physics, Archimedes' principle. The equilibrium optimizer (EO) [18] is inspired by control volume mass balance models used to estimate both dynamic and equilibrium states. Atomic orbital search (AOS) [19] is proposed based on some principles of quantum mechanics and the quantum-based atomic model. Transient search optimization (TSO) [20] is inspired by the transient behavior of switched electrical circuits that include storage elements.
Math-based algorithms are solely based on mathematical equations. They are not inspired by a specific natural phenomenon. The Runge Kutta optimizer (RUN) [21] is designed according to the mathematical foundations of the Runge Kutta method. The gradient-based optimizer (GBO) [22] is inspired by the gradient-based Newton's method. The golden sine algorithm (Gold-SA) [23] is inspired by the sine trigonometric function. The arithmetic optimization algorithm (AOA) [24] utilizes the distribution behavior of the main arithmetic operators in mathematics. Weighted mean of vectors (INFO) [25] is an efficient optimization algorithm based on the weighted mean of vectors.
Inspired by the social behaviors of human beings, many optimization algorithms have been proposed. Political optimizer (PO) [26] is inspired by the multiphased process of politics. The group teaching optimization algorithm (GTOA) [27] simulates the impact of teachers on learners' output in the classroom. Queuing search (QS) [28] is inspired by human activities in queuing. Student psychology based optimization (SPBO) [29] is inspired by the psychology of students who put in more effort to improve their examination performance up to the level of the best student in the class.
Swarm-based approaches imitate the social behavior and communication within a group of animals, plants, or other living things. These approaches have gained increasing popularity in terms of both application and new algorithm development. Some of the recently proposed algorithms in this category are the slime mould algorithm (SMA) [30], hunger games search (HGS) [31], Harris hawks optimization (HHO) [32], moth search algorithm (MSA) [33], monarch butterfly optimization (MBO) [34], golden eagle optimizer (GEO) [35], and tuna swarm optimization (TSO) [36].
Compared with the other three classes, swarm-based algorithms show certain advantages. Manta ray foraging optimization (MRFO), with few adjustable parameters, is easy to implement, which makes it promising for applications in many engineering fields. Therefore, this paper improves the MRFO algorithm, naming the result MRFO based on Latin hypercube sampling and group learning (LGMRFO). MRFO falls into the fourth class of optimization algorithms, as it originates from the swarm behavior of manta rays (a kind of sea animal).

MRFO
MRFO updates the individual position by three foraging behaviors: chain foraging, cyclone foraging, and somersault foraging. The mathematical models are described below.
3.1. Chain Foraging. Manta rays line up head-to-tail and form a foraging chain. In each iteration, each individual is updated by the best solution found so far and the solution in front of it. The mathematical model of chain foraging is represented as follows:

x_i^d(t+1) = x_i^d(t) + r·(x_best^d(t) − x_i^d(t)) + α·(x_best^d(t) − x_i^d(t)),  i = 1
x_i^d(t+1) = x_i^d(t) + r·(x_{i−1}^d(t) − x_i^d(t)) + α·(x_best^d(t) − x_i^d(t)),  i = 2, …, N   (1)

α = 2r·sqrt(|log(r)|)   (2)

where x_i^d(t) is the position of the ith individual in the dth dimension at the t-th iteration, r is a random vector within the range [0, 1], α is a weight coefficient, x_best^d(t) is the plankton with high concentration (the best solution found so far), and N denotes the population size.
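As a concrete illustration, the chain-foraging update above can be sketched in NumPy (a minimal sketch of the published formulas; the function and variable names are our own):

```python
import numpy as np

def chain_foraging(X, x_best):
    """One chain-foraging update: each ray moves towards the best solution
    found so far and towards the ray in front of it in the chain;
    alpha = 2*r*sqrt(|log r|) is the weight coefficient."""
    N, d = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r = np.random.rand(d)
        alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))
        front = x_best if i == 0 else X[i - 1]   # the individual in front
        X_new[i] = X[i] + r * (front - X[i]) + alpha * (x_best - X[i])
    return X_new
```

Each row of `X` thus moves partly towards the individual ahead of it and partly towards the best position found so far.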

Cyclone Foraging.
When manta rays find plankton in deep water, they form a long foraging chain and swim towards the food in a spiral. In the cyclone foraging behavior, in addition to moving spirally towards the food, each manta ray swims towards the one in front of it. The mathematical model of the exploitation stage of cyclone foraging can be calculated as follows:

x_i^d(t+1) = x_best^d + r·(x_best^d(t) − x_i^d(t)) + β·(x_best^d(t) − x_i^d(t)),  i = 1
x_i^d(t+1) = x_best^d + r·(x_{i−1}^d(t) − x_i^d(t)) + β·(x_best^d(t) − x_i^d(t)),  i = 2, …, N   (3)

β = 2·e^(r1·(T−t+1)/T)·sin(2π·r1)   (4)

where β is a weight factor, T is the maximum number of iterations, and r1 is a random number in [0, 1]. In equation (3), MRFO focuses on local exploitation. In addition, by taking a random position in the search space as the reference position, this behavior can also be used to improve the exploration mechanism of the algorithm. The mathematical model is as follows:

x_rand^d = Lb^d + r·(Ub^d − Lb^d)
x_i^d(t+1) = x_rand^d + r·(x_rand^d − x_i^d(t)) + β·(x_rand^d − x_i^d(t)),  i = 1
x_i^d(t+1) = x_rand^d + r·(x_{i−1}^d(t) − x_i^d(t)) + β·(x_rand^d − x_i^d(t)),  i = 2, …, N   (5)

where x_rand^d is a random position produced in the search space, and Lb^d and Ub^d are the lower and upper limits of the dth dimension, respectively.
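For reference, a NumPy sketch of cyclone foraging (following the published MRFO formulas; names are ours) that switches between the exploratory and exploitative reference positions:

```python
import numpy as np

def cyclone_foraging(X, x_best, t, T, lb, ub):
    """Cyclone-foraging update: rays spiral towards a reference position,
    which is either a random point in the search space (exploration) or
    the best solution found so far (exploitation), chosen by comparing
    t/T with a random number."""
    N, d = X.shape
    X_new = np.empty_like(X)
    r1 = np.random.rand()
    # spiral weight factor beta
    beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2.0 * np.pi * r1)
    if t / T < np.random.rand():
        ref = lb + np.random.rand(d) * (ub - lb)   # exploration: random reference
    else:
        ref = x_best                               # exploitation: best-so-far
    for i in range(N):
        r = np.random.rand(d)
        front = ref if i == 0 else X[i - 1]
        X_new[i] = ref + r * (front - X[i]) + beta * (ref - X[i])
    return X_new
```

Early in the run t/T is small, so the random reference (exploration) is chosen more often; late in the run the best-so-far reference (exploitation) dominates.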

Somersault Foraging.
In this foraging behavior, the position of the food is regarded as a pivot. Each individual tends to swim to and fro around the pivot and somersault to a new position. The mathematical model is as follows:

x_i^d(t+1) = x_i^d(t) + S·(r2·x_best^d − r3·x_i^d(t)),  i = 1, …, N   (6)

where S is the somersault factor that decides the somersault range of manta rays, with S = 2, and r2 and r3 are two random numbers in [0, 1]. MRFO balances global exploration and local exploitation by the change in t/T, where t is the current iteration number and T is the maximum number of iterations. When t/T < rand, the random position in the search space is selected as the reference position for global exploration. When t/T ≥ rand, the optimal individual is taken as the reference point, and the algorithm focuses on local exploitation.
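The somersault update is short enough to state directly (again a sketch of the published formula, with our own names):

```python
import numpy as np

def somersault_foraging(X, x_best, S=2.0):
    """Somersault update: each ray flips to a new position around the
    food pivot x_best; S is the somersault factor (S = 2 in MRFO)."""
    r2 = np.random.rand(*X.shape)
    r3 = np.random.rand(*X.shape)
    return X + S * (r2 * x_best - r3 * X)
```

Because r2 and r3 are drawn independently, a ray can land on either side of the pivot.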

LGMRFO
In order to improve the performance of MRFO, this paper improves it in three aspects. Firstly, the LHS method is used to initialize the population and enhance its diversity. Secondly, in the exploration stage of cyclone foraging, the Levy flight strategy is introduced to accelerate convergence, and before the somersault foraging, an adaptive t-distribution mutation operator is added to update the population position and avoid falling into local optima. Finally, a group learning strategy is set up to improve the optimization accuracy of the algorithm.

LHS Method Population Initialization Strategy.
In the basic MRFO, the initial population is generated randomly. A population generated this way is often unevenly distributed and may even contain overlapping individuals, which reduces the optimization performance of the algorithm to a certain extent. The LHS method is a multidimensional stratified sampling technique proposed by McKay et al. [37], which has the following advantages over simple random sampling.
(1) The sampling points generated by LHS achieve full space coverage and can be evenly distributed in the search space; (2) LHS has better robustness and stability. Therefore, in order to enhance the diversity of the initial population and improve performance, we adopt the LHS method to initialize the population.
Assuming that N initial individuals are generated in the d-dimensional space, the specific steps to initialize the population with the LHS method are as follows: Step 1. Determine the population size N and dimension d.
Step 2. Determine the interval for individual x as [lb, ub], where lb and ub are the lower and upper bounds of the variable x, respectively.
Step 3. Divide the interval of variable x into N equal small intervals.
Step 4. Randomly select a point in each subinterval of each dimension.
Step 5. Combine the extracted points of each dimension to form the initial population. The sample points generated by the LHS method are thus more evenly distributed in the search space. Therefore, using the LHS method to initialize the population of the MRFO algorithm makes the population positions evenly distributed in the search space and enhances population diversity, improving the convergence performance of the algorithm.
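The five steps above amount to stratified sampling per dimension followed by an independent shuffle of the strata; a small NumPy sketch (function name ours):

```python
import numpy as np

def lhs_init(N, d, lb, ub, rng=None):
    """Latin hypercube initialization: each dimension's range is split
    into N equal strata; one point is drawn per stratum, and the strata
    are permuted independently per dimension so that every individual
    pairs strata at random."""
    rng = np.random.default_rng(rng)
    u = rng.random((N, d))                                    # offset inside each stratum
    strata = np.stack([rng.permutation(N) for _ in range(d)], axis=1)
    unit = (strata + u) / N                                   # samples in [0, 1)
    return lb + unit * (ub - lb)
```

By construction, every one of the N strata of every dimension contains exactly one sample, which is what guarantees the even spread over the search space.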

Levy Flight.
In some cases, due to the random individual selection in each iteration, premature convergence may occur, thereby increasing the running time, so different mechanisms can be used to improve the MRFO algorithm.
This paper uses the Levy flight mechanism [38] for local disturbance. The mechanism is based on random-walk behavior, and its mathematical model is

Levy = u / |v|^(1/λ),   (7)

where u and v come from normal distributions, i.e.,

u ~ N(0, σu²),  v ~ N(0, σv²),   (8)

and the values of σu and σv are

σu = {Γ(1 + λ)·sin(πλ/2) / [Γ((1 + λ)/2)·λ·2^((λ−1)/2)]}^(1/λ),  σv = 1,   (9)

where Γ is the standard gamma function and λ is the Levy index (typically λ = 1.5). The position update formula of the cyclone foraging exploration stage with the addition of the Levy flight strategy is

x_i^d(t+1) = x_rand^d + r ⊗ Levy·(x_rand^d − x_i^d(t)) + β·(x_rand^d − x_i^d(t)),   (10)

where ⊗ denotes point-to-point (element-wise) multiplication.
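Mantegna's algorithm, which the formulas above describe, can be sketched as follows (λ = 1.5 is the usual choice; names are ours):

```python
import math
import numpy as np

def levy_step(d, lam=1.5, rng=None):
    """Mantegna's algorithm for a d-dimensional Levy-flight step:
    step = u / |v|^(1/lam), with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    rng = np.random.default_rng(rng)
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma_u, d)
    v = rng.normal(0.0, 1.0, d)
    return u / np.abs(v) ** (1 / lam)
```

Most steps are small, but the heavy tail occasionally produces a very long jump, which is what lets the search escape premature convergence.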

Adaptive t-Distribution.
The t-distribution is also called the Student distribution [39], and its shape is closely related to the degrees of freedom. In order to enhance the diversity of the population and avoid falling into local optima, this paper introduces the adaptive t-distribution strategy to disturb the manta ray population before the somersault foraging behavior. The calculation formula is

x_new = x_old + t(iter)·x_old,   (11)

where x_old is the original individual, x_new is the new individual after mutation, and t(iter) is a random number drawn from the t-distribution with the current iteration number iter as the degrees of freedom.
In the early stage of the iteration, the degrees of freedom are small (the number of iterations is small), and the t-distribution is similar to the Cauchy distribution. At this time, the update step size is larger, which expands the search field of the individual and improves the global exploration ability. In the middle and later iterations, the degrees of freedom gradually increase, and the t-distribution behaves like a Gaussian distribution. At this time, the update step size is smaller, which helps the algorithm search around the current individual's neighbourhood, giving better local exploitation ability.
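This behavior is easy to reproduce with NumPy's t-distribution sampler (a sketch; the greedy selection between x_old and x_new described in the algorithm steps is applied separately):

```python
import numpy as np

def t_mutation(X, iteration, rng=None):
    """Adaptive t-distribution mutation: x_new = x_old + t(iter) * x_old,
    with the degrees of freedom equal to the current iteration count.
    Small df (early iterations) gives heavy Cauchy-like tails and large
    steps; large df (late iterations) approaches a Gaussian and gives
    small, local steps."""
    rng = np.random.default_rng(rng)
    return X + X * rng.standard_t(iteration, size=X.shape)
```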

Group Learning Strategy.
In the process of algorithm evolution, some individuals may reach the optimal position, while the fitness values of others may worsen. In order to overcome this defect, inspired by the salp swarm algorithm (SSA) [40], individuals with poor locations should learn foraging skills from individuals with good locations. Based on this idea, a group learning strategy is proposed. The population after somersault foraging is evenly divided into two groups according to fitness value: the group with better fitness is called the leader group, and the group with worse fitness is called the follower group.

Leader Group Learning Strategy.
The differential evolution (DE) algorithm [41] works well on complex optimization problems. In this paper, a differential evolution strategy is used to generate new leader group individuals, and a greedy strategy is used to select the better individual. The specific mathematical model is

x_new = x'_best + F·(x_m − x_n),   (12)

where x_new is the new individual produced by mutation; x'_best is generated from the optimal individual x_best by randomly permuting its dimensions; F is the scaling factor, with F = 0.5; and x_m and x_n are two different leaders randomly selected from the leader group, both different from the current individual. The new individual generated by this strategy is compared with the original individual, and the one with better fitness is kept as the current individual.
Compared with mutating whole individuals, this strategy is more selective, which effectively enhances the local exploitation performance and improves the convergence accuracy of the algorithm.

Follower Group Learning Strategy.
Each follower in the follower group learns from the average of two leaders (equation (13)), where x_i new follower refers to the new individual generated after the ith individual of the follower group learns from the leader group, and x_i leader represents the ith individual of the leader group. The new follower individual is compared with the original follower individual, and the one with the better fitness value is kept as the current follower individual.
By learning from the leader group, the follower group can greatly improve its fitness, realize the conversion from follower to leader, and thus improve the convergence speed of the algorithm.
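Putting the two learning rules together, the group learning step for a minimization problem might look as follows. This is a sketch under the paper's description: we use x_best directly rather than the dimension-permuted x'_best, and pick the two leaders for each follower at random; both are simplifying assumptions.

```python
import numpy as np

def group_learning(X, fitness, f, F=0.5, rng=None):
    """Group-learning step.  The better half of the population (by fitness)
    forms the leader group, the worse half the follower group.  Leaders
    mutate via a DE-style rule; followers move to the mean of two randomly
    chosen leaders.  Greedy selection keeps a new position only if it
    improves fitness."""
    rng = np.random.default_rng(rng)
    N = len(X)
    order = np.argsort(fitness)                  # ascending: best first
    leaders, followers = order[:N // 2], order[N // 2:]
    x_best = X[order[0]].copy()                  # assumption: plain best, not x'_best
    for i in leaders:
        m, n = rng.choice([j for j in leaders if j != i], 2, replace=False)
        trial = x_best + F * (X[m] - X[n])       # DE-style leader mutation
        ft = f(trial)
        if ft < fitness[i]:
            X[i], fitness[i] = trial, ft
    for i in followers:
        m, n = rng.choice(leaders, 2, replace=False)
        trial = (X[m] + X[n]) / 2.0              # follower learns from two leaders
        ft = f(trial)
        if ft < fitness[i]:
            X[i], fitness[i] = trial, ft
    return X, fitness
```

Since every replacement is guarded by a greedy comparison, no individual's fitness can get worse in this step.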

LGMRFO Algorithm Implementation Steps.
The specific implementation steps of the LGMRFO algorithm are as follows: Step 1. Set the relevant parameters: population size N, variable dimension D, and maximum number of iterations T, and initialize the population positions by the LHS method.
Step 2. The fitness value of each individual is calculated, and the initial optimal individual position and its optimal fitness value are obtained.
Step 3. Enter the algorithm iteration process. When rand ≥ 0.5, chain foraging is performed and the individual position is updated according to equation (1); otherwise, cyclone foraging is performed: when t/T < rand, the individual enters the exploration stage, introduces the Levy flight strategy, and updates its position according to equation (10); when t/T ≥ rand, the individual enters the exploitation stage and updates its position according to equation (3).
Step 4. Before performing the somersault foraging behavior, the adaptive t-distribution strategy is applied: the individual position is updated according to equation (11), and the current individual is selected greedily.
Step 5. The somersault foraging behavior is performed, and the individual position is updated according to equation (6).
Step 6. The group learning strategy is implemented: the updated population is divided into a leader group and a follower group according to fitness value, and new individuals are generated by learning according to equations (12) and (13), respectively. If the fitness improves after learning, the current individual position is updated; otherwise, it is not.
Step 7. Update the optimal location and its optimal fitness value of each generation.
Step 8. Judge whether the algorithm meets the termination condition. If so, the algorithm terminates; otherwise, go to Step 3. The pseudocode of LGMRFO is shown in Algorithm 1.

Time Complexity of the LGMRFO.
The overall time complexity of MRFO is O(T × n × d), where T is the maximum number of iterations, n is the number of individuals, and d is the number of variables.
The LGMRFO proposed in this paper only adds computation in the adaptive t-distribution mutation and group learning steps, each of which also costs O(n × d) per iteration. Therefore, the overall time complexity of LGMRFO is O(T × n × d). This shows that the time complexity of LGMRFO is consistent with that of MRFO.
Input: The size of population N, the maximal number of iterations T, and the manta rays X. Output: The best solution Xbest.
(1) Compute the fitness of each individual fi = f(Xi) and obtain the best solution found so far Xbest, where lb and ub are the lower and upper boundaries of the problem space, respectively.
(2) While t < T do
(3)   For i = 1 to N do
(4)     If rand < 0.5 then
(5)       If t/T < rand then
(6)         Generate a random reference position and a Levy step
(7)         Perform the exploratory behavior of cyclone foraging according to equation (10)
(8)       else
(9)         Perform the exploitative behavior of cyclone foraging according to equation (3)
(10)      end
(11)    else
(12)      Perform the chain foraging according to equation (1)
(13)    end
(14)  end
(15)  Greedy selection of the current individual
(16)  For i = 1 to N
(17)    Perform the adaptive t-distribution strategy according to equation (11)
(18)  end
(19)  Greedy selection of the current individual
(20)  For i = 1 to N
(21)    Perform the somersault foraging according to equation (6)
(22)  end
(23)  Divide the population into two groups according to the fitness value
(24)  Perform the group learning strategy according to equations (12) and (13)
(25)  Compute the fitness of each individual fi = f(Xi) and obtain the best solution found so far Xbest
(26)  t = t + 1
(27) end
(28) Return the best solution found so far Xbest
ALGORITHM 1: LGMRFO algorithm.

Results Evaluation of General Functions.
Simulation and comparison experiments of six algorithms were conducted in the Matlab R2018a environment. To reduce chance errors, each benchmark function was run 30 times independently, and the optimal value, the worst value, the average value, and the standard deviation were used as evaluation indexes. The population size was set to 30 and the maximum number of iterations to 500; the best outcomes are highlighted in bold. Functions F1 to F12 are tested at D = 50 (low-dimensional) and D = 500 (high-dimensional), and F13 to F15 are fixed-dimension multipeak functions. Due to the article length constraint, the iterative convergence curves of the six algorithms are plotted at 500 dimensions for the four single-peak test functions and eight multipeak test functions, along with the three fixed-dimension test functions, as shown in Figure 3.
Table 2 shows the test results for the three fixed-dimension multipeak functions F13 to F15. Table 2 and Figures 3(e) and 3(f) show that LGMRFO converges faster and with better optimization accuracy than the other algorithms, and its standard deviation, which reflects the stability of the solution process, is the smallest, indicating that LGMRFO is the most stable.

Evaluation of Low-Dimensional Functions.
Table 3 compares the algorithms' results on the function tests in 50 dimensions. As shown in Table 3, both LGMRFO and basic MRFO reach the theoretical optimal value on the single-peak low-dimensional functions with a standard deviation of 0. This indicates that LGMRFO's optimization ability is more stable than that of the other algorithms, and its convergence speed is significantly faster than the other intelligent algorithms, including MRFO, showing that the improvement strategies clearly enhance MRFO's convergence performance. Table 3 also shows that LGMRFO obtains higher-accuracy solutions on the multipeak low-dimensional functions, especially F5, F6, F7, F8, and F10. Although the average solutions for F9, F11, and F12 do not reach the theoretical optimal value, LGMRFO's overall convergence performance on these functions ranks 2nd, 1st, and 2nd, respectively, compared with the other algorithms. Except for F9 and F12, LGMRFO has the smallest standard deviation on the remaining functions, hence its robustness and stability are higher.

Evaluation of High-Dimensional Functions.
Table 3 also compares the algorithms' outcomes on the 500-dimensional function tests. Compared with the low-dimensional results, LGMRFO still obtains better outcomes in both search accuracy and convergence speed, although increasing the dimensionality from low to high affects every algorithm's convergence performance. Both LGMRFO and basic MRFO approach the theoretical optimum with a standard deviation of 0 on the single-peak high-dimensional functions, indicating that both are stable, and the convergence speed of LGMRFO is faster than the other algorithms, as shown in Figures 3(a)-3(d), demonstrating the superiority of the improved strategies. The convergence results of the other compared algorithms are worse than on the low-dimensional functions, and their standard deviations are also higher, indicating that they are less robust on single-peak high-dimensional functions. LGMRFO ranks first in the multipeak high-dimensional function test except for function F9, and it also ranks first on F12, where it only ranked second in low dimensions, indicating that the improvement strategies remain effective in higher dimensions. In terms of convergence performance on high-dimensional functions, LGMRFO still outperforms the other five algorithms.
Table 4 compares the algorithms' outcomes on some CEC2017 test suite functions, and Figure 4 shows the corresponding average convergence curves. LGMRFO achieves the best results on CF2, CF4, CF7, CF8, CF10, CF17, and CF20, showing that the overall performance of LGMRFO is strong and that it transitions smoothly between exploration and exploitation.

Wilcoxon Rank Sum Test.
The Wilcoxon rank sum test [43] is a nonparametric statistical test used to determine whether the LGMRFO method differs significantly from the others. The results of 30 independent runs on the 15 test functions and 9 CEC2017 functions were used as samples, and the Wilcoxon rank sum test was applied to compare the solution results of the five compared algorithms with those of LGMRFO for the 50-dimensional, 500-dimensional, fixed-dimensional, and 9 CEC2017 functions, respectively. Tables 5-7 show the outcomes of the tests. According to the literature [44], the null hypothesis is rejected when P < 0.05, indicating that the two algorithms are statistically different, whereas P > 0.05 implies that the two algorithms produce equivalent search results. "NaN" means that both associated algorithms find the theoretical optimal solution, so the hypothesis test is not applicable. In the 50-dimensional case, the LGMRFO algorithm performs significantly better than the other compared algorithms, with the exception of MRFO, whereas in the 500-dimensional case, LGMRFO performs even more significantly better than in the 50-dimensional one. For the 9 CEC2017 functions, 42 of the 45 P values are less than 0.05, comprising 93.3% of the total. This shows that LGMRFO has statistical advantages over the other compared algorithms.

Coverage Optimization of WSN Based on LGMRFO

6.1. WSN Coverage Model. In this research, we calculate network coverage using the more standard Boolean sensing model [45].
Assume that N isomorphic sensor nodes are randomly deployed in a square WSN monitoring region with side length L. The set of nodes is V = {v1, v2, ..., vN}, where node vi has location coordinates (xi, yi) and sensing radius Rs. To simplify the calculation, the area is discretized into m × n target grid points to be covered, and the set of target points is denoted as uj = (xj, yj), j ∈ {1, 2, ..., m × n}. The distance between sensor node vi and target point uj is defined as

d(vi, uj) = sqrt((xi − xj)² + (yi − yj)²).   (14)

A target point is covered if there is a node whose distance from it is less than or equal to the sensing radius Rs. According to the Boolean model, the probability that sensor node vi detects target point uj is defined as

P(vi, uj) = 1, if d(vi, uj) ≤ Rs; 0, otherwise.   (15)

When the target point can be sensed by more than one sensor, the joint sensing probability of the target point is defined as

P(V, uj) = 1 − ∏_{i=1}^{N} (1 − P(vi, uj)).   (16)

The area network coverage is the sum of the joint sensing probabilities of all target points divided by the total number of target points in the area:

Cov = (1/(m × n)) · Σ_{j=1}^{m×n} P(V, uj).   (17)

As a result, the WSN coverage optimization problem can be defined as deploying N sensor nodes over the monitoring area with an optimization technique so as to cover the target grid points, which turns it into a single-objective optimization problem that maximizes equation (17).
6.2. Analysis of Simulation. Two sets of experiments are used in this work to verify the efficiency of LGMRFO on WSN coverage optimization. The experimental settings are stated in Table 8. Figure 5 shows the sensor area coverage after algorithm optimization. The node distributions at 30 nodes are shown in Figures 5(a) and 5(b), with MRFO achieving 82.43% coverage and LGMRFO 84.78%.
As shown in Figure 5(a), there are still coverage blind spots in the monitoring region and node overlapping coverage is evident, whereas the nodes optimized by LGMRFO are distributed more evenly. After LGMRFO optimization, the coverage rate reaches 92.62%, and the node overlapping area is greatly reduced. Table 9 shows the coverage of MRFO and LGMRFO over 20 independent runs of 500 iterations each. As can be seen from Table 9, LGMRFO's final and initial coverage are both higher than those of the MRFO algorithm, indicating that the LHS initialization and the improved position updates raise the algorithm's search accuracy. The average coverage iteration curves are given in Figure 6. In Figure 6(a), LGMRFO's coverage in the middle of the iteration is slightly lower than MRFO's at 30 nodes, owing to the premature convergence caused by MRFO converging too quickly. LGMRFO gradually surpasses MRFO after 300 iterations, suggesting that MRFO has entered a local optimum, whereas LGMRFO jumps out of the local optimum and its optimization accuracy keeps improving, demonstrating that the group learning strategy is effective: the population quality (node distribution) improves and the coverage rate continues to increase.
LGMRFO's coverage is greater than MRFO's when 35 nodes are deployed, which corresponds to an increase in individual dimension, and both the convergence speed and coverage are much greater than the MRFO algorithm's average optimization result.
In summary, by comparing the experimental results of deploying different numbers of nodes, LGMRFO achieves higher average network coverage under the same conditions, and the node layout is more reasonable, resulting in fewer coverage blind areas and overlapping areas, proving the effectiveness of the improved strategy.
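The Boolean coverage model used in these experiments is straightforward to implement; a minimal NumPy sketch (function name and grid resolution are our own choices for illustration):

```python
import numpy as np

def coverage_rate(nodes, L, Rs, grid=50):
    """Boolean-disc coverage: the fraction of grid points lying within
    sensing radius Rs of at least one node.  nodes is an (N, 2) array of
    coordinates inside the [0, L] x [0, L] monitoring region."""
    xs = np.linspace(0.0, L, grid)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)           # (grid*grid, 2)
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)
    return float((d <= Rs).any(axis=1).mean())
```

In the optimization, a candidate solution is a flat vector of 2N node coordinates, and the objective passed to LGMRFO is simply the negative (or complement) of this coverage rate.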

Conclusion
To overcome the inadequacies of the manta ray foraging optimization method in terms of optimization accuracy, this work proposes an improved manta ray foraging optimization algorithm (LGMRFO). Firstly, to improve the quality of the initial population, the LHS method is used to homogenize the population position distribution. Secondly, the Levy flight and adaptive t-distribution mutation strategies are applied in the cyclone foraging exploration phase and before the somersault foraging behavior, respectively, to improve the algorithm's ability to jump out of local optima. Finally, a group learning strategy is applied to the updated population. On 24 typical test functions, the LGMRFO algorithm is compared with five other algorithms, and the significance of the results is validated using the Wilcoxon rank sum test.
LGMRFO greatly enhances convergence speed, optimization-seeking accuracy, and global optimization capability, according to the findings of the experiments. Finally, on the WSN coverage optimization problem, LGMRFO is compared to MRFO, and the experimental findings support the usefulness of the proposed improvement strategies. As future challenges, different applications other than WSN coverage optimization of LGMRFO can be explored and its capabilities in dealing with difficult test problems can be examined. Besides, new configurations of this algorithm can be considered as other researchers may have different viewpoints on the presented methodology.

Data Availability
The data used to support the study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.