K-Means Segmentation of Underwater Image Based on Improved Manta Ray Algorithm

Image segmentation plays an important role in daily life. Traditional K-means image segmentation suffers from randomness and easily falls into local optima, which greatly reduces segmentation quality. To address these problems, a K-means image segmentation method based on improved manta ray foraging optimization (IMRFO) is proposed. IMRFO uses Lévy flight to improve the flexibility of individual manta rays and then puts forward random walk learning to prevent the algorithm from falling into local optima. Finally, the learning idea of particle swarm optimization is introduced to enhance the convergence accuracy of the algorithm, which effectively improves the global and local optimization ability of the algorithm simultaneously. With a reduced probability of falling into local optima, the optimized K-means exhibits stronger stability. On 12 standard test functions, IMRFO is compared with 7 basic algorithms and 4 variant algorithms. The optimization indices and statistical tests show that IMRFO has better optimization ability. Eight underwater images were selected for the experiment, and 11 algorithms were compared. The results show that IMRFO achieves better PSNR, SSIM, and FSIM on each image, and the optimized K-means image segmentation performance is better.


Introduction
In recent years, image segmentation has attracted much attention from researchers and is of great significance to the future of image processing. As a key step of image processing, image segmentation plays an important role in extracting objects of interest from images. At present, it has important research value in medicine, agriculture, ocean, and other fields. Image segmentation methods can be divided into four categories: threshold segmentation, region segmentation, edge segmentation, and segmentation methods based on specific theories. The clustering algorithm is a typical unsupervised learning algorithm. It uses the idea of cluster differentiation to solve problems in a way that is simple and easy to understand, and it has been successfully applied in many fields [1]. Clustering-based image segmentation has also been studied successfully. K-means is the most common and easiest clustering method among them, but it has the disadvantages of large randomness and easily falling into local optima, which makes it impossible to control the cluster centers reasonably. Swarm intelligence algorithms have global optimization performance and strong versatility and are suitable for parallel processing. This type of algorithm can find the optimal solution, or approximate it, within a certain period of time [2]. Intelligent optimization algorithms open up a new way for image segmentation.
In terms of clustering segmentation, Hrosik et al. improved the K-means clustering algorithm based on the firefly algorithm, achieving better average segmentation error, peak signal-to-noise ratio, and structural similarity index on medical images [3]; Li et al. proposed a K-means clustering algorithm based on dynamic particle swarm optimization (DPSO), which had better visual effect than traditional K-means clustering in image segmentation and obvious advantages in improving segmentation quality and efficiency [4]; Shubham et al. applied the gray wolf optimizer (GWO) [5] to the segmentation of satellite images [6]. Therefore, intelligent optimization algorithms are of great significance in the field of image segmentation.
At present, researchers have noticed this point and carried out successive studies. For example, Mohamed Abd Elaziz combines fractional calculus with MRFO to correct the direction of manta ray movement; this algorithm has been verified on the CEC 2017 test functions and applied to image segmentation problems with good feasibility [33]. Mohamed H. Hassan combines a gradient optimizer with MRFO to reduce the probability that the algorithm falls into a local optimum, with good results in single-objective and multiobjective economic emission scheduling [34]. Haitao Xu uses adaptive weighting and chaos to improve MRFO so as to handle thermodynamic problems efficiently [35]. Essam H. Houssein uses opposition-based (reverse) learning to initialize the population, increasing its diversity, and applies the result to threshold image segmentation with good segmentation quality [36]. Bibekananda Jena adds an attack capability to MRFO, allowing it to jump out of local optima and find a globally optimal solution, and applies it to the 3D Tsallis image segmentation problem [37]. Mihailo Micev fuses Simulated Annealing (SA) with MRFO and applies it to the Proportional Integral Derivative (PID) controller; the fused algorithm is superior to other algorithms [38]. In addition, Serdar et al. adopt opposition-based learning and SA to improve the convergence of MRFO, with better control performance when applied to a fractional-order proportional integral derivative (FOPID) controller [39]. Although the currently proposed variants of MRFO have achieved some results, the following problems still exist: (1) Most scholars fuse other algorithms to improve the search ability, but this brings higher time complexity, and the fused algorithms may not complement each other well enough to give perfect results.
(2) Opposition-based (reverse) learning can only invert solutions within a certain space; in complex high-dimensional situations, there are few individual optimization moves, and individuals cannot jump out of the local optimum perfectly. (3) In the optimization process, the above algorithms cannot completely balance local and global search capabilities, which results in insufficient convergence accuracy.
Based on the above analysis, this paper presents an improved manta ray algorithm, which uses random walk learning to make individuals wander randomly in space, increasing the diversity of the population and avoiding premature convergence; Lévy flight is then used for long- and short-distance searches to balance the local and global search of the algorithm. Finally, the learning idea of particle swarm optimization is introduced, with two learning factors used to improve the convergence accuracy of the algorithm. Twelve functions are used to verify the validity and feasibility of IMRFO. Then eight underwater image datasets are used in K-means image segmentation.
The results show that IMRFO has better generalization ability and better segmentation quality.
The innovations and contributions of this paper are as follows: (i) A random walk learning strategy is designed to increase the diversity of the population and reduce the probability of the algorithm falling into a local optimum. (ii) Lévy flight and learning factors are introduced to balance the search ability of the algorithm, giving it a good convergence effect. (iii) On 12 standard test functions, IMRFO is compared with 7 other algorithms to show its superiority and feasibility. Next, two statistical tests are used to emphasize the optimization performance of the algorithm, and it is compared with recently proposed variant algorithms. Finally, ablation experiments are performed; all the results show that IMRFO has good search ability. (iv) IMRFO is applied to K-means underwater image segmentation, and comparisons with 11 algorithms show that IMRFO performs well. The structure of this paper is as follows: Section 2 introduces the basic MRFO algorithm. Section 3 introduces the improved IMRFO algorithm and related analysis. Section 4 describes the process of IMRFO-optimized K-means image segmentation. Section 5 tests the performance of IMRFO and compares the related algorithms. Section 6 describes and analyses the performance of each algorithm in K-means image segmentation. Section 7 summarizes the experimental results of this paper. The last section discusses the advantages and disadvantages of IMRFO and future research directions.

Manta Ray Foraging Optimization
Manta rays feed on plankton, mainly aquatic microfauna. When feeding, they funnel water and prey into their mouths with their cephalic lobes and then filter the prey out of the water through their gill rakers. Individual manta rays work together to find the best food source. Inspired by this behavior, the algorithm is divided into three stages: chain foraging, spiral (cyclone) foraging, and somersault foraging.

Chain Feeding.
At this stage, the manta ray population is arranged in an orderly chain to collaborate in feeding, which maximizes the amount of plankton captured. The mathematical model of the chain feeding process can be expressed as follows:

x_i^d(t+1) = x_i^d(t) + r · (x_best^d(t) − x_i^d(t)) + α · (x_best^d(t) − x_i^d(t)), i = 1,
x_i^d(t+1) = x_i^d(t) + r · (x_{i−1}^d(t) − x_i^d(t)) + α · (x_best^d(t) − x_i^d(t)), i = 2, …, N,
α = 2r · sqrt(|log r|). (1)

In formula (1), x_i^d(t) denotes the d-dimensional position of the i-th manta ray in generation t, r is a random number that obeys a uniform distribution on [0,1], and x_best^d(t) is the d-dimensional position of the best location found so far. The manta ray at position i depends on the manta ray at position i−1 and on the best food position found so far; N represents the population size, and the update of the first manta ray depends only on the optimal location.
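For illustration, the chain-foraging update can be sketched in NumPy as below; the function and variable names are ours, and a tiny lower bound on the uniform draw guards against log(0):

```python
import numpy as np

def chain_foraging(X, best, rng=None):
    """One chain-foraging step for the whole population (a sketch).

    X    : (N, D) array of manta-ray positions
    best : (D,)   best position found so far
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    new = np.empty_like(X)
    for i in range(N):
        r = rng.uniform(1e-12, 1.0, D)          # uniform random coefficient
        alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))
        leader = best if i == 0 else X[i - 1]   # first ray follows the best
        new[i] = X[i] + r * (leader - X[i]) + alpha * (best - X[i])
    return new
```

Note that if the whole population already sits on the best position, both difference terms vanish and the update leaves it unchanged.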

Spiral Feeding.
When a manta ray finds a good food source in a certain space, each individual approaches the manta ray in front of it while also moving spirally toward the food. The spiral-feeding process can be represented by the following mathematical model:

x_i^d(t+1) = x_best^d(t) + r · (x_{i−1}^d(t) − x_i^d(t)) + β · (x_best^d(t) − x_i^d(t)), (2)

where β = 2 · e^{r1·(T−t+1)/T} · sin(2πr1) is a weight factor representing the spiral motion, T is the maximum number of iterations, and r1 is the rotation factor, a uniform random number on [0,1]. In addition, in order to improve the efficiency of population foraging, MRFO randomly generates a new location during the optimization process and then performs a spiral search around that location:

x_i^d(t+1) = x_rand^d(t) + r · (x_{i−1}^d(t) − x_i^d(t)) + β · (x_rand^d(t) − x_i^d(t)), (3)

where x_rand^d(t) represents a new random location in the search space.
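A minimal sketch of the spiral-foraging step follows; the early/late switch between a random anchor and the best anchor reflects the usual exploration-to-exploitation schedule, and `lb`, `ub` (the search bounds) are our own parameter names:

```python
import numpy as np

def cyclone_foraging(X, best, t, T, lb, ub, rng=None):
    """One spiral (cyclone) foraging step, a sketch of the equations above.

    Early in the run the spiral is anchored at a random position for
    exploration; later it is anchored at the best position found so far.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    new = np.empty_like(X)
    for i in range(N):
        r = rng.random(D)
        r1 = rng.random(D)
        beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2.0 * np.pi * r1)
        if rng.random() > t / T:                     # exploration phase
            anchor = lb + rng.random(D) * (ub - lb)  # random reference point
        else:                                        # exploitation phase
            anchor = best
        leader = anchor if i == 0 else X[i - 1]
        new[i] = anchor + r * (leader - X[i]) + beta * (anchor - X[i])
    return new
```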

Somersault Foraging.
When a manta ray finds a food source, its position can be regarded as a pivot. Each manta ray tends to wander around this pivot and flip to a new location. Its mathematical model is as follows:

x_i^d(t+1) = x_i^d(t) + S · (r2 · x_best^d − r3 · x_i^d(t)), (4)

where S is the flip factor, which determines the flip distance, and r2 and r3 are two random numbers uniformly distributed on [0,1]. As S varies, individual manta rays flip to positions in the search space that are symmetric to their current location about the optimal solution.
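The somersault step can be sketched in a single vectorized line (names are ours):

```python
import numpy as np

def somersault_foraging(X, best, S=2.0, rng=None):
    """Somersault update sketch: each ray flips about the best position;
    S is the flip factor controlling the flip distance."""
    rng = np.random.default_rng() if rng is None else rng
    r2 = rng.random(X.shape)   # uniform [0, 1] draws, one per coordinate
    r3 = rng.random(X.shape)
    return X + S * (r2 * best - r3 * X)
```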

Improved Manta Ray Foraging Optimization
From the above formulas, it can be seen that more communication between individuals and orderly cooperation improve the search ability of the algorithm and allow a wide-ranging search. On the one hand, the lack of initiative of individuals in the population limits their exploitation ability. On the other hand, updates within the population are tied to the best location: when encountering high-dimensional complex problems, the optimal position changes little between iterations, so successive updates change little as well, which limits the algorithm's optimization ability. Therefore, a flexible strategy is needed to improve the exploitation ability and local convergence of the algorithm. This paper uses the Lévy flight strategy to reduce blind individual search, random walk learning to prevent the algorithm from falling into a local optimum, and the learning idea of particle swarm optimization to improve the search accuracy of the algorithm.

Why Each Modification Is Proposed.
MRFO is based on a group of animals collaborating in feeding, which results in fewer optimization moves and a lack of flexibility and fineness. Therefore, individual initiative is required to increase the diversity of the population in order to find high-quality solutions in space. This paper analyses and addresses the defects of the algorithm from the following three points. Firstly, the population individuals must be better distributed over the whole space so as to widen the algorithm's view and improve its global search ability. Lévy flight is a classical strategy that moves through a given space by alternating long and short steps; it has been used by many scholars to improve the search ability of algorithms. Secondly, some individuals need to be independent and not limited by group characteristics. Random walk learning is an uncertain way of walking. The traditional random walk can only be carried out in local areas; however, the random walk learning designed in this paper can create large location differences between individuals and improve the population diversity of the algorithm. Finally, information sharing among individuals is needed to improve the local search ability of the algorithm and find high-quality solutions. The learning factor, derived from particle swarm optimization, is used to speed up information exchange in the population, prevent early invalid search, improve the local search ability of the algorithm, and improve the accuracy of the solution to a certain extent.

Lévy Flight Strategy.
When manta ray individuals perform the chain search, all individuals follow the population, which leads to a lack of flexibility and a limited search range. Therefore, the Lévy flight strategy [40,41] is introduced to let individuals search over long and short distances, increase the diversity of the population, and let individuals spread over the whole space. The location update after joining the Lévy flight strategy is as follows:

x_i(t+1) = x_i(t) + l ⊕ Lévy(ξ), (8)

In formula (8), x_i(t) represents the position of the i-th individual in the t-th iteration, ⊕ is an operator representing element-wise multiplication, l = 0.01(x_i(t) − x_p) denotes a step-length control parameter, and x_p represents the position of the best individual in the population.
The Lévy flight formula is as follows [42]:

Lévy(ξ) = (r4 · σ) / |r5|^{1/ξ}, (9)

where r4 and r5 are random numbers within the range [0,1] and ξ generally takes the value 1.5. σ is calculated as follows:

σ = [Γ(1+ξ) · sin(πξ/2) / (Γ((1+ξ)/2) · ξ · 2^{(ξ−1)/2})]^{1/ξ}, (10)

where Γ(x) = (x − 1)!. A schematic diagram of the Lévy flight is shown in Figure 1. The Lévy flight can search over long and short distances in a given space and balance the global and local search of the algorithm.
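A Lévy step can be drawn with the standard Mantegna construction, sketched below. Note that the common implementation uses Gaussian draws for the two random numbers, whereas the text states uniform ones, so treat that choice here as our assumption:

```python
import math
import numpy as np

def levy_step(D, xi=1.5, rng=None):
    """Draw a D-dimensional Lévy-flight step (Mantegna's algorithm).

    sigma follows the standard Mantegna formula with index xi = 1.5 as in
    the paper; the Gaussian draws u, v are the usual Mantegna choice
    (our assumption).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1.0 + xi) * math.sin(math.pi * xi / 2.0)
             / (math.gamma((1.0 + xi) / 2.0) * xi * 2.0 ** ((xi - 1.0) / 2.0))
             ) ** (1.0 / xi)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return u / np.abs(v) ** (1.0 / xi)   # heavy-tailed step lengths
```

The heavy tail of the resulting distribution is what produces the occasional long jump mixed with many short local moves.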

Random Walk Learning.
In the optimization process, MRFO may fall into a local optimum, which makes the current best individual unreliable, so it is necessary to disperse the individuals to find a better solution. Unlike a plain random walk, learning factors tied to the best and worst locations are introduced to make individual escape directional and reduce unreasonable wandering. The specific mathematical model of RWL is given below, where (2/(1 + t/M)) · sin((π/2) · r) − 1 is a sinusoidal random factor that uses the mathematical properties of the sine function to fluctuate toward the optimal solution and continuously adjusts the step size based on the worst position of the current population, so that the search path can span the entire solution space. M is the maximum number of iterations, and c1 and c2 represent two learning factors, random numbers that obey a normal distribution and control the direction. As shown in Figure 2, (a) is the distribution of individuals without RWL and (b) the distribution with RWL introduced; the introduction of RWL lets individuals gather global information, makes the individual distribution more even, and helps find the global optimal solution.
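Since the full RWL formula is not reproduced here, the following is only a hypothetical sketch under stated assumptions: it combines the sinusoidal random factor with two normally distributed learning factors, pulling the walk toward the best position and pushing it away from the worst.

```python
import numpy as np

def random_walk_learning(x, best, worst, t, M, rng=None):
    """Hypothetical RWL sketch (the paper's exact update may differ).

    f is the sinusoidal random factor (2/(1 + t/M)) * sin((pi/2) * r) - 1;
    c1, c2 are normally distributed learning factors that control the
    direction of the walk.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random(x.shape)
    f = (2.0 / (1.0 + t / M)) * np.sin((np.pi / 2.0) * r) - 1.0
    c1, c2 = rng.normal(size=2)
    return x + f * (c1 * (best - x) + c2 * (x - worst))
```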

PSO Algorithm Learning Ideas.
There are two learning factors in PSO used to exploit local solutions, which can effectively improve the convergence accuracy of the algorithm. Therefore, a formula introducing two learning factors is adopted, where b1 and b2 are the two learning factors and BestX is the optimal position of the current population. As can be seen from the formula, this strategy exploits the region between the current individual and the optimal one to enhance the local search of the algorithm.
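A sketch of this exploitation step follows; b1 = 0.2 and b2 = 0.8 match the experimental settings reported later, while the exact combination of terms below is our assumption:

```python
import numpy as np

def pso_learning(x, best_x, b1=0.2, b2=0.8, rng=None):
    """Pull an individual toward the population best (BestX) with two
    learning factors, sampling the segment between the two positions."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    # both terms point at best_x, so the move stays on that segment
    return x + (b1 * r1 + b2 * r2) * (best_x - x)
```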

Improved Manta Ray Foraging Optimization.
To improve the local search capability of MRFO and reduce the probability of falling into a local optimum, an improved manta ray algorithm is presented in this paper. The algorithm uses random walk learning after each iteration to prevent falling into a local state and to improve the global search ability. Then, the Lévy flight mechanism is combined to reduce the blindness of the manta ray search and to balance the search ability of the algorithm. Finally, the two learning factors of particle swarm optimization are used to improve the search accuracy, so that the algorithm improves effectively in both local and global aspects. The specific pseudocode is shown in Algorithm 1.

K-Means Image Segmentation Based on IMRFO
The principle of the traditional K-means algorithm is to select K cluster centers randomly; this uncertain selection leads to large differences in the final results and makes it easy to fall into a local optimum. Therefore, it is necessary to select appropriate initial cluster centers. Intelligent optimization algorithms have been successfully applied to K-means to reduce its randomness and its tendency to fall into local optima. The improved manta ray foraging optimization optimizes K-means so that the initial cluster centers are well controlled. The objective function is as follows:

J = Σ_i min_j ||X_i − Y_j||²,

where X_i is a pixel gray value of the image and Y_j is the j-th clustering center. The optimal initial clustering centers are obtained by IMRFO by minimizing the fitness value of this objective function.
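The fitness that IMRFO minimizes can be sketched directly from this objective (function name is ours):

```python
import numpy as np

def kmeans_fitness(centers, pixels):
    """Objective minimized by IMRFO: the sum of squared distances from
    each pixel gray value to its nearest candidate cluster center."""
    centers = np.asarray(centers, dtype=float).reshape(-1, 1)  # (K, 1)
    pixels = np.asarray(pixels, dtype=float).reshape(1, -1)    # (1, P)
    d2 = (pixels - centers) ** 2       # (K, P) squared distances
    return float(d2.min(axis=0).sum())  # each pixel to its nearest center
```

Each IMRFO individual encodes K candidate centers; the individual with the smallest fitness supplies the initial centers handed to K-means.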
K-means image segmentation based on IMRFO is mainly divided into two parts: (1) use the global search capability of IMRFO to find the best initial cluster centers in the image point set; (2) the initial cluster centers output by IMRFO are used in the K-means algorithm for segmentation. The specific flow chart is shown in Figure 3.

Performance Analysis and Test
Twelve standard test functions are selected to verify the function optimization ability of IMRFO. The specific test function information is shown in Table 1. F1-6 are unimodal functions, F7-11 are complex multimodal functions, and F12 is a fixed-dimensional function. In addition, F1-11 are tested in different dimensions to verify the optimization ability of the algorithm in high-dimensional cases. To prove that IMRFO is competitive, seven algorithms, including MRFO, the Honey Badger Algorithm (HBA) [45], GWO, PSO, the Whale Optimization Algorithm (WOA) [46], Teaching-Learning-Based Optimization (TLBO) [47], and the Flower Pollination Algorithm (FPA) [48], are compared.
HBA is a new swarm intelligence algorithm proposed in 2021, while the other algorithms are classical ones that have been extensively studied. The number of iterations and the population size of each algorithm are 500 and 100, respectively. In HBA, β = 6 and C = 2; in FPA, the selection probability is p = 0.8; b1 and b2 in IMRFO are 0.2 and 0.8, respectively. The experimental environment is Windows 10 64-bit; the software is MATLAB R2019b; the memory is 16 GB; the processor is an Intel(R) Core(TM) i5-10200H CPU @ 2.40 GHz. The average, optimal value, and standard deviation of the results of 30 runs of each algorithm are calculated; if IMRFO attains the optimal value, the font is bolded. The optimization results of each algorithm are shown in Tables 2-3. On the one hand, from Tables 2 and 3, we can see that IMRFO has obvious advantages in search ability, and its results are better than those of the other algorithms on each function; increasing the dimension does not reduce the search ability of IMRFO. On the other hand, on F1, F6, F8-10, and F12, MRFO itself has a good optimization effect and can find the theoretical optimal value, and IMRFO achieves the same effect, so IMRFO does not weaken the original algorithm's optimization ability. Overall, IMRFO is effectively improved in stability and accuracy. It can be seen that the introduction of multiple strategies improves the algorithm's optimization ability and reduces the probability of entering a local optimum.

Statistical Test.
To verify whether IMRFO and the other seven algorithms differ significantly in global optimization, the 30-dimensional results of each algorithm are tested.
The Wilcoxon rank-sum test is used to find differences between pairs of algorithms. Assume H0: the two algorithms have the same performance; H1: there is an obvious difference between the two algorithms. The P value of the test result measures the difference between the two algorithms. When P < 0.05, H0 is rejected, showing a significant difference between the two algorithms; when P > 0.05, H0 is accepted, indicating that the two algorithms have the same global optimization performance.
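With SciPy, this decision rule can be sketched as follows (the run data here are illustrative, not the paper's results):

```python
import numpy as np
from scipy.stats import ranksums

# Illustrative 30-run results for two algorithms (not the paper's data):
# algorithm B is uniformly worse, so the ranks separate completely.
runs_a = np.arange(30, dtype=float)
runs_b = runs_a + 100.0

stat, p = ranksums(runs_a, runs_b)
verdict = "significant difference (reject H0)" if p < 0.05 else "N/A (accept H0)"
print(p, verdict)
```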
To clearly show the differences between the algorithms, N/A is used to represent values of P > 0.05. The Wilcoxon test results are shown in Table 4. At the same time, in order to better show the comprehensive optimization ability of IMRFO over the whole test set, the averages and variances of each algorithm are submitted to the Friedman test [49], and the final ranking is calculated to measure the universality of the algorithm across the 12 test functions. The test results are shown in Table 5.
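The Friedman test ranks the algorithms within each test function and checks whether the mean ranks differ; a minimal SciPy sketch with illustrative numbers:

```python
from scipy.stats import friedmanchisquare

# One list per algorithm, one entry per test function (illustrative values):
# algorithm 1 is best on every function, algorithm 3 worst.
alg1 = [1.0, 2.0, 1.5, 1.2, 0.9]
alg2 = [2.0, 3.0, 2.5, 2.2, 1.9]
alg3 = [3.0, 4.0, 3.5, 3.2, 2.9]

stat, p = friedmanchisquare(alg1, alg2, alg3)
print(stat, p)   # a small p means the mean ranks differ significantly
```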
From Table 4, it can be seen that IMRFO differs significantly from other algorithms. In some functions, MRFO itself has better searching ability, so the difference is not obvious. From Table 5, IMRFO ranks best in the search results of each function, which also indicates that it has a good universality.

Comparison with Variants of the Algorithm.
To further show the effectiveness and innovation of IMRFO, this paper compares it with the multistrategy serial cuckoo search algorithm (MSSCS) [50], the firefly algorithm with courtship learning (FACL) [51], self-adaptive cuckoo search (SACS) [52], and CSsin [53], all proposed in recent years. These four algorithms are variants of classical algorithms and have been validated on the CEC test set. The specific parameters of each algorithm are set as follows: the population size and number of iterations for each algorithm are as given above. Similarly, when IMRFO attains the optimal value, the font is bolded. The results of each algorithm are shown in Table 6.
From Table 6, it is clear that IMRFO is the best value in F1-4, F6, F9-12, which shows that IMRFO is better than these algorithms in the optimization of these functions. Secondly, the variants of CS have better optimization results, especially in F5 and F7, which have higher accuracy. FACL, as the worst one, has poor optimization results but good stability. Generally speaking, IMRFO has some advantages in function optimization, which verifies the effectiveness and innovation of the algorithm.

Convergence Analysis.
In order to clearly see the optimization and convergence effect of each algorithm in each function, the average convergence diagram of each algorithm is given as shown in Figure 4.
From Figure 4, it can be seen that IMRFO has a good convergence effect and can find the most accurate solution quickly, especially in the functions of F1-4, F6, F11. It can be seen that the flexible search mechanism enables the algorithm to find the best solution quickly in the optimization process.

Ablation Experiment.
In order to verify the validity and feasibility of the three strategies in combination, different combinations of strategies are tested to find the better one. In this paper, the algorithm combining Lévy flight with RWL is denoted MRFO-I, the algorithm combining Lévy flight with the PSO learning strategy is denoted MRFO-II, and the algorithm combining the PSO strategy with RWL is denoted MRFO-III. Besides, the algorithm using Lévy flight alone is denoted MRFO-IV, the algorithm using RWL alone MRFO-V, and the algorithm using the PSO strategy alone MRFO-VI. The experimental parameters are consistent with those above, and the test function dimension is 30. If IMRFO attains the optimal value, the font is bolded. The experimental results are shown in Table 7.
As can be seen from Table 7, IMRFO is the best performer among all the variants, and its criteria on each function are the best. IMRFO's search accuracy is better than that of the other variants, and the difference is significant, especially on F2, F4, and F11. Therefore, the integration of multiple strategies matters, and the validity and feasibility of IMRFO are verified.

Time Complexity Analysis.
Time complexity is an important measure of an algorithm. In order to show an effective improvement, it is necessary to balance the search ability and the time complexity of the algorithm. The basic MRFO consists of only three phases: chain foraging, spiral foraging, and somersault foraging, where chain foraging and spiral foraging are in the same cycle. Set the population size to N, the maximum number of iterations to T, and the dimension to D. Macroscopically, the time complexity of a swarm intelligence algorithm is the product of these quantities, so the time complexity of MRFO can be summarized as O(N · D · T). Set the calculation time of introducing RWL to t1, the calculation time of introducing Lévy flight to t2, and the calculation time of using the two learning factors to t3; other calculations are ignored.
The time complexity of IMRFO can then be summarized as O(N · D · T) plus the per-iteration costs t1, t2, and t3. Therefore, it can be seen that the time complexity of IMRFO has not changed fundamentally; the small increase in computation per iteration can be ignored, and these increases are worthwhile if the optimization capability of the algorithm is effectively improved.

Image Segmentation Experiments
At present, image processing is applied in many fields; images on land have been well studied, but underwater images still hold research value. Therefore, eight underwater images are selected as test images. Following the literature [54], PSO, DPSO, the sparrow search algorithm (SSA) [55], the modified sparrow search algorithm (MSSA) [56], ABC, MRFO, WOA, TLBO, FPA, and IMRFO are used to optimize the K-means algorithm, and the traditional K-means algorithm is also included for image segmentation. MSSA is a newly proposed K-means-based algorithm, and the other algorithms have been successfully applied to image segmentation problems in recent years. Because the K-means clustering algorithm depends strongly on the value of K, an improper choice of K greatly affects the results; the value of K is set to 3 to avoid interference from unrelated factors. The general parameters of each algorithm are a population size of 30 and a maximum of 100 iterations. The segmentation produced by each algorithm is shown in Figures 5 and 6, where the first line represents the original images and each subsequent line represents the segmentation effect of one algorithm.
It is difficult to see the differences between the algorithms' segmentations with the naked eye. Therefore, three commonly used image segmentation metrics, PSNR, SSIM, and FSIM, are selected to measure the quality of each algorithm.
Peak Signal-to-Noise Ratio (PSNR) is mainly used to measure the difference between the segmented image and the original image. The formulas are as follows [57]:

RMSE = sqrt( (1/(M·Q)) · Σ_i Σ_j (I(i,j) − Seg(i,j))² ), (12)
PSNR = 20 · log10(255 / RMSE), (13)

In formulas (12) and (13), RMSE represents the root mean square error of the pixels; M×Q represents the size of the image; I(i,j) represents the pixel gray value of the original image; and Seg(i,j) represents the pixel gray value of the segmented image. The larger the PSNR value, the better the segmented image quality. Generally speaking, a PSNR higher than 40 dB indicates excellent image quality (very close to the original image), while 30-40 dB usually indicates good quality (perceptible but acceptable distortion).
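A direct NumPy sketch of formulas (12)-(13):

```python
import numpy as np

def psnr(original, segmented, peak=255.0):
    """PSNR per formulas (12)-(13): 20 * log10(peak / RMSE)."""
    original = np.asarray(original, dtype=float)
    segmented = np.asarray(segmented, dtype=float)
    rmse = np.sqrt(np.mean((original - segmented) ** 2))
    return 20.0 * np.log10(peak / rmse)
```

Note that identical images give RMSE = 0 and hence an infinite PSNR, so in practice the metric is computed on differing images.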
Structural Similarity (SSIM) is used to measure the similarity between the original image and the segmented image; the larger the SSIM, the better the segmented result. SSIM is defined as follows [58]:

SSIM = ((2·μ_I·μ_seg + c1)(2·σ_{I,seg} + c2)) / ((μ_I² + μ_seg² + c1)(σ_I² + σ_seg² + c2)), (14)

In formula (14), μ_I and μ_seg represent the mean values of the original image and the segmented image; σ_I and σ_seg represent the standard deviations of the original image and the segmented image, respectively; σ_{I,seg} represents the covariance between the original image and the segmented image; and c1, c2 are constants used to ensure stability. SSIM takes values in [0,1]; the larger the value, the smaller the image distortion.
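A single-window (global) sketch of formula (14) follows; the default c1 and c2 values are the common (0.01·255)² and (0.03·255)² choices, which are our assumption rather than the paper's stated constants:

```python
import numpy as np

def ssim_global(img1, img2, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM following formula (14)."""
    x = np.asarray(img1, dtype=float).ravel()
    y = np.asarray(img2, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # variances
    cov = ((x - mx) * (y - my)).mean()        # covariance
    return (((2.0 * mx * my + c1) * (2.0 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Production implementations compute SSIM over sliding windows and average; this global version only illustrates the formula itself.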
The feature similarity index measure (FSIM) measures the feature similarity between the original image and the segmented one, evaluating local structure and providing contrast information. The value range of FSIM is [0,1], and the closer the value is to 1, the better the segmentation effect. FSIM is defined as follows [59]:

FSIM = Σ_{X∈Ω} S_L(X)·PC_m(X) / Σ_{X∈Ω} PC_m(X),

where Ω is the set of all pixel regions of the original image, S_L(X) is the similarity score, PC_m(X) is the phase consistency measure, T1 and T2 are constants, G is the gradient magnitude, E(X) is the response vector magnitude at position X at scale n, ε is a very small value, and A_n(X) is the local amplitude at scale n.
Each algorithm is run 10 times; the averages of the segmentation metrics and the average running time are shown in Table 8.
To the naked eye, the images segmented by IMRFO in Figures 5 and 6 are clearer; some algorithms produce rough segmentation with visible blurring. From Table 8, it can be seen that the segmentation indices of IMRFO have a clear advantage, especially in test01 and test03-08, where two or more indices are optimal. For example, the FSIM index in test07 reaches 0.97 and the SSIM in test08 reaches 0.87, a significant advantage over the other algorithms. When an indicator is not optimal, IMRFO is still close to the optimal value: in test01, the SSIM of WOA is 0.7488, while that of IMRFO is 0.7479; in test06, the PSNR of ABC is 43.3715 and that of IMRFO is 43.1626. Therefore, both the subjective visual effect and the measured results of IMRFO are better than those of the other algorithms, proving a good segmentation effect. This also indirectly confirms the good search performance of IMRFO: it mitigates MRFO's tendency to fall into local optima and K-means' sensitivity to the initial clustering centers, yielding excellent initial cluster centers and further improving segmentation quality. On the other hand, plain K-means has the shortest running time but the worst quality; the other algorithms cost more computation but improve the result noticeably. IMRFO does have a time disadvantage, which is to be expected, as it takes more time to scan the solution space accurately.

Summary of Results
MRFO relies on group behavior to find food, so it lacks flexibility and is prone to falling into local optima, and most existing work does not solve such problems well. In order to improve the search ability of MRFO, an improved manta ray algorithm is presented that uses Lévy flight, random walk learning, and learning factors. The experimental work is summarized as follows: (1) Comparing IMRFO with several basic algorithms on 12 standard test functions shows that the algorithm has clear advantages. (2) Two statistical tests verify the universality of IMRFO and show good search ability.
(3) The convergence of each algorithm on each function is given, and the results show that IMRFO has a good convergence rate.
(4) To further verify the performance of the algorithm, IMRFO is compared with recently proposed variant algorithms, and the results show that IMRFO has an obvious advantage on most functions.
Figure 6: Segmentation effect of tests 05-08.
(5) IMRFO is applied to K-means underwater image segmentation, but the optimization results on some functions and images still need improvement. More work remains to improve its optimization capability.

Conclusion and Future Works
In order to remedy the shortcomings of K-means image segmentation and its vulnerability to local optima, this paper presents a K-means image segmentation method based on IMRFO. IMRFO uses Lévy flight to improve individual search ability, proposes random walk learning to prevent premature convergence of the algorithm, and finally uses learning factors to improve convergence accuracy, thereby improving the overall search capability of the algorithm. The validity and feasibility of IMRFO are verified on 12 test functions, and on 8 underwater image data sets IMRFO shows a good segmentation effect, superior under several indicators to other recently proposed algorithms. Although IMRFO has good segmentation advantages on the eight images, it does not achieve the best value of all three criteria on every image: experimentally, IMRFO is best on all three only in test05, while on the other test pictures it is generally best on two. On the other hand, the running time of each algorithm is large, and the accuracy comes at the expense of time. In the future, we will improve the work from the following three aspects.
(1) Comprehensively improve the three performance indicators, making all three optimal. (2) Balance the time and the search ability of the algorithm to get the best performance in an acceptable time. (3) Apply the algorithm in agricultural, aerospace, medical, and other scenarios so that it can play a suitable role in different environments.

Data Availability
Some data of this study are confidential, so the experimental data cannot be uploaded. These data can be obtained from the corresponding author on request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.