A Quadratic Traversal Algorithm for Shortest Weeding Path Planning for Agricultural Mobile Robots in Cornfields

To improve weeding efficiency and protect farm crops, providing accurate and fast weed-removal guidance to agricultural mobile robots is a topic of utmost importance. Motivated by this need, we propose a time-efficient quadratic traversal algorithm that guides the removal of weeds around recognized corn plants in the field. To recognize the weeds and corn, a Faster R-CNN neural network is implemented for real-time recognition. Then, an excess-green (EXG) characterization is used for grayscale image processing. An improved OTSU (IOTSU) algorithm is proposed to accurately generate and optimize the binary image. Compared with the traditional OTSU algorithm, the IOTSU algorithm effectively speeds up the search and reduces the processing time by compressing the searched grayscale range. Finally, based on the contours of the target plants extracted with the Canny edge-detection operator, the shortest weeding path can be calculated by the proposed quadratic traversal algorithm. The experimental results show that our search success rate reaches 90.0% on the testing data, which ensures accurate selection of the target 2D coordinates in the pixel coordinate system. By transforming each target 2D coordinate point in the pixel coordinate system into a 3D coordinate point in the camera coordinate system with a depth camera, multitarget depth ranging and planning of an optimized weeding path can be achieved.


Introduction
In order to achieve green and pollution-free growth over the whole life cycle of field crops and the sustainable development of agriculture, many researchers focus on fully automatic weeding by weeding robots [1][2][3][4]. The emergence and use of agricultural mobile robots [5][6][7][8][9] not only can replace humans in dull and repetitive agricultural work but also can work efficiently and continuously in different outdoor environments. The use of robotic weeding techniques can also improve production efficiency and effectively relieve manual labor. Therefore, under natural growth conditions, accurate and rapid identification and removal of weeds among field crops play an important role in achieving intelligent field management [10][11][12][13][14].
Until now, many researchers have carried out specific research studies on the removal of weeds in field crops.
Maruyama and Naruse developed a small weeding robot for rice fields [15]. They proposed an approach of moving multiple robots around a field to prevent weed seeds from sprouting, but the small weeding robot can only be used to prevent the germination of weed seeds in rice fields. Nan et al. proposed a machine-vision-based method to locate crops by providing real-time positional information of crop plants for a mechanical intrarow weeding robot [16]. This method is only used to remove weeds in a certain area. Zhang et al. proposed a navigation method for weeding robots based on the smallest univalue segment assimilating nucleus (SUSAN) corner and an improved sequential clustering algorithm [17], which does not involve the removal of weeds in paddy fields. Malavaz et al. developed a general and robust approach for autonomous robot navigation inside a crop by using light detection and ranging (LiDAR) data [18]. The approach can only detect the distribution of weeds by LiDAR and cannot perform ranging. Gokul et al. developed a trainable, automatic robot that helps remove unwanted weeds on agricultural fields, using gesture control of a three-axis robotic arm to do the necessary work [19]. That study only introduces the design of the robot and does not involve the specific identification and location of weeds. Chechliński et al. solved the bottleneck problem of deploying deep network models on low-cost computers, which has laid a research foundation for further transplantation into agricultural robot tasks [20]. The core of that study is the deep network model itself, rather than solving the problem of how to remove weeds in the field. Kanagasingham et al. attempted to integrate GNSS, a compass, and machine vision into a rice-field weeding robot to achieve fully autonomous navigation for the weeding operation [21].
A novel crop-row detection algorithm was developed to extract the four immediate rows spanned by a camera installed at the front of the robot. The algorithm, however, cannot accurately locate weeds within a rice row.

Related Works
For robotic path planning, visual detection methods and other sensor-based detection methods are widely implemented. Choi et al. found that the guidance line extracted from an image of a rice row had precisely guided a robot for weed control in paddy fields and proposed a new guidance-line extraction algorithm to improve the navigation accuracy of weeding robots in paddy fields [22]. Hossain and Ferdous developed a new algorithm based on the bacterial foraging optimization (BFO) technique [23]. They also explored the application of BFO to mobile robot navigation, determining the shortest feasible path from any current position to the target position in an unknown environment with moving obstacles. Contreras-Cruz et al. proposed an evolutionary approach to mobile robot path planning [24]. The proposed approach combines the artificial bee colony algorithm as a local search procedure with the evolutionary programming algorithm to refine the feasible path detected by a set of local procedures. The methods above are mainly intended for robotic path planning, and most of them did not consider that the removal of weeds also requires path planning.
Andújar et al. proposed a method for estimating the volume of weeds by using a depth camera [25]. By reconstructing a 3D point cloud of corn crops infested with weeds in the field and using the Kinect device to estimate volume, the crop status can be determined and accurate estimates of weed height achieved. Bakhshipour and Jafari used the morphological characteristics of crops to evaluate the application of the support vector machine (SVM) and artificial neural networks to weed detection [26]. Through experimental comparison, it was concluded that the SVM-based method is better suited for weed detection. Xu et al. developed a real-time weed positioning and variable-speed herbicide spraying (VRHS) system for row crops [27]. They proposed an improved particle swarm optimization (IPSO) algorithm for segmenting weed images from wild cornfields, which optimizes the traditional particle swarm optimization algorithm to meet the real-time data processing needs of field management. The abovementioned researchers have realized the detection and segmentation of weeds through traditional machine learning methods. However, they failed to provide a way to accurately locate weeds.
In order to reduce route length and operation time, Ya et al. proposed a new path-planning algorithm for static weeding [28]. At the same time, to demonstrate the feasibility and improve the implementation of laser weeding, a prototype robot was built and equipped with machine vision and gimbal-mounted laser pointers. Liu et al. designed an on-site imaging spectrometer system to distinguish crop and weed targets [29]. The use of a limited number of spectral bands can achieve multicategory distinction between weeds, or between crops and weeds. In general, some of the algorithms and systems above have achieved effective experimental results, but none of them have quantitatively analysed and measured the distance between crops and weeds or between weeds and weeds. In addition, path-planning guidance for weed removal that protects the target crops was missing from the above research.
To solve the above problems and provide efficient and accurate weed-removal guidance, this study proposes an efficient quadratic traversal algorithm for the field weeding robot.
Through the combination of deep learning technology and traditional algorithms, a depth camera is used to accurately measure the distances of corn and weeds. A shortest weeding path around the crops is then planned for an efficient weeding process. The proposed method provides a better way to assist intelligent agricultural mobile robots in precisely performing weeding operations [30][31][32][33]. Its implementation can facilitate the intelligent weeding robot in performing precise weeding operations and improve robotic working efficiency. Figure 1 shows an overview of the system framework of agricultural mobile robots for cornfield weeding. The detailed functions of the proposed system are as follows. The depth camera is used to obtain real-time images from the video stream as RGB color images; it is further used to achieve multitarget depth ranging and path planning for an optimized weeding path. The data preprocessing mainly includes target recognition and grayscale image processing. Target recognition is used for corn and weed target recognition and automatic cutting, and the grayscale image processing is based on the EXG method in the RGB color space. While preserving the performance of the algorithm, the improved OTSU algorithm can effectively reduce the calculation cost by compressing the searched grayscale range. Canny-based edge detection realizes the contour extraction of corn and weed targets. The quadratic traversal algorithm includes two parts: the first traversal and the second traversal. The first traversal is used to extract the specified area in the contour edge image. The second traversal is used to determine the corresponding 2D coordinate information extracted on the keyframe color image from the video stream.
Depth ranging and shortest path planning can verify the feasibility of our proposed method and achieve the experimental goals.

Data Preprocessing Method.
Under natural field conditions, in order to achieve automatic recognition of corn and weeds, a Faster R-CNN deep neural network based on the VGG-16 feature extraction network [34][35][36] is trained on the collected corn and weed image data, so as to obtain a deep network model for automatically identifying corn and weed targets. The depth camera (RealSense D435i) is turned on to obtain RGB images and the depth images aligned with them, and keyframe images (640×480) are then extracted from the video stream as RGB images. Each RGB image is compressed to a size of 500×400 and imported into the deep network model for target recognition. The calculated corn-weed target recognition result is further used for subsequent automatic image cutting.
Before grayscale image processing, we need to obtain the number of corn and weed target images in database A and database B. Then, the original length L and width W of each target image after cutting are recorded. After that, the image is zoomed to a target size; here, we use 640×480 pixels. Finally, the zoomed image is center-cropped to 500×400 pixels; the purpose of this is to retain the main information of the target image. At this point, the preparation of the grayscale image processing data is complete.

(Figure 1 modules: (2) data preprocessing completes the preparation of the corn and weed images; (3) the improved OTSU algorithm obtains the optimal segmentation threshold for the corn and weed grayscale images; (4) Canny-based edge detection extracts the contours of corn and weed targets; (5) the quadratic traversal algorithm extracts the corresponding 2D coordinate information; (6) depth ranging and shortest path planning show the experimental effect of the proposed method.)
By observing the cropped images of corn and weed targets, it is not difficult to find the color difference between the corn-weed targets and the soil background, which is easy to distinguish with traditional algorithms, since the values of the target image in the three color channels R, G, and B differ markedly. In this study, by studying linear combinations of the three color components R, G, and B, the green area of the target image can be better extracted. In order to select the best method for extracting the green area of the target image, the EXG and GMR methods in the RGB color space and the Cg method in the YCrCb color space [37][38][39] are implemented; a detailed comparison and analysis are performed in Section 6.1. An excess-green (EXG) characterization is used for the grayscale images; the specific formula is given in equation (1). We obtain the maximum (maxVal) and minimum (minVal) values in the gray array, and equation (2) is used to normalize the EXG gray array for the subsequent optimal-segmentation-threshold selection.
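As an illustration, the EXG grayscale conversion described above can be sketched as follows. EXG = 2G − R − B is the standard excess-green index; the min-max rescaling to [0, 255] used here for equation (2) is our assumption, and the function name is hypothetical:

```python
import numpy as np

def exg_grayscale(rgb):
    """Convert an RGB image to an excess-green (EXG) grayscale image.

    EXG = 2G - R - B per pixel (equation (1)); the min-max rescaling
    to the 0-255 range stands in for equation (2), which normalizes
    the EXG array before threshold selection.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b                        # excess-green index
    min_val, max_val = exg.min(), exg.max()      # minVal / maxVal of the gray array
    gray = (exg - min_val) / (max_val - min_val) * 255.0
    return gray.astype(np.uint8)
```

In practice the normalized gray image is then handed to the IOTSU threshold search described in the next subsection.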
Improved OTSU Algorithm.
The OTSU algorithm, which can automatically calculate the segmentation threshold, has been widely used in the field of agricultural image processing [40,41].
Through leveraging the OTSU features, we propose an improved OTSU (IOTSU) algorithm. While preserving the performance of the algorithm, the IOTSU algorithm effectively improves the search speed and reduces the processing time by compressing the searched grayscale range. The following steps summarize the IOTSU algorithm in detail:

(a) For an imported grayscale image, equations (3) and (4) are used to complete the parameter definition and initialization:

ω1 = N1/(M × N), (3)
ω2 = N2/(M × N), (4)

where ω1 is the ratio of the total number of pixels in the foreground target image to the total number of pixels in the entire image; ω2 is the ratio of the total number of pixels in the background image to the total number of pixels in the entire image; N1 is the total number of pixels whose grayscale value is less than the foreground-background segmentation threshold T; N2 is the total number of pixels whose grayscale value is greater than T; and M × N is the total number of pixels in the entire image.

(b) The relationship expressions (5) and (6) follow from equations (3) and (4) and the inherent relationships of the parameters:

N1 + N2 = M × N, (5)
ω1 + ω2 = 1. (6)

(c) μ is the average gray level of all pixels of the input grayscale image. Then, the average gray level μ1 of the pixels of the foreground target image and the average gray level μ2 of the pixels of the background image can be calculated by

μ1 = (1/N1) Σ_{I(i,j) < T} I(i,j), μ2 = (1/N2) Σ_{I(i,j) ≥ T} I(i,j). (7)

In addition, there is a linear relationship among μ, μ1, and μ2, given in equation (8):

μ = ω1μ1 + ω2μ2. (8)

(d) According to the equations in the above steps, the maximum between-class variance can be obtained from equation (9):

σ² = ω1(μ1 − μ)² + ω2(μ2 − μ)². (9)
By substituting equation (8) into equation (9) and simplifying, the corresponding equivalent equation is given as equation (10):

σ² = ω1ω2(μ1 − μ2)². (10)

(e) Traverse the compressed grayscale interval to obtain the segmentation threshold T for which the between-class variance is maximal; this T is the optimal segmentation threshold. First, we obtain the average gray level μ from step (c). Second, we obtain the minimum grayscale value g_min and the maximum grayscale value g_max of the grayscale image. Finally, within the grayscale interval (g_min, g_max), the golden-section points on the left and right sides of the average gray level are used as the compressed grayscale interval (0.382μ + 0.618g_min, 0.382μ + 0.618g_max).

(f) The equivalent equation (10) is used while traversing the grayscale interval (0.382μ + 0.618g_min, 0.382μ + 0.618g_max) to obtain the segmentation threshold T that maximizes the between-class variance. According to the obtained segmentation threshold T, equation (11) is used to generate the binary image from the imported grayscale image,
where m, as a substitute parameter, represents the maximum value in the grayscale interval and takes the value 255; I(i,j) is the grayscale value of pixel (i, j) of the grayscale image; and P(i,j) is the binary image generated by equation (11).

(g) For the generated binary image, we first perform an area-threshold filtering operation to remove background regions wrongly assigned to the foreground target image. Then, Gaussian filtering is performed to remove noise in the binary image. Finally, morphological operations are performed to smooth the binary image, yielding an optimized binary image that is used for subsequent edge detection.
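A minimal sketch of the IOTSU threshold search in steps (a)-(f) might look as follows. The histogram-based implementation details and the function name are our assumptions, not the authors' code; the key point is that only the golden-section-compressed interval is scanned:

```python
import numpy as np

def iotsu_threshold(gray):
    """Improved OTSU (IOTSU) threshold search over a compressed interval.

    Instead of scanning all 256 gray levels, the search is restricted
    to the golden-section interval around the mean gray level mu:
        (0.382*mu + 0.618*g_min, 0.382*mu + 0.618*g_max).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    mu = (np.arange(256) * hist).sum() / total           # global mean gray level
    g_min, g_max = int(gray.min()), int(gray.max())
    lo = int(0.382 * mu + 0.618 * g_min)                 # compressed interval start
    hi = int(0.382 * mu + 0.618 * g_max)                 # compressed interval end

    best_t, best_var = lo, -1.0
    for t in range(lo, hi + 1):
        w1 = hist[:t + 1].sum() / total                  # foreground weight (I <= t)
        w2 = 1.0 - w1                                    # background weight
        if w1 == 0.0 or w2 == 0.0:
            continue
        mu1 = (np.arange(t + 1) * hist[:t + 1]).sum() / (w1 * total)
        mu2 = (mu - w1 * mu1) / w2                       # from mu = w1*mu1 + w2*mu2
        var = w1 * w2 * (mu1 - mu2) ** 2                 # equation (10)
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The inner update uses equation (10) directly, so only one class mean has to be recomputed per candidate threshold.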

Edge Detection Algorithm Based on Canny.
Complete and effective edge-contour information plays an important role in studying the characteristics of corn and weed targets in the field. It also makes it convenient to accurately select the 2D coordinate points of the corn and weed targets in the subsequent study. Generally, in order to extract edge-contour information that is as complete and effective as possible, an appropriate edge detection algorithm can be selected by studying the extraction effect of each edge-detection operator [42]. In this study, the optimized binary image of the corn and weed targets is taken as the research object. We first compare the Canny operator with the second-order Laplacian operator, and then we compare it with three first-order edge-detection operators: the Sobel, Roberts, and Prewitt operators.
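For illustration, the first-order operators in this comparison can be sketched with a plain convolution over the binary image. This is a minimal NumPy sketch with assumed kernel definitions and function names, not the pipeline actually used in the experiments:

```python
import numpy as np

# First-order edge kernels (horizontal direction); the vertical kernel
# is the transpose. Sobel weights the center row, Prewitt does not.
SOBEL_X   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def gradient_magnitude(img, kx):
    """Gradient magnitude of a 2D image using kernel kx and ky = kx.T."""
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx = (patch * kx).sum()          # horizontal gradient
            gy = (patch * ky).sum()          # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out
```

The Canny operator adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of such a gradient, which is why it produces the cleaner contours reported in Section 6.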

Quadratic Traversal Algorithm.
In order to accurately select the 2D coordinate points of the corn and weed targets in the field crop image, the edge-contour image of the corn and weed targets is taken as the research object. In this study, a quadratic traversal algorithm is proposed for selecting target 2D coordinate points in the pixel coordinate system, and the corresponding traversal search box is designed. The algorithm's main implementation steps are as follows: S1. Define a row step size of p_i pixels, a column step size of q_j pixels, and a traversal search box of size m_i × n_j.
S5. Calculate the 2D coordinates of the upper-left corner (f_Li_min, f_Wj_min) and the lower-right corner (f_Li_max, f_Wj_max) of the traversal search box by using the row position f_Li and the column position f_Wj, as given by the corresponding calculation equation. In order to further describe the core of the quadratic traversal algorithm in detail, as shown in Figure 2, each step of the quadratic traversal algorithm is explained by taking the corn in the field crop image as an example.
Among them, Figure 2(a) shows the preparation processing of the data-preprocessing part, with the corresponding key information marked on the image. Here, the lower ends of Figure 2
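The first traversal described in the steps above can be sketched as a sliding-window search over the edge-contour image. The step sizes, box size, and the edge-pixel condition below are illustrative assumptions, and the function name is hypothetical:

```python
import numpy as np

def first_traversal(edge_img, box_h=100, box_w=100, step_r=100, step_c=100,
                    min_edge_pixels=50):
    """First traversal of the quadratic traversal algorithm (a sketch).

    Slides a box_h x box_w search box over the edge-contour image with
    row/column steps p_i, q_j and keeps every window whose edge-pixel
    count meets the set condition. Returns the upper-left and
    lower-right 2D coordinates of each qualifying box (step S5).
    """
    h, w = edge_img.shape
    hits = []
    for top in range(0, h - box_h + 1, step_r):
        for left in range(0, w - box_w + 1, step_c):
            window = edge_img[top:top + box_h, left:left + box_w]
            if np.count_nonzero(window) >= min_edge_pixels:
                hits.append(((top, left),
                             (top + box_h - 1, left + box_w - 1)))
    return hits
```

The second traversal would then rerun the same search inside each stored window and pick the pixel point closest to the window center, mapping it back to the keyframe image.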

Depth Ranging and Shortest Path Planning.
The accurate selection of the 2D coordinate points of the corn and weed targets is obtained in the cropped image. Then, target distance measurement and shortest-weeding-path planning are required. There are generally four coordinate systems in the field of computer vision: the pixel coordinate system, the imaging coordinate system, the camera coordinate system, and the world coordinate system. Generally, depth-camera ranging is implemented in the camera coordinate system; its core is to transform the target 2D coordinate point in the pixel coordinate system into a 3D coordinate point in the camera coordinate system. The 2D coordinate point (f_x, f_y) lies in the pixel coordinate system, and the corresponding 3D coordinate point (X, Y, Z) in the camera coordinate system must be generated through coordinate-system conversion. The camera's internal parameters are obtained through depth-camera calibration [43,44]. (camera_cx, camera_cy) represents the principal-point coordinates in the imaging coordinate system, used to convert between the pixel coordinate system and the imaging coordinate system; camera_fx and camera_fy represent the focal lengths of the depth camera, used to convert between the imaging coordinate system and the camera coordinate system.
Next, the depth d of the target's 2D coordinate point (f_x, f_y) is obtained from the depth image aligned with the color image, along with the depth scale, that is, the ratio of a depth pixel unit to real-world depth. Finally, the conversion from the pixel coordinate system to the camera coordinate system can be completed directly by equation (17), so that the target 2D coordinate point in the pixel coordinate system is transformed into a 3D coordinate point in the camera coordinate system. By using the distance formula between two points in 3D space, the distance between the corn target and each weed target, as well as the distances between weed targets, can be computed.
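The pixel-to-camera conversion and the two-point distance computation can be sketched as follows. This is the standard pinhole-model deprojection, with parameter names mirroring the text; the exact form of the paper's equation (17) and the depth-scale handling are assumptions:

```python
def pixel_to_camera(fx_pix, fy_pix, d, depth_scale,
                    camera_fx, camera_fy, camera_cx, camera_cy):
    """Deproject a pixel (f_x, f_y) with raw depth d into the camera frame.

    depth_scale converts raw depth units to meters; (camera_cx, camera_cy)
    is the principal point and camera_fx, camera_fy are the focal lengths.
    """
    Z = d * depth_scale                        # metric depth
    X = (fx_pix - camera_cx) * Z / camera_fx   # pixel -> imaging -> camera frame
    Y = (fy_pix - camera_cy) * Z / camera_fy
    return X, Y, Z

def distance_3d(p, q):
    """Euclidean distance between two 3D points in the camera frame."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
```

With every corn and weed target deprojected this way, the pairwise corn-weed and weed-weed distances become the edge weights for the path-planning step.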
Building on the multitarget depth ranging, we take the corn crop target as the starting position in the shortest-weeding-path planning. Using the Dijkstra algorithm for shortest-weeding-path planning achieves excellent experimental results [45]. Figure 3 shows a detailed diagram of the Dijkstra algorithm.
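A minimal sketch of the Dijkstra search on the measured distances might look as follows; the node names and distances in the usage are hypothetical, with the corn target as the start node:

```python
import heapq

def dijkstra(graph, start):
    """Dijkstra shortest-path search for weeding-path planning.

    graph maps each node to {neighbor: distance}; start is the corn
    target, and the edge weights are the measured corn-weed and
    weed-weed distances. Returns the shortest distance from start
    to every reachable node.
    """
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                                   # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

For example, with edges corn-w1 = 1.0 m, corn-w2 = 2.5 m, and w1-w2 = 0.8 m, the planner would reach w2 via w1 (1.8 m) rather than directly.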

Cornfield Mobile Robotics Platform.
GPS is the abbreviation for the Global Positioning System. It is an omnidirectional, all-weather, all-time, high-precision satellite navigation system that provides global users with low-cost, high-precision 3D position, velocity, and precise-timing navigation information. The LiDAR (VLP-16) is responsible for constructing a real-time 2D or 3D navigation map of the cornfield at close range and providing real-time 3D point-cloud information of the surroundings, which further provides precise navigation information for the cornfield mobile robotics platform. The depth camera, an RGB-D camera, has a pair of left and right stereo infrared cameras, an infrared dot-matrix laser emitter, and an RGB camera [46,47]. Its size is 90 mm × 25 mm × 25 mm, suitable for indoor and outdoor environments. The depth camera performs binocular stereo distance measurement based on triangulation: the pair of stereo infrared cameras collects depth information of the target, and the infrared dot-matrix laser emitter projects structured-light features onto the target in the visual scene. The RGB camera collects color image data, and the color video stream can be aligned with the depth video stream. The maximum range is up to 10 meters. It is widely used in research fields such as drones, robots, and AR/VR. The Universal Robots UR5, a collaborative robotic arm, has six rotary joints (degrees of freedom) and can perform automated tasks with a maximum payload of 5 kg; its effective working radius is up to 850 mm. The robotic mobile base (Husky A200) is used as the mobile carrier of the cornfield mobile robotics platform. It uses four-wheel drive; the maximum payload is 75 kg, and the maximum speed reaches 1 m/s. As a high-performance processing unit, the workstation is essentially an industrial personal computer. On the one hand, it is used to deploy the algorithm program we designed.
On the other hand, it is used to communicate with the abovementioned key devices.
On a workstation running the Windows 10 operating system, the supporting software development kit (Intel RealSense SDK 2.0) for the RealSense D435i is run, and the image-acquisition and target-ranging software is compiled and generated. The image-acquisition and target-ranging software includes the driver of the RealSense D435i depth camera, which allows the depth camera to collect image depth information and RGB information at a rate of not less than 20 frames per second (fps). At the same time, the collected images are processed to 640 × 480 pixels. The purpose is to measure the distances between corn and weeds and between weeds and weeds based on the currently acquired image depth information and then plan the shortest weeding path. Figure 4 shows the specific details.

Data Collection and Preprocessing.
The cornfield data for this experiment were collected in the agricultural experimental field of Anhui Agricultural University, called "Nong Cui Yuan." According to the time of seeding and growth of the corn, our data collection days ran from May 1 to May 4, 2019. To ensure clear image collection, our collection times were restricted to periods of strong visibility: 9:00 AM to 12:00 PM and 2:00 PM to 5:00 PM. During collection, images of corn and weeds under natural conditions were collected from three directions, head-up, top-view, and 45-degree oblique, and the collection steps were strictly followed. The image data acquisition equipment was a high-definition digital camera (Canon EOS 6D Mark II). A total of 3906 images of corn with weeds were collected. All images are in JPEG format with a resolution of 5472×3648 pixels. Some of the collected corn and weed images are shown in Figure 5; Figures 5(a)-5(c) show different numbers of corn plants and weeds in the agricultural experimental field.
To improve computational time efficiency while preserving the useful information in the images, the images are compressed to a resolution of 500×400, a value chosen from our experience. In this dataset, 3500 samples were randomly set as training data, and the remaining 406 samples as testing data, roughly a 9:1 split between training and testing sets. The dataset was manually labeled, and the corn and weeds in the images were annotated using the minimum circumscribed rectangle method.

Results of Data Preprocessing.
In data preprocessing, Figure 6 shows the results of corn and weed recognition as well as automatic cutting described in Section 4.1. Figure 7 shows the grayscale images produced by the green-area extraction methods of Section 4.1: the first column of Figure 7 shows the grayscale results of the EXG method, the second column those of the GMR method, and the third column those of the Cg method.

Evaluation of the Extraction Methods for Greenness Detection.
To better perform grayscale image processing, we select the EXG method for extracting the green area of the target image and use it to extract the green areas of the corn and weed targets in the field crop image. In this study, the cropped corn and weed targets are taken as the research object. Only the grayscale generation method is changed in order to produce multiple sets of binary-image experimental data. We compared the green-area extraction effect of the EXG and GMR methods in the RGB color space and the Cg method in the YCrCb color space. The detailed experimental results are shown in Figure 8.
In this study, we introduced a standard image as a benchmark (Figure 8). The traditional OTSU algorithm, with only the grayscale-generation method changed, produces multiple sets of binary-image experimental data for comparison. Three indicators, the ratio, the variance, and the standard deviation, are used to measure the pros and cons of the EXG, GMR, and Cg methods. Here, the ratio is defined as the number of pixels in the black area of the binary image generated by the unoptimized OTSU algorithm divided by the number of pixels in the black area of the standard image. Equation (16) gives the variance:

S² = (1/N) Σ_{i=1}^{N} (A_i − Ā)², (16)

The detailed comparison results are shown in Table 1,
where A_i is the i-th ratio, Ā is the average value of the ratios, and N is the number of experimental images participating in the comparison.
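The three indicators can be sketched as follows; the function name and the example pixel counts in the usage are hypothetical:

```python
def ratio_variance_std(black_counts, standard_black_count):
    """Compute the ratio, variance, and standard-deviation indicators.

    Each ratio A_i is the black-pixel count of the i-th generated
    binary image divided by the black-pixel count of the standard
    image; the variance follows equation (16) over the N ratios.
    """
    ratios = [c / standard_black_count for c in black_counts]
    n = len(ratios)
    mean = sum(ratios) / n                       # average ratio (A-bar)
    var = sum((a - mean) ** 2 for a in ratios) / n
    return ratios, var, var ** 0.5               # std = sqrt(variance)
```

A method whose ratios cluster near 1 with a small standard deviation matches the standard image most consistently, which is the criterion applied in Table 1.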
Combining the results of Figure 8 and Table 1, it can be seen that the average ratio of the Cg method in the YCrCb color space is the smallest. Because of its obvious errors in extracting the green area of the weed targets, a large area is misclassified, which also leads to an abnormal average ratio. The EXG method in the RGB color space can effectively suppress the interference of environmental factors such as soil background, dry grass, and shadow; it is particularly prominent in extracting the green areas of the corn and weed targets. It is suitable for processing field crop images under natural conditions and can obtain an ideal grayscale image effect.

Figure 4: Schematic design of the overall structure of the weeding robot for corn-weed detection. GPS is responsible for providing outdoor navigation information for agricultural mobile robots. LiDAR is responsible for further providing precise navigation information for the robotic mobile base. The depth camera is responsible for multitarget depth ranging and shortest-path planning. The Universal Robots arm is responsible for carrying weeding actuators for multitask weeding operations. The robotic mobile base is responsible for autonomous movement in the cornfield as a mobile carrier. The workstation is responsible for deploying the algorithm program and communicating with the abovementioned key devices.

Results of Edge Detection.
By comparing the experimental results of edge detection, we find that the edge-detection algorithm based on the Canny operator performs best [48]. The specific effect is shown in Figure 9. Through the combination of the first traversal and the second traversal, it is possible to accurately select the 2D coordinate points of the target on the keyframe image. The detailed process is shown in Figure 10. First, we cut out the target-recognition results from the keyframe image and prepare the data after a series of zooming processes. Second, the EXG method in the RGB color space is used to complete the grayscale processing of the image. For the grayscale image, the IOTSU algorithm is used to generate and optimize the binary image, and then the Canny edge-detection algorithm is used to process the binary image and extract the target edge contours. Finally, in the target edge-contour image, the first traversal of the quadratic traversal algorithm uses the traversal search box to find the pixel areas that meet the set conditions, then cuts them out and stores them locally for the second traversal. The second traversal takes the output of the first traversal as its input, uses another traversal search box to find the pixel areas that meet the set conditions, and selects the pixel point as close as possible to the center position. Then, according to equation (14), the selected pixel point is mapped back to the corresponding position of the target in the keyframe image, realizing the selection of the 2D coordinate points. For the design of the traversal search box in the first traversal, this study is based on the size of the target edge-contour image (500 × 400) processed by edge detection, such that the traversal search box can cover the target contour edge image as completely as possible.
Therefore, traversal search boxes with sizes of 50 × 40, 50 × 100, and 100 × 100 were designed in order to select the appropriate size. We took thirty corn and weed target contour images as the total sample and counted the numbers of successful and failed samples for each traversal search box. Table 2 presents the performance of the various traversal-search-box sizes. The success rate of the 100 × 100 traversal search box reaches 90.0%, which meets the practical requirements of the experiment.

Evaluation of Depth Ranging and Shortest Path Planning.
In order to verify the feasibility of our proposed research method, we conducted a systematic test in the agricultural experimental field, called "Nong Cui Yuan," of Anhui Agricultural University. On the one hand, we started from the keyframe images in the colored image video stream captured in real time by the depth camera.
Through the method proposed in this study, the 2D coordinate points of the corn and weed targets in the field crop image can be accurately selected. Finally, multitarget depth ranging and shortest weeding path planning were achieved, and the system test produced favorable experimental results. Figure 11 shows the multitarget ranging results and the shortest weeding path planning results in different scenarios; the results in Figure 11 are highlighted for easier observation.
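The transformation from a selected 2D pixel coordinate plus its measured depth to a 3D point in the camera coordinate system follows the standard pinhole camera model. The sketch below is illustrative: the intrinsic parameters fx, fy, cx, cy are assumed placeholder values, not calibrated values from the depth camera used in this study.

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a measured depth (in meters) into
    the camera coordinate system using the pinhole model.
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Illustrative intrinsics for a 640x480 depth camera (assumed, not calibrated).
fx = fy = 600.0
cx, cy = 320.0, 240.0
# A pixel at the principal point maps to (0, 0, depth) on the optical axis.
print(pixel_to_camera(320, 240, 1.5, fx, fy, cx, cy))  # (0.0, 0.0, 1.5)
```

Applying this per target yields the 3D points needed for multitarget depth ranging without handling a full point cloud.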
On the other hand, take a single corn plant and multiple weeds in one scenario as an example. Figure 12 shows the detailed processing flow. We collected distance data by manual measurement in the agricultural experimental field. The distance between the corn target and each weed target, as well as the distances between the weed targets, in the corresponding field of view of the depth camera were recorded. Here, the distance data are also in meters, and the corresponding distance statistics between the targets are shown in Figure 13. At the same time, according to the recorded distance data and the idea of Dijkstra's shortest path algorithm, we manually calculated the corresponding shortest weeding path; the specific calculation process is shown in Figure 13. By comparing the extracted shortest weeding path (Figure 12(a)) with the manually calculated shortest weeding path (Figure 13(b)), it is not difficult to find that the shortest weeding paths of the two methods are consistent. Thus, the feasibility and accuracy of our research method are further illustrated.
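The manual calculation follows Dijkstra's idea of relaxing the shortest known distance to each target. A minimal sketch is given below; the node names and inter-target distances (in meters) are hypothetical examples, not the measured values from the experiment.

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from `start` over a weighted graph given as
    {node: {neighbour: distance_in_meters}}."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # relax edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical distances between one corn plant and three weeds (meters).
graph = {
    "corn": {"w1": 0.4, "w2": 0.9},
    "w1":   {"corn": 0.4, "w2": 0.3, "w3": 0.8},
    "w2":   {"corn": 0.9, "w1": 0.3, "w3": 0.5},
    "w3":   {"w1": 0.8, "w2": 0.5},
}
print(dijkstra(graph, "corn"))
```

Here the robot would reach weed w2 via w1 (0.7 m) rather than directly (0.9 m), mirroring the manual shortest-path calculation.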

Performance Comparison of Image Segmentation.
On the basis of obtaining an ideal grayscale image, the next step is to generate and optimize a binary image from the grayscale image. In this study, the PSO algorithm [49,50], the traditional OTSU algorithm [51], and the proposed IOTSU algorithm were each used to segment the corn and weed targets in the field crop image to generate a binary image. Taking the original image of the corn and weed targets in Figure 9 as an example, we studied the image segmentation effects of the three algorithms. It is obvious that the segmentation effect of the IOTSU algorithm is better than those of the traditional OTSU algorithm and the PSO algorithm, the latter of which loses a lot of important information.
In order to further evaluate the performance of the segmentation algorithms, three aspects, segmentation threshold, running time, and diversity rate, are evaluated, where the diversity rate can be calculated by equation (17):

diversity rate = |P_T − O_T| / O_T × 100%, (17)

where P_T is the average segmentation threshold of the segmentation algorithm, and O_T represents the average segmentation threshold of the reference OTSU algorithm. Table 3 lists the performance of the three segmentation algorithms, where the segmentation threshold and running time are average values. As shown in Table 3, the PSO algorithm has the fastest processing speed of the three and can meet real-time processing needs, but its segmentation performance is poor, its diversity rate is large, and it is unable to find the best segmentation threshold. Although the traditional OTSU algorithm can find the optimal segmentation threshold, its high time cost cannot meet the real-time requirements because it needs to traverse the entire grayscale space to find the optimal threshold. Compared to the traditional OTSU algorithm, the IOTSU algorithm ensures the segmentation performance and runs at a faster speed while producing a consistent segmentation threshold; its processing time is close to that of the PSO algorithm, so it can meet the needs of real-time image processing.
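The speed advantage of IOTSU comes from compressing the grayscale search range before running the OTSU criterion. The sketch below illustrates this idea; the specific compression rule used here (dropping the extreme tails of the cumulative histogram) is an assumption for illustration, not the paper's exact rule.

```python
import numpy as np

def otsu_threshold(gray, lo=0, hi=255):
    """Standard OTSU: maximize between-class variance over [lo, hi]."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = lo, -1.0
    for t in range(lo, hi + 1):
        w0 = hist[:t + 1].sum() / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue                     # one class empty: skip
        mu0 = (hist[:t + 1] * np.arange(t + 1)).sum() / hist[:t + 1].sum()
        mu1 = (hist[t + 1:] * np.arange(t + 1, 256)).sum() / hist[t + 1:].sum()
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def iotsu_threshold(gray, tail=0.01):
    """Assumed IOTSU-style compression: trim the 1% tails of the
    cumulative histogram, then run OTSU only inside the remaining
    interval, which shrinks the threshold search space."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    cdf = hist.cumsum() / hist.sum()
    lo = int(np.searchsorted(cdf, tail))
    hi = int(np.searchsorted(cdf, 1.0 - tail))
    return otsu_threshold(gray, lo, max(lo, hi))
```

Because the optimal threshold of a well-exposed field image rarely lies in the extreme gray levels, the compressed search returns the same threshold as the full search in far fewer iterations.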

Performance Comparison of Path Planning.
In order to further evaluate the performance of the quadratic traversal algorithm, two aspects, running time and diversity rate, are evaluated, where the diversity rate can be calculated by equation (18):

diversity rate = |E_d − M_d| / M_d × 100%, (18)

where E_d is the sum of the algorithm's estimated distances of the multiple weed targets, and M_d is the sum of the manually measured true distances of the multiple weed targets. Table 4 lists the performance of the three different algorithms. As shown in Table 4, although the genetic algorithm has a faster processing speed than the path planning algorithm and the quadratic traversal algorithm, it can only run when the positions of the targets are already known. Therefore, the genetic algorithm cannot meet the real-time processing requirements of agricultural weeding robots. At the same time, the running times of the path planning algorithm and the quadratic traversal algorithm are close. To further illustrate the superiority of the proposed quadratic traversal algorithm, we compared the diversity rates of the three algorithms for the same number of targets. Our method has a lower diversity rate than both the genetic algorithm and the path planning algorithm. Therefore, the performance of our proposed method can basically meet the real-time processing requirements of agricultural weeding robots.

Conclusions
In this study, the task of one-pass weed removal operations in cornfields is studied. The major innovations and contributions of this study are as follows: (1) The Faster R-CNN deep network model based on the VGG-16 feature extraction network is used to realize real-time target recognition and complete automatic cropping and classification of targets. By returning the predicted parameters of the bounding-box regression and the color of the predicted bounding box, the target category in the image can be accurately determined, and we realized the data connection between deep learning and traditional algorithms. (2) Our improved OTSU algorithm achieves the generation and optimization of binary images. Compared with the traditional OTSU algorithm, it compresses the search grayscale interval, which effectively improves the search speed and makes the proposed path planning calculation time efficient. It meets real-time data processing requirements, which allows our method to be further applied to mobile agricultural weeding robots in the field. (3) A quadratic traversal algorithm is proposed for selecting the target 2D coordinate point. A corresponding traversal search box is designed; the search success rate of the traversal search box with a size of 100 × 100 reaches 90.0% on the testing data. By transforming the target 2D coordinate point in the pixel coordinate system into the 3D coordinate point in the camera coordinate system, the depth camera can achieve multitarget depth ranging and shortest weeding path planning while avoiding complex point cloud information; this effectively saves computing resources and avoids redundant information. (4) The application and implementation of our methods can assist intelligent weeding robots in carrying out precise weeding operations and improve their efficiency.
Meanwhile, it has important practical significance for promoting the application of intelligent weeding robots in the field.

Future Works
Our future work includes two aspects: (1) quantitative analysis of the robot's power consumption and (2) consideration of the influence of outdoor dynamic environmental factors.
(1) Our optimized weeding path planning has the potential to reduce the robot's power consumption.
In order to prove this qualitative conclusion, we will carry out quantitative analysis experiments in our future work. (2) Because the depth camera is affected by different outdoor environmental factors, we will design corresponding experiments for various environmental conditions in our future work. At the same time, we will fuse multiple sensors to develop more robust algorithms and overcome the weaknesses of the currently used depth camera.
Data Availability
The cornfield data of this experiment were collected in the agricultural experimental field, called "Nong Cui Yuan," of Anhui Agricultural University and are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.