An Automatic Machine Vision-Guided System for the Propagation of Potato Test-Tube Plantlets

In manually propagating potato test-tube plantlets (PTTPs), the plantlet is usually grasped and cut at the node between the cotyledon and the stem, which is hard to locate and is easily damaged by the gripper. Replacing manual operation with an agricultural intelligent robot will greatly improve the efficiency and quality of PTTP propagation. An automatic machine vision-guided system for the propagation of PTTPs was developed and tested. In this paper, the workflow of the vision system was designed and the image acquisition device was built. The image processing algorithm was then integrated with the image acquisition device to construct an automatic PTTP propagation vision system. An image processing system for locating a node point was employed to determine a suitable operation point on the stem, and a binocular stereo vision algorithm was applied to compute the 3D coordinates of node points. Finally, the kinematics equation of the three-axis parallel manipulator was established, and the three-dimensional coordinates of the nodes were transformed into the corresponding parameters X, Y, and Z of the three driving sliders of the manipulator. The experimental results indicated that, for plantlets in the medial expansion period (22 days), the automatic vision system located the plantlets in an appropriate position with a success rate of 98.4%, a processing time of 0.68 s per 3 plants, and a location error of approximately 1 mm.


Introduction
Potato is one of the most important crops in the world. The technology of propagating virus-free potato test-tube plantlets has solved the problem of yield and quality degradation caused by virus infection, increasing potato yield by more than 30%. The propagation of virus-free test-tube potato plantlets is the basic link in the virus-free seed potato breeding system. In the breeding of potato test-tube plantlets, the operator cuts leaf-bearing stem segments from the test-tube plantlets in the mother culture dish and inserts them into the subculture dish with the medium for rapid propagation. After several generations of propagation, the number of test-tube plantlets can be multiplied, which saves both time and cost for potato breeding [1]. The propagation of virus-free potato plantlets is a high-tech bioapplication technology, which requires investment in propagation facilities and equipment, as well as the training of skilled workers for the production of culture medium and inoculation [2]. However, the propagation of potato test-tube plantlets is a labor-intensive technique, with consequent costs. These aspects have encouraged the development of automated machines able to increase productivity and rooting success rates while reducing costs.
Research on automation technology and equipment for test-tube plantlets has been carried out at home and abroad. The first type is the propagation machine for test-tube plantlets. For example, Qu et al. developed a shearing method for wide-mouth bottles of potato plantlets, which was employed to gather, shear, and disperse the whole bottle of potato plantlets taken out of the plantlet-fetching mechanism, but the method caused serious damage to the plantlets [3]. Li et al. designed a seedling-picking machine for potato test-tube plantlets, which allows an entire bottle to be picked without damage at one time, based on the characteristics of the potato test-tube plantlets and their culture environment [4]. Kai et al. designed a picking hand based on negative-pressure adsorption for gripping the stems of strip tissue culture seedlings. The force-positioning method can effectively reduce the damage to the plantlets.
This method simplifies the plantlet-picking procedure and improves the survival rate of plantlets [5]. The second type is the test-tube plantlet transplanting robot, which has better flexibility and precision than the transplanting machine and enjoys a better research prospect. A tissue culture system based on machine vision was designed by Toshiba Corporation of Japan. This system locates and cuts nodes through a 3-axis perpendicular detection arm and a transplant arm with high precision and good robustness, but its shortcoming is the low efficiency of cutting and transplanting [6]. Kirin Brewery, a Japanese company, developed the TOMOCA system, which breaks clumps of tissue culture plants into 26 pieces at a time and transplants them into new culture flasks, 9 pieces per flask, achieving an efficiency 10 times higher than manual work [7]. Yang et al. developed a transplanting robot system for strip-shaped test-tube plantlets. The system uses embedded machine vision technology to identify and locate the plantlet nodes and to grasp, cut, and transplant the plantlets with a self-developed 5-DOF articulated robot arm. The system performs well in grasping and shearing plantlets, but not in the accuracy of motion positioning [8]. Owing to its compact structure, high stiffness-to-weight ratio, and good dynamic performance, the parallel mechanism (PM) is well suited to the propagation of test-tube plantlets [9]. Xie et al. designed an economical self-seeking parallel transplanting robot with high precision, which automatically photographs the breeding pot, calculates the center position of each plantlet through image analysis, and then transforms the expected motion trajectory into the generalized position coordinates of the driving joints via kinematic analysis and trajectory planning, finally realizing intelligent computer control within the workspace [10].
To improve the automation and efficiency of plug plantlet transplanting in the greenhouse, a high-speed plug plantlet transplanting robot was designed using a 2-DOF parallel translation mechanism with a pneumatic manipulator. According to the coordinates of healthy plantlets, the manipulator was driven by the mechanism to fetch the plug plantlets along a planned path [11].
The key techniques in vision-based control include vision information acquisition strategies, target recognition algorithms, and eye-hand coordination methods [12]. To achieve automatic transplanting, the propagation robot needs to accurately identify the potato plantlets and calculate the spatial coordinates of the shear points. Yang et al. proposed a method to identify the position of single plantlets from the bottom of the seedling bottle according to the morphological characteristics of the plantlets in the culture bottle, which satisfies the precision requirements of seedling transplanting [13]. A vision-guided grasping system for Phalaenopsis tissue culture plantlets (PTCPs) was developed and tested. To mimic the manual transplantation operation, that study developed an image processing algorithm based on a binocular stereo vision system to segment the PTCP image into its constituent leaf and root components and to locate a suitable grasping position on the PTCP body. The locating algorithm was then integrated with a robotic gripper to construct an automatic Phalaenopsis plantlet grasping system [14]. An innovative multimodal vision system combining optical stereo vision and thermal imaging was developed, together with algorithms for detecting teats and determining their three-dimensional positions [15]. A rapid and reliable method based on binocular stereo vision was developed to effectively recognize and locate litchi in the natural environment. A litchi recognition algorithm based on K-means clustering was presented to separate litchi from leaves, branches, and background. The experimental results showed that the proposed method was robust against varying illumination and recognized litchi precisely, with the highest average recognition rate among the compared methods [16]. He et al.
proposed a machine vision algorithm that can restore plantlet leaves with ellipse fitting and extract parameters for the automatic grafting of robots. The experiment showed that the algorithm can solve the problem of leaf occlusion, and the success rate of plantlet identification and localization reached 97.5%, which satisfied the requirements of the automatic operation of grafting robots [17]. Yang et al. developed a highly real-time vision system, which could comprehensively evaluate the straightness and height of the transplanted plantlets and select plantlets that satisfied the transplanting requirements [18]. Zhang et al. proposed a vision system for the plantlet transplanting of cucurbits, taking into consideration the actual working conditions of their self-made robot system. In this system, the cross-sectional diameter could be quickly extracted from the color image of the crop plantlets, and the spatial location of growth points could then be detected automatically [19]. Luo et al. developed a method for acquiring spatial information from grape clusters based on binocular stereo vision. This method includes four steps: calibrating the binocular cameras and rectifying the images, detecting the cutting points on the peduncle and the centers of the grape berries, extracting the three-dimensional spatial coordinates of the points detected in step 2, and calculating the bounding volume of the grape clusters. The demonstrated performance of this method indicated that it could be used on harvesting robots [20].
Combined with the actual working conditions of a self-made parallel robot system, this paper focuses mainly on locating the spatial coordinates of the cutting points on the stem of the test-tube plantlet for the end effector and presents a method for acquiring spatial information from the test-tube plantlet based on binocular stereo vision. This method determines the 3D coordinates of the cutting point on the stem for the end effector so that the robot can smoothly grasp the plantlet stem and cut it off. The research on the robot vision system is of great significance to the automation and intelligence of potato plantlet breeding. The detailed objectives of the current study can be summarized as follows. Section 2.1 describes the experimental materials used in the research. Section 2.2 introduces the test platform of the image processing algorithm. Section 2.3 introduces the hardware structure and workflow of the vision module in the self-made propagation robot. Section 2.4 introduces the software of the vision module system developed on the LabVIEW platform. Sections 2.5 and 2.6 present the algorithms of image recognition and node location. Section 2.7 introduces the method of transforming the spatial coordinates of the nodes acquired by the vision system into the motion parameters of the 3-PUU parallel manipulator.

Testing Materials.
The potato test-tube plantlets were provided by the College of Horticulture and Forestry of Huazhong Agricultural University. The virus-free potato plantlets were aseptically cultured in a culture box, with MS medium placed at the bottom of the box. The size of the culture box is 80 mm × 80 mm, and each box holds 9 potato plantlets in 3 rows and 3 columns. The age of the virus-free plantlets is 20-25 days, the height of the plantlets is 7-10 cm, and the diameter of the stem is 0.6-0.8 mm. When a tissue-cultured plantlet had grown 5 or 6 leaves, it was cut into segments, each carrying a leaf and an axillary bud, which were then placed into the culture medium to grow into new test-tube plantlets.

Testing Equipment.
The hardware environment for algorithm development and testing was a general-purpose computer (Intel Core i7-6700HQ 2.60 GHz, 16 GB DDR3, 120 GB SSD, NVIDIA GeForce 960M 2 GB) running Windows 10 Professional, MATLAB R2017b, and LabVIEW 2018. The image acquisition module used an STM32F103ZET6 microcontroller to connect the SYUE MS4101V1 binocular stereo camera module (1920 × 1080 at 30 fps, 3 mm focal length, 130° wide-angle, distortion-free). The image processing algorithm was implemented on the MATLAB R2017b platform. The system software was developed in LabVIEW 2018 for connecting, coordinating, and controlling the hardware and software modules; it used the MATLAB Script node of LabVIEW to call the MATLAB image processing program. Figure 1 is a schematic diagram showing the installation position and structure of the vision system in the self-made parallel robot system. The vision system mainly consisted of a binocular stereo camera, a background plate, and a linear conveyor, and it used natural light instead of special lighting. The culture box was carried to the specified position by the lateral linear conveyor and then fixed. The background plate was inserted between two rows of potato plantlets by a digital servo, so that the binocular stereo camera could take a color image pair of a single line of potato plantlets. The workflow of the robot vision system is shown in Figure 2 and described in detail below.

Vision System Architecture.
(a) System Initialization. The system controlled the horizontal linear conveyor and drove the culture box to the specified position where the background plate could be inserted exactly between two rows of potato plantlets.
(b) Binocular Image Acquisition. The system controlled the digital servo to insert the background plate, and then the binocular stereo camera acquired the image pair.
(c) Running the Image Processing Algorithm. The vision system software invoked an image processing algorithm which identified and located the shear points and grip points of the potato plantlets and converted them into spatial coordinates for transmission to the controller of the end effector.
(d) Image Acquisition and Processing of the Next Line of Potato Plantlets. After a row of potato plantlets had been propagated and the conveyor had moved a fixed distance, the background plate was inserted between the next two rows of potato plantlets, and then steps (b)-(c) were repeated until all the potato plantlets had been processed.

Integrated Development of Vision System Software.
Based on LabVIEW, the upper computer program of the vision system of the propagation robot was designed. LabVIEW communicated with the STM32 development board through serial communication and sent commands to control the coordinated operation of each electrical unit. When the potato plantlets reached the designated position, the vision system software controlled the binocular stereo camera to acquire the image pair and sent the image parameters to the MATLAB program it called. After the MATLAB program finished processing the image, it extracted the point coordinate parameters and sent them back to LabVIEW. When the coordinate data were received, the parallel robot and the end effector completed the subsequent plantlet treatment. A slave computer was designed based on the STM32 development board. The electrical units included a stepping motor and its driver board for the conveyor moving the Petri dish, a digital servo for moving the background plate, a parallel mechanical arm (including three stepping motors), and a binocular stereo camera. A 12 V regulated DC power supply powered each unit. After receiving the motion parameters, the STM32 development board controlled each motor to rotate by a given angle by outputting the corresponding pulse signals to the other components. Figure 3 shows the image segmentation process of the potato test-tube plantlets. Inserting a black background plate between the rows of potato plantlets weakened the background information and improved the efficiency and quality of segmenting the potato plantlet image. The segmentation of the potato plantlets included three parts. The cotyledon image and the stem image were superimposed, the small connected domains were removed, and the holes were filled by morphological operations to obtain the complete segmentation result of the potato plantlets.
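The cleanup step described above removes small connected domains from the binary mask before hole filling. The paper implements this in MATLAB; the following is a minimal Python sketch of the same idea, using a breadth-first search over 8-connected components (the function name and size threshold are illustrative, not from the paper):

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_size):
    """Remove 8-connected foreground components smaller than min_size.

    mask: 2D array of 0/1 (binary segmentation result).
    Returns a cleaned copy containing only the large components.
    """
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask)
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # BFS to collect one connected component
                comp, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                q.append((ny, nx))
                # keep only components of sufficient size
                if len(comp) >= min_size:
                    for cy, cx in comp:
                        out[cy, cx] = 1
    return out
```

In practice the threshold would be tuned so that noise speckles are discarded while thin stem fragments survive.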

Node Point Search.
The node was the part that connected the cotyledon and the stem of the potato test-tube plantlet. The identification and positioning of nodes were a prerequisite for the robotic clamping and cutting operations in propagation. Figure 4 shows the schematic diagram of the node identification principle. The node search included the following steps.
Refine the binary image of the potato plantlets to obtain a skeleton image with a single-pixel width retaining the shape of the potato plantlets.
Search for the junction points and endpoints of the skeleton. As shown in Figure 4(a), the number of skeleton pixels in the 8-neighborhood of a junction point is not less than 3, while an endpoint has exactly 1 skeleton pixel in its 8-neighborhood.
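The 8-neighborhood rule above can be sketched in a few lines. This is a Python illustration of the counting criterion (the paper's implementation is in MATLAB); note that pixels adjacent to a true junction may also satisfy the count and are therefore only junction candidates:

```python
import numpy as np

def classify_skeleton_points(skel):
    """Classify skeleton pixels by their 8-neighbor count.

    skel: 2D 0/1 array (single-pixel-wide skeleton).
    Returns (junctions, endpoints) as lists of (row, col).
    >= 3 skeleton neighbors -> junction candidate;
    exactly 1 neighbor      -> endpoint.
    """
    padded = np.pad(skel, 1)  # zero border so 3x3 windows never overrun
    junctions, endpoints = [], []
    for y, x in zip(*np.nonzero(skel)):
        # sum the 3x3 window around the pixel, minus the pixel itself
        n = padded[y:y + 3, x:x + 3].sum() - 1
        if n >= 3:
            junctions.append((y, x))
        elif n == 1:
            endpoints.append((y, x))
    return junctions, endpoints
```

For a cross-shaped skeleton, the center is reported as a junction and the four arm tips as endpoints.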
The junction points and endpoints in the image were marked according to these counts. Only some of the discovered junction points were nodes.
Filter the nodes out of the junction points. According to the morphological characteristics of the potato plantlets, the nodes all lie on the main stem, so the nodes could be selected on the skeleton of the main stem in terms of this feature. As shown in Figure 4(a), a 3 × 3 rectangular template operator was created whose center value is 0 and whose other points carry the direction values 1-8. A skeleton endpoint was used as the starting point, a junction point as the terminal point, and the section between them was called a skeleton branch. Starting from an endpoint, the convolution of the operator with the current coordinates yields the 8-neighbor pixel values and also indicates the direction from the center point to each 8-neighbor pixel, namely, the direction in which the operator moves next. Each time the operator moved, the pixel at the current coordinate was set to 0. Finally, all skeleton branches were zeroed except the main stem, and the node pixel coordinates were then obtained, as marked in Figure 4(b).
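The branch walk described above can be sketched as follows: starting from an endpoint, erase the current pixel and step to the next skeleton pixel until a junction is reached. This is a simplified Python stand-in for the paper's moving 3 × 3 direction operator (the function interface is an assumption, not from the paper):

```python
import numpy as np

def walk_branch(skel, junctions, start):
    """Walk a skeleton branch from an endpoint, erasing visited pixels,
    until a junction pixel is reached.

    skel: 2D 0/1 array; junctions: iterable of (row, col) junction
    pixels; start: (row, col) endpoint. Returns the junction reached
    (the candidate node for that branch), or None if none is met.
    """
    skel = skel.copy()
    jset = set(junctions)
    y, x = start
    while True:
        skel[y, x] = 0          # erase current pixel (the operator moves on)
        nxt = None
        for dy in (-1, 0, 1):   # inspect the 8-neighborhood
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if ((dy or dx) and 0 <= ny < skel.shape[0]
                        and 0 <= nx < skel.shape[1] and skel[ny, nx]):
                    if (ny, nx) in jset:
                        return (ny, nx)   # reached a junction: candidate node
                    nxt = (ny, nx)
        if nxt is None:
            return None         # branch ended without meeting a junction
        y, x = nxt
```

Running this from every endpoint zeroes the cotyledon branches and leaves the main stem, exactly as in the pruning step described in the text.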

Derivation of 3D Coordinates of Junction Point
2.6.1. Hand-To-Eye Calibration. In this paper, the visual system model of the robot was established with a hand-to-eye configuration: the camera was installed in a fixed position, separate from the robot. Firstly, the mathematical relation between the vision system and the robot system needed to be established, that is, hand-eye calibration was required. The objective of the hand-to-eye calibration procedure was to determine the transformation between the 2D images perceived by the two cameras and the 3D real-world plantlet information. Firstly, the intrinsic parameters and the extrinsic parameters were calibrated using the camera model proposed by Zhang [21]. Secondly, the following hand-eye calibration method was used to determine the hand-eye relation. The hand-eye calibration schematic is shown in Figure 5. Let the base coordinate system of the arm be O_base-xyz and the coordinate system of the end of the arm be O_tool-xyz.
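Once the hand-eye relation is known, applying it is a single homogeneous transform: a point measured in the camera frame is mapped into the robot base frame. A minimal sketch, assuming the calibrated transform is available as a 4 × 4 matrix (the names T_base_cam and cam_to_base are illustrative):

```python
import numpy as np

def cam_to_base(T_base_cam, p_cam):
    """Map a 3D point from the camera frame into the robot base frame.

    T_base_cam: 4x4 homogeneous transform obtained from hand-to-eye
    calibration; p_cam: length-3 point in camera coordinates.
    """
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous form
    return (T_base_cam @ p)[:3]
```

With a pure-translation calibration result, the mapping simply offsets the camera-frame point by the camera's position in the base frame.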

3D Coordinate Acquisition of Nodes.
The 3D coordinates of the grasping point were determined using the binocular stereo vision system formed by the left and right cameras. Binocular stereo vision acquires the three-dimensional information of an object by simultaneously photographing it from multiple viewing angles and applying the principle of parallax. Point P(X, Y, Z) was the world coordinate of any spatial point, (u, v) was the pixel coordinate of point P projected onto the camera imaging coordinate system, and DISP was the disparity value. O_l-x_l y_l z_l was the left camera coordinate system, and O_r-x_r y_r z_r was the right camera coordinate system. O_l and O_r were the optical centers of the two cameras, the distance between the two optical axes was the baseline B, and z_l and z_r were the optical axes. According to the positioning principle of binocular stereo vision, if any point in space could be imaged in the binocular stereo camera, its three-dimensional coordinates could be calculated using equation (4). After the binocular stereo camera collected the left and right views, image processing was performed to obtain the node pixel coordinates in the left and right views, and the node coordinates of the two views were matched. Through the above calibration, the spatial three-dimensional coordinates of the nodes were obtained through matrix transformation. In this paper, Zhang's calibration method [21] was used to calculate the intrinsic and extrinsic camera parameters, and then the hand-eye calibration method was used to determine the hand-eye relation.
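Since equation (4) is not reproduced in this text, the following sketch uses the textbook parallel-axis stereo model, which matches the quantities defined above (baseline B, disparity DISP, pixel coordinates (u, v)); it is not necessarily the authors' exact expression:

```python
def triangulate(u, v, disp, f, B, cx, cy):
    """Recover (X, Y, Z) from a rectified stereo pair.

    u, v: pixel coordinates in the left image; disp: disparity in
    pixels; f: focal length in pixels; B: baseline; cx, cy: principal
    point of the left camera. Standard parallel-axis stereo model.
    """
    Z = f * B / disp          # depth from disparity
    X = (u - cx) * Z / f      # back-project through the pinhole model
    Y = (v - cy) * Z / f
    return X, Y, Z
```

Depth resolution degrades quadratically with distance, which is why the camera is mounted close to the culture box.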

Robotic Control Propagation Task.
The self-made parallel robot system consisted of a moving platform, a fixed platform, and three independent motion branches connecting the two platforms. Figure 6 shows the 3-PUU parallel manipulator structure. The mechanism consisted of the moving platform A_1A_2A_3, three pairs of equal-length connecting rods, the three driving sliders C_1, C_2, C_3, and the base. Each connecting rod connected the moving platform and a slider through a ball joint, forming a spherical pair. A prismatic pair was formed between each driving slider and its pedestal column. The moving platform could be moved by changing the positions of the three driving sliders. Since the spatial parallel connecting rods constrained the three rotational degrees of freedom of the moving platform, the platform could only translate in three dimensions. The fixed coordinate system O-XYZ was built on the base (at the starting point of the sliders), and the moving coordinate system O'-X'Y'Z' was built on the moving platform with its origin at the geometric center of the platform. The end effector was mounted on the moving platform.
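The inverse kinematics of this mechanism, developed in the following paragraph, reduces to solving the fixed-rod-length constraint for each slider height. A minimal Python sketch under stated assumptions (the column angles and the choice of solution branch are not given in this text and are assumed here):

```python
import math

def slider_displacements(X, Y, Z, r, R, T, angles=(90.0, 210.0, 330.0)):
    """Inverse kinematics sketch for a 3-PUU translational manipulator
    with vertical guide columns.

    (X, Y, Z): platform-center position in the fixed frame; r, R:
    circumradii of the moving platform and base; T: rod length;
    angles: angular placement of the three columns (an assumption).
    Returns the three slider heights S_i, taking the upper branch of
    the rod-length constraint.
    """
    S = []
    for a in angles:
        phi = math.radians(a)
        dx = X + (r - R) * math.cos(phi)   # horizontal offset between
        dy = Y + (r - R) * math.sin(phi)   # platform joint and column
        root = T * T - dx * dx - dy * dy   # from the rod-length constraint
        if root < 0:
            raise ValueError("point outside the reachable workspace")
        S.append(Z + math.sqrt(root))
    return S
```

Each S_i would then be converted to stepping-motor pulse counts by the STM32 controller.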
In the parallel manipulator, the displacement of slider i on its column was S_i, the length of the connecting rod was T, and the angle between rod i and the static platform was θ_i. The slider displacement was bounded by S_min ≤ S_i ≤ S_max. Let the circumradius of the moving platform A_1A_2A_3 be r and the circumradius of the base platform B_1B_2B_3 be R; the absolute coordinates of the three sliders and the coordinate vectors of A_1, A_2, A_3 in the moving coordinate system o'-x'y'z' then followed, together with the vector relation of o'-x'y'z' relative to the fixed coordinate system O-XYZ. Since the moving platform had three translational degrees of freedom and no rotational degrees of freedom, the coordinate vector of point A_i in the fixed coordinate system could be written directly. Therefore, the inverse solution of the parallel mechanism could be represented as the constraint equation of the fixed-length rod, |A_i - C_i| = T, from which S_i could be obtained. If the coordinates of a spatial point in the fixed coordinate system O-XYZ were known, the corresponding displacement S_i of each driving slider could be calculated from this constraint, which is the inverse solution of the kinematic equation.

Camera Calibration.
Tables 1 and 2 summarize the extrinsic and intrinsic parameters of the CCD cameras, respectively. The accuracy of the camera calibration process was evaluated by substituting the computer image coordinates of each testing point in the left and right camera image planes into the camera model established with the calibrated extrinsic and intrinsic parameters to obtain the theoretical 3D world coordinates. The accuracy of the reconstructed result was then evaluated by computing the root-mean-squared error (RMSE), which was 0.18 pixels.

Potato Plantlets Image Segmentation.
In order to evaluate the performance of the image processing algorithm in this paper, an experiment was designed to test it. The results of image segmentation were classified into three levels: level I, level II, and level III. Level I represented full segmentation of the foreground and background, with the foreground information intact and no obvious deficiency. Level II represented full segmentation of the foreground and background, but with clearly deficient foreground information. Level III represented incomplete segmentation of the foreground and background or a lack of foreground information. A variety of image segmentation algorithms were tested through the vision system, and the results are shown in Figure 7. Figure 7(a) displays the result of segmentation obtained through the GrabCut algorithm; it belongs to level III. Figure 7(b) displays the result of segmentation obtained by the region-growing method. The foreground and background were fully segmented, but some information such as the cotyledons of the virus-free potato plantlets was lost, so it belongs to level II. Figure 7(c) displays the result of segmentation obtained by the FCM clustering algorithm. Some cotyledons and stems were not completely segmented, so it also belongs to level II. Figure 7(d) displays the result of the algorithm used in this paper. The algorithm performed a complete segmentation, and the information of the virus-free potato plantlets remained intact, so it belongs to level I. The first three algorithms are iterative, and the number of iterations and the complexity of the image tended to affect their accuracy; they were also time-consuming. The algorithm adopted in this research performed the image segmentation best and was time-saving.

Node Image Recognition.
In order to evaluate the performance of the algorithm, 20 boxes of plantlets from three different growth periods were used to test the accuracy and speed of node recognition. Table 3 shows the experimental results. The initial expansion period (15 days) was marked as P-1, the medial expansion period (22 days, the optimal propagation period) as P-2, and the overgrowth expansion period (29 days) as P-3. Figure 8 shows one group of the test images. The nodes in the potato plantlet images were manually identified and marked, and the result was recorded as "manual marking." The number of nodes automatically recognized by the algorithm in this research was recorded as "algorithm marking." The error was the deviation rate between the ground truth and the identification result, and the time was the average time for processing each image. The algorithm performed best on node recognition for P-2 potato plantlets, with an error rate of only 1.6%. Two factors caused the errors. The first was that the algorithm failed to recognize young and heavily occluded cotyledons. The second was that the two cotyledons at the top of the potato plantlets might be identified as nodes. The recognition performance for the P-1 period was second to that for the P-2 period, with an error rate of 5.6%, below the true value. The main reason was that the cotyledons of the potato plantlets were not fully grown at this period, and some cotyledons were so tender that they were seriously occluded by the stem. The recognition performance for the P-3 period was the worst, with an error rate of 11.6%, above the true value. This error mainly resulted from the buds at the top of the potato plantlets at this period: the intersections of these buds and stems were misidentified as nodes.

Node Positioning Accuracy.
In order to evaluate the accuracy of the hand-eye calibration, a checkerboard with 5 mm × 5 mm squares was attached to the end of the robot arm. The arm was controlled to move to the corresponding calibration points in space. The coordinates of the key points on the calibration plate were located by the binocular stereo camera, and the coordinate data in the robot coordinate system were obtained through the hand-eye transformation. Seven sets of tests were repeated to compare, in millimeters, the measured Euclidean distances with the true values. Figure 9 shows the checkerboard calibration plate and the error analysis diagram of the stereo camera calibration. The camera calibration results indicated that the average pixel error of binocular stereo vision positioning was 0.18 pixels. According to the statistical data in Table 4, the positioning accuracy of the robot vision system was about 1 mm, which is adequate for practical engineering.
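The accuracy figures above are scored by comparing measured 3D points against ground truth. A small sketch of the RMSE computation used in such evaluations (the function interface is illustrative; the paper does not list its scoring code):

```python
import math

def rmse(measured, truth):
    """Root-mean-squared error between measured and ground-truth 3D
    points.

    measured, truth: equal-length sequences of (x, y, z) tuples.
    Returns the RMSE of the per-point Euclidean distances.
    """
    if len(measured) != len(truth):
        raise ValueError("point lists must have equal length")
    # squared Euclidean distance for each measured/true pair
    sq = [sum((m - t) ** 2 for m, t in zip(p, q))
          for p, q in zip(measured, truth)]
    return math.sqrt(sum(sq) / len(sq))
```

The same statistic applies whether the points are in pixels (reprojection error) or millimeters (positioning error).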

Conclusion
This paper designed a vision system for the propagation robot of potato test-tube plantlets, and the system has been installed and tested on the self-made propagation robot. The system could automatically collect binocular images of a row of potato plantlets in the culture box; segment, identify, and locate the connection points of the cotyledons and the main stem; and convert them, according to the hand-eye calibration model, into three-dimensional coordinates usable by the parallel robot. The vision system performed best for potato plantlets in their optimal expansion period (22 days). In this period, the identification error of the node number was less than ±5%, the positioning error was less than 1 mm, and the processing time per run was less than 1.5 s. The vision system had high positioning accuracy and excellent real-time performance and could satisfy the performance requirements of the potato plantlet propagation robot.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.