Intelligent Point Cloud Edge Detection Method Based on Projection Transformation

An edge detection method based on projection transformation is proposed. First, a vertical projection transformation is applied to the target point cloud: the X and Y data are normalized to the width and height of an image, respectively, and the Z data is normalized to the range 0-255 so that depth serves as the gray level of the image. Then, the Canny algorithm is used to detect edges in the projected image, and the detected edge pixels are back projected to extract the edge point cloud. Performance is evaluated by calculating the normal vectors of the edge point cloud. Compared with the normal vectors of the whole target point cloud, the normal vectors of the edge point cloud express the characteristics of the target well, and the calculation time is reduced to about 10% of the original.


Introduction
As a key automation technology, machine vision is very important to the modernization of the economy and has been widely studied. Machine vision uses machines, instead of human eyes, to measure the dimensions of a target or inspect its surface. It mainly uses computers to simulate human visual function, reproduce certain intelligent behaviors related to human vision, extract information from images of objective objects, process and understand that information, and finally apply it to practical detection and control. Machine vision started from statistical pattern recognition in the 1950s, with the main work focused on two-dimensional image analysis, recognition, and understanding. In recent years, various noncontact research results have emerged [1][2][3][4]. Machine vision in the industrial field can be divided into four aspects: surface detection is widely used in product quality inspection and product classification; a camera and a robot can be combined to package products; feature detection is widely used in robot positioning. Civil machine vision technology is widely used in intelligent transportation, safety protection, character recognition, identity verification, medical equipment, and so on. In the field of scientific research, machine vision can be used for material analysis, biological analysis, chemical analysis, and life science. In the military field, it can be used in aerospace, aviation, weapons, and mapping. Its technology mainly includes image processing, mechanical engineering, control, and optical imaging.
With the rapid development of 3D acquisition technology, 3D sensors are becoming more available and affordable, including various types of 3D scanners, LiDAR, and RGB-D cameras (such as Kinect, RealSense, and Apple depth cameras). The 3D data from these sensors can provide rich geometry, shape, and scale information. Complementing 2D images, 3D data provides an opportunity to better understand the environment around the machine. 3D data can be represented in different formats, including depth images, point clouds, meshes, and volumetric grids. As a common format, the point cloud representation preserves the original geometry in 3D space without any discretization and is the preferred representation in many scene-understanding applications. 3D point cloud detection is widely researched by scholars. Ali et al. [5] build on the success of the one-shot regression meta-architecture in the 2D perspective image space and extend it to generate oriented 3D object bounding boxes from LiDAR point clouds. Zhou et al. [6] remove the need for manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single-stage, end-to-end trainable deep network. Meyer et al. [7] present LaserNet, a computationally efficient method for 3D object detection from LiDAR data for autonomous driving. Beltran et al. [8] present a LiDAR-based 3D object detection pipeline entailing three stages. Minemura et al. [9] employ dilated convolutions to gradually increase the receptive field as depth increases, which helps to reduce the computation time by about 30%. Asvadi et al. [10] address the problem of vehicle detection using a Deep Convolutional Neural Network (ConvNet) and 3D LiDAR data, with application in advanced driver assistance systems and autonomous driving; they propose a vehicle detection system based on the Hypothesis Generation (HG) and Hypothesis Verification (HV) paradigms. Simon et al. [11] propose a specific Euler-Region-Proposal Network (E-RPN) to estimate the pose of the object by adding an imaginary and a real fraction to the regression network.
Edge detection is a key technology for detecting targets [12]. Most of the information of an image exists at its edges, which are mainly manifested as discontinuities in the local features of the image. Edge detection was first proposed for two-dimensional digital images; its purpose is to identify and detect positions where the image characteristics change [13]. A point cloud edge refers to edge measurement points that can express the target features. Point cloud edges can not only express the geometric characteristics of an object but also play an important role in the quality and accuracy of object recognition and surface model reconstruction [14,15]. As an important research field of image analysis and computer vision, edge detection has attracted the attention of many scholars, and a variety of mature edge detection algorithms have been developed.
Different point cloud data models call for different edge feature extraction methods, which can be roughly divided into mesh-based and scattered-point-cloud-based methods [16,17]. In mesh-based feature extraction, the point cloud is first gridded, and the edge features are then obtained by traversing the triangulated point cloud under threshold constraints. Among these methods, the well-known Delaunay algorithm is simple and intuitive, but the triangulation process requires evaluating the Euclidean distance between points, and if the distance threshold is unsuitable, holes are generated. In addition, applying the method to three-dimensional point clouds requires the normal direction of each point to determine the projection direction, so the algorithm is better suited to uniform and smooth point clouds. Feature extraction based on scattered point clouds mainly extracts regular points, lines, surfaces, and other features from this type of point cloud, and therefore pays more attention to local features. Song et al. [18] take the root mean square of the normal vector of each point and those of its K nearest neighbours as the criterion for edge feature extraction; although this method reflects the relationship between the normal direction of each point and its neighbours well, nonedge points adjacent to the edge are also detected in the extraction result. Han et al. [19] preserve the edge feature by exploiting the fact that the normal direction of a boundary point differs from that of nonboundary points, but the density of the edge remains the same as that of the nonedge part. Chen et al. [20] proposed a feature extraction algorithm with multiparameter constraints, in which feature points are determined by normal, curvature, and Euclidean distance. In [21,22], principal component analysis (PCA) and normal-based methods were used to extract edge feature points.
Referring to the edge detection algorithms for two-dimensional images, this paper proposes a Canny operator based on projection transformation for edge detection of point cloud data and obtains the normal vectors of the edge point cloud after edge detection. The edge point cloud reflects the characteristics of the target well, and the speed of computing the normal vectors is greatly improved: compared with the normal vectors of the whole target point cloud, the normal vectors of the edge point cloud express the characteristics of the target equally well, while the calculation time is reduced to about 10% of the original.

Methodology

Vertical Projection of Point Data
Projection transformation is the process of transforming the coordinates of points in one map projection into the coordinates of another map projection. A 3D point cloud is a massive set of points expressing the spatial distribution and surface characteristics of targets in the same spatial reference system; it is the collection of points obtained by sampling the spatial coordinates of points on the object surface. Compared with a 2D image, a 3D point cloud usually carries only X, Y, Z coordinate information; relative to a 2D image, its spatial information is redundant, its geometric structure is more complex, and so is its neighbourhood structure. In this paper, a Canny edge detection algorithm based on projection transformation is proposed to detect the edges of point cloud data. The point cloud data is projected along the vertical direction onto the XY plane, and the projected data is normalized: the X direction represents the width, the Y direction represents the height, and the Z value represents the gray value of the pixel in the image. The edges of the transformed data are detected, and the final edge point cloud is then obtained by inverse transformation.
Supposing the number of points in the point cloud is m, all points are represented as P = {P_i | i = 1, 2, ..., m}. Here, P_i = [x_i, y_i, z_i]^T is the three-dimensional coordinate of point i.

Wireless Communications and Mobile Computing
The point cloud data is vertically projected along the Z direction, and the Z value is converted into the depth value of the current point. The projected point set is expressed as P' = {(x_i, y_i, z_i) | i = 1, 2, ..., m}. For the projected point set, the minimum and maximum values in the X and Y directions are recorded as x_min, x_max, y_min, and y_max. The X and Y data are quantized to a form corresponding to the image width W and height H; the abscissa and ordinate of point i become

x'_i = round((x_i - x_min) / (x_max - x_min) * (W - 1)),
y'_i = round((y_i - y_min) / (y_max - y_min) * (H - 1)).

A linear transformation is then performed on z_i. To establish the correspondence between point cloud and image, the minimum and maximum values of Z are recorded as z_min and z_max, and the Z value is linearly transformed to the range 100~255:

z'_i = 100 + (z_i - z_min) / (z_max - z_min) * 155.

Coordinates containing no point cloud are assigned the value 0. The transformed Z value plays the role of the gray value of an image. The quantized point set is expressed as P'' = {(x'_i, y'_i, z'_i) | i = 1, 2, ..., m}. As an example, an industrial part target is vertically projected, and the projection result is shown in Figure 1.
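As an illustration, the vertical projection described above can be sketched in NumPy. The function name, the image size, and the last-write-wins handling of points that fall on the same pixel are assumptions of this sketch, not details fixed by the text.

```python
import numpy as np

def project_to_depth_image(points, W=64, H=64):
    """Vertically project an (m, 3) point cloud onto an H x W depth image.

    x -> column index in [0, W-1], y -> row index in [0, H-1],
    z -> gray level linearly mapped to [100, 255]; empty pixels stay 0.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Quantize x and y to pixel coordinates using the min/max bounds.
    u = np.round((x - x.min()) / (x.max() - x.min()) * (W - 1)).astype(int)
    v = np.round((y - y.min()) / (y.max() - y.min()) * (H - 1)).astype(int)
    # Map z linearly onto the 100..255 gray range (0 marks empty pixels).
    g = 100 + (z - z.min()) / (z.max() - z.min()) * 155
    img = np.zeros((H, W), dtype=np.uint8)
    img[v, u] = g.astype(np.uint8)  # points on the same pixel overwrite each other
    return img

# Example: a small synthetic cloud.
pts = np.random.default_rng(0).uniform(0, 1, size=(500, 3))
depth = project_to_depth_image(pts)
```

The resulting image can be fed directly to any 2D edge detector.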

Edge Detection with Canny Operator
Canny edge detection was first proposed by John Canny in his 1986 paper "A Computational Approach to Edge Detection". It is a technique for extracting useful structural information from visual objects while greatly reducing the amount of data to be processed, and it has been widely used in computer vision systems. Canny found that the requirements of edge detection in different vision systems are similar, so a widely applicable edge detection technique can be achieved. The Canny algorithm is based on three objectives. (1) Low error rate: all edges should be found, with no spurious responses; as many real edges as possible should be captured as accurately as possible. (2) Good localization: the detected edge should be accurately located at the center of the real edge. (3) Single edge point response: the detector should not return multiple edge pixels where only a single edge point exists. To meet these requirements, Canny used the calculus of variations. The optimal function in the Canny detector is described by the sum of four exponential terms, which can be approximated by the first derivative of a Gaussian function. The Canny edge detection algorithm can be divided into the following five steps. (1) A Gaussian filter is used to smooth the image and remove noise. (2) The gradient intensity and direction of each pixel in the image are calculated. (3) Nonmaximum suppression is applied to eliminate spurious responses. (4) Double threshold detection is applied to determine real and potential edges. (5) Finally, edge detection is completed by suppressing isolated weak edges.

Gaussian Smoothing.
Gaussian smoothing is a 2D convolution operation applied to blur the image and remove detail and noise. To reduce the influence of noise on the edge detection result as much as possible, the noise must be filtered out to prevent false detection. To smooth the image, a Gaussian filter is convolved with it; this step reduces the obvious effect of noise on the edge detector. The generating equation of a Gaussian filter kernel of size (2k + 1) * (2k + 1) is

H(i, j) = (1 / (2πσ^2)) exp(-((i - k - 1)^2 + (j - k - 1)^2) / (2σ^2)),  1 ≤ i, j ≤ 2k + 1.

Here, σ is the standard deviation of the distribution. The mean of the distribution is assumed to be 0; that is, its center is at the origin. The distribution of the two-dimensional Gaussian filter kernel is shown in Figure 2.
When σ = 1.4 and the size of the Gaussian filter kernel is 3 * 3 (k = 1), the corresponding Gaussian kernel follows from the generating equation above. The choice of the Gaussian convolution kernel size affects the performance of the Canny detector: the larger the size, the lower the sensitivity of the detector to noise, but the positioning error of edge detection increases slightly.
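The generating equation can be turned into a small NumPy helper. The function name is illustrative, and the kernel is normalized to sum to 1, a common convention that the text does not state explicitly.

```python
import numpy as np

def gaussian_kernel(k=1, sigma=1.4):
    """Build a (2k+1) x (2k+1) Gaussian filter kernel, normalized to sum 1."""
    ax = np.arange(-k, k + 1)          # offsets from the kernel center
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return kernel / kernel.sum()       # normalize so the filter preserves brightness

K = gaussian_kernel(k=1, sigma=1.4)    # the 3 x 3, sigma = 1.4 case discussed above
```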
If f is a 3 * 3 window in the image and f(i, j) is the pixel to be filtered, then the value of the pixel after Gaussian filtering is e(i, j) = H * f(i, j). Here, * is the convolution symbol.

Calculate the Intensity and Direction of the Gradient.
Using a discrete difference operator, convolution is carried out along the x-axis and y-axis, respectively, to obtain the gray-level change and its direction in the horizontal and vertical directions. The gradient magnitude and direction are determined by

G(i, j) = sqrt(G_x(i, j)^2 + G_y(i, j)^2),
θ(i, j) = arctan(G_y(i, j) / G_x(i, j)).

Here, G_x(i, j) and G_y(i, j) are the first derivatives in the horizontal and vertical directions, respectively, obtained with the Sobel operators

G_x = [-1 0 1; -2 0 2; -1 0 1],  G_y = [-1 -2 -1; 0 0 0; 1 2 1],

where G_x is the Sobel operator in the x direction, used to detect edges in the y direction, and G_y is the Sobel operator in the y direction, used to detect edges in the x direction (the edge direction is perpendicular to the gradient direction). G_x(i, j) and G_y(i, j) are computed as the convolution of G_x (respectively G_y) with the image data.

Nonmaximum Suppression.
Nonmaximum suppression is an edge-thinning technique. After the gradient of the image is computed, an edge extracted from the gradient values alone is still blurred, whereas each edge should produce only one accurate response. Nonmaximum suppression suppresses all gradient values except the local maxima to 0. For each pixel in the gradient image, the algorithm is as follows. (1) The gradient intensity of the current pixel is compared with the two pixels along the positive and negative gradient directions. (2) If the gradient intensity of the current pixel is the largest of the three, the pixel is retained as an edge point; otherwise, it is suppressed.
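A minimal NumPy sketch of the gradient step, using the Sobel operators and a plain "valid" convolution. The function names and the lack of border padding are illustrative choices, not part of the method as published.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_valid(img, kernel):
    """Plain 'valid' 2D convolution (kernel flipped, as in true convolution)."""
    kh, kw = kernel.shape
    h, w = img.shape
    flipped = kernel[::-1, ::-1]
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

def gradient(img):
    """Gradient magnitude and direction from the Sobel responses Gx, Gy."""
    gx = conv2_valid(img, SOBEL_X)
    gy = conv2_valid(img, SOBEL_Y)
    mag = np.hypot(gx, gy)        # sqrt(Gx^2 + Gy^2)
    theta = np.arctan2(gy, gx)    # gradient direction in radians
    return mag, theta
```

A vertical step edge produces a strong response in the x derivative and none in the y derivative, as expected.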

Double Threshold Detection and Suppression of Isolated Low-Threshold Points.
To remove edge pixels caused by noise, high and low thresholds are selected: edge pixels whose gradient exceeds the high threshold are retained as strong edges, and edge pixels whose gradient falls below the low threshold are suppressed. A weak edge pixel, whose gradient lies between the low and high thresholds, is retained only if it is connected to a strong edge pixel. In general, weak edge pixels caused by real edges are connected to strong edge pixels, while noise responses are not; edge connection is therefore tracked by examining the neighbourhood of each weak edge pixel. Edge detection is performed on the image obtained after the point cloud is vertically projected, and the result is shown in Figure 3.
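The double-threshold and edge-tracking step can be sketched as follows. This is a simple iterative region-growing version assuming 8-connectivity; the function name and threshold values are illustrative.

```python
import numpy as np

def hysteresis(mag, low, high):
    """Double-threshold edge tracking: keep strong edges (>= high) plus any
    weak edges (>= low) connected to a strong edge through 8-neighbours."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:                    # grow strong edges into connected weak pixels
        changed = False
        for i in range(mag.shape[0]):
            for j in range(mag.shape[1]):
                if weak[i, j] and not edges[i, j]:
                    i0, i1 = max(i - 1, 0), min(i + 2, mag.shape[0])
                    j0, j1 = max(j - 1, 0), min(j + 2, mag.shape[1])
                    if edges[i0:i1, j0:j1].any():
                        edges[i, j] = True
                        changed = True
    return edges
```

Weak pixels chained to a strong pixel survive; an isolated weak pixel is discarded as noise.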

Back Projection Transform of Edge Detection Data
In order to express the three-dimensional information of the edge points, the edge detection result is back projected to the original point cloud. The points in the edge map obtained by projection transformation of the point cloud are represented as f(i, j), where i is the horizontal coordinate with value range 1~W and j is the vertical coordinate with value range 1~H. When f(i, j) = 1, the point is an edge point, and the corresponding point in the point cloud must be found. First, the x and y values of the point cloud corresponding to pixel (i, j) are calculated by inverting the quantization of the forward projection. Then, the original point cloud is searched for points whose quantized horizontal and vertical coordinates match, and those points are marked as edge points. The effect of the edge point cloud after the inverse transformation of the edge points is shown in Figure 4, in which gray represents the target and red represents the edge point cloud. The corresponding statistics are listed in Table 1.
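The original search code does not survive in this copy of the text. The following is a minimal NumPy sketch of the back-projection lookup, assuming the same min-max quantization as the forward projection; the function and variable names are illustrative.

```python
import numpy as np

def back_project_edges(points, edge_map, W, H):
    """Return the points of an (m, 3) cloud whose projected pixel is an edge.

    Re-quantizes each point's (x, y) exactly as in the forward projection,
    then looks the pixel up in the binary edge map (rows = y, cols = x).
    """
    x, y = points[:, 0], points[:, 1]
    u = np.round((x - x.min()) / (x.max() - x.min()) * (W - 1)).astype(int)
    v = np.round((y - y.min()) / (y.max() - y.min()) * (H - 1)).astype(int)
    is_edge = edge_map[v, u].astype(bool)   # vectorized lookup, one flag per point
    return points[is_edge]
```

Because the lookup is vectorized, no explicit per-pixel search over the cloud is needed.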
The edge points of each target amount to around 10% of the original points. After the edge points are extracted, they provide a reliable basis for subsequent point cloud normal vector calculation and point cloud registration.

Normal Vector Calculation of Edge Points Based on PCA

After edge detection, the normal vector of the locally fitted plane is taken as the normal vector of each point. Suppose the edge point is P_i = [x_i, y_i, z_i]^T; a K-neighbourhood search is performed in the original point cloud dataset, and the best-fit plane Ax + By + Cz - D = 0 is calculated. Here, (A, B, C) is the normal vector of the plane, D is the distance from the origin to the plane, A^2 + B^2 + C^2 = 1, and D > 0. The distance from each point to the plane is d_i = |Ax_i + By_i + Cz_i - D|. The best-fit plane minimizes the sum of the distances from the points to the plane. The normal vector estimation of the edge point cloud is shown in Figure 5.
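A minimal NumPy sketch of the PCA plane fit: the normal of the best-fit plane through a neighbourhood is the eigenvector of the neighbourhood covariance matrix with the smallest eigenvalue. The function name is illustrative.

```python
import numpy as np

def pca_normal(neighbours):
    """Normal of the best-fit plane through a (k, 3) neighbourhood.

    Classic PCA plane fitting: center the points, build the covariance
    matrix, and take the eigenvector with the smallest eigenvalue.
    """
    centered = neighbours - neighbours.mean(axis=0)
    cov = centered.T @ centered / len(neighbours)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    n = eigvecs[:, 0]                        # smallest-eigenvalue direction
    return n / np.linalg.norm(n)             # unit normal
```

For points lying exactly in the z = 0 plane, the estimated normal is (0, 0, ±1).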
Because the number of points corresponding to edge features is greatly reduced, the time needed to calculate normal vectors from the edge features is also greatly shortened. The calculation times are shown in Table 2.
It can be seen that, compared with the normal vectors of all data points of the target, the normal vectors of the edge point cloud express the characteristics of the target well; the processing time for the tee target is reduced from 53.03 s to 4.94 s and that for the elbow target from 42.03 s to 4.37 s, which is around 10% of the original.

Conclusions
Point cloud data is large and contains much invalid information, so extracting the characteristics of a point cloud is very important. Edge features express the geometric features of the target well, so extracting the edge point cloud is important. This paper proposes an edge detection algorithm based on projection transformation. Firstly, the target point cloud is projected vertically. Then, the Canny algorithm is used to detect the edges of the projected image, and the detected edge data is back projected to extract the edge point cloud. The normal vectors of the edge point cloud, compared with those of the whole target point cloud, express the characteristics of the target well, and the calculation time is reduced to about 10% of the original, which greatly saves computation. This paper only detects 3D tee and elbow targets and demonstrates the advantage of edge features in normal vector calculation; the advantages of the edge point cloud in point cloud registration need further study.

Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.