This paper proposes a method of representing three-dimensional (3D) environments with B-spline surfaces, which are used for the first time to describe large environments in 3D map construction research. First, a 3D point cloud map is constructed from line segments extracted with two mutually perpendicular 2D laser range finders (LRFs). Two types of accumulated data sets are then separated from the point cloud map according to the two types of robot movement, continuous translation and continuous rotation. To express the environment more accurately, a B-spline surface with a covariance matrix is extracted from each data set. Because of the random robot movements, the extracted B-spline surfaces inevitably overlap. However, merging two overlapping B-spline surfaces whose control points have different distribution directions is a complex problem that has not yet been well addressed. In the proposed method, each surface is first divided into an overlapping part and a nonoverlapping part. Sample points generated from one overlapping part, together with their propagated uncertainties, and their projection points on the other overlapping part are then merged using the product of Gaussian probability density functions. From this merged data set, a new surface is extracted to represent the environment in place of the two overlapping parts. Finally, the proposed methods are validated experimentally by an accurate representation of an indoor environment with B-spline surfaces.

Two-dimensional (2D) feature-based simultaneous localization and mapping (SLAM) is the problem of correcting a robot's position and building an environment map by using features extracted in an unknown environment. In the past decade, researchers have investigated many issues in 2D SLAM, such as feature characterization [

Recently, several SLAM studies have constructed 3D point cloud maps of real environments to show the geometrical shapes of real objects [

To represent the environment well, the most commonly used feature is the plane, which current research on 3D map construction extracts from the point cloud map. There are many plane extraction methods [

In this paper, two mutually perpendicular 2D LRFs are used to build the 3D point cloud map. To correct the position of the mobile robot, line segments are extracted from the sensor data obtained with the horizontal LRF. An improved extended Kalman filter (IEKF) SLAM algorithm is applied to update the robot position by using the matched feature pairs. Based on the accurate robot position, the point cloud map is constructed from the sensor data obtained with the vertical LRF, as shown in Figure

The 3D point cloud map constructed by using two mutually perpendicular LRFs; points are colored according to their height. The robot trajectory is plotted with green triangles, and the 2D map is plotted with blue lines.

B-spline surfaces, which require only a small number of parameters, are extracted from the point cloud map to represent the 3D environment because they can represent various objects with complex geometrical shapes. Only a small amount of storage is needed for the parameters of a B-spline surface instead of a large point cloud. This makes the SLAM process more efficient, and the storage does not grow even when the same object is scanned repeatedly, because the B-spline surfaces extracted from different scans of a similar object are merged into one, keeping the number of parameters small. Compared with planar-surface-based 3D map construction, B-spline surfaces can represent a broader range of complex environments: not only polyhedral objects but also irregular and curved objects can be expressed accurately. If a polyhedral object is expressed with both methods, the number of B-spline surfaces is smaller than the number of planes because of the closure property of B-spline surfaces.

Even though B-spline surface is commonly used in computer aided design (CAD) [

To extract the surface, the control points of the curves are rearranged and treated as raw data points. The curve extraction process is then repeated to find the control points of the B-spline surface, and the covariance matrix of the control points is derived from the uncertainty of the raw data points. Because of the random robot movements, the two B-spline surfaces extracted from the different data sets inevitably overlap, and these overlapping parts should be merged into one to represent the environment. However, the problem of merging two overlapping B-spline surfaces has not yet been well solved, because the distribution directions of the control points of the two overlapping surface patches are different. In the proposed method, each surface is first divided into an overlapping part and a nonoverlapping part. One data set is then generated from the overlapping part of one surface, and the other data set consists of its projections onto the other surface. The merged data set is obtained by using the product of two Gaussian probability density functions. Finally, a new B-spline surface is extracted from this merged data set to represent the environment in place of the two overlapping parts.

The rest of this paper is organized as follows. The result of a 3D point cloud map construction is presented in Section

To build the 2D and 3D maps accurately, the position of the mobile robot should be corrected after each movement. This is realized by considering the extracted line segments as landmarks. Accordingly, this section has three subsections: line segment extraction, line segment-based 2D SLAM, and construction of the 3D point cloud map.

Landmarks play a key role in updating the robot pose. In this paper, line segments are considered as landmarks. They are extracted from the segmented data groups of each sensor scan, obtained with a 2D LRF mounted horizontally on the mobile robot. The raw data of each scan are separated into groups wherever the distance between two adjacent points exceeds a defined limit; a separated group is further divided if the angle formed by three sequential points exceeds a limit angle. Each data group is then used to extract one line segment.
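The segmentation rule just described can be sketched as follows. This is a minimal illustration, not the paper's implementation; the threshold values and function names are our own assumptions.

```python
import numpy as np

def segment_scan(points, dist_limit=0.15, angle_limit=np.deg2rad(30)):
    """Split one 2D LRF scan (N x 2 array) into data groups.

    A group boundary is inserted where two adjacent points are farther
    apart than dist_limit, or where three sequential points bend by
    more than angle_limit. Thresholds here are illustrative only.
    """
    groups, current = [], [points[0]]
    for i in range(1, len(points)):
        p = points[i]
        split = np.linalg.norm(p - current[-1]) > dist_limit
        if not split and len(current) >= 2:
            v1 = current[-1] - current[-2]
            v2 = p - current[-1]
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
            split = np.arccos(np.clip(cos, -1.0, 1.0)) > angle_limit
        if split:
            groups.append(np.array(current))
            current = [p]
        else:
            current.append(p)
    groups.append(np.array(current))
    return groups
```

Each returned group would then be passed to the line extraction step below.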

For the extraction of line segments, each segment has two geometrical parameters, intercept

Extracted line segment (black line) from a group of raw sensor data (green points) has two geometrical parameters, expressed as intercept

The least-squares solution is found by setting the partial derivative of
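Since the paper's exact line parameterization is truncated above, the following sketch uses one common SLAM convention (an assumption on our part): the normal form x cos(alpha) + y sin(alpha) = d, whose least-squares solution minimizing perpendicular distances has a closed form in the centroid and second moments of the group.

```python
import numpy as np

def fit_line_tls(points):
    """Total-least-squares fit of a line in normal form (alpha, d)
    to a 2D point group (N x 2 array). Minimizes the sum of squared
    perpendicular distances; closed-form solution."""
    c = points.mean(axis=0)
    dx, dy = (points - c).T
    # angle of the line normal minimizing perpendicular residuals
    alpha = 0.5 * np.arctan2(-2.0 * np.sum(dx * dy),
                             np.sum(dy ** 2) - np.sum(dx ** 2))
    d = c[0] * np.cos(alpha) + c[1] * np.sin(alpha)
    return alpha, d
```

The covariance of (alpha, d) would then be propagated from the sensor noise of the raw points, as the text describes.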

In order to match the newly extracted line segments with the stored map features, the new segments should be transformed into the global coordinate system. The parameters
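Assuming the normal-form line parameters (alpha, d) sketched earlier (the paper's exact parameters are truncated above), the robot-frame-to-global transformation has a simple closed form:

```python
import numpy as np

def line_to_global(alpha, d, pose):
    """Transform a line in normal form (alpha, d), expressed in the
    robot frame, into the global frame, given the robot pose
    (x, y, theta). Derivation: the line normal rotates by theta, and
    d shifts by the projection of the translation onto that normal."""
    x, y, theta = pose
    alpha_g = alpha + theta
    d_g = d + x * np.cos(alpha_g) + y * np.sin(alpha_g)
    return alpha_g, d_g
```

For example, the line x = 2 seen from a robot at (1, 0, 0) becomes the global line x = 3.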

In order to localize the mobile robot correctly and build the 2D environment map accurately, a data association method should be used to establish the correspondence between the stored line segments and the newly extracted ones. The partial compatibility algorithm (PCA) [

To update the state vector consistently and efficiently, an improved EKF (IEKF) SLAM algorithm is used. The IEKF-SLAM algorithm has three parts: prediction, data association, and correction. In the
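The correction part of any EKF-style SLAM loop follows the standard Kalman update; a generic sketch is given below (the symbols are generic, not the paper's exact state layout, and the IEKF refinements are not shown):

```python
import numpy as np

def ekf_correct(x, P, z, h, H, R):
    """One EKF correction step.

    x, P : predicted state mean and covariance
    z    : measurement (e.g., matched line-segment parameters)
    h    : predicted measurement h(x)
    H    : measurement Jacobian evaluated at x
    R    : measurement noise covariance
    """
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - h)                # corrected mean
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new
```

In the paper's setting, z and h would be the parameters of a matched line-segment pair found by the data association step.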

The experimental system established in our research is shown in Figure

Experimental setup: a Pioneer mobile robot, a vertical SICK LMS 100 LRF, and a horizontal Hokuyo 08LX LRF.

Four reference frames are defined in Figure

By using (

In this section, some fundamental concepts and the fitting method of the B-spline surface are presented. The term

A B-spline surface of the order

The basis functions of

The number of knot points in these two knot vectors is

Example of a bicubic B-spline surface
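As a minimal illustration of the tensor-product B-spline formula above, the following sketch evaluates a surface point directly from the Cox-de Boor recursion. The function and variable names are ours, not the paper's, and a clamped knot vector is assumed.

```python
import numpy as np

def bspline_basis(i, k, u, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of order k
    (degree k-1) at parameter u, for a clamped knot vector."""
    if k == 1:
        # half-open spans, with the final parameter value included
        if knots[i] <= u < knots[i + 1] or (u == knots[-1] and knots[i] < u <= knots[i + 1]):
            return 1.0
        return 0.0
    left = 0.0
    if knots[i + k - 1] > knots[i]:
        left = (u - knots[i]) / (knots[i + k - 1] - knots[i]) * bspline_basis(i, k - 1, u, knots)
    right = 0.0
    if knots[i + k] > knots[i + 1]:
        right = (knots[i + k] - u) / (knots[i + k] - knots[i + 1]) * bspline_basis(i + 1, k - 1, u, knots)
    return left + right

def surface_point(u, v, ctrl, ku, kv, knots_u, knots_v):
    """Evaluate S(u,v) = sum_i sum_j N_{i,ku}(u) N_{j,kv}(v) P_{ij}
    for a control net ctrl of shape (nu, nv, 3)."""
    nu, nv, _ = ctrl.shape
    Nu = np.array([bspline_basis(i, ku, u, knots_u) for i in range(nu)])
    Nv = np.array([bspline_basis(j, kv, v, knots_v) for j in range(nv)])
    return np.einsum('i,j,ijk->k', Nu, Nv, ctrl)
```

With order 2 in both directions this reduces to bilinear interpolation of the control net, which makes the evaluation easy to check by hand.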

There are many mathematical and geometrical properties of B-spline surfaces that are useful for the rest of this paper. Four of them are listed as follows.

The minimum number of control points of a

For any value of the parameters

In any given rectangle, at most

Let

More information about the properties of the B-spline surface can be seen in [

Assume that there is a data set

The first step in the proposed B-spline surface extraction algorithm is B-spline curve extraction from each sensor scan. This means that the error between the raw data in each scan and the curve is minimized, which is done by fixing the parameter

If the error between a raw data point and the extracted curve exceeds the limit value, a new knot is added to the common knot vector of the curves at the knot position of the sensor point with the maximum divergence from its corresponding extracted curve over all scans. This process is repeated until the errors of all the curves are within the error bound.
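The fit-check-refine loop described above can be sketched with a standard least-squares spline routine. This is an illustrative outline under our own assumptions (tolerances, knot placement, and the use of scipy's `make_lsq_spline`), not the paper's implementation:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_curve_with_knot_refinement(x, y, degree=3, tol=0.01, max_iter=20):
    """Least-squares B-spline curve fit with error-driven knot
    insertion: fit, find the worst residual, insert a knot at that
    parameter, and repeat until all residuals are within tol."""
    # clamped knot vector with no interior knots to start
    t = np.r_[[x[0]] * (degree + 1), [x[-1]] * (degree + 1)]
    for _ in range(max_iter):
        spl = make_lsq_spline(x, y, t, k=degree)
        err = np.abs(spl(x) - y)
        if err.max() <= tol:
            break
        # insert a new knot at the worst-fit interior point
        worst = np.argmax(err[1:-1]) + 1
        t = np.sort(np.r_[t, x[worst]])
    return spl
```

In the paper the same loop runs over all scans at once, with a common knot vector shared by every curve.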

To calculate the control points of the B-spline surface, the control points of the curves obtained in the first step are rearranged and treated as raw data:

In our research, the raw data from the constructed 3D point cloud map are divided into two types according to the two types of robot movement, continuous rotation and continuous translation. Any combined movement of rotation and translation can be analyzed by decomposing it into these two defined types. An example of the B-spline surfaces extracted from the two simulated types of raw data is shown in Figure

Two types of extracted B-spline surfaces according to the two types of robot movement, continuous translation and continuous rotation (different scans are plotted with different point shapes).

An example of the two surfaces with overlap is illustrated in Figure

An example of two B-spline surfaces with overlapping parts.

To merge the overlapping parts of the two surfaces, the overlapping part of each surface should first be found. Each surface is separated into two parts, the overlapping part and the nonoverlapping part. This is done by projecting the boundary points of one B-spline surface onto the other surface. Projection of a point
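The projection step can be posed as minimizing the squared distance between the point and the surface over the parameter domain. The sketch below uses a generic numerical minimizer and accepts any parametric surface callable (a B-spline evaluator would be plugged in); the interface is our assumption, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def project_point(p, surface, u0=0.5, v0=0.5):
    """Project a 3D point p onto a parametric surface S(u,v) by
    minimizing ||S(u,v) - p||^2 over (u,v) in [0,1]^2.

    surface : callable (u, v) -> 3-vector
    Returns the foot-point parameters and the foot point itself."""
    obj = lambda uv: np.sum((surface(uv[0], uv[1]) - p) ** 2)
    res = minimize(obj, x0=[u0, v0], bounds=[(0.0, 1.0), (0.0, 1.0)])
    u, v = res.x
    return (u, v), surface(u, v)
```

For a plane, the minimizer recovers the orthogonal foot point, which makes the routine easy to verify.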

By repeating the projection process for the generated sample points, the boundaries of the overlapping and nonoverlapping parts of each B-spline surface in Figure

Boundary points of the divided B-spline surface patches (column 1), the sample points of these patches (column 2), and the corresponding extracted B-spline surface patches (column 3). The boundary points and sample points are plotted with a different color and point shape for each patch.

To merge the overlapping surface patches, the two patches to be merged must be selected correctly from all the patches. An example with six B-spline surface patches is shown in Figure

The two surface patches are merged by operating on the generated sample points. Because the sample points of the two surfaces have different distribution directions, it is difficult to group the combined sample points of the two B-spline surface patches. To solve this problem, one group of sample points is generated from one B-spline surface patch; by projecting these sample points onto the objective patch, the projection points located on that patch are taken as the second group. The covariance matrices of the two groups of sample points are propagated from the covariance matrices of the two B-spline surface patches, respectively. The covariance matrix
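The fusion of a sample point with its projection via the product of two Gaussian probability density functions has a standard closed form, sketched below (a generic information-form fusion, with our own function names):

```python
import numpy as np

def fuse_gaussians(m1, C1, m2, C2):
    """Fuse two Gaussian estimates of the same point.

    The product of N(m1, C1) and N(m2, C2) is, up to scale, a
    Gaussian with
        C = (C1^-1 + C2^-1)^-1
        m = C (C1^-1 m1 + C2^-1 m2),
    i.e., an inverse-covariance-weighted average."""
    I1, I2 = np.linalg.inv(C1), np.linalg.inv(C2)
    C = np.linalg.inv(I1 + I2)
    m = C @ (I1 @ m1 + I2 @ m2)
    return m, C
```

With equal covariances the fused mean is the simple average and the fused covariance is halved, as expected.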

The merging process of the overlapping parts in the example of Figure

Merged surface (right) of two overlapping patches, obtained by merging the sample points (left, star points) from one patch and their projection points (middle, circular points) from the other.

B-spline surface patches after surface division and merging of overlapping patches.

An experiment was performed with real data obtained using the experimental setup in Figure

A two-dimensional map of the real environment is built as shown in Figure

Real experimental environment (a); corrected mobile robot position and the constructed 2D map (b).

Number of newly extracted line segments and stored line segments in each step.

By using the information from the horizontal LRF, the robot position is corrected and the 2D map is simultaneously constructed. Based on the accurate robot pose, the observations from the vertical LRF are transformed into the 3D coordinate system to build the 3D point cloud map, shown in Figures

Another view of constructed 3D point cloud map.

As mentioned in Section

Example of raw sensor data (column 1) and their uncertainty ellipsoid (column 2) with the continuous translation of mobile robot. Extracted B-spline curves (

In addition, B-spline surfaces are extracted from the same data set with different degrees to show the accuracy and efficiency of the bicubic B-spline surface. The extracted B-spline surfaces with (

Comparison of extracted B-spline surface with different degrees: (

Error between the raw sensor data in each 2D scan from the vertical LRF and the extracted B-spline surfaces with different degrees: (

By comparing the error ranges of these three extracted surfaces, the B-spline surface with (

To assess the accuracy of the extracted B-spline surfaces, the average errors between all the sensor data in this data set and the three surfaces with different degrees are calculated. These average errors are plotted with red dashed lines in Figure

Number of control points in

Because seven data sets are separated from the constructed 3D point cloud map, seven B-spline surfaces are extracted. To represent the environment with B-spline surface patches that do not overlap one another, the overlapping parts should be merged into one. In Section

Raw sensor data of the data set with continuous-rotation robot movements (red points; row 1, column 1) and raw sensor data of the data set with continuous-translation robot movements (blue points; row 1, column 1). The overlapping parts of these two data sets can also be seen in the top view (row 1, column 2). The B-spline surfaces extracted from these two data sets are shown in row 2, column 1, and row 2, column 2, respectively.

To merge the overlapping parts of these two B-spline surfaces, one of them is divided into five patches,

Divided B-spline surface patches (

The entire environment map constructed from the 3D point cloud map is represented with bicubic B-spline surface patches, as shown in Figure

Front view (a) and back view (b) of the whole 3D environment map expressed using bicubic B-spline surface patches.

A novel methodology for representing 3D maps of environments with complicated geometry has been proposed and experimentally validated. In view of the limitations of a traditional 3D point cloud map, the B-spline surface offers clear advantages in expressing complex structures. To represent the map with B-spline surfaces, the 3D point cloud map was first constructed by using two mutually perpendicular LRFs. To build this point cloud map, the robot position was corrected based on the line segments extracted from the horizontal LRF data. The IEKF SLAM algorithm was used to update the robot position with the feature pairs of newly extracted line segments and their matched stored segments. The raw sensor data obtained from the vertical LRF form the point cloud map of the 3D environment.

To extract B-spline surfaces from the point cloud map, two types of data sets were segmented from it according to the two types of robot movement, continuous translation and continuous rotation. Because the two extracted B-spline surfaces overlap, a surface division method was proposed to divide each surface into two parts, the overlapping part and the nonoverlapping part. A merging method was then presented to merge overlapping surface patches whose control points have different distribution directions, by operating on the generated sample points and their projection points. Simulations of two overlapping B-spline surfaces were used to illustrate this process in detail. Finally, a real experimental environment was successfully reconstructed with B-spline surface patches, which validated the accuracy, efficiency, and feasibility of the proposed methods.

The authors declare that there is no conflict of interest regarding the publication of this paper.

This research was supported by the Ministry of Trade, Industry and Energy (MOTIE), Korea, through the Education Support Program for Creative and Industrial Convergence (Grant no. N0000717).