Representation of 3D Environment Map Using B-Spline Surface with Two Mutually Perpendicular LRFs

This paper proposes a method for representing a three-dimensional (3D) environment map with B-spline surfaces, which are used here for the first time to describe a large environment in 3D map construction research. Initially, a 3D point cloud map is constructed from line segments extracted with two mutually perpendicular 2D laser range finders (LRFs). Two types of accumulated data sets are then separated from the point cloud map according to the two types of robot movement, continuous translation and continuous rotation. To express the environment more accurately, a B-spline surface with a covariance matrix is extracted from each data set. Because of the random robot movements, the extracted B-spline surfaces inevitably overlap. However, merging two overlapping B-spline surfaces whose control points are distributed in different directions is a complex problem that has not been well addressed so far. In the proposed method, each surface is divided into an overlapping and a non-overlapping part. Sample points generated, with propagated uncertainties, from one overlapping part and their projection points on the other are then merged using the product of Gaussian probability density functions. From this merged data set, a new surface is extracted to represent the environment in place of the two overlapping parts. Finally, the proposed methods are validated experimentally by an accurate representation of an indoor environment with B-spline surfaces.


Introduction
Two-dimensional (2D) feature-based simultaneous localization and mapping (SLAM) is the problem of correcting a robot's position while building a map of an unknown environment by using extracted features. In the past decade, researchers have investigated many issues in 2D SLAM, such as feature characterization [1][2][3], data association [4][5][6], and loop closing [7][8][9]. Even though much work has increased the accuracy of constructed 2D environment maps, only the 2D geometrical parameters of the objects in a three-dimensional (3D) environment are obtained.
Recently, several SLAM works have constructed 3D point cloud maps of real environments to show the geometrical shapes of real objects [10,11]. Based on constructed 3D point cloud maps, navigation [12] and path planning [13] research has been done in 3D environments. To build a 3D point cloud map, a 3D sensor is necessary to obtain the raw sensor data. Most 3D LRF systems are composed of a 2D LRF and a mechanical system, such as a vertically rotating system [14], a pitch-motion system [15], or a spring-mounted system [16]. The 3D raw sensor data obtained from these sensors must be organized to represent the environment map. The iterative closest point (ICP) algorithm [17] is the most well-known method for registration of 3D shapes described either geometrically or with point clouds. Extended ICP algorithms [18,19] have been used to represent outdoor terrain maps. However, since the environment is represented with the scanned sensor data themselves, a large storage space is needed during the experimental process.
To represent the environment well, the most commonly used feature is the plane, which current 3D map construction research extracts from the point cloud map. There are many plane extraction methods [20,21], in which planes are chosen as landmarks to build an environment map. Nevertheless, a plane is not a good choice for representing a 3D map, considering the diversity of real objects.
Figure 1: Constructed 3D point cloud map obtained with two mutually perpendicular LRFs; points are colored according to their height. The robot trajectory is plotted with green triangles, and the 2D map is plotted with blue lines.
In this paper, two mutually perpendicular 2D LRFs are used to build the 3D point cloud map. To correct the position of the mobile robot, line segments are extracted from the sensor data obtained with the horizontal LRF. An improved extended Kalman filter (IEKF) SLAM algorithm is applied to update the robot position using the matched feature pairs. Based on the accurate robot position, the point cloud map is constructed from the sensor data obtained with the vertical LRF, as shown in Figure 1.
B-spline surfaces with a small number of parameters are extracted from the point cloud map to represent the 3D environment because of their powerful representation of objects with complex geometrical shapes. Only a small storage space is needed to store the few parameters of a B-spline surface instead of a large point cloud. This makes the SLAM process more efficient, and the storage does not grow even when the same object is scanned repeatedly, because B-spline surfaces extracted from different scans of a similar object are merged into one, which keeps the number of parameters small. Compared with planar-surface-based 3D map construction, B-spline surfaces can represent a broader range of complex environments: not only polyhedral objects but also irregular and curved objects can be expressed accurately. If a polyhedral object is expressed with both methods, the number of B-spline surfaces is smaller than the number of planes because of the closure property of B-spline surfaces.
Even though B-spline surfaces are commonly used in computer-aided design (CAD) [22], manufacturing, and reverse engineering [23], this paper is the first to use them to represent a large environment in 3D map construction research. To extract the B-spline surfaces, two types of data sets are separated from the point cloud map according to the two types of robot movement, continuous rotation and continuous translation. An extended approximation algorithm is proposed to extract the raw data of each scan in every data set as a B-spline curve. The proposed method generates a curve with as few control points as possible that approximates the raw data within a prespecified error tolerance by repeatedly adding a new control point. The initial B-spline curve is extracted with p + 1 control points, where p is the degree of the curve. A new knot is then inserted into the knot vector of the current curve at the parameter of the raw data point with the largest deviation from the extracted curve. This iteration terminates when the error of the extracted curve is smaller than the error bound.
To extract the surface, the control points of the curves are rearranged and treated as raw data points. The curve extraction process is then repeated to find the control points of the B-spline surface. The covariance matrix of the control points is derived from the uncertainty of the raw data points. Due to the random robot movements, B-spline surfaces extracted from different data sets inevitably overlap. These overlaps should be merged into one surface to represent the environment. However, the problem of merging two overlapping B-spline surfaces has not been well solved so far, because the control points of the two overlapping surface patches are distributed in different directions. In the proposed method, each surface is first divided into an overlapping and a non-overlapping part. One data set is then generated from the overlapping part of one surface, and the other data set consists of its projections onto the other surface. The merged data set is obtained using the product of two Gaussian probability density functions. Finally, a new B-spline surface is extracted from this merged data set to represent the environment in place of the two overlapping parts.
The rest of this paper is organized as follows. The construction of the 3D point cloud map is presented in Section 2. The definition, properties, fitting method, and covariance matrix derivation of the B-spline surface are given in Section 3. B-spline surface merging and the experimental results are discussed in Sections 4 and 5, respectively. Finally, the paper is concluded in the last section.

3D Point Cloud Map
To build the 2D and 3D maps accurately, the position of the mobile robot must be corrected after each movement. This is achieved by treating the extracted line segments as landmarks. This section therefore has three subsections: extraction of line segments, line-segment-based 2D SLAM, and construction of the 3D point cloud map.

Extraction of Line Segments.
Landmarks play a key role in the update of the robot pose. In this paper, line segments are considered as landmarks. These line segments are extracted from segmented data groups of each sensor scan obtained with a 2D LRF mounted horizontally on the mobile robot. The raw data of each scan are separated into groups whenever the distance between two adjacent points exceeds a defined limit, and a group is divided further if the angle formed by three sequential points is larger than a limit angle. Each data group is then used to extract one line segment. Each segment has two geometrical parameters, intercept r and angle α, as shown in Figure 2; in the figure, each raw sensor point is expressed as (ρ_i, θ_i) in the polar coordinate system. The parameters of a line segment are expressed in the local coordinate system of the mobile robot because the sensor scan is obtained in the local reference frame. To derive these parameters, the distance between the raw data points and the expected line segment is minimized. This distance is obtained by projecting each data point onto the line segment, d_i = ρ_i cos(θ_i − α) − r. The squared distances from the raw data points to the line are summed as

S = Σ_i [ρ_i cos(θ_i − α) − r]².     (1)

The least-squares solution is found by setting the partial derivatives of S with respect to r and α to zero, which yields a closed-form expression of the form tan(2α) = N/D, where N and D denote the numerator and denominator in (2). In order to match the newly extracted line segments with the stored map features, the new segments must be transformed into the global coordinate system. The parameters (R, A) of a line segment in the global coordinate system are shown in Figure 2; they are calculated from the local parameters (r, α) and the current robot position vector (x_r, y_r, θ_r) as

A = α + θ_r,
R = r + x_r cos(A) + y_r sin(A).

IEKF-SLAM with PCA.

In order to correctly localize the mobile robot and accurately build the 2D environment map, a data association method is needed to establish the correspondence between the stored line segments and the newly extracted ones. The partial compatibility algorithm (PCA) [24] has been proposed as a robust data association algorithm with a short computation time. The output of PCA is a vector storing the best matching pairs. After this segment matching process with PCA, each matching pair is used to update a state vector X that includes the position of the mobile robot and the parameters of the line segments.
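The closed-form least-squares fit of (r, α) described above can be sketched as follows. This is a minimal sketch of the standard normal-form line fit; the sign convention inside tan(2α) may differ from the paper's (2):

```python
import math

def fit_line_polar(scan):
    """Least-squares fit of a line r = rho*cos(theta - alpha) to polar points.

    scan: list of (rho, theta) pairs. Returns (r, alpha) in the robot frame.
    Closed-form solution for the normal-form line x*cos(a) + y*sin(a) = r.
    """
    xs = [rho * math.cos(th) for rho, th in scan]
    ys = [rho * math.sin(th) for rho, th in scan]
    n = len(scan)
    xm, ym = sum(xs) / n, sum(ys) / n
    # Numerator N and denominator D of tan(2*alpha), obtained by setting
    # the partial derivative of the summed squared distance to zero.
    num = -2.0 * sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    den = sum((y - ym) ** 2 - (x - xm) ** 2 for x, y in zip(xs, ys))
    alpha = 0.5 * math.atan2(num, den)
    r = xm * math.cos(alpha) + ym * math.sin(alpha)
    if r < 0:                      # keep a non-negative intercept
        r, alpha = -r, alpha + math.pi
    return r, alpha
```

For example, three points sampled from the vertical line x = 1 recover r = 1 and α = 0.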
To consistently and efficiently update the state vector, an improved EKF (IEKF) SLAM algorithm is used. The IEKF-SLAM algorithm has three parts: prediction, data association, and correction. In the prediction step of the k-th iteration, the inputs are the state vector X_{k−1} in (6), including the current pose (x_r, y_r, θ_r) of the mobile robot and all the stored map segments with their covariance matrices at step k − 1. Based on the new odometry data (u_trans, u_rot), the state vector X_{k−1} with its covariance matrix Σ_{k−1} is predicted as X̄_k and Σ̄_k using (7) to (9). After the prediction, the newly extracted segments and the previously stored map features are matched with PCA. A best matching vector is obtained by considering partial compatibility; it contains many unit matching pairs of new segments and map segments. In the correction step, each unit matching pair is used to update the state vector and its covariance matrix using (12) to (15). To reduce the computational complexity, the large covariance matrix is decomposed into block matrices. Finally, the updated state vector X_k with its covariance matrix Σ_k is obtained, and unmatched new segments are stored as map segments. These steps are iterated with each newly observed scan.
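The prediction step (7) to (9) can be sketched for the robot-pose block of the state vector. The motion model below (translate u_trans along the heading, then rotate by u_rot) and the diagonal odometry noise are assumptions, since the paper's exact equations are not reproduced here:

```python
import numpy as np

def ekf_predict(state, cov, u_trans, u_rot, q_trans=1e-4, q_rot=1e-4):
    """One EKF-SLAM prediction step for the robot pose block.

    state = [x, y, theta]. The map part of the state vector is unchanged
    by the motion model, so only the 3x3 robot block is updated here.
    """
    x, y, th = state
    pred = np.array([x + u_trans * np.cos(th),
                     y + u_trans * np.sin(th),
                     th + u_rot])
    # Jacobian of the motion model with respect to the robot pose.
    F = np.array([[1.0, 0.0, -u_trans * np.sin(th)],
                  [0.0, 1.0,  u_trans * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    # Jacobian with respect to the odometry input (u_trans, u_rot).
    G = np.array([[np.cos(th), 0.0],
                  [np.sin(th), 0.0],
                  [0.0,        1.0]])
    Q = np.diag([q_trans, q_rot])      # odometry noise (assumed diagonal)
    pred_cov = F @ cov @ F.T + G @ Q @ G.T
    return pred, pred_cov
```

Starting from a perfectly known pose, the predicted covariance is exactly the odometry noise mapped through G.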

Construction of a 3D Point Cloud Map.

The experimental system established in our research is shown in Figure 3. The horizontal sensor obtains 760 raw data points over a measurement range of 270 degrees, and the vertical sensor obtains 541 points over the same range. The two laser sensors are mutually perpendicular, and the horizontal sensor is parallel to the ground. As described in the first subsection, line segments are extracted from the raw data obtained with the horizontal LRF. By matching and updating these line segments, the pose of the mobile robot is corrected accurately. Based on the updated robot position, the raw data points obtained with the vertical sensor are projected into 3D space to build the 3D point cloud map. Four reference frames are defined in Figure 3 to express a laser point in 3D space: the absolute reference frame, the reference frame of the mobile robot, the reference frame of the horizontal LRF, and the reference frame of the vertical LRF. The i-th laser point obtained with the vertical LRF is transformed into the global reference frame by chaining the homogeneous transformations between these frames, as given in (16) to (18). Using (16) to (18), each 2D scan obtained with the vertical LRF can be correctly expressed in the global reference frame. To obtain a dense 3D point cloud map, the mobile robot is controlled to translate by a small interval or rotate by a small angle in the indoor environment. In this paper, the spacing distance is 100 mm and the rotation angle is 0.087 rad per step.
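Under the frame chain above, one vertical-LRF measurement is mapped into the global frame by composing homogeneous transforms. The mount transform T_rv and the choice of the sensor's local x-z plane as the scan plane are assumptions standing in for the rig calibration of (16) to (18):

```python
import numpy as np

def pose2d_to_hom(x, y, theta, z=0.0):
    """Homogeneous 4x4 transform for a planar pose lifted into 3D."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0.0, 0.0, 1.0, z],
                     [0.0, 0.0, 0.0, 1.0]])

def vertical_point_to_global(rho, gamma, robot_pose, T_rv):
    """Project one vertical-LRF measurement into the global frame.

    rho, gamma : range and bearing of the beam in the vertical scan plane.
    robot_pose : (x_r, y_r, theta_r) corrected by the horizontal-LRF SLAM.
    T_rv       : fixed 4x4 mount transform from robot frame to vertical LRF
                 (hypothetical here; the real one comes from calibration).
    """
    p_v = np.array([rho * np.cos(gamma), 0.0, rho * np.sin(gamma), 1.0])
    T_gr = pose2d_to_hom(*robot_pose)          # global <- robot
    return (T_gr @ T_rv @ p_v)[:3]
```

With an identity mount, a beam straight ahead stays in the ground plane, while a beam at 90 degrees points straight up.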

B-Spline Surface
In this section, fundamental concepts and the fitting method of the B-spline surface are presented. A spline is a sufficiently smooth piecewise-defined polynomial function, applied in many scientific applications where approximation or interpolation of noisy data is required. A p × q-th degree B-spline surface is obtained as the tensor product of a p-th degree (order p + 1) B-spline curve and a q-th degree B-spline curve.

Definition and Properties of the B-Spline Surface.
A B-spline surface of degree p × q with a bidirectional net of control points P_{i,j} (i = 0, ..., m; j = 0, ..., n), two knot vectors U and V, and products of the univariate B-spline basis functions N_{i,p}(u) and N_{j,q}(v) can be expressed as

S(u, v) = Σ_{i=0}^{m} Σ_{j=0}^{n} N_{i,p}(u) N_{j,q}(v) P_{i,j}.

The basis functions N_{i,0}(u) are defined as

N_{i,0}(u) = 1 if u_i ≤ u < u_{i+1}, and N_{i,0}(u) = 0 otherwise,

and for all p ≥ 1

N_{i,p}(u) = ((u − u_i) / (u_{i+p} − u_i)) N_{i,p−1}(u) + ((u_{i+p+1} − u) / (u_{i+p+1} − u_{i+1})) N_{i+1,p−1}(u),

where N_{i,p} is the simplified form of N_{i,p}(u); N_{j,q}(v) is calculated in the same way. The two parameters u and v range over the intervals spanned by the two non-decreasing knot vectors

U = {u_0, ..., u_{m+p+1}},  V = {v_0, ..., v_{n+q+1}}.

The numbers of knots in these two knot vectors are m + p + 2 and n + q + 2, respectively. An example of these parameters is illustrated in Figure 4.
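The recursive definition above can be evaluated directly. The following is a straightforward, unoptimized sketch of the Cox-de Boor recursion in the degree convention used here:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u).

    p is the degree and knots is the non-decreasing knot vector.
    Terms over a zero-width knot span are defined to be zero.
    """
    if p == 0:
        # Half-open spans; additionally close the last non-empty span on
        # the right so that u = knots[-1] is covered (clamped knot vectors).
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1]:
            return 1.0
        return 0.0
    total = 0.0
    if knots[i + p] > knots[i]:
        total += (u - knots[i]) / (knots[i + p] - knots[i]) \
                 * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        total += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                 * bspline_basis(i + 1, p - 1, u, knots)
    return total
```

With the clamped knot vector of Figure 4, V = {0, 0, 0, 0, 0.5, 1, 1, 1, 1}, the five cubic basis functions sum to 1 at every parameter value, as stated in property (2) below.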
There are many mathematical and geometrical properties of B-spline surfaces that are useful for the remaining content. Four of them are listed as follows.
(1) Nonnegativity: N_{i,p}(u) ≥ 0 for all i, p, and u.

(2) Partition of unity: for any value of the parameter u, the sum of all the B-spline basis functions is 1, that is, Σ_{i=0}^{m} N_{i,p}(u) = 1.

(3) Local support: in any given knot rectangle [u_i, u_{i+1}) × [v_j, v_{j+1}), at most (p + 1)(q + 1) basis functions are nonzero. This means that the value of S(u, v) depends on at most (p + 1)(q + 1) of the control points.

(4) With (u, v) fixed, the partial derivatives of S(u, v) can be obtained by computing the derivatives of the basis functions, for example ∂S/∂u = Σ_{i=0}^{m} Σ_{j=0}^{n} N'_{i,p}(u) N_{j,q}(v) P_{i,j}.

More information about the properties of the B-spline surface can be found in [25].

Fitting of the B-Spline Surface.
Assume that there is a data set Q_{i,j} (i = 0, ..., m; j = 0, ..., n) with n + 1 sensor scans, each including m + 1 raw data points. The least-squares approximation method is used to find the control points of the B-spline surface by minimizing the error between the raw sensor data and the extracted surface, where the parameter set {u_i} of each data group is calculated with the centripetal method [25]. The first step of the proposed B-spline surface extraction algorithm is B-spline curve extraction from each sensor scan. That is, the error between the raw data of each scan and the curve is minimized, which is done by fixing the parameter v in (27). In the beginning, p + 1 control points are extracted from the raw data with 2p + 2 knots, p + 1 zeros and p + 1 ones. The control points P_i of the curve are calculated by setting the partial derivatives of the summed error with respect to all the control points to zero, which yields the normal equations NᵀN P = Nᵀ Q, where N is the matrix of basis function values at the data parameters. If the error between the raw data points and the extracted curve is larger than the limit value, a new knot is added to the common knot vector of these curves at the parameter of the sensor point with the largest deviation from its extracted curve, taken over all scans. This process is repeated until the errors of all the curves lie within the error bound.
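For fixed knots and parameters, the least-squares step above reduces to solving the normal equations; a minimal, self-contained sketch (knot insertion omitted):

```python
import numpy as np

def fit_bspline_curve(points, degree, knots, params):
    """Least-squares control points of a B-spline curve with fixed knots.

    points : (m+1, dim) raw data; params : parameter value of each point
    (e.g. from the centripetal method); knots : clamped knot vector.
    Solves the normal equations N^T N P = N^T Q.
    """
    def basis(i, p, u):
        # Cox-de Boor recursion, with the last non-empty span closed at
        # the right end so that u = knots[-1] is covered.
        if p == 0:
            if knots[i] <= u < knots[i + 1]:
                return 1.0
            return 1.0 if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1] else 0.0
        out = 0.0
        if knots[i + p] > knots[i]:
            out += (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u)
        if knots[i + p + 1] > knots[i + 1]:
            out += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, u)
        return out

    n_ctrl = len(knots) - degree - 1
    N = np.array([[basis(i, degree, u) for i in range(n_ctrl)] for u in params])
    # Normal equations: P = (N^T N)^{-1} N^T Q
    return np.linalg.solve(N.T @ N, N.T @ np.asarray(points, dtype=float))
```

For collinear data the fit is exact: three points on a line, fitted with a degree-1 curve and two control points, reproduce the endpoints.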
To calculate the control points of the B-spline surface, the control points of the curves obtained in the first step are rearranged and treated as raw data. New control points are obtained from these data using the same method as in (25):

P_{:,j} = (N_vᵀ N_v)⁻¹ N_vᵀ C_{:,j},

where P_{:,j} is the j-th column of the control points of the extracted B-spline surface, each with dimension 3 × 1 in three-dimensional space, and C_{:,j} collects the rearranged curve control points. The matrix N_v and the parameters {v_j} are calculated with the same principle as in (24) and (26). To express the uncertainty of the extracted surface, the covariance matrix Σ_P of the control points is propagated from the covariance matrix Σ_Q of the raw data points with a first-order Taylor expansion:

Σ_P = J Σ_Q Jᵀ,     (33)

where J is the Jacobian of the control points with respect to the raw data. In our research, the raw data of the constructed 3D point cloud map are divided into two types according to the robot movements, continuous rotation and continuous translation. Any combined movement can be analyzed by decomposing it into these two types. An example of the B-spline surfaces extracted from the two simulated types of raw data is shown in Figure 5. In addition, due to the random robot movements, two extracted B-spline surfaces may overlap. The overlap of the two surfaces should be merged into one to represent the environment, which is done in the following section.
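Since the fitted control points depend linearly on the raw data, P = (NᵀN)⁻¹Nᵀ Q, the first-order Taylor propagation of (33) uses the exact Jacobian A = (NᵀN)⁻¹Nᵀ; a sketch:

```python
import numpy as np

def propagate_ls_covariance(N, cov_q):
    """First-order covariance of least-squares control points.

    N     : basis matrix evaluated at the data parameters.
    cov_q : covariance of one coordinate of the raw data points.
    Because P = (N^T N)^{-1} N^T Q is linear in Q, the propagation
    C_P = A C_Q A^T (with A the Jacobian dP/dQ) is exact.
    """
    A = np.linalg.solve(N.T @ N, N.T)      # Jacobian dP/dQ
    return A @ cov_q @ A.T
```

For example, a single control point fitted to two unit-variance data points (N a column of ones) averages them, so its variance drops to 0.5.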

Merging of the B-Spline Surfaces
An example of two surfaces with overlap is illustrated in Figure 6, in which the control points of the two surfaces are distributed in different, intersecting directions. To represent the environment without overlapping surfaces, two steps are executed: B-spline surface division and merging of the two overlapping patches of the two surfaces.

Division of B-Spline Surface.
To merge the overlaps of the two surfaces, the overlapping part of each surface must first be found. Each surface is separated into two parts, the overlapping part and the non-overlapping part. This is done by projecting the boundary points of one B-spline surface onto the other. The projection of a point Q onto a B-spline surface S is an iterative process, which begins with the generated sample point of S closest to Q, with parameters (u_0, v_0). Because of the finite number of sample points, the projection point of Q is generally not exactly a sample point. The parameters (u_{k+1}, v_{k+1}) of a closer point on the surface S are calculated with a Newton step using the surface derivatives S_u, S_v, S_uu, S_vv, and S_uv at (u_k, v_k), computed with (20). The iteration terminates when at least one of two conditions is satisfied: the distance between the point and the surface falls below a threshold, or the residual vector becomes orthogonal to both surface tangents within a threshold. By repeating this projection process for the generated sample points, the boundaries of the overlapping and non-overlapping parts of each B-spline surface in Figure 6 can be found. The boundary points of the two surfaces are shown in the top part of Figure 7, in which the non-overlapping part is separated into two pieces because the distribution of the knot points is discontinuous in some places. Separation of the non-overlapping part is done by fixing one of the two knot parameters of the corner point on the boundary. According to these boundaries, sample points are generated and assembled into groups. To maintain the boundary of the original surface, an interpolation algorithm is used to calculate the control points of a new surface after obtaining the control points of the curves in each group of sample data. The segmented B-spline surface patches are shown in the bottom part of Figure 7, where the number of control points increases in order to maintain the shape.
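The Newton-type point inversion described above can be sketched generically. The version below uses central finite differences for the surface derivatives instead of the analytic derivatives of (20), an assumption made to keep the sketch self-contained:

```python
import numpy as np

def project_point(P, S, u0, v0, tol=1e-10, max_iter=50, h=1e-5):
    """Newton point inversion onto a parametric surface S(u, v).

    Starts from the parameters (u0, v0) of the closest generated sample
    point and refines them until the residual r = S(u, v) - P is
    orthogonal to the surface tangents S_u and S_v.
    """
    u, v = u0, v0
    for _ in range(max_iter):
        r = S(u, v) - P
        # Central-difference first and second derivatives of the surface.
        Su = (S(u + h, v) - S(u - h, v)) / (2 * h)
        Sv = (S(u, v + h) - S(u, v - h)) / (2 * h)
        Suu = (S(u + h, v) - 2 * S(u, v) + S(u - h, v)) / h ** 2
        Svv = (S(u, v + h) - 2 * S(u, v) + S(u, v - h)) / h ** 2
        Suv = (S(u + h, v + h) - S(u + h, v - h)
               - S(u - h, v + h) + S(u - h, v - h)) / (4 * h ** 2)
        # Newton system for the two orthogonality conditions
        # f = r . S_u = 0 and g = r . S_v = 0.
        J = np.array([[Su @ Su + r @ Suu, Su @ Sv + r @ Suv],
                      [Su @ Sv + r @ Suv, Sv @ Sv + r @ Svv]])
        k = -np.array([r @ Su, r @ Sv])
        du, dv = np.linalg.solve(J, k)
        u, v = u + du, v + dv
        # Stop when the parameter step becomes negligible.
        if max(abs(du), abs(dv)) < tol:
            break
    return u, v
```

For a plane S(u, v) = (u, v, 0), the iteration converges in one step to the foot of the perpendicular from P.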

Merging of the Overlapping B-Spline Surfaces.
To merge the overlapping surface patches, the two relevant patches must be selected correctly from all the patches. An example with six B-spline surface patches is shown in Figure 7. The goal patches are found by projecting the middle point, with parameters (u_mid, v_mid), of each patch from one original surface onto the patches from the other. Here (u_mid, v_mid) = (0.5 (u_s + u_e), 0.5 (v_s + v_e)), where u_s, u_e and v_s, v_e are the start and end knots of the B-spline surface patch in the u and v directions, respectively. The procedure terminates when the projection of the middle point does not lie on the edge of the objective patch. The two selected surface patches cannot be merged by updating the control points directly, because the numbers of control points of the two patches differ and, moreover, their control points are distributed in different directions.
The two surface patches are merged by operating on generated sample points. Because the sample points of the two surfaces are distributed in different directions, it is difficult to group the combined sample points of the two B-spline surface patches. To solve this problem, one group of sample points is generated from one B-spline surface patch; by projecting these sample points onto the objective patch, the projection points on that patch are taken as the second group. The covariance matrices of the two groups of sample points are propagated from the covariance matrices of the two B-spline surface patches, respectively. The covariance matrix Σ_s of a sample point s is propagated from the covariance matrix Σ_P of the control points as

Σ_s = B Σ_P Bᵀ,

where B is the matrix of basis function products N_{i,p}(u) N_{j,q}(v) arranged to match the stacked vector of the 3(m + 1)(n + 1) control point coordinates (P_{0,0}, ..., P_{m,n}). The merging process of the overlapping parts in the example of Figure 7 is shown in Figure 8, in which the two groups of sample points come from the two different patches. Each sample point and its projection point are merged with the product of two Gaussian PDFs. Given the state vector x_a of a sample point with covariance matrix Σ_a and the state vector x_b of its projection point with covariance matrix Σ_b, the state vector x_c of the combined point and its covariance matrix Σ_c are calculated as

Σ_c = (Σ_a⁻¹ + Σ_b⁻¹)⁻¹,
x_c = Σ_c (Σ_a⁻¹ x_a + Σ_b⁻¹ x_b).

Finally, the merged surface is extracted from all the merged points of the sample points and their projection points using the previously described surface extraction method. All the B-spline surface patches for the example of Figure 6 are shown in Figure 9.
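The product of the two Gaussian PDFs used to merge a sample point with its projection point has the closed form given above; a minimal sketch:

```python
import numpy as np

def fuse_gaussian(xa, Pa, xb, Pb):
    """Fuse two Gaussian estimates of the same point.

    Multiplying the PDFs N(xa, Pa) and N(xb, Pb) gives a Gaussian with
        Pc = (Pa^-1 + Pb^-1)^-1
        xc = Pc (Pa^-1 xa + Pb^-1 xb),
    i.e. an information-weighted average of the two estimates.
    """
    ia, ib = np.linalg.inv(Pa), np.linalg.inv(Pb)
    Pc = np.linalg.inv(ia + ib)
    xc = Pc @ (ia @ np.asarray(xa, float) + ib @ np.asarray(xb, float))
    return xc, Pc
```

Two equally uncertain estimates are averaged, and the fused covariance is halved, which is why the merged surface is less uncertain than either overlapping patch.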

Experiment Results
An experiment with real data obtained using the experimental setup in Figure 3 was performed to validate the methodologies presented in this paper. To represent the environment with continuous B-spline surface patches, a 3D indoor experimental environment without meshes and holes was selected. Initially, the 3D environment map is represented with point clouds. Then the accuracy and effectiveness of a B-spline surface extracted from the point clouds are analyzed. To show the map construction process, surface division and merging are explained with an example using real sensor data. Finally, the entire environment map represented with B-spline surfaces is shown.

2D Map and 3D Point Cloud Map.
A two-dimensional map of the real environment is built as shown in Figure 10, where the robot position is corrected by matching the newly measured line segments with the stored segments. To show the matching process in detail, the numbers of new measurements and stored segments at each robot movement are illustrated in Figure 11. The number of segments newly extracted from each scan is never larger than 14, and the number of stored segments generally increases. However, it decreases in some steps, especially at the beginning, because some discontinuous line segments actually belonging to the same line are extracted initially; with accumulated environment information, the remaining parts of the line are observed and merged with the stored segments. Using the information from the horizontal LRF, the robot position is corrected and the 2D map is simultaneously constructed. Based on the accurate robot pose, the observations from the vertical LRF are transformed into the 3D coordinate system to build the 3D point cloud map, shown in Figures 1 and 12. The discrete points are arranged regularly. Each vertical sensor scan captures the cross section of the environment at the current robot pose, except for part of the ground, due to the limited scan range of the sensor. The measured distances of all the laser points in every vertical scan are continuous, without large fluctuations.

Accuracy and Efficiency of Extracted B-Spline Surface.
As mentioned in Section 4, the entire data set of the 3D point cloud map is divided into two types according to the continuous translation and continuous rotation of the mobile robot. The 3D point cloud map in Figure 12 is separated into seven data sets of these two movement types. The raw sensor data of each data set are extracted as one B-spline surface patch. The detailed process of bicubic B-spline surface extraction from the 7th data set, with continuous robot translation, is shown in Figure 13. There are two types of control points: the control points of the B-spline curves extracted from the raw sensor data in the u direction, and the control points of the extracted B-spline surface (the control points of the B-spline curves in the v direction) obtained from the control points in the u direction. To show the propagation of the uncertainties, the uncertainty ellipsoids of the raw sensor data and of these two types of control points are plotted at different scales according to their covariance matrices. The covariance matrices of these control points are calculated with (33).
In addition, B-spline surfaces of different degrees are extracted from the same data set to show the accuracy and efficiency of the bicubic B-spline surface. The extracted B-spline surfaces with (p = 0, q = 0), (p = 0, q = 3), and (p = 3, q = 3) are shown in Figure 14. The surfaces with q = 3 in the middle and on the right of the figure are much smoother than the surface with q = 0 on the left. Furthermore, the errors between all the raw data points of this data set and the three B-spline surfaces are calculated and shown in Figure 15. The error limits of the three surface extraction processes are the same, 0.01 m in the u direction and 0.05 m in the v direction. This data set contains twenty 2D scans from the vertical LRF. The error between the 541 sensor points of one 2D scan and the surface is plotted as one continuous curve; therefore, each of the three plots of Figure 15 contains twenty curves.
Comparing the error ranges of the three extracted surfaces, the B-spline surface with (p = 0, q = 0) has the largest error range. In four 2D scans, in particular, the average error between the raw data and this surface is about 0.05 m, and the errors of some scans from the 350th to the 480th sensor point are larger than 0.05 m. For the surface with (p = 0, q = 3), the error range over all 541 sensor points is about 0 to 0.04 m. Even though the error range of this surface is more stable than that of the surface with (p = 0, q = 0), it is larger than that of the surface with (p = 3, q = 3). At most sensor points, the error range of the bicubic B-spline surface is about 0 to 0.02 m. However, the errors around the 60th, 260th, 350th, and 470th points are a little larger, because there are four sharp corners at these positions in the real experimental environment. Larger errors exist between the raw sensor data scanned at the sharp corners and the extracted B-spline surface because of its continuity. This is also visible in the uncertainty propagation of Figure 13 (column 6), where the uncertainty ellipsoids of the control points located at the sharp corners are larger than elsewhere. Even so, the maximum error at the corners is about 0.06 m, which is relatively small with respect to the measurement error of the LRF, ±0.03 m.
To assess the accuracy of the extracted B-spline surfaces, the average errors between all the sensor data in this data set and the three surfaces of different degrees are calculated. These average errors are plotted with red dashed lines in Figure 15; they are 0.0325 m, 0.0192 m, and 0.0131 m for the surfaces with (p = 0, q = 0), (p = 0, q = 3), and (p = 3, q = 3), respectively. The bicubic B-spline surface thus has the minimum error among the three extracted surfaces. Furthermore, the numbers of control points in the u and v directions of these three surfaces are compared in Figure 16.

Figure 13: Example of raw sensor data (column 1) and their uncertainty ellipsoids (column 2) under continuous translation of the mobile robot. Extracted B-spline curves (p = 3) in the u direction with the control points (column 3) and the ellipsoids of the uncertainty propagated from the raw sensor data (column 4). Extracted B-spline curves (q = 3) in the v direction with the control points (column 5) and the ellipsoids of the uncertainty propagated from the control points in the u direction (column 6). The three types of uncertainty ellipsoids are plotted at ratios of 50 : 1, 100 : 1, and 100 : 1 with respect to their real sizes.

Conclusion

In this paper, a 3D environment was represented with B-spline surfaces extracted from a point cloud map built with two mutually perpendicular LRFs. To handle the existing overlap between two B-spline surfaces, a B-spline surface division method was proposed to divide each surface into two parts, the overlapping and the non-overlapping part. A merging method was then presented to merge the overlapping surface patches, whose control points are distributed in different directions, by operating on generated sample points and their projection points. Simulations of two overlapping B-spline surfaces were used to show this process in detail. Finally, a real experimental environment was successfully reconstructed with B-spline surface patches, which validates the accuracy, efficiency, and feasibility of the proposed methods.

Figure 4 :
Figure 4: Example of a bicubic B-spline surface (p = q = 3) with control points and the basis functions with two knot vectors (U, V).

Figure 5 :
Figure 5: Two types of extracted B-spline surfaces according to two different types of robot movements, continuous translation and continuous rotation (different scans are plotted with different shapes of points).

Figure 6 :
Figure 6: An example of two B-spline surfaces with overlapping parts.

Figure 7 :
Figure 7: Boundary points of the divided B-spline surface patches (column 1), the sample points of these patches (column 2), and the corresponding extracted B-spline surface patches (column 3). The boundary points and the sample points are plotted using different colors and different shapes of points for each patch.

Figure 8 :
Figure 8: Merged surface (right) of two overlapping patches by merging the sample points (left star points) from one and their projection points (middle circular points) from the other one.

Figure 9 :
Figure 9: B-spline surface patches after surface division and merging of overlapping patches.

Figure 10 :
Figure 10: Real experiment environment (a), corrected position of mobile robot, and the constructed 2D map (b).

Figure 11 :
Figure 11: Number of new extracted line segments and stored line segments in each step.

Figure 12 :
Figure 12: Another view of constructed 3D point cloud map.

Figure 16 shows that the bicubic B-spline surface has the smallest number of control points. Fewer control points mean fewer iterations in the surface extraction process. Moreover, storage space is saved by representing the 541 × 20 raw data points of this surface with only 26 × 5 control points. In summary, the bicubic B-spline surface can accurately represent the real environment with a smaller number of control points.

Figure 19 :
Figure 19: Front view (a) and back view (b) of the whole 3D environment map expressed using bicubic B-spline surface patches.
Figure 2 :
Figure 2: Extracted line segment (black line) from a group of raw sensor data (green points) has two geometrical parameters, expressed as intercept r and angle α in the local coordinate system of the mobile robot and as R and A in the absolute coordinate system. ρ_i and θ_i are the coordinates of a raw sensor point expressed in the polar coordinate system.