Detection and Tracking of Road Barrier Based on Radar and Vision Sensor Fusion

Detection and tracking algorithms for road barriers, including tunnels and guardrails, are proposed to enhance the performance and reliability of driver assistance systems. Although the road barrier is one of the key features for determining a safe drivable area, it may be recognized incorrectly due to performance degradation of commercial sensors such as radar and monocular camera. Two frequent cases among many challenging problems with commercial sensors are considered. The first case is that only a few radar tracks of the road barrier are detected, depending on the material type of the road barrier. The second is the inaccuracy of the relative lateral position measured by radar, which results in a large variance of the distance between the vehicle and the road barrier. To overcome these problems, detection and estimation algorithms for tracks corresponding to the road barrier are proposed. Then, a tracking algorithm based on a probabilistic data association filter (PDAF) is used to reduce the variation of the lateral distance between the vehicle and the road barrier. Finally, the proposed algorithms are validated on field test data, and their performance is compared with that of the road barrier measured by lidar.


Introduction
Driver assistance systems (DAS) such as adaptive cruise control (ACC), forward collision warning, and lane departure warning systems have been commercialized [1]. They have evolved into more intelligent DAS such as automatic emergency braking (AEB), lane change assistance (LCA), and lane keeping assistance (LKA) systems [2]. As prototypes of highly automated vehicles have recently been introduced in the media, reliability of performance has become more important. That is, once a false decision is made by the computer or vehicle, the driver loses trust in the system, which may result in the system not being used at all. The reliability of the decision mainly depends on accurate detection and recognition of multiple obstacles and vehicles. For instance, Honda Motor Company had to recall certain model year 2014-2015 Acura vehicles with AEB in the United States. The reason was that the collision mitigation braking system (CMBS) could inappropriately interpret certain roadside infrastructure, such as iron fences or metal guardrails, as obstacles and unexpectedly apply the brakes [3]. Furthermore, NHTSA in the United States investigated complaints alleging unexpected braking incidents of the autonomous braking system in Jeep Grand Cherokee vehicles with no visible objects on the road [3].
Detection and tracking algorithms for road barriers, which may be called road borders or boundaries in the literature, depend on the sensor configuration and the models used for the road barrier. Most sensor configurations are a single sensor or a combination of radar [1, 2, 4, 5], camera [6], and lidar (or laser scanners) [7, 8] that recognize the drivable area via reflections from guardrails and curbs. Extended objects such as the road and the road barrier are described by clothoid, circle, and elliptical models, and their tracking algorithms are based on the Kalman filter, the probabilistic data association filter (PDAF), and the interacting multiple model (IMM) PDAF [1, 2, 4, 9]. In this study, it is assumed that only a front radar and a monocular camera are used for detection and tracking of the road barrier. Although additional sensors could be implemented for better performance, or a lidar could be used as in the literature, the sensor configuration is limited from the viewpoint of commercialization in the near future. For instance, sensor cost, robustness to weather, installation inside the bumper, and popularity in the automotive market are considered in choosing the sensor configuration. The contribution of this paper is to enhance the reliability of road border detection when only a few tracks are generated by the radar with respect to the road border, and to improve the lateral position accuracy of the road border. Since the performance of a commercial radar relies on the material and geometry of tunnels and guardrails, different numbers of tracks are generated depending on the driving environment. Thus, estimation of stationary tracks outside the detection range is proposed for better road barrier detection. Furthermore, a tracking algorithm for the road barrier based on a probabilistic data association filter (PDAF) is proposed to reduce the variation of the lateral offset, that is, the lateral distance between the ego vehicle and the road barrier.

Problem Statement
Two challenging problems that arise when commercial radars for driver assistance systems such as ACC and AEB are used are considered in this paper. A normal detection scenario is shown in Figure 1(a) and the corresponding tracks are shown in Figure 1(b). Two tracks of front vehicles are marked as squares, and the others, marked as ×, correspond to the left guardrail in Figure 1(a). However, as shown in Figures 1(c) and 1(d), only a few radar tracks are generated for the road barrier. The detection performance may rely on the material type and/or shape of the road barrier. This problem may lead to difficulty in determining whether there is a road barrier on either the left or the right side.
Next, radar tracks of the road barrier are compared with the point cloud (magenta) measured by a front lidar sensor in Figures 2(b) and 2(d). As shown in Figures 2(a) and 2(c), the lateral position measured by the radar can deviate considerably from the lidar measurements in the same driving scenario. This results in a large variance of the lateral position of the road barrier when only the radar is used to recognize it.

Road Barrier Detection
The proposed road barrier detection based on sensor fusion of radar and monocular camera consists of four steps: selection of the region of interest (ROI), estimation, clustering, and representation. First, the selection of the ROI is roughly described in Figure 3. That is, based on the assumption that the road barrier is placed on either the left or the right side, zone B is defined with respect to a body-fixed coordinate. The road in Figure 3 is modeled as [4]

y_road(x) = ψ·x + (c/2)·x²,

where x and y are the longitudinal and lateral positions, respectively, in the body-fixed coordinate of Figure 4, c is the curvature, and ψ is the angle between the longitudinal axis of the vehicle and the road lane obtained from the monocular camera, as shown in Figure 4. Zone B is then written as the band

|y − y_road(x)| ≤ d₁,

where d₁ is a lateral offset which determines the width of the zone. Next, based on the detection range of the radar, zone C in Figure 3 is defined for estimation of stationary tracks. Before the estimation, a motion attribute determining whether a track in zone B is stationary or dynamic is decided from the range rate compensated by the ego motion, where V is the vehicle velocity, ψ̇ is the yaw rate, Ṙ is the range rate, and the subscript i stands for the ith radar track; a track whose ego-motion-compensated range rate is close to zero is classified as stationary. It is remarked that there is uncertainty in the detection range of the radar, so it is essential to divide the regions into those to track and those to estimate. Once a stationary track in zone B enters zone C, it is estimated based on a discrete Kalman filter with the time update [10]

x̂ₖ⁻ = A·x̂ₖ₋₁,  Pₖ⁻ = A·Pₖ₋₁·Aᵀ + Q,

where the state is x = [x Vₓ y Vᵧ]ᵀ, x and y denote the relative longitudinal and lateral position, respectively, and Vₓ and Vᵧ are the relative longitudinal and lateral velocity.
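The stationary/dynamic classification above can be sketched as follows. This is a simplified version: the tolerance `eps` is a hypothetical value, and the yaw-rate term of the ego-motion compensation is ignored for brevity, so it is not the paper's exact condition.

```python
import math

def is_stationary(range_rate, azimuth, ego_speed, eps=0.5):
    """Classify a radar track as stationary.

    For a stationary object, the measured range rate is approximately the
    negative projection of the ego velocity onto the line of sight, so
    R_dot + V*cos(azimuth) should be near zero. eps (m/s) is an assumed
    tolerance; the yaw-rate contribution is omitted in this sketch.
    """
    return abs(range_rate + ego_speed * math.cos(azimuth)) < eps
```

For example, a track straight ahead (azimuth 0) seen with a range rate of −V is classified as stationary, while a slower-closing track is dynamic.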
The measurement update equations are given by

Kₖ = Pₖ⁻·Hᵀ·(H·Pₖ⁻·Hᵀ + R)⁻¹,
x̂ₖ = x̂ₖ⁻ + Kₖ·(zₖ − H·x̂ₖ⁻),
Pₖ = (I − Kₖ·H)·Pₖ⁻.

Since a constant velocity (CV) model is considered, the system matrix A and the measurement matrix H are written as [10, 11]

A = [1 T 0 0; 0 1 0 0; 0 0 1 T; 0 0 0 1],  H = [1 0 0 0; 0 0 1 0],

where T is the sampling time.

Figure 4: Definitions of lateral offset and projection point.
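The discrete Kalman filter with the CV model above can be sketched as follows; the sampling time `T` and the noise covariances `Q` and `R` are illustrative values, not parameters from the paper.

```python
import numpy as np

T = 0.05  # sampling time (s); illustrative value

# Constant-velocity model, state s = [x, Vx, y, Vy]
A = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)  # positions are measured
Q = 0.01 * np.eye(4)   # process noise covariance (assumed)
R = 0.25 * np.eye(2)   # measurement noise covariance (assumed)

def kf_predict(s, P):
    """Time update: propagate state and error covariance."""
    return A @ s, A @ P @ A.T + Q

def kf_update(s_pred, P_pred, z):
    """Measurement update with measurement z = [x, y]."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    s = s_pred + K @ (z - H @ s_pred)
    P = (np.eye(4) - K @ H) @ P_pred
    return s, P
```

A track at 10 m approaching at 20 m/s is predicted to be at 9 m after one sampling period, which matches the CV model by construction.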
In the third step, clustering, in order to group the tracks corresponding to the road barrier among the stationary tracks, including the estimated tracks, projection points onto the road model are calculated (see the circles in Figure 4), where the subscript j stands for the jth radar track. After that, the projection points are classified as left or right if they are positioned in zone B. If the distance between the jth and (j−1)th projection points is less than d₂, the number of grouped points N is increased; if the distance between the (j−1)th and jth projection points is greater than d₂, a breakpoint is generated at the (j−1)th point. Finally, if there are at least two projection points between two breakpoints, that is, N[m−1] ≥ 1, all the jth tracks satisfying the corresponding inequality are regarded as road barrier [2], for j = 1, …, N and m = 1, …, M.
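The gap-based breakpoint clustering above can be sketched as below; the threshold `d2` and the one-dimensional treatment of the projection points (distances measured along the road model) are assumptions for illustration.

```python
def cluster_by_gap(proj_x, d2=10.0):
    """Group sorted projection points into clusters.

    A breakpoint is inserted wherever two consecutive projection points
    are more than d2 apart; points between two breakpoints form one
    candidate road barrier cluster. d2 (m) is an illustrative value.
    """
    pts = sorted(proj_x)
    clusters, current = [], [pts[0]]
    for p in pts[1:]:
        if p - current[-1] <= d2:
            current.append(p)          # same group
        else:
            clusters.append(current)   # breakpoint between current[-1] and p
            current = [p]
    clusters.append(current)
    return clusters
```

For instance, projection points at 0, 5, 8, 40, and 46 m split into two clusters because of the 32 m gap.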

Considering the y-coordinates as erroneous in comparison with the x-coordinates, the clothoid model, which can be approximated by a second-order polynomial, is calculated by a least-squares fit to the clustered tracks in (10). It is noted that the calculation of the clothoid model corresponds to the creation of the road barrier. The procedure to detect the road barrier is shown in Figure 5. First, six tracks among the seven coming from the radar are classified as stationary objects. Then, they are projected onto the x-axis along the road model and the corresponding projection points are shown as circles in Figure 5(a). Next, based on the distance between two projection points in (7), two breakpoints are determined as shown in Figure 5(b), and the six tracks are thus classified as road barrier based on (9). Then the clothoid model in (10) is determined and shown as a solid line in Figure 5(c). After a few seconds, the closest stationary track in zone B enters zone C (also refer to Figure 3); it leaves the detection range and is estimated by the discrete Kalman filter (see the diamond mark in Figure 5(d)).
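The least-squares fit of the clothoid-approximating second-order polynomial can be sketched as follows; the coefficient parameterization y = c0 + c1·x + (c2/2)·x² is an assumed form consistent with the clothoid approximation in the text.

```python
import numpy as np

def fit_clothoid(x, y):
    """Least-squares fit of y = c0 + c1*x + (c2/2)*x**2 to the clustered
    track positions. x is trusted more than y, so only y is treated as
    the noisy observation. Returns the coefficients [c0, c1, c2]."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([np.ones_like(x), x, 0.5 * x ** 2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef
```

Fitting noiseless samples of a known polynomial recovers its coefficients, which is a quick sanity check of the design matrix.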

Tracking Road Barrier
The tracking of the road barrier is composed of creation, maintenance, and deletion. The creation step uses the result of road barrier detection in (10). The maintenance step tracks the lateral offset of the road barrier based on the PDAF, so the model in (10) is rewritten with its offset replaced by ŷ_l, where ŷ_l is the value tracked via the PDAF and will be derived later.
While the lane information detected by a monocular camera is useful for modeling roads, its performance depends on light and road conditions. For example, if the lane mark is worn out or covered by soil or snow, false detection of the lane mark may result. Thus, considering the condition of the lane marks, whether the measurement by the monocular camera or the estimated value is used is decided as follows [12], based on the confidence of the right or left lane, the yaw rate ψ̇, and the vehicle speed V.
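The measurement-source selection rule can be sketched as below. The confidence scale, the threshold `conf_min`, and the curvature-rate bound are hypothetical, since the exact condition of [12] is not reproduced in the text.

```python
def lane_offset_source(camera_conf, camera_offset, estimated_offset,
                       conf_min=2, yaw_rate=0.0, speed=20.0):
    """Choose between the camera lane measurement and the internal estimate.

    The camera value is used only when the lane confidence is high enough
    and the ego motion is mild (small yaw rate relative to speed);
    otherwise the estimated value is kept. All thresholds are assumed.
    """
    if camera_conf >= conf_min and abs(yaw_rate) / max(speed, 1e-3) < 0.01:
        return camera_offset
    return estimated_offset
```

With a worn-out lane mark (low confidence) or a strong yaw motion, the rule falls back to the estimate.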
The PDAF is based on the discrete Kalman filter, and the state variable is defined as x = [y_l V_l]ᵀ, where y_l and V_l are the relative lateral position and velocity, respectively. The time update of the state variable and error covariance matrix takes the same form as in the detection step. Since a CV model is used, the system and measurement matrices are

A = [1 T; 0 1],  H = [1 0].

If the number of projection points between two breakpoints is n, the measurement set is defined as these n clustered projection points.
In the gating region, it has to be decided which measurements (i.e., clustered projection points) are associated with the existing track. The corresponding residual νᵢ and residual covariance matrix S are calculated as

νᵢ = zᵢ − H·x̂⁻,  S = H·P⁻·Hᵀ + R.

All measurements are checked for whether the normalized residual satisfies the thresholding condition νᵢᵀ·S⁻¹·νᵢ ≤ γ.
After that, the valid measurements in the gating region are combined into a single residual, weighted according to the likelihood values of the corresponding measurements. Finally, the measurement update of the state variable calculates the estimated track using the combined residual [13]:

x̂ₖ = x̂ₖ⁻ + Kₖ·νₖ,  νₖ = Σᵢ βᵢ·νᵢ,

where βᵢ is the association probability of the ith valid measurement.
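The gating and residual combination of the PDAF can be sketched as follows. The gate value and the simplified clutter term `b` are illustrative; the full parametric PDAF of [13] uses a clutter density and normalization constants not reproduced here.

```python
import numpy as np

def pdaf_combine(z_list, z_pred, S, gate=9.21, p_d=0.9, p_g=0.99):
    """Gate measurements and combine the survivors into one residual.

    Each measurement is gated with the normalized residual nu' S^-1 nu,
    survivors are weighted by their Gaussian likelihoods (association
    probabilities beta_i), and the weighted sum is returned together
    with beta_0, the probability that none of them is correct.
    gate, p_d (detection prob.), p_g (gate prob.) are assumed values."""
    S = np.atleast_2d(S)
    S_inv = np.linalg.inv(S)
    residuals, likelihoods = [], []
    for z in z_list:
        nu = np.atleast_1d(z - z_pred)
        if nu @ S_inv @ nu <= gate:                  # gating test
            residuals.append(nu)
            likelihoods.append(float(np.exp(-0.5 * nu @ S_inv @ nu)))
    if not residuals:
        return np.zeros(S.shape[0]), 1.0
    b = 2.0 * (1.0 - p_d * p_g) / p_d                # simplified clutter term
    norm = b + sum(likelihoods)
    betas = [l / norm for l in likelihoods]
    combined = sum(b_i * nu for b_i, nu in zip(betas, residuals))
    return combined, b / norm
```

A far-away spurious projection point is rejected by the gate, so it contributes nothing to the combined residual.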

Experimental Validation
As listed in Table 1, both a radar and a monocular camera, which are commercially available, are installed on a test vehicle, and a front lidar is additionally used for performance comparison. Although a large amount of driving data should be used for validation, a preliminary evaluation of the performance is conducted with 13 minutes of driving data including the driving scenarios in Figures 1 and 2. Most of the driving data were recorded on a highway. It is also noted that driving data corresponding to tollgates, exit zones, and areas without any tunnel or guardrail are not considered for performance evaluation.
Based on the detection characteristics of the radar, five environment scenarios on the highway are considered. The first and second scenarios are shown in Figure 1 and are called concrete+steel and concrete guardrail, respectively, in this paper. It is interesting to remark that different detection characteristics for the concrete guardrail are shown in Figures 1(c) and 2. In addition, steel guardrail, curb, and tunnel are included for validation, as shown in Figure 6. Most road barriers on highways in Korea can be described by one of these five environment scenarios.
The first case, shown in Figures 1(c) and 1(d), is revisited; that is, few tracks for the road barrier are generated by the radar. As shown in Figure 7, only two tracks with respect to the left guardrail are generated at t₁ = 123.7 sec. After 0.8 sec, the nearest stationary track stays out of the detection range of the radar, as shown in Figure 7.

Two driving scenarios in which the lateral position of radar tracks may be inaccurate are considered when the ego vehicle drives along a tunnel or a guardrail, as shown in Figures 8(a) and 8(c). The proposed tracking algorithm of road barrier is compared with the lidar and with the Kalman filter based approach in [4]. It is shown in Figures 8(b) and 8(d) that the performance of the proposed detection and tracking of road barrier is closer to that of the lidar. Furthermore, four different approaches are compared with respect to the lateral offset in Figure 9. Their relative quantitative performances with respect to the lidar are evaluated in terms of the root mean square error (RMSE) of the lateral offset and the recognition accuracy, and are summarized with respect to the environment scenarios in Table 2.
Two performance measures are used for validation depending on the environment scenarios. The first one is the perception of road barrier (RB), defined as the ratio of the detection period of RB by the proposed algorithm to the detection period labeled manually from images. To describe the tracking performance, the RMSE of the lateral offset between the sensor fusion of radar and vision and that of lidar and vision is used as the second performance measure. The performance comparison is summarized in Table 2. It is validated that the proposed algorithm improves the perception of road barrier, especially when few tracks are generated by the radar, as in the cases of concrete guardrail, tunnel, and curb on the highway. Furthermore, it is shown that the tracking performance of the proposed algorithm is more robust than the others across the different environment scenarios.
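The two performance measures above can be sketched as:

```python
import numpy as np

def rmse(offset_fusion, offset_lidar):
    """RMSE of lateral offset between the fusion output and the lidar
    reference over synchronized frames."""
    e = np.asarray(offset_fusion) - np.asarray(offset_lidar)
    return float(np.sqrt(np.mean(e ** 2)))

def perception_ratio(detected_frames, labeled_frames):
    """Ratio of frames where the algorithm reports a road barrier to the
    frames manually labeled (from images) as containing one. Both inputs
    are 0/1 flags per frame."""
    return sum(detected_frames) / max(sum(labeled_frames), 1)
```

Here both sequences are assumed to be time-aligned; the frame flags and alignment scheme are illustrative, not the paper's exact evaluation pipeline.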

Conclusion
The detection and tracking algorithms are proposed to overcome two problems: one is that few tracks to the road barrier are generated by the radar, and the other is that the inaccuracy of the lateral position becomes worse in a short time and happens frequently. Both the estimation and clustering steps contribute to overcoming these problems.

Figure 1: Detection characteristics to road barrier by radar.


Figure 2: Road barrier detection by radar and lidar.
Figure 3: Zones for selection of the region of interest.

Figure 5: Procedure of road barrier detection.

Figure 6: Additional environment scenarios for detection and tracking of road barrier.

Table 1: Specification of environment sensors.