Vehicle Detection Based on Multifeature Extraction and Recognition Adopting RBF Neural Network on ADAS System

A region of interest (ROI) that may contain a vehicle is extracted based on composite features of the vehicle's bottom shadow and taillights, by setting a gray threshold on the shadow region and a series of constraints on the taillights. To confirm the existence of a target vehicle ahead of the Advanced Driver Assistance System (ADAS) within the extracted ROI, a Radial Basis Function (RBF) neural network recognizer is constructed from a series of parameters describing the vehicle's edge and region features. Using a large set of collected images, the ROI extraction based on the composite features of the bottom shadow and taillights is verified to be effective. The recognizer is trained on a database of positive and negative vehicle samples and quickly achieves network convergence, after which vehicles in the region of interest can be identified reliably. Test results show that the detection method based on multifeature extraction and the recognition method based on the RBF network have stable performance and high recognition accuracy.


Introduction
Recently, Advanced Driver Assistance Systems (ADAS) have been widely used to improve driving safety, examples being the Active Lane Keeping Assist System (ALKAS), Front Collision Warning (FCW), Autonomous Emergency Braking (AEB), and the Intelligent Adaptive Cruise Control System (IACCS). Whatever the particular assistance system, its core technology lies in applying radar or machine vision to quickly and accurately extract information about vehicles or obstacles ahead, so that the driver can be warned of collision danger in time or the vehicle can be controlled automatically to avoid a collision [1][2][3][4][5].
With the increasing maturity and falling cost of sensors and image processing, researchers at home and abroad have carried out many studies on detecting or identifying forward targets based on machine vision. Reference [6] performed vehicle detection with taillight feature information and realized tracking control of the preceding vehicle. However, when single taillight information is used for vehicle detection, missed or failed detections may occur. Reference [7] studied a target-vehicle recognition method for the FCW/AEB system and distance estimation between the preceding vehicle and the host car using a mono-camera sensor. References [8][9][10][11] adopted the AdaBoost method and support vector machine classifiers to complete vehicle detection and mark an image window on the target vehicle. Reference [12] put forward a self-adjusting sliding-window strategy to improve detection performance for the target vehicle in the image. With the rapid development of deep learning theory, object detection systems such as SPP-net, R-CNN, Fast R-CNN, and Faster R-CNN have sprung up [13][14][15]. References [16][17][18][19] present faster region-based convolutional neural network methods to realize vehicle recognition. Faster R-CNN combines the region proposal network (RPN) and Fast R-CNN into a unified deep CNN framework, yielding greater detection precision than the other methods.
Our goal is to propose a method for extracting a region of interest that may contain vehicles based on the composite features of the vehicle's shadow and taillights. In order to confirm the existence of a vehicle in the extracted region of interest, an RBF neural network recognizer is constructed from a series of parameters describing the vehicle's edge and region features: eight discrete cosine descriptors, six independent invariant moments, and five region feature descriptors. The recognizer can not only identify vehicles but also distinguish vehicles from nonvehicles such as e-bikes. The remainder of this paper is organized as follows. In Section 2, a vehicle detection method based on the composite features of vehicle shadow and taillights is described. Section 3 introduces vehicle recognition using the RBF neural network, which consists of two parts: vehicle feature extraction and the design of the recognizer. In Section 4, experimental results on vehicle detection and recognition with the RBF network are presented. Conclusions and future work are discussed in Section 5. The overall architecture of the vehicle detection process is shown in Figure 1.

Vehicle Detection Based on Multifeature Extraction
In an advanced driver assistance system, vehicle detection is a critical step. It involves two main parts: vehicle feature extraction and determination of the vehicle's region of interest. The image sequence should be preprocessed before the vehicle features are extracted, as shown in Figure 2; the preprocessing steps mainly include image graying, image filtering, edge detection based on the Canny operator, and morphological operations on the image.

Vehicle Taillight Feature Extraction.
Figure 1 represents the structure of a taillight pair, which consists of the left taillight L_l and the right taillight L_r. There are usually many obvious edge features in the vehicle area, such as the rear windscreen, the tail bumper, the taillights, and the license plate. These edge features are very important for locating the vehicle region. After Canny edge detection, the taillight information is contained in the edge image. Next, the profiles of taillight pairs are extracted from the edge image using the morphological opening and closing operations. As shown in Figure 2, the profiles of vehicle taillight pairs can be selected by the constraint conditions in formulae (1)-(3). The distance W_lr between the left taillight L_l and the right taillight L_r is restricted to a range of minimum and maximum pixel values:

W_min ≤ W_lr ≤ W_max.    (1)

The height difference of the taillight pair is limited within a threshold:

|H_{L_l} − H_{L_r}| ≤ H_thr,    (2)

where H_{L_l}, H_{L_r}, and H_thr represent the height of the left taillight L_l, the height of the right taillight L_r, and the height threshold of the taillight pair image, respectively. The area ratio between the left taillight L_l and the right taillight L_r is limited to a range:

a_min ≤ S_{L_l} / S_{L_r} ≤ a_max,    (3)

where a_min and a_max represent the minimum and maximum values of the area ratio, respectively. According to these constraint conditions, the center of mass of each taillight area is found, and the vehicle width is determined from the distance between the centers of mass together with the taillight width. The vehicle height in the image is then determined from the obtained vehicle width combined with the structural proportions of the vehicle. An approximate vehicle detection region can thus be marked as a colored rectangular box.
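The three pairing constraints above can be sketched as a simple predicate; this is a minimal illustration, and the threshold values and the blob representation (centroid plus area) are assumptions, not values from the paper.

```python
def is_valid_taillight_pair(left, right,
                            w_min=20, w_max=300,   # pixel bounds for W_lr
                            h_thr=10,              # height threshold H_thr
                            a_min=0.5, a_max=2.0): # area-ratio bounds
    """Check the pairing constraints of formulae (1)-(3) for two
    candidate taillight blobs, each a dict with centroid (cx, cy)
    and blob area (illustrative representation)."""
    # (1) horizontal distance between the two taillight centroids
    w_lr = abs(right["cx"] - left["cx"])
    if not (w_min <= w_lr <= w_max):
        return False
    # (2) height difference limited by the threshold H_thr
    if abs(left["cy"] - right["cy"]) > h_thr:
        return False
    # (3) area ratio confined to [a_min, a_max]
    ratio = left["area"] / right["area"]
    return a_min <= ratio <= a_max
```

A pair passing all three tests becomes a candidate for the center-of-mass and width computation described above.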

Vehicle Feature Extraction Based on Shadow Area.
A vehicle shadow area usually consists of darker parts such as the left and right tires, the rear bumper, and the underbody of the vehicle. In general, the gray values of the vehicle shadow area lie in the lowest range of the whole road surface image. For the image to be detected, the mean G_μ and mean square error G_σ of the road pixel gray values are computed as follows (see Figures 3 and 4):

G_μ = (1/(MN)) Σ_{x=1}^{M} Σ_{y=1}^{N} f(x, y),    (4)

G_σ = sqrt( (1/(MN)) Σ_{x=1}^{M} Σ_{y=1}^{N} (f(x, y) − G_μ)² ),    (5)

where M and N represent the pixel counts along the length and width of the image, respectively, and f(x, y) represents the gray value of the pixel in row x and column y. The threshold of the road gray value is then defined in terms of G_μ and G_σ (formula (6)). Background noise or discontinuous areas may exist in the binarized image of the segmented vehicle shadow region.
Therefore, the background noise needs to be separated from the vehicle shadow area as much as possible, and the morphological opening and closing operations are used to obtain the vehicle shadow area (see Figure 5). Thus, an approximate vehicle detection region can be marked with a colored rectangular box, as shown in Figure 6. However, this is only a region of interest in which a vehicle may exist; the actual presence of a vehicle in this region still needs to be confirmed by subsequent steps.
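The shadow segmentation step can be sketched as follows; this is a minimal numpy sketch, and the exact form of the threshold (mean minus a multiple of the standard deviation, with factor `k`) is an assumption, since the paper's formula (6) is not reproduced here. Morphological cleanup is omitted.

```python
import numpy as np

def shadow_mask(gray, k=3.0):
    """Segment candidate under-vehicle shadow pixels.

    `gray` is an MxN uint8 road image. The mean G_mu and mean square
    error G_sigma follow formulae (4)-(5); the threshold form
    G_mu - k*G_sigma is an assumed instance of formula (6).
    """
    g_mu = gray.mean()                      # mean gray value G_mu
    g_sigma = gray.std()                    # mean square error G_sigma
    g_thr = g_mu - k * g_sigma              # road gray threshold (assumed form)
    return (gray < g_thr).astype(np.uint8)  # binary shadow image
```

In practice the binary mask would then be cleaned with morphological opening and closing before the rectangular box is fitted.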

Integration of Vehicle Detection Areas Based on Taillight and Shadow Features.
According to Section 2.2, the shadow area at the bottom of the target vehicle is obtained from the shadow feature. Then, it is necessary to calibrate the coordinate points of the vehicle's region in the image. The center point of the lower edge of the shadow area is taken as the base reference point, and the width of the marker box is determined from the width of the shadow (or, alternatively, the width of the taillight pair). Based on the proportional relationship between the actual structural size of the vehicle and its image size,

f_h1 = α_1 · f_w1 · (V_h / V_w),    (7)

the vehicle height coordinates in the image are obtained, and the rectangular marked area X_S of the target vehicle follows, where f_w1 and f_h1 represent the marking width and height in the image based on shadow feature extraction, respectively, V_w and V_h represent the actual vehicle width and height, respectively, and α_1 is a proportional coefficient.
In a similar way, based on the taillight pair information, the centroid coordinates of the taillights are obtained according to Section 2.1. Selecting the midpoint between the two centers of mass as the reference point, the width of the marker box is determined from the taillight spacing. Combined with the proportional relationship between the actual structural size of the vehicle and its image size,

f_h2 = α_2 · f_w2 · (V_h / V_w),    (8)

the vehicle height coordinates in the image are obtained, and the rectangular marked area X_T of the target vehicle follows, where f_w2 and f_h2 represent the marking width and height in the image based on taillight feature extraction, respectively, and α_2 is a proportional coefficient. The symbols V_w and V_h have the same meaning as above.
From the shadow feature and the taillight pair feature, two rectangular marked areas of the target vehicle are obtained, and they are integrated according to the actual situation to determine the unique target detection area X_w:

X_w = k_s · X_S + k_T · X_T,    (9)

where k_s and k_T represent the weight coefficients of the shadow-based and taillight-based marker areas, respectively.
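The weighted integration of the two marker boxes can be sketched as a per-coordinate linear combination; this is an assumed reading of formula (9), with boxes represented as (x, y, w, h) tuples for illustration.

```python
def fuse_detection_areas(box_s, box_t, k_s=0.5, k_t=0.5):
    """Integrate the shadow-based box X_S and the taillight-based
    box X_T into one detection area X_w, as in formula (9).

    Boxes are (x, y, w, h) tuples; the coordinate-wise weighted sum
    with k_s + k_t = 1 is an assumed interpretation.
    """
    return tuple(k_s * s + k_t * t for s, t in zip(box_s, box_t))
```

Setting k_s = 0 and k_t = 1 recovers the taillight-only box, which matches the degenerate case the paper describes for rain, snow, and night scenes where the bottom shadow is missing.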

Vehicle Characteristic Parameter Selection.
In order to reliably recognize the vehicle target, it is necessary to extract distinctive vehicle features from the detected image region. At present, target shape recognition methods fall into two categories: edge feature extraction, based on the target's edge shape, and region feature extraction, based on the area the object covers. Here, a set of hybrid feature parameters is chosen to describe the vehicle target: the discrete cosine descriptors, the independent invariant moments, and the region descriptors.

Discrete Cosine Descriptors.
Because the discrete cosine transform parameters are invariant to an object's translation, rotation, and scaling, they are insensitive to the choice of initial point on the shape outline of an object image. A complex sequence on the shape outline of an object image can be obtained by extracting the image edge:

f(m) = x[m] + j·y[m],    (10)

where m indexes the feature points of the closed edge curve obtained by edge extraction after image segmentation, and f(m) denotes a one-dimensional complex sequence. The discrete cosine descriptors are extracted by transforming formula (10):

F(u) = Σ_{m=0}^{N−1} f(m) · cos[(2m + 1)uπ / (2N)],  u = 0, 1, ..., N − 1,    (11)

where N represents the number of feature points of the closed edge curve obtained by edge extraction after image segmentation.
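The descriptor computation can be sketched as below; the DCT-II form and the normalisation of the magnitudes by the first coefficient (for scale invariance) are assumptions about details the text does not fix.

```python
import numpy as np

def discrete_cosine_descriptors(xs, ys, n_desc=8):
    """First n_desc discrete cosine descriptors of a closed edge curve.

    Following formulae (10)-(11), boundary points are read as a complex
    sequence f(m) = x[m] + j*y[m] and a DCT-II is applied; normalising
    the magnitudes by the first coefficient is an assumed choice.
    """
    f = np.asarray(xs, dtype=float) + 1j * np.asarray(ys, dtype=float)
    n = len(f)
    m = np.arange(n)
    # DCT-II basis, one row per descriptor u = 0 .. n_desc-1
    basis = np.cos(np.pi * np.outer(np.arange(n_desc), 2 * m + 1) / (2 * n))
    coeffs = basis @ f                 # complex DCT coefficients
    mags = np.abs(coeffs)
    return mags / mags[0]              # normalised descriptor vector
```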

Independent Invariant Moments.
For a binary image f(x, y), the (p + q)-order geometric moment is

m_pq = Σ_x Σ_y x^p · y^q · f(x, y).    (12)

The central moment of the target image region, μ_pq, is described as

μ_pq = Σ_x Σ_y (x − x̄)^p · (y − ȳ)^q · f(x, y),    (13)

where x̄ = m_10/m_00 and ȳ = m_01/m_00. Here (x̄, ȳ) represents the coordinates of the center of the target image region, and m_00, m_10, and m_01 are the zero-order and first-order geometric moments of the region where the binary image is located.
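The central moments of formulae (12)-(13) can be sketched directly from the nonzero pixels of a binary image; the row/column-to-(x, y) convention used here is an assumption for illustration.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary image, per formulae (12)-(13).

    The centroid is (x_bar, y_bar) = (m10/m00, m01/m00); columns are
    treated as x and rows as y (an assumed convention).
    """
    ys, xs = np.nonzero(img)                  # rows -> y, columns -> x
    x_bar, y_bar = xs.mean(), ys.mean()       # m10/m00 and m01/m00
    return ((xs - x_bar) ** p * (ys - y_bar) ** q * 1.0).sum()
```

Scale- and rotation-independent invariant moments (e.g. Hu-style combinations) would then be built from suitably normalised μ_pq values.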

Region Descriptors.
Five region descriptors, composed of the eccentricity of the target image region, the ratio of the short axis to the long axis of the region, the area of the region, the perimeter of the region, and the compactness of the region, are chosen in order to better recognize vehicle targets. The compactness of an image region can be represented as 4πS/L², where S denotes the area of the target image region and L its perimeter.
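The five region descriptors can be sketched as follows; the axis lengths are assumed to come from an upstream ellipse fit of the region, and the eccentricity formula from the axis ratio is a standard choice, not spelled out in the text.

```python
import math

def region_descriptors(area, perimeter, major_axis, minor_axis):
    """The five region descriptors used as recognizer inputs:
    eccentricity, short/long axis ratio, area, perimeter, and the
    compactness 4*pi*S/L^2 (equal to 1.0 for a perfect disc)."""
    axis_ratio = minor_axis / major_axis              # short/long axis ratio
    eccentricity = math.sqrt(1.0 - axis_ratio ** 2)   # of the fitted ellipse
    compact = 4.0 * math.pi * area / perimeter ** 2   # compactness parameter
    return [eccentricity, axis_ratio, area, perimeter, compact]
```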

RBF Neural Network Design for Vehicle Recognition.
The RBF neural network method is a learning method that maps the input vector into a high-dimensional space and can model linear or nonlinear unknown systems [20,21]. The network can realize self-learning quickly because it not only generalizes well, with rapid convergence of the tracking error [22][23][24], but also avoids the tedious computation of the BP algorithm. Figure 7 shows the structure of the RBF neural network, which is composed of an input layer, a hidden layer, and an output layer.

RBF Network Structure Design.
Eight discrete cosine descriptors, six independent invariant moments, and five region descriptors are chosen as the input vector, so the input layer has 19 nodes. The output layer consists of two nodes, indicating whether or not a vehicle is identified. In Figure 7, ω_ih and ω_hn denote the weights between the input layer and the hidden layer and between the hidden layer and the output layer, respectively.

Algorithm of RBF Neural Network for Self-Organizing Center Choice.
The algorithm of the RBF neural network with self-organizing center choice consists of two stages. The first stage is an unsupervised self-organizing process that solves for the hidden-layer basis functions; the second stage is a supervised process that solves for the weights between the hidden layer and the output layer.
The Gaussian transfer function is chosen as

R_i(x_p) = exp(−‖x_p − c_i‖² / (2σ²)),    (14)

where ‖x_p − c_i‖ is the Euclidean norm, c_i denotes the center of the ith Gaussian function, and σ represents the variance of the Gaussian function. The output of the RBF neural network is

y_j = Σ_{i=1}^{h} ω_ij · R_i(x_p),  j = 1, 2, ..., n,    (15)

where x_p = (x_p1, x_p2, ..., x_pm)^T denotes the pth input sample (p = 1, 2, ..., P), P represents the total number of samples, ω_ij (i = 1, 2, ..., h; j = 1, 2, ..., n) denotes the weight between the ith hidden node and the jth output node, and y_j represents the actual output of the jth output node of the RBF neural network.
Let d be the expected output value of a sample; the variance of the basis function can then be expressed in terms of the deviation between d and the network output (formula (16)). The specific steps of the RBF neural network learning algorithm with self-organizing center choice are as follows.
(1) Determine the centers of the basis functions with the K-means clustering method. (2) Solve for the variance σ_i of the basis functions. Since the basis function of the RBF neural network is the Gaussian function, the variance can be solved as

σ_i = c_max / sqrt(2h),    (17)

where c_max is the maximum distance between the selected centers and h is the number of hidden nodes. (3) Calculate the weights between the hidden layer and the output layer. The connection weights of the neurons between the hidden layer and the output layer can be calculated directly by the least squares method,

ω = (Φ^T Φ)^{−1} Φ^T d,    (18)

where Φ is the matrix of hidden-layer outputs over the training samples and d the matrix of expected outputs.
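The three steps above can be sketched in numpy; this is a minimal illustration, not the paper's MATLAB implementation, and the simple K-means loop, the shared width σ = c_max/√(2h), and the hyperparameters are assumptions.

```python
import numpy as np

def train_rbf(X, D, h=10, iters=20, seed=0):
    """Self-organizing-center RBF training, steps (1)-(3):
    K-means centers, shared Gaussian width sigma = c_max/sqrt(2h),
    then least-squares hidden-to-output weights."""
    rng = np.random.default_rng(seed)
    # (1) K-means clustering for the h basis-function centers
    centers = X[rng.choice(len(X), h, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for i in range(h):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(axis=0)
    # (2) shared Gaussian width from the maximum center distance c_max
    c_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = c_max / np.sqrt(2 * h)
    # (3) least-squares solution for the hidden-to-output weights
    G = np.exp(-((X[:, None] - centers) ** 2).sum(-1) / (2 * sigma ** 2))
    W, *_ = np.linalg.lstsq(G, D, rcond=None)
    return centers, sigma, W

def rbf_predict(X, centers, sigma, W):
    """Forward pass: Gaussian hidden layer followed by linear output."""
    G = np.exp(-((X[:, None] - centers) ** 2).sum(-1) / (2 * sigma ** 2))
    return G @ W
```

With a 19-dimensional feature vector as input and two output nodes (vehicle / nonvehicle), the class decision is the argmax over the two outputs.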

Vehicle Detection Results and Analysis.
In order to verify the effectiveness of the vehicle detection method based on multifeature extraction, a large number of collected road images were tested; the results are analyzed below.

Detection Results and Analysis Based on Single Feature Extraction.
Figure 8 shows the detection results for the vehicle's region of interest extracted by the taillight feature.

Detection Results and Analysis Based on Composite Feature Extraction.
Figure 10 displays the detection results extracted from the composite feature of the taillight pair and the shadow. As can be seen from Figure 10(b), the two yellow rectangular boxes obtained from the two features are almost coincident. This indicates that the detection results for the same candidate vehicle area using the two features are basically consistent.
A single feature (such as a taillight pair or the shadow under the car) can mark the candidate vehicle area to a certain extent, but in some special cases missed or false detections may occur.
As can be seen in Figures 11(a) and 11(b), part of the vehicle targets extracted by taillights is missing (see the red vehicle in Figure 11(a)), whereas the vehicle target extracted by the shadow gray feature is free of defects. The detected vehicle target is complete when the composite feature of the taillight pair and shadow is adopted. Therefore, candidate vehicle detection based on multifeature extraction can greatly remedy the defects of single-feature detection, such as missed or failed detection (see Figure 11(c)). Even so, it is necessary to further confirm any ambiguous area, even one marked by the composite feature, once such confusion occurs. Figure 12 shows the vehicle detection results under rain, snow, and night conditions based on the composite feature.
It is important to note that under rain, snow, and night conditions the vehicle detection based on composite features degenerates into detection by the taillight feature alone, because the shadow at the bottom of the vehicle is missing; in effect, the coefficient k_s in formula (9) is set equal to 0 under these conditions.
It can be seen from Figure 12 that the method proposed in this paper can also accurately determine the detection area of vehicles under the conditions of rain, snow, and night with high accuracy. Figure 13 shows the detection results of two parallel vehicles extracted by the taillight pair and shadow composite feature.
It can be seen from Figure 13(b) that the single-taillight extraction method can cause confusion in the vehicle's region of interest when the taillights of neighboring vehicles are similar in shape and equal in height. The parallel vehicles can be detected by the shadow feature, but the shadow under each vehicle also interferes with the marked area. In order to ensure the accuracy of vehicle identification, it is necessary to further confirm an ambiguous area once such confusion occurs, even for a region of interest identified by the composite feature. Therefore, RBF neural networks are used to further confirm the marked area.

Vehicle Recognition Result in ROI Adopting RBF Network.
For the marked candidate areas (i.e., regions of interest that may contain vehicles), the RBF neural network is used for further validation, implemented in a MATLAB program, in order to increase the reliability of the presence of vehicles in the ROI marked by the composite features. Figure 14 shows part of the images in the vehicle sample database established in this paper. All sample images are processed uniformly: each image is adjusted to 250 × 190 pixels with a horizontal and vertical resolution of 96 dpi. Table 1 shows the extracted hybrid feature parameters of the vehicle targets in some test samples.
In order to verify the effectiveness of the RBF network designed in this paper, 850 vehicle samples (as shown in Figure 14) and 850 nonvehicle (negative) samples were established, and the RBF neural network was trained. Sixty percent of the positive samples were randomly selected for testing, and the test results are shown in Table 2. It is worth mentioning that the area marked by the middle yellow box in Figure 13(b) is identified as nonvehicle, thus avoiding false detection. Figure 15 shows the error performance curve of the RBF neural network test. As can be seen from Figure 15, the network meets the error performance requirements and offers high recognition accuracy and convergence speed.

Conclusions
(1) In this paper, a region of interest that may contain vehicles is determined and verified by extracting the composite feature of the vehicle's taillight pair and the shadow at the bottom of the vehicle, based on a large collection of images in various complex environments. The vehicle detection results show that the proposed method can accurately mark the region of interest based on composite features with 97% accuracy and greatly remedies the shortcomings of single-feature methods, such as missed or failed detection. (2) An RBF neural network recognizer is constructed and tested by extracting a series of parameters on the vehicle's edge and region features. The test results show that the vehicles in the marked region can be identified reliably by the RBF neural network recognizer, with accuracy as high as 94%.
(Note to Table 1: T1 represents the eccentricity of the region; T2 the ratio of the short axis to the long axis of the region; T3 the compactness of the region; T4 to T9 the independent invariant moments; T10 to T17 the discrete cosine descriptors; T18 the area of the region; and T19 the perimeter of the region.)

In a word, the research results have important reference value for the integrated control algorithms of intelligent safety systems such as IACCS, FCW, and multisensor environment perception. Further research will explore detection methods for e-bikes and pedestrians based on intelligent control theory and will study in depth the real-time tracking control of vehicles, nonmotorized vehicles, and pedestrians.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.