Research on the Prediction of the Operational Risk Field of Intelligent Vehicles Based on Dual Multiline LiDAR

To effectively evaluate the risk situation between intelligent vehicles and surrounding traffic participants in complex scenes, a complex traffic environment perception technology based on dual multiline light detection and ranging (LiDAR) is proposed in this work. The vehicle motion state is predicted by fusing the multiview characteristics of point cloud timing and multitarget interaction information, and the risk assessment model is constructed via artificial potential field theory. The real-time point cloud information is used to obtain the time-sequence bird's-eye view and range image. The improved VGG19 network model is used to extract the time-sequence high-level abstract combined features in the multiview scene. The constructed time-sequence feature vector is used as the input data of the attention mechanism, and the attention-bidirectional long short-term memory (Attention-BiLSTM) model is used for training to form the desired input-output mapping relationship. The motion state of the target vehicle can therefore be updated, and the static and dynamic risk fields of traffic participants surrounding the vehicle can be established based on artificial potential field theory, thereby allowing for the evaluation of the operational risk of the intelligent vehicle. The results of experiments demonstrate that the prediction effect of the target vehicle state parameters via the use of the proposed model is better than that of other compared models, and the prediction effect of the risk field of intelligent vehicle operation based on the multiview point cloud features and vehicle interaction information is good.


Introduction
The modeling of the traffic situation of a road environment is a key technical means to ensure the safe driving of intelligent vehicles in the road area and prevent rollover and collision and has therefore attracted extensive research attention. Traffic risk prediction is mainly used to process original sensor data, which usually include the lowest-level data from stereo vision [1], radar [2], and light detection and ranging (LiDAR) [3] systems. With the help of these sensor data, models can reasonably and simply express the surrounding environment of the vehicle, the main information of which includes the location information of the boundaries (lane line, roadside, and intersection) and information about obstacles in the road. The ultimate goal is to improve traffic safety and reduce the incidence of traffic accidents.
At present, the most common risk assessment method is to establish the risk field model via a grid map or the artificial potential field method. A grid map model is a variant of the Bayesian occupancy filter [4]. In the field of intelligent vehicles, occupancy grids based on Bayesian occupancy filter theory have been widely applied in various studies [5][6][7][8][9]. For example, Lee and Kum [10] proposed a risk assessment method for the prediction of an occupancy map and combined this method with acceleration trajectory sampling and screening to obtain the safe trajectory with the lowest risk. Khatib [11] proposed the establishment of the risk field model by using the artificial force field method; the core concept is that the obstacle generates a repulsive field, the target point generates a gravitational field, and a robot moves along the direction of the fastest decline of the potential field in the combined potential field. At present, the common method used in the field of decision planning is to establish the repulsion fields of roads, lane lines, and obstacle vehicles and the gravitational field of target points and to solve the potential field parameters according to the vehicle driving trajectory [12][13][14][15][16][17]. Hesse and Sattel [18] combined the potential field method and elastic belt theory, introduced vehicle dynamics into the elastic belt, and ultimately proposed an extended elastic belt path-planning method. Cheng et al. [19] introduced a virtual force field and proposed an obstacle avoidance path-planning method based on model prediction theory, which exhibited good robustness. Tang et al. [20] proposed the RRT* algorithm guided by the safety field, which is characterized by a reduced amount of calculation and an improved convergence speed.
Parzani and Filbet [21] proposed a local path-planning algorithm for active collision avoidance based on a dangerous repulsion field, which is characterized by a small amount of computation, safety, and reliability. Song et al. [22] improved the traditional artificial potential field method and established an objective function that considers both road and vehicle constraints; this not only eliminates the problem of jitter, but also meets the safety index requirement. To overcome the problem of the local minimum points of the artificial potential field, researchers have proposed some improved methods, such as the general potential field method and virtual force field method [23][24][25], the artificial coordination field [26][27][28][29][30], the random escape method [31], heuristic search and walking along the wall [32], and the tangent bug method [33], among others. These methods have generally been proposed to apply additional control force to the controlled object. The controlled object is generally a robot and usually has no road constraints; thus, these methods are unsuitable for path planning under structured road conditions. The future motion state of the traffic participants surrounding the target is affected by the motion of other objects and the spatial environment. Thus, in view of these problems and the complexity of urban scenes, to improve the prediction accuracy of the risk field of intelligent vehicle operation, it is necessary to analyze the interaction relationships between multiple objects and the contextual information of the scene.
Because the motion of an object in an urban scene is affected by the interaction of other surrounding objects and the surrounding environment, the main contributions of this work are as follows:
(1) An environment sensing technology based on dual multiline LiDAR is proposed to effectively collect information on the surrounding environment of intelligent vehicles and their interactions with traffic participants in complex traffic scenes.
(2) A multiobjective operation interaction feature model based on a one-dimensional convolutional neural network is established to effectively mine the interaction features in vehicle operation state parameters.
(3) Two improved VGG19 network models are used to extract the dual-view features of the point cloud so as to provide the necessary high-precision environmental spatiotemporal information as input for moving-target state prediction.
(4) The multiview features of the point cloud time sequence and the multitarget interaction information obtained by the dual multiline LiDAR are fused, and the Attention-BiLSTM model is used to effectively evaluate the risk situation between intelligent vehicles and surrounding traffic participants in complex scenes.

Dual Multiline LiDAR Environment Sensing System
To effectively collect information on the surrounding environment of intelligent vehicles and information on interactions with traffic participants in complex traffic scenes, an environment sensing technology based on dual multiline LiDAR is proposed. The installation position of the dual multiline LiDAR system is illustrated in Figure 1. The vertical installation mode is adopted for the LiDAR system. The LiDAR on the upper side is used to obtain the global environmental map information in real time and generate the time-sequence aerial view and front-view depth map point clouds. The LiDAR on the lower side is used to detect the operation status data of traffic participants in real time and obtain information on the complex interactions between the intelligent vehicle and traffic participants. The mathematical model of the installation position of the dual multiline LiDAR system is defined in terms of the following quantities: ↑ represents the LiDAR on the upper side of the dual LiDAR system, ↓ represents the LiDAR on the lower side of the dual LiDAR system, h is the height of the center point of the LiDAR relative to the ground, α is the included angle between the horizontal line of the LiDAR center and the lowest scanning line, Δ∅ is the vertical angular resolution of the LiDAR, d is the distance between the ground intersection of the lowest scanning line and the ground projection point of the LiDAR center, and d_i is the projection distance on the ground of the ith scanning line above the lowest scanning line [34]. During operation, the LiDAR must not scan the key points of the vehicle body (marked in red in Figure 1). Point A′ is located at the front end of the hood, points B′ and C′ are the lowest and highest points of the A-pillar of the vehicle frame, respectively, points D′ and E′ are the highest and lowest points of the C-pillar of the vehicle frame, respectively, and point F′ is the rearmost end of the trunk.
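Under the stated geometry, with the lowest scanning line inclined at angle α below the horizontal, the ground-projection distances admit a simple trigonometric form. The following is a plausible reconstruction consistent with the definitions of h, α, Δ∅, d, and d_i, not necessarily the paper's exact expression:

```latex
d = \frac{h}{\tan \alpha}, \qquad
d_i = \frac{h}{\tan\left(\alpha - i\,\Delta\phi\right)}, \quad i = 0, 1, 2, \ldots
```

Here each successive scanning line is raised by the vertical angular resolution Δ∅, so its ground intersection moves farther from the projection point of the LiDAR center.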
A′ and A″, B′ and B″, C′ and C″, D′ and D″, E′ and E″, and F′ and F″ are symmetrical about points A, B, C, D, E, and F, respectively. The projection point O of point F on the ground plane is taken as the origin of the coordinate system: starting from O, the positive X-axis points to the front of the vehicle along the ground plane within the longitudinal central plane, the positive Y-axis passes through O perpendicular to the longitudinal central plane and points to the driver's side, and the positive Z-axis passes through O perpendicular to the ground plane and points up through the vehicle body. The coordinates of the 32-line LiDAR center point G↑ are (x↑, y↑, z↑), and the coordinates of the 16-line LiDAR center point G↓ are (x↓, y↓, z↓). To ensure that key points A, B, C, D, E, and F are not in the field of view of the dual LiDAR system, taking the 32-line LiDAR as an example, the installation coordinates must satisfy the corresponding longitudinal geometric constraints. To prevent the lateral key points A′, B′, C′, D′, E′, and F′ of the vehicle from being scanned by the LiDAR, the installation position must also satisfy the corresponding lateral constraints.

Multiview Point Cloud Generation.
The detection accuracy in complex environments is low due to the lack of depth information in images collected by a camera. Although the original point cloud obtained by LiDAR has accurate depth information, the point cloud is sparse and can only realize the three-dimensional (3D) frame positioning of large objects; the detection effect for small objects is poor and is prone to missed detection or false detection. In the aerial view obtained by transforming the original point cloud, objects occupy different spaces, which avoids the occlusion problem between objects. Moreover, the 3D boundary of an object in the aerial view is accurate, which avoids the problem of position offset, and the projected object in the aerial view retains its physical size, which avoids the problem of object distortion. The depth information of the area in front of a moving object can be obtained from the front-view depth map obtained by the transformation of the original point cloud, which can reflect the geometry of the surface of environmental objects. Therefore, based on the multiview representation of the point cloud, the original point cloud obtained from the LiDAR is converted into an aerial view, a front-view depth map, and other image forms for processing, thereby providing the necessary high-precision environmental perception information used as the input for the subsequent state prediction of the moving target.
The multiview generation process of the point cloud is used to convert the original LiDAR point cloud into a top aerial view and a front depth view, as presented in Figure 2. The bird's-eye view and range image can be obtained by transforming the original point cloud of the 3D LiDAR via the two-dimensional (2D) projection of a 3D image and according to the internal parameters of the corresponding camera. The conversion relationship from the LiDAR coordinate system to the image coordinate system is as follows:

Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix} \left( R_{L-C} \begin{bmatrix} X_L \\ Y_L \\ Z_L \end{bmatrix} + T_{L-C} \right),

where (u, v) is the pixel coordinate, (X_C, Y_C, Z_C) is the camera coordinate, (X_L, Y_L, Z_L) is the LiDAR coordinate, R_{L-C} represents the rotation matrix from the LiDAR coordinate system to the camera coordinate system, and T_{L-C} represents the 3D translation vector from the LiDAR coordinate system to the camera coordinate system. Moreover, u_0 and v_0 are the internal parameters of the camera, f is the focal length of the camera, and Z_C is the depth value corresponding to the current image coordinate (u, v, 1)^T [34].
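As a minimal sketch of this conversion, the pinhole projection from the LiDAR frame to pixel coordinates can be written as follows. The extrinsics (identity rotation, zero translation) and the intrinsic values f, u0, and v0 used in the example are illustrative assumptions, not calibration values from the paper:

```python
import numpy as np

def lidar_to_pixel(p_lidar, R_lc, T_lc, f, u0, v0):
    """Project one LiDAR point into pixel coordinates via the pinhole model."""
    p_cam = R_lc @ p_lidar + T_lc   # LiDAR frame -> camera frame
    Xc, Yc, Zc = p_cam
    u = f * Xc / Zc + u0            # perspective division plus principal point
    v = f * Yc / Zc + v0
    return u, v, Zc                 # Zc is the depth stored at pixel (u, v)

# Example with assumed identity extrinsics and hypothetical intrinsics
u, v, depth = lidar_to_pixel(np.array([2.0, 1.0, 10.0]),
                             np.eye(3), np.zeros(3),
                             f=700.0, u0=320.0, v0=240.0)
# -> u = 460.0, v = 310.0, depth = 10.0
```

Range images store the returned depth per pixel; bird's-eye views instead discretize (X_L, Y_L) directly into a ground-plane grid.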

Multitarget Information Interaction Network for Complex Traffic Scenes.
To predict the state of the target vehicle in complex traffic scenes, its future state is estimated according to its own running state combined with its spatiotemporal interaction relationships with the surrounding environment. To effectively mine the interaction features in complex traffic data, the input data perceived by the onboard LiDAR should include vehicle size parameters and operation state parameters. Figure 3 illustrates a multitarget information interaction network in a complex traffic scene. The intelligent vehicle equipped with a LiDAR sensor is located in the center of the road, and its operation state model is defined by the following quantities: L is the length of the intelligent vehicle, W is the width of the intelligent vehicle, and X^(t), Y^(t) are, respectively, the horizontal and vertical axis coordinates of the center point of the onboard LiDAR. Moreover, the surrounding vehicles are those whose state behavior must be predicted. The green vehicle O_1 is an example of a target, and its operation state model is defined by the following quantities: L_1 is the target vehicle length, W_1 is the target vehicle width, X_1^(t) and Y_1^(t) are, respectively, the horizontal and vertical axis coordinates of the center point of the target vehicle, V_1^(t) is the instantaneous velocity of the target vehicle, and the operation history status data of the target vehicle cover the previous five time steps (back to X_1^(t-5)). The multitarget information interaction network model of the complex traffic scene presented in Figure 3 combines O^(t), the operation state model of the intelligent vehicle, with the operation state models of the front green vehicle, the front-right blue vehicle, the front-left yellow vehicle, and the rear-right red vehicle, respectively [34].

Vehicle Operational Risk Prediction Model Integrating Multiview and Interaction Information

Model Framework.
The overall network architecture of the proposed vehicle motion state prediction model that integrates multiview point cloud features and multitarget interaction information is shown in Figure 4. The overall network architecture is mainly composed of a multiview point cloud feature extraction network, a multitarget interaction information extraction network, an Attention-BiLSTM prediction network, and an operational risk assessment network. The operational risk of multiple targets in the future is predicted by inputting the historical motion state information of the targets in the complex traffic scene obtained by the LiDAR sensors and the corresponding time sequences of the aerial view and front-view depth map point clouds.

Multiview Point Cloud Feature Extraction Network.
Two improved VGG19 network models are used to extract multiview point cloud features. One branch extracts the point cloud top aerial view features, and the other branch extracts the point cloud front depth view features. The improved VGG19 network model adds several convolution layers on the basis of a shallow convolutional neural network (CNN). Because the addition of a convolution layer is more conducive to image feature extraction than the addition of a fully connected (FC) layer [35], the improved VGG19 network model can more easily overcome the diversity and complexity of traffic scenes than a shallow CNN and can ultimately achieve a better spatiotemporal feature extraction effect. As shown in the multiview point cloud feature extraction network in Figure 4, the VGG19 model has 16 convolution layers in total, with a maximum pooling layer behind convolution layers 2, 4, 8, 12, and 16. The feature map size in the convolution layers is successively halved from 224 × 224 down to 14 × 14. In this way, the progressively decreasing feature map size is equivalent to the addition of implicit regularization, which can improve the feature extraction ability of the network and increase its operation speed. Figures 5 and 6 show the process by which the improved VGG19 network extracts the feature maps of the point cloud top aerial view and front depth view, respectively.

Multitarget Interaction Information Extraction Network.
The data of the CNN input layer are convolved and pooled layer by layer via the established multiple filters to extract the potential topological features between the data. The deeper the network, the more abstract the extracted features, and the better the robustness of the obtained features. The convolution kernels in the convolution layer perform the convolution operation on the data features output from the upper layer, and the activation function is used to construct the feature vector output. The corresponding mathematical model has been provided in a previous publication [36]. The multitarget interaction relationship corresponds to a one-dimensional (1D) time series, so a 1D CNN (1DCNN) can be used to extract the potential interaction relationship. Specifically, the corresponding local information can be calculated by sliding a convolution kernel of a specific size over the local area of the input data. A one-dimensional convolution has only one spatial dimension, and its convolution process is as follows:

X^{l+1}_{t,c} = \sum_{i=1}^{C_{in}} \sum_{j=1}^{k} W^{l}_{j,i,c} \, X^{l}_{(t-1)s + j,\, i} + B_c,

where X^l is the matrix corresponding to the input data, C_in is the number of input channels, X^{l+1}_{t,c} is the tth parameter in the cth channel of layer l + 1, W^l_{j,i,c} is the jth weight coefficient corresponding to the ith channel of the cth convolution kernel, B_c is the offset coefficient of the corresponding convolution kernel, k is the size of the convolution kernel, and s is the step size of the convolution kernel.
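The 1D convolution described above can be sketched as a minimal single-output-channel version. The toy input (6 time steps, 2 channels) and the all-ones kernel are illustrative assumptions:

```python
import numpy as np

def conv1d(x, w, b, stride=1):
    """1D convolution over a (T, C_in) time series with one (k, C_in) kernel."""
    T, _ = x.shape
    k = w.shape[0]
    out_len = (T - k) // stride + 1
    y = np.empty(out_len)
    for t in range(out_len):
        # sum over kernel taps and input channels for each local window
        y[t] = np.sum(x[t * stride : t * stride + k] * w) + b
    return y

# Toy multitarget time series: 6 time steps, 2 state channels (assumed data)
x = np.arange(12, dtype=float).reshape(6, 2)
w = np.ones((3, 2))                 # kernel of size k = 3
y = conv1d(x, w, b=0.0, stride=1)   # -> [15.0, 27.0, 39.0, 51.0]
```

A stack of such filters with different learned weights, followed by an activation, yields the feature vectors fed into the interaction graph construction.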
As shown in the multitarget interaction information extraction network in Figure 4, the LiDAR information is input to obtain the multitarget historical motion state data of the time-sequence frames of the complex traffic scene. The potential spatiotemporal interaction diagram extracted for the corresponding time-sequence frames is output through the 1DCNN network, and each dynamic target is represented as a node. The nodes corresponding to any two targets in the same point cloud frame are connected with a solid line to represent a space edge, and the same target in adjacent frames is connected with a dotted line to represent a time edge.

Attention-BiLSTM Prediction Network.
The output features of the multiview point cloud feature extraction network and the multitarget interaction information extraction network are fused, and the input model structure is presented in the Attention-BiLSTM prediction network shown in Figure 4. The model consists of an input layer, a BiLSTM layer, an attention layer, and an output layer. The input layer is the feature vector of the fused multiview point cloud features and multitarget interaction information. The attention mechanism can give different weights to the point cloud multiview and multitarget interaction characteristics of the input prediction network so as to highlight the influence of strong correlation factors and reduce the influence of weak correlation factors.
The BiLSTM layer is composed of forward and backward LSTM layers. The mathematical model of the LSTM structural unit is given by

f_t = \sigma_f(W_f [h_{t-1}, x_t] + b_f),
i_t = \sigma_i(W_i [h_{t-1}, x_t] + b_i),
\tilde{C}_t = \tanh(W_c [h_{t-1}, x_t] + b_c),
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t,
o_t = \sigma_o(W_o [h_{t-1}, x_t] + b_o),
h_t = o_t \odot \tanh(C_t),

where x_t is the input data at time t, h_{t-1} is the hidden layer output at time t - 1, σ_f, σ_i, and σ_o are the sigmoid functions of the forget gate, input gate, and output gate, respectively, f_t, o_t, and h_t are the final outputs of the forget gate, output gate, and current time unit, respectively, and i_t, \tilde{C}_t, and C_t are, respectively, the input gate output, the tanh-activated candidate state, and the updated value of the unit state at the current time. Moreover, W_f, W_i, W_c, and W_o are the weight matrices, and b_f, b_i, b_c, and b_o are the offset terms [37]. During information processing via the BiLSTM model, the network layer status update of the BiLSTM model from front to back is given by

\overrightarrow{h}_t = H(W_{x\overrightarrow{h}} x_t + W_{\overrightarrow{h}\overrightarrow{h}} \overrightarrow{h}_{t-1} + b_{\overrightarrow{h}}),

where H is the output function of the forward LSTM layer, W_{x\overrightarrow{h}} is the weight matrix from the input layer to the forward LSTM layer, W_{\overrightarrow{h}\overrightarrow{h}} is the weight matrix between forward LSTM layers, and b_{\overrightarrow{h}} is the offset term. The network layer status update of the BiLSTM model from back to front is given by

\overleftarrow{h}_t = H'(W_{x\overleftarrow{h}} x_t + W_{\overleftarrow{h}\overleftarrow{h}} \overleftarrow{h}_{t+1} + b_{\overleftarrow{h}}),

where H' is the output function of the backward LSTM layer, W_{x\overleftarrow{h}} is the weight matrix from the input layer to the backward LSTM layer, W_{\overleftarrow{h}\overleftarrow{h}} is the weight matrix between backward LSTM layers, and b_{\overleftarrow{h}} is the offset term. After the network layers are superimposed, the output of the unit cells of the BiLSTM model is

y_t = W_{\overrightarrow{h}y} \overrightarrow{h}_t + W_{\overleftarrow{h}y} \overleftarrow{h}_t + b_y.

The attention layer is the output of the learning function F added on the basis of the BiLSTM layer, as given by

e_t = F(h_t).

The weight coefficient ω_t of the BiLSTM output vector h_t is calculated by

\omega_t = \frac{\exp(e_t)}{\sum_j \exp(e_j)}.

The linear weighting method is used to obtain the focus feature vector α, as given by the following equation, and the vehicle state prediction result is finally output:

\alpha = \sum_t \omega_t h_t.
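The attention weighting over the BiLSTM outputs can be illustrated with a minimal sketch: score each hidden state, softmax-normalize the scores into weights ω_t, and form the weighted sum α. The linear scoring vector here stands in for the learned function F and is a hypothetical choice:

```python
import numpy as np

def attention_pool(h, score_w):
    """Score each time step, softmax to weights, and return the pooled vector."""
    e = h @ score_w                       # e_t = F(h_t), a linear score here
    w = np.exp(e - e.max())               # numerically stable softmax
    w = w / w.sum()                       # attention weights, sum to 1
    alpha = (w[:, None] * h).sum(axis=0)  # focus feature vector
    return w, alpha

h = np.array([[1.0, 0.0],                 # 3 time steps of 2-dim BiLSTM output
              [0.0, 1.0],
              [1.0, 1.0]])
w, alpha = attention_pool(h, np.array([0.5, 0.5]))
```

The step with the largest score (here the third) receives the largest weight, so strongly correlated frames dominate the pooled feature fed to the output layer.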

Operational Risk Assessment Network.
To quickly quantify the driving risk level of vehicles in the environment, the driving risk fields of obstacles are established. The driving risk fields comprehensively consider the overall size of obstacles and the relative movement between the obstacles and the intelligent vehicle. The vehicle operational risk field is the sum of the static and dynamic risk fields and is defined as follows [38]:

U = U_{sta} + U_{dyn},

where U is the operational risk field, U_sta is the static risk field, and U_dyn is the dynamic risk field. The static risk field is determined only by the properties and shape of the obstacle, and its field strength is affected by two factors, namely, the relative distance between the intelligent vehicle and the obstacle and the direction of the intelligent vehicle approaching the obstacle. The smaller the relative distance between the intelligent vehicle and the obstacle, the greater the possibility of a traffic accident; thus, the strength of the static risk field is greater. The driving direction of motor vehicles is limited; that is, the lateral speed of a motor vehicle is usually much less than the longitudinal speed. Therefore, in the longitudinal direction of the obstacle, the static risk field has a large influence range, and the influence range in the lateral direction of the obstacle is small. Thus, a two-dimensional Gaussian function can be used for the static risk field of obstacles. Furthermore, considering the large overall size of the motor vehicle and the large field strength difference between the edge and the center point of the motor vehicle, it is inappropriate to use the two-dimensional Gaussian function of the first-order center distance. Therefore, the two-dimensional Gaussian function of the high-order center distance is used as the risk field of obstacles. The high-order center distance flattens the peak of the function so that the entire obstacle surface has a similar risk field strength.
The static risk field is defined in terms of the following quantities: (x, y) are, respectively, the horizontal and vertical coordinates of a point in the traffic environment, (x_obs, y_obs) are, respectively, the abscissa and ordinate of the center point of the obstacle, A is the field strength coefficient, β is the high-order coefficient, σ_x and σ_y are the shape functions of the obstacle, L_obs is the length of the obstacle in the longitudinal direction, W_obs is the length of the obstacle in the lateral direction, and k_x and k_y are, respectively, the transverse- and longitudinal-dimension coefficients of the obstacle. The dynamic risk field is determined by the movement of obstacles and intelligent vehicles.
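A minimal sketch of a high-order two-dimensional Gaussian static risk field follows, assuming shape functions σ_x = k_x·L_obs and σ_y = k_y·W_obs and illustrative coefficient values; the paper's exact parameterization may differ:

```python
import numpy as np

def static_risk(x, y, x_obs, y_obs, L_obs, W_obs,
                A=1.0, beta=2.0, kx=0.5, ky=0.5):
    """High-order 2D Gaussian static risk field around an obstacle center."""
    sx = kx * L_obs                 # longitudinal spread scales with length
    sy = ky * W_obs                 # lateral spread scales with width
    # raising each quadratic term to beta flattens the peak over the body
    g = ((x - x_obs) ** 2 / (2 * sx ** 2)) ** beta \
        + ((y - y_obs) ** 2 / (2 * sy ** 2)) ** beta
    return A * np.exp(-g)

# Field strength at the obstacle center and at two longitudinal offsets
center = static_risk(0.0, 0.0, 0.0, 0.0, 4.6, 1.8)   # peak value A
near   = static_risk(1.0, 0.0, 0.0, 0.0, 4.6, 1.8)
far    = static_risk(3.0, 0.0, 0.0, 0.0, 4.6, 1.8)
```

With β > 1 the field stays close to the peak over most of the obstacle footprint and then falls off sharply, matching the flattened-peak behavior described above.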
Thus, for the construction of the dynamic risk field, the movement of obstacles and intelligent vehicles must be comprehensively considered. The field strength of the dynamic risk field is mainly affected by four factors, namely, the relative distance, the absolute value of the relative speed, the direction of the relative speed, and the approach direction of the intelligent vehicle. When the relative speed direction and approach direction are the same, the smaller the relative distance and the greater the absolute value of the relative speed, the greater the possibility of a traffic accident, and the greater the field strength of the dynamic risk field. When the values of the other three factors are the same, the greater the absolute value of the relative speed, the greater the possibility of a traffic accident, and the greater the field strength of the dynamic risk field. To meet these requirements, a dynamic risk field is established based on the two-dimensional Gaussian function. The formula of the dynamic risk field is defined in terms of the following quantities: σ_v is a function of the speeds of the obstacle and the intelligent vehicle, v_obs is the speed of the obstacle in the longitudinal direction, v is the longitudinal speed of the intelligent vehicle, k_v is the velocity coefficient, v_rel is a function describing the direction of relative motion between the obstacle and the intelligent vehicle, α is the relative velocity coefficient, and the definitions of the other parameters are the same as those for the static risk field.
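A hedged sketch of the dynamic field's speed dependence follows. It assumes the relative speed enters through a shape function σ_v = k_v·|v_obs − v| that widens the longitudinal spread; the paper's exact coupling, including the relative-direction term v_rel and coefficient α, is not reproduced here:

```python
import numpy as np

def dynamic_risk(x, y, x_obs, y_obs, L_obs, W_obs, v_obs, v_ego,
                 A=1.0, beta=2.0, kx=0.5, ky=0.5, kv=0.05):
    """Dynamic risk field sketch: relative speed widens the longitudinal spread."""
    sv = kv * abs(v_obs - v_ego)        # speed-dependent shape function
    sx = kx * L_obs * (1.0 + sv)        # larger |relative speed| -> wider field
    sy = ky * W_obs
    g = ((x - x_obs) ** 2 / (2 * sx ** 2)) ** beta \
        + ((y - y_obs) ** 2 / (2 * sy ** 2)) ** beta
    return A * np.exp(-g)

# At the same point, a larger relative speed yields a larger field strength
slow = dynamic_risk(5.0, 0.0, 0.0, 0.0, 4.6, 1.8, v_obs=10.0, v_ego=10.0)
fast = dynamic_risk(5.0, 0.0, 0.0, 0.0, 4.6, 1.8, v_obs=20.0, v_ego=10.0)
```

This reproduces the qualitative behavior stated above: for equal relative distance, the field strength grows with the absolute value of the relative velocity.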

Experiments and Result Analysis
To verify the effectiveness of the proposed vehicle motion state prediction method that integrates multiview point cloud features and multitarget interaction information, an intelligent vehicle experimental platform was employed for data collection. The experimental platform vehicle was a Shanghai Volkswagen Langyi 2013 1.6-L automatic comfort version, the dimensions of which were 4605 × 1765 × 1460 mm (length × width × height). The platform included RS-LiDAR-16 LiDAR, RS-LiDAR-32 LiDAR, a Gigabit Ethernet switch, an algorithm processor, a notebook computer, an uninterrupted power supply, and other equipment. The 16-line LiDAR could scan the surrounding environment with a vertical field of view (FOV) of between −15° and 15° and a horizontal FOV of 360°, a maximum ranging range of 150 m, and an output of 32 × 10^4 points per second with the scanning frequency set to 20 Hz. The 32-line LiDAR could scan the surrounding environment with a vertical FOV of between −25° and 15° and a horizontal FOV of 360°, a maximum ranging range of 200 m, and an output of 60 × 10^4 points per second with the scanning frequency set to 20 Hz. The laptop was equipped with an Ubuntu 16.04 operating system, a CUDA 9.0 deep learning parallel computing acceleration kit, an NVIDIA GeForce GTX 1650 independent graphics card, and an Intel Core i5-9300H CPU with 2.4 GHz and 16 GB memory. The algorithm processor included built-in efficient environment detection-related algorithms. The Gigabit Ethernet switch ensured the high-speed data transmission of the data acquisition platform, and the uninterrupted power source provided a reliable power supply for the experimental data acquisition equipment. The environmental point cloud data collected by LiDAR were sent to the Gigabit Ethernet switch through an Ethernet cable and transmitted to the algorithm processor for environmental information detection. The results were then sent to the notebook computer via Ethernet for storage and secondary processing and visualization.
The test route and road information collection scene are shown in Figure 7. The test route was a two-way, four-lane urban road section on the East Second Ring Road in Guilin, Guangxi, with a total length of 4.2 km, of which 3.6 km is straight and 0.6 km is curved, and a speed limit of 60 km/h. During the test, the tester drove from the starting point, namely, the Liuhe intersection, to the end point (destination), namely, the Nanzhou Bridge, in about 7 min. To fully collect the point cloud data of the scene and the vehicle interaction information of the road section, the tester drove the intelligent vehicle experimental platform 40 times to collect data from different target vehicles, and focus was placed on extracting the scene data during multivehicle interaction for analysis. The data acquired over the 40 runs were divided into different types of target-vehicle-following scenes. Moreover, 100 groups of following data were used as the training set, and 30 groups of following data were used as the test set.
To verify the prediction effect of the proposed Attention-BiLSTM model, the model was implemented on the Keras deep learning platform based on TensorFlow and was, respectively, compared with the FC, LSTM, and BiLSTM models on the same data set. The effects of the FC, LSTM, BiLSTM, and Attention-BiLSTM models on the global position X, global position Y, and relative speed V are, respectively, presented in Figures 8(a)-8(c). As presented in the graphs, the red dotted line representing the prediction values of the proposed Attention-BiLSTM model was found to have the best fit with the blue solid line representing the real values. Therefore, the prediction effect of the Attention-BiLSTM model on the state of the target vehicle was better than those of the FC, LSTM, and BiLSTM models.
To further investigate the prediction effect of the proposed Attention-BiLSTM model on the global position X, global position Y, relative speed V, and other state parameters of the target vehicle in the multivehicle interaction scene of the multiview point cloud, a comparative experiment was designed, and the results are presented in Figure 9. The fitting degree of the true values (blue solid line) and the values predicted based on multiview point cloud features and vehicle interaction information (red dotted line) was found to be higher than that of the true values (blue solid line) and the values predicted based only on vehicle interaction information (yellow dotted line). The mean square error (MSE) was used as the evaluation index to effectively evaluate the effect of the prediction model of the target vehicle state. Based on the statistics of the difference between the predicted and true values, the calculation formula is given as follows:

\sigma_{MSE} = \frac{1}{n} \sum_{i=1}^{n} (P_i - R_i)^2,

where P_i is the predicted value, R_i is the true value, and n is the number of samples. A smaller value of the index σ_MSE means that the predicted value is closer to the true value, which indicates that the model has better performance and stronger feature expression ability. As reported in Table 1, after 500 iterations of each model in the same multivehicle interaction scene, the MSE values of the prediction of the state parameters were ultimately obtained. It is evident that the MSE loss value of the proposed Attention-BiLSTM model was the lowest, and the prediction effect of this model on the motion state of the target vehicle was obviously better than those of the other models.
Thus, the effect of target vehicle state prediction based on multiview point cloud features and vehicle interaction information was found to be the best.
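The MSE index used above can be computed directly from paired predictions and ground-truth values; the three-sample series below is illustrative:

```python
def mse(pred, true):
    """Mean square error: sigma_MSE = (1/n) * sum_i (P_i - R_i)^2."""
    n = len(pred)
    return sum((p - r) ** 2 for p, r in zip(pred, true)) / n

# Toy example: squared errors 0, 0.25, and 1.0 averaged over n = 3 samples
err = mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])  # -> 1.25 / 3
```

Comparing this index across models on the same held-out set is what Table 1 summarizes for the FC, LSTM, BiLSTM, and Attention-BiLSTM models.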
To further study the proposed Attention-BiLSTM model based on multiview point cloud features and vehicle interaction information for the analysis of the operational risk field of target vehicles surrounding an intelligent vehicle in a traffic scene, a historical continuous scene of interaction between an intelligent vehicle and four surrounding target vehicles was analyzed, as shown in Figure 10. Moreover, the historical time-sequence data of the states of the target vehicles input to the Attention-BiLSTM model are reported in Table 2. Table 3 reports the state prediction results of target vehicles ID1, ID2, ID3, and ID4 surrounding the intelligent vehicle in the traffic scene in the next 3 s based on the proposed Attention-BiLSTM model.
To verify the prediction effect of the proposed Attention-BiLSTM model on the operational risk fields of target vehicles ID1, ID2, ID3, and ID4 surrounding the intelligent vehicle in the traffic scene, the data in Table 3 were input into the operational risk assessment network. Figure 11 presents the changes of the static risk fields of target vehicles ID1, ID2, ID3, and ID4 as predicted by the proposed Attention-BiLSTM model. Figures 11(a)-11(c) present the predicted changes of the strength of the static risk field of the target vehicle in the next 3 s. The strength of the static risk field was found to increase with the decrease of the relative distance between interactive vehicles. For points close to the edge of the target vehicle, the field strength approached the peak value, and the field strength over the entire surface of the target vehicle was similar. Figures 11(d)-11(f) present the predicted changes of the equipotential lines of the static risk field of the target vehicle in the next 3 s. The equipotential lines of the static risk field were found to have a large influence range in the longitudinal direction of the target vehicle. Moreover, as affected by the edge length L and width W of the target vehicle detected by the LiDAR, the larger the vehicle size, the larger the range of the equipotential lines of the risk field. Therefore, the static risk field established via the use of the high-order center-distance two-dimensional Gaussian function meets the requirements.

Figure 12 presents the predicted changes of the dynamic risk fields of target vehicles ID1, ID2, ID3, and ID4 surrounding the intelligent vehicle in the traffic scene based on the proposed Attention-BiLSTM model. Figures 12(a)-12(c) present the predicted changes of the strength of the dynamic risk field of the target vehicle in the next 3 s.
The strength of the dynamic risk field was found to gradually increase with the decrease of the relative distance between interactive vehicles. When the relative distance was equal, the field strength increased with the increase of the absolute value of the relative velocity. The peak value of the dynamic risk field strength was found to be less than the field strength parameter, and it is related to the absolute value of the relative speed between the intelligent vehicle and the target vehicle; the greater the absolute value of the relative speed, the greater the peak value of the dynamic risk field strength. Figures 12(d)-12(f) present the predicted changes of the equipotential lines of the dynamic risk field of the target vehicle in the next 3 s. The peak value of the equipotential line of the dynamic risk field was not at the edge of the vehicle, but was dynamically adjusted with parameters such as the relative speed and relative distance between the two vehicles and the size of the target vehicle. Therefore, the dynamic risk field established via the use of the high-order center-distance two-dimensional Gaussian function meets the requirements.

Figure 13 presents the changes of the operational risk fields of target vehicles ID1, ID2, ID3, and ID4 surrounding the intelligent vehicle in the traffic scene as predicted by the proposed Attention-BiLSTM model, showing the predicted changes of the equipotential lines of the operational risk field of the target vehicle in the next 3 s. The red pentagram in the figure represents the intelligent vehicle, which is located at the geometric center of the operational risk field. The relative distance between target vehicle ID4 and the intelligent vehicle gradually increased, and the operational risk field of this vehicle gradually weakened. Target vehicle ID2 and the intelligent vehicle were located in the same lane, their relative distance was small, and their relative speed was high; the operational risk field of this vehicle was the largest.
The relative speed between target vehicle ID3 and the intelligent vehicle was small, and the operational risk field of this vehicle was the smallest. The size of target vehicle ID1 was the largest, its relative distance from the intelligent vehicle gradually decreased, and the operational risk field of this vehicle was gradually enhanced. In summary, the constructed operational risk assessment network can effectively assess the operational risk of intelligent vehicles.
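The qualitative behavior of the static and dynamic risk fields described above can be sketched with simple closed-form fields. The exact equations of the paper are not reproduced in this excerpt, so the forms, function names, and parameters below (order k, saturation speed v0, offset coefficient c) are illustrative assumptions chosen to match the described properties: a high-order Gaussian plateau over the vehicle surface scaled by the detected length L and width W, and a dynamic peak that stays below the field strength parameter A, grows with the absolute relative speed, and shifts away from the vehicle edge.

```python
import math

def static_risk_field(x, y, x0, y0, L, W, A=1.0, k=2):
    """High-order center-distance 2D Gaussian static field (assumed form).
    k > 1 flattens the field over the vehicle surface, so points near the
    vehicle edge stay close to the peak strength A; the influence range
    scales with the detected edge length L and width W."""
    return A * math.exp(-(((x - x0) / L) ** (2 * k) + ((y - y0) / W) ** (2 * k)))

def dynamic_risk_field(x, y, x0, y0, dv, L, W, A=1.0, v0=10.0, c=0.3):
    """Dynamic risk field sketch (assumed form). dv is the signed relative
    speed between the intelligent vehicle and the target. The peak value
    A*(1 - exp(-|dv|/v0)) is always below the field strength parameter A
    and grows with |dv|; the peak is shifted by c*dv along the relative
    motion direction, so it does not sit at the vehicle edge."""
    peak = A * (1.0 - math.exp(-abs(dv) / v0))
    xc = x0 + c * dv  # peak location shifted with relative speed
    return peak * math.exp(-(((x - xc) / L) ** 2 + ((y - y0) / W) ** 2))

# Strength decays with distance; dynamic peak grows with relative speed
near = static_risk_field(2.0, 0.0, 0.0, 0.0, L=4.5, W=1.8)
far = static_risk_field(6.0, 0.0, 0.0, 0.0, L=4.5, W=1.8)
slow = dynamic_risk_field(1.5, 0.0, 0.0, 0.0, dv=5.0, L=4.5, W=1.8)
fast = dynamic_risk_field(4.5, 0.0, 0.0, 0.0, dv=15.0, L=4.5, W=1.8)
```

Evaluating such fields on a grid around each target vehicle and superimposing them yields the kind of strength surfaces and equipotential lines shown in Figures 11-13.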

Conclusion
To effectively predict the operational risk between intelligent vehicles and surrounding traffic participants in complex scenes, a vehicle motion state prediction algorithm that integrates the multiview features of point cloud time series and multitarget interaction information was proposed. This proposal was based on the characteristics of object motion as affected by the surrounding environment and the interaction of surrounding objects, as well as on the complex traffic environment perception technology of dual multiline LiDAR. Artificial potential field theory was applied to construct the operational risk assessment network. With the assistance of the real-time point cloud information perceived by the LiDAR, the time-sequence bird's-eye view and range image are obtained, and the time-sequence high-level abstract combined features in the multiview scene are extracted by using the improved VGG19 network model. These features are then fused with the potential spatiotemporal interaction features obtained by extracting the multitarget operation state data detected by the LiDAR using a 1D CNN. The temporal feature vector is constructed as the input data of the attention network, and the desired input-output mapping relationship is trained to predict the motion states of traffic participants. The results of experiments demonstrated that the prediction of state parameters, including the global position X, global position Y, and relative speed V of the target vehicle, achieved by the proposed Attention-BiLSTM model was better than that achieved by the FC, LSTM, and BiLSTM models. Moreover, the prediction effect of the intelligent vehicle operational risk field based on multiview point cloud features and vehicle interaction information was found to be good. The results of this study can provide support for follow-up research on the recognition of the behavior intention of unmanned vehicles.
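The attention step of the pipeline summarized above can be illustrated with a minimal NumPy sketch. This is not the paper's trained network: the dimensions, the random scoring vector w, and the function names are hypothetical, and in the real model the scores would come from learned attention parameters applied to BiLSTM hidden states.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(hidden, w):
    """Attention pooling over a time sequence of feature vectors (sketch).
    hidden : (T, d) per-step features, e.g. fused multiview VGG19 features
             and 1D CNN interaction features at each historical time step.
    w      : (d,) scoring vector (learned in the real model, random here).
    Returns the attention-weighted context vector used to predict the
    target vehicle state (X, Y, V), plus the attention weights."""
    scores = hidden @ w        # one relevance score per time step
    alpha = softmax(scores)    # attention weights, nonnegative, sum to 1
    return alpha @ hidden, alpha

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 16))   # 8 historical steps, 16-d fused features
context, alpha = attention_pool(H, rng.normal(size=16))
```

The weights alpha let the predictor emphasize the historical time steps most relevant to the next motion state, which is the role the attention mechanism plays in front of the BiLSTM output here.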

Data Availability
The data used to support the findings of this study have not been made available due to data privacy.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.