Conflict Judgment and Safety Assessment at Unsignalized Intersections Based on Machine Vision



Introduction
Traffic conflict can be expressed as the result of interaction between one traffic participant and other traffic participants or traffic facilities during the course of movement in space [1]. In addition to rear-end collisions, cross-collisions are a common type of collision. In a cross-collision between two vehicles, there is a certain angle between the motion directions of the two vehicles, so the collision is a kind of side collision. Most cross-collisions occur at the right front or left front of each vehicle. For convenience of expression, all collisions between the left and right fronts of the involved vehicles are collectively referred to as cross-collisions in this article, while the traffic conflicts before the cross-collisions are referred to as cross-conflicts.
Commonly used sensors for obstacle detection include millimeter-wave radar, laser radar (Lidar), and cameras [2]. Millimeter-wave radar has excellent real-time performance [3, 4], but its detection accuracy is low. Lidar has high detection accuracy, but its price is quite high. Compared with other sensors, the camera has the advantages of high detection speed, good real-time performance, and robust adaptability [5, 6]. Obstacle detection based on machine vision uses methods such as machine learning and feature matching to identify the obstacles in a target region based on a region-of-interest (ROI) partition and obtains the depth information of the obstacles using a vision ranging model [7-10]. Liu et al. [11] proposed the SSD (single shot multibox detector) model, which combines the precise localization of the Faster R-CNN candidate region proposals with the fast detection of YOLO, improving end-to-end target detection. Redmon et al. [12] proposed the YOLOv2 detection and recognition framework, optimized the detection accuracy, speed, and category coverage of the network model, and improved the running speed of the feature extraction network and the convergence speed of the model. Zhao et al. [13] proposed an integrated microscopic traffic flow model to describe human-driven vehicle maneuvers under interactions and verified against empirical data that the model can accurately describe the passing orders, paths, and speeds of interacting vehicles. Zhao et al.
[14] proposed a novel method to model the trajectories of vehicles in two-dimensional space and speed; the descriptive power, plausibility, and accuracy of the proposed model are investigated by comparing the calculated results under several cases, which can be solved from symmetry or analytically. This article proposes a detection method based on machine vision. Three monocular cameras are used to build a monocular-binocular vision switching system based on an artificial potential field for detecting the obstacle vehicles around the ego vehicle. The Kalman filter algorithm is used to locate, track, and predict the positions of the target vehicles. Monocular and binocular ranging models are established to calculate the distance, speed, direction, and other parameters of obstacle vehicles.
An accurate real-time prediction of vehicle trajectories at unsignalized intersections is a crucial part of traffic collision detection and early warning [15-18]. Formosa et al. proposed a centralized digital architecture that predicts traffic conflicts using deep learning. This model detects lane markings and tracks vehicles according to the images captured by a single front-end camera. The data are combined with traffic variables and calculated surrogate safety measures (SSMs) through the centralized digital architecture to develop a series of deep neural network (DNN) models for predicting traffic conflicts [19]. Huang et al. developed an extended driving risk estimation model considering the conflict between vehicles and pedestrians based on an artificial potential field, which projects the vehicles at the intersection into virtual lanes and measures the dynamic conflict risk using the minimum safety distance between conflicting vehicles in the virtual lane [20]. Rocha et al. developed a framework [21] that utilizes a self-organizing mechanism to group traffic conflicts based on the similar characteristics and contributing factors of traffic accidents. The causes and characteristics of different types of conflict events can be revealed based on the grouping results, allowing the traffic management department to formulate strategies to reduce the risk of collision. Qadri et al. captured road traffic information using a handheld digital camera and extracted vehicle information using a video playback and flow recording device. On this basis, they modeled cross-collision using a multiregression model. They analyzed the types of traffic conflicts at the unsignalized intersections of six intersecting road segments from the perspective of time, accident, and travel speed [22]. Zheng et al.
[23] validated the Bayesian hierarchical extreme value model developed for estimating crashes from traffic conflicts. The study shows that Bayesian hierarchical extreme value models significantly outperform the on-site models in terms of crash estimation accuracy and precision. This article focuses on unsignalized intersections and only considers the obstacle vehicles traveling in the right front and left front directions of the ego vehicle. A model based on trajectory prediction, collision zone, collision time, and safety distance is established based on machine vision detection to enable vehicle detection and safety assessment at unsignalized intersections. Thus, an early warning can be given to the driver before an accident actually occurs, allowing the driver to take braking measures in case of an emergency, reducing the occurrence of cross-collision accidents, and improving driving safety.

Vehicle Detection in Left and Right Front Directions Based on the Monocular-Binocular Vision Switching System
The cross-collision detection for vehicles is to determine whether there is a threat in the left and right front directions of the ego vehicle. There are many detection methods based on machine learning at present. The YOLOv5 model launched by Ultralytics in 2020 is implemented in the ecologically mature PyTorch framework and has the advantages of small volume, high precision, and fast speed, and the effectiveness of YOLOv5 has been widely recognized [24-26]. In this article, the detection of the obstacle vehicles in the left and right front directions of the ego vehicle is based on YOLOv5, and a monocular-binocular vision switching system is developed using an improved artificial potential field. Image information acquisition is the process of converting a three-dimensional object into a two-dimensional plane, which can be simplified as a pinhole camera model (see Figure 1). Assume that the effective focal length of the camera is f, the installation height of the camera is h, the pitch angle of the camera is z, and the origin of the image plane coordinate system (x0, y0) (the intersection of the image plane and the optical axis of the camera) is usually set to (0, 0). The intersection of the obstacle ahead and the road plane is P, and the coordinates of point P in the image plane coordinate system are (x, y). The horizontal distance d between point P and the camera is obtained through monocular camera ranging.
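The paper's equation (1) is not reproduced in the extracted text, so the following is a minimal sketch of the standard pinhole ground-plane ranging form implied by these definitions (focal length f, mounting height h, pitch angle, image row y); the function name and argument order are illustrative:

```python
import math

def monocular_ground_distance(y, f, h, pitch, y0=0.0):
    """Horizontal distance to a point on the road plane from its image row.

    Standard pinhole ground-plane model: the ray through image row y makes
    an angle pitch + atan((y - y0) / f) with the horizontal, and meets the
    road plane at d = h / tan(pitch + atan((y - y0) / f)).
    y, y0, f are in pixels; h is in metres; pitch is in radians.
    """
    angle = pitch + math.atan((y - y0) / f)
    return h / math.tan(angle)
```

Points lower in the image (larger y) subtend a steeper ray and therefore resolve to a shorter horizontal distance.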

Binocular stereo vision entails using two cameras to capture two images of the same scene from two perspectives, thus obtaining a disparity image containing three-dimensional information about the scene. By processing the images with the stereo rectification technique, the ordinate values of the same point on the two imaging planes can be equalized. Then, the distance information of the object can be calculated after the disparity is obtained. It is assumed that the left imaging plane and the right imaging plane are coplanar, the main optical axes are strictly parallel, Ol and Or are the focal points of the left and right cameras, respectively, and the focal lengths f of the two cameras are equal. Assuming that point P(X, Y, Z) is an arbitrary feature point in the three-dimensional scene, its imaging points in the camera coordinate systems on the left and right imaging planes are Pl(xl, yl) and Pr(xr, yr), respectively (the ordinate values yl and yr of the left and right imaging points are equal). Taking the left camera as the reference, we can translate Pr(xr, yr) (the imaging point of point P(X, Y, Z) on the imaging plane of the right camera) to the imaging plane of the left camera to obtain Pr'(xr', yr'); thus, |xl − xr| is the vision disparity D between the two cameras. The depth information Z can be derived according to the similar-triangle principle, as shown in equation (2): Z = fB/D. The principle of binocular stereo vision ranging is shown in Figure 2, where B is the baseline length of the binocular camera. The obstacle vehicles are identified using the images captured by the vision sensors, and their speeds are calculated using the interframe difference method. The frame rate of a general camera is greater than 25 frames/s, so the calculated speed can be close to the real speed. Since the ego vehicle is moving, the relative speed and absolute speed of the obstacle vehicle in the left and right front directions can
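The similar-triangle depth relation Z = fB/D can be sketched directly (the function name is illustrative; f and the image abscissas are assumed to be in consistent pixel units, and B in metres):

```python
def binocular_depth(x_l, x_r, f, B):
    """Depth of a rectified stereo point: Z = f * B / D, with disparity
    D = |x_l - x_r| (abscissas of the matched point in the left and
    right images). Returns Z in the units of B."""
    D = abs(x_l - x_r)
    if D == 0:
        raise ValueError("zero disparity: point at infinity")
    return f * B / D
```

A 20-pixel disparity with f = 800 px and a 0.5 m baseline gives a depth of 20 m, illustrating how depth falls off as disparity grows.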
be calculated based on the travel speed of the ego vehicle. Assuming that the moment when the obstacle vehicle first appears in the camera's field of view (FOV) is t1, the distance between the two vehicles at t1 is d1, the next moment is t2, the distance between the two vehicles at t2 is d2, and the interframe interval of the camera is t3, the time difference between the two moments and the distance difference between the two vehicles are Δt = t2 − t1 and Δd = d1 − d2 (3). The relative speed of the two vehicles is vr = Δd/Δt (4). The value of Δt should be determined on the premise of ensuring driving safety and high efficiency, and 10t3 is taken as the time interval. Plugging it into formula (4), we can calculate the relative speed of the two vehicles. The travel speed v0 of the ego vehicle is obtained by the speed sensor; combining it with the relative speed vr yields the travel speed of the obstacle vehicle (5). The driving direction angle α of the obstacle vehicle relative to the ego vehicle is calculated based on the x and y coordinates of the obstacle vehicle.
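The interframe speed estimate above can be sketched as follows. The sign convention for recovering the obstacle's absolute speed from the ego speed depends on the actual driving geometry, so the subtraction used here (v1 = v0 − vr for a closing configuration) is an assumption, as is the function name:

```python
def obstacle_speed(d1, d2, t3, v0, k=10):
    """Relative and absolute speed of an obstacle vehicle from the change
    in measured distance over k frame intervals.

    Per the text: dt = k * t3 (k = 10 frames), dd = d1 - d2, and the
    relative speed is v_rel = dd / dt. The obstacle's travel speed is
    then recovered from the ego speed v0 (closing geometry assumed).
    Distances in metres, t3 in seconds, speeds in m/s.
    """
    dt = k * t3
    v_rel = (d1 - d2) / dt
    v1 = v0 - v_rel
    return v_rel, v1
```

With a 25 fps camera (t3 = 0.04 s), ten frame intervals give a 0.4 s window: a gap closing from 30 m to 26 m yields a 10 m/s closing speed.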

Monocular-Binocular Vision Switching Strategy Based on Improved Artificial Potential Field

Considering the differences between monocular vision and binocular vision in perception accuracy and ranging accuracy at different distances, we developed a monocular-binocular vision switching system based on the improved artificial potential field. The monocular-binocular switching strategy is formulated in real time to enable rapid and accurate calculation of obstacle vehicle parameters, as well as effective tracking of obstacle vehicles. When cross-conflicts can be detected effectively, driving safety at unsignalized intersections under various driving conditions can be ensured. All cross-conflicts are assumed to occur within 135° to the left and 135° to the right of the longitudinal axis of the ego vehicle, the detection radius of the camera is r ≤ 55 m (r is the effective ranging depth of the camera), and the FOV = 150°.
Three cameras are mounted on the ego vehicle to ensure the coverage area of the camera FOVs and the effective switching between monocular and binocular cameras. In the FOV of each camera, some parts are covered by only one camera, while others are covered by two cameras (the overlap region). In the part of the FOV covered by a single camera, environmental perception is based on monocular vision. In the FOV overlap region of two cameras, the advantages of both monocular vision and binocular vision can be combined for more effective detection [27, 28]. The monocular-binocular vision switching strategy is formulated based on the real-time traffic situation at the intersection. The FOVs of the three cameras are shown in Figure 3.
The artificial potential field method is a kind of virtual force method [29-31]. The premise of establishing a repulsive force field is that the ego vehicle will purposefully avoid obstacle vehicles. However, during the actual driving process, the movements of the ego vehicle and the obstacle vehicle are not linked. Therefore, we improved the artificial potential field method. The intersection area of the gravitational field G0 between the ego vehicle and its target point and the gravitational fields Gn (G1, G2, G3, ...) between each obstacle vehicle and its target point is identified using the principle of an artificial gravitational field. Then, a real-time monocular-binocular vision switching strategy is developed based on the intersection area. Due to the difficulty of obtaining the target points of obstacle vehicles, the maximum distance that a vehicle can travel in a straight line within 3 seconds is used as the maximum length of the gravitational field, which is set to 50 m in this article according to the speed range of vehicles near an intersection.
Considering the requirements of the real driving environment, an overlap region W is defined according to the angle parameter α (the driving direction angle of the obstacle vehicle relative to the ego vehicle) and the intersection area of the artificial gravitational fields G0 and Gn, and the monocular-binocular vision switching strategy is formulated in real time according to the overlap region. In the single-camera area, the advantages of monocular vision, such as its small computing workload and high detection efficiency, can be exploited to detect obstacle vehicles, and the gravitational field G0 between the ego vehicle and its target point is established in real time. For the obstacle vehicles that appear in the FOV overlap region of two cameras, the artificial gravitational field Gn between each obstacle vehicle and its target point is established, and the overlap region W is determined according to the intersection of G0 and Gn. In the FOV overlap region, the distance and speed of the nearest vehicle are calculated, and whether to use binocular vision is decided according to the size of W. Gravity is linearly related to the distance d between the two vehicles: the greater the distance d, the greater the gravity. When the overlap region W > 0, it indicates that the obstacle vehicle has begun to threaten the normal travel of the ego vehicle. At this time, binocular vision is used to identify the obstacle vehicle and calculate its distance and speed. The switching process is shown in Figure 4; the process of determining the monocular-binocular vision switching strategy is shown in Figure 5.
The specific process is as follows:

(1) The three cameras mounted on the ego vehicle acquire the environmental information in their own FOVs. The monocular vision technology based on YOLOv5 is used to identify the obstacle vehicles. We calculate the distance d from the target vehicle to the camera by using similar triangles. The driving angle α of the obstacle vehicle relative to the ego vehicle is calculated according to the x and y coordinates of the obstacle vehicle relative to the ego vehicle, and β is calculated according to the angle α and the distance parameter d. An orthogonal coordinate system is established, in which the moving direction of the ego vehicle is taken as the positive direction of the X-axis, and the positive direction of the Y-axis is obtained by rotating the X-axis clockwise by 90°. The acute angle β between the moving direction of the centroid of the obstacle vehicle and the Y-axis in a time interval is determined according to the target point of the obstacle vehicle, and the gravitational field Gn between the obstacle vehicle and its target point is determined accordingly. The scene is shown in Figure 6.

(2) When α > 75°, monocular vision technology is used to identify and track the obstacle vehicle, as well as calculate its distance and speed.

(3) When α ≤ 75°, the gravitational field G0 is established, and the gravitational field Gn is established according to the driving direction β of the obstacle vehicle. When W < 0, monocular vision technology is used to track the obstacle vehicle and calculate its speed and distance. When W ≥ 0, binocular vision technology is used to track the obstacle vehicle and calculate its distance and speed.

(4) When the angle parameter α or W deviates out of its original range, real-time monocular-binocular vision switching is needed to enable high-precision tracking.

(5) For a new obstacle vehicle or an obstacle vehicle that reappears in the vision system, we repeat the above four steps.
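The decision rule in steps (2)-(3) can be sketched as a small function (the function name is illustrative; computing W itself from the gravitational fields G0 and Gn is outside this sketch):

```python
def choose_vision_mode(alpha_deg, W=None):
    """Monocular/binocular selection per steps (2)-(3) of the text.

    alpha_deg: driving direction angle of the obstacle relative to the
    ego vehicle, in degrees. W: signed size of the overlap region of
    the gravitational fields G0 and Gn, required when alpha <= 75.
    Returns "monocular" or "binocular".
    """
    if alpha_deg > 75:
        return "monocular"           # step (2): outside the overlap geometry
    if W is None:
        raise ValueError("W is required when alpha <= 75 degrees")
    return "binocular" if W >= 0 else "monocular"   # step (3)
```

Step (4) then amounts to re-evaluating this function whenever α or W crosses its boundary.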

Vehicle Motion State Estimation Based on Extended Kalman Filter

The position of the monitored vehicle at a certain moment in the future can be obtained based on a reasonable prediction of the trajectory of the vehicle. This provides a basis for tracking the obstacle vehicle [32-36].
Taking the centroid of the vehicle as the origin of coordinates, we can establish a three-dimensional coordinate system, in which the positive direction of the X-axis is the driving direction of the vehicle, the positive direction of the Y-axis is obtained by rotating the positive direction of the X-axis clockwise by 90°, and the positive direction of the Z-axis is the upward direction perpendicular to the XOY plane, as shown in Figure 7.
State and observation variables are determined using the x coordinate, the y coordinate, the longitudinal component vx of the travel speed, and the yaw angle θ of the driving direction of the vehicle. Assuming that the motion state of the vehicle is X = (x, y, vx, θ) and the observation state is Z = (x, y, vy, θ), we can obtain the position of the obstacle vehicle by predicting its x and y coordinates. We can obtain the state equation (8) of vehicle motion and the observation equation (9) of the vehicle motion system by ignoring the system noise and inputting the longitudinal acceleration ax and yaw rate wr of the vehicle, calculated by the environmental sensing system, as specific quantities, where Δh represents the actual sampling time (i.e., the time interval between two samples) and R is the observation noise, assuming that the position calculation error of the system is less than 1 m, the velocity calculation error is less than 2 m/s, and the acceleration calculation error is less than 1.3 m/s². During the actual driving process, the motion state of the obstacle vehicle may be moving straight ahead, turning left, or turning right, and it is also possible that a left or right turn is followed by moving straight ahead. Under the condition of moving straight ahead, the motion state is characterized by regular variation of the position parameter x, slight variation of the position parameter y, and slight variation of vx and vy in direction. Under the condition of left or right turning, obvious variations of the position parameters x, y, and vy, as well as θ, can be observed.
The vehicle is considered to be a particle. After the vehicle motion state is predicted using the extended Kalman filter algorithm, its motion state equation is modified, and the time period length of each prediction is changed in order to obtain the predicted value after a period of t (t represents the time difference between two predictions). The modified equation for vehicle motion is iterated to give the vehicle motion state equation at the moment T, where i takes values from 1 to n, and n = T/t, in which n is the iteration number of the state equation. We can change the value of n to predict the trajectory of the vehicle over a certain period of time.
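The n-step iteration of the state equation can be sketched as below. The paper's state-transition matrices are not reproduced in the text, so this uses a plain constant-acceleration, constant-yaw-rate discretisation of the state X = (x, y, v, θ) with inputs ax and wr; that model choice and the function name are assumptions:

```python
import math

def predict_state(x, y, v, theta, a_x, w_r, dt, n):
    """Iterate a simple kinematic motion model n times (n = T / t in
    the text) to predict the vehicle state after a horizon T = n * dt.

    x, y: position (m); v: longitudinal speed (m/s); theta: yaw (rad);
    a_x: longitudinal acceleration (m/s^2); w_r: yaw rate (rad/s).
    """
    for _ in range(n):
        x += v * math.cos(theta) * dt   # advance along the heading
        y += v * math.sin(theta) * dt
        v += a_x * dt                   # speed update
        theta += w_r * dt               # heading update
    return x, y, v, theta
```

Changing n trades prediction horizon against accuracy, exactly as the text describes for the iteration count of the state equation.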
The process of tracking the detected obstacle vehicle using the Kalman filter algorithm is as follows: First, the detected vehicle is marked by a rectangular frame, and the vehicle's position in the image is obtained. The center of the rectangular frame is taken as the centroid of the vehicle for marking, and the marked position is the state variable in the tracking process. Then, the algorithm judges whether the vehicles in different images are the same vehicle through feature matching. Finally, the motion state equation is continuously updated in order to determine the predicted position of the vehicle at a certain moment in the future. The tracking result is shown in Figure 8. The vehicle position is indicated by a yellow frame, and the vehicle tracking result is marked with a red frame. The experimental results show that the vehicle tracking method based on the Kalman filter algorithm can achieve a position matching degree of 94%, which satisfies the requirement for tracking vehicles at intersections.

Safety Assessment Based on Trajectory Prediction, Collision Zone, and Collision Time
After determining the accurate motion state and trajectory of the obstacle vehicle, the next step is to determine the cross-conflict point between the ego vehicle and the obstacle vehicle. The time-to-collision method is used to detect cross-collisions and assess the obstacle vehicle's safety. The times for the ego vehicle and the obstacle vehicle to enter and leave the collision zone are calculated, and whether a cross-collision occurs is judged according to the overlap W of the time periods during which the ego vehicle and the obstacle vehicle pass through the collision zone. The conflict level is determined based on the distance between the two vehicles when one of them reaches the cross-conflict point, and the safety of the obstacle vehicle is evaluated based on the conflict level. The specific process is shown in Figure 9.

Vehicle Trajectory Crossing Model Based on Motion State.
When there is the possibility of a cross-collision between the ego vehicle and the obstacle vehicle, their trajectories will intersect. In cross-conflicts, the X-shaped trajectory intersection is the most common type. An X-shaped trajectory intersection indicates that the smaller included angle between the extension lines of the driving directions of the two vehicles is an acute angle, i.e., the two lines form an X shape. Perpendicular trajectory intersection is a special case in which the included angle is 90°. Suppose c is the trajectory intersection angle of the ego vehicle and the obstacle vehicle. In this article, we performed cross-collision detection for X-shaped trajectory intersection conditions and established a safety assessment model (Figure 10). Suppose both the ego vehicle and the obstacle vehicle move along a straight line in a time period Δt. The position of the ego vehicle at the moment t1 is taken as the origin of coordinates o, the driving direction of the ego vehicle is the positive direction of the X-axis, and the positive direction of the Y-axis is obtained by rotating the positive direction of the X-axis clockwise by 90°. The positions of the ego vehicle are (x11, y11) and (x12, y12) at moments t1 − Δt and t1, respectively. The positions of the obstacle vehicle at the moments t1 − Δt and t1 are (x21, y21) and (x22, y22), respectively. The heading angle of the obstacle vehicle is the angle between the positive direction of the X-axis and the line connecting the positions of the obstacle vehicle and the ego vehicle at a certain moment. At the moments t1 − Δt and t1, the heading angles of the obstacle vehicle are θ1 and θ2, respectively, and the corresponding distances between the two vehicles are d1 and d2, respectively. The trajectory intersection point is denoted as (x0, y0). Thus, the positions of the obstacle vehicle at the moments t1 − Δt and t1 can be expressed in terms of d1, d2, θ1, and θ2, and the component of the obstacle vehicle's speed in the x direction is v2x = (x22 − x21)/Δt. When c = 90°, the perpendicular trajectory intersection model is established, as shown in Figure 11.
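The trajectory intersection point (x0, y0) defined above follows from a standard line-line intersection of the two straight trajectories, each fixed by a vehicle's positions at t1 − Δt and t1 (a sketch; the function name is illustrative):

```python
def trajectory_intersection(p11, p12, p21, p22):
    """Intersection (x0, y0) of the two straight-line trajectories.

    p11, p12: ego positions at t1 - dt and t1; p21, p22: obstacle
    positions at the same moments. Solved as a 2x2 linear system via
    Cramer's rule; returns None for parallel (non-crossing) paths.
    """
    (x11, y11), (x12, y12) = p11, p12
    (x21, y21), (x22, y22) = p21, p22
    dx1, dy1 = x12 - x11, y12 - y11   # ego direction vector
    dx2, dy2 = x22 - x21, y22 - y21   # obstacle direction vector
    det = dx1 * dy2 - dy1 * dx2
    if det == 0:
        return None                   # parallel: no X-shaped crossing
    s = ((x21 - x11) * dy2 - (y21 - y11) * dx2) / det
    return x11 + s * dx1, y11 + s * dy1
```

With the ego vehicle on the X-axis, the intersection ordinate is zero, matching the perpendicular special case of Figure 11.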

Cross-Conflict Judgment Based on Collision Zone and Collision Time.
We use an X-shaped trajectory intersection as an example to establish a model of the collision zone and collision time for the scenario in which the ego vehicle encounters a single obstacle vehicle. Assuming that the position of the intersection point is (x0, y0), we can substitute x = x0 and y = 0 into equation (14) to obtain the intersection point. The collision zone is a circular area with a radius of ρ, and its center is the intersection point (x0, y0). A cross-conflict will occur if the two vehicles appear in the collision zone simultaneously. T11, T12, T21, and T22 are the collision times, which are, respectively, the times it takes for the ego vehicle and the target vehicle to enter and exit the collision zone. Since the size of the vehicle cannot be ignored, the length and width of the vehicle should be taken into account when calculating the radius of the collision zone, as in formula (16). Assuming that the time from the driver's reaction to the moment the braking takes effect is 1 s, the vehicle travels c meters in that second. Thus, the modified collision zone radius is enlarged by c, as in formula (17), where v1 represents the speed of the ego vehicle and a and b are the length and width of the vehicle, respectively. The time periods for the ego vehicle and the target vehicle to occupy the collision zone are (T11, T12) and (T21, T22), respectively. The following inference can be made: when (T11, T12) ∩ (T21, T22) = ∅, one vehicle leaves the collision zone before the other enters it; thus, a cross-conflict will not occur.
Figure 12 is a schematic diagram of the collision zone and time at an intersection.
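The collision-zone construction and the interval test above can be sketched as follows. Formulas (16)-(17) are not reproduced in the extracted text, so the half-diagonal form for ρ is an assumption; the overlap test follows the (T11, T12) ∩ (T21, T22) condition directly:

```python
import math

def collision_zone_radius(a, b, v1, t_react=1.0):
    """Collision-zone radius: half the vehicle diagonal (accounting for
    length a and width b), enlarged by the distance c = v1 * t_react
    travelled during the 1 s driver reaction/brake-delay time.
    The half-diagonal form is assumed; the paper's formula (16) is not
    reproduced in the text."""
    rho = math.hypot(a, b) / 2.0
    return rho + v1 * t_react

def cross_conflict(t11, t12, t21, t22):
    """True when the ego's and obstacle's occupancy intervals of the
    collision zone overlap, i.e. (T11, T12) and (T21, T22) intersect."""
    return max(t11, t21) < min(t12, t22)
```

If one vehicle exits the zone before the other enters (e.g. T21 > T12), the intervals are disjoint and no cross-conflict is flagged.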

Vehicle Safety Assessment Based on the Safety Distance.
Assuming that the safety distance (the distance just long enough to avoid a collision if both drivers apply emergency braking) is DS, we can take DS as the critical value of the highest conflict level, where v represents the real-time speed of the vehicle arriving at the cross-conflict point, τ1 is the driver's reaction time, τ2 is the delay time of the braking system, and a1max represents the maximum braking deceleration of the vehicle. When (T11, T12) ∩ (T21, T22) ≠ ∅, a cross-conflict between the ego vehicle and the obstacle vehicle will occur. It is assumed that one vehicle arrives at the cross-conflict point first, and the distance between the other vehicle and the cross-conflict point is Dn. In the case of cross-conflict, two thresholds D1 and D2 are stipulated. When Dn ≤ D2, the conflict is a level-3 one; when D2 < Dn < D1, it is a level-2 one; and when D1 ≤ Dn, it is a level-1 one.
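The safety-distance equation is not reproduced in the extracted text, so the sketch below uses the standard stopping-distance form consistent with the listed quantities (v, τ1, τ2, a1max); that form, and both function names, are assumptions. The grading function follows the D1/D2 thresholds verbatim:

```python
def safety_distance(v, tau1, tau2, a_max):
    """Stopping distance used as the top conflict-level threshold:
    distance covered during the reaction time tau1 and brake delay
    tau2, plus the braking distance v^2 / (2 * a_max). A standard
    form, assumed here; v in m/s, times in s, a_max in m/s^2."""
    return v * (tau1 + tau2) + v * v / (2.0 * a_max)

def conflict_level(d_n, d1=16.7, d2=5.0):
    """Conflict grading from the text: Dn <= D2 -> level 3 (most
    severe), D2 < Dn < D1 -> level 2, Dn >= D1 -> level 1.
    Defaults D1 = 16.7 m, D2 = 5 m are the simulation thresholds."""
    if d_n <= d2:
        return 3
    if d_n < d1:
        return 2
    return 1
```

With the simulation thresholds, the 18 m gap of obstacle vehicle 2 in the results below grades as a level-1 conflict.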

Reliability Analysis of Monocular-Binocular Vision Switching System

In this article, a HAVAL H7 vehicle serves as the experimental vehicle, a RERVISION camera was used as the sensor, and a Core (TM) i7-9700 Lenovo notebook was used as the processing unit. The camera's optical center is 1.73 m above the ground. The procedure for verifying the reliability of the monocular-binocular vision switching strategy is as follows:

(1) Three RERVISION cameras are mounted on a HAVAL H7 vehicle, which travels at a certain unsignalized intersection and collects traffic video, as shown in Figure 13.

(2) The improved artificial potential field principle is programmed. The monocular-binocular vision switching system was implemented to process the captured images and calculate the motion states of the obstacle vehicles in real time. The artificial gravitational field was established to determine the value of W, and monocular-binocular vision switching was implemented based on the values of W and α. The selective monocular-binocular vision switching strategy was executed to process the captured images so as to identify the obstacle vehicle and calculate its distance, speed, driving angle, and other parameters.

(3) A nonselective vision switching system was set up. A program was used to process the captured images so as to identify the obstacle vehicle in the monocular vision area and calculate its distance, speed, driving angle, and other parameters.
In the overlap zone, only binocular vision was used for vehicle identification and parameter calculation.

(4) The experimental results obtained using the three methods were compared. The reliability of the monocular-binocular vision switching strategy proposed in this article was verified based on the performance evaluation of each method in detection accuracy, ranging accuracy, ranging speed, and camera detection area.
A camera frame was taken every 0.04 s to collect information about the obstacle vehicles around the experimental vehicle (the RERVISION camera operates at 25 frames per second). The proposed monocular-binocular vision switching strategy was used to identify each obstacle vehicle and calculate its distance, speed, driving angle, and other parameters. The calculated data, including the number, distance, speed, and driving angle of obstacle vehicles, were compared with the data measured by the on-board Lidar so as to assess the performance in detection accuracy, ranging accuracy, ranging speed, and camera detection area, thus verifying the feasibility of the proposed monocular-binocular vision switching strategy. Figure 14 shows the construction of the real-vehicle experimental platform, Figure 15 demonstrates the result of detecting a vehicle ahead of the ego vehicle, and Table 1 shows the distance calculation results obtained based on monocular and binocular vision.
The ranging accuracy values calculated in the monocular vision mode and the binocular vision mode were assessed using the ranging results of the Lidar as a benchmark. Table 1 demonstrates that the distance calculation method based on monocular vision can reach an average accuracy of 96.18% within 60 m. The method based on binocular vision can achieve an average accuracy of 96.63% within 60 m, which meets the accuracy level required for measuring the distances of moving targets in a real driving environment in real time.
The detection areas of the cameras under different vision types were calculated using the formula for sector area calculation based on the camera FOV. The detection accuracy and ranging accuracy of the monocular-binocular vision switching system are expressed as the average of the accuracy of monocular vision and the accuracy of binocular vision. Table 2 compares the experimental results of the various vision switching strategies, assuming that the actual distance is a, the calculated distance is b, and the ranging accuracy is 1 − |a − b|/a.
Table 2 demonstrates that the monocular-binocular vision switching system proposed in this article outperforms pure monocular vision in terms of detection accuracy and is superior to pure binocular vision in terms of detection area. In the performance of distance calculation, the proposed monocular-binocular vision switching system has higher ranging accuracy than pure monocular vision and higher ranging speed than pure binocular vision. Overall, the proposed monocular-binocular vision switching system meets the requirements for environment perception in the left and right front directions.

We calculate the collision radius ρ using formula (16) and calculate the values of (T11, T12) and (T21, T22). In the PreScan software, a driving scene model of an unsignalized intersection was developed to facilitate the simulation and analysis of the results of judging conflicts resulting from X-shaped trajectory intersections. The scene simulation is shown in Figure 16. There is only one obstacle vehicle in traffic conditions 1 and 2, and there are two obstacle vehicles in traffic condition 3. Among them, D1 = 16.7 m and D2 = 5 m. The simulation results are as follows.

Traffic Condition 1.
The speed of the ego vehicle is v0 = 60 km/h, the speed of the obstacle vehicle is v1 = 16 km/h, the initial distance from the ego vehicle to the cross-conflict point is set to 120 m, the initial distance from the obstacle vehicle to the cross-conflict point is 52 m, and the initial heading angle of the obstacle vehicle is β1 = 15°. The simulation results are as follows: T11 = 5.9 s, T12 = 8.5 s, T21 = 6.8 s, and T22 = 16.6 s, and the distance variation between the two vehicles during driving is shown in Figure 17. When T21 > T12, it indicates that the ego vehicle passes through the collision zone before the target vehicle and there is no danger of a collision. As shown in Figure 18, when the ego vehicle arrives at the cross-conflict point at 7.2 s, the distance between the two vehicles is 4.97 m. Figure 19(b) demonstrates that when obstacle vehicle 2 reaches the intersection point, the distance from the ego vehicle to the intersection point is 18 m, which constitutes a level-1 conflict. Figure 19 demonstrates the variations of the distances between the vehicles during the process of obstacle vehicles 1 and 2 passing the intersection points.
The simulation results of the three traffic conditions were compared with the theoretical calculation results. The comparison demonstrated that the cross-conflict judgment and safety assessment model based on trajectory prediction, collision zone, and collision time can effectively identify cross-conflicts and rate the severity of each conflict under the corresponding thresholds. The PreScan simulation results show that the effective recognition rate of cross-conflicts based on the collision zone and collision time exceeds 97%, and the recognition accuracy of the conflict level exceeds 95%, indicating that the intersection conflict judgment and safety assessment model is effective.

Conclusion
Aiming to solve the rarely studied problem of how to ensure traffic safety at unsignalized intersections, we developed a method to detect cross-conflicts between the ego vehicle and vehicles traveling in the left and right front directions based on machine vision and a safety assessment model. First, a monocular-binocular vision switching strategy based on an improved artificial potential field method was formulated, and a monocular-binocular vision environment perception system was built to realize effective detection of obstacle vehicles to the front sides of the ego vehicle. Then, a vehicle motion state estimation model and a vehicle trajectory prediction model based on the extended Kalman filter algorithm were constructed. On this basis, a cross-conflict detection and safety assessment model based on vehicle trajectory, collision zone, and collision time was developed. With a reasonable setting of conflict level thresholds, the method proposed in this article enables effective safety assessment for vehicles traveling at low speeds at intersections, provides a reasonable process for judging the conflict level in the traffic environment at intersections, and provides a basis for the takeover of the driving assistance system and for human-computer interaction of autonomous vehicles.
Detection of Vehicles in the Left and Right Front Directions Based on the Monocular-Binocular Vision Switching System. A monocular vision ranging model and a binocular vision ranging model are established to identify the vehicles in the left and right front directions of the ego vehicle and to calculate the distance d, travel speed v, and driving direction α.
The switching logic can be summarized as follows: when α ≤ 75°, a further judgment is made based on the improved artificial potential field method, in which the gravitational fields G0 and Gm are established and the magnitude of W is determined in real time, with W = 0 selecting monocular vision and W > 0 selecting binocular vision; otherwise, detection is based on monocular vision alone.
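This decision rule can be sketched as a small function. The threshold angle and the role of W come from the paper; the function, its signature, and the string labels are our own illustration, and the computation of W from the fields G0 and Gm is assumed to happen elsewhere:

```python
# Hypothetical sketch of the monocular-binocular switching rule.
# alpha_deg: driving direction of the detected vehicle (degrees);
# W: the quantity produced by the improved artificial potential field method.

def choose_vision_mode(alpha_deg: float, W: float) -> str:
    """Return 'monocular' or 'binocular' per the switching strategy."""
    if alpha_deg > 75:
        # Target well off to the side: monocular vision only.
        return "monocular"
    # alpha <= 75 deg: the potential-field quantity W decides.
    return "binocular" if W > 0 else "monocular"
```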

Figure 6: Schematic diagram of intersection scene at a certain moment.
A PreScan simulation experiment was conducted on the safety assessment model based on the collision zone. The parameters of the ego vehicle and the driving environment were set according to the driving data and environmental conditions obtained in a real vehicle experiment, and cross-conflict classification and safety level division were carried out following the procedure below: (1) We set the driving scene as an unsignalized intersection. (2) We set the travel speed and direction of the ego vehicle. (3) We set the initial position, driving direction, travel speed, and initial azimuth βn of each obstacle vehicle. (4) We obtain the length and width data of the ego vehicle and obstacle vehicles according to their models.

Figure 12: Schematic diagram of collision zone and collision time at an intersection: (a) condition of one obstacle vehicle; (b) condition of two obstacle vehicles.

Figure 14: The construction of the real vehicle experimental platform.
4.2.3. Traffic Condition 3. The speed of the ego vehicle is v0 = 40 km/h, the speed of obstacle vehicle 1 is v1 = 45 km/h, and the speed of obstacle vehicle 2 is v2 = 47 km/h. The distance from the ego vehicle to trajectory intersection point 1 is 50 m, the distance from obstacle vehicle 1 to trajectory intersection point 1 is 80 m, and the initial heading angle of obstacle vehicle 1 is 62°. The distance from the ego vehicle to trajectory intersection point 2 is 60 m, the distance from obstacle vehicle 2 to trajectory intersection point 2 is 50 m, and the initial heading angle of obstacle vehicle 2 is 43°. The simulation results for obstacle vehicle 1 are T11 = 2.5 s, T12 = 6.5 s, T21 = 6.26 s, and T22 = 9.74 s. The two time windows overlap for 0.24 s, indicating that a cross-conflict will occur. It can be seen from Figure 19(a) that when the ego vehicle reaches the intersection point, the distance from obstacle vehicle 1 to the intersection point is Dn = 36 m, which constitutes a level 1 conflict. The simulation results for obstacle vehicle 2 are as follows: T11 = 3.5 s, T12 = 7.4 s, T21 = 2.0 s, and T22 = 5.5 s. The two time windows overlap for 2.0 s, indicating that a cross-conflict will occur, and the collision radius is ρ = 4.97 m. Figure 19(b) demonstrates that when obstacle vehicle 2 reaches the intersection point, the distance from the ego vehicle to the intersection point is 18 m, which constitutes a level 1 conflict. Figure 19 demonstrates the variations of the distances between the vehicles as obstacle vehicles 1 and 2 pass the intersection points.
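The overlap durations quoted for traffic condition 3 can be reproduced from the simulated time windows. A minimal sketch (the helper name `overlap_duration` is ours):

```python
# Sketch: duration for which two collision-zone occupancy windows overlap.

def overlap_duration(t11: float, t12: float, t21: float, t22: float) -> float:
    """Length (s) of (t11, t12) ∩ (t21, t22); 0 if the windows are disjoint."""
    return max(0.0, min(t12, t22) - max(t11, t21))

# Obstacle vehicle 1: windows (2.5, 6.5) s and (6.26, 9.74) s overlap for 0.24 s.
d1 = overlap_duration(2.5, 6.5, 6.26, 9.74)

# Obstacle vehicle 2: windows (3.5, 7.4) s and (2.0, 5.5) s overlap for 2.0 s.
d2 = overlap_duration(3.5, 7.4, 2.0, 5.5)
```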

Figure 15: Result of detecting a vehicle ahead of the ego vehicle.

Figure 17: Variation of the distance between the two vehicles while passing through the cross-conflict point.
(1) When (T11, T12) ∩ (T21, T22) ≠ ∅, that is, T11 < T21 < T12 or T21 < T11 < T22, the two vehicles appear in the collision zone simultaneously, resulting in a cross-conflict. The ego vehicle must make an emergency braking maneuver if the available reaction time is limited. (2) When T11 > T22, the obstacle vehicle will cross the collision zone first, so a cross-conflict will not occur. (3) When T21 > T12, the ego vehicle will cross the collision zone first, so a cross-conflict will not occur.
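The three cases above reduce to a single interval-intersection test. A minimal sketch (the function name is ours; (T11, T12) and (T21, T22) are the collision-zone occupancy windows of the ego and obstacle vehicles):

```python
# Sketch: cross-conflict occurs iff the two occupancy windows of the
# collision zone intersect; otherwise one vehicle clears the zone first.

def cross_conflict(t11: float, t12: float, t21: float, t22: float) -> bool:
    """True when (t11, t12) and (t21, t22) have a non-empty intersection."""
    # Covers case (1); cases (2) (t11 > t22) and (3) (t21 > t12) yield False.
    return max(t11, t21) < min(t12, t22)

# Traffic condition 2: (4.3, 5.1) s vs. (4.4, 5.5) s -> conflict.
assert cross_conflict(4.3, 5.1, 4.4, 5.5)
```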

Table 1: Distance obtained by monocular and binocular vision.
4.2.2. Traffic Condition 2. The speed of the ego vehicle is v0 = 50 km/h, the speed of the obstacle vehicle is v1 = 36 km/h, the distance from the ego vehicle to the cross-conflict point is set to 65 m, the distance from the obstacle vehicle to the cross-conflict point is 60 m, and the initial heading angle of the obstacle vehicle is β1 = 67°. The simulation results are T11 = 4.3 s, T12 = 5.1 s, T21 = 4.4 s, and T22 = 5.5 s, and the radius of the collision zone is ρ = 4.57 m. The variation of the distance between the two vehicles during driving is shown in Figure 18. Since T11 < T21 < T12, a cross-collision between the two vehicles will occur. As shown in Figure 18, when the ego vehicle reaches the cross-conflict point, the obstacle vehicle is 14 m away from the cross-conflict point; thus, the conflict level of this obstacle vehicle is level 2.
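The ~14 m gap read from Figure 18 can be approximated from the initial distances and constant speeds alone. A minimal sketch (the helper name is ours; the small difference from the simulated 14 m is expected, since the simulation includes vehicle dynamics):

```python
# Sketch: obstacle vehicle's remaining distance to the conflict point at the
# moment the ego vehicle reaches it, assuming both hold constant speeds.

def remaining_distance_at_ego_arrival(d_ego_m: float, v_ego_kmh: float,
                                      d_obs_m: float, v_obs_kmh: float) -> float:
    """Obstacle's distance (m) to the conflict point when the ego arrives."""
    t_ego = d_ego_m / (v_ego_kmh / 3.6)          # ego arrival time (s)
    return d_obs_m - (v_obs_kmh / 3.6) * t_ego   # obstacle distance left (m)

# Traffic condition 2: ego 65 m away at 50 km/h, obstacle 60 m away at 36 km/h.
gap = remaining_distance_at_ego_arrival(65, 50, 60, 36)   # ~13.2 m, close to
                                                          # the ~14 m simulated
```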

Table 2: Comparison of experimental results obtained under different vision switching strategies.