An Open Data Platform for Traffic Parameters Measurement via Multirotor Unmanned Aerial Vehicles Video



Introduction
Traffic congestion is frequently encountered on ground roads and urban expressways [1]. Many advanced traffic control strategies and complex traffic behaviors have been proposed and modeled using traffic simulation models to minimize congestion and to enlarge or modify traffic networks [2,3]. For example, the cell transmission model (CTM) can determine optimal on-ramp metering rates, emergency dissipation responses, traffic density estimates, and congestion modes for a freeway [4,5]. Cellular automata (CA) are a flexible and powerful visualization tool used in urban growth simulations [6][7][8] with many appealing features, including simulating bottom-up dynamics and capturing self-organizing processes [7,8]. However, these optimization methods require reasonably accurate estimates of the relevant parameters [9]. Almost all traffic simulation models face the same difficulties, such as a lack of continuous, detailed, real-time data and a lack of frequent updates based on reliable, timely data, leading to inaccurate and improperly calibrated traffic simulation models with questionable results [10,11]. Thus, mismatches and discrepancies between predicted traffic situations (simulation model output) and actual traffic patterns occur. Since each traffic network component (segment) has distinct characteristics, one cannot use the same set of calibrated data and parameter values across all network components [11,12]. Component-specific calibration is only possible if the collected data are frequently updated and reliable, and data sources are readily available.
Classical methods to accumulate traffic data and estimate traffic parameters depend on induction loops and other on-site instruments [13]. However, these do not provide a comprehensive picture of two-dimensional (2D) traffic situations; their primary drawback is their limited ability to measure important traffic parameters and to accurately assess traffic conditions [14]. Detection through unmanned aerial vehicle (UAV) video image processing is among the most attractive alternative new technologies, offering opportunities to perform substantially more complex tasks and to provide more precise, accurate, and widespread traffic parameters than other sensors [15]. Recent advances in UAV technology and its use for traffic surveillance have allowed traffic planners to consider the "eye in the sky" approach to traffic monitoring, using detailed real-time data collection and processing to evaluate traffic patterns and to determine origin-destination flows and emergency responses [16][17][18][19]. Several research teams have focused on UAV applications for transportation engineering. The German Aerospace Center, the University of California at Berkeley, and Western Michigan University have proposed several methods to investigate the most effective ways of transmitting and analyzing UAV-acquired traffic data [20][21][22]. The University of South Florida, the University of Washington, and Linköping University have focused on the types of data and information that should be collected and extracted to design traffic simulation models and evaluate traffic networks [16,23,24]. UAVs are preferred over traditional technologies because of their mobility and lower operating costs relative to manned systems [25].

Open data test platform
Most aerial cameras are imperfect and exhibit a variety of distortions and aberrations. To guarantee aerial data accuracy, improvements in airborne camera calibration and image distortion correction algorithms are essential. Abdel-Aziz and Karara proposed a method based on direct linear transformation into object space coordinates for close-range photogrammetry; however, their method does not consider nonlinear distortion [26]. Zhang used a precise lattice template to optimize internal camera parameters from the template's physical coordinates and image coordinates, yielding a method that is simple, accurate, and flexible [27]. Heyden and Åström showed that automatic calibration of variable parameters is possible under certain conditions [28,29]. The advantage is that the algorithm may achieve high accuracy, provided the estimation model is good and correct convergence is achieved; however, since the algorithm is iterative, the procedure may settle on a bad solution unless a good initial guess is available. Espuny proposed an automatic linear calibration method for planar movement [30]. While automatic calibration is more flexible than traditional calibration methods, its accuracy is insufficient. This paper presents a simple and effective calibration method for intrinsic and external parameters together with a high-distortion model for camera lenses.
Simulation models require calibration based on the unique features and traffic patterns of specific networks, and the need for reliable, robust traffic state information has become increasingly urgent during the past decade. Multirotor UAVs differ from fixed-wing UAVs in their ability to hover at low or high altitudes and to focus on data collection from a specific link or intersection. This paper proposes an open data test platform for accumulating updated traffic data and for verifying and improving traffic simulation models (in terms of variable parameter values) by analyzing real-time aerial video, shown diagrammatically in Figure 1. To offer a reliable way of collecting spatial-temporal data, camera parameter calibration methods and image distortion correction problems are explored under aerial shooting conditions. A simplified camera calibration algorithm is proposed based on calculating the relationship between pixel and true length when the camera optical axis is perpendicular to the road. The most appropriate shooting altitudes were also determined. Large quantities of aerial video from the Shanghai inner ring expressway were collected, and traffic simulation models were calibrated using the collected data and traffic parameters.

Purpose of Open Data Platform
The objective of our open data platform is to provide real-world data sets with corresponding data descriptions, to be used as a resource assisting in the verification, validation, calibration, and development of existing traffic simulation models, such as car following, lane changing, gap acceptance, and queue discharge. Traffic simulation models have complicated data input requirements and many model parameters [31]. For example, building a microscopic simulation model for a given network requires two types of data. The first is the basic input data used for network coding of the simulation model. The second is the observation data employed for calibrating the model parameters and the simulation model. The data sets of our platform cover both types through the following collection programs. First, a freeway traffic data collection program includes both vehicle trajectory data and wide-area detector data for operational and tactical algorithm research.
Second, an arterial traffic data collection program includes both vehicle trajectory data and wide-area detector data for operational and tactical algorithm research.
Finally, a regional traffic data collection program includes both instrumented vehicle data and wide-area detector data for strategic algorithm research.
Despite the importance of UAV video traffic data for research on traffic flow theory, these data have proved to be heavily affected by measurement errors in the vehicles' spatial coordinates. If not properly accounted for, these errors would make the data sets unusable for any study of traffic theory. The accuracy of the computed true location of a vehicle is mainly influenced by the error in the UAV position (the maximum error is estimated below 4-5 meters for distances to the object of 80 meters), the springs in the camera platform suspension, and the object recognition algorithm used to automatically track the precise location (with an accuracy of one foot or less) of every vehicle on a subsecond basis. Existing efforts have relatively high vehicle detection and tracking failure rates, requiring some manual transcription to ensure appropriate vehicle capture rates. The current work presents a multistep procedure for reconstructing vehicle trajectories, which aims at eliminating the outliers that give rise to unphysical accelerations, decelerations, and jerks; smoothing out random disturbances in the data; and preserving the driving dynamics (vehicle stoppages, shifting gears during acceleration and deceleration) and the internal consistency of the trajectory, that is, the distance actually traveled [32]. In spite of these concerns, ongoing trajectory data collection efforts can make UAV video traffic data usable for studies on traffic flow theory.
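The multistep reconstruction described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: it assumes positions sampled at a fixed interval, an assumed acceleration threshold for outlier rejection, and a simple moving-average smoother.

```python
# Sketch of trajectory reconstruction: reject samples that imply unphysical
# accelerations, then smooth the remainder with a centred moving average.
# Thresholds, window sizes, and the sample data are illustrative assumptions.

def finite_diff(values, dt):
    """First-order finite differences of a uniformly sampled signal."""
    return [(values[i + 1] - values[i]) / dt for i in range(len(values) - 1)]

def reject_outliers(positions, dt, max_accel=8.0):
    """Replace points whose implied acceleration exceeds max_accel (m/s^2)
    with a linear interpolation of their neighbours."""
    speeds = finite_diff(positions, dt)
    accels = finite_diff(speeds, dt)
    cleaned = list(positions)
    for i, a in enumerate(accels):
        if abs(a) > max_accel:            # sample i+1 is the suspect one
            cleaned[i + 1] = (cleaned[i] + cleaned[i + 2]) / 2.0
    return cleaned

def moving_average(values, window=5):
    """Centred moving average that preserves the series length."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# Example: a 10 Hz trajectory at a steady 15 m/s with one spurious jump.
dt = 0.1
positions = [i * 1.5 for i in range(20)]
positions[5] += 4.0                        # injected measurement outlier
cleaned = reject_outliers(positions, dt)
smoothed = moving_average(cleaned)
```

A production pipeline would additionally enforce the internal consistency constraint mentioned above (that integrated speed matches the space actually traveled), which this sketch omits.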

Platform and Hardware
The DJI PHANTOM 2+ miniature UAV was employed, as shown in Figure 2, with the following characteristics.
(ii) Precision flight and stable hovering: an integrated GPS autopilot system offers position holding, altitude lock, and stable hovering, allowing the controller to focus attention on video production.
(iii) Automated return to home: if the vehicle loses its connection to the controller during flight, the failsafe protection activates, transmits a message to the controller (if the signal is strong enough), and automatically navigates home.
(iv) The flight path may be programmed from an iPad using the 16 waypoint ground station system, enabling the controller to shoot with more precision.
The camera employed was GoPro HERO3+ Black Edition, as shown in Figure 2, a commercially available "action" camera popular with athletes and extreme sports enthusiasts.
It is capable of recording smooth, high-definition video at resolutions from 720p to 4K at 15-120 frames per second (fps). Although the GoPro has no zoom capability, there are three field-of-view settings (wide, medium, and narrow), which allow the user to focus the camera on wider or narrower areas. The camera is Wi-Fi capable and can be operated using a Wi-Fi remote or via the free GoPro App on a smartphone or tablet.

Aerial Video Calibration
4.1. Pinhole Model. The basic principle of camera calibration is the pinhole imaging model, which describes the relationship between three-dimensional (3D) points in the camera coordinate system and their projections onto the imaging plane. The camera aperture, rather than the lenses used to gather light, is described as a single point. However, the pinhole model does not consider geometric distortion caused by the lens, blur caused by the limited aperture or lack of focus, or the discrete pixel coordinates of a real camera. Thus, the pinhole model constitutes a first-order approximate transformation for mapping 3D coordinates to 2D planar coordinates, with accuracy that depends upon camera quality. Accuracy decreases as lens distortion increases from center to edge.
In homogeneous coordinates, the pinhole camera projection is

s [u, v, 1]^T = A [r1 r2 r3 t] [X, Y, Z, 1]^T,

where [u, v, 1]^T and [X, Y, Z, 1]^T are the homogeneous image and world coordinates, respectively; s is a scaling factor; [r1 r2 r3 t] is the external parameter matrix converting between world and camera coordinates, in which [r1 r2 r3] is the rotation matrix and t is the world coordinate origin expressed in camera-centric coordinates; and A is the internal parameter matrix,

A = [α γ u0; 0 β v0; 0 0 1],

which includes five parameters associated with the lens focal length, the image sensor system, and the location of the principal point.
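As a concrete illustration of this projection, the following Python sketch maps a world point to pixel coordinates through assumed internal and external parameter matrices. All numeric values (focal lengths, principal point, altitude) are hypothetical, not the calibrated parameters of this study.

```python
# Pinhole projection: s * [u, v, 1]^T = A [R | t] [X, Y, Z, 1]^T.
# Every numeric parameter below is a made-up illustration value.

def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

# Internal parameter matrix A: focal lengths (pixels), skew, principal point.
A = [[1000.0, 0.0, 960.0],
     [0.0, 1000.0, 540.0],
     [0.0, 0.0, 1.0]]

# External parameters [R | t]: identity rotation with the camera 100 m above
# the road, i.e. the optical axis perpendicular to the road plane.
Rt = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 100.0]]

def project(world_point):
    """Project a 3D world point (X, Y, Z) to pixel coordinates (u, v)."""
    X, Y, Z = world_point
    camera = mat_vec(Rt, [X, Y, Z, 1.0])   # world -> camera coordinates
    s_u, s_v, s = mat_vec(A, camera)       # camera -> homogeneous pixels
    return s_u / s, s_v / s                # divide out the scale factor s

u, v = project((4.0, 0.0, 0.0))   # a point 4 m off-centre on the road plane
```

With these assumed values, a 4 m lateral offset on the road maps to a 40-pixel offset from the principal point, which is the pixel-to-metre proportionality the simplified algorithm of the next section exploits.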

Intrinsic Parameter Calibration and Distortion Correction.
Intrinsic parameter calibration adopts a camera calibration algorithm based on HALCON software [33]. The calibration plate is a 7 × 7 lattice with 12.5 mm dot pitch. HALCON recommends that the camera shoot at least eight images in different positions and poses and that the circle diameter in the final images exceed 10 pixels. To minimize calibration error, 50 images were taken at various positions and poses. Figure 3 shows example images, and Table 1 shows the final calibration results.
Nonlinear distortion during camera imaging can arise from several sources, including CCD manufacturing error, lens surface error, axial spacing between lens elements, and centration error. The LENZ distortion correction model was adopted to meet the precision requirements [34]:

u' = u / (1 + κr²),  v' = v / (1 + κr²),  r² = u² + v²,

where κ is the magnitude of the radial distortion. For the external calibration, there are nine unknown parameters in the projection matrix; fixing its last element to 1 leaves eight unknowns, and each of four sets of corresponding points contributes two linear equations, so eight equations yield an exact solution. Normalizing the matrix elements by the last one, the constraints of (7) are added, and Mx = b represents the resulting overdetermined system. The least squares method converts (8) into a determined system of nonlinear equations, which is solved by applying a quadratic approximation of the Taylor expansion and Newton's iteration on the Jacobian of each equation.

Experimental Verification and Analysis. To validate the required number of calibration points for the proposed method and to compare accuracy with and without the added constraints, proximal and distal points of a known scene were selected as verification points (these two sets of points were not included in the solution of the parameter matrix). The world coordinates from field measurements and the pixel coordinates after distortion correction were taken as true values, so the difference between the calculated and true positions was used to judge accuracy, as shown in Figure 5.

For a small number of calibration points, distal point differences are larger than proximal ones. The differences stabilize at seven or more calibration points without added constraints, and at only four points when constraints are added. The difference does not approach zero as measurement points increase; since multiple measurements reduce random error, this residual difference is systematic and arises mainly from internal matrix and distortion correction coefficient errors.
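The radial distortion correction can be sketched in a few lines. The single-parameter division form and the κ value below are illustrative assumptions, not the calibrated coefficients reported in Table 1.

```python
# Correct a radial distortion of the division-model form
#   u' = u / (1 + kappa * r^2),  v' = v / (1 + kappa * r^2),
# where (u, v) are image coordinates relative to the principal point.
# kappa < 0 gives barrel distortion; kappa > 0 gives pincushion distortion.
# The kappa value used below is made up for illustration.

def undistort(point, kappa):
    """Map a distorted image point to its corrected position (illustrative)."""
    u, v = point
    r2 = u * u + v * v
    factor = 1.0 / (1.0 + kappa * r2)
    return u * factor, v * factor

# Barrel distortion (kappa < 0) compresses the image edges, so correction
# moves an off-centre point slightly outward.
corrected = undistort((100.0, 50.0), kappa=-1.0e-6)
```

A point at the principal point is unchanged by the correction, since r² = 0 there; the displacement grows with distance from the centre, matching the observation that accuracy decreases from center to edge.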

Simplified Aerial Shooting Calibration
Algorithm. Measuring traffic parameters via UAV usually requires adoption of the horizontal downward viewing angle. However, when the camera imaging surface and the road plane are parallel to one another, camera calibration can be simplified: using the consistent ratio of image and world coordinates, length and speed parameters can be obtained quickly.
Transformation between the world coordinate system and the camera coordinate system is performed through the rotational matrix, R, and the translational matrix, T. The translational matrix, T, is a three-dimensional column vector, whereas the rotational matrix, R, is the product of the rotations about the x, y, and z axes:

R = R_x(α) R_y(β) R_z(γ).

When the camera imaging surface and the road plane are parallel to one another, α = 0 and β = 0, which simplifies R to a rotation about the z axis alone. Taking the first two columns of R together with T to form the 3 × 3 matrix of the previous section allows adoption of the simplified algorithm whenever the rotation about the x and y axes is zero or close to zero.
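Under this perpendicular-axis condition, the calibration reduces to a single pixel-to-metre ratio. The sketch below derives that ratio from flight altitude, focal length, and sensor width; the sensor width comes from the camera specification quoted later in this paper, while the focal length and all other numbers are hypothetical stand-ins, not this study's calibration.

```python
# With the optical axis perpendicular to the road, one scale factor maps
# pixels to metres: metres_per_pixel = H * sensor_w / (f * image_w).
# The focal length and example numbers are illustrative assumptions.

def metres_per_pixel(altitude_m, focal_mm, sensor_w_mm, image_w_px):
    """Ground distance covered by one pixel at the given flight altitude."""
    return altitude_m * sensor_w_mm / (focal_mm * image_w_px)

def pixel_speed_to_mps(px_per_frame, fps, scale_m_per_px):
    """Convert a tracked displacement in pixels/frame to metres/second."""
    return px_per_frame * fps * scale_m_per_px

# Example: 100 m altitude, assumed 3 mm focal length, 5.38 mm sensor width
# (GoPro HERO3+ spec), 1920 px image width.
scale = metres_per_pixel(100.0, 3.0, 5.38, 1920)
speed = pixel_speed_to_mps(3.0, 30.0, scale)   # 3 px/frame at 30 fps
```

In practice the ratio would be calibrated directly from an object of known length in the scene (as done with the 4 m crosswalk line in the next section) rather than computed from nominal lens data.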

Error Analysis of Simplified Algorithm.
To verify the accuracy of the simplified method under horizontal downward angle conditions, the same calibration object within a road environment was shot from different heights. The calibration object was a crosswalk line 4 m in length. The UAV hovered and shot for 1 min at 10 m intervals from 0 to 100 m. Example images are shown in Figure 6.
The camera mode was 16:9 wide at 1080p, and the image parameters were 1920 pixels (width), 1080 pixels (height), and a calibration parameter of 118.2. The light-sensitive sensor of the GoPro HERO3+ is 1/2.5 inch, corresponding to 5.38 mm × 4.39 mm. Fifteen sets of independent experiments were conducted at each height, with outcomes as shown in Figure 7.
The Kolmogorov-Smirnov method was used to verify whether the flight elevation data followed a normal distribution, with the significance level set at α = 0.05. Results indicate that the data are significantly normally distributed. A one-sample two-tailed t-test was then applied to each set of flight height data. From Figure 7 and Table 2, the error can be controlled within 0.3 m at elevations of 80-100 m, and the simplified algorithm results represent the true value below 100 m at significance level α < 0.1. Because camera vibration prevents the optical axis from being exactly perpendicular to the road surface, the length ratio of lower-elevation images was larger. Therefore, the appropriate flight elevation for the best accuracy from the simplified algorithm is 100 m.
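The one-sample two-tailed t-test applied to each altitude's measurements can be reproduced with a short routine. The measurement values below are invented for illustration (nominally around the 4 m crosswalk line), not the study's data.

```python
# One-sample t statistic for H0: the mean measured length equals mu0.
# The measurement list is hypothetical example data, not the study's.
import math
import statistics

def one_sample_t(sample, mu0):
    """Return the t statistic for H0: mean(sample) == mu0."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)          # sample standard deviation
    return (mean - mu0) / (sd / math.sqrt(n))

# Fifteen hypothetical measurements of the 4 m crosswalk line at one altitude.
measurements = [4.02, 3.97, 4.05, 3.99, 4.01, 3.96, 4.03, 4.00,
                3.98, 4.04, 4.01, 3.99, 4.02, 3.97, 4.00]
t_stat = one_sample_t(measurements, mu0=4.0)
# |t| below the two-tailed critical value t(0.975, df=14) ≈ 2.145 means the
# measurements are consistent with the true 4 m length at the 5% level.
```

The normality precondition would be checked first (e.g. with a Kolmogorov-Smirnov test, as in the paper) before interpreting the t statistic.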

Practical Application
The proposed open data platform was applied to measure traffic parameters of the Shanghai inner ring expressway at an on-ramp located near Renaissance Park (Figure 8). This section of the expressway suffers extreme congestion during peak hours, creating a major bottleneck that frequently produces traffic backups and accidents. Many traffic engineering researchers are interested in studying the unique features and driving behaviors associated with this specific network but lack accurate, large-scale data sets, partly because there are no conveniently located tall buildings from which traffic parameters could be collected using traditional digital cameras.
Approximately 160 minutes of aerial video were collected during peak traffic hours from the UAV flying at 100 m, with the extracted images covering 180 m of road. Vehicle space-time trajectories were extracted after camera calibration using the methods described in the previous section (Figure 9(a)). Traffic volume, density, and speed were also gathered, along with traffic parameters for the CA model; the lane (green background, Figure 9) is zoned into 28 cells (6 m/cell). Acceleration distribution characteristics were subsequently calculated (Figure 9(b)). These outputs allow the proposed open data platform to verify, calibrate, and develop macroscopic, mesoscopic, and microscopic traffic models, which can in turn simulate and evaluate traffic strategies for this specific network.
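The cell zoning used for the CA parameters can be illustrated with a small sketch: vehicle positions along the lane are binned into 6 m cells (28 cells, as in the study), and per-cell occupancy and mean speed give the local traffic state. The vehicle positions and speeds below are invented for illustration.

```python
# Bin vehicle positions along the road into fixed-length cells (6 m/cell,
# 28 cells, as in the CA model) and compute per-cell occupancy and mean
# speed. The vehicle data are hypothetical.

CELL_LEN = 6.0           # metres per cell
N_CELLS = 28             # cells covering the studied lane

def to_cell(position_m):
    """Index of the cell containing a longitudinal position."""
    return min(int(position_m // CELL_LEN), N_CELLS - 1)

def cell_state(vehicles):
    """vehicles: list of (position_m, speed_mps) tuples. Returns, per cell,
    a (vehicle count, mean speed or None) tuple."""
    counts = [0] * N_CELLS
    speed_sums = [0.0] * N_CELLS
    for pos, spd in vehicles:
        i = to_cell(pos)
        counts[i] += 1
        speed_sums[i] += spd
    return [(c, speed_sums[i] / c if c else None)
            for i, c in enumerate(counts)]

vehicles = [(3.0, 12.0), (9.5, 11.0), (10.2, 10.5), (160.0, 14.0)]
state = cell_state(vehicles)
```

Repeating this per video frame yields the occupancy and speed time series from which CA model parameters (e.g. acceleration distributions) can be estimated.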
To verify the quality of the recorded trajectories, another empirical study was conducted. A car equipped with a high-precision GPS system (accuracy better than 0.3 m) was used to collect an individual trajectory alongside UAV video under a peak-hour condition (8:00-8:15) and a free-flow condition (10:00-10:15), respectively. The recording frequency of both modes was 10 Hz.
Vehicle spatial-temporal trajectories, speeds, and accelerations can be extracted from UAV video based on the proposed camera calibration method and an existing moving target tracking algorithm. The detailed procedure is as follows. (i) First, both the intrinsic and external parameters of the camera are calibrated by the proposed method.
(ii) Then the noise caused by the unavoidable motion of the aircraft in all six degrees of freedom during the hovering phase is eliminated (hovering accuracy of the aircraft: vertical, 0.8 m; horizontal, 2.5 m; angle, 0.03°). Stationary and moving features in the frames are detected, matched, and categorized by the Scale Invariant Feature Transform (SIFT) [35]. The stationary features are then used to calibrate model parameters and complete image registration [36].
(iii) The Camshift algorithm is used to track each moving target, and a series of position coordinates is extracted [37].
The resulting speed and acceleration profiles are shown in Figure 10. The GPS and UAV curves are in good agreement under both the peak-hour and free-flow conditions.
The resulting frequency plot of relative errors under the peak-hour condition is shown in Figure 11. The mean and standard deviation of the relative error are 1.7049% and 3.7619%, respectively, and the relative errors are far from normally distributed. This is confirmed by the normal probability plot in Figure 11(b), which reveals that the distribution of relative errors deviates from normality, especially under 25%, where the biggest outliers occur. The mean of the relative errors is significantly different from zero (as confirmed by a t-test at the 5% significance level), a symptom of a systematic error component, possibly caused by errors in the GPS system and in the vehicle extraction algorithms. Nevertheless, the collected UAV video spatial-temporal data can support microscopic traffic flow theory research within admissible error.
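The relative-error statistics summarized above can be computed as follows; the matched GPS and UAV speed samples here are invented example values, not the study's measurements.

```python
# Per-sample relative error between matched GPS (reference) and UAV
# (measured) speeds, summarised by mean and standard deviation as in the
# study's error analysis. The speed samples are hypothetical.
import statistics

def relative_errors(reference, measured):
    """Per-sample relative error in percent, skipping near-zero references."""
    return [100.0 * (m - r) / r
            for r, m in zip(reference, measured) if abs(r) > 1e-6]

gps_speed = [10.0, 10.5, 11.0, 10.8, 9.9]    # m/s, invented reference data
uav_speed = [10.2, 10.4, 11.3, 10.7, 10.1]   # m/s, invented measurements
errs = relative_errors(gps_speed, uav_speed)
mean_err = statistics.mean(errs)
std_err = statistics.stdev(errs)
```

A mean error significantly different from zero (testable with the one-sample t-test shown earlier) would point to a systematic component rather than pure random noise.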

Conclusion
An open data platform for traffic parameter collection and for traffic model verification and development was proposed, incorporating multirotor UAV video, which can provide large-scale, comprehensive pictures of 2D traffic situations. UAV video image processing is among the most attractive alternative new technologies, offering opportunities to perform substantially more complex tasks and to provide more precise, accurate, and extensive traffic parameters than other sensors. The UAV platform used for this study was a DJI PHANTOM 2+ with a GoPro HERO3+ Black Edition camera.
Camera parameter calibration and image distortion correction algorithms were developed for aerial shooting conditions to offer a reliable and accurate method for collecting spatial-temporal data for traffic model calibration. For internal camera calibration, the LENZ model was employed

Figure 1: Proposed open data platform for traffic simulation model tests.

(i) Basic Input Data. Basic input data include network geometry, transportation analysis zones, travel demand, and traffic detection systems.

(ii) Data for Model Development, Improvement, and Validation. After extensive research into traffic data with potential use in microsimulation model development, improvement, and validation, this platform plans a data collection program that includes the following.

Figure 3: Example calibration pictures at different positions.

Figure 4: Aerial image comparison before and after calibration.

Figure 5: Calculated and true value for distal and proximal points.

Figure 6: Images of crosswalk line under horizontal downward angle conditions.

Figure 7: Simplified algorithm calculations of the true value of crosswalk line images.

Figure 8: On-ramp located near Renaissance Park in Shanghai.

Figure 9: Traffic parameters collected by aerial video.


Figure 10: Speed (a, c) and acceleration (b, d) profiles of GPS and UAV in peak hour and free flow, respectively.
Figure 11: Observed relative error (mean = 1.7049%, std = 3.7619%), sampled every 0.1 s.

Negative values of the radial distortion coefficient indicate barrel distortion and positive values indicate pincushion distortion; (u, v) is the original image coordinate point and (u′, v′) is the corrected image coordinate point. Knowing the distortion parameters after calibration is crucial to the correction process, as demonstrated in Figure 4.

Table 2: One-sample two-tailed t-test for each set of flight height data.