Designing an Efficient Emergency Response Airborne Mapping System with Multiple Sensors

Multisource remote sensing data have been extensively used in disaster and emergency response management. Different types of visual and measured data, such as high-resolution orthoimages, real-time videos, accurate digital elevation models, and three-dimensional landscape maps, can support the production of effective rescue plans and aid the efficient dispatching of rescuers after disasters. Such data are generally acquired using unmanned aerial vehicles equipped with multiple sensors. Compared with traditional application scenarios, emergency response demands more efficient, real-time access to data. In this study, an efficient emergency response airborne mapping system equipped with multiple sensors was designed. The system comprises groups of wide-angle cameras, a high-definition video camera, an infrared video camera, a LiDAR system, and a global navigation satellite system/inertial measurement unit. The wide-angle cameras provide a visual field of 85° × 105°, facilitating efficient operation of the mapping system. Numerous calibrations were performed on the constructed system. In particular, initial calibration and self-calibration were performed to determine the relative poses between the different wide-angle cameras so that all acquired images could be fused. The mapping system was then tested in an area with altitudes of 1000–1250 m. The biases of the wide-angle cameras were small (0.090 m, −0.018 m, and −0.046 m in the x-, y-, and z-axes, respectively). Moreover, the root-mean-square error (RMSE) along the planar direction was smaller than that along the vertical direction (0.202 and 0.294 m, respectively). The LiDAR system achieved small biases (0.117, −0.020, and −0.039 m in the x-, y-, and z-axes, respectively) and a smaller RMSE in the vertical direction (0.192 m) than the wide-angle cameras; however, its RMSE along the planar direction (0.276 m) was slightly larger. The proposed system shows potential for use in emergency response for efficiently acquiring data such as images and point clouds.


Introduction
Remote sensing is useful for acquiring various types of visual and measured data, such as high-resolution orthoimages, real-time videos, accurate digital elevation models (DEMs), and three-dimensional (3D) landscape maps [1,2], thus providing support for decision-making in disaster response [3][4][5]. Moreover, these data can be employed to plan effective rescues and assist the efficient dispatching of rescuers shortly after a disaster [6][7][8]. Unmanned aerial vehicle (UAV) systems equipped with multiple sensors are used to rapidly acquire such data because they can be deployed in remote areas that would otherwise be inaccessible. Moreover, such systems can be deployed rapidly and flexibly, which is crucial for the dynamic monitoring of disasters and accidents [9][10][11][12][13][14][15].
Recently, sensor technology has advanced rapidly. Modern remote sensors (such as high-definition video cameras, digital cameras [16], and small, lightweight LiDAR systems) can be integrated with other sensors and easily mounted on airborne platforms, thus affording efficient mapping systems. A digital camera, a small laser scanner, and a low-cost global positioning system (GPS)/inertial measurement unit (IMU) were integrated and mounted on a mini unmanned helicopter to develop a colored point cloud model from which the 3D geometric and textural information of objects can be easily extracted [17]. Moreover, a flexible, lightweight, rapid mapping system was mounted on a large unmanned helicopter for emergency response [9]. This system comprised a digital camera, a laser scanner, and GNSS/IMU sensors and could acquire high-quality DEMs and orthoimages. Researchers have also reported [18,19] a UAV system, known as LiCHy, comprising four main units: a LiDAR, charge-coupled device cameras, a hyperspectral sensor, and a GPS/IMU. Such systems can obtain multiple types of accurately georeferenced observation data [20].
Studies have demonstrated that airborne multisource remote sensing systems are efficient, safe, and affordable and can be used to collect and deliver multiple types of georeferenced data [21]. Such systems can also ensure that the disaster response community has rapid and timely access to accurate and relevant geospatial data during a disaster [22][23][24]. However, the performance of systems equipped with a single camera is limited in several respects. First, such systems have narrow fields of view, hindering the efficiency of image acquisition. Second, their storage speed and capacity are low. Furthermore, relying on a single camera is inherently unreliable, as it may malfunction during operation [25]. To overcome these limitations, in this study, we developed an emergency response airborne mapping system equipped with multiple sensors. This system integrates wide-angle cameras, a LiDAR system, and a video camera into a single platform that can be installed on a large fixed-wing UAV.

Mapping System Design.
The proposed system comprises three groups of wide-angle cameras, a high-definition (HD) video camera, an infrared (IR) video camera, a LiDAR system, and a global navigation satellite system (GNSS)/IMU (Figure 1). The groups of wide-angle cameras were designed to efficiently acquire image data, from which high-resolution orthoimages [27], accurate DEMs, point clouds, and 3D models can be produced. HD images were captured in real time using the HD video camera, particularly during the day, whereas IR images were obtained at night using the IR video camera, which is an active measurement sensor. Point cloud data were acquired using the LiDAR system to quickly calculate the disaster earthwork volume; this is particularly important under poor visibility conditions or at night. In addition to navigation, the GNSS/IMU provided the initial poses for the images acquired by the wide-angle camera groups and for the LiDAR scans.

Wide-Angle Camera Groups.
The three groups of wide-angle cameras are the most important components of the mapping system. Two of the groups each consisted of five Canon 5DSR cameras with a visual field of 60° × 105° and a focal length of 50 mm (Figures 1-3). These groups produced images with 200 million pixels (9216 × 20928). If one of the two groups malfunctioned, the other could still obtain images with a forward overlap exceeding 60%, the nominal average for most mapping projects. Additionally, if both groups worked well, they could be merged to form a single group yielding images with 400 million pixels and a visual field of 85° × 105°; this wide visual field increases the mapping efficiency of the system. The third group consisted of two Canon 5DSR cameras with a focal length of 24 mm, affording a visual field of 100° (forward) × 72° (side) and images with 100 million pixels. This group provided the system with a double-redundant imaging capability and also supported mutual geometric calibration with the other two groups, thus improving the reliability of the system.
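As a rough illustration of why the merged 85° field matters, the ground swath covered at a given flying height can be sketched as follows (a flat-terrain, nadir-looking approximation; the 1000 m height is an illustrative value, not a system specification):

```python
import math

def swath_width(agl_height_m: float, fov_deg: float) -> float:
    """Ground footprint of a camera field of view at a given height
    above ground. Simple flat-terrain, nadir-looking approximation;
    the real imaging geometry (lens distortion, platform tilt) is
    more involved."""
    return 2.0 * agl_height_m * math.tan(math.radians(fov_deg) / 2.0)

# At an illustrative 1000 m above ground, widening the field from
# 60 deg (one camera group) to 85 deg (merged groups) enlarges the swath:
single = swath_width(1000.0, 60.0)   # ~1155 m
merged = swath_width(1000.0, 85.0)   # ~1833 m
```

Wider swaths mean fewer flight lines for the same coverage, which is the efficiency gain the merged configuration targets.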

Other Sensors.
The LiDAR system was used to collect point cloud data for quickly calculating the disaster earthwork volume, particularly at night and under poor visibility conditions. These point cloud data could also be combined with the wide-angle camera images to create 3D models. To match the view of the point cloud data to that of the wide-angle cameras, a wide-field-view laser scanner (A-Pilot AP-3500, Sure Star) was used. This type of system has proved highly effective in airborne topographic surveys of mountainous regions [28].
To ensure the time synchronization of the moving sensors, as is common when combining LiDAR and photography [29], a position and orientation system (POS) was used to provide direct position and orientation data, and all sensors used the GNSS clock as a time reference. Moreover, to ensure that all sensors shared the same geographic coordinate system, all sensor coordinates were transformed into the coordinate system of the GNSS/IMU. The integrated POS NovAtel SPAN® UIMU-LCI system was used for navigation (GNSS, 2015). Further details on the instrument specifications are given in Table 1.

UAV Platform.
A large fixed-wing "Harrier" UAV (Figure 4), which can carry various types of payloads and fly all day in all-weather conditions [30], was used herein. The main specifications of this platform are given in Table 2. The Harrier is a medium-to-high-altitude, low-speed, long-endurance UAV system based on mature military UAV systems. It uses wheeled takeoff and landing, entire-process automatic control, line-of-sight data links, and combined navigation techniques. Equipped with image reconnaissance and monitoring systems, signal detection systems, and multipurpose functions, Harrier UAVs are easy to operate and highly reliable; they also have low maintenance requirements and a long service life. A Harrier UAV comprises the vehicle, survey and control, information transmission, mission payload, and integrated support subsystems.

Emergency Response.
Automatic route-planning software was developed to generate the flight route automatically based on a preloaded DEM and the latitude and longitude of the study area (Figure 5). Moreover, if the flight speed or altitude changed, the system automatically recalculated the exposure interval necessary to ensure that the degree of data overlap met the requirements.
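The overlap-driven exposure interval described above can be sketched as follows (a simplified model with hypothetical example values; the actual route-planning software additionally accounts for terrain relief via the preloaded DEM):

```python
def exposure_interval_s(along_track_footprint_m: float,
                        forward_overlap: float,
                        ground_speed_mps: float) -> float:
    """Maximum time between exposures that still achieves the requested
    forward overlap (e.g. 0.6 for the 60% typical of mapping projects).

    The platform may advance at most (1 - overlap) of the along-track
    image footprint between consecutive exposures.
    """
    advance_m = (1.0 - forward_overlap) * along_track_footprint_m
    return advance_m / ground_speed_mps

# Hypothetical example: 1000 m along-track footprint, 60% overlap,
# 40 m/s ground speed -> one exposure every 10 s
interval = exposure_interval_s(1000.0, 0.60, 40.0)
```

Halving the flying height halves the footprint and therefore halves the interval, which is why the software recomputes it whenever speed or altitude changes.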

Data Processing.
The complete data-processing workflow is shown in Figure 6. After the UAV landed, the acquired data were exported to a computer. First, the GPS and IMU data were processed to obtain position and orientation information. Based on the time synchronization of the UAV systems, a position and orientation were assigned to each frame of the video data. Using the TerraSolid software, the GPS/IMU navigation data were employed to calibrate the acquired laser scanner data and obtain the 3D spatial coordinates (x-, y-, and z-axes) of the ground targets. This processing was also used to eliminate noise and abnormal values in the laser scanner data and subsequently perform multiple-route splicing. The point cloud data of the entire survey were output in the LAS format. The single images obtained using the different cameras were merged into combined wide-angle images (Figure 3 shows the wide-angle images produced by the first, second, and third camera groups, which can be merged to produce a single high-definition image [26]). Finally, the position and orientation data were used to geocode the video data, LiDAR data, and wide-angle images into a common geographic coordinate system.
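The assignment of a position to each video frame via the shared GNSS clock can be sketched as follows (an illustrative linear-interpolation helper with hypothetical timestamps and fix rates, not the system's actual code):

```python
from bisect import bisect_left

def interpolate_position(t: float, times: list, positions: list) -> tuple:
    """Linearly interpolate a POS trajectory to the timestamp of a
    video frame so the frame can be tagged with a position. Because all
    sensors share the GNSS clock, timestamps are directly comparable.

    `times` must be sorted; `positions` are (x, y, z) tuples. Attitude
    angles would be interpolated analogously (with care for angle
    wrap-around); frames outside the trajectory are clamped to the ends.
    """
    i = bisect_left(times, t)
    if i == 0:
        return positions[0]
    if i >= len(times):
        return positions[-1]
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0)
    p0, p1 = positions[i - 1], positions[i]
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

# Hypothetical example: POS fixes at t = 0 s and t = 1 s,
# frame timestamp t = 0.5 s -> midpoint of the two fixes
pos = interpolate_position(0.5, [0.0, 1.0],
                           [(0.0, 0.0, 1000.0), (40.0, 0.0, 1000.0)])
```

POS fix rates (e.g. 200 Hz for the IMU) are far higher than video frame rates, so linear interpolation between neighboring fixes is usually adequate.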

Mapping System Calibration.
The basic principle of combined wide-angle imaging is to use reimaging technology to form a single-center wide-angle image equivalent to one obtained using a multidirectional lens (or multiple cameras whose main optical axes have different orientations). The process is shown in Figure 7. The reimaging technology can be divided into two major steps: the initial calculation and the precise calculation. The initial calculation refers to the distortion correction of each camera based on ground calibration parameters and the projection transformation of the interior orientation elements of each camera relative to the virtual combined camera. All these calculations and transformations are based on classical photogrammetric formulas. Herein, the initial calculation was performed to calibrate the equipment parameters, including the distortion parameters of the individual cameras and the interior orientation elements of the virtual combined camera. The precise calculation forms the core of the proposed method and includes a static self-calibration. To calibrate the deformation errors of the camera system caused by mechanical processing, installation, temperature, and material fatigue, we adopted a static self-calibration method for the portion-by-portion calibration of the images obtained by the individual cameras. This method uses the parallax of the overlapping parts of adjacent images (Figure 8). The self-calibration is based on equation (1), in which f represents the focal length; i and j denote adjacent images; Δx_ij and Δy_ij represent the amounts of parallax in the overlapping area of adjacent images i and j, respectively, obtained by image matching; and Δs_x, Δs_y, Δs_z, Δφ, Δω, and Δκ are the corrections to the exterior orientation elements of the individual cameras relative to the virtual combined camera. Each parallax point (Δx_ij, Δy_ij) in each pair of adjacent images (i, j) can be inserted into equation (1).
The corrections for the individual cameras can then be obtained by solving all the parallax equations simultaneously.
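Equation (1) itself is not legible in this copy; under the standard photogrammetric linearization, such parallax observation equations take the following general form (a sketch of the conventional structure, not necessarily the authors' exact formulation):

```latex
\begin{bmatrix} \Delta x_{ij} \\ \Delta y_{ij} \end{bmatrix}
= A_{ij}\,
\begin{bmatrix}
\Delta s_x & \Delta s_y & \Delta s_z & \Delta\varphi & \Delta\omega & \Delta\kappa
\end{bmatrix}^{\mathsf{T}}
```

where A_ij is the 2 × 6 matrix of partial derivatives of the collinearity equations with respect to the exterior orientation corrections, evaluated at the matched image point (and hence a function of f and the image coordinates). Stacking one such pair of equations per parallax point yields an overdetermined linear system that is solved by least squares to obtain the corrections.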
Because of the influence of various factors, when the cameras are used in combination, it is difficult to ensure that all cameras expose simultaneously. The minute differences between successive exposures produce parallax in the overlapping images. This parallax can be expressed as a function of the exterior orientation elements of the airborne platform; thus, the images obtained by each camera can be calibrated dynamically. The mathematical relation between the exterior orientation elements and the motion parameters of the airborne platform is described by equation (2) [26], in which Qs_xi, Qs_yi, Qs_zi, Qφ_i, Qω_i, and Qκ_i represent the exterior orientation elements of camera i relative to the virtual combined camera (obtained from the static calibration); Vs_x, Vs_y, Vs_z, Vφ, Vω, and Vκ constitute the exterior orientation information of the airborne platform flying at a given velocity; and dΔs_xi, dΔs_yi, dΔs_zi, dΔφ_i, dΔω_i, and dΔκ_i denote the increments in the exterior orientation elements owing to the exposure time delay ΔT_i in the flying state.

International Journal of Optics

Intense image motion will occur during the exposure time, particularly when the motion exceeds half a pixel because the flight speed is too high or the flying height is too low. A formula for calculating the IMC was proposed in [31], in which IMC is the true rate of image movement (millimeters per second), v is the true ground speed (meters per hour), f is the focal length (millimeters), and h is the altitude above the terrain (meters).
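The formula from [31] is not reproduced legibly here; the standard forward-image-motion relation consistent with the variables listed above is (a reconstruction, with the unit handling made explicit):

```latex
\mathrm{IMC} = \frac{v \, f}{h}
```

With v in m/s, f in mm, and h in m, IMC is obtained directly in mm/s; a ground speed quoted per hour must first be divided by 3600 for the units to come out in mm/s.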

As shown in Figure 9, τ is the exposure time; x_1, x_2, …, x_τ are the exposed pixels in the flight direction; g_1, g_2, …, g_τ are the corresponding gray values of these pixels; and L_1 is the unit record of x.

Study Area.
The study area was located in Pingba District, Anshun City, in the west-central part of Guizhou Province, China (Figure 10). This is a steep, mountainous area lying at the watershed between the Yangtze River and Pearl River systems on the eastern side of the Yunnan-Guizhou Plateau. The geological structure of the area is complex, and its karst landforms are unique. The terrain altitude ranges from 1102 to 1695 m, being generally higher in the northwest and lower in the southeast; the area is mainly mountainous, with an average elevation of 1282 m. The study area has a humid subtropical monsoon climate with an annual average temperature of 14°C and an average annual precipitation of 1146.3 mm. On average, there are 1276 h of sunshine annually. Winds are light, with an annual average wind speed of 2.4 m/s. The area to be imaged was located between 105.83°E and 106.52°E and between 26.23°N and 26.64°N, covering approximately 700 km².

Evaluation Methods of the Mapping System.
The groups of wide-angle cameras and the LiDAR system are the most important sensors in the proposed mapping system. Therefore, we evaluated the accuracy of these sensors by comparing their estimates of the locations of a set of ground control points. The locations of 89 control points were determined using the GPS real-time kinematic (RTK) technique. The observations obtained using the sensors were evaluated in terms of the deviation (bias) and the root-mean-square error (RMSE) along the planar direction (RMSE_xy) and the vertical direction (RMSE_h), where x_i, y_i, and h_i are the measured values along the x-, y-, and z-axes, respectively; x_ir, y_ir, and h_ir are the corresponding reference values determined using the GPS RTK technique; v_i is a measured value in one of the three directions; v_ir is the corresponding reference value; and n is the total number of measured values.
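The bias and RMSE equations were lost in extraction; from the definitions above they can be reconstructed as the standard formulas (a reconstruction, not a verbatim copy of the original equations):

```latex
\mathrm{bias} = \frac{1}{n}\sum_{i=1}^{n}\left(v_i - v_{ir}\right), \qquad
\mathrm{RMSE}_{xy} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\Bigl[(x_i - x_{ir})^2 + (y_i - y_{ir})^2\Bigr]}, \qquad
\mathrm{RMSE}_{h} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(h_i - h_{ir}\right)^2}
```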

Results
After the proposed system had been flown over the test area, the experimental data were processed to obtain digital orthoimages, a wide-angle camera-based point cloud, and a LiDAR-based point cloud (Figure 11). To test the accuracy of the measurements obtained using the wide-angle cameras, after aerial triangulation, the locations of the ground control points were determined from the triangulation and compared with the reference locations (Table 3). The deviations in the measured values are small, with little variation; the exception is the vertical direction, for which the RMSE is larger (0.294 m). As shown in Figure 12, the measurement errors along the x- and y-axes are less than 0.4 m, whereas the maximum error in the vertical direction is nearly 1 m. Numerous ground control points (GCPs), surveyed using GNSS, were laid out to check the accuracy of the airborne mapping system; their distribution is shown in Figure 13. The results (Table 3) show that the accuracies of the wide-angle cameras and the LiDAR system are reliable, although the RMSEs along the horizontal plane and the vertical direction differ. The horizontal RMSE of the LiDAR system is larger than that of the wide-angle cameras, whereas its vertical RMSE is smaller. As shown in Figure 14, the LiDAR measurement error along the horizontal plane is less than 0.5 m, and that along the vertical direction is less than 0.6 m. These results indicate that the LiDAR system performs better in the vertical direction, whereas the wide-angle cameras perform better in the horizontal direction.

Conclusions
In this study, a wide-angle emergency response airborne mapping system equipped with multiple sensors was proposed. Four main sensors were integrated on a large fixed-wing platform: groups of wide-angle cameras for obtaining high-resolution, wide-field-of-view RGB color images; an HD video camera for capturing and transmitting real-time videos of disaster scenes during daylight; an infrared camera for recording and transmitting live videos at night; and a LiDAR system for obtaining elevation point cloud data for calculating the earth-rock volume in disaster areas. Tests of the proposed system demonstrated that it is efficient and reliable and can acquire RGB imagery and infrared videos in real time. Furthermore, high-quality 3D point cloud data and high-resolution orthoimages can be obtained by processing the data after landing. Based on observations of ground control points surveyed using the GPS RTK technique, the maximum RMSE of the imagery and point clouds obtained using the system was 0.294 m, which can meet the needs of various applications. The images obtained using the groups of wide-angle cameras and the LiDAR measurements both showed high accuracy, making these data suitable for use in emergency response. It can be concluded that the proposed system offers an all-weather, all-day data acquisition capability that can meet the requirements of national airborne emergency mapping in China.

Data Availability
The data used to support the findings of this study were collected and obtained by the equipment itself; therefore, a data link cannot be provided.

Conflicts of Interest
The authors declare that they have no conflicts of interest.