Feasibility of Accurate Point Cloud Model Reconstruction for Earthquake-Damaged Structures Using UAV-Based Photogrammetry

Camera-enabled unmanned aerial vehicles (UAVs) provide a promising technique to considerably speed up inspection and visual data collection in regions that may otherwise be inaccessible. In addition, image-based 3D reconstruction can generate a point cloud model from images captured by UAVs. However, the performance of point cloud modeling may be affected by multiple factors, such as the modeling software, ground control points (GCPs), and UAV flight modes. In this study, three common software packages were compared, and Pix4Dmapper was considered a suitable package for point cloud modeling of earthquake-damaged buildings. The accuracy and resolution of point cloud models are usually evaluated by the root mean square error (RMSE) and the ground sampling distance (GSD). The effects of the main factors, including the number of GCPs, the distribution of GCPs, the flight manner of the UAV, and the distance from the UAV to the target, were investigated on the basis of two real-world multistory earthquake-damaged structures. The influence rules of the main factors revealed that a close range, the automatic flight mode of the UAV, a large number of GCPs, and a relatively wide distribution of the GCPs may generate a point cloud model with low computational costs, high accuracy, and high resolution. In the particular illustrative example here, the RMSE is 6.78 mm while the GSD is 1.60 mm. Finally, rapid structural damage inspection was demonstrated using an accurate point cloud model and compared with the inspection results of a total station and of terrestrial laser scanner point cloud models. The comparison of the different inspection results showed that the relative errors were acceptable, within 4%.


Introduction
Most buildings become susceptible to losing their designed functions as they deteriorate with use. Once they experience earthquakes, buildings may lose further functions and become dangerous. This process signifies urgent maintenance and inspection issues. Because of this, many research groups have proposed computer vision-based structural health monitoring and structural damage inspection techniques [1][2][3][4][5][6]. For example, a vision-based method using a deep convolutional neural network architecture [1] and a faster region-based convolutional neural network-based structural visual inspection method [2] were proposed for detecting multiple types of damage in widely varying situations. In addition, unmanned aerial vehicle (UAV) surveying technology provides an efficient and convenient way to acquire visual information.
In recent years, UAVs have been widely applied in civil engineering [7]. With the rapid development of UAV technology [8], low-altitude UAV surveying and mapping provides a new, efficient, multidimensional information acquisition method for the structural digitization [9], daily maintenance [10,11], and postdisaster emergency assessment [12] of engineering structures. For example, an autonomous UAV system integrated with a modified faster region-based convolutional neural network was proposed to identify various types of structural damage and map the detected damage in a GPS-denied environment [13]. An autonomous UAV-based damage detection method using ultrasonic beacons was proposed for indoor environments and areas in which GPS is denied or unreliable [14]. Therefore, UAVs play an important role in the entire life cycle of engineering structures. The daily maintenance of various structures, such as dams [15], bridges [16], roads [17], and buildings [18], can be realized by UAVs, which collect visual data from inaccessible regions [19]. In addition, UAVs can accomplish faster structural damage inspection and assessment in disaster areas, where traffic is inconvenient after an emergency [20]. Structural digitization is the basis of daily maintenance and postdisaster emergency assessment [21][22][23][24][25], and image-based 3D reconstruction using images captured by UAVs is one of the prominent techniques for the digitization of engineering structures [26].
Some studies have investigated the accuracy of point cloud models from image-based 3D reconstruction. Because different combinations of UAV flight parameters have an important impact on the accuracy of the point cloud model [27], a flight and image capture strategy adequate to the quality requirements of the modeling can save time and resources [28]. UAV observations obtained under different light conditions can be evaluated against terrestrial laser scanner acquisitions, which reveal the effect of light conditions on the accuracy of the point cloud model [29]. The distribution and quantity of ground control points (GCPs) can also impact the accuracy of point cloud models [30]. In addition, an overview of UAV and image-based 3D reconstruction discussed the major factors that influence accuracy and demonstrated the accuracy and limitations of UAV-based topographic surveying [31]. Five influencing factors (flight height, average image quality, image overlap, GCP quantity, and camera focal length) [32] and seven indices (proximity to key-point features, distance to GCPs, angle of incidence, camera stand-off distance, number of overlapping images, brightness index, and darkness index) [33] impacting 3D modeling accuracy have been investigated. Moreover, the different software used for surveying and 3D reconstruction can also affect the accuracy of point cloud models [9]. The point cloud models of engineering structures are mostly reconstructed with centimeter accuracy using images captured by UAVs and are used for structural damage inspection [34]. Region-scale point cloud models can likewise be reconstructed with centimeter accuracy using image-based 3D reconstruction [33]. Millimeter-scale resolution of the point cloud model has been demonstrated for infrastructure condition assessment using an image-based systematic and adaptive reconstruction technique [35].
Even when compared with the point cloud model acquired by a terrestrial laser scanner, the point cloud model reconstructed from multiview images is sufficiently accurate for structural damage inspection and state assessment [35,36]. The accuracy of the point cloud model is the foundation of engineering applications such as structural health monitoring, damage inspection, and state assessment.
Furthermore, structural condition inspection and damage diagnosis based on the point cloud model have also been studied [37]. Health monitoring of structural movement can be realized by comparing the data collected by the UAV over different periods [38]. Semantic segmentation of important structural components in the point cloud model can be accomplished for different types of structures, such as bridges [39], tunnels [40], buildings [41], and towers [42]. Structural damage inspection of structural components, such as beams, columns, and walls, is performed easily after structural semantic segmentation of the point cloud model [43]. Moreover, structural damage inspection on an accurate point cloud model can provide accurate detection results, which can serve as the research basis for structural state assessment [44,45]. Therefore, the precision of the point cloud model is critical in the processes of structural health monitoring, semantic segmentation, damage inspection, and state assessment. Thus, a feasible method that can realize accurate point cloud model reconstruction of engineering structures is fundamental and necessary.
This paper presents an accurate point cloud model reconstruction method for earthquake-damaged structures based on images captured by UAVs; the effects of different factors on the precision of the point cloud model are examined. First, the methodologies of image-based 3D point cloud model reconstruction and point cloud model quality evaluation are detailed. Thereafter, two real-world multistory earthquake-damaged building structures, corresponding to moderate damage and near-collapse conditions, are illustrated. The influence rules of the main factors, including the number of GCPs, the distribution of GCPs, the flight manner of the UAV, and the distance from the UAV to the target, on the precision of the point cloud model are presented. Finally, structural damage inspection and state assessment using an accurate point cloud model are presented. In addition, the inspection results of the image-based point cloud model are quantitatively compared with those of the total station and the terrestrial laser scanner point cloud model.
Generally, this paper may be distinguished from existing studies by the following remarks: (1) the selection and distribution of field GCPs are studied to analyze the accuracy and computational efficiency of UAV-based point cloud models for seismically damaged structures; (2) different UAV flight modes, such as automatic and manual flights and close and far flights, are studied to analyze the accuracy and resolution of UAV-based point cloud models; and (3) the residual deformation measured by three methods, including the total station, the image-based point cloud model, and the terrestrial laser scanner point cloud model, is compared for performance validation of structural damage inspection with point cloud models. Practically, the proposed method could accelerate the field inspection of seismically damaged structures in a digital and efficient manner with UAV equipment and computer vision techniques.

Methodology
The proposed methodology of image-based 3D point cloud model reconstruction involves a workflow with four fundamental steps, as shown in Figure 1. (1) First, GCPs are manually placed around the physical structure according to the circumstances. Traditional surveying instruments, such as total stations, are usually used to measure the coordinates of the GCPs for accurate reconstruction.
(2) Thereafter, data collection is conducted. Critical image data from different views are captured by UAV photography after path planning. To ensure data precision, high-resolution images must be captured.
(3) Image-based 3D reconstruction algorithms [46], such as structure from motion (SfM) [31] and multiview stereo (MVS) [33], are adopted to generate a dense point cloud model of the object from the multiple images captured by the UAVs.
(4) Finally, the main factors affecting the precision of the point cloud model are presented. The resolution and accuracy of the point cloud model are analyzed for later structural inspection and monitoring.

GCPs Layout.
The GCPs are points on the ground with known coordinates. In an aerial mapping survey, the GCPs are points that a surveyor can precisely pinpoint; with a handful of known coordinates, it is possible to accurately map large areas. GCPs play an important role in point cloud modeling and are used to locate the position, constrain the size, and check the accuracy of the model. A point that is evidently distinguishable in the surrounding environment can be used as a natural GCP. In addition, artificially designed special point patterns, such as white-and-black papers, can be employed as artificial GCPs because they display high-contrast visual characteristics. The quantity and distribution of GCPs may have significant effects on the accuracy of the point cloud model [30], and edge distributions of GCPs may obtain better accuracy of the point cloud model [47]. In addition, it is recommended that GCPs be evenly distributed over the area of interest [48]. Figure 2 illustrates the uniform, concentrated, and linear distributions of GCPs. Geometrically uniform distribution means that the GCPs are uniformly distributed over the whole area of interest, as shown in Figure 2(a), and the majority of the target area is covered by GCPs. Concentrated distribution means that the GCPs are located in a relatively small region, as shown in Figure 2(b); obviously, only a small portion of the target area is covered by GCPs. Linear distribution means that the GCPs are distributed approximately along a line in the area of interest, as shown in Figure 2(c).

Data Collection.
Camera-enabled UAVs provide a method that can significantly facilitate inspection and collect image data from inaccessible regions. UAV path planning allows accurate acquisition of high-quality image data groups for point cloud modeling. Common UAV flight paths include the scan and surround paths (as shown in Figure 3), which are suitable for regional flight mapping and building flight mapping, respectively. The scan flight paths (Figure 3(a)) collect the visual information of a region from one direction and are suitable for regional flight mapping. The surround flight paths (Figure 3(b)) collect the visual information of a target from all directions and are suitable for building flight mapping. Moreover, it is necessary to perform supplementary photography outside the UAV flight paths to ensure the completeness of the data. Adjacent images require a sufficient overlap area to ensure that a reliable connection can be established between the images during point cloud reconstruction. The accuracy of point cloud models improves as the image overlap increases [32]; appropriately high values of the forward- and side-overlap parameters, such as about 80% overlap, can satisfactorily improve the accuracy of the results [49]. Furthermore, the flight manner of the UAV and the distance from the UAV to the target may affect the accuracy of the point cloud model.
(1) The SfM stage involves estimating the camera poses and a sparse 3D point cloud from a set of input images. The computational cost of this stage depends primarily on the number of images and the complexity of the scene. For example, if the scene has abundant texture and features, the numbers of extracted features and matches will be high, which will increase the computational cost. Conversely, if the scene is relatively simple and has fewer features, the computational cost will be lower.
(2) The MVS stage involves dense reconstruction of the scene from the sparse 3D point cloud obtained in the SfM stage. This stage computes the depth for each pixel in the images and fuses them to create a dense 3D model. The computational cost of the MVS stage depends primarily on the number of pixels in the images and the resolution of the output point cloud.
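As a minimal illustration of the geometry underlying the SfM stage, the sketch below triangulates a single 3D point from two views with known 3x4 projection matrices using the linear (DLT) method. The camera setup and the point are hypothetical; a real SfM pipeline additionally estimates the camera poses themselves before triangulating.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views with
    known 3x4 projection matrices P1, P2 and image observations x1, x2."""
    # Each observation contributes two homogeneous linear constraints
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical setup: identity-intrinsics cameras, second camera shifted in x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
```

With noise-free observations, `triangulate(P1, P2, x1, x2)` recovers `X_true`; with real, noisy matches, the same least-squares formulation returns the best-fit point, which is then refined in bundle adjustment.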

Point Cloud
In addition, the number of GCPs used in SfM-MVS workflows can impact the computational cost in several ways: (1) adding GCPs can increase the overall number of tie points, which may increase the computational cost of the feature extraction and matching algorithms; (2) GCPs can be used to scale the model and improve its accuracy, which may require additional computation time during the bundle adjustment and dense reconstruction steps; and (3) the number and distribution of GCPs can also affect the computational cost of the SfM-MVS workflow. Using a larger number of well-distributed GCPs can lead to faster convergence during bundle adjustment, whereas using a smaller number of poorly distributed GCPs may result in longer computation times. The number of GCPs used in SfM-MVS workflows also has a significant impact on the accuracy of point cloud models. In general, the more GCPs that are used, the more accurate the reconstructed point cloud model will be, because more GCPs provide more information for the software to use in calculating the transformation between the point cloud model and the real world. However, there is a point of diminishing returns: beyond a certain number, additional GCPs may not significantly improve the accuracy of the point cloud model. Additionally, the accuracy of the GCPs themselves also plays a role in the overall accuracy of the point cloud model.
This technique is not limited by a temporal frequency and can provide point cloud data comparable in density and accuracy to those generated by terrestrial and airborne laser scanning at a fraction of the cost. Recently, several commercial software packages, such as Pix4Dmapper, Agisoft Metashape, and ContextCapture, have realized image-based point cloud reconstruction. Several existing studies [51,52] have investigated the performance of different software in terms of the accuracy and speed of point cloud model reconstruction and show that each package has its own advantages. Agisoft Metashape and Pix4Dmapper were found to perform better than ContextCapture, whose point cloud model reconstruction accuracy was worse than that of the other packages because of limitations in its image matching algorithm [51,52]. In addition, Pix4Dmapper was found to have accuracy similar to that of Agisoft Metashape in point cloud model reconstruction [51,53,54,55] and better density than Agisoft Metashape [51].

Model Quality Evaluation.
The average ground sampling distance (GSD) [48] and root mean square error (RMSE) [56], common indicators for the quality evaluation of point cloud models, indicate the resolution and accuracy of the point cloud models, respectively. GSD is a measure of one sampling limitation to spatial resolution: it is the distance between two consecutive pixel centers measured on the ground. The larger the value of GSD, the lower the spatial resolution of the image and the less visible the details. The calculation of GSD is shown in equations (1)-(3):

GSD_height = (H × S_height)/(f × Img_height),  (1)
GSD_width = (H × S_width)/(f × Img_width),  (2)
GSD = (GSD_height + GSD_width)/2,  (3)

where f denotes the focal length of the camera; H is the UAV's flight altitude; S_height and S_width are the height and width of the camera sensor, respectively; and Img_height and Img_width are the image height and width in pixels, respectively. The RMSE is the square root of the mean of the squared errors between the coordinates of the GCPs and the corresponding points in the point cloud model. RMSE is commonly applied and is considered an excellent general metric for the accuracy of point cloud models. The three dimensional components and the total RMSE are calculated using equations (4)-(7):

RMSE_X = sqrt((1/n) Σ_{i=1}^{n} (X_p,i − X_t,i)²),  (4)
RMSE_Y = sqrt((1/n) Σ_{i=1}^{n} (Y_p,i − Y_t,i)²),  (5)
RMSE_Z = sqrt((1/n) Σ_{i=1}^{n} (Z_p,i − Z_t,i)²),  (6)
RMSE = sqrt(RMSE_X² + RMSE_Y² + RMSE_Z²),  (7)

where X_p, Y_p, and Z_p represent the three-dimensional coordinates in the point cloud model; X_t, Y_t, and Z_t represent the three-dimensional coordinates measured by surveying instruments; and n is the number of GCPs.
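Both quality indicators can be sketched in a few lines, assuming the nadir pinhole-camera GSD model described above; the camera parameters in the example are illustrative placeholders, not the exact survey values.

```python
import numpy as np

def gsd_mm(flight_height_m, focal_mm, sensor_mm, img_px):
    # GSD along one sensor axis: H * S / (f * Img), in mm per pixel
    return flight_height_m * 1000.0 * sensor_mm / (focal_mm * img_px)

def total_rmse(model_xyz, surveyed_xyz):
    # Per-axis RMSE over the n GCPs, then the total RMSE
    diff = np.asarray(model_xyz, float) - np.asarray(surveyed_xyz, float)
    rmse_xyz = np.sqrt(np.mean(diff ** 2, axis=0))   # (RMSE_X, RMSE_Y, RMSE_Z)
    return float(np.sqrt(np.sum(rmse_xyz ** 2)))

# Illustrative values only: a 13.2 mm-wide sensor, 8.8 mm focal length,
# 5472-pixel image width, and a 10 m stand-off distance
print(round(gsd_mm(10.0, 8.8, 13.2, 5472), 2))  # ≈ 2.74 mm/pixel
```

The `total_rmse` helper expects two (n, 3) arrays of GCP coordinates, one read from the point cloud model and one surveyed by the total station.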

Experimental Study
In this study, two earthquake-damaged buildings, which experienced the M_S 8.0 Wenchuan earthquake on May 12, 2008, were selected to study the image-based 3D reconstruction technology. Both structures were located in Beichuan County, Sichuan Province. The first example, a four-story masonry structure with moderate damage, is used for investigating the effects of the UAV flight manner, the distance from the UAV to the target, and the quantity of GCPs on the accuracy of the point cloud model. The second example is a three-story reinforced concrete frame structure in a near-collapse state, used for examining the effect of the distribution of GCPs on the accuracy of the point cloud model. The layout of the GCPs on the four-story masonry structure is shown in Figure 4(c). In addition, GCPs Nos. 6-13 on the building surface, such as points at corners and edges, are natural and clearly distinguishable in the surrounding environment. Since the damaged building is inaccessible, five artificial GCPs covering all directions of the structure are uniformly laid near the ground on the building surface. It is difficult to identify multiple corners with distinct colors as natural GCPs on the dark west side of the building. Thus, three natural GCPs are laid on the west side of the target building, and five natural GCPs are utilized on the east side of the building. The TOPCON GPT-7502 total station was used as the surveying instrument to measure the coordinates of the GCPs, as shown in Figure 4(d). The coordinates of the GCPs measured by the total station were adjusted to a range of 10∼30 m for better calculation, as shown in Table 1.
The DJI Phantom 4 Pro drone was used for field photogrammetry with camera angles of 0° and −30° in different flight modes to ensure that 2000 images with approximately 75% front and side overlap were captured at an image resolution of 5472 × 3648 pixels, as shown in Figure 5. Different software packages were compared to select the one most suitable for the reconstruction of point cloud models. A computer with the main specifications (CPU: Intel(R) Core(TM) i7-12700; RAM: 64 GB; GPU: NVIDIA GeForce RTX 3060) was used for the software selection. The comparison of the point cloud models generated from images captured by the UAV (automatic close surround path) among the three software packages (ContextCapture, Agisoft Metashape, and Pix4Dmapper) is shown in Table 3, which also lists the version and website of each package. Figure 6 illustrates the comparison of computational time and model quality among the packages. ContextCapture had worse accuracy and speed of point cloud model reconstruction than the other packages, but it generated the densest point cloud, with 546,951,496 points in the model. Agisoft Metashape achieved the fastest speed and the best resolution of point cloud model reconstruction but generated a sparser point cloud, with 10,713,043 points in the model. Pix4Dmapper achieved better accuracy of point cloud model reconstruction than the other packages, with moderate speed and density. Therefore, Pix4Dmapper, with its balanced performance between efficiency and accuracy, was selected to establish the point cloud models. The GSD and RMSE results under different flight modes are listed in Table 2. The GSD and RMSE of the point cloud model under every flight mode were less than 20 mm, some even reaching the millimeter level. Figure 7 shows the effects of the flight manner and surround radius of the UAV on the GSD and RMSE.
Noticeably, the GSD of the manual surround path is smaller than that of the automatic surround path, and the GSD of the close surround path is smaller than that of the far surround path, as shown in Figure 7(a). This ensures that the spatial resolution of the reconstructed point cloud model is higher, indicating that the precision of the point cloud model is better. Furthermore, the RMSE of the automatic surround path is smaller than that of the manual surround path owing to the more consistent overlap ratio of the images, as shown in Figure 7(b).
Additionally, the impact of the GCP quantity on the computational cost was studied. The same computer (CPU: Intel(R) Core(TM) i7-12700; RAM: 64 GB; GPU: NVIDIA GeForce RTX 3060) was used for the reconstruction of the point cloud models. Group I in Table 4 was selected for studying the impact of the GCP quantity on the computational cost. The computational time for each reconstruction is recorded, as shown in Figure 9. A small number of GCPs does not significantly affect the computational cost. However, a large number of GCPs significantly reduces the computational time, as shown in Figure 9(a). The significant correlation between the accuracy of the point cloud model and the computational time is shown in Figure 9(b). A larger quantity of GCPs leads to better accuracy of the point cloud model and faster model reconstruction.

Three-Story Frame Structure.
There are many trees and buildings around the seismically damaged three-story concrete frame structure; therefore, the scan flight path of the UAVs was selected to collect the image data. Visible and differing inclination conditions were observed on the second floor of the three-story frame structure, providing a unique opportunity for studying point cloud model-based structural damage detection. Therefore, two natural GCPs are laid on each column of the second floor to control the accuracy of the point cloud models. In addition, in order to study the distribution of GCPs, three natural GCPs are uniformly laid on the distinguishable corners of the upper part of the frame structure, and four artificial GCPs are laid on the accessible part of the frame structure. The seismically damaged three-story concrete frame structure with multiple GCPs, as shown in Figure 10, was reconstructed from images for examining the effect of the distribution of GCPs on the point cloud model. Numerous GCPs were regularly distributed on the structure surface, and the coordinates of the GCPs measured by the TOPCON GPT-7502 total station were adjusted to a range of 10∼40 m for improved calculation.
The point cloud model of the seismically damaged three-story frame structure was reconstructed using the SfM and MVS techniques based on 27 GCPs and 348 images. The GSD and RMSE of the point cloud model were 2.4 mm and 6.78 mm, respectively. Figure 11(a) illustrates the point cloud model from three directions (front view and two side views) with complete structural members, such as beams, columns, and walls. The local damage in the point cloud model is clear enough for structural inspection, although nonstructural members, such as glass, are not completely reconstructed. The real-life images and processed point cloud models were further compared in two illustrative regions, as shown in Figure 11(b).
It can be observed that the point cloud model digitalizes the original appearance with only slight deviations in color and lighting.
Because numerous GCPs are regularly distributed on the structure surface, different spatial distributions of the GCPs can be used for point cloud reconstruction, as shown in Table 6, with five work groups, including four 4GCPs groups and one 27GCPs group. The GCPs in work groups 4GCPs-I and 4GCPs-II were widely distributed in the model, and the GCPs in work groups 4GCPs-III and 4GCPs-IV were locally distributed in the model. The errors of the GCPs in each spatial distribution are shown in Figure 12. Noticeably, the error of the GCPs is affected by their different spatial distributions. For the GCPs selected for reconstruction (marked with black squares) and the GCPs around them, the error is equal to or smaller than the model error of the 27-GCP reconstruction. In contrast, there are larger errors in the GCPs that are farther from the selected GCPs. The error of the point cloud model reconstructed using GCPs widely distributed in the model, such as work groups 4GCPs-I and 4GCPs-II, is similar to that of the 27-GCP reconstruction model, as shown in Figures 12(a) and 12(b). The error of the model reconstructed using GCPs distributed in a local area of the model, such as work groups 4GCPs-III and 4GCPs-IV, is larger than that of the 27-GCP reconstruction model, as shown in Figures 12(c) and 12(d). In particular, the errors of the GCPs farther from the local area are larger, such as GCPs 9, 10, 19, 20, 23, and 27 in the 4GCPs-III group and GCPs 21, 22, and 23 in the 4GCPs-IV group. Principal component analysis (PCA) can be used to interpret the spatial distribution characteristics of the GCPs. The products of the eigenvalues of the PCA, shown in Table 6, represent the spatial distribution characteristics, and their effect on the RMSE is shown in Figure 13(a). Different GCP distributions yield different eigenvalues, and a larger product of the PCA eigenvalues results in a smaller RMSE.
For the same number of GCPs, the maximum product of eigenvalues (606.122) in 4GCPs-I corresponds to the minimum RMSE of 8.48 mm, and the minimum product of eigenvalues (0.001) in 4GCPs-IV corresponds to the maximum RMSE of 32.88 mm. Moreover, the error distribution of the different work groups is shown in Figure 13(b); a larger product of the PCA eigenvalues results in more small errors and fewer large errors, such as 70% of errors ≤5 mm and 7% of errors ≥10 mm in the 27GCPs group. Correspondingly, a smaller product of the PCA eigenvalues leads to more large errors and fewer small errors, such as 70% of errors ≥10 mm and 22% of errors ≤5 mm in the 4GCPs-IV group. Therefore, a better spatial distribution with a larger product of the PCA eigenvalues can result in higher accuracy of the point cloud model.
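The spatial distribution indicator above can be sketched as follows: the PCA eigenvalues are the eigenvalues of the covariance matrix of the GCP coordinates, and their product grows with the 3D spread of the layout. The GCP coordinates below are hypothetical, chosen only to contrast a wide layout with a clustered one.

```python
import numpy as np

def pca_eigenvalue_product(gcp_xyz):
    """Product of the covariance eigenvalues of the GCP coordinates;
    a larger product indicates a wider 3D spread of the GCPs."""
    X = np.asarray(gcp_xyz, dtype=float)
    cov = np.cov(X.T)                 # 3x3 covariance of the coordinates
    eig = np.linalg.eigvalsh(cov)     # PCA eigenvalues (principal variances)
    return float(np.prod(eig))

# Hypothetical layouts: widely spread vs. clustered GCPs (coordinates in m)
wide = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]
tight = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert pca_eigenvalue_product(wide) > pca_eigenvalue_product(tight)
```

Following the trend reported above, a work group whose GCPs yield the larger eigenvalue product would be expected to produce the smaller RMSE.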

Structural Control and Health Monitoring
Additionally, the impact of the GCP distribution on the computational cost was studied. The same computer (CPU: Intel(R) Core(TM) i7-12700; RAM: 64 GB; GPU: NVIDIA GeForce RTX 3060) was used for the reconstruction of the point cloud models. The computational time for each reconstruction is recorded, as shown in Table 6. There is a significant trend between the GCP distribution and the computational time, as shown in Figure 14(a).
An accurate point cloud model can be used for structural damage inspection. A prior study [44] realized the measurement of structural inclination and deformation. Each plane of the target columns was segmented by a semantic segmentation method [44], as shown in Figure 15. The different colors represent points on different planes in 3D space. First, the point cloud model was downsampled by the voxel grid method: the points in each voxel were approximately replaced by their centroid. The distance between the two nearest points was approximately the size of the voxel, which was commonly set to 0.01∼0.1 m. Then, the plane points (marked with solid dots) and the noise (marked with hollow dots) were separated in the point cloud model by executing the random sample consensus (RANSAC) and density-based spatial clustering of applications with noise (DBSCAN) methods. The threshold in RANSAC was commonly set to 0.005∼0.05 m. Finally, the point cloud model was upsampled by the voxel grid method. The sequential processes of downsampling, segmentation, clustering, and upsampling (Figure 15), combining the voxel grid, RANSAC, and DBSCAN methods, are motivated by achieving a satisfactory balance between computational efficiency and inspection accuracy in engineering.
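The voxel grid downsampling and RANSAC plane separation steps described above can be sketched with plain NumPy as follows (the DBSCAN clustering step is omitted for brevity). The function names are illustrative, and the voxel size and distance threshold follow the ranges quoted in the text; a production pipeline would typically use a point cloud library instead.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Replace all points falling in each voxel by their centroid."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel).astype(int)          # integer voxel indices
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    out = np.zeros((inv.max() + 1, 3))
    cnt = np.zeros(inv.max() + 1)
    np.add.at(out, inv, pts)                          # sum points per voxel
    np.add.at(cnt, inv, 1)
    return out / cnt[:, None]                         # per-voxel centroids

def ransac_plane(points, thresh=0.02, iters=200, rng=np.random.default_rng(0)):
    """Fit the dominant plane n.x + d = 0; return (n, d, inlier mask)."""
    pts = np.asarray(points, dtype=float)
    best_mask, best_n, best_d = None, None, None
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:                 # degenerate sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        mask = np.abs(pts @ n + d) < thresh           # points near the plane
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_n, best_d = mask, n, d
    return best_n, best_d, best_mask
```

Running `ransac_plane` repeatedly on the points not yet assigned to a plane yields one plane per column face, after which the remaining points are treated as noise or clustered with DBSCAN.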
The plane equations of the columns were calculated by the regression method using the scatter distribution on the structural surface, and the edges of the columns were identified by plane intersection [44]. Figure 16 shows the details of the identification method for the inclination of the column edges. The normal vectors of planes U and V, denoted u and v, were calculated from the plane equations, as shown in Figure 16. The edge vector w was identified as the cross product of u and v. Then, the vector sum of the edges of the column was taken as the inclination vector of the column. The inclination rate and angle of the edge were calculated by decomposing the edge vector w = (w_x, w_y, w_z). The inclination angle θ_w and inclination rate k_w were calculated by equations (8) and (9):

θ_w = arctan(sqrt(w_x² + w_y²)/|w_z|),  (8)
k_w = sqrt(w_x² + w_y²)/|w_z|,  (9)

where the inclination rate is expressed as a percentage. The accuracy and voxel resolution of the point cloud model were the main factors limiting the precision of the inclination measurement. Figure 17(a) shows the segmentation and inclination measurements of the structural columns. The inclination of the 30 manually divided columns was calculated, and the maximum and minimum inclination rates of the columns were 31.359% and 0.316%, respectively. The structure was evaluated as Damage State 4, with danger of structural collapse from earthquake aftershocks, referring to the code FEMA P-58 [57]. The ten columns on the second floor of the building exhibited obvious and differing inclination rates, ranging from about 0% to about 30%. The column inclination rates on the ground floor and third floor of the frame structure were simpler than those on the second floor. Therefore, the second-floor columns were considered the target inspection objects and were then measured by the total station.
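The edge-inclination computation can be sketched as below, assuming (consistent with the percentage values reported) that the inclination rate is the ratio of the horizontal component of the edge vector to its vertical component and that the angle is measured from the vertical. The plane normals in the example are hypothetical.

```python
import numpy as np

def edge_inclination(u, v):
    """Inclination rate and angle of a column edge shared by two planes
    with normals u and v, per equations (8) and (9)."""
    w = np.cross(np.asarray(u, float), np.asarray(v, float))  # edge direction
    wx, wy, wz = w
    k = np.hypot(wx, wy) / abs(wz)        # inclination rate: horizontal / vertical
    theta = np.degrees(np.arctan(k))      # inclination angle from the vertical
    return k, theta

# Hypothetical plane normals for one slightly tilted column edge
k, theta = edge_inclination([1.0, 0.0, 0.1], [0.0, 1.0, 0.0])
print(round(k * 100, 1))  # inclination rate ≈ 10.0 %
```

A plumb edge gives k = 0 and θ_w = 0; the vector sum of a column's edge vectors would be passed through the same decomposition to obtain the column inclination.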
Subsequently, the point cloud models of the seismically damaged three-story concrete frame structure, based on image reconstruction and on a terrestrial laser scanner, were compared for the measurement of the second-floor column inclinations. Figure 17(b) shows the comparison of the second-floor column inclinations measured by three methods: the total station measurement, the measurement from the image-based point cloud model, and the measurement from the terrestrial laser scanner point cloud model. The inclination rates of the 10 columns on the second floor, from a maximum of approximately 32% to a minimum near 0%, could be measured by all three methods, and the inclination rates measured by the three methods were similar. More detailed data and relative errors are presented in Table 7. The inclinations measured by the total station were compared with those measured from the two point cloud models, and the relative errors were all within 4%. The average value of the relative error of the inclination of the 10 columns was also displayed as 1.543%.
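The comparison above reduces to a simple relative error computation against the total station reference. The inclination values below are hypothetical, not the measured column data from Table 7.

```python
def relative_error_pct(reference, measured):
    # Relative error of a point cloud inclination measurement against
    # the total station reference, in percent
    return abs(measured - reference) / abs(reference) * 100.0

# Hypothetical inclination rates (%) for one second-floor column
print(round(relative_error_pct(30.0, 29.2), 2))  # → 2.67
```

Averaging this quantity over the 10 columns gives the mean relative error reported for each point cloud model.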

Conclusion
This study investigated the factors influencing point cloud model reconstruction performance through two experiments on earthquake-damaged buildings. The performance of common software packages in point cloud reconstruction was investigated. The main influencing factors, including the flight manner of the UAV, the distance from the UAV to the target, the number of GCPs, and the distribution of GCPs, were studied separately. In addition, structural damage inspection based on accurate point cloud models was presented. The main conclusions are as follows:

(1) Three common software packages, ContextCapture, Agisoft Metashape, and Pix4Dmapper, were quantitatively compared in terms of point cloud reconstruction performance. ContextCapture produced the highest point density, and Metashape was the fastest. Pix4Dmapper outperformed the other packages with a relatively balanced performance: the best accuracy, moderate speed, and moderate density. Therefore, Pix4Dmapper can be considered a suitable software package for point cloud modeling of seismically damaged structures.

(2) The automatic flight manner results in a smaller RMSE of the point cloud model than the manual flight manner because of the more consistent overlap ratio of the images captured in automatic flight. In addition, a closer distance from the UAV to the target yields a smaller GSD than a farther distance. Therefore, images captured at close range in the automatic flight mode of the UAV reconstruct the point cloud model with higher accuracy and resolution.

(3) The GCPs control the size and location of the point cloud model so that it matches the actual target. More GCPs allow point cloud models to be reconstructed with lower computational costs and better model accuracy. Moreover, a wider distribution of GCPs, with a larger product of the eigenvalues of the PCA of the GCP coordinates, produces point cloud models with lower computational costs, better model accuracy, and fewer large errors. Therefore, more GCPs and a wider distribution of GCPs should be selected for faster and more accurate reconstruction of point cloud models.

(4) A comparison of 10 column inclinations measured by three methods, including the total station, the image-based point cloud model, and the terrestrial laser scanner point cloud model, was presented; the relative errors of the inclination rates were all within 4%. Therefore, an accurate point cloud model reconstructed using image-based 3D reconstruction technology is suitable for structural damage inspection and state assessment.
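The GCP-distribution criterion in conclusion (3), the product of the PCA eigenvalues of the GCP coordinates, can be computed directly: the product of the eigenvalues of the covariance matrix equals its determinant (the generalized variance), so no explicit eigendecomposition is needed. The sketch below is an illustrative pure-Python version for 3D GCP coordinates, not the authors' code:

```python
def gcp_spread(points):
    """Product of the PCA eigenvalues of a set of 3D GCP coordinates,
    computed as the determinant of their covariance matrix (the
    eigenvalue product of a symmetric matrix equals its determinant)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # Population covariance matrix entries.
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    szz = sum((p[2] - cz) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    sxz = sum((p[0] - cx) * (p[2] - cz) for p in points) / n
    syz = sum((p[1] - cy) * (p[2] - cz) for p in points) / n
    # Determinant of the symmetric 3x3 covariance matrix.
    return (sxx * (syy * szz - syz ** 2)
            - sxy * (sxy * szz - syz * sxz)
            + sxz * (sxy * syz - syy * sxz))

# Widely spread GCPs (cube corners) score higher than clustered ones.
wide = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
tight = [(0.1 * x, 0.1 * y, 0.1 * z) for x, y, z in wide]
assert gcp_spread(wide) > gcp_spread(tight)
```

For degenerate layouts (e.g., coplanar or collinear GCPs) the determinant collapses toward zero, which is consistent with the observation that a narrow GCP distribution degrades model accuracy.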
This study provides a theoretical basis for accurate point cloud model reconstruction for earthquake-damaged structures using UAV-based photogrammetry. Thus, after an earthquake, an accurate point cloud model can be reconstructed rapidly for structural damage inspection and state assessment, providing important support for post-earthquake rescue and the resettlement of victims. However, deploying more GCPs with a wider distribution in the field requires more measurement time. Hence, it is important to balance the distribution and quantity of GCPs to speed up the reconstruction of accurate point cloud models.
In addition, accurate point cloud models of building structures remain limited in practical engineering applications. The current structural deformation can be identified from a single point cloud model, but measuring incremental structural deformation through the matching analysis of multiple point cloud models may be challenging. A structural finite element model cannot be fully reconstructed from structural point cloud models because the interior condition of the structure cannot be obtained using UAV-based photogrammetry. In the future, multisource point cloud models based on the fusion of multiple types of autonomous unmanned equipment (such as unmanned aerial and ground vehicles) are expected to achieve comprehensive structural modeling and diagnosis.
In future studies of the point cloud model precision of image-based 3D reconstruction, other factors, such as real-time kinematic (RTK) positioning, and other scenarios, such as regional building clusters, should be investigated. Furthermore, the application of accurate point cloud models, such as the extraction of observable physical damage, including surface cracking, wall spalling, and steel component buckling, may also be of interest.

Data Availability
Data used in this study are available upon reasonable request to the corresponding author.

Conflicts of Interest
The authors declare that they have no conflicts of interest.