Multi-UAV Factor Graph Colocation Algorithm

To tackle the difficulty and high cost of UAV positioning in a 5G disaster environment, this study proposes a multi-UAV coordinated positioning algorithm based on a factor graph (MUAV-FGC). A factor graph model is designed for the coordinated positioning of master-slave UAVs. In the positioning process, the influence of the speed and heading errors of the UAV on the positioning accuracy is analyzed, and the expectation and variance of the variables are used as the information transmitted between factors. The error of the cooperative positioning algorithm is derived based on the factor graph, enabling the coordinated positioning of multiple drones and improving the positioning accuracy. The simulation results show that, compared with the traditional extended Kalman filter (EKF) algorithm, the proposed MUAV-FGC positioning algorithm reduces the mean positioning error and root mean square error by approximately 18.86% and 17.54%, respectively, proving its effectiveness.


Introduction
In recent years, natural disasters affecting wide areas have occurred frequently, causing huge losses of life and property. With the continuing maturation of 5G technology and the ongoing research and development of 6G technology [1], advanced technologies represented by UAVs are increasingly applied in the fields of disaster prevention, mitigation, relief, and emergency management [2][3][4], opening an "eye in the sky" for disaster response. In the event of a severe natural disaster, a drone can quickly cross mountain and river barriers and reach the disaster site unaffected by information interruptions and traffic obstructions; obtain disaster information through photography, video, or other remote sensing methods; and provide accurate information for emergency rescue and relief in a timely manner [5,6]. Thus, drone technologies present substantial technical advantages. When performing missions, drones need to determine their own location accurately to carry out tasks such as disaster detection, search and rescue, and delivery of relief supplies [7,8] at the correct location, including in scenarios such as nuclear disasters where investigation and rescue by humans is not possible. Therefore, obtaining precise UAV location information has become one of the most significant core technologies to be developed and improved [9,10].
The traditional methods of obtaining UAV position primarily include the global positioning system (GPS) [11][12][13] and the inertial navigation system (INS) [14][15][16]. Although GPS offers global, real-time positioning, its signal is easily blocked by obstructions (such as high mountains, valleys, and remote areas), and electromagnetic interference can degrade its position accuracy. If the drone cannot receive GPS signals, or if the GPS update rate cannot meet the aircraft's requirements, the drone will lose its position. Conversely, an INS has the advantages of small size, low cost, and independent positioning without relying on external information [17]. However, when an INS operates over a large area for a long time, its pose-estimation error accumulates, so the positioning error increases sharply and deviates significantly from the actual position. When drones are used for disaster relief, they are often required to work for long hours in harsh environments, such as remote mountain areas. Thus, GPS and INS alone cannot meet the requirements of drone positioning.
At present, scholars worldwide have conducted a significant amount of research on UAV positioning [18][19][20]. Padhy et al. [21] proposed a model for the autonomous navigation and collision avoidance of drones in GPS-denied corridor environments; detailed experiments conducted in different corridors proved the effectiveness of the scheme. Zhang et al. proposed an autonomous navigation and positioning method for UAVs based on electric field array detection [22]; their experiments demonstrated the effect of the positioning algorithm, with a positioning error within the allowed user error range, indicating good usability and application value. Liu et al. [23] proposed an interactive multiple-model positioning algorithm based on a robust cubature Kalman filter and analyzed the performance of the UAV positioning and navigation algorithm. However, these methods address only single-UAV positioning and do not consider multi-UAV coordinated positioning. In disaster environments, multiple UAVs often work collaboratively; thus, coordinated positioning is required to improve the positioning accuracy. Therefore, the coordinated positioning of multiple drones has become a crucial issue that must be addressed.
Regarding research on multi-UAV colocation [24][25][26][27][28][29], Sivaneri and Gross [30] proposed a UAV colocation algorithm for global navigation satellite system (GNSS)-degraded environments, which relies on a ground-level platform for navigation when the satellite signals are interfered with; however, the limitations of ground control station technology posed a problem. Meyer et al. [31] proposed a sigma point belief propagation positioning algorithm, which uses sampling points to characterize the state and can quickly perform collaborative positioning; this method is more suitable for indoor positioning and navigation and does not consider measurement information from devices such as satellites. Moreover, a large number of coordinated drones leads to a heavy computational load, which remains an unsolved issue. Vetrella et al. [32] proposed a collaborative positioning method that fuses inertial, magnetometer, available satellite pseudorange, cooperative drone position, and monocular camera information, which effectively improves the positioning performance of a drone swarm under GPS-constrained conditions; however, it requires a large number of sensors. Wan et al. [33] designed a dynamic nonparametric belief propagation algorithm to obtain UAV location information, but its computational cost is high, and only a two-dimensional space was simulated, making it unsuitable for large-scale UAV clusters. Fan et al. [34] proposed a multi-AUV colocation method based on factor graphs and the sum-product algorithm.
Experimental results show that its positioning accuracy is better than those of the extended Kalman filter (EKF) and unscented Kalman filter (UKF) algorithms; however, it is a centralized processing method. The aforementioned algorithms consider the difficulty of UAV positioning under conditions such as limited GNSS signals. However, problems remain, such as the inability to eliminate the constraints of the ground control station and the need for a large number of sensors, making such methods unsuitable for disaster environments. Moreover, existing multi-UAV coordinated positioning methods do not consider the influence of the speed and heading errors of the drone on the positioning accuracy during flight. In response to the above problems, this paper proposes a multi-UAV colocation algorithm based on a factor graph (MUAV-FGC). The main contributions of the algorithm are as follows: (1) analyzed the problem of multi-UAV colocation error in a disaster environment, proposed a multi-UAV factor graph colocation algorithm, and designed a factor graph model of master-slave UAV coordinated location

Wireless Communications and Mobile Computing
A Cartesian coordinate system is established with the UAV transmitting base station B as the origin. In the figure, MUAV denotes the main UAV, and UAV denotes the UAV to be positioned. In the positioning process, it is assumed that the position of the main UAV is accurate; that is, the main UAV acts as an airborne 5G base station, and its position information is obtained through 5G data communication between the slave UAV and the main UAV.
The three-dimensional motion model of the UAV can be expressed as

$$\dot{x} = v \cos\beta \cos\alpha, \quad \dot{y} = v \cos\beta \sin\alpha, \quad \dot{z} = v \sin\beta, \qquad (1)$$

where $(x, y, z)$, $v$, and $(\alpha, \beta)$ represent the position coordinates of the slave drone, the speed of the slave drone, and the horizontal and vertical heading angles, respectively.
If the speed, measured horizontal heading angle, and vertical heading angle of the drone at time $k$ are $v_k$, $\alpha_k$, and $\beta_k$, respectively, the position of the drone at time $k$ is $(x_k, y_k, z_k)$, the position at time $k-1$ is $(x_{k-1}, y_{k-1}, z_{k-1})$, and $\Delta t$ is the sampling period, then the following three-dimensional discrete motion model of the UAV is obtained:

$$x_k = x_{k-1} + v_k \Delta t \cos\beta_k \cos\alpha_k, \quad y_k = y_{k-1} + v_k \Delta t \cos\beta_k \sin\alpha_k, \quad z_k = z_{k-1} + v_k \Delta t \sin\beta_k. \qquad (2)$$

The slave and master drones communicate with each other to obtain relative position information. The relative distance is calculated as

$$d_k = \sqrt{(x_k - x_k^m)^2 + (y_k - y_k^m)^2 + (z_k - z_k^m)^2}, \qquad (3)$$

where $(x_k^m, y_k^m, z_k^m)$ represents the position of the main drone at time $k$.
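The discrete motion model (2) and the relative distance (3) can be sketched directly in code; the function names below are illustrative, not from the paper.

```python
import math

def dead_reckon(pos_prev, v, alpha, beta, dt):
    """One step of the discrete motion model: alpha is the horizontal
    heading angle, beta the vertical heading angle, dt the sampling period."""
    x, y, z = pos_prev
    return (x + v * dt * math.cos(beta) * math.cos(alpha),
            y + v * dt * math.cos(beta) * math.sin(alpha),
            z + v * dt * math.sin(beta))

def relative_distance(pos_slave, pos_master):
    """Euclidean distance between the slave and master UAV positions."""
    return math.dist(pos_slave, pos_master)

# Level flight (beta = 0) along the x-axis at 10 m/s for one 1 s step
p1 = dead_reckon((0.0, 0.0, 100.0), v=10.0, alpha=0.0, beta=0.0, dt=1.0)
d = relative_distance(p1, (10.0, 0.0, 100.0))  # master at the same point, so d == 0.0
```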

Positioning Error Analysis.
According to the three-dimensional motion model of the slave drone, the speed and heading measured at time $a$ are

$$\hat{v}_a = v_a + \tilde{v}_a, \quad \hat{\alpha}_a = \alpha_a + \tilde{\alpha}_a, \quad \hat{\beta}_a = \beta_a + \tilde{\beta}_a, \qquad (5)$$

where $v_a$ represents the true speed of the slave drone at time $a$ and $\tilde{v}_a$ represents its speed error; $\alpha_a, \beta_a$ represent the true heading angles of the slave drone at time $a$, and $\tilde{\alpha}_a, \tilde{\beta}_a$ represent the corresponding heading errors. Substituting equation (5) into equation (4) and expanding gives equations (6) and (7). Assuming that the speed and heading errors are infinitesimal,

$$\sin\tilde{\alpha}_a \approx \tilde{\alpha}_a, \quad \cos\tilde{\alpha}_a \approx 1, \quad \sin\tilde{\beta}_a \approx \tilde{\beta}_a, \quad \cos\tilde{\beta}_a \approx 1, \qquad (8)$$

and substituting equation (8) into equation (7) yields equation (9). From equation (9), it can be concluded that the coordinate position error is directly proportional to the speed error and positively correlated with the heading angle errors. Both the speed and heading errors therefore affect the positioning and increase the UAV positioning error. Based on the above analysis, this article incorporates the speed and heading angle errors into the positioning process.
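The proportionality between the position error and the speed error noted above can be checked numerically with the discrete motion model; this is a minimal sketch, not part of the paper's derivation.

```python
import math

def step_x(x, v, alpha, beta, dt=1.0):
    """x-component of the discrete motion model."""
    return x + v * dt * math.cos(beta) * math.cos(alpha)

v, alpha, beta, dt = 10.0, 0.3, 0.1, 1.0
x_true = step_x(0.0, v, alpha, beta, dt)

# Perturbing the speed by dv shifts x by exactly dv*dt*cos(beta)*cos(alpha),
# i.e., the position error grows linearly with the speed error.
for dv in (0.01, 0.02, 0.04):
    err = step_x(0.0, v + dv, alpha, beta, dt) - x_true
    assert abs(err - dv * dt * math.cos(beta) * math.cos(alpha)) < 1e-9
```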

3. Multi-UAV Factor Graph Colocation Algorithm

3.1. Colocation Analysis of Factor Graph. Assume that the slave UAV cannot receive the location information of the main UAV at time $a$ and, after a period of time, receives it again at time $k$. The posterior position estimate from time $a$ to time $k$ is then difficult to calculate, which easily leads to complicated speed and heading errors. Therefore, assuming that the interval from time $a$ to time $k$ is short, the speed and heading errors can be regarded as unchanged over it (equation (10)). Accordingly, the error $\delta x_a, \delta y_a, \delta z_a$ of the coordinate position at time $a$ can be written as

$$\delta x_a = x_a - x_a^-, \quad \delta y_a = y_a - y_a^-, \quad \delta z_a = z_a - z_a^-, \qquad (11)$$

where $x_a^-, y_a^-, z_a^-$ represent a priori estimates of $x_a, y_a, z_a$, respectively. Because no position information is received from time $a$ to time $k$, an a priori estimate is used to express the position (equation (12)). Substituting equation (11) into equation (12) yields equation (13). Correspondingly, the coordinate position error $\delta x_k, \delta y_k, \delta z_k$ at time $k$ can be written as in equation (14) and rewritten as in equation (15). When the master and slave drones are colocated, the position update is calculated after subtracting the corresponding estimation errors from the measured values of the speed and heading (equation (16)). According to equation (14) and the UAV's three-dimensional discrete motion and relative distance models, the information transfer between the variables can be obtained, as demonstrated in Figure 2. The rectangular and elliptical boxes in the figure represent the function (factor) nodes and variable nodes, respectively.

3.2. Procedure of the Factor Graph Colocation Algorithm. As shown in Figure 2, the multi-UAV factor graph colocation algorithm calculates the mean and variance of the information transferred between the variable and function nodes. The specific steps are as follows.

3.2.1. Initialization. The initial state before colocation is set:

3.2.2. Update of the A Priori Estimates $\hat{x}_k^-, \hat{y}_k^-, \hat{z}_k^-$. At noninitial times, dead reckoning can be performed according to equation (2). $\mu_v$ and $\sigma_v^2$ are the expectation and variance of the speed, and $\mu_\alpha, \sigma_\alpha^2$ and $\mu_\beta, \sigma_\beta^2$ are the expectations and variances of the heading angles, respectively. Because the speed is not correlated with the heading, $\mathrm{cov}(v, \alpha) = 0$ and $\mathrm{cov}(v, \beta) = 0$, and we obtain the following formula:
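A first-order (delta-method) propagation of the mean and variance of the a priori estimate, under the stated assumption that the speed and heading are uncorrelated, might look as follows; the function name and exact form are a sketch, not the paper's equations.

```python
import math

def prior_x_update(x_mean, x_var, mu_v, var_v, mu_a, var_a, mu_b, var_b, dt):
    """Propagate mean and variance of x_k^- = x_{k-1} + v*dt*cos(beta)*cos(alpha)
    to first order, with v, alpha, beta mutually uncorrelated."""
    ca, sa = math.cos(mu_a), math.sin(mu_a)
    cb, sb = math.cos(mu_b), math.sin(mu_b)
    mean = x_mean + dt * mu_v * cb * ca
    # Partial derivatives of the increment with respect to v, alpha, beta
    d_v = dt * cb * ca
    d_a = -dt * mu_v * cb * sa
    d_b = -dt * mu_v * sb * ca
    var = x_var + d_v**2 * var_v + d_a**2 * var_a + d_b**2 * var_b
    return mean, var

m, s2 = prior_x_update(0.0, 0.0, 10.0, 0.01, 0.0, 1e-4, 0.0, 1e-4, 1.0)
# With mu_alpha = mu_beta = 0 the heading partials vanish, so s2 = var_v
```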

3.2.3. First Update of $x_k, y_k, z_k$. At this time, $x', y', z'$ after the coordinate transformation (Figure 2) have not yet been calculated, and the variable nodes $x_k, y_k, z_k$ and $\hat{x}_k^-, \hat{y}_k^-, \hat{z}_k^-$ are equal:

3.2.4. First Update of $x_k', y_k', z_k'$. To avoid the coincidence of the coordinate axes of the master and slave drones, and to prevent the resulting large calculation errors, the coordinates of the master and slave drones are transformed once in the function node to obtain $x_k', y_k', z_k'$ and $x_k'^m, y_k'^m, z_k'^m$:

$$x_k' = x_k(\cos\theta_y \cos\theta_z - \sin\theta_x \sin\theta_y \sin\theta_z) - y_k \cos\theta_x \sin\theta_z + z_k(\sin\theta_y \cos\theta_z + \sin\theta_x \cos\theta_y \sin\theta_z),$$
$$y_k' = x_k(\cos\theta_y \sin\theta_z + \sin\theta_x \sin\theta_y \cos\theta_z) + y_k \cos\theta_x \cos\theta_z + z_k(\sin\theta_y \sin\theta_z - \sin\theta_x \cos\theta_y \cos\theta_z),$$
$$y_k'^m = x_k^m(\cos\theta_y \sin\theta_z + \sin\theta_x \sin\theta_y \cos\theta_z) + y_k^m \cos\theta_x \cos\theta_z + z_k^m(\sin\theta_y \sin\theta_z - \sin\theta_x \cos\theta_y \cos\theta_z),$$

where $\theta_x$, $\theta_y$, and $\theta_z$ are the angles of rotation about the x-, y-, and z-axes, respectively; the remaining transformed coordinates are obtained analogously.
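The transformation above can be transcribed directly; the sketch below implements only the $x_k'$ and $y_k'$ formulas given in the text ($z_k'$ and the master-drone coordinates follow the same pattern) and checks the identity case of zero rotation angles.

```python
import math

def rotate_xy(x, y, z, tx, ty, tz):
    """Coordinate transformation for x', y' with rotation angles
    tx, ty, tz about the x-, y-, and z-axes, as given in the text."""
    sx, cx = math.sin(tx), math.cos(tx)
    sy, cy = math.sin(ty), math.cos(ty)
    sz, cz = math.sin(tz), math.cos(tz)
    xp = (x * (cy * cz - sx * sy * sz)
          - y * cx * sz
          + z * (sy * cz + sx * cy * sz))
    yp = (x * (cy * sz + sx * sy * cz)
          + y * cx * cz
          + z * (sy * sz - sx * cy * cz))
    return xp, yp

# Zero rotation angles leave the coordinates unchanged
assert rotate_xy(3.0, 4.0, 5.0, 0.0, 0.0, 0.0) == (3.0, 4.0)
```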

3.2.5. Update of $\Delta x_k, \Delta y_k, \Delta z_k$. The role of function nodes B, C, and D is to convert between the relative and absolute position information; the probability density transferred from these function nodes to the nodes $\Delta x_k, \Delta y_k, \Delta z_k$ follows accordingly. Owing to $d^2 = \Delta x_k^2 + \Delta y_k^2 + \Delta z_k^2$, the probability density function passed from function node E to the variable nodes can be expressed in terms of $d$.

3.2.6. Second Update of $x_k', y_k', z_k'$. The probability density transferred from function nodes B, C, and D to $(x_k', y_k', z_k')$ is computed next.

3.2.7. Second Update of $x_k, y_k, z_k$. After calculation and simplification, the updated probability densities of $(x_k, y_k, z_k)$ are

$$x_k \sim N(x_k; \mu_{x_k}, \sigma_{x_k}^2), \quad y_k \sim N(y_k; \mu_{y_k}, \sigma_{y_k}^2), \quad z_k \sim N(z_k; \mu_{z_k}, \sigma_{z_k}^2),$$

where $\mu''_{x_k}, \mu''_{y_k}, \mu''_{z_k}$ and the corresponding variances are obtained through the inverse coordinate transformation. The coordinate conversion process is as follows:

3.3. Error Analysis of the Factor Graph Colocation Algorithm.
The error formula can be rewritten accordingly. In the case of colocation, the position update is calculated after subtracting the corresponding estimation errors from the measured values of the speed and heading; the expectation and variance can then be expressed correspondingly. After the slave drone receives the measurement information of the main drone at time $a$ and performs error estimation, $\delta x_{\tilde{v},a} = 0$, and $\delta x_{\tilde{v},a}$ is used as the starting point of $\delta x_{\tilde{v},k}$ for the next update. At this time, the expectation and variance of $\delta x_{\tilde{v},a}$ are both 0. Subsequently, $\delta x_{\tilde{v},k}$ after time $a$ can be recursively expressed as

$$\delta x_{\tilde{v},k} = \delta x_{\tilde{v},k-1} + \tilde{v}_k \Delta t \cos\beta_k \cos\alpha_k;$$

however, because the actual values of $\beta_k, \alpha_k$ cannot be determined, they are represented by the measured values $\hat{\beta}_k, \hat{\alpha}_k$. The expectation and variance of $\delta x_{\tilde{v},k}$ can then be expressed recursively. Similarly, the expressions of $\delta y_{\tilde{v},k}, \delta z_{\tilde{v},k}, \delta x_{\tilde{\alpha},k}, \delta y_{\tilde{\alpha},k}, \delta z_{\tilde{\alpha},k}, \delta x_{\tilde{\beta},k}, \delta y_{\tilde{\beta},k}, \delta z_{\tilde{\beta},k}$ and the corresponding expectations and variances can be obtained.
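The recursion for the speed-induced position error and its moments can be sketched as follows, assuming the increment $\tilde{v}_k \Delta t \cos\hat{\beta}_k \cos\hat{\alpha}_k$ with the measured headings treated as known constants; the function name is illustrative.

```python
import math

def propagate_dx_v(mu_prev, var_prev, mu_v_err, var_v_err,
                   alpha_hat, beta_hat, dt):
    """One step of dx_v[k] = dx_v[k-1] + v_err*dt*cos(beta_hat)*cos(alpha_hat),
    propagating the expectation and variance of the accumulated error."""
    g = dt * math.cos(beta_hat) * math.cos(alpha_hat)
    return mu_prev + g * mu_v_err, var_prev + g * g * var_v_err

# Starting from zero error at time a, the variance grows with each step
mu, var = 0.0, 0.0
for _ in range(3):
    mu, var = propagate_dx_v(mu, var, 0.0, 0.01, 0.0, 0.0, 1.0)
# Zero-mean speed error: mu stays 0 while var accumulates to about 0.03
```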

Equation (32) presents the update formula of $\tilde{v}_k$. If the slave drone does not receive the position information of the master drone at time $k$, then $\tilde{v}_k$ cannot be updated, and the expectation and variance can be expressed as follows:

As shown in Figure 2, the information of the variable node $\delta x_{\tilde{v},k}$ is obtained from the variable nodes $\delta x_{\tilde{v},k-1}$, $\hat{\alpha}_k$, and $\hat{\beta}_k$ through the function node and can be expressed as $N(\delta x_{\tilde{v},k}; \mu_{\delta x_{\tilde{v},k}}, \sigma^2_{\delta x_{\tilde{v},k}})$. Similarly, the information of the variable node $\delta y_{\tilde{v},k}$ is obtained from $\delta y_{\tilde{v},k-1}$, $\hat{\alpha}_k$, and $\hat{\beta}_k$ and can be expressed as $N(\delta y_{\tilde{v},k}; \mu_{\delta y_{\tilde{v},k}}, \sigma^2_{\delta y_{\tilde{v},k}})$, and the information of the variable node $\delta z_{\tilde{v},k}$ is obtained from $\delta z_{\tilde{v},k-1}$, $\hat{\alpha}_k$, and $\hat{\beta}_k$ and can be expressed as $N(\delta z_{\tilde{v},k}; \mu_{\delta z_{\tilde{v},k}}, \sigma^2_{\delta z_{\tilde{v},k}})$.
The information of the variable node $\delta x_{\tilde{\alpha},k}$ is obtained from $\delta x_{\tilde{\alpha},k-1}$, $\tilde{\alpha}_k$, and $\tilde{v}_k$ through the function node and can be expressed as $N(\delta x_{\tilde{\alpha},k}; \mu_{\delta x_{\tilde{\alpha},k}}, \sigma^2_{\delta x_{\tilde{\alpha},k}})$; the corresponding information of $\delta y_{\tilde{\alpha},k}$ and $\delta z_{\tilde{\alpha},k}$ is obtained analogously and expressed as $N(\delta y_{\tilde{\alpha},k}; \mu_{\delta y_{\tilde{\alpha},k}}, \sigma^2_{\delta y_{\tilde{\alpha},k}})$ and $N(\delta z_{\tilde{\alpha},k}; \mu_{\delta z_{\tilde{\alpha},k}}, \sigma^2_{\delta z_{\tilde{\alpha},k}})$. Furthermore, the information of the variable node $\delta x_{\tilde{\beta},k}$ is obtained from $\delta x_{\tilde{\beta},k-1}$, $\tilde{\beta}_k$, and $\tilde{v}_k$ and expressed as $N(\delta x_{\tilde{\beta},k}; \mu_{\delta x_{\tilde{\beta},k}}, \sigma^2_{\delta x_{\tilde{\beta},k}})$, with $\delta y_{\tilde{\beta},k}$ and $\delta z_{\tilde{\beta},k}$ obtained in the same way.
The information passed from the function nodes at time $k-1$ to the variable nodes $\tilde{v}_k$, $\tilde{\alpha}_k$, and $\tilde{\beta}_k$ can be expressed as $N(\tilde{v}_k; \mu_{\tilde{v}_k}, \sigma^2_{\tilde{v}_k})$, $N(\tilde{\alpha}_k; \mu_{\tilde{\alpha}_k}, \sigma^2_{\tilde{\alpha}_k})$, and $N(\tilde{\beta}_k; \mu_{\tilde{\beta}_k}, \sigma^2_{\tilde{\beta}_k})$, respectively. The information transferred from function nodes F, G, and H to the variable nodes $\delta x_k$, $\delta y_k$, and $\delta z_k$ can be represented as $N(\delta x_k; \mu_{\delta x_k}, \sigma^2_{\delta x_k})$, $N(\delta y_k; \mu_{\delta y_k}, \sigma^2_{\delta y_k})$, and $N(\delta z_k; \mu_{\delta z_k}, \sigma^2_{\delta z_k})$, respectively. The information transferred from function node I to the variable node $\tilde{v}_k$ can be represented as $N(\tilde{v}_k^+; \mu_{\tilde{v}^+}, \sigma^2_{\tilde{v}^+})$, that transferred to $\tilde{\alpha}_k$ as $N(\tilde{\alpha}_k^+; \mu_{\tilde{\alpha}^+}, \sigma^2_{\tilde{\alpha}^+})$, and that transferred to $\tilde{\beta}_k$ as $N(\tilde{\beta}_k^+; \mu_{\tilde{\beta}^+}, \sigma^2_{\tilde{\beta}^+})$. The a priori estimate of $\tilde{v}_k$ and the information passed from function node I to the variable node $\tilde{v}_k$ can be combined to obtain the posterior estimate of $\tilde{v}_k$ at time $k$; the posterior estimates of $\tilde{\alpha}_k$ and $\tilde{\beta}_k$ are obtained similarly.
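Combining the a priori Gaussian estimate with the Gaussian message from function node I amounts to the standard renormalized product of two Gaussians; a minimal sketch:

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Posterior mean and variance from two Gaussian beliefs
    (renormalized product), as used to update the v, alpha, beta errors."""
    var = var1 * var2 / (var1 + var2)
    mu = (mu1 * var2 + mu2 * var1) / (var1 + var2)
    return mu, var

# A tighter message (smaller variance) pulls the posterior toward itself
mu, var = fuse_gaussians(10.0, 4.0, 12.0, 1.0)
```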

Simulation Results and Analysis
The movement of the drones was simulated in MATLAB to verify the feasibility of the algorithm. The simulation conditions were as follows: five UAVs moved in an area of 1000 m × 600 m × 300 m. The position of the main UAV was accurately determined. The slave UAVs and the main UAV communicated with each other, and each slave UAV obtained location and observation information from the main UAV. Figure 3 illustrates the initial positions of the slave and main UAVs.
In the experiment, the MUAV-FGC algorithm estimates the speed error of the slave UAV, as shown in Figure 4. The abscissa in the figure represents time (s), and the ordinate represents the estimated speed error (m/s). The figure shows that the speed error continues to accumulate as the experiment progresses. The estimated average speed error of the slave UAV was 0.0644 m/s over the 600 s experiment. Therefore, the speed error must be compensated during the positioning process to improve the positioning accuracy.
In the experiment, the MUAV-FGC algorithm also estimates the heading angle error of the slave UAV, as shown in Figure 5.

Wireless Communications and Mobile Computing
In the figure, the abscissa represents the experiment time (s), the ordinate represents the heading angle error (°), the curve marked with "*" is the horizontal heading angle error curve, and the curve marked with "⚪" is the vertical heading angle error curve. As shown, the vertical heading angle error fluctuates between -0.2° and 0.2°, and the horizontal heading angle error stabilizes at approximately -0.58° after 400 s. The heading angle error also affects the positioning; therefore, it must be compensated during the positioning process to improve the positioning accuracy.

Figure 6 presents the positioning error curves of the MUAV-FGC and EKF algorithms. In the figure, the abscissa represents time (s), the ordinate represents the positioning error (m), the curve marked with "*" is the positioning error curve of the MUAV-FGC algorithm proposed in this study, and the curve marked with "⚪" is that of the EKF algorithm. As shown in Figure 6, the MUAV-FGC curve lies below the EKF curve, indicating that the average positioning error of the MUAV-FGC algorithm is smaller and its positioning accuracy correspondingly higher. This is because the EKF algorithm does not estimate and compensate for the speed and heading errors, resulting in a larger positioning error, whereas the MUAV-FGC algorithm significantly reduces it.

Figure 7 demonstrates the mean and root mean square values of the positioning error of the EKF and MUAV-FGC algorithms in the experiment. The root mean square and mean values of the EKF algorithm are 16.47 m and 5.09 m, whereas those of the MUAV-FGC algorithm are 13.58 m and 4.13 m, respectively. The positioning error of the MUAV-FGC algorithm is thus smaller than that of the EKF algorithm. Compared with the EKF algorithm, the mean positioning and

root mean square errors of the MUAV-FGC algorithm were reduced by approximately 18.86% and 17.54%, respectively.
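As a quick consistency check, the reported percentage reductions follow directly from the mean and root mean square values quoted above:

```python
ekf_rmse, ekf_mean = 16.47, 5.09
fgc_rmse, fgc_mean = 13.58, 4.13

mean_reduction = (ekf_mean - fgc_mean) / ekf_mean * 100  # about 18.9 %
rmse_reduction = (ekf_rmse - fgc_rmse) / ekf_rmse * 100  # about 17.5 %
```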

Conclusion
In this study, we addressed the problem of multi-UAV colocation in a disaster environment, proposed a multi-UAV factor graph colocation algorithm, and derived the position error formulas with respect to the speed and heading angle errors. In the factor graph model, the expectation and variance of the variables are used as the information transferred between factors, and the error estimation process based on the factor graph is derived. The simulation results show that, compared with the EKF algorithm, the MUAV-FGC algorithm reduces the average positioning error and root mean square error by approximately 18.86% and 17.54%, respectively, providing better positioning performance.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.