Multirobot Collaborative Navigation Algorithms Based on Odometer/Vision Information Fusion

Collaborative navigation is the key technology for multimobile robot systems. In order to improve the performance of the collaborative navigation system, collaborative navigation algorithms based on odometer/vision multisource information fusion are presented in this paper. Firstly, the multisource information fusion collaborative navigation system model is established, including the mobile robot model, odometry measurement model, lidar relative measurement model, UWB relative measurement model, and the SLAM model based on lidar measurement. Secondly, the frameworks of centralized and decentralized collaborative navigation based on odometer/vision fusion are given, and the SLAM algorithms based on vision are presented. Then, the centralized and decentralized odometer/vision collaborative navigation algorithms are derived, including the time update, single-node measurement update, relative measurement update between nodes, and the covariance intersection filtering algorithm. Finally, different simulation experiments are designed to verify the effectiveness of the algorithms. Two kinds of multirobot collaborative navigation experimental scenes, relative measurement aided odometry and odometer/SLAM fusion, are designed, respectively. The advantages and disadvantages of centralized versus decentralized collaborative navigation algorithms in the different experimental scenes are analyzed.


Introduction
With the development of the age of intelligence, the application scenarios and task requirements of intelligent mobile robots are becoming more and more complex [1,2]. A single robot has difficulty in meeting the needs of humans for highly automated robots. Therefore, the multirobot system, which fully embodies group wisdom, has broad prospects, and the subject has attracted extensive attention from researchers [3][4][5]. The premise of robot collaborative navigation operation is that each robot has a certain ability of navigation and localization; research on robot collaborative navigation systems is therefore very important [6]. The navigation accuracy of a single robot depends on its own navigation system, independent of other moving bodies.
This navigation method is relatively simple, but factors such as the computing power of the robot's own processor, sensor quality, and sensor field of view limit the working range of the navigation system to a certain extent during navigation and positioning. They also weaken the ability of the navigation system to suppress noise, reduce errors, and adapt to complex environments. A robot collaborative navigation system can make up for the limitations of single-robot navigation. When there are relative measurements between robots, the relative information can be fully used to correct the navigation results. The robots can also establish links with each other to share navigation resources and obtain better navigation performance [7][8][9].
In the research of mobile robot navigation and localization technology, the development of Simultaneous Localization and Mapping (SLAM) provides a new approach for the autonomous positioning of mobile robots in complex scenes [10]. At the same time, radio navigation and visual navigation, as significant technical means of realizing relative navigation, are receiving much attention. They provide important technical support for further research on the collaborative navigation of multirobots [11][12][13]. However, traditional integrated navigation technology is far from meeting the growing demand for high-performance collaborative navigation.
At present, most data fusion methods for multisource heterogeneous sensors in integrated navigation systems use centralized data fusion algorithms. Such algorithms have some defects in the collaborative navigation of mobile robots, such as high communication cost and poor robustness. However, current research on decentralized data fusion algorithms suitable for the collaborative navigation environment is not yet mature [14,15].
Therefore, based on the information fusion of multisource heterogeneous sensors and related SLAM data association algorithms, this paper puts forward a new concept, model, and solution for multisource heterogeneous sensor information fusion, which can significantly improve the performance of collaborative navigation, address some problems of cooperative navigation technology, explore the key technologies that need to be solved in the collaborative navigation process of a multirobot platform, and provide a theoretical basis and technical support for high-performance collaborative navigation applications. The structure of this paper is outlined as follows: in Section 2, the multisource information fusion collaborative navigation system model is established, and the principles of the modules involved and the sources of error are analyzed. In Section 3, the frameworks of centralized and decentralized collaborative navigation based on odometer/vision fusion are given, and the SLAM algorithms based on vision are presented. In Sections 4 and 5, the centralized and decentralized odometer/vision collaborative navigation algorithms are derived, respectively. In Section 6, different simulation experiments are designed to verify the effectiveness of the algorithms.

Motion Robot Model.
Suppose that the mobile robot moves along a circular arc trajectory. Figure 1 shows the motion of a single mobile robot.

Odometry Measurement Model.
According to the principle of odometer motion calculation [16], it is assumed that the starting position of the mobile robot is (x, y, θ); after moving for ΔT, the end position is (x + Δx, y + Δy, θ + Δθ), and the trajectory can be regarded as a circular arc ΔS with radius R. In this case, the pose of the robot (in discrete time) is given by the corresponding kinematic equations, where ΔN_L and ΔN_R are the increments of the pulse counts of the left and right wheel encoders, (ΔS_r, ΔS_l) are the distances travelled by the right and left wheels of the mobile robot, and P is the resolution parameter of the left and right wheel encoders. The pose increment over the sampling period ΔT is obtained by solving the above equations. The discrete motion equation of the mobile robot is then obtained.
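As a concrete sketch of this dead-reckoning step, the following function implements the arc-based pose update under stated assumptions: the wheel separation `b` and the per-pulse resolution `P` are illustrative parameters, not values from the paper.

```python
import math

def odometry_update(x, y, theta, dN_L, dN_R, P):
    """One dead-reckoning step for a differential-drive robot.

    dN_L, dN_R : encoder pulse increments of the left/right wheels
    P          : metres travelled per encoder pulse (assumed resolution)
    The wheel separation b is a hypothetical platform parameter.
    """
    b = 0.5                      # wheel separation in metres (assumed)
    dS_l = dN_L * P              # distance travelled by the left wheel
    dS_r = dN_R * P              # distance travelled by the right wheel
    dS = (dS_r + dS_l) / 2.0     # arc length of the robot centre
    dTheta = (dS_r - dS_l) / b   # heading increment over the period
    # midpoint (arc) approximation of the pose increment
    x += dS * math.cos(theta + dTheta / 2.0)
    y += dS * math.sin(theta + dTheta / 2.0)
    theta += dTheta
    return x, y, theta
```

For equal pulse counts on both wheels the robot moves in a straight line along its current heading, which is a quick sanity check of the model.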

Odometer Error Model.
There are systematic errors and random errors in odometer-based positioning [17]. In the simulation experiment, the influence of random errors is mainly considered. The corresponding covariance matrix Σ_odom is given by

Lidar Environmental Detection Model.
As the main module for realizing SLAM, the environment detection module mainly obtains information about the surrounding environment through external sensing sensors.

Lidar Scanning Model.

The two-dimensional lidar outputs point cloud data of the surrounding environment, including angular information. The scanning schematic of the two-dimensional lidar is shown in Figure 2. As shown in Figure 2, the lidar scans in the sensor coordinate system along the scanning direction with a fixed scanning angle resolution. Suppose a measured point is detected in the ith direction. Denote θ_i as the angle between this direction and the positive direction of the X_s-axis; θ_i can be obtained from the inherent scanning angle resolution. A frame of point cloud data is obtained after one scan, which can be recorded as a set {(d_i, θ_i) | i = 1, ..., n}, where n is the number of scanned data points in this frame and (d_i, θ_i) are the coordinates of a measured point in the polar coordinate system. The final point cloud data set B = {(x_i, y_i) | i = 1, ..., n} is obtained by converting to Cartesian coordinates.
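A minimal sketch of the polar-to-Cartesian conversion that produces the set B; taking the first beam as aligned with the positive X_s-axis is an assumption about the sensor's zero direction, not something stated in the paper.

```python
import math

def scan_to_points(ranges, angle_resolution_deg):
    """Convert one frame of 2D lidar data from polar to Cartesian.

    ranges : list of distances d_i, one per scanning direction
    theta_i is taken as i * angle_resolution, with the first beam
    along the positive X_s-axis (assumed zero direction).
    """
    points = []
    for i, d in enumerate(ranges):
        theta = math.radians(i * angle_resolution_deg)
        points.append((d * math.cos(theta), d * math.sin(theta)))
    return points
```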

Lidar Error Model.
In the simulation experiment, the above error can be simplified to Gaussian white noise along the x and y directions in the carrier coordinate system of the mobile robot [18]. It is assumed that the positioning errors in the two directions are independent, so the corresponding covariance matrix is given by a diagonal matrix, where σ_bx and σ_by represent the variances along the x and y directions of the observation in the carrier coordinate system of the mobile robot, respectively.

UWB Relative Measurement Model.
The relative measurement module is an important part of the cooperative navigation network, through which nodes obtain the state information of surrounding nodes. UWB (Ultra Wide Band), which can measure relative distance, and lidar, which can measure relative position, are selected as the research objects in this paper. Because the scanning model of lidar has been described in the previous section, this section only establishes the corresponding ranging model and error model.

UWB Ranging Model.

At present, there are two main ranging methods: the two-way ranging (TWR) method and the symmetric double-sided two-way ranging (SDS-TWR) method. SDS-TWR greatly reduces the ranging error caused by imperfect clock synchronization compared with TWR. Therefore, the SDS-TWR method is often used for distance measurement [19] (see Figure 3). The distance between transceiver A and transceiver B can be obtained from Figure 3, where d is the distance between the two transceivers under test and T_A1 = T_a2 − T_a1 is the time interval between transceiver A sending its first signal and receiving the reply. T_A2, T_B1, and T_B2 are obtained in the same way.
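The textbook SDS-TWR propagation-time expression can be sketched as follows; the mapping of the four measured intervals onto the paper's T_A1, T_A2, T_B1, T_B2 is an assumption, since the paper's full derivation is not reproduced here.

```python
C = 299_792_458.0  # speed of light in m/s

def sds_twr_distance(T_A1, T_A2, T_B1, T_B2):
    """Standard SDS-TWR distance estimate.

    T_A1, T_B2 : round-trip intervals measured at A and at B (assumed)
    T_A2, T_B1 : reply (turn-around) intervals at A and at B (assumed)
    The propagation time cancels most of the clock-offset error of
    plain TWR, which is why SDS-TWR is preferred for ranging.
    """
    t_prop = (T_A1 * T_B2 - T_A2 * T_B1) / (T_A1 + T_A2 + T_B1 + T_B2)
    return C * t_prop
```

With symmetric reply times of 100 µs and a true one-way flight time of 10 ns, the function recovers the 10 ns flight time exactly, corresponding to roughly 3 m.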

UWB Error Model.
One of the main sources of error in UWB ranging is the error caused by imperfect clock synchronization between the two transceivers [20], where k_A and k_B are the ratios of the actual to the expected clock frequency of transceivers A and B, respectively.

SLAM Model Based on Lidar Measurement.

SLAM is widely used in the field of mobile robot navigation and positioning [21]. It is the key technology for solving the simultaneous mapping problem, and it also provides a new idea for solving the path planning problem. SLAM technology mainly deals with the situation where the location of the mobile robot is unknown. The environmental map is built incrementally as the robot perceives the surrounding unfamiliar environment through its external sensors; at the same time, the state estimation of the mobile robot itself is obtained using the built map information. SLAM localization is the key technology for realizing autonomous positioning of a mobile robot in an unknown environment. It is of great research value for realizing more accurate navigation and localization under constrained conditions (such as the absence of GPS) [22]. By applying this technology to the framework of the mobile robot collaborative navigation system, the navigation performance of the whole collaborative navigation system can be effectively improved (see Figure 4).
It can be seen from Figure 4 that a mobile robot's localization using a SLAM navigation system is essentially a process of continuous estimation approximating the true value [23]. The triangles in the graph represent the mobile robot and the circles represent the observed landmarks, where grey circles represent estimated landmarks. The solid line connecting the triangles represents the real trajectory, and the dotted line is the estimated trajectory.

Collaborative Navigation Framework Based on Odometer/Vision
When the nodes in a collaborative navigation network participate in collaborative navigation, different data fusion algorithms can be used to fuse the data obtained by the multisource heterogeneous sensors of the different nodes. Two data fusion algorithms are generally used: centralized and decentralized data fusion algorithms [24].

Framework of Collaborative Navigation Algorithm.
In this subsection, a corresponding data fusion framework of the centralized collaborative navigation algorithm is designed for the general odometer/vision model, and a data fusion framework of decentralized collaborative navigation algorithm is designed in the same way.
In the centralized collaborative navigation structure, the measurement data obtained by each node are concentrated in a data fusion center for data fusion. In the decentralized collaborative navigation structure, each node shares some information with other nodes while processing its own sensor data (see Figure 5).
According to the odometer/vision collaborative navigation model, we use the most common EKF algorithms to simulate centralized and decentralized cooperative navigation. A centralized location algorithm (CL) and a decentralized location algorithm (DCL) are designed.
As shown in Figure 6, in the CL algorithm corresponding to the centralized cooperative navigation structure, each node sends the information obtained by its own sensors to the central server, which realizes data fusion through the EKF algorithm. Its state vector is the set of state vectors of all nodes and is updated according to the principle of dead reckoning. After that, the measurement information obtained after data association in the SLAM process and the relative measurement information between nodes are selected. The CL algorithm gathers the state and covariance information of all nodes, corrects them uniformly, and sends the corrected estimation results back to each node. Because all nodes participate in the joint measurement update process, the state information of each node is correlated with that of the others after the first update (see Figure 6).

If this correlation is reasonably estimated on the basis of the CL algorithm and the task of data fusion is distributed to each node, the DCL algorithm is obtained accordingly. Since the positions of the nodes in the algorithm are equivalent, only one node needs to be discussed (see Figure 7). In order to avoid overoptimal estimation to the greatest extent, the DCL algorithm in this paper introduces the concept of the split covariance intersection filter. The covariance of each node is divided into a correlated term and an uncorrelated term, and the time update process is basically consistent with that of a single node. The measurement update process takes two steps. Firstly, according to the measurement information of the SLAM navigation system, a state estimate is obtained by propagating the state and integrating the related information of the aided navigation system. Then, the state information sent by adjacent nodes and the relative measurement information between the nodes yield another state estimate of the node.
Here, relevant state information sent to neighboring nodes is used to update local maps (see Figure 7).

Landmark State Estimation Algorithm.
The key of the SLAM navigation algorithm lies in the process of data association. The positioning process of the SLAM navigation system is essentially a process of continuous estimation approximating the true value. This kind of probability estimation problem is usually solved by introducing an appropriate filter, the most common being the EKF algorithm (see Figure 8).
Because the odometer selected in this paper has a high sampling frequency and the lidar has the advantages of high precision and high reliability, the EKF algorithm, which has better real-time performance, is selected. The state estimation process of landmark information in SLAM based on the EKF is described below. The observation equation of the feature information obtained by lidar is as follows, where X_{k,l} = (x_{k,l}, y_{k,l})^T is the state vector of the landmark at time k and X_{k,r} = (x_{k,r}, y_{k,r}, θ_{k,r})^T is the state vector of the mobile robot at time k. n_k is the measurement noise; its variance matrix is R_k, which can be denoted as n_k ∼ N(0, R_k). Since the landmarks are static, the state estimate of a landmark at time k − 1 can be regarded as a priori estimate of its state at time k. The measurement update process based on the EKF is as follows. Step 1: calculate the innovation and the filter gain. Step 2: update the state estimation and the corresponding covariance, where Σ_k is the covariance matrix of the landmark state estimate at time k and H_k is the measurement matrix at time k.
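The two-step EKF landmark update above can be sketched as follows; the observation function `h` and its Jacobian `H` are supplied by the caller, since the paper's concrete observation equation is not reproduced here.

```python
import numpy as np

def ekf_landmark_update(X_l, Sigma, z, R, h, H):
    """One EKF measurement update for a static landmark.

    X_l   : prior landmark state estimate (from time k-1)
    Sigma : prior covariance of the landmark state
    z     : lidar observation of the landmark
    R     : measurement noise covariance
    h, H  : observation function and its Jacobian at the prior
    """
    # Step 1: innovation and filter gain
    innovation = z - h(X_l)
    S = H @ Sigma @ H.T + R
    K = Sigma @ H.T @ np.linalg.inv(S)
    # Step 2: state and covariance update
    X_new = X_l + K @ innovation
    Sigma_new = (np.eye(len(X_l)) - K @ H) @ Sigma
    return X_new, Sigma_new
```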
Remark 1. Any observed landmark information can be position corrected by the above method, and it is noted that such position correction is limited to the landmark in the local map observed at the k time.

Data Association Algorithm.
In a SLAM navigation system, the process of data association is an important prerequisite for state estimation. Incorrect data association is likely to lead to serious deviation of the estimation results [25,26]. At present, two data association algorithms are commonly used in SLAM technology: the nearest neighbor (NN) data association algorithm [27] and the joint compatibility branch and bound (JCBB) data association algorithm [28].
The NN algorithm requires less computation, but it easily forms wrong data associations when the density of feature information is large, which leads to divergence of the SLAM results, so it is only suitable for environments where the density of feature information is small and the system error is small. JCBB is an improvement of the NN algorithm that extends the association of single features in NN to all observed feature information, which is more binding and more reliable. The JCBB algorithm can obtain more credible association hypotheses than the NN algorithm and exclude some wrong ones. However, its computation load is obviously larger, which to some extent affects the real-time performance of the SLAM navigation system. To ensure the accuracy of data association in the SLAM process, reduce the amount of computation as much as possible, and enhance the real-time performance of the SLAM algorithm, this subsection describes an optimized data association algorithm. The classification method mentioned in [29] is used to divide the related feature information sets; finally, the appropriate feature information set in the local map and the preprocessed observation feature information set are selected to form the association space.
First, the collection of feature information in the local map is divided, where D[(x_m, y_m), (x_k, y_k)] is the relative distance between the feature information (x_k, y_k) of the local map and other feature information (x_m, y_m). Then, the observation feature information set is preprocessed and divided. In the actual navigation process, the observation feature information obtained by lidar contains noise. The purpose of preprocessing is to filter out some of this noise, improving the accuracy of data association and reducing the amount of computation at the same time. The judgment process is as follows, where Δ_D is a threshold determined by the performance of the laser sensor: when the relative distance between two pieces of observation feature information is less than the threshold, the observation feature information is considered to be a feature point; otherwise it is treated as a noise point and does not participate in the subsequent calculation.
After the sets are divided, the set of observed feature information is sorted according to the order of observation. Based on the division of the local map feature information set above, the subsets are divided in turn, and no point participates in the division repeatedly.
Finally, we select the appropriate association sets to execute the data association algorithm. A joint compatibility test is performed between each subset of local map feature information and the subset of observed feature information at the current time, and the feature information with the best test results is selected to form a new subset as the data association object.
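A simplified sketch of the preprocessing rule above: an observed point is kept as a feature candidate only if it lies within the threshold Δ_D of at least one other observed point, and isolated returns are treated as noise. This is one reading of the paper's judgment process, not its exact implementation.

```python
import math

def preprocess_observations(points, delta_D):
    """Filter raw lidar feature observations before data association.

    points  : list of (x, y) observation feature points
    delta_D : distance threshold set by the laser sensor's performance
    A point survives if some other observed point lies within delta_D;
    otherwise it is discarded as a noise point.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    kept = []
    for i, p in enumerate(points):
        if any(dist(p, q) < delta_D for j, q in enumerate(points) if j != i):
            kept.append(p)
    return kept
```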

Time Update.
First of all, the state space model should be established. The state vector of a single mobile robot with three degrees of freedom contains position and heading angle information. Suppose the number of nodes is N; the state space of the collaborative navigation system in the centralized framework contains the state vectors of all mobile robots in the group, the state vector of mobile robot i is X^i_k, and the state of the system is X_k. Then the state space equation of the system can be expressed as follows, where the function f_i(X, u) describes the kinematic characteristics of mobile robot i, u^i_k represents the input required by mobile robot i for dead reckoning at time k, and w^i_{k−1} is the system noise. It is assumed that the motion of any node is not affected by any other node and that each node moves independently. Therefore, the state transition matrix for centralized collaborative positioning is block diagonal, where J^i_{u(k−1)} and J^i_{X(k−1)} are the Jacobian matrices of the function f_i with respect to the control input and the state vector, respectively. The system noise variance matrix of the collaborative navigation system in the centralized framework follows, where Σ_u is the covariance matrix of the control input. Then, the time update process of the collaborative navigation system in the centralized framework can be deduced.

Single-Node Measurement Update.

This section describes the measurement update process involving only one node in the centralized framework. The aided navigation system selected is the SLAM navigation system, which integrates the landmark information of the surrounding environment measured by lidar. A measurement model based on this navigation system is built, and the process of measurement updating based on the EKF is described.

Measurement Model Based on SLAM.
The measurement model based on SLAM is the measurement model after data association. In this paper, the position information of landmarks obtained by lidar is taken as the observation; the observation equation is given below.
where (x^b_l, y^b_l) is the position information of the landmark obtained by lidar, (x^w_l, y^w_l) are the coordinates of the landmark in the world coordinate system, and (x^i_k, y^i_k, θ^i_k) is the state of mobile robot i at time k. n^i_k is the measurement noise and its variance matrix is R^i_k, which can be denoted as n^i_k ∼ N(0, R^i_k). After linearization and state extension, the observation equations of the whole system can be obtained, where ∇h^i_i is the Jacobian matrix of the function h_i(X^i_k).

Measurement Update Based on EKF.
Combined with the basic principle of the Kalman filter, the measurement update process of the aided navigation system for a single node can be obtained as follows. Step 1: calculate the innovation and the filter gain. Step 2: update the state estimation and the corresponding covariance. For the relative distance measurement between nodes, n^{ij}_k is the measurement noise, its variance matrix is R^{ij}_k = σ_UWB, which can be denoted as n^{ij}_k ∼ N(0, R^{ij}_k), and σ_UWB is the variance of UWB ranging.
After linearization and state extension, the observation equations of the whole system can be obtained, where ∇h^{ij}_i and ∇h^{ij}_j are the Jacobian matrices of the function h_ij(X_i, X_j), respectively.

Measurement Model Based on Relative Position.
Using lidar as the sensor, the relative observation among nodes can be realized in two ways: the direct method and the indirect method. The direct method measures the relative position between the two nodes directly; the indirect method uses lidar to observe the landmarks nearest to the two nodes, and the relative position between the two nodes is then obtained by correlation calculation. The state of mobile robot i at time k is denoted by (x^i_k, y^i_k, θ^i_k) and the state of mobile robot j by (x^j_k, y^j_k, θ^j_k). The coordinates of landmark L1 adjacent to mobile robot i are (x^w_{l1}, y^w_{l1}) in the world coordinate system and (x^i_{l1}, y^i_{l1}) in the coordinate system of mobile robot i; the coordinates of landmark L2 adjacent to mobile robot j are (x^w_{l2}, y^w_{l2}) in the world coordinate system and (x^j_{l2}, y^j_{l2}) in the coordinate system of mobile robot j. The specific solution process of the indirect method is as follows (see Figure 9): when mobile robot i observes mobile robot j at time k, the coordinates of mobile robot j in the coordinate system of mobile robot i are taken as the observation. The observation equations follow, where n^{ij}_k is the measurement noise with variance matrix R^{ij}_k, which can be denoted as n^{ij}_k ∼ N(0, R^{ij}_k).

Measurement Update Based on EKF.
Similarly, we can finally get the measurement update process for the relative observation between nodes.

Decentralized Collaborative Navigation Algorithm

The state and covariance information of each node under the decentralized collaborative navigation algorithm is calculated separately. In order to avoid overoptimal estimation to the maximum extent, the concept of the covariance intersection filter is introduced and the covariance of each node is divided into correlated and uncorrelated items.

Covariance Intersection Filter.
Given the state estimation vector X and the corresponding covariance matrix P, assume that P* is the covariance of the error between the state estimate X and the true state value; this can be expressed as follows. Consistency is a characteristic of the covariance matrix of the estimation [30]. When the covariance matrix of the state estimate is not less than the real covariance, the estimate is said to satisfy consistency, that is, no overoptimal estimation is produced. Suppose two state estimates X_1 and X_2 satisfy consistency, with corresponding covariances P_1 and P_2. If there is correlation between the two estimates, the Kalman filter may produce inconsistent results; in other words, it leads to overoptimal estimation.

Figure 9: Indirect observation schematic diagram.

where the covariances corresponding to the two state estimates X_1 and X_2 are P_{1d} + P_{1i} and P_{2d} + P_{2i}, respectively. P_{1d} and P_{2d} are the correlated covariance components corresponding to maximum correlation between the two state estimates, and P_{1i} and P_{2i} are the independent covariance components corresponding to absolute independence of the two state estimates. The weight w lies within the interval [0, 1]; it is an optimization parameter that minimizes the covariance after fusion, and any w in this interval ensures the consistency of the fusion result.
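The plain covariance-intersection fusion rule underlying this discussion can be sketched as follows; the paper's split CI additionally separates the dependent and independent covariance parts, which this sketch omits.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Covariance-intersection fusion of two consistent estimates.

    P^-1 = w P1^-1 + (1-w) P2^-1 guarantees a consistent fused
    covariance for any unknown cross-correlation; w in [0, 1] is
    normally chosen to minimize e.g. the trace of the fused P.
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1i + (1 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
    return x, P

def best_omega(P1, P2, steps=100):
    """Grid search for the w minimizing the trace of the fused covariance."""
    ws = np.linspace(1e-6, 1 - 1e-6, steps)
    z = np.zeros(P1.shape[0])
    traces = [np.trace(covariance_intersection(z, P1, z, P2, w)[1]) for w in ws]
    return ws[int(np.argmin(traces))]
```

When one estimate is clearly tighter than the other, the optimized w leans heavily toward it, which matches the intuition that CI discards little information from the better source.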

Time Update.
Before describing the time update process of the DCL algorithm, it is necessary to decompose the state information of the system in the framework of centralized collaborative navigation, which can be expressed as follows, where E_G is the set of states under the centralized collaborative navigation framework and X_G and P_G are the state space and the corresponding covariance matrix under the centralized collaborative navigation framework, respectively. The state propagation process under the decentralized collaborative navigation framework is the state propagation process of a single node, and the propagation process of the covariance can be expressed as follows, where P^{i,−}_k is the one-step prediction covariance matrix of mobile robot i at time k and P^{i,−}_{k,i} is the independent covariance component of that matrix. J^i_{X(k−1)} and J^i_{u(k−1)} are the Jacobian matrices of the function f_i(X, u) with respect to the state vector and the control input, and Σ_u is the error matrix of the control input.
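A sketch of this DCL time update, propagating both the full covariance and its independent component with the same Jacobians; treating the control-input noise as independent of other nodes (and hence adding it to both parts) follows the split covariance intersection idea, but the exact bookkeeping in the paper may differ.

```python
import numpy as np

def dcl_time_update(X, P, P_ind, Fx, Fu, Sigma_u, f):
    """Time update of one node in the decentralized (DCL) filter.

    X       : state of node i
    P       : full covariance (correlated + independent parts)
    P_ind   : independent covariance component of node i
    Fx, Fu  : Jacobians of f_i w.r.t. state and control input
    Sigma_u : control-input error covariance
    f       : motion model, X -> X (input already applied)
    """
    X_new = f(X)
    Q = Fu @ Sigma_u @ Fu.T          # process noise from the input
    P_new = Fx @ P @ Fx.T + Q        # full covariance propagation
    P_ind_new = Fx @ P_ind @ Fx.T + Q  # independent part, same noise
    return X_new, P_new, P_ind_new
```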

Single Node Measurement Update.
The measurement update process of a single node only involves the aided navigation system of that node, so there is no need to estimate the correlation; that is, formulas (28) and (29) can be omitted. Similar to the single-node measurement update process in centralized collaborative navigation, the single-node measurement update process in decentralized collaborative navigation can be expressed as follows. Step 1: calculate the innovation and the filter gain. Step 2: update the state estimation and the corresponding covariance.

Collaborative Measurement Update among Nodes.
In the framework of decentralized collaborative navigation, the state estimation results of a single node's aided navigation system and the state estimation results based on information sharing among nodes are integrated in the process of internode collaborative measurement updating, and the corrected state information is derived. In the decentralized collaborative navigation framework, any node can clearly estimate the states of other nodes. In order to save communication cost and reduce the computation load of a single mobile robot platform, this paper stipulates that information exchange among the mobile robots only takes place between two adjacent mobile robot nodes.
Assuming that mobile robot i performs a relative observation of mobile robot j at time k and shares its own state and covariance information with mobile robot j, the state of mobile robot j can be clearly expressed using the received state of mobile robot i and the relative measurement information between the two nodes, where (x^{co_j}_k, y^{co_j}_k) is the partial state estimate of mobile robot j obtained through the information sharing between mobile robot i and mobile robot j at time k, X^i_k is the state vector shared by mobile robot i with mobile robot j at time k, and u_rel = (x^{j,i}_k, y^{j,i}_k) is the relative measurement information of the two nodes in the coordinate system of mobile robot i.
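The shared-state solution for robot j's position is the usual rigid-body frame transform, sketched here under the assumption that the paper's equations match it: the relative measurement, expressed in robot i's frame, is rotated by robot i's heading and added to robot i's world position.

```python
import math

def position_from_relative(xi, yi, thi, x_rel, y_rel):
    """Infer robot j's world position from robot i's shared pose
    (xi, yi, thi) and the relative measurement (x_rel, y_rel)
    expressed in robot i's coordinate system.
    """
    xj = xi + x_rel * math.cos(thi) - y_rel * math.sin(thi)
    yj = yi + x_rel * math.sin(thi) + y_rel * math.cos(thi)
    return xj, yj
```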
If there is a direct relative observation between the two nodes, the relative measurement information can be obtained directly by the sensor carrying out the relative observation. If the relative observation between the two nodes relies on indirect relative observation of the surrounding landmark information, then the relative measurement information needs to be solved for; the concrete solution can be obtained by combining equation (25) and then converting into the coordinate system of mobile robot i.
Finally, based on the principle of covariance intersection filter, the updating process of collaborative measurement among nodes in the framework of decentralized collaborative navigation can be obtained.

Simulated Experimental Environment.
In this section, the mobile robot network involved in collaborative navigation has three nodes. Assume that the area of the 2D environment is 25 m × 25 m when the mobile robot group works together; each node in the environment is assigned an initial position, and each node can follow a random trajectory in this area. Assume that all nodes follow the same simulation trajectory and only the initial positions differ. The maximum speed of the mobile robot on a straight line is 0.3125 m/s and the angular velocity at a bend is 0.1°/s. It is assumed that, in the environment around the simulated rectangular trajectory, lidar scanning can extract 88 landmarks (environmental feature points) for the SLAM-aided navigation system (see Figure 10).
During this simulation, the mobile robots as carriers can carry different types of sensors, including odometers, UWB, and lidar. Suitable sensors are selected according to the requirements of positioning accuracy: Time Domain P410 UWB sensors are used to measure the relative distance, and the lidar is selected from the LMS291 series of 2D lidars produced by a German company. Based on the relevant parameters of these sensors, shown in Table 1, a simulation model for mobile robots carrying the different types of sensors is built using MATLAB.

Relative Measurement Aided Odometer Collaborative Navigation.

In the experiment, all three mobile robots are equipped with an odometer capable of motion monitoring and with UWB capable of measuring relative distance or lidar capable of measuring relative position.
From Figure 11, it can be seen that the collaborative navigation system that realizes relative information sharing has significant advantages in positioning accuracy over the case of not sharing any information. Besides, the improvement of the group navigation performance of the mobile robots is affected by the type of shared relative information. When relative position information is shared, the growth of the error can be effectively limited; relatively speaking, when relative distance information is shared, the position error still grows slowly, that is, only the growth rate of the error is reduced (see Figure 11). The analysis shows that relative distance information is a weak constraint, so sharing this information cannot fully realize the navigation and localization of the mobile robots. In contrast, sharing relative position information contributes directly to the mobile robot navigation solution, and the accuracy is significantly improved, at times by more than 60%.
is difference is more obvious in the angle error diagram (see Figure 11).
In the description of the measurement model based on relative position, two observation methods were mentioned: direct relative measurement and indirect relative measurement. In this experimental scene, scenario I has the three mobile robots observe the relative position information directly through lidar, while in scenario II the three mobile robots extract the surrounding landmark information through lidar and compute the relative position information from that solution. In both scenarios, the centralized collaborative navigation algorithm is used to solve the navigation problem. The two relative position measurement methods are compared through these simulation scenarios, and the comparison results are shown in Figure 12.
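The indirect method can be sketched as follows, assuming for simplicity that the robots' frames are already aligned to a common heading (a real system must also estimate and align headings): if robots i and j both observe the same landmark, the relative position of j with respect to i follows from differencing the two landmark observations:

```python
def indirect_relative_position(landmark_from_i, landmark_from_j):
    """Relative position of robot j w.r.t. robot i from a common landmark.

    landmark_from_i: landmark position measured in robot i's frame
    landmark_from_j: the same landmark measured in robot j's frame
    (frames assumed axis-aligned; headings must be aligned in practice)
    """
    return (landmark_from_i[0] - landmark_from_j[0],
            landmark_from_i[1] - landmark_from_j[1])

# Landmark at (10, 10); robot i at (2, 3), robot j at (6, 1).
obs_i = (8.0, 7.0)   # landmark seen from i
obs_j = (4.0, 9.0)   # landmark seen from j
print(indirect_relative_position(obs_i, obs_j))  # (4.0, -2.0) = p_j - p_i
```

Note that each lidar only needs to see nearby landmarks, not the other robots themselves, which is the source of the cost advantage discussed below.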
From Figure 12, it is clear that the collaborative navigation and positioning accuracy obtained with the direct relative position measurement method is better than that of the indirect method. However, computational cost cannot be ignored alongside navigation performance. The direct method requires the measurement range of the lidar to cover the activity range of the whole mobile robot group, while the indirect method only requires the lidar to cover the surrounding landmarks, which greatly reduces the cost. Considering that the positioning accuracy of the two relative position measurement methods differs little, the indirect method is clearly more suitable for practical application (see Figure 12). The difference between the decentralized and centralized collaborative navigation frameworks is that the correlation among the node states is calculated accurately in the centralized framework, whereas this correlation cannot be used in the decentralized framework. To better reflect the impact of this correlation, the navigation errors of the two collaborative navigation algorithms in the odometer collaborative navigation system are shown in Figure 13. To compare the two algorithms, 20 experiments are carried out in this paper, and the root mean square error (RMS) of each collaborative navigation algorithm is calculated as

RMS = sqrt((1/n) * Σ_{i=1}^{n} (x_i − x̂_i)²),

where n is the total number of samples, x_i is the actual value, and x̂_i is the estimated value. The RMS parameters for the odometer collaborative navigation are shown in Table 2.
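A minimal sketch of this RMS computation (the sample values are illustrative, not the paper's experimental data):

```python
import math

def rms_error(actual, estimated):
    """RMS error over n samples: sqrt((1/n) * sum((x_i - xhat_i)^2))."""
    n = len(actual)
    return math.sqrt(sum((x - xh) ** 2 for x, xh in zip(actual, estimated)) / n)

actual = [1.0, 2.0, 3.0, 4.0]
estimated = [1.1, 1.9, 3.2, 3.8]
print(rms_error(actual, estimated))  # ≈ 0.158
```

In the experiments, this statistic is accumulated over the 20 runs for each axis of the navigation error.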
As can be seen from Figure 13 and Table 2, the error of the centralized collaborative navigation algorithm is smaller than that of the decentralized collaborative navigation algorithm.
This is to be expected, because the correlation among node states can be calculated accurately in the centralized collaborative navigation algorithm, whereas it can only be estimated in the decentralized algorithm. However, the improved navigation accuracy comes at the expense of high computing power and high-quality data communication. Therefore, although the centralized collaborative navigation framework performs better than the decentralized one, it is not applicable in some practical scenarios (see Figure 13).
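A standard way for a decentralized node to fuse a neighbor's estimate when the cross-correlation between the two estimates is unknown is covariance intersection, which is one common realization of the covariance-cross filtering idea used here. The sketch below uses a fixed weight w = 0.5 and illustrative 2D values; in practice w is chosen to minimize, e.g., the trace of the fused covariance:

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, w=0.5):
    """Fuse two estimates with unknown cross-correlation.

    P^{-1} = w * Pa^{-1} + (1 - w) * Pb^{-1}
    x      = P (w * Pa^{-1} xa + (1 - w) * Pb^{-1} xb)
    """
    Pa_inv = np.linalg.inv(Pa)
    Pb_inv = np.linalg.inv(Pb)
    P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
    x = P @ (w * Pa_inv @ xa + (1.0 - w) * Pb_inv @ xb)
    return x, P

xa, Pa = np.array([1.0, 2.0]), np.diag([0.5, 0.5])
xb, Pb = np.array([1.2, 1.8]), np.diag([0.5, 0.5])
x, P = covariance_intersection(xa, Pa, xb, Pb)
print(x)  # midway between the two estimates when covariances are equal
```

The fused covariance is guaranteed consistent for any unknown correlation, which is exactly the property the decentralized framework needs, at the price of some conservatism relative to the centralized solution.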

Odometer/Vision SLAM Collaborative Navigation.
In the odometer/vision collaborative navigation model, scenario I is designed so that all mobile robots are equipped with an odometer for motion monitoring and one of the mobile robots is equipped with a working SLAM-aided navigation system.
Firstly, the mobile robot with the SLAM-aided navigation system is studied while running only its own integrated navigation algorithm, without sharing relative position information. Using the centralized collaborative navigation algorithm, the navigation error of the node with the SLAM-aided navigation system is shown in Figure 14, which fully verifies the correctness of the centralized collaborative navigation algorithm based on the odometer/vision collaborative navigation model. The SLAM-aided navigation system is based on relative observations, so both the node's own position estimate and the landmark position estimates accumulate error. Combined with the SLAM data association algorithm, however, the centralized collaborative navigation algorithm brings the landmark position estimates closer to their true values while improving the positioning accuracy of the mobile robot; the data association then becomes more reliable, which further corrects the robot's own state estimate. The improvement in navigation accuracy is therefore very pronounced (see Figure 14). Then, the mobile robots without the SLAM-aided navigation system are studied. To fully reflect the influence of SLAM-aided navigation information on the navigation performance of the other nodes, scenario II is designed so that all mobile robots are equipped with an odometer for motion monitoring and two of them are equipped with working SLAM-aided navigation systems. The navigation error of the other nodes, which have no SLAM-aided navigation system, is shown in Figure 15.
As shown in Figure 15, the mobile robot with the SLAM-aided navigation system performs loop-closure detection at about 320 seconds, associating its data with the local map created at the initial location and thus eliminating most of the accumulated error. Through information sharing during collaborative navigation, this unique advantage of the SLAM-aided navigation system is transmitted to the other nodes in the group, so that they too eliminate most of their accumulated errors around that time; this is an important advantage of the collaborative navigation system (see Figure 15).
To evaluate the effect of the NN algorithm, the JCBB algorithm, and the optimized data association algorithm on the navigation performance of nodes without a SLAM-aided navigation system, an experimental scene is designed in which all mobile robots are equipped with an odometer for motion monitoring, one of the mobile robots is equipped with a working SLAM-aided navigation system, and the CL algorithm is run. The navigation error of the nodes without a SLAM-aided navigation system is shown in Figure 16, and the performance of the centralized collaborative navigation algorithm under the three SLAM data association algorithms is shown in Table 3.
From Figure 16 and Table 3, it can be seen that the navigation performance of nodes without a SLAM-aided navigation system is affected by the SLAM data association algorithm used by the nodes carrying one. With the NN algorithm, the matching accuracy of the feature information is low, so the navigation accuracy is poor. With the JCBB algorithm, the rate of correct data associations is the highest, but the running time is the longest. With the optimized data association algorithm, the navigation accuracy is slightly reduced, but the running time is shorter and can meet real-time requirements (see Figure 16).
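The NN idea can be sketched as follows (an illustrative isotropic-noise simplification, not the paper's full filter-covariance gating): each observation is matched to the landmark with the smallest Mahalanobis distance inside a chi-square gate. JCBB differs in that it tests the joint compatibility of all pairings at once, which is why it is more accurate but slower:

```python
def nn_associate(observation, landmarks, sigma=0.1, gate=9.21):
    """Nearest-neighbour data association with a chi-square gate.

    Returns the index of the landmark whose Mahalanobis distance
    (isotropic noise with std. dev. sigma) to the observation is smallest
    and inside the gate, or None if nothing falls inside.
    gate = 9.21 is the 99% chi-square threshold for 2 degrees of freedom.
    """
    best_idx, best_d2 = None, gate
    for idx, lm in enumerate(landmarks):
        d2 = ((observation[0] - lm[0]) ** 2
              + (observation[1] - lm[1]) ** 2) / sigma ** 2
        if d2 < best_d2:
            best_idx, best_d2 = idx, d2
    return best_idx

landmarks = [(1.0, 1.0), (5.0, 5.0), (9.0, 1.0)]
print(nn_associate((5.05, 4.95), landmarks))  # 1: matches the second landmark
print(nn_associate((20.0, 20.0), landmarks))  # None: outside the gate
```

Because each observation is gated independently, NN can accept individually plausible but jointly inconsistent matches, which explains its lower matching accuracy in Table 3.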
In this subsection, to compare the performance of the odometer/vision collaborative navigation system under the centralized and decentralized collaborative navigation algorithms, the CL and DCL algorithms are run separately under experimental scenario I, and the navigation errors of the two algorithms are compared in Figure 17. Under experimental scenario II, the CL and DCL algorithms are again run separately, and the navigation errors are compared in Figure 18.
After 20 experiments, the RMS parameters of collaborative navigation with single-node SLAM information are shown in Table 4, and the RMS parameters of collaborative navigation with fused multinode SLAM information are shown in Table 5.
As can be seen from Figures 17 and 18 together with Tables 4 and 5, in the odometer/vision collaborative navigation system the error of the centralized collaborative navigation algorithm is smaller than that of the decentralized algorithm, but after the landmark information collected by a single node or by multiple nodes is fused, the gap between the two algorithms is small. In other words, the decentralized collaborative navigation algorithm based on the odometer/vision collaborative navigation model can estimate the correlation of internode information well (see Figures 17 and 18).
Considering the high demands that the centralized collaborative navigation algorithm places on computing power and communication, the application scenarios of the two algorithms can be characterized from the above experiments: the centralized collaborative navigation algorithm is suitable when there are few nodes and no additional aided navigation systems, while the decentralized collaborative navigation algorithm is suitable when there are many nodes, a large amount of shared information, and some nodes equipped with additional aided navigation systems, especially SLAM-aided navigation systems.

Conclusion
In order to improve the performance of the collaborative navigation system, multirobot collaborative navigation algorithms based on odometer/vision multisource information fusion are studied. On the basis of the multisource information fusion collaborative navigation system model, the centralized and decentralized odometer/vision collaborative navigation frameworks and the vision-based SLAM algorithm are given, and the centralized and decentralized odometer/vision collaborative navigation algorithms are derived, respectively. The effectiveness of the proposed algorithms is verified by simulation experiments, which gives them theoretical and practical value for high-performance collaborative navigation applications.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.