Probabilistic Membrane Computing-Based SLAM for Patrol UAVs in Coal Mines

State Key Laboratory of Mining Response and Disaster Prevention and Control in Deep Coal Mines, Anhui University of Science and Technology, Huainan City, Anhui Province 232001, China
School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan City, Anhui Province 232001, China
School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan City, Anhui Province 232001, China
Department of Information Engineering, PLA Army Academy of Artillery and Air Defense, Hefei City, Anhui Province 230031, China
School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian City, Liaoning Province 116024, China


Introduction
UAV application scenarios have high requirements for navigation performance, especially in complex environments, and a single navigation mode has difficulty meeting these requirements. At present, inertial sensors, LiDAR, vision sensors, and other sensors are mainly used for multisensor data fusion together with an effective SLAM control algorithm. Navigation with a single sensor is gradually being replaced by integrated navigation as environments grow more complex. Reference [1] improves the accuracy of data association by exploiting the complementary advantages of point and line features; however, this method is tested only in a simulation environment and lacks training on actual scenes. In reference [2], when the laser cannot work normally, a vision-based path planning method for UAVs is implemented using a fully convolutional neural network; this approach achieves good results, but its generalization ability is limited for high-dimensional data or large scenes. In reference [3], UAV SLAM is realized by neural cell population coding, but the simulation can be completed only under specific constraints. In reference [4], a neural network is used to predict the error before and after particle filtering, which reduces the positioning error caused by unreliable laser data. In reference [5], based on adaptive filtering, the state noise variance matrix and the measurement noise variance matrix of the Kalman filter are modified in real time by means of the residual mean and variance, thereby improving the adaptability and robustness of the model. Reference [6] proposes a control strategy to avoid collisions; the detection model uses distance and image width information to optimize the obstacle-free space. In reference [7], based on an inertial navigation method with scene recognition, a drift error model between nodes is established, achieving the goal of correcting the drift error.
In reference [8], an integrated navigation control system based on biomimetic polarized light navigation sensors, metal-insulator-metal (MIM) sensors, and GPS is designed. The system error does not accumulate with time and meets the accuracy and reliability requirements of autonomous navigation. In reference [9], a visual pose estimation method for underground UAVs based on a deep neural network is proposed. This method can significantly improve the positioning accuracy in roadways in complex environments, but it demands large computing resources and relatively long computing times. In reference [10], a rotor UAV system that can autonomously sense an unknown outdoor environment and automatically plan a path in real time is designed and implemented; the system basically realizes real-time autonomous perception and path planning in unknown outdoor scenes. In reference [11], a monocular semidirect visual odometry/integrated navigation system (SVO/INS) algorithm is proposed to estimate the position, attitude, and velocity of a UAV. Whether vision and inertial measurement units (IMUs) are used singly or in combination, and whether they are paired with traditional or intelligent algorithms, autonomous UAV navigation can be realized only under certain conditions. Because the accuracy of pose acquisition depends mostly on the extraction and matching of environmental features, carrying out effective SLAM that exploits the characteristics of the sensors in a complex coal mine environment remains a challenging task.
Membrane computing (P systems) is derived from natural computing and was developed by Păun in 1998 [12]. As a framework for computational-model research inspired by the mechanisms of cellular organization, membrane computing has produced rich research achievements in both theoretical exploration and application studies and has gradually extended its advantages to the field of optimal control [13][14][15]. Based on the membrane computing model, effective membrane algorithms have been designed to solve optimization and control problems in various application fields. Optimization algorithms that integrate a membrane structure with computing methods have successively solved SAT [16][17][18] and other satisfiability problems [19][20][21][22], HPP [23, 24], and function optimization problems [25, 26]. In recent years, the authors' team has been engaged in attitude control and path planning for UAVs in coal mines [27]. At the same time, membrane computing is gradually achieving breakthroughs in the field of optimal control and has become a hot research topic.
In this paper, a navigation algorithm based on probabilistic membrane computing-based SLAM (PMC-SLAM) is proposed for UAVs equipped with IMUs, LiDAR systems, and depth cameras. The main contributions are as follows: (1) based on the sensor models, the corresponding mathematical models are given; (2) a probabilistic membrane system-based calculation model is further constructed; (3) an algorithm is designed to realize the map construction process within the membrane system.

2. Sensor Model and Information Fusion

2.1. Inertial Measurement Unit. An IMU is composed of accelerometers and gyroscopes: gyroscopes measure angular velocity (by the principle of inertia), and accelerometers measure linear acceleration (by the principle of force balance). The accelerometer is accurate over long periods but suffers short-term errors due to signal noise, whereas the gyroscope is accurate over short periods but accumulates long-term errors due to drift; the two therefore correct each other to maintain an accurate heading. An IMU measures an object's uniaxial, biaxial, or triaxial attitude angle (or angular rate) and acceleration. For inertial attitude applications, it consists of a combination of a three-axis gyroscope and a three-axis accelerometer. The depth camera used in this paper, the Intel RealSense D435i, integrates an IMU, the Bosch BMI055, which can synchronize IMU data and depth data in real time and is suitable for UAV position awareness. The BMI055 is a 6-axis inertial sensor consisting of a digital 3-axis 12-bit accelerometer and a digital 3-axis 16-bit gyroscope with a range of ±2000°/s, for a total of 6 degrees of freedom. It measures the angular velocity and acceleration along three perpendicular axes with very low noise, thus detecting the UAV posture. IMU measurements are expressed in the coordinate system of the sensor itself, not in the UAV body coordinate system.
According to the UAV mechanical model constructed by the authors in [27], the IMU is modeled as follows: the body pose is assumed to be {c_vi, r_vi}, the angular velocity is ω_vi, and the angular acceleration is ω̇_vi. Furthermore, the position relationship between the sensor and the body coordinate system is assumed to be determined, where r_sv represents the rotation and c_sv the translation. The relationship between the angular velocity ω and the velocity of the body in the sensor coordinate system is given in Formula (1) [27, 28].
According to the principle of acceleration measurement, the measured value of acceleration is expressed in Formula (2), where r̈ is the acceleration of the origin s of the sensor coordinate system and g is the acceleration of gravity. Since the value in Formula (2) is not expressed in the body coordinate system, the coordinates of the sensor and the body must be converted. Differentiating Formula (3) twice by means of Poisson's formula yields Formula (4), whose right side consists of the initial parameters. Substituting Formula (4) into Formula (2) yields Formula (5). Formulas (1) and (5) are further expressed in matrix form in Formula (6).

2.2. LiDAR. LiDAR uses laser pulses to measure the position of targets relative to the sensor; the azimuth and elevation angles are set by controlling the reflection angle of the laser beam, and the distance is measured by the time of flight. Among LiDAR ranging methods, both direct and oblique laser triangulation can achieve high-precision, noncontact measurement of the target, but the resolution of direct laser triangulation is lower than that of oblique laser triangulation. The RPLIDAR LiDAR from Silan Technology used in this paper adopts the high-resolution oblique laser triangulation ranging method. Its RPVision 3.0 laser ranging engine can perform up to 16,000 ranging actions per second with a range radius of 25 meters and an angular resolution of up to 0.225°. Driven by a motor, the ranging core rotates clockwise, realizing 360° omnidirectional scanning of the surrounding environment. In each ranging process, the RPLIDAR emits a modulated infrared laser signal; after the signal irradiates a reflective target object, the return is received by the vision acquisition system and solved in real time by the embedded RPLIDAR DSP (digital signal processor).
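The conversion between the sensor and body frames described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it assumes R_sv rotates sensor-frame vectors into the body frame, c_sv is the lever arm from the body origin to the sensor origin, and gravity compensation is omitted; the tangential and centripetal terms correspond to the result of differentiating the rigid-body relation twice.

```python
import numpy as np

def skew(v):
    # cross-product matrix: skew(a) @ b == np.cross(a, b)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sensor_to_body(omega_s, alpha_s, a_s, R_sv, c_sv):
    """Map IMU readings from the sensor frame to the body frame (sketch).

    omega_s : angular velocity measured by the gyroscope
    alpha_s : angular acceleration (numerically differentiated omega)
    a_s     : acceleration measured by the accelerometer (gravity removed)
    R_sv    : rotation taking sensor-frame vectors into the body frame
    c_sv    : lever arm from body origin to sensor origin, body frame
    """
    omega_b = R_sv @ omega_s            # angular rate only needs rotation
    alpha_b = R_sv @ alpha_s
    # lever-arm compensation: remove tangential and centripetal terms
    a_b = (R_sv @ a_s
           - skew(alpha_b) @ c_sv
           - skew(omega_b) @ (skew(omega_b) @ c_sv))
    return omega_b, alpha_b, a_b
```

With a zero lever arm and an identity rotation the readings pass through unchanged; a pure spin about z with a lever arm along x produces the expected centripetal term.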
The distance between the irradiated target object and the LiDAR, together with the current angle information, is output through the communication interface; the schematic diagram is shown in Figure 1 [29]. In Figure 1, the coordinates of point P are expressed as in Formula (7) [30], which can be rewritten in matrix form as Formula (8). In Formula (8), r is the distance, ε is the pitch angle, α is the direction angle, and c is the rotation matrix along the coordinate axis. To further express the relationship between Formulas (7) and (8), the rotation is inverted in Formula (9).
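The range-and-angle measurement above can be converted to Cartesian coordinates. The sketch below uses a common polar-to-Cartesian convention (x forward, z up); the exact axis convention of Formula (7) may differ. For a 2D spinning unit such as the RPLIDAR, the pitch angle ε is simply zero.

```python
import numpy as np

def lidar_point(r, eps, alpha):
    """Convert one LiDAR range reading to Cartesian coordinates (sketch).

    r     : measured distance
    eps   : pitch (elevation) angle, radians
    alpha : direction (azimuth) angle, radians
    """
    x = r * np.cos(eps) * np.cos(alpha)
    y = r * np.cos(eps) * np.sin(alpha)
    z = r * np.sin(eps)
    return np.array([x, y, z])
```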
2.3. Depth Camera. Compared with a traditional camera, a depth camera can measure the depths of pixels. The Intel RealSense D435i depth camera is taken as an example and is shown in Figure 2.
In Figure 2, the coordinates of point P (x, y, z) and the models of the left and right cameras are defined in Formula (10) [31], where K is the camera parameter matrix, f_u is the horizontal pixel focal length, and f_v is the vertical pixel focal length. Since the parameter matrices of the two cameras are the same, when the left camera obtains an observation, the corresponding (vertical) observation of the right camera can be obtained along the solid line in Figure 2. Therefore, Formula (10) can be further developed into the model of the whole camera in Formula (11).

Journal of Sensors
In Formula (11), there is a pose relationship (c_{l,r} = 1) between the left and right cameras, and v_l = v_r can be obtained.
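The rectified stereo model above (identical intrinsics, equal vertical coordinates v_l = v_r) can be sketched as follows. The symbols f_u, f_v and the baseline b follow the text; the principal point (c_u, c_v) and all numeric values are illustrative assumptions. Depth is recovered from the horizontal disparity u_l − u_r.

```python
def stereo_project(P, fu, fv, cu, cv, b):
    """Project a 3D point into a rectified stereo pair with identical
    intrinsics (sketch): left camera at the origin, right camera shifted
    by the baseline b along x."""
    x, y, z = P
    ul = fu * x / z + cu
    ur = fu * (x - b) / z + cu   # right camera sees the point shifted by b
    v = fv * y / z + cv          # rectified: v_l == v_r
    return (ul, v), (ur, v)

def depth_from_disparity(ul, ur, fu, b):
    """Invert the stereo model: z = f_u * b / (u_l - u_r)."""
    return fu * b / (ul - ur)
```

For example, a point at depth 4 m with f_u = 100 px and b = 0.5 m produces a disparity of 12.5 px, from which the depth is recovered exactly.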

2.4. Sensor Information Fusion.
The UAV used in this paper is equipped with an IMU, a LiDAR (an RPLIDAR A2), and a depth camera (an Intel RealSense D435i). During mapping, multisensor information must be fused. Multisensor information fusion combines the multilevel, multispace information of the various sensors in a complementary and optimal way and finally produces a consistent interpretation of the observed environment. In this process, multisource data should be used rationally; the ultimate goal of information fusion is to extract more useful information through the multilevel, multiaspect combination of the separate observations obtained by each sensor. This not only takes advantage of the cooperative operation of multiple sensors but also comprehensively processes data from other information sources, improving the intelligence of the whole sensor system. The hybrid fusion framework adopted here is highly adaptable and stable, combining the advantages of centralized and distributed fusion. Its structure is more complex than those of the other two fusion methods, and it increases communication and computation costs, but the airborne computer in this paper can easily bear the computational load. Gaussian convolution is used for image projection. Convolution is a filtering operation: the new value of the center point is a weighted sum of the center point and its neighborhood. After filtering, the pixel value of the center point is replaced by the weighted average of the surrounding pixel values, blurring boundaries (low-pass filtering). The Gaussian kernel is a bell-shaped (normally distributed) function: the closer a coordinate is to the center point, the larger its value, and vice versa.
The closer a point is to the center, the larger its weight; the farther away, the smaller. The values computed by the Gaussian kernel at the filter's point coordinates are the filter's weights for the corresponding image points, and the final result is obtained by convolving the filter with the original image. If obstacles are not consistently identified by the interpretations of the different sensors, the information may differ seriously or even be erroneous, and map generation cannot rely on such decoupled data.
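The Gaussian filtering described above can be sketched directly. This is a minimal illustration (kernel size, σ, and the valid-region convolution are choices made here, not taken from the paper): the kernel weights peak at the center, sum to one, and a constant image is left unchanged by the filter.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """size x size normalized Gaussian kernel (weights sum to 1)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def filter2d(img, kernel):
    """Naive valid-region filtering; the Gaussian kernel is symmetric,
    so correlation and convolution coincide."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```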
In this paper, simultaneous interpretation maps are built under different sensors because the data of the different sensors are not interdependent. Suppose that the map generated by the three sensors is represented as map_3, which can be decomposed by De Morgan's law as in Formula (12).

3. Extended SLAM Based on Probabilistic Sparsity

3.1. Probabilistic SLAM. When the UAV does not know its position, it cannot obtain a map of its environment, and all information is concentrated in the measurement data and control data. From a probabilistic perspective, the SLAM problem is divided into full SLAM and online SLAM. Online SLAM includes only the variable-estimation problem at time t, and its algorithm is incremental; the corresponding control quantities and measurements are discarded after processing. In full SLAM, in addition to the pose of the UAV, the path and the map posterior must be calculated. The posterior at time t can be expressed as

p(x_{1:t}, m | y_{1:t}, u_{1:t}).   (13)

In online SLAM, only the real-time pose and map posterior are considered, and the temporal probability is expressed as

p(x_t, m | y_{1:t}, u_{1:t}).   (14)

In Formulas (13) and (14), x_t represents the position and attitude of the UAV at time t, m is the map, and y_{1:t}, u_{1:t} denote the measurement and control data. The respective models of these two types of SLAM are shown in Figure 3.
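The De Morgan decomposition of map_3 mentioned above can be illustrated per grid cell: under the text's independence assumption, a cell is free in the fused map only if it is free in every sensor map, so the fused occupancy is 1 minus the product of the per-sensor free probabilities. The grid values below are made up for illustration.

```python
import numpy as np

def fuse_maps(*maps):
    """Fuse independent occupancy grids: a cell is free in the fused map
    only if it is free in every input map (De Morgan on the complements)."""
    p_free = np.ones_like(maps[0], dtype=float)
    for m in maps:
        p_free *= (1.0 - m)   # AND of the per-sensor 'free' events
    return 1.0 - p_free       # complement back to occupancy

# hypothetical 2x2 occupancy grids from the three sensors
lidar_map = np.array([[0.9, 0.1], [0.5, 0.0]])
imu_aided_map = np.array([[0.8, 0.0], [0.5, 0.0]])
camera_map = np.array([[0.7, 0.2], [0.5, 1.0]])
fused = fuse_maps(lidar_map, imu_aided_map, camera_map)
```

A cell any sensor is certain is occupied stays occupied (probability 1), while agreement on low occupancy keeps the fused value low.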
The difference between online SLAM and full SLAM lies mainly in the different branches of the algorithm, especially in practical applications. The online SLAM problem is the integration of the past poses of the full SLAM problem; online SLAM can be realized by integrating out the past states of full SLAM, carried out successively over the continuous states:

p(x_t, m | y_{1:t}, u_{1:t}) = ∫∫⋯∫ p(x_{1:t}, m | y_{1:t}, u_{1:t}) dx_1 dx_2 ⋯ dx_{t−1}.   (15)

Among continuous and discrete problems, the continuous problem includes the UAV pose and the locations in its map, where the studied objects are represented by beacons during feature representation. When an object is detected, the algorithm calculates the correspondence between the detected object and the UAV, which is a discrete feature taking the state "0" or "1". Therefore, a consistency variable must be defined. Combining Formulas (13) and (14), the full and online SLAM models with consistency variables are given in Formula (16), where f_t is the vector corresponding to the consistency variable.
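The integration of past poses in Formula (15) can be illustrated with a toy discrete analogue: over a finite state space the integral becomes a sum, and the online posterior over the current pose is the full-SLAM joint summed over all earlier poses. The joint distribution below is randomly generated for illustration only.

```python
import numpy as np

# made-up full-SLAM posterior over a 3-step path, each pose taking
# one of 4 discrete values (a stand-in for p(x_{1:3} | y, u))
rng = np.random.default_rng(0)
joint = rng.random((4, 4, 4))
joint /= joint.sum()               # normalize to a proper distribution

# discrete analogue of Formula (15): marginalize x1 and x2, keep x3
online = joint.sum(axis=(0, 1))
```

The result is itself a distribution over the current pose, mirroring how online SLAM discards past poses after they are integrated out.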

3.2. Sparse Extended Filtering. The Kalman filter was proven to be the optimal estimator for linear problems, but this is a major limitation: it handles only linear process and measurement models and cannot achieve optimal estimates in nonlinear scenarios. To set up a linear environment, a constant-velocity process model would have to be assumed, but in practical applications this is not the case: both the process model and the measurement model are nonlinear. Compared with the extended Kalman filter, and for the purposes of online operation and high computational efficiency, the sparse extended filter represents information efficiently, maintains the UAV pose and map posterior, and keeps the information matrix sparse through its nonzero elements. The calculation process of the sparse extended filter includes measurement updates, motion updates, sparsification, and estimation.
The information matrix Ω and the information vector ζ are updated to complete the processing of the control information, following the Kalman filter [32], as shown in Formulas (17) and (18).
In Formulas (17) and (18), Σ is the covariance matrix, F_x is the UAV state vector matrix, G_t is the Jacobian matrix at time t, and μ̂_t represents the estimated mean at time t. The expressions for G_t, F_x, and δ are given in Formulas (19)-(21).
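The relationship between the information form used by the sparse extended filter and the usual mean/covariance form is the standard canonical-form identity Ω = Σ⁻¹ and ζ = Ω μ. A short sketch of the conversion, with made-up values:

```python
import numpy as np

# moment form: mean and covariance of a Gaussian state estimate
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# canonical (information) form maintained by the sparse extended filter
Omega = np.linalg.inv(Sigma)   # information matrix
zeta = Omega @ mu              # information vector

# recovering the estimate from the information form
mu_rec = np.linalg.solve(Omega, zeta)
Sigma_rec = np.linalg.inv(Omega)
```

Weak correlations become (near-)zero entries of Ω, which is why keeping Ω sparse pays off computationally.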

The following can be deduced from Formulas (18) and (19).
In Formula (22), the dimension of Ω is arbitrary, and the update is implemented in finite time. Assuming that Ω is sparse enhances the update efficiency, and Formula (23) can be defined. According to Formula (22), Formula (24) can be concluded, and from the inverse of the matrix, Formula (25) is further obtained. It is assumed that Φ_t is calculated from Ω within a limited time frame; then the calculation is feasible under the finite-time condition. Using the nonzero matrix elements of the UAV poses and map features, the calculation does not depend on the size of Ω. Considering the inverse of G_t, G_t^{-1} can be calculated as in Formula (26), in which the corresponding map feature elements are nonzero.
The measurement update considers the filter update during the flight of the UAV and is realized by an extended Kalman filter, as shown in Formula (27), where Q_t is the noise covariance matrix.

3.3. Sparseness. The sparse extended filter requires a sparse information matrix Ω. Through the sparse representation, the posterior distribution is kept in a sparse state; on this basis, the link between the UAV pose and distant map features is eliminated, and the number of active features is further limited. To realize this idea, two kinds of new connections are considered. First, when a feature is activated, a new connection is introduced between the UAV pose and the corresponding feature. Second, UAV motion introduces new connections between active features; to keep both bounds sparse, the number of active features must be limited. Sparsity is thus obtained by keeping fewer active features.
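Removing a feature's links in the information form rests on marginalizing variables out of a Gaussian via the Schur complement. The sketch below shows that exact operation (the sparse extended filter's actual sparsification rule is an approximation of it, and the indices and values here are illustrative): dropping variables from (Ω, ζ) reproduces exactly the moment-form marginal.

```python
import numpy as np

def marginalize(Omega, zeta, keep, drop):
    """Marginalize the 'drop' indices out of an information-form Gaussian
    via the Schur complement; returns the reduced (Omega, zeta)."""
    Oaa = Omega[np.ix_(keep, keep)]
    Oab = Omega[np.ix_(keep, drop)]
    Obb = Omega[np.ix_(drop, drop)]
    W = Oab @ np.linalg.inv(Obb)
    Omega_red = Oaa - W @ Omega[np.ix_(drop, keep)]
    zeta_red = zeta[keep] - W @ zeta[drop]
    return Omega_red, zeta_red
```

Note that exact marginalization fills in entries between the remaining variables, which is precisely why an approximate sparsification step is needed to keep Ω sparse.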
In the process of sparsification, the feature set is divided into three disjoint subsets:

m = m_1 ∪ m_0 ∪ m_2,  with m_1, m_0, and m_2 pairwise disjoint.   (28)
In Formula (28), m_1 is the set of features that remain active, m_0 is the set of active features about to be deactivated, and m_2 is the set of inactive features. The inactive features remain inactive through the sparsification step; at the same time, the connection between the UAV pose and m_0 is deleted. Sparsity is introduced into the posterior: because m_1 and m_0 contain all the currently active features, the posterior can be characterized as in Formula (29). In Formula (29), if m_0 and m_1 are known, x_t does not depend on the inactive features m_2, so m_2 can take an arbitrary value. Using the standard sparsification protocol, the dependence on m_0 is reduced by taking m_2 = 0, as in Formula (30).

4. Probability Membrane System-Based SLAM

4.1. Probabilistic Membrane System. In a probabilistic membrane system, rules are executed in a probabilistic mode, so the system can process data quickly. Combined with the powerful distributed and parallel characteristics of membrane computing, an independent and collaborative membrane system for data processing is constructed. The modeling principle and process for UAVs in coal mines using a probabilistic membrane system are shown in Figure 4.
To determine the real-time position of the UAV, the membrane controller receives the airborne sensor data (x, y, θ)^T and outputs the position update data (x′, y′, θ′)^T at each cycle during execution, and a probabilistic membrane system of degree 3 is established in Formula (31).
Formula (31) has the following characteristics:

(1) M is the character set (alphabet) of the probabilistic membrane system.
(2) μ denotes the membrane structure (of degree 3) of the UAV's probabilistic membrane system.
(3) w_1 = p(x_t | u_t, x_{t−1}), where w_1 represents the solution probability density.
(4) w_2 = p(x′_t | u′_t, x′_{t−1}), where w_2 represents the probability density after noise interference.
(5) w_3 = p(x̌_t | ǔ_t, x_{t−1}), where w_3 represents the ideal probability density.
(6) R is the rule set, and energy conservation is maintained during the transfer process.
(7) c_r is the probability of the evolution of objects under the rules in R.

4.2. Probabilistic Membrane Calculation Model. According to Section 3.2 of this paper, the state-variable submatrix is extracted by calculating the matrix for all excluded variables that obey the distribution p(x_t, m_0, m_1 | m_2 = 0).
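The probabilistic rule execution of Section 4.1 can be illustrated with a toy evolution step. This is only a conceptual sketch, not the authors' system: the alphabet, the rules, and the probabilities c_r are invented, and one rule fires per object per step according to its probability.

```python
import random

# toy rule set: each entry maps an object to (probability c_r, products);
# the probabilities for one object sum to 1, so exactly one rule fires
RULES = {
    "a": [(0.7, ["b"]),         # a -> b    with probability 0.7
          (0.3, ["a", "a"])],   # a -> a a  with probability 0.3
}

def step(multiset, rng):
    """One maximally parallel, probabilistic evolution step of a membrane."""
    out = []
    for obj in multiset:
        choices = RULES.get(obj)
        if not choices:
            out.append(obj)      # no applicable rule: object is unchanged
            continue
        x, acc = rng.random(), 0.0
        for prob, products in choices:
            acc += prob
            if x < acc:          # sample a rule according to c_r
                out.extend(products)
                break
    return out

rng = random.Random(0)
multiset = ["a"] * 5
for _ in range(3):
    multiset = step(multiset, rng)
```

Every object in a membrane evolves in the same step, which mirrors the distributed, parallel character of membrane computing that the text relies on.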
Combining Formulas (32) and (34) yields Formula (35). Owing to the limited cycle time of the probabilistic membrane algorithm, it does not depend on the scale of the map itself. At the same time, control is added to the calculated estimate. Through the efficient calculation of features, the UAV position and feature vectors are represented while the map is built and the elements in the matrix (vector) are updated. The P-Lingua file framework for the probabilistic membrane calculation model is shown in Figure 5. The state-vector derivations of Formulas (39) and (40) yield Formulas (41) and (42). According to the similarity principle of spatial coordinate transformation, the posterior model of the UAV with respect to time is defined by translation and rotation, the information matrix, and the information vector, as in Formulas (43) and (44). Because R^T = R^{−1} for the rotation matrix R, Formula (45) holds. There is a problem of data equivalence in the process of data fusion; constraints are therefore added to further control and map the feature penalty matrix C, where the larger the value of C, the stronger the constraint. The fusion procedure is given in Algorithm 1.
The implementation process of Algorithm 1 is as follows. The relative pose between the UAV coordinate systems is determined by [d_x, d_y, α]^T, which encodes the local rotation and translation of the information matrix and vector while maintaining the sparsity of the algorithm. Map fusion is realized by constructing a joint posterior map that contains the corresponding features of the different maps. For two instances of the same feature, a connection between the two features is added in the information matrix.
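Before two local maps are fused into the joint posterior, one map's features must be brought into the other's frame using the relative pose [d_x, d_y, α]^T. A minimal sketch of that 2D rigid transform (the feature coordinates below are hypothetical, and this is only the geometric step of the fusion, not Algorithm 1 itself):

```python
import numpy as np

def transform_features(features, dx, dy, alpha):
    """Apply the relative pose [dx, dy, alpha]^T to an (N, 2) array of
    2D map features: rotate by alpha, then translate by (dx, dy)."""
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, -s],
                  [s, c]])
    return features @ R.T + np.array([dx, dy])
```

After this alignment, matching features from the two maps occupy the same coordinates and their connection can be added in the information matrix.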

Experimental Verification
To verify the proposed SLAM algorithm, experiments are performed, and the experimental software and hardware platform and parameters are shown in Table 1.
The coal mine roadway shown in Figure 6 is selected for the experiments. The roadway is 200 meters long, 3.5 meters wide, and 4 meters high. The UAV flies from point A along the green line to point B. The methods based on "LiDAR," "LiDAR+IMU," and "LiDAR+IMU+vision (depth camera)" are used for positioning and mapping experiments, respectively. The mapping effects are shown in Figures 7-10. The attitude error and trajectory error are shown in Figures 11-13.
Comparing the experimental results in Figures 7-9 shows that when only LiDAR is used for mapping, the map is accurate in small areas, but in long-distance, large-scale areas the mapping accuracy decreases and the shape and distance errors are large. With "LiDAR+IMU" mapping, owing to the auxiliary role of the IMU, the mapping accuracy and effect are significantly improved; however, as the UAV flies farther away, the cumulative error gradually increases. As shown in Figure 7, mapping errors occur where the terminal B area, the corridor arc area, and the square area overlap. In the "LiDAR+IMU+vision (depth camera)" method, a depth camera is added to provide image feature information; even in the long-distance scene, the mapping is very stable, and as shown in Figure 9, the terminal B area, arc area, and square area are mapped more accurately. The results in Figures 7-9 show that fusing three kinds of sensors in the same scene is clearly better than using a single sensor or a two-sensor combination. The mapping effect of the "LiDAR+IMU+vision (depth camera)" method developed on the basis of PMC in this paper is shown in Figure 10; compared with Figure 9, the improvement obtained with PMC is obvious, especially in the red boxes marked in the figure. Furthermore, the errors of the PMC-based LiDAR+IMU+vision (depth camera) approach are compared with those of the other methods in Figures 11, 12, and 13. The experimental results show that the SLAM performance of the PMC-based three-sensor fusion is better than that of one sensor or any combination of two sensors in the long, narrow roadway that simulates the complex environment of a coal mine.

Conclusion
Based on the analysis of the mathematical models of the LiDAR, IMU, and depth camera sensors, this paper designs a probabilistic membrane system model and a membrane algorithm. In the same simulated tunnel scenario, theoretical analysis and experiment are combined: the method is compared with single-, two-, and three-sensor fusion mapping methods, and it is verified that PMC improves the mapping performance of underground UAVs. The research results of this paper will provide theoretical support for engineering practice in disaster prevention and control during precise coal mining in the future.
As an extension of the tissue membrane system model, this paper puts forward a calculation model in which each membrane has the same membrane structure and evolution rules and operates probabilistically, expressing in probabilistic form the randomness and uncertainty of the data-processing process, including the collaborative data fusion process. However, the communication between cells may reduce the fusion accuracy of the mapping and the computational efficiency. In future work, we will seek a better membrane computing model and algorithm to optimize the mapping effect, paving the way for the study of patrol UAV path planning in coal mines.

Data Availability
The experimental data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.