Artificial Intelligence Algorithms for Multisensor Information Fusion Based on Deep Learning Algorithms

Artificial intelligence (AI) has been widely used all over the world. AI can be applied not only in machine learning and expert systems but also in knowledge engineering and intelligent information retrieval, and it has achieved remarkable results. This article aims to study the relevant knowledge of deep learning algorithms and multisensor information fusion and how to use them to study AI algorithms. This paper raises the question of whether improved multisensor information fusion will affect the AI algorithm. According to the experimental data in this article, the accuracy of the neural network before the improvement was 4.1% and eventually dropped to 1.3%; the accuracy of the multisensor information fusion algorithm before the improvement started at 3.1% and eventually dropped to 1%. The accuracy of the improved neural network starts at 4.6% and, with continuous improvement, finally rises to 9.8%; likewise, the improved multisensor information fusion algorithm starts at 3.9% and gradually rises to 9.5%. From this set of data, it can be concluded that the improved convolutional neural network (CNN) algorithm and the improved multisensor information fusion algorithm should be used to study AI algorithms.


Introduction
Since AI was proposed, the progress of AI research can be described as rapid. AI is now widely used in many disciplines and has formed an independent system. Although the history of AI spans only a few decades, its achievements and setbacks have attracted wide attention, especially the period when traditional AI algorithms ran into difficulties. Therefore, the study of AI algorithms based on deep learning methods has great research significance.
Human progress is inseparable from the development of information technology, which is making people's lives increasingly intelligent. Intelligent technology brings people a better life and promotes the development of society. Therefore, AI should be used rationally to make life more convenient.
With the development of society, AI has become more and more important. Yanming found that many scholars have recently proposed deep learning algorithms to solve previous AI problems. The purpose of his work was to survey the state of the art of deep learning algorithms in computer vision through landmark topics. He gave an overview of various deep learning methods and their latest developments and summarized the future trends and challenges of neural network design and training [1]. Levine explained a learning-based coordination method for an image-guided grasping robot. To control hand-eye coordination, Levine used only single-lens camera images to predict the probability that a spatial motion would ultimately succeed, which requires the network to observe the spatial relationships in the scene in order to learn hand-eye coordination. He then used the network for real-time grasping and successfully trained it, collecting more than 800,000 grasping attempts over two months. Levine's experimental evaluation showed that this method achieved effective real-time control [2]. Oshea introduced and discussed the application fields of deep learning, which can not only have a great impact on communication but also play a great role in radio transformer networks. Compared with previous traditional schemes, his scheme is clearly more advantageous; he applied deep learning to network transmission and reception and achieved great success [3]. Ravi found that deep learning, whose foundation is the neural network, is applied more and more in daily life, and its popularity grows as it is used in ever more fields [4]. Makridakis found that AI has a great impact on people's lives: it will not only affect individuals but also promote social development.
Through AI, people all over the world communicate more conveniently, and the network brings people closer together; clearly, AI has great advantages [5]. Lemley found that the CNNs of deep learning are widely used, driven by the reliability of big data, which allows large amounts of data and information to be processed quickly and well. Their emergence has alleviated the data problems in computers, the Internet, and so on, making deep learning even more widely applicable. A new generation of intelligent assistants has also appeared, closely tied to learning algorithms and deep learning; AI is likewise inseparable from deep learning [6]. Burton found that AI can help people address moral, ethical, and philosophical problems. Burton hopes that students can develop with AI and understand the role AI will play in society, and he also discussed how to use AI to address ethical and moral issues [7]. Polina found that although AI promotes the development of the medical industry, it also challenges the traditional medical system. New deep learning methods help transform any patient's data into medical data and analyze facial and patient information, even though patients cannot see their own medical data. Polina outlined the advantages of the next generation of AI and proposed solutions that can solve medical problems and motivate relevant personnel to continuously monitor health [8]. The experimental analyses of these scholars show that people cannot stand still and must break through some traditional concepts and algorithms. Therefore, it is still necessary to study AI algorithms based on deep learning algorithms.
The innovations of this article are as follows: (1) introduce the relevant theoretical knowledge of deep learning and multisensor information fusion, and use a neural network algorithm based on deep learning to analyze how deep learning and multisensor information fusion support research on AI algorithms; (2) based on the neural network algorithm and the D-S evidence theory improved with multisensor information fusion weighted similarity, carry out experiments and analysis on AI algorithm research. Through investigation and analysis, it is found that AI algorithms are inseparable from neural network algorithms and multisensor information fusion.

Network Algorithm and Multisensor Information Fusion
So-called multisensor information fusion is the process of using computer technology to automatically analyze and synthesize information and data from multiple sensors or sources under certain criteria in order to complete the required decision-making and estimation. Multisensor information fusion is an important part of target recognition, and the similarity between unknown and known targets is its key technology [9, 10].
The common methods of multisensor information fusion can basically be divided into two categories: AI methods and random methods. A random algorithm is one that uses probability and statistics to make random selections for its next calculation step during execution; random algorithms have certain advantages, as shown in Figure 1. As shown in Figure 1, since the 1970s, ultra-high-speed, ultra-large-scale integrated circuits have been realized through the use of high-precision machine tools. With the improvement of sensor performance, many sensor information systems have appeared, mainly serving a variety of complex application backgrounds [11]. When system information is monitored by multiple sensors, the required information processing speed, information expression forms, and information storage capacity exceed what the human brain's information synthesis ability can bear.

Mobile Information Systems
This section introduces the functional model of multisensor information fusion. The structural model of sensor information fusion generally has three basic forms: centralized, decentralized, and hierarchical; the hierarchical structure is further divided into feedback and nonfeedback structures, as shown in Figure 2.
In the model of Figure 2, the functions of the information fusion system mainly include feature extraction, classification, recognition, and estimation. Among them, feature extraction and classification are the prerequisites for recognition and estimation, and the actual information fusion process is performed through recognition and estimation [12].
2.1. Improved D-S Evidence Theory. Scholars have developed a fusion method known as evidence theory. The D-S algorithm belongs to the field of information fusion. D-S theory generalizes the Bayesian inference method, which relies on conditional probability from probability theory and therefore requires the prior probabilities to be known. D-S evidence theory does not require prior probabilities, can represent "uncertainty" well, and is widely used to deal with uncertain data. Suppose $Bel_1$ and $Bel_2$ are two belief functions on the same recognition framework $\Theta$, $w_1$ and $w_2$ are their corresponding basic probability assignments, and their focal elements are $A_i, B_j\ (i = 1, 2, 3, \cdots, n)$. On the premise that the pieces of evidence are independent of each other, the classical combination rule assigns to each proposition $A$ the mass

$$w(A) = \frac{1}{1-k} \sum_{A_i \cap B_j = A} w_1(A_i)\, w_2(B_j), \qquad k = \sum_{A_i \cap B_j = \varnothing} w_1(A_i)\, w_2(B_j),$$

where $k$ is the conflict coefficient. The improved fusion rule assigns all conflicting mass to the unknown item $\Theta$ and re-judges when new evidence arrives; this solves the problem of evidence synthesis when there is a high degree of conflict and improves the classical D-S evidence theory. Here, $d_{BPA}(w_1, w_2)$, an evidence distance defined over a matrix $D$ with elements $D(A, B)$, represents the comprehensive influence of the focal elements and basic probability assignments of the two pieces of evidence and reflects the difference between them. To comprehensively reflect both the independence of and the difference between the evidence, a new measure of evidence conflict is proposed on the basis of this evidence distance, with $k$ again being the conflict coefficient of classical evidence theory. The fusion rule is further improved by adopting a weighting method for the evidence bodies: each body of evidence is assigned a weight, and the improvement of evidence theory is realized by adjusting the weight values. This method defines a weighted probability distribution function on the recognition framework $\Theta$. Based on the above two types of improvement, which adjust the weight of each evidence body according to the similarity between the evidence bodies, the conflicting evidence is not completely negated; rather, the conflicting part is redistributed in a new way, which also amounts to an amendment of the conflict [13].
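As an illustration, the classical Dempster combination rule above can be sketched in a few lines of Python; the frame and mass values below are made up for demonstration and are not the evidence bodies of Table 1.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule: combine two basic probability
    assignments, given as dicts mapping frozenset focal elements to mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalize by 1 - k, where k is the conflict coefficient.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two illustrative bodies of evidence over the frame {x, y, z}.
m1 = {frozenset('x'): 0.6, frozenset('y'): 0.3, frozenset('xyz'): 0.1}
m2 = {frozenset('x'): 0.5, frozenset('z'): 0.2, frozenset('xyz'): 0.3}
fused = dempster_combine(m1, m2)
```

The improved rules in the text differ in how the conflict mass `conflict` is redistributed (to $\Theta$, or by evidence-distance-based weights) instead of being normalized away.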
In order to demonstrate the effectiveness of the method proposed in this paper, specific simulation experiments are used to compare it with other methods. Suppose that the recognition frame is $\Theta = \{x, y, z\}$, and four sensors are used to observe the three characteristics of the recognition frame, forming four evidence bodies, as shown in Table 1.
As shown in Table 1, when fusing multiple evidence bodies, two pieces of evidence can be fused first and then fused with the others; the result is not affected by the fusion order. This method reduces the conflict between pieces of evidence very well. It can be seen from the size of the impact factor that the newly added evidence plays a major role in the fusion, and its impact factor is always 1. This shows how important the sensors' data collection is: if a sensor fails, this method can accurately locate which sensor has failed [14].

CNN Algorithm Based on Deep Learning
The deep learning method represented by the convolutional neural network realizes object recognition and classification with feature extraction handed over entirely to the machine. The entire feature-extraction process needs no manual design and is completed automatically. Feature extraction is achieved through different convolutions, and scale invariance is achieved through max-pooling sampling. While maintaining the three invariances of traditional feature data, manual design details are minimized in the feature-extraction method. Through supervised learning, the computing power of the computer is brought into play to actively search for appropriate feature data. In fact, research on CNNs has been under way since the last century, but it was limited by the computing power of the time and did not flourish as it does today [15].
The CNN is composed of multiple single-layer basic structures such as input layer, convolution layer, fully connected layer, and output layer [16]. These network layers form a deep network structure in a hierarchically connected manner [17]. Take the classic network model as an example, as shown in Figure 3.
As shown in Figure 3, the CNN propagates a sample image forward layer by layer through these structures to obtain an output corresponding to the input sample; the output value and the actual label of the sample then define the cost function. The back-propagation (BP) algorithm is used to update the parameters of each layer and minimize the cost function. After iterative learning over a large number of samples, a trained network is finally obtained; for a newly input image, the network yields the output features of each layer, which can be used in tasks such as image representation and image recognition [18].

Convolutional Layer.
The input of the current layer is linearly convolved with the convolution kernel, an offset is added, and the result is nonlinearly mapped by the activation function. The feature map of the convolutional layer is

$$E_i = f\left(E_{i-1} * w_i + b_i\right),$$

where $E_{i-1}$ is the feature map of the previous layer, which is also the input of the $i$-th layer; $w_i$ represents the weight vector of the $i$-th layer's convolution kernel; $*$ represents the convolution operation; $b_i$ represents the offset vector; and $f$ represents the nonlinear activation function.
In image processing, the convolution operation of the convolutional layer is a linear convolution of an $N \times N$ convolution kernel with the feature map, where the value of each pixel in the convolution kernel is a weight. The size of the output feature map is

$$\mathrm{Size}_i = \frac{\mathrm{Size}_{i-1} - N}{\mathrm{Stride}} + 1,$$

where $\mathrm{Size}_i$ represents the size of the feature map of the $i$-th layer; $\mathrm{Size}_{i-1}$ represents the size of the previous feature map; $N$ is the size of the sliding window, which is also the size of the convolution kernel; and $\mathrm{Stride}$ is the step size of each movement of the convolution kernel. This process reflects one of the advantages of CNNs: convolving the input with a kernel of a certain size is equivalent to connecting each neuron's weights to only a part of the input.
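As a minimal sketch of the convolution and output-size relationship above, assuming a valid (no-padding) convolution in the correlation form used by most CNN libraries and an illustrative 4 x 4 input:

```python
import numpy as np

def conv_output_size(in_size, kernel_size, stride=1):
    # Size_i = (Size_{i-1} - N) / Stride + 1 for valid (no-padding) convolution
    return (in_size - kernel_size) // stride + 1

def conv2d_valid(image, kernel, stride=1):
    """Naive 2-D valid convolution (correlation form, as in most CNNs)."""
    k = kernel.shape[0]
    out = conv_output_size(image.shape[0], k, stride)
    result = np.zeros((out, out))
    for r in range(out):
        for c in range(out):
            patch = image[r * stride:r * stride + k, c * stride:c * stride + k]
            result[r, c] = np.sum(patch * kernel)
    return result

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0   # simple averaging kernel as an example
feature_map = conv2d_valid(image, kernel)  # shape (2, 2) by the size formula
```

A 4 x 4 input with a 3 x 3 kernel and stride 1 yields a (4 - 3)/1 + 1 = 2 feature map per side, matching the formula in the text.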
When the convolution kernel slides across the entire input image according to a certain step size, the corresponding output is generally the convolution result plus the offset, rather than the convolution result taken directly; this increases the nonlinearity of the network [19]. A commonly used activation function is the Sigmoid function:

$$f(x) = \frac{1}{1 + e^{-x}}.$$

The output of the Tanh function is zero-mean, and in actual use its effect is better than that of the Sigmoid. It is defined as

$$f(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}.$$

2.2.2. Pooling Layer. The pooling layer down-samples the image according to a certain pooling method. Commonly used pooling methods include maximum pooling and average pooling; the specific operation is shown in Figure 4. As shown in Figure 4, the pooling layer is sandwiched between successive convolutional layers to compress the amount of data and parameters and to reduce overfitting. In short, if the input is an image, the main function of the pooling layer is to compress it. In the figure, the maximum pooling method is shown on the left; its pooling result is the value with the largest absolute value in the window, corresponding to the hatched points at the same position before and after pooling. The role of the pooling layer includes two aspects. One is to reduce the dimensionality of the feature map: the feature map becomes smaller as the number of layers increases, while maintaining a certain scale invariance [20]. For an input $a$, the chosen pooling function determines the corresponding output $b$.
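A small sketch of the sigmoid activation and the two pooling methods, assuming non-overlapping 2 x 2 windows and an illustrative input:

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + e^{-x})
    return 1.0 / (1.0 + np.exp(-x))

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling with a size x size window."""
    h, w = x.shape
    windows = x.reshape(h // size, size, w // size, size)
    if mode == "max":
        return windows.max(axis=(1, 3))
    return windows.mean(axis=(1, 3))  # average pooling

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 1., 3., 2.],
              [2., 6., 1., 1.]])
max_pooled = pool2d(x, mode="max")   # [[4, 5], [6, 3]]
avg_pooled = pool2d(x, mode="avg")   # [[2.5, 2.0], [2.25, 1.75]]
```

Both variants halve each spatial dimension, which is the dimensionality reduction described above; max pooling keeps the strongest response per window, while average pooling smooths it.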

Training of CNN.
A deep CNN is a kind of deep neural network [21]. An artificial neural network, also referred to simply as a neural network or a connectionist model, is an algorithmic mathematical model that imitates the behavioral characteristics of animal neural networks and performs distributed, parallel information processing. Such a network relies on the complexity of the system and processes information by adjusting the relationships among a large number of internal nodes. The neural network structure diagram is shown in Figure 5.
As shown in Figure 5, the cost function is often used to measure the network [22]. Its form is

$$J(\theta) = \frac{1}{2n} \sum_{a} \left\| h_{\theta}(a) - b \right\|^2,$$

where $a$ represents an input sample, $n$ represents the number of samples, $b$ represents the corresponding ideal value, and $h_{\theta}$ represents the actual output of the network's output layer. In a classification task, $h_{\theta}(a)$ is a $K$-dimensional vector corresponding to $K$ classes, as with the classifier output described above. The CNN framework used in this article uses this quadratic cost function.
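The quadratic cost above can be sketched directly; the network outputs and one-hot ideal values below are illustrative:

```python
import numpy as np

def quadratic_cost(h, b):
    """J = 1/(2n) * sum ||h(a_i) - b_i||^2 over the n samples."""
    n = h.shape[0]
    return np.sum((h - b) ** 2) / (2 * n)

# Toy example: n = 2 samples, K = 3 classes (one-hot ideal outputs b).
h = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1]])   # actual output-layer values h_theta(a)
b = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # ideal (label) values
cost = quadratic_cost(h, b)
```

Training then amounts to using BP to follow the gradient of this scalar with respect to the layer parameters, as described in the text.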
The deconvolution layer reconstructs the image as the sum of linear convolutions of the feature maps with filters. Taking the first layer as an example, each color channel of the input image can be expressed as the sum of linear convolutions of the feature maps with the filters corresponding to that channel [23]:

$$\hat{b}_1^{\,c} = \sum_{k=1}^{K_1} z_{k,1} * f_{k,1}^{\,c},$$

where $\hat{b}_1^{\,c}$ represents the approximate reconstruction of the $c$-th color channel of the input image $b_1$ of the first layer, $K_1$ represents the number of feature maps of the first layer, $z_{k,1}$ represents the $k$-th feature map, and $f_{k,1}^{\,c}$ is the corresponding filter. For brevity, this can be written in matrix form as

$$\hat{b}_1 = F_1 z_1,$$

where $\hat{b}_1$ represents the reconstructed input image, $F_1$ represents the convolution-and-summation matrix, and $z_1$ represents the feature maps stacked as a one-dimensional vector.
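A rough sketch of this per-channel reconstruction, assuming "full" linear convolution and made-up feature maps and filters:

```python
import numpy as np

def full_conv2d(z, f):
    """2-D 'full' linear convolution (output size = z + f - 1 per side)."""
    zh, zw = z.shape
    fh, fw = f.shape
    out = np.zeros((zh + fh - 1, zw + fw - 1))
    for i in range(zh):
        for j in range(zw):
            out[i:i + fh, j:j + fw] += z[i, j] * f
    return out

def reconstruct_channel(feature_maps, filters):
    """b_hat = sum_k z_k * f_k : one color channel rebuilt from K maps."""
    return sum(full_conv2d(z, f) for z, f in zip(feature_maps, filters))

# K = 2 illustrative 2x2 feature maps and their 2x2 filters.
zs = [np.ones((2, 2)), np.eye(2)]
fs = [np.array([[1., 0.], [0., 1.]]), np.array([[0., 1.], [1., 0.]])]
channel = reconstruct_channel(zs, fs)   # 3x3 reconstructed channel
```

Stacking this linear operation over all maps and channels is exactly what the matrix $F_1$ abbreviates.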

Clustering Algorithm Based on AI Algorithm
AI is widely used and has become an advanced cross-disciplinary science [24]. Generally speaking, the purpose of AI is to make computers and machines think like humans, so that these machines can replace a great deal of labor. AI has brought great convenience to mankind, and research on it has become more and more detailed. One of the important areas of AI is the intelligent retrieval system [25].
Clustering is a widely used exploratory data analysis technique. People's first instinct with data is often to group it meaningfully: by grouping objects, similar objects are placed in one category and dissimilar objects in different categories. Clustering is an effective method for analyzing data and mining latent information, and partition-based clustering algorithms are among the most widely used. Therefore, research on improving clustering algorithms, with the aim of improving both efficiency and clustering results, has important theoretical significance.
Since cluster analysis is based on the similarity between individuals, the measurement of similarity between individuals, between classes, and between different clustering results runs through every stage of clustering [26].
Suppose $x = (x_1, \cdots, x_d)$ and $y = (y_1, \cdots, y_d)$ are two points in space; their Minkowski distance is

$$d_m(x, y) = \left( \sum_{l=1}^{d} \left| x_l - y_l \right|^m \right)^{1/m}.$$

When $m = 1, 2, \infty$, three commonly used distances are obtained, respectively; when $m = 1$, it is the absolute value distance:

$$d_1(x, y) = \sum_{l=1}^{d} \left| x_l - y_l \right|.$$

It should be noted that such distances can only measure the similarity between numerical individuals and cannot be used to measure the similarity of attribute individuals. If two vectors are randomly selected from the sample, the Mahalanobis distance between them is

$$d(x, y) = \sqrt{(x - y)^{T} \Sigma^{-1} (x - y)},$$

where $\Sigma^{-1}$ represents the inverse of the covariance matrix between samples. Assume that the mean of all data is $\mu$, the standard deviation is $\sigma$, the number of classes is $k$, and the initial classification center of the $i$-th category is $x_i$; for the classification of $d$-dimensional data, the initial center value of the $i$-th cluster in each dimension is $x_{il}$. In summary, although clustering is well developed, its own shortcomings limit its application in practice. The large and increasingly complex data volumes encountered in practice can easily make the computation of a clustering algorithm too large to perform effective clustering. Therefore, traditional clustering methods need to be further improved and perfected [27].
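The distance measures and partition-based clustering described above can be sketched as a minimal k-means-style loop; the points and the choice of Euclidean distance below are illustrative:

```python
import random

def minkowski(p, q, m=2):
    """Minkowski distance; m=1 is the absolute value (Manhattan) distance,
    m=2 the Euclidean distance."""
    return sum(abs(a - b) ** m for a, b in zip(p, q)) ** (1.0 / m)

def kmeans(points, k, iters=20, seed=0):
    """Minimal partition-based clustering sketch (k-means)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)        # initial centers from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                   # assign each point to nearest center
            i = min(range(k), key=lambda j: minkowski(p, centers[j]))
            clusters[i].append(p)
        centers = [                        # move each center to its cluster mean
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers, clusters = kmeans(pts, 2)
```

On these two well-separated groups the loop converges to one center per group; the cost of the assignment step grows with both the number of points and $k$, which is the computational burden the text mentions.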

Experiment and Analysis of AI Algorithms
A sensor is used to collect environmental information and, combined with a conventional robot, forms an intelligent robot. If a robot can perceive the information of its surrounding environment, its performance advantage is great, and multisensor fusion can solve exactly this problem.
An outlier is a point whose sampled value changes faster within one sampling period than the gradient the actual system can reach. In mathematics, outliers are recorded as the first type of singularity. Outliers consist of one or more observation points that contradict the other data in the observation data set. There are many methods for eliminating outliers: visual inspection, the mean square method, the point discrimination method, the Wright criterion, etc., among which the Wright criterion is the most commonly used. However, using the Wright criterion to eliminate outliers requires the residuals to follow a normal distribution, which measurement data often do not satisfy. If the characteristic-change component of the measurement data is first extracted as a trend item, and the remaining part is then screened according to the Wright criterion, the effect is good. In order to verify the effectiveness of the outlier detection and elimination algorithm, this paper conducts experiments on 4 intelligent robots, with sampling periods between 0 and 0.4 seconds and measured values between 5.7 and 6.8. This paper collects the robots' data over a period of time and eliminates the outliers in the data. The theoretical values, measured values, and cleaned data of the robots are shown in Table 2.
As shown in Table 2, the data collected by the sensor has a certain error compared with the theoretical data, and there are outliers. Therefore, the data collected by the sensor must be preprocessed. This paper compares the data before and after eliminating outliers, as shown in Figure 6.
As can be seen from Figure 6, before the outliers are eliminated, the sensor's acquisition accuracy keeps declining, from 6% at the beginning to 4.2% at the end. After the outliers are eliminated, the acquisition accuracy keeps increasing, from 8% at the beginning to 16%. The filtered data can reduce the side effects of abnormal data on the system, but its accuracy still cannot meet the system's requirements; after eliminating outliers, the accuracy requirements can be met. The data obtained after eliminating outliers is more conducive to the robot's subsequent obstacle detection and path planning. The results show that preprocessing the data collected by the sensor improves the accuracy and effectiveness of the data. Data fusion and path planning are carried out with the outlier-eliminated data, and the convergence speed is accelerated while accuracy is guaranteed.
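The detrend-then-screen procedure described above can be sketched as follows, assuming a least-squares linear trend item, the Wright (3-sigma) criterion on the residuals, and illustrative measurements in the 5.7-6.8 range mentioned earlier:

```python
import statistics

def remove_outliers_wright(samples):
    """Extract a linear trend item, then drop points whose residual exceeds
    three standard deviations of the residuals (Wright criterion)."""
    n = len(samples)
    xs = list(range(n))
    # Least-squares linear fit as a simple stand-in for the trend item.
    mean_x = statistics.fmean(xs)
    mean_y = statistics.fmean(samples)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
            / sum((x - mean_x) ** 2 for x in xs)
    trend = [mean_y + slope * (x - mean_x) for x in xs]
    residuals = [y - t for y, t in zip(samples, trend)]
    sigma = statistics.pstdev(residuals)
    return [y for y, r in zip(samples, residuals) if abs(r) <= 3 * sigma]

# Illustrative linear-trend data with one outlier (6.80) at index 5.
data = [5.70, 5.75, 5.80, 5.85, 5.90, 6.80,
        6.00, 6.05, 6.10, 6.15, 6.20, 6.25]
clean = remove_outliers_wright(data)
```

Applying the 3-sigma test to raw values would leave the trend's own drift in the residuals; removing the trend item first is exactly what makes the criterion usable here.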
Many scholars propose improving a simple model into a practical one: first establish a rough model and then fit a large amount of data to obtain the model parameters; this approach is called a data-driven AI algorithm. As a kind of AI algorithm, the main purpose of a wavelet neural network is to establish the inherent mathematical relationship between input and output values through a certain amount of training. However, discovering an accurate physical model is difficult and lengthy, and its application at the system level is limited and lacks scalability. The results obtained are incomplete, but they are sufficient to guide practice.
The measured data can be used as the sample library for wavelet neural network training, as shown in Table 3.
As shown in Table 3, through the fitting of the MTF values of the meridian and sagittal directions of the optical system under test at different spatial frequencies, it can be observed that a small defocus has a relatively small effect on the MTF value of a low-frequency optical system, and that as the amount of defocus increases, the MTF value of the optical system decreases. The initial values of the wavelet neural network parameters are particularly critical for fast and accurate computation of the optimal wavelet neural network, but otherwise the initial parameters are scattered and converge poorly. When the normal distribution, which is common in nature, is used to initialize them, the accuracy and efficiency meet the requirements.
Through artificial optimization, the optimal CMOS position can be found, and the MTF values of its meridian and sagittal directions can be obtained. Furthermore, combined with the wavelet-function neural network calculation method, the optimal MTF value and the corresponding CMOS position can be obtained, as shown in Table 4.
As shown in Table 4, it can be seen that the MTF value optimized by wavelet neural network is higher than the MTF value obtained by artificial optimization. The wavelet neural network and support vector machine algorithm are used to realize the nonlinear function fitting of the relationship between the focus position and the MTF value, and then the best focus position is approximated by AI to obtain the optimal solution. This paper analyzes the MTF value optimization algorithm through the optimization results of the meridian direction and the sagittal direction, as shown in Figure 7.
As shown in Figure 7, the MTF values corresponding to the meridian and sagittal directions of the current AI algorithm can be obtained according to the fitted MTF. It can be seen that the MTF values in the meridian and sagittal directions have little fluctuation, within the allowable range of error. Based on the wavelet neural network, the optimal algorithm of the MTF value optimization algorithm can be obtained.
This paper uses the CNN and the multisensing algorithm to compare the benefits brought to AI algorithms by the two machine-learning-based estimation algorithms before and after improvement and demonstrates the superiority of the improved algorithms in both subjective effect and objective evaluation indexes, as shown in Figure 8.
As shown in Figure 8, the accuracy of the neural network before the improvement was 4.1% and finally dropped to 1.3%; the accuracy of the multisensor information fusion algorithm before the improvement started at 3.1% and finally dropped to 1%. The improved CNN and multisensing algorithm bring high accuracy to the AI algorithm, and it has been on the rise: the improved CNN rose from an initial 4.6% to 9.8%, and the multisensing algorithm rose from an initial 3.9% to 9.5%.
Through scientific research, human intelligence can be imitated by machines. Therefore, AI is conducive to the development of intelligent machines. At the same time, AI is widely used in many fields, so AI is very comprehensive.

Discussion
This article analyzes how to research AI algorithms based on multisensor information fusion. The concepts related to deep learning algorithms and multisensor information fusion are expounded, related theories of AI algorithms are analyzed, and the research methods of AI algorithms are explored. Through experiments and analysis of various algorithms, the importance of deep learning algorithms and multisensor information fusion to AI algorithms is discussed. Finally, an AI algorithm incorporating deep learning and multisensor information fusion is analyzed as an example. This paper also makes reasonable use of the neural network based on deep learning and the D-S evidence theory improved with multisensor information fusion weighted similarity. As the scope of application of multisensor information fusion has grown, so has its importance, and many scholars have begun to apply it to all aspects of life. In light of these two algorithms, it is meaningful to study AI algorithms based on deep learning algorithms and multisensor information fusion.
Through experimental analysis, this paper knows that deep learning algorithms and multisensor information fusion are necessary to study AI algorithms, which will make AI algorithms more advanced and accurate.

Conclusion
This article explains the concepts of deep learning and multisensor information fusion. In the method part, the neural network based on deep learning and the D-S evidence theory improved with multisensor information fusion weighted similarity are introduced in detail, and finally the clustering algorithm based on the AI algorithm is introduced. This article conducted experiments and analysis on wavelet neural network training through AI algorithms and found that using a wavelet neural network for AI approximation is currently a relatively reliable method. Comparing the algorithms, including the multisensing algorithm, before and after improvement shows that the improved algorithms can increase the accuracy and reliability of the AI algorithm. As a scientific cross-discipline, AI research does not yet have a unified concept, and it is difficult to give an accurate definition of AI. Therefore, this article has some shortcomings in generalizing the concept of AI, but it has been described as broadly as possible. Finally, it can be seen that the research of AI algorithms through deep learning has a significant impact on social development.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare no conflicts of interest.

Acknowledgments
This work was supported by the project Research and Application of Educational Technology Based on Artificial Intelligence (taking the artificial intelligence major in higher vocational colleges as an example), project number 2020ITA05051.