Research on Massive Image Retrieval Method of Mobile Terminal Based on Weighted Aggregation Depth Feature

Image self-coupling and feature interference degrade retrieval performance when searching massive image collections on mobile terminals. This paper proposes a massive image retrieval method for mobile terminals based on weighted aggregation depth features. First, a pixel big data detection model of the massive images is constructed and the collected pixel information is restructured. Next, the edge contour feature parameter set is extracted, and feature fusion is performed in the gradient pixel space by feature reconstruction and gray moment invariant feature analysis. Depth feature detection is then realized with a weighted aggregation method: the gradient value of each image's pixels is calculated, and optimized retrieval is performed according to the fused gradient-weighted information. Simulation results show that the method yields better feature clustering, stronger image detection and recognition, improved anti-interference ability, and higher precision and recall in image retrieval.


Introduction
As the amount of multimedia image information stored in mobile terminal databases grows, accurate image retrieval and positioning become increasingly difficult. In particular, environmental coupling interference in massive mobile terminal images degrades both the precision and the recall of retrieval. It is therefore necessary to combine multidistribution feature detection with image information fusion to build an optimized massive image retrieval algorithm for mobile terminals, realizing massive image retrieval and improving the detection and positioning capability of the terminal's image collection. Research on such retrieval methods is important for the construction and access design of multimedia databases on mobile terminals [1].
Retrieval and recognition of multimedia images in a mobile terminal database rest on feature analysis and extraction from the massive image collection. Combining pixel feature analysis with semantic ontology information segmentation and corner location detection of the images, feature clustering and adaptive learning algorithms [2] are used to retrieve images. Traditional approaches mainly include retrieval based on semantic ontology information fusion, retrieval based on cloud computing fusion, and retrieval optimized by particle swarm optimization (PSO) and genetic evolutionary clustering [3]. Reference [4] proposes a retrieval model based on similarity feature detection and semantic ontology fusion: a parameter model of the feature distribution of the image collection is constructed, and retrieval is performed by fuzzy degree matching; however, its retrieval accuracy is not high and its computational adaptability is poor. Reference [5] proposes retrieval based on corner point localization, improving the localization ability of retrieval through Harris corner analysis, but its image detection is highly ambiguous and its feature matching is weak.
To address the drawbacks of traditional methods, this paper proposes a massive image retrieval method for mobile terminals based on weighted aggregated depth features. First, we construct a pixel big data detection model for the massive images and apply weighted aggregation to the results of deep feature analysis built on the big data feature analysis. Then, clustering analysis of the images is carried out to optimize retrieval. Finally, simulation tests are conducted to demonstrate the method's performance in improving massive image retrieval on mobile terminals.

Mobile Terminal Massive Image Big Data Model and Feature Analysis

Mobile Terminal Massive Image Big Data Model.
To realize the retrieval of massive images of mobile terminals, a feature extraction method is combined with image database model construction to extract the relevant feature quantities of the images. The extracted features mainly include pixel features, corner features, gray-scale features, and moment invariant features [6]. Deep fusion and big data analysis are carried out on the extracted feature information, and a dictionary library for massive image retrieval is built by combining the data characteristics with their own analysis. A weighted aggregation algorithm is then applied over the dictionary library to realize retrieval [7]. The overall implementation structure obtained from these steps is shown in Figure 1, which indicates that the first step of retrieval is to construct the image database and acquire features. According to the distribution of feature points of the images [8], an image fusion clustering model is constructed using a statistical dictionary set, in which A is the pixel value of the gray-scale pixel feature points of the image in the given direction, t(x) is the dictionary-matching pixel set of the image, and J(x)t(x) is the associated distribution set of each image sample in the statistical dictionary.
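As a concrete illustration of the dictionary-construction step, the sketch below clusters local feature vectors with plain k-means to form a small statistical dictionary. This is a minimal stand-in under stated assumptions, not the paper's exact procedure; the function name `build_dictionary` and the choice of k-means are illustrative.

```python
import numpy as np

def build_dictionary(features, k, iters=20, seed=0):
    """Cluster local feature vectors into a k-entry visual dictionary.

    A plain k-means sketch of the statistical-dictionary step:
    features is an (n, d) array of local descriptors.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest dictionary entry
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = features[labels == j]
            if len(pts):                 # guard against empty clusters
                centers[j] = pts.mean(axis=0)
    # recompute the final assignment against the converged centers
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    return centers, d.argmin(axis=1)
```

In practice the dictionary entries ("visual words") would be learned once over the whole image collection and reused for every query.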
After normalizing the number of features, n feature vector constraint functions of the image are obtained, where x ∈ Ω represents the big data detection feature quantity of the image. Combined with the results of statistical feature detection, adaptive learning and deep learning algorithms are used to achieve feature clustering analysis of the massive images. The learning and feature clustering model is shown in Figure 2.
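The normalization step preceding clustering can be sketched as L2-normalizing each feature vector, so that clustering compares feature direction rather than magnitude. This is one common choice of constraint, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def normalize_features(X, eps=1e-12):
    """L2-normalize each row of the (n, d) feature matrix X.

    eps guards against division by zero for all-zero vectors.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, eps)
```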

Analysis of Mass Image Features of Mobile Terminals.
Establish a pixel big data detection model for the massive images of mobile terminals so as to restructure the collected pixel information, and extract the edge contour feature parameter set. Record the pixel distribution set of the images as ‖ϕ‖ = sup |ϕ(θ)|, ϕ ∈ C([a, b], R), where F and C([a, b], R) are the matching entropy of the image pixel information. According to the result of the reorganization of the image structure information, the extracted feature distribution matrix is obtained, where f(z) represents the SIFT feature moment of the image. Using the gray invariant moment detection method [9], the key feature sequence points of the massive images are constructed as r(n) = r(nΔt), n = 0, 1, 2, ⋯, N − 1.
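The gray invariant moment idea can be illustrated with the first two Hu moments, which are unchanged when the image content is translated. This numpy-only sketch is illustrative of invariant-moment features in general, not the paper's exact detector.

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariant moments of a gray image (2-D array).

    Central moments make the result translation-invariant; the eta
    normalization additionally makes it scale-invariant.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                       # central moment
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                      # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Because the moments are computed about the intensity centroid, the same shape placed anywhere in the frame yields the same feature values.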
The key point detection method is used to detect the key cluster feature points of the massive images, and the detection results are obtained. Let the input pixel set of the image be x(t), t = 0, 1, ⋯, n − 1; the joint feature distribution sets of the detected feature points then follow. Combining the semantic distribution of the massive images, visual word information fusion is used for image retrieval and recognition.
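A toy stand-in for the key-point detection step is to mark pixels that are strict local maxima of some response map (e.g., a corner response) and exceed a threshold. The response map itself and the 3×3 neighborhood rule are assumptions for illustration.

```python
import numpy as np

def local_key_points(resp, thresh):
    """Return (row, col) positions that are strict 3x3 local maxima
    of the response map `resp` and at least `thresh`."""
    h, w = resp.shape
    pts = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = resp[i - 1:i + 2, j - 1:j + 2]
            is_unique_max = (resp[i, j] == patch.max()
                             and (patch == resp[i, j]).sum() == 1)
            if resp[i, j] >= thresh and is_unique_max:
                pts.append((i, j))
    return pts
```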

Optimization of Massive Image Retrieval for Mobile Terminals

Mobile Terminal Mass Image Fusion Processing.
Based on the general structure analysis and feature detection of image retrieval above, the edge contour feature parameter set of the massive images is extracted. Feature fusion of the images is performed in the gradient pixel space by feature reconstruction and gray invariant moment feature analysis [10]. The feature fusion region pixel set is expressed in terms of R_2(k), the encoded feature components of the massive images, and T_p, the sampling time interval of the image coding. A BoF model is built, and according to the analysis of the moment invariant characteristics of the images, the distribution sequence of the massive image resources is expressed accordingly. Gradient pixel decomposition and information fusion are then used to construct a mean segmentation model for the massive images [11].

In the formula, x_1, x_2, x_3, ⋯, x_T are the profile deformation parameters, and T is the time delay of the mean division.
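The BoF model mentioned above quantizes local features against the visual dictionary and histograms the resulting word assignments. A minimal sketch, assuming Euclidean nearest-word assignment and a precomputed dictionary:

```python
import numpy as np

def bof_histogram(features, dictionary):
    """Quantize (n, d) local features against a (k, d) dictionary and
    return the normalized bag-of-features histogram."""
    d = np.linalg.norm(features[:, None, :] - dictionary[None, :, :], axis=2)
    words = d.argmin(axis=1)                    # nearest visual word per feature
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```

Two images can then be compared by the distance between their histograms instead of between raw pixel sets.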
In the HSV space of the image distribution vision, the RGB three-dimensional reconstruction model for massive image retrieval is constructed through feature decomposition, and the three-dimensional reconstruction output feature value is N_l, where l_triangle = π·D/(2L) represents the Retinex corner parameter value of the massive image and L is the characteristic value of the image pixel series distribution [12].
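The move into HSV space can be sketched with the standard RGB-to-HSV conversion followed by a hue histogram, a simple color feature in that space. The histogram choice and `bins` parameter are illustrative assumptions, not the paper's exact reconstruction model.

```python
import colorsys
import numpy as np

def hsv_hue_histogram(img_rgb, bins=8):
    """Convert an (h, w, 3) RGB image with floats in [0, 1] to HSV and
    return a normalized histogram of the hue channel."""
    h, w, _ = img_rgb.shape
    hues = np.array([colorsys.rgb_to_hsv(*img_rgb[i, j])[0]
                     for i in range(h) for j in range(w)])
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()
```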
The weighted aggregation method is used to realize depth feature detection of the massive images, and the pixel image information of each image is calculated; the comprehensive feature direction of the image fusion satisfies k = 1, 2, ⋯, n, z_k ∈ w_s, a_k ∈ {1, 2, ⋯, R}. Based on the comprehensive feature vector analysis, the image fusion result is obtained, where σ_x, σ_θ, and e_i represent the fitness parameters of the massive image fusion and μ > 0 represents the detection statistical feature quantity of each pixel. This realizes the fusion processing of the massive images; image weighted aggregation is then performed according to the results of image fusion and deep learning [13].
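The core of weighted aggregation can be sketched as fusing several per-region depth feature vectors into one descriptor via a convex weighted sum followed by L2 normalization. The convex-sum form is a common, minimal realization assumed here; the paper's exact weighting scheme is not reproduced.

```python
import numpy as np

def weighted_aggregate(features, weights):
    """Fuse a list of feature vectors into one descriptor.

    weights are rescaled to sum to 1 (a convex combination), and the
    fused vector is L2-normalized so descriptors are comparable.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    agg = (w[:, None] * np.asarray(features, dtype=float)).sum(axis=0)
    n = np.linalg.norm(agg)
    return agg / n if n > 0 else agg
```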

Image Weighted Aggregation and Retrieval Output.
The weighted aggregation method is used to realize depth feature detection of the massive images, and the gradient value of each image's pixels is calculated [14], giving the gradient convergence value of the image pixels, where x_T represents the correlation parameter between feature sets, K is the scale of the feature vector coding, and Q(x_i, y_i) is the color moment of the training image, i = 1. The first mobile terminal image is read in and input into the weighted aggregator to obtain the weighted aggregation function, in which C_t = C_e = 1/∑_{x_i ∈ w} k(‖x_i‖^2) represents the depth information parameter of weighted aggregation and b ∈ [1, M] represents the attribute type of the output. V(s) is enhanced to N points along s, and weighted training based on the depth features yields the weighted aggregation output function of the massive images, where V_i is the morphological function for weighted aggregation, i = 0, 1, ⋯, N − 1 indexes the pixel collection, and the image aggregation output is G. F_t = [x_t, y_t]^T is the associated pixel value of the t-th frame of the mobile terminal image. Through weighted aggregation and feature detection, the tracking trace satisfies [15, 16] a condition in which trace(·) represents the deep fusion parameters of image aggregation; the iterative function for massive image retrieval then follows, where X = [x_t, y_t]^T is the training image set, and the output clustering matrix of image retrieval is obtained, in which L_xx(x, σ) is the joint feature matching coefficient and L_xy and L_yy are the image retrieval pheromones in different aggregation directions.
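The two operations this subsection chains together — per-pixel gradient values and a ranked retrieval over aggregated descriptors — can be sketched as follows. Cosine-similarity ranking is a simple stand-in for the paper's gradient-weighted retrieval output, assumed here for illustration.

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude of a gray image via central
    differences (np.gradient returns derivatives along axis 0, axis 1)."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return np.hypot(gx, gy)

def retrieve(query_desc, db_descs, top_k=3):
    """Rank database descriptors by cosine similarity to the query and
    return the indices of the top_k matches, best first."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(-sims)[:top_k]
```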
Combining the above steps, the gradient value of each image's pixels is calculated, and optimized retrieval of the massive images is realized according to the result of gradient-weighted information fusion [17]. The optimization process of the algorithm is shown in Figure 3.

Simulation Experiment and Results
To verify the performance of this paper's method for massive image retrieval on mobile terminals, MATLAB is used for simulation testing, with the number of training samples set to 2400 and the test set to 120; the remaining attribute feature settings are listed in Table 1.
According to the above parameter settings, the mass image retrieval of mobile terminals is carried out and the test image samples are obtained as shown in Figure 4.
Taking the image in Figure 4 as the test object, retrieval of images of the same category is performed, with the result shown in Figure 5. Figure 5 shows that the proposed method can effectively achieve massive image retrieval on mobile terminals with good retrieval output performance. Comparison results of precision and recall for image retrieval using different methods are shown in Figures 6 and 7. Analysis of the simulation results shows that the precision and recall of this method for massive image retrieval on mobile terminals are higher; in particular, the recall is significantly higher than that of traditional retrieval methods.
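For reference, the two metrics compared in Figures 6 and 7 are computed per query as below; this is the standard definition, shown only to make the evaluation concrete.

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of one retrieval result.

    precision = fraction of retrieved items that are relevant;
    recall    = fraction of relevant items that were retrieved.
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```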

Conclusion
This paper constructs an optimized algorithm for massive image retrieval on mobile terminals, combining multidistribution feature detection and image information fusion to achieve retrieval and to improve the ability to detect and locate images in the collection. The proposed method is based on weighted aggregated depth features: the extracted image feature information is deeply fused and analyzed with big data techniques, and a dictionary for retrieval is constructed by combining the data features with their own analysis. The gradient value of each image's pixels is calculated, and, according to the gradient weighting, a weighted aggregation algorithm is applied over the dictionary to achieve retrieval. Experimental results show that the method attains higher recall and precision for massive image retrieval on mobile terminals.

Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.