Digital Image Progressive Fusion Method Based on Discrete Cosine Transform

The current progressive fusion methods for digital images have poor denoising performance, which leads to a decrease in image quality after progressive fusion. Therefore, a new method for digital image progressive fusion based on discrete cosine transform was proposed, and its effectiveness was verified through experiments. The experimental results show that the proposed method achieves a PSNR above 42.13 dB in image fusion, higher than all of the comparison methods, and the visual comparison of fusion effects also shows higher image quality. In terms of fusion time, the research method is faster than the comparison methods for data volumes between 10 and 100, and in the comparison of structural similarity, the structural similarity of the images fused by the research method is always higher than 0.83. Overall, the proposed fusion method yields higher image quality and is effective in progressive digital image fusion, which is of great significance for practical digital image fusion.


Introduction
Digital images are the visual foundation of human perception of the digital world, possessing characteristics such as high flexibility, strong universality, and good repeatability. They are one of the main means by which people obtain and disseminate information through the Internet. Digital image fusion interprets multiple images of the same target or scene simultaneously and integrates their information into a single complete image containing the feature information of all input images, thereby obtaining a more accurate description of the scene. It is a branch of data fusion, and its main purpose is to improve the clarity of the fused image through effective processing of the redundant and complementary information between multiple images. Progressive image fusion can effectively reduce the incompleteness and uncertainty of a single information source by exploiting the complementarity of different image information [1][2][3]. It is widely used in image feature extraction, classification, and information monitoring, and has therefore attracted considerable attention from scholars. In general, according to the types of source images involved, progressive image fusion can be roughly divided into the fusion of infrared and visible images, the fusion of multispectral and panchromatic images, multifocus image fusion, and medical image fusion [4][5][6][7].
Aiming at the key problem of medical image fusion [8] in the mitigation of COVID-19 in Hubei Province, a new gradual fusion method was proposed based on a detailed analysis of air quality in Wuhan, thus supporting the mitigation and control of the epidemic. A path analysis model based on regression analysis was proposed to address the gradual fusion of images of air pollution in Anhui Province [9], providing data support for controlling air quality. A fusion method based on wavelet transform and gradient field was proposed to solve the problems of serious loss of image information and low image clarity after progressive digital image fusion [10]. The detailed information of the image is enhanced by a top-hat transform, the image is decomposed and reconstructed by wavelets, the image information is analyzed according to the reconstruction results to determine the fusion factor, and the gradient field of the digital image is obtained. The gradient-field image is combined with the wavelet transform to complete the gradual fusion of digital images, but actual tests show that the quality of the fused images processed by this method is low. An infrared and visible image fusion method based on guided filtering and convolutional sparse representation was proposed [11, 12]. Using a guided filter and a Gaussian low-pass filter, the source image is decomposed into a low-frequency approximation part, a strong-edge part, and a high-frequency detail part, and the high-frequency detail part is filtered by a nonsubsampled directional filter to obtain the high-frequency directional detail part. A fusion rule based on local energy is applied to the low-frequency approximation part, a fusion rule based on convolutional sparse representation is applied to the strong-edge part, and an improved pulse-coupled neural network fusion rule is applied to the high-frequency directional detail part. The corresponding fused parts are obtained, and the final fused image is obtained by the inverse transform.
However, the method is too complex, and the image fusion takes a long time. An infrared and visible image fusion method based on rolling guided filtering was proposed [13, 14]. This method makes full use of the edge-preserving and local brightness-preserving characteristics of the rolling guided filter. After decomposing the input image into a base layer and a detail layer through mean filtering, the saliency map of the input image is obtained by combining the rolling guided filter and a Gaussian filter, and the weight map is used to guide the weighting. Finally, the fused image is obtained by reconstructing the fused base and detail sublayers. However, the quality of the fused image is poor, and the practical application effect is not ideal.
From the above analysis, it can be seen that the denoising effect of current digital image progressive fusion methods is poor, which leads to image blur after the gradual fusion of digital images. Therefore, a digital image progressive fusion method based on discrete cosine transform is proposed. In the design of the method, a common-optical-axis design is used to improve the performance of image registration, and the histogram equalization method is selected for image enhancement of the collected images. It promotes a balanced distribution of gray values in the image, improves the quality of low-contrast images, and is more conducive to the subsequent processing of images or video. The proposed method therefore offers a degree of innovation in solving the problem of poor denoising performance in digital image progressive fusion and can also provide a data reference for progressive image fusion in related fields. The structure of the paper is shown in Figure 1.

Design of Digital Image Progressive Fusion Method Based on Discrete Cosine Transform
The study analyzed the design of the digital image progressive fusion method based on discrete cosine transform in 8 parts, as shown in Figure 2. In the design process, the SIFT (scale-invariant feature transform) algorithm is first used to extract image features. When extracting image features, the SIFT algorithm needs to detect features in the image scale space, determine the position and scale of the key points, and then use the principal gradient direction of the neighborhood of each key point as the direction of that key point, so as to constrain the SIFT operator in scale and direction. The SIFT algorithm can solve the problem of feature point matching under rotation, scaling, translation, projective transformation (viewpoint change), illumination changes, object occlusion, cluttered scenes, and noise. The image feature extraction process is shown in Figure 3.

Image Feature Extraction
In the process of image feature extraction, it is first necessary to detect the key points in scale space, searching all scales and pixel positions of the image and using the difference-of-Gaussian function to detect potential feature points that are invariant to scaling and rotation. Then, the key points are localized in the image, and the position and scale of each candidate key point are determined. To ensure invariance to various transformations, a key point descriptor is generated from the gradient information of the region around the key point at its scale. With the descriptor feature vectors of the key points obtained in this way, image registration can be carried out in the next step.
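As an illustration of the first step above, the following Python sketch computes a difference-of-Gaussian (DoG) response with separable Gaussian blurs. The function names and the kernel radius rule are illustrative assumptions; a full SIFT pipeline would additionally search for extrema across scales, localize key points, and build descriptors.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)          # illustrative truncation rule
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: 1-D convolutions along rows, then columns."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def dog_response(img, sigma=1.0, k=1.6):
    """Difference-of-Gaussian response used to flag candidate key points."""
    return blur(img, k * sigma) - blur(img, sigma)
```

On a featureless (constant) region the DoG response is zero, while an isolated bright pixel produces a strong response, which is exactly why extrema of this map are taken as candidate feature points.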

Image Registration.
After obtaining the descriptor feature vectors of the key points by the SIFT algorithm, the key points of the two images must be matched. The descriptor feature vectors of the feature points on the registration image and the reference image are compared, and the Euclidean distance between the two feature vectors is calculated. If the Euclidean distance between two feature points is the shortest and is less than 0.7 times the second-shortest distance, the two feature points are considered a corresponding matching point pair; the matching point set of the two images can then be obtained [15][16][17][18].
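The 0.7-ratio rule above (Lowe's ratio test) can be sketched in a few lines of Python; the function name and brute-force search are illustrative, not the paper's exact implementation:

```python
import numpy as np

def match_keypoints(desc_a, desc_b, ratio=0.7):
    """Match descriptors with the ratio test: accept a pair only when the
    nearest neighbour is closer than `ratio` times the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

Ambiguous points, whose two closest candidates are nearly equidistant, are rejected rather than matched, which is what makes the test robust against repetitive texture.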
In the design of the image progressive fusion method, a common-optical-axis design is used to improve image registration performance. In theory, therefore, only one group of corresponding pixels needs to be found in the digital images to complete the registration, so registration is fast and accurate [19]. However, in actual image registration, influencing factors such as angle error may exist, so the registration accuracy should be increased as much as possible. During registration, the collected images are placed in the spatial coordinate system (x, y). The plane with the same graphic and image characteristics is adjusted to the shooting angle consistent with the imaging position of the image and used as the registration image to register with the collected image, realizing the dynamic conversion from image-to-graphics registration to image-to-image registration. It is assumed that the transformation relationship between the two images is a perspective transformation, i.e., the projective transformation of a central projection [20].
In equation (1), a1 to a8 are the eight parameters of the perspective transformation, which maps a point (x, y) to (x′, y′) with x′ = (a1x + a2y + a3)/(a7x + a8y + 1) and y′ = (a4x + a5y + a6)/(a7x + a8y + 1). Solving for these eight parameters yields the equations for the pixel positions; to solve equation (1), four groups of matching pixels must be found in the two collected images. After the matching point set of the two images is obtained, mismatches are inevitable, and if they are not removed, they will affect the effect of progressive image fusion. A distance threshold is therefore computed from the feature point set of the original image. If the distance between matched feature points under a candidate transformation is less than this threshold, the matched feature points are called interior points, and the set of all interior points is called a consistent set. Groups of matching points are repeatedly sampled at random from the original point set to solve equation (1), and the transformation whose consistent set contains the most matching points is retained [21, 22].
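A minimal Python sketch of the two ingredients above, applying the 8-parameter perspective transform of equation (1) and collecting the consistent set of interior points under a distance threshold (parameter order and function names are illustrative assumptions):

```python
import numpy as np

def perspective(pt, a):
    """Apply the 8-parameter perspective transform of equation (1).
    a = (a1, ..., a8); the denominator a7*x + a8*y + 1 is the projective term."""
    x, y = pt
    w = a[6] * x + a[7] * y + 1.0
    return np.array([(a[0] * x + a[1] * y + a[2]) / w,
                     (a[3] * x + a[4] * y + a[5]) / w])

def consistent_set(src, dst, a, threshold):
    """Interior points: matched pairs whose transformed distance is below threshold."""
    return [i for i, (s, d) in enumerate(zip(src, dst))
            if np.linalg.norm(perspective(s, a) - d) < threshold]
```

In a full robust estimator, the sampling-solving-counting loop would repeat many times and keep the parameters with the largest consistent set.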

Image Denoising Based on Discrete Cosine Transform.
The discrete cosine transform (DCT) is a transform related to the Fourier transform and is mainly used to compress data or images. It converts spatial signals to the frequency domain and has good decorrelation performance. The DCT itself is lossless, but it creates good conditions for the subsequent quantization and Huffman coding in the field of image coding. It is similar to the discrete Fourier transform (DFT) but uses only real numbers: a discrete cosine transform is equivalent to a discrete Fourier transform of roughly twice its length performed on a real even function (because the Fourier transform of a real even function is still a real even function). In some variants, the input or output positions are shifted by half a unit (there are 8 standard types of DCT, of which 4 are common). After image matching is completed, the discrete cosine transform is used to denoise the image; its purpose is to suppress or eliminate image noise and improve image quality. The process is as follows: let the original image be f′(x, y) with width and height r, and let the transform indices be m, n = 0, 1, 2, ..., r − 1. The result of the DCT is

F(m, n) = z(m)z(n) Σ_{x=0}^{r−1} Σ_{y=0}^{r−1} f′(x, y) cos[(2x + 1)mπ/2r] cos[(2y + 1)nπ/2r]. (2)

In equation (2), z(m) is the discrete cosine transform coefficient, defined as

z(m) = √(1/r) for m = 0, and z(m) = √(2/r) for m ≠ 0. (3)

Combining equations (2) and (3), it can be seen that the DCT has low data correlation and high energy concentration. After the DCT, a set of DCT coefficients is obtained. Some DCT coefficients represent most of the image information, such as smooth parts and background areas, while the rest represent the details and noise of the image [16]. For an image corrupted by noise, small blocks of the image are generally taken, and the image blocks are converted into the DCT domain.
Threshold shrinkage of the DCT coefficients is then performed, and the denoised image blocks are finally obtained by the inverse DCT. However, the denoised image will be weakened and blurred by this processing, so the denoised image needs to be enhanced [23, 24].
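The block-DCT denoising step can be sketched as follows in Python. The orthonormal DCT-II matrix matches equations (2) and (3); the hard-threshold rule and function names are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def dct_matrix(r):
    """Orthonormal DCT-II basis: row m holds z(m)*cos((2x+1)m*pi/(2r))."""
    x = np.arange(r)
    C = np.cos((2 * x[None, :] + 1) * np.arange(r)[:, None] * np.pi / (2 * r))
    C *= np.sqrt(2.0 / r)
    C[0] *= np.sqrt(0.5)          # z(0) = sqrt(1/r)
    return C

def dct2(block):
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeffs):
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C

def denoise_block(block, threshold):
    """Hard-threshold shrinkage of DCT coefficients, then inverse DCT."""
    F = dct2(block)
    F[np.abs(F) < threshold] = 0.0
    return idct2(F)
```

Because the transform is orthonormal, `idct2(dct2(b))` reproduces the block exactly (the DCT itself is lossless); only the thresholding discards information, which is precisely the noise-dominated small coefficients.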

Image Enhancement.
The main purpose of image enhancement is to increase the contrast of the image by adjusting the gray values of its pixels, so that the information contained in the image is richer while the original content and structure of the image are not damaged [25]. In this study, the histogram equalization method is selected for image enhancement of the collected images. Histogram equalization is a method in image processing that adjusts contrast using the image histogram, so that brightness is better distributed over the histogram. It promotes a balanced distribution of gray values in the image and improves the quality of low-contrast images, which is more conducive to the subsequent processing of images or video. Histogram equalization achieves a more even distribution of gray values by spreading out the gray levels that contain many pixels and merging the gray levels that contain few pixels, so that the number of pixels at each gray level is as equal as possible [26]. This enlarges the dynamic range of the image, raises the contrast, and improves indicators such as the information entropy of the image.
After the gray values of the image are dispersed, the information entropy of the image increases markedly, which lays a good foundation for the subsequent image fusion, so that the fused image contains more useful information.
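The equalization step above amounts to mapping each gray level through the normalised cumulative histogram, as in this Python sketch (the `equalize` name and the common CDF-minimum normalisation are illustrative assumptions):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: remap gray levels through the normalised
    cumulative histogram so values spread over the full dynamic range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]      # first non-empty bin
    n = img.size
    lut = np.clip(np.round((cdf - cdf_min) / (n - cdf_min) * (levels - 1)),
                  0, levels - 1).astype(np.uint8)
    return lut[img]                            # apply the lookup table
```

A low-contrast image whose values occupy only a narrow band of gray levels is stretched to cover the full 0-255 range, which is the balanced-distribution effect described above.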
Image Segmentation.
The pixels with the same gray-value attribute in the image are clustered by fuzzy clustering, and each class of pixels is then labeled to achieve image segmentation [27]. The fuzzy c-means clustering algorithm is essentially a fuzzy objective-function method. If each pixel of an image is regarded as a sample point of a data set, and the characteristics of each pixel (such as its gray value) are taken as the characteristics of the sample points, then the objective function is

J = Σ_{i=1}^{c} Σ_{k=1}^{n} (μ_ik)^m (d_ik)². (4)

When equation (4) is satisfied, c is the number of clusters, n is the number of samples in the cluster space, i is the image category, k is the image sample, μ_ik is the membership degree of sample k in the i-th class, and J is the criterion for image segmentation. d_ik is defined as

d_ik = ‖x_k − v_i‖. (5)

In equation (5), x_k is the location of sample k, v_i is the clustering center, and d_ik is the Euclidean distance between x_k and the cluster center v_i. When s is the dimension of the cluster space and x_k ∈ R^s, v_i ∈ R^s, then U = [μ_ik] is an n × c matrix and V = (v_1, v_2, ..., v_c) is an s × c matrix.
Fuzzy c-means clustering is used to segment the images. Among the many fuzzy clustering algorithms, the fuzzy c-means clustering algorithm (FCMA, or FCM) is the most widely and successfully used. It obtains the membership degree of each sample point to all class centers by optimizing the objective function, so as to determine the class of each sample point and thus automatically classify the sample data. The process is as follows. In the first step, the number of clusters c (2 ≤ c ≤ n) and the weighting exponent m (m ∈ [2, ∞]) are set; in the second step, the initial fuzzy clustering matrix U^(0) is given and the iteration counter is set to l = 0; in the third step, the cluster center v_i of each category is updated.
According to the calculation result of equation (6), the updated fuzzy clustering matrix U^(l) can be obtained. In the fourth step, according to the fuzzy clustering matrix U^(l) obtained from equation (6), the index sets I_k and Ī_k of each image sample k are determined, where I_k contains the categories i with d_ik = 0. The n × c matrix of memberships μ_ik must then be recalculated: when I_k = ∅, μ_ik = 1 / Σ_{j=1}^{c} (d_ik/d_jk)^{2/(m−1)}; when I_k ≠ ∅, μ_ik is set to 0 for i ∉ I_k and the memberships within I_k sum to 1. In the fifth step, after the fuzzy c-means clustering algorithm converges, the segmentation threshold is set to α, and when μ_ik = max{μ_1k, μ_2k, ..., μ_ck} > α, the image segmentation result is obtained. Once the information of the digital image is determined in this way, the progressive fusion factor of the image is set to complete the progressive fusion of the digital image.
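The five steps above can be sketched as an alternating-update loop in Python. This is a compact illustration on 1-D gray values (the regularising 1e-12 and the random initialisation are assumptions for the sketch), not the paper's production code:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on 1-D samples x: alternate the centre update
    v_i = sum_k u_ik^m x_k / sum_k u_ik^m and the membership update
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                   # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        v = um @ x / um.sum(axis=1)      # cluster centres (equation (6) style)
        d = np.abs(x[None, :] - v[:, None]) + 1e-12   # distances d_ik
        u = d ** (-2.0 / (m - 1))
        u /= u.sum(axis=0)               # normalise memberships
    return u, v
```

Thresholding the maximal membership per sample (the α step above) then yields hard segmentation labels.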

Analysis of Digital Image Information.
Because digital images usually exist on the network, the collected images are mostly two-dimensional digital arrays, so the two-dimensional discrete wavelet transform is used to analyze the digital image information. Let the scale function of the digital image be φ(x, y) and the wavelet function be ψ(x, y), with the corresponding filter parameters denoted h(x, y) and g(x, y), respectively, and let the digital image processed by registration, denoising, and enhancement be f(x, y); the decomposition and reconstruction of f(x, y) are then given by equation (11).

Journal of Mathematics
In equation (11), J = 1, 2, ..., N; x, y ∈ Z; and h(k) and g(k) are the filter reconstruction coefficients. According to the transform equation, the image characteristics of the digital image segmented by the fuzzy c-means clustering algorithm become clear: in the parts of the original digital image where the data change greatly, the wavelet coefficients become larger, while in the parts with small data changes, the wavelet coefficients become smaller. For two digital images of the same target with different focus, the data in the low-frequency parts of the images are similar or even identical, while the difference between the high-frequency data of the subimages is large, so the image fusion factor can be set according to these feature differences.

Setting Digital Image Fusion Factor.
The digital image fusion factor is set on the basis of the wavelet transform and the standard-deviation and variance characteristics, so the fusion factor must be set according to the variance of the neighborhood of each wavelet coefficient. The fusion factor has the advantages of high flexibility, small traffic, good real-time performance, strong fault tolerance, and strong anti-interference ability. Let the neighborhood size of the digital image be K; the factor is computed iteratively. The variance of the K-neighborhood of a wavelet coefficient in the i-th digital image is

σ_i(x, y) = (1/K) Σ_{(p,q)} [w_n(p, q) − w̄_n(x, y)]². (12)

In equation (12), w_n(x, y) represents the gray value of pixel (x, y) in the image, and in equation (13) p and q index the different pixels of the neighborhood over which the mean w̄_n(x, y) is taken.
The fusion factors for the horizontal, vertical, and diagonal components are then defined from these neighborhood variances, as in equation (15). Substituting x, y ∈ [0, N/2] from equation (14) into equation (15) gives equation (16), and the remaining term in equation (17) is calculated from equation (11). The digital image fusion factor is set by these equations, and the digital image can then be fused according to the fusion factor.
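The variance-driven choice behind these fusion factors can be illustrated with a small Python sketch: compute the K x K neighbourhood variance of each wavelet coefficient (in the spirit of equation (12)) and keep, per position, the coefficient from the image with more local detail. Function names and the edge-padding choice are assumptions of the sketch.

```python
import numpy as np

def local_variance(w, K=3):
    """Variance of each coefficient over its K x K neighbourhood."""
    pad = K // 2
    wp = np.pad(w, pad, mode="edge")
    out = np.empty_like(w, dtype=float)
    for x in range(w.shape[0]):
        for y in range(w.shape[1]):
            out[x, y] = wp[x:x + K, y:y + K].var()
    return out

def variance_select(wa, wb, K=3):
    """Fusion-factor sketch: per position, keep the coefficient whose
    neighbourhood variance is larger (i.e. more local detail)."""
    return np.where(local_variance(wa, K) >= local_variance(wb, K), wa, wb)
```

Positions where one subband carries sharp detail (high neighbourhood variance) win over flat regions, which matches the observation above that large data changes produce large wavelet coefficients.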

Digital Image Progressive Fusion.
According to the digital image fusion factor above, the digital image is divided into four parts: the target low-frequency coefficients, target high-frequency coefficients, background low-frequency coefficients, and background high-frequency coefficients, and the digital image fusion factors of the four parts are matched. The flowchart of the progressive image fusion algorithm is shown in Figure 4. As Figure 4 shows, after the fusion factors of these four parts are matched, local energy matching is carried out to calculate the fused low-frequency coefficients and determine the optimal threshold of the target, thereby achieving digital image fusion.
Let the two images be V and R, with scale I and direction j. The transform coefficient matrices of the two images at the different scales and directions are denoted C(x, y); the low-frequency coefficient matrices of the two digital images are C⁰_V(x, y) and C⁰_R(x, y), and the high-frequency coefficient matrices are C^Ij_V(x, y) and C^Ij_R(x, y), respectively.
In the target low-frequency coefficient region, the local energy E(m, n) at any position (m, n) is

E(m, n) = Σ_{m′,n′} W(m′, n′) [C⁰(m + m′, n + n′)]². (18)

In equation (18), W(m′, n′) is the weighting matrix of the digital image fusion factor. It is usually a uniform matrix or a Gaussian matrix, and Σ_{m′,n′} W(m′, n′) = 1. The local energy matching degree of the fusion factors of the two images can then be calculated as

S(m, n) = 2 Σ_{m′,n′} W(m′, n′) C⁰_V(m + m′, n + n′) C⁰_R(m + m′, n + n′) / [E_V(m, n) + E_R(m, n)]. (19)
In equation (19), the value range of S(m, n) is [0, 1]. When S(m, n) = 1, the local energies of the fusion factors of the two images are completely matched; when S(m, n) = 0, they are completely mismatched. The digital image threshold is defined as δ; when S(m, n) ≥ δ, the low-frequency information of the two images is relatively similar, so the fused low-frequency coefficient C_F⁰ is calculated by the weighted-average method.

When S(m, n) < δ, the low-frequency information correlation of the two images is weak, so the low-frequency coefficient of the side with the higher local energy is used directly as the fused low-frequency coefficient. Once the optimal threshold δ is determined, the two images can be well fused in the target low-frequency coefficient region.
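The low-frequency rule described above can be sketched in Python: compute the local energies, the matching degree S in [0, 1], then average where S ≥ δ and take the higher-energy side where S < δ. The uniform weighting window, simple 0.5/0.5 average, and function names are illustrative assumptions of the sketch.

```python
import numpy as np

def fuse_lowfreq(Cv, Cr, delta=0.75, K=3):
    """Low-frequency fusion: weighted average where the local energies
    match (S >= delta), higher-energy selection otherwise."""
    pad = K // 2
    W = np.full((K, K), 1.0 / (K * K))        # uniform weighting, sums to 1

    def windowed(Ca, Cb):
        """Local energy of Ca and local cross term Ca*Cb under W."""
        Cap, Cbp = np.pad(Ca, pad, mode="edge"), np.pad(Cb, pad, mode="edge")
        E = np.empty_like(Ca, dtype=float)
        X = np.empty_like(Ca, dtype=float)
        for m in range(Ca.shape[0]):
            for n in range(Ca.shape[1]):
                a, b = Cap[m:m + K, n:n + K], Cbp[m:m + K, n:n + K]
                E[m, n] = (W * a ** 2).sum()
                X[m, n] = (W * a * b).sum()
        return E, X

    Ev, X = windowed(Cv, Cr)
    Er, _ = windowed(Cr, Cv)
    S = 2 * X / (Ev + Er + 1e-12)             # matching degree, in [0, 1]
    avg = 0.5 * (Cv + Cr)                     # weighted-average branch
    pick = np.where(Ev >= Er, Cv, Cr)         # higher-energy branch
    return np.where(S >= delta, avg, pick)
```

Identical regions give S near 1 and are averaged, while strongly mismatched regions (S near 0) inherit the coefficients of whichever image carries more local energy.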
In the calculation of the target high-frequency coefficients, the side with more detail in the two images to be fused must be found and used as the high-frequency part of the fused image. The local variance V_0(m, n) of the two images is therefore calculated as in equation (22), where AVG C⁰_{m′n′}(m, n) represents the mean value of the image matrix C⁰ over the region of size m′ × n′ centered at (m, n). The weighting matrix W(m′, n′) is again usually a uniform matrix or a Gaussian matrix, with Σ_{m′,n′} W(m′, n′) = 1. The fusion of the digital images in the target high-frequency coefficient region then follows.
According to the calculation result of equation (23), the digital images can be fused in the target high-frequency coefficient region.
In the background low-frequency coefficient region of the digital image, the information content of the background part segmented from the image is small, and the requirement for information richness in this region after fusion is relatively low. Here the contour structures of the two digital images are similar, and there are few complex edge and region structures. Therefore, if the mean value calculated by equation (22) is taken as the decomposition coefficient of the fused image, the two images can be fused in the background low-frequency coefficient region.
In the background high-frequency coefficient region, there is little high-frequency information in the background of the digital image. Therefore, the absolute value obtained by equation (20) is used as the fusion rule in this region: when the fused coefficient of the two digital images equals the decomposition coefficient with the greater absolute value, the two images are successfully fused and most of the digital image information in the background high-frequency coefficient region is retained [28, 29]. In summary, the method proposed in the study is more conducive to the subsequent processing of images, can assist the processing of medical images, and thus provides a theoretical basis for the development of related fields during the COVID-19 epidemic and helps promote social development.

Experiment Design and Result Analysis
To verify the effectiveness of the digital image gradual fusion method based on discrete cosine transform, an experimental analysis was carried out. The experimental design is as follows: (1) Experimental environment: the hardware used is a computer with a 64 GB solid-state drive, 16 GB cache, and an 8-core high-speed processor; the operating system is Windows 10, the simulation software is Matlab 7.0, and the database is MySQL 5. The client PC is a ThinkPad X260 with an Intel Core i5-6200U CPU, 4 GB of memory, a 500 GB hard disk, Windows 10, and Matlab 2017a. (2) Experimental data: the experimental data come from Multitel (http://www.multitel.be/cantata/); 100 groups of digital images from the website were selected as experimental samples. (3) The fusion method based on wavelet transform and gradient field proposed in [10], the improved algorithm based on the Gram-Schmidt (GS) transform and the intensity hue saturation (HS) transform proposed in [11], and the infrared and visible image fusion algorithm based on the rolling guidance filter proposed in [13] were selected as comparison methods. First, the peak signal-to-noise ratio (PSNR) of the images after fusion by the different methods is compared; the higher the PSNR, the better the image quality. Then, the effect of digital image gradual fusion is compared. Finally, the fusion time of the different methods is verified; the shorter the fusion time, the higher the efficiency. Some experimental sample data are shown in Figure 5.

Comparison of PSNR.
First, the PSNR of the images processed by the different methods is compared; the results are shown in Table 1. As Table 1 shows, the PSNR of the images fused by the method of [10] fluctuates between 7.61 dB and 35.58 dB, that of [11] between 24.36 dB and 34.84 dB, and that of [13] between 10.26 dB and 17.42 dB, while the PSNR of the images fused by the research method is always higher than 42.13 dB. The higher signal-to-noise ratio indicates that the quality of the fused images is better [30][31][32].
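The PSNR figures in Table 1 follow the standard definition, 10·log10(peak² / MSE), which can be computed in a few lines (the 255 peak value assumes 8-bit images):

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images: unbounded PSNR
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because the metric is logarithmic in the mean squared error, the gap between the ~42 dB of the proposed method and the ~17 dB upper bound of [13] corresponds to a mean squared error smaller by more than two orders of magnitude.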

Comparison of Image Fusion Effects.
To present the application performance of the different methods in digital image gradual fusion more intuitively, the fusion results of the first group of images are shown in Figure 6.
The results of the progressive fusion of the second group of images are shown in Figure 7.
The results of the third group of images are shown in Figure 8.
Comparing the three groups of images, the quality of the fused images obtained by this method is higher, which verifies the effectiveness of the method.

Comparison of Fusion Time.
The fusion time comparison is shown in Figure 9. As Figure 9 shows, compared with the experimental comparison methods, the image fusion time of the research method is significantly shorter, which indicates that the digital image gradual fusion method based on discrete cosine transform has higher fusion efficiency and verifies the superiority of the method.

Comparison of Structural Similarity.
The structural similarity of the images processed by the different methods is compared; the results are shown in Table 2. As Table 2 shows, the structural similarity of the images fused by the method of [10] fluctuates between 0.51 and 0.67, that of [11] between 0.64 and 0.84, and that of [13] between 0.55 and 0.78, while the structural similarity of the images fused by the research method is always higher than 0.83, which shows that the quality of the fused images is good.
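For reference, the structural similarity values in Table 2 are based on the SSIM index; the sketch below computes a simplified global variant with a single luminance/contrast/structure term over the whole image, rather than the windowed average used in full SSIM, so it is an illustration of the metric, not the paper's exact evaluation code:

```python
import numpy as np

def ssim_global(a, b, peak=255.0):
    """Simplified global SSIM: one term over the whole image instead of a
    sliding window. c1, c2 are the usual stabilising constants."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

Identical images score exactly 1, and the score falls toward 0 as the luminance, contrast, or structure of the fused image diverges from the reference, which is why values above 0.83 indicate well-preserved structure.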

Conclusion
The current progressive fusion methods for digital images have poor denoising performance, which leads to a decrease in image quality after progressive fusion. Therefore, in order to improve the progressive fusion effect of digital images and shorten the fusion time, the SIFT algorithm is used to extract image features and perform feature point registration of the images to be fused. After image matching, a discrete cosine transform is used to denoise the registered images. The histogram equalization method is selected for image enhancement, and the pixels with consistent gray attributes are fuzzy-clustered to complete the digital image segmentation. The progressive fusion factor for the digital images is set, and progressive fusion of the digital images based on this factor is achieved. The method fully utilizes the low data correlation and high energy concentration of the discrete cosine transform, enhances the denoising effect of digital images, and makes the information of the two images clearly presented in the subsequent gradual fusion process, thus achieving fast and accurate fusion of the two digital images. The effectiveness of the digital image progressive fusion method was verified through experiments. The experimental results showed that, in the comparison of PSNR, the values in reference [10] fluctuated between 7.61 dB and 35.58 dB, those in reference [11] between 24.36 dB and 34.84 dB, and those in reference [13] between 10.26 dB and 17.42 dB, while the PSNR values of the proposed method were always higher than 42.13 dB. In addition, the proposed method showed better performance in the comparison of actual fused image quality, and its fusion time is lower than that of the comparison methods under different data volumes.
Meanwhile, the structural similarity of the fused images in reference [10] fluctuates between 0.51 and 0.67, that in reference [11] between 0.64 and 0.84, and that in reference [13] between 0.55 and 0.78, while the structural similarity of the images fused by the research method is always higher than 0.83. Overall, the proposed method performs well in digital image progressive fusion, effectively promoting a balanced distribution of grayscale values in images and improving the quality of low-contrast images, and is thus more conducive to the subsequent processing of images or videos. However, differences in digital image focus, image edge clarity, and image restoration areas were not considered in the research. Therefore, future research should study progressive fusion methods for digital images with multiple focal points, as well as image edge protection algorithms and image restoration algorithms, in order to further improve the effectiveness of digital image progressive fusion. At the same time, the proposed method offers a degree of innovation in solving the problem of poor denoising performance in digital image progressive fusion and can also provide a data reference for progressive image fusion in related fields.

Abbreviations

DCT: Discrete cosine transform
DFT: Discrete Fourier transform
FCMA: Fuzzy c-means clustering algorithm
FCM: Fuzzy c-means
GS: Gram-Schmidt
HS: Hue saturation
JPEG: Joint Photographic Experts Group
MJPEG: Motion Joint Photographic Experts Group
MPEG: Motion Picture Experts Group
MySQL: Open-source database software
PSNR: Peak signal-to-noise ratio
SIFT: Scale-invariant feature transform.

Data Availability
The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.