Construction of Value Chain E-Commerce Model Based on Stationary Wavelet Domain Deep Residual Convolutional Neural Network

This paper analyzes the current situation of e-commerce in domestic SMEs and points out their main problems: limited initial investment and difficulty in financing; unscientific e-commerce control; low-quality e-commerce personnel, compounded by an improperly configured e-commerce department and a shortage of talent; a rigid management model; and outdated management concepts. Using the loss function of the stationary wavelet domain deep residual learning model together with value chain management theory, a new e-commerce model for SMEs is constructed, and the e-commerce department is proposed as a core department of the enterprise. By training the optimal parameters of the deep residual network and comparing the results with other models, the proposed method shows a good effect against adversarial samples. The original loss function of the deep residual learning model is modified to solve the blurring problem of the original model, which improves the effect and yields good robustness. Finally, based on the wavelet-domain deep residual evaluation method, this paper evaluates the application effect of the model and proposes suggestions for improving it, including rationalizing and perfecting the external value chain coordination mechanism, establishing an e-commerce value chain sharing center, promoting the integration of e-commerce business, and strengthening all aspects of e-commerce information construction. At last, taking the business activities of a company as an example, applying the theory described in this paper to specific practice demonstrates its feasibility and practical value.


Introduction
Before the emergence of value chain management theory, the management mode of enterprises was static and focused only on their internals; the emergence of this theory has driven enterprise management toward an open, dynamic mode. Value chain management can not only help enterprises enhance their own strength in the increasingly competitive modern market but also enrich the modes and methods of enterprise e-commerce. Since its inception, the theory of value chain management has, over the decades, gradually extended to all aspects of the enterprise management system, "becoming the theoretical and analytical basis for e-commerce, market strategy management, and sales management" [1][2][3]. E-commerce is an important part of enterprise management that pursues the greatest enterprise value, while value chain management creates more added value for enterprises and consumers. This shared purpose allows the two to merge, which modernizes the e-commerce concept.
With the emergence and development of value chain theory in the late 20th century, more and more scholars began to apply it to e-commerce. In the 1990s, value chain thinking was applied in management accounting. The literature [4][5][6] incorporated accounting into the value chain management system and initially constructed its basic framework. The Accounting Professionals Committee of the Chinese Accounting Society held a seminar on "Value Chain Management and Value Chain Accounting" in Haikou City to discuss the related theoretical and practical issues of value chain management and value chain accounting [7][8][9][10]. In addition, noise reduction methods based on transform filtering theory [11][12][13] have been widely studied. Among them, the stationary wavelet transform cancels the downsampling step of the wavelet transform and exhibits translation invariance, so it is widely used [14][15][16][17]. An improved adaptive threshold denoising method for discrete stationary wavelets has been proposed, which can remove some noise. The authors in [18][19][20][21] proposed an e-commerce threshold denoising algorithm based on the Lipschitz exponent and the stationary wavelet, which can improve the signal-to-noise ratio of e-commerce data. However, in transform filtering theory most transform coefficients are thresholded in the transform domain, and the threshold filtering process is difficult to estimate and adjust, which makes it difficult to achieve an ideal denoising effect. The convolutional neural network is a neural network structure that simulates the human brain for analysis and learning.
The above research shows that the main manifestations of weak e-commerce control are as follows: first, the lack of strict cash management, resulting in idle or insufficient funds, so that the efficiency and effectiveness of fund operation fall short of target requirements; second, the slow turnover of accounts receivable, which makes it difficult to recover funds. The training process of a convolutional neural network is an adaptive process of network parameter optimization; based on its powerful feature extraction ability, it can more easily remove noise while retaining detail. In the value chain management system of enterprises, the above research has the following shortcomings in the modern social and economic environment: the e-commerce models of Chinese enterprises rarely analyze the whole process of operation from the perspective of market strategy and overall benefit; most of them choose only the parts considered important to control, which is also hard to maintain over the long term. The control methods actually implemented do not extend to the level of the value chain, and the actual situation of enterprise development has not been aligned with the e-commerce model, value maximization, and the enterprise's development philosophy.
Inspired by the application of the stationary wavelet transform and deep convolutional neural networks to e-commerce model processing, this paper proposes a stationary wavelet transform deep residual neural network (SWT-deep) learning model. First, the model performs a three-level stationary wavelet decomposition of the e-commerce model to obtain its high-frequency coefficients. Then, the high-frequency coefficients of the low-quality e-commerce model are taken as input, and the residual coefficients, obtained by subtracting the high-frequency coefficients of the value chain e-commerce model from those of the low-quality model, serve as the label. After training, the SWT-deep model learns the mapping between input and label. Finally, using this mapping, the clean high-frequency coefficients can be indirectly predicted from the high-frequency coefficients of the low-quality e-commerce model, and the predicted value chain e-commerce model is obtained through the inverse stationary wavelet transform. In addition, bypass connections and a residual learning strategy are added to the SWT-deep model to improve the convergence speed of the network and the quality of the estimated e-commerce model. Experiments show that the SWT-deep learning model can predict the value chain e-commerce model from e-commerce data, and its prediction results are better than those of other well-recognized noise reduction algorithms.
Finally, based on the wavelet-domain deep residual evaluation method, this paper evaluates the application effect of this model and proposes suggestions for improving it, including rationalizing and perfecting the external value chain coordination mechanism, establishing an e-commerce value chain sharing center, promoting the integration of e-commerce business, and strengthening all aspects of e-commerce information construction. Taking the business activities of a company as an example, the paper then applies the theory described here to specific practice, demonstrating its feasibility and practical value.

Value Chain Cost and Value Chain Analysis.
The cost management approach to value chain analysis begins in the middle and upper reaches of the value chain, and there is a big difference between it and the traditional approach. Traditional cost management controls costs within the production process of the enterprise [22][23][24][25], but given the development of the market this control method can no longer bring competitive advantage. More analysis should be done on the customer side, and even on the social side. When enterprises conduct their own cost management, they should extend the flow of internal funds outward, and cost management activities should be forward-looking toward the market so that costs can be managed comprehensively.
Porter's main points about the value chain concept are as follows: (1) The value chain of an enterprise is composed of the nine basic activities of the enterprise itself. A specific value chain is a particular combination of the different activities of an enterprise within a specific industry; since the history, background, and strategy of each enterprise differ, its value chain also differs.
(2) The company's activities in the value chain create value on the one hand and incur the cost of the value-creating activity on the other; if the price paid by the customer exceeds the cost of creating value, then the enterprise makes a profit.
Creating value for the buyer in excess of cost is the goal of any basic strategy. (3) Each of the enterprise's upstream and downstream partners has its own value chain. These value chains are related to each other, so together they are called a value system. The value chain breaks a business down into many strategically related activities and is embodied in the broader activities of the value system.
According to value chain management, the business purpose of the enterprise is to bring value to the customer, and the carrier of that value is the value chain; if the enterprise wants its performance to keep improving, it must operate with a high-quality value chain, which in turn requires quality management [26]. When managing the value chain, it is necessary to focus on the increase of its value. We explore the business process through value chain identification and value chain establishment at the value chain level, then optimize the process in line with the company's continuous improvement. The most important step here is the analysis of the value chain, which runs from the raw material until the finished product reaches the consumer. This explains the changes in the cost of goods and the specific advantages and disadvantages of the enterprise in its production activities. When enterprise value is analyzed across multiple functions, various methods are used, such as cost analysis of value chains, differential analysis of value chains, and analysis of the competitive advantages of external value chains. These methods address the functions of each production link, and the operation and value of different value chains will differ [27]. But all value chain analysis serves a common goal: to minimize unnecessary expenditure in the business process of the enterprise, or to bring more profit to the enterprise under the same expenditure.
As shown in Figure 1, value maximization of the enterprise is the core of enterprise value management. We can build an optimized competitive strategy based on value maximization. The monetary form of the operating activities of each value chain generated in the business process is obtained through analysis; through an in-depth understanding of the nature of the value chain, the company uses corresponding methods to build a complete value analysis system. Then, it calculates and analyzes each value chain and draws on excellent ideas from home and abroad to establish a complete framework for value chain cost management.

Activity-Based Costing.
This method is one of the key methods of value chain analysis. It analyzes according to the cost of operations. In every step of an operation, resources are consumed, but at the same time output is produced; so production involves not only costs but also a corresponding shift in output. When an enterprise produces a product, this is not only the result of all the linked production processes but also the result of all the business operations of the enterprise; this is the process by which the enterprise value chain forms. Activity-based costing introduces the concept of operations into cost control and analyzes and categorizes costs based on operations.
When incorporating value chain thinking into e-commerce, it is necessary to further improve the corresponding processes based on the e-commerce activities of the enterprise and to focus on the value of customers and the company itself through activity-based costing, thereby improving the business level of the enterprise. The enterprise is essentially a combination of interrelated business processes that contribute to the execution of transaction contracts. Business operation behavior is one of the keys to driving the value chain, and an operation behavior process can be one or many. The entire process of turning inputs into outputs that are valuable to the customer can also be thought of as the achievement of a specific goal, and thus as the set of activities that realize value from the supplier to the customer.

New Design Method.
The core connotation of this method is to ask what the customer needs, why it is needed, and how it will be provided. As long as services are available and the demand for them is met, how to provide those services is particularly important.

Value Chain E-Commerce Model Network Model.
As shown in Figure 2, the value chain e-commerce model of this paper takes the high-frequency coefficients after stationary wavelet decomposition as input and the residual coefficients, obtained by subtracting the high-frequency coefficients of the value chain e-commerce model from the input high-frequency coefficients, as the label, learning the mapping relationship between input and label. The noise and contour information mainly exist in the high-frequency coefficients, so the low-frequency coefficients hardly participate in network training. Using the residual mapping learned by the residual learning strategy, the self-joining (identity) mapping can be obtained indirectly. After model training is complete, the clean coefficients can be indirectly predicted from the high-frequency coefficients, and the inverse stationary wavelet transform then yields the predicted value chain e-commerce network. If the actual value chain e-commerce model is large, network training is difficult; therefore, to improve training speed, the input and label are cut into 50 * 50 small patches. As shown in Table 1, bypass connection modules are set in the network. Too many bypass connection modules would increase the complexity of the network, which is unfavorable for training; considering that there are 16 convolution layers in the network, a total of four bypass connection modules are set. Each bypass connection module consists of three convolutional layers and a bypass connection, which speeds up network convergence while alleviating gradient vanishing; this helps protect detailed information and improves the quality of the prediction model.
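The 50 * 50 patch cutting described above can be sketched in plain Python. The patch size follows the text; the non-overlapping stride is an assumption for illustration.

```python
def extract_patches(image, patch=50, stride=50):
    """Split a 2-D list into patch x patch tiles with the given stride."""
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows - patch + 1, stride):
        for c in range(0, cols - patch + 1, stride):
            # Slice a patch x patch tile starting at (r, c).
            tile = [row[c:c + patch] for row in image[r:r + patch]]
            patches.append(tile)
    return patches

# Example: a 100 x 100 coefficient map yields four 50 x 50 patches.
coeff_map = [[0] * 100 for _ in range(100)]
tiles = extract_patches(coeff_map)
```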

Deep Residual Learning Basic Unit.
The basic unit of deep residual learning is mainly composed of an input layer, a convolution layer (Conv), batch normalization (BN), an activation function, a downsampling layer, and an output layer. The convolution layer extracts an e-commerce pattern feature map; the BN layer introduces standardization and shifting steps at each nonlinear transformation, thereby avoiding drastic changes in the distribution of internal nodes during training, that is, internal covariate shift, and greatly accelerating the convergence of the network [28]; and the activation function, which nonlinearly transforms the feature map, further accelerates convergence. The Rectified Linear Unit (ReLU) is one of the commonly used activation functions; it is widely used in deep networks because it helps reduce gradient vanishing during training. The downsampling layer reduces the feature map dimension and speeds up network training [29], but the feature map loses information after downsampling. Therefore, considering the particularity of the noise reduction task in this paper, the SWT-deep learning model is proposed.
Its basic units do not use the downsampling layer.
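The BN and ReLU steps of the basic unit can be illustrated with a minimal sketch. The gamma, beta, and eps values are illustrative assumptions, not the paper's trained parameters.

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Standardize a batch of activations, then scale and shift."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

def relu(xs):
    """Zero out negative activations."""
    return [max(0.0, x) for x in xs]

# One Conv output batch passed through BN then ReLU.
activations = relu(batch_norm([-2.0, 0.0, 2.0]))
```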

Bypass Connection.
Although the multiple convolutional layers in a deep convolutional neural network help extract deeper features of the e-commerce model, the e-commerce model inevitably loses information after passing through many convolutional layers, and as the number of network layers deepens the information loss becomes more serious, which is detrimental to preserving the details of the e-commerce model. A bypass connection combines an earlier feature map carrying more detailed information with a later feature map, transmitting the detailed information forward, which helps protect the details of the e-commerce mode and aids its recovery. The deep learning unit thus consists of an input layer, convolutional layers, BN layers, the ReLU function, and a bypass connection.

Residual Learning.
The residual network was originally designed to solve the problem of network performance saturating or even degrading as the number of layers deepens. Experiments show that the residual mapping is easier to learn than the original mapping. Using the residual learning strategy, it is easier to train a deep network and to improve the accuracy of e-commerce pattern classification and target detection. The self-joining mapping corresponding to the task of this paper maps the high-frequency coefficients of the LDCT e-commerce mode to the high-frequency coefficients of the NDCT e-commerce mode. The residual mapping maps the high-frequency coefficients of the LDCT e-commerce mode to the residual coefficients, obtained by subtracting the high-frequency coefficients of the NDCT e-commerce mode from those of the LDCT e-commerce mode. Considering that the residual coefficients are simpler than the NDCT e-commerce model, their statistical characteristics are easier to learn; therefore, the residual learning strategy is also introduced in this paper. Unlike typical residual networks that stack multiple residual units, the network in this paper uses only one residual unit across the entire structure; through the learned residual mapping, the self-joining mapping is obtained indirectly. Since it is obtained by the residual convolutional neural network, the self-joining mapping problem can be transformed into the residual mapping problem. Research on image enhancement and noise processing based on convolutional neural networks has proposed related methods [30,31]. Research on text detection and system recognition is introduced in [32,33]. PM10 concentration and nonferrous-metal-related recognition methods are also introduced in [34,35].
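The residual-learning setup above — learning the residual coefficients and recovering the clean high-frequency coefficients by subtraction — can be illustrated with toy numbers. All values are illustrative, not real coefficients.

```python
# LDCT-style (noisy) and NDCT-style (clean) high-frequency coefficients.
noisy_high = [5.0, -3.0, 2.5, 0.5]
clean_high = [4.0, -2.0, 2.0, 0.0]

# Training label: residual = noisy - clean.
residual_label = [n - c for n, c in zip(noisy_high, clean_high)]

# At inference, a perfect residual prediction recovers the clean signal
# by subtracting the predicted residual from the noisy input.
predicted_clean = [n - r for n, r in zip(noisy_high, residual_label)]
```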

Stationary Wavelet Domain Transform.
In the classical orthogonal wavelet transform algorithm, the value chain e-commerce mode is downsampled after high-pass and low-pass filtering, so the decomposed wavelet coefficients become half the size of the original value chain e-commerce mode; however, this easily loses part of the information in the wavelet coefficients, reduces the stability of the reconstructed value chain e-commerce model, and is prone to oscillation effects. The stationary wavelet transform is an improvement on the classical orthogonal wavelet transform, and its biggest features are redundancy and translation invariance. The multiscale analysis method represented by the wavelet transform has been applied to the enhancement of the infrared value chain e-commerce model, and some research results have been obtained [36]. Compared with traditional methods, wavelet analysis has good local characteristics in both the time domain and the frequency domain, can selectively enhance the value chain e-commerce mode features at a certain scale, and is better suited to the visual characteristics of the human eye. Wavelet transforms can be divided into two categories: orthogonal and nonorthogonal. The stationary wavelet transform belongs to the nonorthogonal category and improves on the traditional orthogonal wavelet transform; compared with the orthogonal wavelet transform, its main features are redundancy and translation invariance, which make it better suited to correlation problems. Since the correlation between the gray levels of the infrared value chain e-commerce mode is large, the stationary wavelet transform is well suited for denoising and enhancing the infrared value chain e-commerce mode.
Based on the discrete stationary wavelet transform, this paper proposes a nonlinear enhancement method for the infrared value chain e-commerce model. The experimental results show that the proposed method can effectively suppress noise while effectively enhancing the value chain e-commerce mode, and its enhancement effect is obviously superior to traditional enhancement methods. The high-frequency subbands contain the edge detail information of the value chain e-commerce mode and most of the noise.
This paper uses the following high-frequency enhancement operator: D'_k^j(x, y) = α1 · D_k^j(x, y) if |D_k^j(x, y)| ≥ T, and D'_k^j(x, y) = α2 · D_k^j(x, y) otherwise, where T is the threshold, α1 and α2 are enhancement factors, D'_k^j(x, y) is the enhanced high-frequency wavelet coefficient, and D_k^j(x, y) is the coefficient before enhancement. The wavelet transform concentrates the energy of the signal in a few large wavelet coefficients, while the energy of the noise is distributed throughout the wavelet domain. Therefore, after wavelet decomposition, the amplitudes of the signal's wavelet coefficients are larger than those of the noise: coefficients with larger amplitude are generally dominated by signal, while coefficients with smaller amplitude are largely noise. A threshold T can therefore be set to separate signal and noise in the high-frequency subband: coefficients larger than the threshold are regarded as edge detail signal and enhanced with α1 > 1, while coefficients smaller than the threshold are regarded as noise and attenuated with 0 < α2 < 1. In this way, it is possible to enhance the edge detail information of the infrared value chain e-commerce mode, improve definition, and reduce noise. The threshold T uses a common threshold.
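A minimal sketch of the piecewise enhancement operator described above; the α1 = 1.5 and α2 = 0.5 values are purely illustrative assumptions, since the paper does not state its chosen factors here.

```python
def enhance_coeff(d, T, alpha1=1.5, alpha2=0.5):
    """Amplify coefficients above threshold T, attenuate those below."""
    return alpha1 * d if abs(d) >= T else alpha2 * d

coeffs = [4.0, -0.5, 2.5, 0.2]
enhanced = [enhance_coeff(d, T=1.0) for d in coeffs]
```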
That is, T = σ_n √(2 ln N), where σ_n is the noise standard deviation and N is the total number of wavelet coefficients in the given high-frequency subband. σ_n is estimated from the median absolute value of the coefficients of the diagonal high-frequency subband in the first layer.
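A hedged sketch of the common threshold and the median-based noise estimate. The √(2 ln N) form and the 0.6745 normalizing constant are the standard universal-threshold estimator; they are assumed here to match the paper's "common threshold", which is not written out in full.

```python
import math

def estimate_sigma(diag_coeffs):
    """Estimate noise std from the median absolute diagonal coefficient."""
    abs_vals = sorted(abs(c) for c in diag_coeffs)
    n = len(abs_vals)
    median = (abs_vals[n // 2] if n % 2 == 1
              else 0.5 * (abs_vals[n // 2 - 1] + abs_vals[n // 2]))
    return median / 0.6745  # standard MAD-to-sigma constant (assumed)

def universal_threshold(sigma, num_coeffs):
    """T = sigma * sqrt(2 ln N)."""
    return sigma * math.sqrt(2.0 * math.log(num_coeffs))

sigma = estimate_sigma([0.3, -0.7, 0.5, -0.4, 0.6])
T = universal_threshold(sigma, num_coeffs=1024)
```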
Under normal circumstances, the background area temperature is lower and the corresponding gray values of the value chain e-commerce mode are relatively small, while the target area temperature is higher and the gray values are correspondingly larger. After the stationary wavelet decomposition of the infrared value chain e-commerce model, the low-frequency subband mainly represents the outline of the value chain e-commerce mode; its energy is relatively concentrated and its amplitudes are relatively large. The wavelet coefficients with larger amplitudes in the low-frequency subband represent the target, while those with smaller amplitudes represent the background. Based on this feature, the amplitudes of the low-frequency subband coefficients are scaled: the smaller-amplitude coefficients (background) are compressed, and the larger-amplitude coefficients (target) are stretched, to enhance the contrast of the value chain e-commerce mode; using an expansion coefficient further strengthens the contrast enhancement. The commonly used gray-scale expansion enhancement functions adopt S curves, such as the hyperbolic tangent function, power function, and gamma function. Since these functions all approach their limit values through asymptotes, and most are symmetric about the inflection point, some gray ranges cannot be reached and the enhancement cannot be tailored to the gray-level features of the value chain e-commerce mode. This paper uses a sine transform enhancement function, whose waveform is shown in Figure 3: the curve compresses on the left side of the inflection point and stretches on the right side. The exponential factor k changes the slope of each part of the transformation curve; as k increases, the expansion and compression strength increases and the enhancement effect grows.
The whole transformation curve is composed of two sinusoidal segments forming an S shape. Its derivative is smooth, its stretching performance is good, and both the inflection point and the stretch strength k can be controlled, making the function flexible, convenient, and broadly applicable.
How to choose the inflection-point parameter is a key issue in this method. The information in an infrared value chain e-commerce model can be divided into two parts: the target area and the background area. Thresholding is a simple and effective method for segmenting the target and background of a value chain e-commerce model; therefore, for the problem studied in this paper, the target/background segmentation threshold of the infrared value chain e-commerce model is used as the inflection-point parameter. There are many methods for calculating this segmentation threshold, such as the iterative threshold method, the optimal threshold method, and the Otsu method. The thresholds obtained by these methods are accurate, but the computation is heavy. This paper therefore uses a simple average threshold method: the amplitude average T_0 of the low-frequency subband of the value chain e-commerce model is calculated, and T_0 is used as the initial threshold to divide the low-frequency subband into two parts. Considering the characteristics of the infrared value chain e-commerce mode, the brightness of the target is generally higher than that of the background, so the part larger than T_0 is the target area A and the part smaller than T_0 is the background area B. The average wavelet coefficient amplitude of the background region B is then calculated as the final threshold.
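The simple average threshold method can be sketched as follows; the coefficient values are illustrative.

```python
def background_threshold(coeffs):
    """Mean amplitude T0 splits target/background; return mean of background B."""
    t0 = sum(coeffs) / len(coeffs)               # initial threshold T0
    background = [c for c in coeffs if c < t0]   # region B (darker background)
    return sum(background) / len(background)     # final inflection-point threshold

# Illustrative low-frequency subband amplitudes: three bright target
# coefficients and three dim background coefficients.
low_freq = [10.0, 12.0, 50.0, 55.0, 11.0, 60.0]
T_final = background_threshold(low_freq)
```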

Selection and Experiment of Activation Function.
In the residual unit, ReLU is the activation function used by the residual network. The role of the activation function is to introduce nonlinearity into the network. Because each convolutional unit is a weighted sum of the upper-layer neurons, it is a linear structure, so without nonlinearity the final classifier would only be linearly separable. However, real-life data are generally nonlinear, so nonlinear elements must be introduced into the network. The common activation functions are as follows. The Sigmoid function is very common in shallow networks. Its advantage is that it can fit a variety of complex functions very well; its disadvantage is that it is not suitable for deep networks, where saturation occurs easily, computation is slow, and overfitting and gradient vanishing appear.
Compared with the Sigmoid function, the Tanh function is centered on the zero point; in other respects it is basically a scaled Sigmoid function, having the advantages of a larger derivative and faster gradient updates, while still suffering from the Sigmoid function's gradient vanishing in deep networks.
The ReLU function is the activation function used in the residual network, and its expression is ReLU(x) = max(0, x). ReLU is currently used in many deep networks. Its processing is very blunt: it directly sets values less than 0 to 0. The advantage is that it greatly accelerates the training of deep networks, and its sparsifying effect is very good, which helps avoid overfitting; however, node failure easily occurs in actual use, with node values becoming NaN, so a BN layer and learning rate adjustment are needed here. This experiment uses the ReLU function. Besides ReLU's acceleration of deep-network training, the ReLU function can be regarded as high-pass filtering the feature map; later experiments find that the noise in adversarial samples is a background texture noise, different from the Gaussian white noise of traditional denoisers, so this filtering may be more useful.
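The three activation functions compared above, in plain Python. ReLU simply zeroes negative inputs, which is why it can be read as a crude high-pass operation on a feature map.

```python
import math

def sigmoid(x):
    """Logistic function: squashes inputs into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Zero-centered squashing into (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Zeroes out negative inputs; identity on positive inputs."""
    return max(0.0, x)

outputs = [relu(x) for x in (-1.5, 0.0, 2.0)]
```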
The average loss over all value chain e-commerce patterns in an epoch during training is shown in Figure 4. During the experiment, it was found that when only ReLU was used, training sometimes converged and sometimes the loss grew larger and larger, appearing as an explosion in the figure: during convergence, the value of the loss function computed with ReLU alone could increase sharply, so training was not stable enough. Even when the ReLU-only network converges, its denoising effect is not as good as that of the ReLU + BN configuration. So the final experiment uses the ReLU + BN layers to train the residual network.

Value Chain E-Commerce Model Construction.
The main idea of VGG is to increase the network depth while reducing the size of the convolution kernel (3 * 3). The Conv layer is the convolutional layer, and the Max pooling layer takes the maximum value of all neurons in a region using the maximum subsampling function: for each channel, the maximum pixel value of that channel's feature map is selected as its representative, thereby obtaining an n-dimensional vector representation. The flatten layer flattens the input, that is, turns the multidimensional input into one dimension for the transition from the convolutional layers to the fully connected layers. Dense is the fully connected layer. The dropout layer randomly disconnects input neurons with a certain probability each time the parameters are updated during training, to prevent overfitting; the probability set here is 0.5. Softmax is the classifier; it is actually a normalized exponential function that determines the probability of each category, and the category with the highest probability is finally selected.
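The softmax step at the end of the network can be sketched as follows; the logit values are illustrative.

```python
import math

def softmax(logits):
    """Normalize logits into class probabilities."""
    m = max(logits)                       # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.0, 1.0, 0.1])
predicted_class = probs.index(max(probs))  # pick the most probable class
```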
The VGG-Net structure is chosen here because the detector only needs accuracy and does not care about the recall rate. Moreover, the detector requires a lot of training and often updates its own feature library to ensure accurate classification. ResNet and Inception are deep, train slowly, and do not improve accuracy here, as shown in Figure 5.
In Table 2, loss is the cross-validation loss at 30 epochs, acc/val is the cross-validation accuracy at 30 epochs, and acc/test is the accuracy on the test set. The test runs on the CPU; the data set is the relatively small CIFAR-10, with 50,000 training samples. The training time of VGG is obviously less than that of ResNet, while ResNet converges much faster than VGG and its training-set accuracy is also very high; however, in cross-validation and testing its accuracy is much lower. Our guess is that ResNet overfits slightly, so the VGG-Net method is finally chosen. In actual use, Softmax sends any value chain e-commerce mode whose clean-class probability is less than a certain value to the denoiser as noise; the probability threshold here is 0.8.
Evaluating the effect of an adversarial-sample defense is primarily about the probability of correct classification after the sample has passed through the defense system.
This is the score calculation formula for the NIPS 2017 adversarial-sample defense competition on Kaggle. Attack refers to the adversarial attack and defense to the adversarial defense; the defense outputs a label for each sample. The score counts how many labels output by the defense equal the real labels. In fact, the calculation of the adversarial accuracy rate is basically the same; only the choice of data set is different.
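The counting rule above, scoring a defense by how many of its output labels match the true labels, can be written as a small helper (a sketch of the rule as described, not the official Kaggle scorer):

```python
def defense_score(defense_labels, true_labels):
    """Score = number of defense predictions equal to the real labels."""
    return sum(1 for pred, true in zip(defense_labels, true_labels) if pred == true)

# Hypothetical labels for five test samples
preds = [3, 1, 4, 1, 5]
truth = [3, 1, 2, 1, 0]
score = defense_score(preds, truth)  # labels match at positions 0, 1, and 3
```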
Currently, the data sets used for adversarial samples mainly include MNIST handwritten characters, CIFAR-10, and ILSVRC (2012, 2016). These are data sets for value chain e-commerce pattern recognition, and some adversarial-sample tests also utilize DREBIN (a malware data set), MicroRNA (medical imaging data sets), etc., to expand the use of adversarial samples. More advanced teams even use video data sets to train and demonstrate the robustness of the adversarial samples in the time and spatial domains, increasing the likelihood that the sample will be effective in real-life use. At present, the most used is the MNIST data set, which is grayscale and has achieved good training results. But compared with the CIFAR and ImageNet data sets, the results obtained on it are not a good indication of the actual situation in real life. Therefore, the data set chosen should preferably be CIFAR-10 or ImageNet, because good classification models already exist on these two data sets; the classification can already achieve high precision and should be fully utilized. The defense against adversarial samples has entered the stage of deep neural networks, and, since the defense acts as a classifier, the evaluation criteria for classifier models should also apply, such as precision, recall, ROC, and AUC. Precision is the proportion of samples detected as positive that are truly positive. ROC is calculated from the true positive rate and the false positive rate; whether the model overfits is mainly judged by the smoothness of the ROC curve, and the larger the area under the curve, the better. F1 is a comprehensive evaluation that combines precision and recall. Because of the earlier choice of the second data transmission method, the evaluation of the model focuses on the precision and ROC curve of the detector classification. For the subsequent denoisers, more attention is paid to the accuracy of the model.
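The classifier metrics mentioned above (precision, recall, and their combination F1) can be computed directly from the confusion counts; a minimal sketch with made-up counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision: fraction of detected positives that are truly positive.
    Recall: fraction of actual positives that were detected.
    F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical detector counts: 80 true positives, 20 false positives, 20 false negatives
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
```

With these counts, precision and recall are both 0.8, so F1 (their harmonic mean) is also 0.8; F1 only drops below the arithmetic mean when precision and recall diverge.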
Because a denoising algorithm is used in the model, there should also be an evaluation criterion for the denoising effect. The main criteria are SNR, PSNR, and SSIM. SNR is the signal-to-noise ratio, the ratio of signal strength to noise strength; it can reflect the degree of noise pollution in the value chain e-commerce mode, but not the perceptual quality of the result. The problem is that, because the noise added to the adversarial sample is very small, the sample is hardly different from the original picture, so the PSNR will be very high, yet denoising inevitably leads to some distortion of the original picture. PSNR is therefore used only as a reference during training, to check whether the value chain e-commerce model is denoised or blurred; the final evaluation standard for the denoising effect is based on the effect of the final value chain e-commerce model classifier, supplemented by PSNR.
The peak signal-to-noise ratio is defined as PSNR = 10 · log10(MAX² / MSE), where MAX is the maximum possible pixel value and MSE is the mean squared error between the two value chain e-commerce models. Therefore, in training samples, the detector uses traditional accuracy and ROC graphs to evaluate the model; the denoiser uses PSNR to observe the training process; and the classifier is used to evaluate the accuracy on denoised samples and on clean samples. The final result is evaluated using the score of the competition. The data sets used are the ImageNet value chain e-commerce model data set and the NIPS 2017 competition data set.
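The PSNR formula above can be sketched in NumPy; the two images here are made-up arrays used only to exercise the formula:

```python
import numpy as np

def psnr(original, denoised, max_val=255.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((original.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical 8-bit images differing everywhere by a constant offset of 10
img_a = np.zeros((32, 32))
img_b = np.full((32, 32), 10.0)
value = psnr(img_a, img_b)  # MSE = 100, so PSNR = 10 * log10(65025 / 100) ≈ 28.13 dB
```

Note that as the perturbation shrinks, MSE approaches zero and PSNR grows without bound, which is exactly why an adversarial sample with tiny noise scores a very high PSNR despite fooling the classifier.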

Introduction and Preprocessing of Experimental Data Sets.
The main data sets used in adversarial-sample experiments are MNIST handwritten numbers, CIFAR-10, and CIFAR-100. At present, the data sets used in related papers are mostly MNIST handwritten numbers, and both adversarial attack and defense perform very well on MNIST. But the MNIST data set is grayscale, and its value chain e-commerce model contains far fewer effective features than CIFAR and ImageNet; even if these attack and defense methods are useful on MNIST, they are difficult to generalize to real-life situations. CIFAR-10 consists of RGB images in ten categories; the small images (32 × 32) are easy to handle and can be used to evaluate the model. ImageNet is an RGB value chain e-commerce model data set; the pictures are large, numerous, and of nonuniform format, with rich features and 1001 classification labels. It can simulate reality very well, but current work applying it to ImageNet is relatively scarce; both attack and defense there are promising. To verify the ImageNet e-commerce model, the simulation selected ImageNet-6607, a relatively small subset of ImageNet 2012.
This data set has 23 types of labels and contains a total of more than 3,000 images of different sizes. The images need to be adjusted using TensorFlow's resize_image and central_crop functions. The experiment used the CIFAR-10 evaluation model (the size of the model's convolutional base layer needs to be adjusted), with Image_6607 as the training set. The test set uses the value chain e-commerce models from the NIPS 2017 adversarial examples in CleverHans. The data set of the NIPS competition mainly includes 1000 original value chain e-commerce models, each of size 299 × 299.
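The preprocessing step, resizing and central cropping, is performed in the paper with TensorFlow's image utilities; the central-crop operation itself can be illustrated in NumPy alone (a sketch of the operation, not the TensorFlow call):

```python
import numpy as np

def central_crop(image, fraction):
    """Keep the central `fraction` of the image along height and width."""
    h, w = image.shape[:2]
    new_h, new_w = int(h * fraction), int(w * fraction)
    top = (h - new_h) // 2
    left = (w - new_w) // 2
    return image[top:top + new_h, left:left + new_w]

# Hypothetical 299 x 299 RGB image cropped to its central 50%
image = np.zeros((299, 299, 3))
cropped = central_crop(image, 0.5)  # height and width become int(299 * 0.5) = 149
```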
These cases are all aimed at ImageNet and can be classified. In addition to the original image, CleverHans also provides a nontargeted attack, two targeted attacks, and random noise. These are used to generate adversarial samples which, together with the original images, give a total of 5,000 test cases. To cover different levels of adversarial noise, two noisy value chain e-commerce models with ε = 16 and ε = 25 were generated in the experiment to verify robustness to noise. In Figure 6, the top value chain e-commerce model is the original; the left one is the FGSM adversarial sample with ε = 16, and the right one is the FGSM adversarial sample with ε = 25, showing an obvious difference. Figure 6 also shows the value chain e-commerce model under another targeted adversarial attack.
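The FGSM perturbations with ε = 16 and ε = 25 follow the standard fast-gradient-sign rule x_adv = clip(x + ε · sign(∇L)); a NumPy sketch with a made-up loss gradient (the real gradient would come from the classifier):

```python
import numpy as np

def fgsm_perturb(image, gradient, epsilon):
    """Fast gradient sign method: step of size epsilon along the sign of the loss gradient."""
    adversarial = image + epsilon * np.sign(gradient)
    return np.clip(adversarial, 0, 255)  # keep pixels in the valid 8-bit range

# Hypothetical mid-gray image and random stand-in for the loss gradient
rng = np.random.default_rng(0)
image = np.full((8, 8), 128.0)
gradient = rng.standard_normal((8, 8))
adv_16 = fgsm_perturb(image, gradient, epsilon=16)
adv_25 = fgsm_perturb(image, gradient, epsilon=25)
```

Every pixel moves by exactly ±ε, so the maximum per-pixel distortion is bounded, which is why ε = 25 noise is visibly stronger than ε = 16 noise in Figure 6.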
The experimental results show that the noise patterns are completely different. The experiment must also verify the robustness of the defense against different attack modes.
As shown in Table 3, the adversarial noise in the value chain e-commerce model is obviously removed, although some noise remains; this residual noise has little impact on the final classification of the value chain e-commerce model, which mainly depends on the adversarial noise. There is no big loss of detail, and some details are even enhanced, with PSNR = 25.13.

Experimental Comparison of Different Hidden Layers.
In the construction of this paper, a collaborative filtering recommendation algorithm based on a deep belief network is used. The deep belief network is composed of multiple restricted Boltzmann machines; therefore, it has multiple layers. Considering the influence of the number of hidden layers, experiments were carried out in this subsection with different numbers of hidden layers.
In the experiments on different numbers of hidden layers, this section first fixed the number of loop iterations and the learning rate of each layer and then observed the experimental results under different numbers of layers. In the experiment, this section sets the learning rate to 0.1, and the number of loop iterations per layer is set to 1000. Figure 7 shows the MAE of the deep learning algorithm for different numbers of hidden layers and the MAE of the comparison algorithm, stationary wavelet domain depth residual learning.
From Figure 7, when the number of neighboring users is the same, the deep learning algorithm has a lower mean absolute error when the number of hidden layers is 5, smaller than that of the deep learning algorithm with 3, 4, or 6 hidden layers. Deep learning can achieve the desired effect with five hidden layers, which is consistent with the fact that the number of hidden layers of a DBN is usually 5 to 6. If the number of layers is larger, the error introduced by the experiment is larger, and the clustered user groups are finally not accurate enough, affecting the experimental results. It can be seen from Figure 7 that when the number of neighboring users is 80, the average absolute error of the deep learning algorithm is 0.746, while for the comparison algorithm, stationary wavelet domain depth residual learning, with clustering groups of 3, 4, and 5, the average absolute error is above 0.808; the former is at least 6.2% smaller than the latter. Because the deep learning algorithm can cluster users with higher relevance into the same group under different numbers of hidden layers, the neighboring groups obtained for the target users are more accurate, and the recommendation quality is finally improved, as shown in Figure 8.
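The mean absolute error (MAE) used throughout these comparisons measures the average gap between predicted and actual ratings; a minimal sketch with made-up ratings:

```python
import numpy as np

def mae(predicted, actual):
    """Mean absolute error between predicted and true ratings."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(predicted - actual)))

# Hypothetical predicted vs. actual ratings for five items
predicted = [3.5, 4.0, 2.5, 5.0, 3.0]
actual    = [4.0, 4.0, 3.0, 4.0, 3.5]
error = mae(predicted, actual)  # (0.5 + 0 + 0.5 + 1.0 + 0.5) / 5 = 0.5
```

A lower MAE means predicted ratings sit closer to the true ratings, which is why values such as 0.746 versus 0.808 above indicate better recommendation quality for the deep learning algorithm.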

Experimental Comparison of Different Iteration Numbers.
In Figure 8, the abscissa axis, the ordinate axis, and the respective histograms have the same meanings as described above, and the training data set in this section's experiment contains 943 users. When the number of iterations is 1000, 1500, or 2000, the value of the average absolute error decreases gradually with the number of neighbors and tends to level off once the number of neighbors reaches 60. However, when the number of iterations is 500, the value of the average absolute error first drops, then rises, then falls again, and finally stabilizes. Because at 500 iterations the sampling process in the deep belief network fails to reach a stable state, the probability values of the output layer are biased, the clustering is not accurate enough, and the quality of recommendations made to the target user from its neighbor group declines. In Figure 8, when the number of neighboring users is 40, the average absolute error of the deep learning algorithm is 0.768, and when the number of neighbors increases to 60, it is 0.770. Because the user group obtained by the deep learning algorithm is not accurate enough at 500 iterations, increasing the number of neighboring users increases the prediction error of the score and finally reduces the recommendation quality. When the number of neighboring users is 80, the average absolute error values of the deep learning algorithm and the stationary wavelet domain depth residual learning algorithm are 0.767 and 0.808, respectively; the deep learning algorithm improves the recommendation accuracy by 4.1% over the stationary wavelet domain depth residual learning algorithm.
Even when the number of iterations is 500 and the sampling process in the deep belief network fails to reach a stable state, the neighbor group of the target users obtained by the deep learning algorithm is still more suitable than that obtained by the stationary wavelet domain depth residual learning algorithm, so the recommendation quality of the deep learning algorithm remains better.
Based on the above analysis, the MAE value of the deep learning algorithm is less than that of the stationary wavelet domain depth residual learning algorithm in all four cases above, so the former achieves more accurate recommendations than the latter. Because the deep learning algorithm can cluster users with similar preferences into the same group under different iteration numbers, the recommendation quality is effectively improved.

Experimental Comparison at Different Learning Rates.
In this section, the deep learning algorithm is tested at different learning rates. Five cases are set in the experiment: the learning rates are 0.05, 0.1, 0.15, 0.2, and 0.25. From the experiment in the last section, we know that when the deep belief network has 5 hidden layers, the deep learning algorithm outperforms the other three cases, and when the number of iterations is 1000, its effect is likewise better than in the other three cases. Therefore, in this section, the number of iterations is fixed at 1000 and the number of hidden layers at 5. The experimental result for the case where the number of iterations is 1000, the number of hidden layers is 5, and the learning rate is 0.1 is not shown separately. The figure below shows the MAE values of the deep learning algorithm for the five different learning rates and the MAE results of the comparison algorithm, stationary wavelet domain depth residual learning.
In Figure 9, when the number of neighboring users is 60, the average absolute error of the deep learning algorithm is 0.759, while the stationary wavelet domain depth residual learning algorithm with 5 clusters has an average absolute error of 0.809; the former improves on the latter by 5.0%. Likewise, when the average absolute error of the deep learning algorithm is 0.749 against the same 0.809, the improvement is 6%; and when it is 0.766 against 0.809, the improvement is 4.3%. These experimental results show that the learning rate in the algorithm should not be too large; otherwise it increases the numerical error of the calculation process and affects the experimental results. If the learning rate is too small, it slows the convergence of the algorithm. The above experiments show that, under different learning rates, users with similar habits can still be clustered into the same group, and the average absolute error is smaller than that of the stationary wavelet domain depth residual learning algorithm.
Based on the above experiments, the deep learning algorithm is superior to the stationary wavelet domain depth residual learning algorithm. In this section, we also use Algorithm 3.2 to acquire features and perform experiments; the results are shown in Figure 10. The abscissa in Figure 10 indicates the number of neighboring users, ranging from 20 to 80 with an interval of 10. The ordinate represents the mean absolute error (MAE), ranging from 0.75 to 0.82. The upper line segment represents the average absolute error of the stationary wavelet domain depth residual learning algorithm; the lower line segment represents the MAE of the deep learning algorithm whose features are obtained using Algorithm 3.2, with 5 hidden layers, 1000 loop iterations, and a learning rate of 0.1.
As can be seen from Figure 10, when the number of neighbors varies from 20 to 80, the average absolute error first decreases and then levels off. However, the result is not as good as that of the deep learning algorithm in Figure 9. The reason is that, although the former considers the relationship between the user and the score value, there is a problem in the normalization of the score values: items that the user has not scored are set to 0 in the scoring matrix, and during normalization this 0 is treated as the minimum rating value of each item, thereby distorting the user's feature matrix. Therefore, the absolute error of the score obtained by the former experiment is greater than that of the latter. Although the deep learning algorithm that uses Algorithm 3.2 to acquire features is not as good as the latter, it can still cluster the users reasonably well, so its results remain better than those of the stationary wavelet domain depth residual learning algorithm.
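The normalization problem described above, where unrated items stored as 0 are wrongly treated as the minimum rating, can be avoided by normalizing only over the rated entries. The sketch below illustrates that idea; it is a hypothetical fix, not the paper's exact procedure:

```python
import numpy as np

def normalize_rated(ratings):
    """Min-max normalize only the rated (nonzero) entries; unrated entries stay 0."""
    result = np.zeros_like(ratings, dtype=float)
    rated = ratings > 0  # 0 marks "not rated" in the scoring matrix
    if rated.any():
        lo, hi = ratings[rated].min(), ratings[rated].max()
        span = hi - lo if hi > lo else 1.0
        result[rated] = (ratings[rated] - lo) / span
    return result

# Hypothetical rating row: 0 marks "not rated", real ratings lie in 1..5
row = np.array([0, 2, 0, 5, 3], dtype=float)
norm = normalize_rated(row)  # rated entries scaled to [0, 1], zeros untouched
```

Because the minimum is taken over the rated entries (here 2, not 0), the missing ratings no longer drag the scale down and distort the user's feature matrix.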

Conclusion
In this paper, an infrared value chain e-commerce model enhancement algorithm based on differential evolution and the stationary wavelet transform effectively improves the contrast of the infrared value chain e-commerce model, enhances its details, and accounts for the influence of noise. The visual effect is better, and the method has a wide range of applications. The enhancement effect of the algorithm on the value chain e-commerce model is better than multiscale nonlinear wavelet enhancement and the traditional unsharp-mask enhancement method, which can have certain reference value for similar research fields. As mentioned above, the method proposed in this paper is essentially a method for detail enhancement of the infrared value chain e-commerce model. A next step could combine other intelligent technologies to enhance the global contrast of the infrared value chain e-commerce model.
Experiments were carried out on the algorithm proposed in this paper and the results were analyzed. The deep learning algorithm was tested under different conditions and compared with related algorithms. The effect analysis shows that the user-clustering collaborative filtering recommendation algorithm matches the user clustering effect and recommendation quality of the deep learning algorithm. The DGCF algorithm is also compared with other algorithms on the same data set, and the analysis shows that the DGCF algorithm is superior to the comparison algorithms in recommendation accuracy. The features extracted in this paper cannot fully reflect the differences between users; future work should better extract the user's preference features, form better user clustering groups, improve the similarity calculation, and finally improve the recommendation quality of the recommendation system.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The author declares that there are no conflicts of interest reported in this paper.