Prediction of Chemical Gas Emissions Based on Ecological Environment

As ecological pollution grows more serious, the chemical gases emitted by industry contain a large number of harmful components. Intelligent algorithms that control chemical gas emissions can effectively reduce those emissions and predict them more accurately. This paper proposes a grey wolf optimization algorithm based on a chaotic search strategy, combined with an extreme learning machine, to predict chemical emission gases. Taking a 330 MW pulverized coal-fired boiler as the test object, a CNGWO-ELM chemical emission prediction model is established and then trained and tested using relevant data collected by the DCS as training and test samples. Simulation experiments show that the CNGWO-ELM chemical emission prediction model has better accuracy, stronger generalization ability, and higher practical value.


Introduction
In recent years, with the release of chemical gases, environmental pollution problems have become increasingly serious [1, 2]. To control environmental pollution effectively, it is necessary to monitor the degree of pollution in a timely manner and analyze the composition of pollutants so that pollution problems can be solved more effectively. More and more experts and scholars have found that analytical chemistry is an important means of effective environmental monitoring and is of great significance to environmental protection.
With the continuous advancement of industrialization, the dependence of economic and social development on energy will further increase. Strengthening strategic research on alternatives to fossil energy such as coal, oil, and natural gas is a necessary measure to solve energy supply shortages and to promote economic development and environmental friendliness. Among the sources of chemical emissions affecting the ecological environment, circulating fluidized bed (CFB) combustion is one of the main coal combustion methods in China. It has the advantages of a wide fuel application range, good load regulation performance, low pollutant discharge, and easy utilization of ash [2-4]. The combination of the CFB combustion mode and ultrasupercritical parameter technology will be the inevitable development direction of CFB boilers in the future. The original NOx emission concentration of conventional CFB boilers is between 100 and 300 mg/Nm3 [5, 6], which cannot meet the national standard limiting NOx emission concentration to below 100 mg/Nm3; in some areas the limit is below 50 mg/Nm3. Under these ultralow emission standards, CFB boilers face the problem of having to further reduce NOx emissions.
Many scholars at home and abroad have devoted themselves to the study of optimizing combustion conditions to control NOx formation. Rajan and Wen [7] first established a comprehensive simulation model of the fluidized bed coal combustion chamber (FBC). The model can predict combustion efficiency, the particle size distribution of coke and limestone, the slag discharge rate of bed material, bed temperature changes, and the distribution of SO2, NOx, O2, CO, CO2, and volatile matter along the furnace height. The factors influencing NOx emission from the CFB boiler are combustion temperature and uniformity, excess air coefficient, staged combustion, and so on [8, 9]. In addition, deep reduction of NOx in the CFB boiler with selective noncatalytic reduction (SNCR) technology as the mainstream is also affected by reductant type, reaction temperature, ammonia-nitrogen ratio, and other factors [10-12]. Edelman et al. added a dynamic mathematical model of the steam-water system to the overall mathematical model of a circulating fluidized bed boiler on the basis of the Wei model [13] and the Muir model [14]. Dynamic models of combustion chamber temperature, heat exchanger heat transfer rate, and flue gas oxygen content in the circulating fluidized bed (CFB) were established and used for dynamic prediction. The structure of this paper is as follows. Firstly, the basic GWO algorithm is explained and a chaotic nonlinear grey wolf optimization algorithm is proposed. Secondly, an ELM optimization model is proposed. Finally, the CNGWO-ELM algorithm proposed in this paper is tested for its prediction performance and the relevant evaluation indicators are given.

Standard Gray Wolf Optimization Algorithm
The grey wolf optimization algorithm (GWO) is a swarm intelligence algorithm proposed by Mirjalili et al. in 2014, inspired by grey wolf predation behavior [15]. The grey wolf group has a four-layer hierarchy of α, β, δ, and ω. Among them, the α wolf is the leader with the best fitness in the group; β and δ are the two individuals with the next best fitness, and their task is to assist the α wolf in managing the pack and hunting; ω denotes the remaining common wolves. The predation process is described as follows: first, the α wolf leads the group to search for, track, and approach the prey; then, the β and δ wolves attack the prey under the command of the α wolf and summon the common wolves to attack until the prey is captured. The GWO algorithm simulates predation behaviors such as encircling, hunting, and attacking, thus achieving a global optimization process.
Assume that, in the search space, the grey wolf group consists of N grey wolves X_i, i = 1, 2, ..., N. The GWO algorithm is described as follows.
Surrounding stage: after the wolves determine the position of the prey, they first surround it. The mathematical description is

D = |C · X_p(t) − X(t)|,
X(t + 1) = X_p(t) − A · D,

where D is the distance between the grey wolf and the prey, X_p(t) is the position of the prey after the t-th iteration (the current optimal solution), X(t) is the position of the grey wolf after the t-th iteration (a feasible solution), and A and C are coefficient factors, defined as

A = 2a · r_1 − a,   C = 2 · r_2,

where r_1 and r_2 are random numbers in [0, 1] and a is a convergence factor that decreases linearly from 2 to 0 as the number of iterations increases. Hunting phase: after the encirclement phase is completed, the α wolf leads the β and δ wolves to hunt down the prey. During the hunt, the individual positions of the wolves move as the prey escapes:

D_α = |C_1 · X_α − X(t)|,   D_β = |C_2 · X_β − X(t)|,   D_δ = |C_3 · X_δ − X(t)|,
X_1 = X_α − A_1 · D_α,   X_2 = X_β − A_2 · D_β,   X_3 = X_δ − A_3 · D_δ,

where X_α, X_β, and X_δ represent the current positions of the α, β, and δ wolves, X(t) represents the current grey wolf position, and C_1, C_2, and C_3 are random vectors. The location of an ω wolf is then updated as

X(t + 1) = (X_1 + X_2 + X_3) / 3.
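The surrounding and hunting updates above can be sketched in code. The following is a minimal, illustrative implementation (function and parameter names are ours, not from the paper), minimizing a simple sphere function as a toy objective:

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, n_iter=100, lb=-10.0, ub=10.0, seed=0):
    """Minimal grey wolf optimizer sketch (illustrative, not the paper's exact code)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))          # initial wolf positions
    for t in range(n_iter):
        scores = np.array([fitness(x) for x in X])
        order = np.argsort(scores)                    # ascending: best first
        alpha = X[order[0]].copy()                    # three best solutions lead
        beta = X[order[1]].copy()
        delta = X[order[2]].copy()
        a = 2.0 - 2.0 * t / n_iter                    # a decreases linearly 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a                  # coefficient A = 2a*r1 - a
                C = 2.0 * r2                          # coefficient C = 2*r2
                D = np.abs(C * leader - X[i])         # distance to the leader
                new_pos += (leader - A * D) / 3.0     # average of X1, X2, X3
            X[i] = np.clip(new_pos, lb, ub)
    scores = np.array([fitness(x) for x in X])
    return X[np.argmin(scores)], float(scores.min())

best_x, best_f = gwo(lambda x: float(np.sum(x**2)), dim=5)
```

On the sphere function the three leaders quickly contract the pack toward the origin, illustrating how the α, β, and δ positions dominate the update.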

Chaotic Nonlinear Grey Wolf Optimization Algorithm
The literature [16] pointed out that the optimization process of the GWO algorithm is essentially dominated by three optimal solutions, the α, β, and δ wolves, which easily causes the algorithm to converge prematurely and fall into local optima. Chaos is a kind of nonlinear phenomenon with phase space ergodicity and inherent randomness. Combining chaotic variables in the optimal search can effectively jump out of local optima and achieve global optimization. The literature [17] pointed out that the Kent chaotic map has better performance than the logistic chaotic map. Therefore, introducing a Kent chaos optimization strategy into the basic GWO algorithm to refine solutions that fall into local optima will effectively help the algorithm find better solutions. In addition, introducing a nonlinear dynamic weighting strategy into the GWO algorithm will effectively balance the exploitation and exploration capabilities of the algorithm and further improve its global optimization performance.

Kent Chaotic Search Strategy.
The Kent chaotic map model is described as

Z_{n+1} = Z_n / a,              0 < Z_n ≤ a,
Z_{n+1} = (1 − Z_n) / (1 − a),  a < Z_n < 1,

where the control parameter a ∈ (0, 1); the Lyapunov exponent of the Kent map is greater than 0, so the map is in a chaotic state. In this paper, the probability density function obeys a uniform distribution on (0, 1), that is, ρ(Z) = 1. The Lyapunov exponent can be used to characterize the rate at which small initial uncertainties diverge. The Lyapunov exponent of the Kent map used here is 0.696, which is greater than the classical logistic map's 0.693.
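As an illustration, the Kent map can be iterated to produce a chaotic sequence in (0, 1). The control parameter value a = 0.4 below is a common choice assumed for the sketch, not taken from the paper:

```python
def kent_map(z, a=0.4):
    """One step of the Kent (tent-like) chaotic map; a in (0, 1)."""
    return z / a if z < a else (1.0 - z) / (1.0 - a)

# iterate from an arbitrary seed to obtain a chaotic sequence in [0, 1]
seq = [0.3]
for _ in range(999):
    seq.append(kent_map(seq[-1]))
```

Because both branches map [0, 1] back into [0, 1], the sequence never leaves the unit interval while wandering over it ergodically.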
In the chaotic search process, the ergodicity of chaotic motion is used to generate a chaotic series based on the solution at which the search has stagnated. The optimal solution in the sequence is taken as the new global optimal solution, which allows the search to jump out of the local optimum. In the GWO algorithm, if a solution does not improve significantly after a set number (limit) of consecutive iterations, it is assumed to have fallen into a local optimum, and Kent chaos optimization is applied to the α, β, and δ wolves of the GWO algorithm. The solution space of the optimization problem is [X_min, X_max].
The Kent chaos optimization steps are as follows. Step 1: use equation (3) to map X_α, X_β, and X_δ into the domain [0, 1] of the Kent map, Z^0 = (X − X_min)/(X_max − X_min). Step 2: generate chaotic sequences: iteratively generate C_max chaotic variables Z^k by the Kent equation. Step 3: using the carrier operation, Z^k is first amplified and then loaded onto the grey wolf individuals X_α, X_β, and X_δ to be searched, so that the new grey wolf position U^k in the original solution space after the chaotic operation is obtained from formula (7), U^k = X_min + Z^k · (X_max − X_min), where k = 1, 2, ..., C_max. Step 4: calculate the fitness value f(U^k) of U^k and compare it with the fitness value f(X) of X, retaining the better solution.
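Steps 1-4 above can be sketched as a chaotic local search routine. This is an illustrative reading of the procedure (names and the parameter a = 0.4 are our assumptions), with the normalization and carrier operations written out explicitly:

```python
import numpy as np

def chaotic_local_search(x_best, fitness, x_min, x_max, c_max=50, a=0.4, eps=1e-12):
    """Kent-map search around a stagnant solution (sketch of Steps 1-4)."""
    x_best = np.asarray(x_best, dtype=float)
    # Step 1: map the stagnant solution into [0, 1]
    z = (x_best - x_min) / (x_max - x_min + eps)
    best_x, best_f = x_best.copy(), fitness(x_best)
    for _ in range(c_max):
        # Step 2: next element of the Kent chaotic sequence (elementwise)
        z = np.where(z < a, z / a, (1.0 - z) / (1.0 - a))
        # Step 3: carrier operation - load the chaotic variable
        # back into the original solution space
        u = x_min + z * (x_max - x_min)
        # Step 4: keep the better of the two solutions
        f_u = fitness(u)
        if f_u < best_f:
            best_x, best_f = u.copy(), f_u
    return best_x, best_f

x0 = np.array([0.9, -0.7, 0.5])
xb, fb = chaotic_local_search(x0, lambda x: float(np.sum(x**2)), -1.0, 1.0)
```

Because the starting point is always retained as a candidate, the routine can only improve (or keep) the incumbent solution.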

Nonlinear Dynamic Weights.
For the GWO algorithm, global exploration means probing a wider range of the search area, while local exploitation emphasizes using existing information to perform detailed searches in certain regions. There is no doubt that how the GWO algorithm balances global exploration and local exploitation is the key to ensuring its global search performance. In the GWO algorithm, A adjusts this balance. From equation (6), it can be seen that the value of A changes with the control parameter a. Therefore, the control parameter largely determines the balance between global exploration and local exploitation.
In the standard GWO algorithm, the control parameter a decreases linearly from 2 to 0 as the number of iterations increases. However, this linearly decreasing strategy cannot fully reflect the actual, complex optimization process of the algorithm, and nonlinear control parameters have been shown to achieve better performance than the linear decreasing strategy. Based on this, a nonlinear exponential decreasing strategy is proposed in equation (8), with a_start = 2 and a_end = 0.01: at t = 0 the weight is a = a_start = 2, and when t = t_max, a converges to 0.01. In the initial stage, a is large and decreases rapidly as the number of iterations increases; in the later iterations, the rate of decrease gradually slows. Compared with the linear decreasing scheme, this nonlinear exponential decreasing weight strategy can improve the optimization performance of the GWO algorithm.
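Since equation (8) itself is not reproduced here, the sketch below uses one plausible exponential form (our assumption, not the paper's exact expression) that satisfies the stated boundary conditions a(0) = a_start = 2 and a(t_max) = a_end = 0.01 and that decreases quickly at first and slowly later:

```python
def nonlinear_weight(t, t_max, a_start=2.0, a_end=0.01):
    """One plausible exponential decrease: a(0) = a_start, a(t_max) = a_end.
    This form is an assumption standing in for equation (8)."""
    return a_start * (a_end / a_start) ** (t / t_max)

def linear_weight(t, t_max, a_start=2.0, a_end=0.0):
    """Standard GWO linear decrease from a_start to a_end."""
    return a_start - (a_start - a_end) * t / t_max
```

With this geometric decay, the absolute decrease per iteration is proportional to the current value of a, so the weight drops fast early (favoring exploration) and slowly late (favoring exploitation).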

CNGWO Algorithm Steps
The following are the basic steps of the CNGWO algorithm, as shown in Algorithm 1.

Fundamental Principles of Extreme Learning Machine (ELM)
ELM is a new single-hidden-layer feedforward neural network learning algorithm that has received extensive attention in recent years. The difference from traditional neural network training is that the ELM hidden layer does not need iterative adjustment: the input weights and hidden layer node biases are randomly selected, and, with the minimum training error as the goal, the hidden layer output weights are then determined analytically. The algorithm is described as follows.
Let m, M, and n be the numbers of nodes in the network input layer, hidden layer, and output layer, respectively, let g(x) be the activation function of the hidden layer neurons, and let b_i be the threshold. Let the N samples be (x_i, t_i), where x_i = [x_i1, x_i2, ..., x_im]^T ∈ R^m is the network input vector and t_i = [t_i1, t_i2, ..., t_in]^T ∈ R^n is the target output vector. The ELM model is described as

o_j = Σ_{i=1}^{M} β_i g(ω_i · x_j + b_i),   j = 1, 2, ..., N,

where ω_i = [ω_1i, ω_2i, ..., ω_mi] represents the input weight vector connecting the network input layer nodes and the i-th hidden layer node, β_i = [β_i1, β_i2, ..., β_in]^T represents the output weight vector connecting the i-th hidden layer node and the network output layer nodes, and o_j = [o_j1, o_j2, ..., o_jn]^T represents the network output value. S = (ω_i, b_i, i = 1, 2, ..., M) contains the network input weights and the hidden layer node thresholds. ELM's training goal is to find the optimal S and β, and min E(S, β) can be further described as

min ||Hβ − T||,

where H = [g(ω_i · x_j + b_i)]_{N×M} represents the hidden layer output matrix of the network with respect to the samples, β represents the output weight matrix, and T represents the target value matrix of the sample set. The ELM network training process can thus be reduced to a nonlinear optimization problem. When the activation function g(x) is infinitely differentiable, the network input weights ω_i and thresholds b_i can be randomly assigned. At this point, the matrix H is a constant matrix, and the learning process of the extreme learning machine is equivalent to solving the linear system Hβ = T. The minimum-norm least squares solution is

β = H† T,

where H† is the Moore-Penrose generalized inverse of the hidden layer output matrix H; after β is solved, ELM's network training process is complete.
The implementation steps of the ELM algorithm are as follows. Step 1: given a training set (x_i, t_i), the activation function g(x), and the number of hidden layer nodes M, randomly generate the input weights ω_i and the thresholds b_i. Step 2: calculate the hidden layer output matrix H. Step 3: calculate the output weight β from formula (12).
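Steps 1-3 can be sketched directly with a pseudoinverse solve. This is an illustrative implementation under our own assumptions (sigmoid activation, a small toy regression target), not the paper's exact code:

```python
import numpy as np

def elm_train(X, T, n_hidden=30, seed=0):
    """Train an ELM: random input weights/biases, analytic output weights."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    W = rng.uniform(-1, 1, (m, n_hidden))      # Step 1: random input weights
    b = rng.uniform(-1, 1, n_hidden)           # Step 1: random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # Step 2: hidden output matrix H
    beta = np.linalg.pinv(H) @ T               # Step 3: beta = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy check: fit a mildly nonlinear 1-D target
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
T = 0.5 * X + 0.2 * X**2
W, b, beta = elm_train(X, T, n_hidden=30)
err = float(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Note that training involves no iteration at all: the only "learning" is the single least-squares solve for beta.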

ELM Optimization Model.
Although the ELM learning algorithm has certain advantages in computational performance and accuracy on regression problems, ELM randomly determines the input weights and hidden layer thresholds without a priori knowledge before obtaining the network's output weights. If the input weights and hidden layer thresholds are not properly selected, the prediction accuracy and generalization ability of the ELM suffer. To address this problem, the CNGWO algorithm is used to optimize the extreme learning machine prediction model (CNGWO-ELM). The core idea is to use the sample data as the input of the ELM and to use the CNGWO optimization algorithm to search for and adjust the best input weights and hidden layer node thresholds. The regression effect of the ELM algorithm is best when the number of hidden layer nodes is as small as possible, and the output weight β is obtained analytically via the Moore-Penrose generalized inverse. Figure 1 depicts the process by which CNGWO optimizes the ELM model parameters. The specific steps are as follows. Step 1: population initialization: randomly generate a population of N individuals, each consisting of input weights and thresholds, encoded as x_j = (ω_11, ..., ω_1M, ω_21, ω_22, ..., ω_m1, ..., ω_mM, b_1, b_2, ..., b_M).
Step 2: variable selection and data acquisition: when modeling gas emissions, select reasonable input and output modes, collect and process operational data related to modeling from the combustion system, and divide into training data sets and test data sets.
Step 3: determine the fitness function J.
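The encoding of Step 1 and a fitness evaluation consistent with Steps 2-3 can be sketched as follows. The choice of training RMSE as the fitness J, the tanh activation, and the toy data are our assumptions for illustration, since the paper's exact definition of J is not reproduced here:

```python
import numpy as np

def decode(x, m, M):
    """Split one encoded individual into ELM input weights (m x M) and biases (M)."""
    W = x[: m * M].reshape(m, M)
    b = x[m * M :]
    return W, b

def elm_fitness(x, X_train, T_train, m, M):
    """Fitness of one individual, taken here as the ELM training RMSE."""
    W, b = decode(x, m, M)
    H = np.tanh(X_train @ W + b)               # hidden layer outputs
    beta = np.linalg.pinv(H) @ T_train         # analytic output weights
    resid = H @ beta - T_train
    return float(np.sqrt(np.mean(resid ** 2)))

m, M = 4, 8                                    # input features, hidden nodes
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, m * M + M)              # one encoded individual
X_train = rng.uniform(-1, 1, (50, m))
T_train = np.sum(X_train, axis=1, keepdims=True)
J = elm_fitness(x, X_train, T_train, m, M)
```

Each wolf position is therefore a flat vector of length m·M + M; the optimizer searches this vector, while beta is always recomputed analytically inside the fitness evaluation.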

Experimental Index.
The CFB boiler adopts a single furnace, a single air distribution plate, an M-type arrangement, and the circulating fluidized bed combustion mode. The boiler consists of 1 furnace, 4 steam-cooled cyclones, 4 return valves, 4 external heat exchangers, 8 slag coolers, and 2 rotary air preheaters, with a double flue at the tail. The preheater uses a baffle to adjust the temperature, and the start-up bed material adding system uses mechanical feeding, as shown in Table 1.
The CFB boiler burns coal blended from coal slime, vermiculite, and terminal coal. The design mixing ratio of coal slime, vermiculite, and terminal coal is 55 : 20 : 25; the ratio of mine coal slime, vermiculite, and terminal coal is 35 : 35 : 30. The specific coal quality information is shown in Table 2.
The boiler's design coal has low nitrogen content, which reduces the formation of fuel-type NOx, but its high volatile content is not conducive to controlling NOx emissions. The modeling method proposed above is used to establish a CNGWO-ELM-based NOx emission prediction model and a boiler load prediction model. The boiler load prediction model has 9 input features: corrected total fuel quantity, feed water flow rate, A coal mill inlet air volume, B coal mill inlet air volume, D coal mill inlet air volume, E coal mill inlet air volume, total primary air volume, total secondary air volume, and the boiler load per unit time before the measurement time; the boiler load is the output. The NOx emission prediction model has 16 input features: corrected total fuel quantity, main feed water flow rate, A coal mill inlet air volume, B coal mill inlet air volume, D coal mill inlet air volume, E coal mill inlet air volume, total primary air volume, total secondary air volume, furnace pressure, A coal mill inlet primary air temperature, B coal mill inlet primary air temperature, D coal mill inlet primary air temperature, E coal mill inlet primary air temperature, front wall outlet flue gas oxygen content, rear wall outlet flue gas oxygen content, and measurement time. The experimental data are the output values of each NOx emission index, as shown in Table 3.

Comparison of Model Prediction Control Results.
The comparison between the CNGWO-ELM predictions proposed in this paper and the actual operating values of the power plant is shown in Figures 2 and 3. It can be seen from Figure 2 that there is a high degree of consistency between the predicted load values and the actual load values, and the prediction accuracy is high. Figure 3 compares the actual emissions with the values predicted by the proposed chemical gas emission prediction model, showing that the model performs well and that the error between actual and predicted values is small. At the same time, the generalization ability test results show that the maximum relative error of the chemical gas emission predictions is 3.56%, indicating that the model has strong generalization ability.
To further validate the algorithm applied in this paper, other methods are used for comparison. Figure 4 shows the prediction results of the three models on 50 test samples. It can be seen that the CNGWO-ELM model predicts the test samples well; compared with the other two models, its prediction accuracy is higher, indicating that the CNGWO-ELM model has strong generalization ability.

Performance Comparison.
To facilitate the evaluation of model performance, this paper uses the root mean squared error (RMSE), the mean relative error (MRE), and the coefficient of determination R², defined as

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (y_i − y'_i)² ),
MRE = (1/n) Σ_{i=1}^{n} |y_i − y'_i| / y_i,
R² = 1 − Σ_{i=1}^{n} (y_i − y'_i)² / Σ_{i=1}^{n} (y_i − ȳ)²,

where n is the number of samples, y_i is the actual measured value, y'_i is the corresponding predicted value, and ȳ is the average of the actual measured values. The results in Table 4 show that the predicted values of the CNGWO-ELM model are distributed close to the true values, indicating that the model can predict the chemical gas emissions well. The CNGWO-ELM model has the smaller RMSE and MRE for the training samples and the least error for the test samples, which indicates that the generalization ability of the CNGWO-ELM model remains good as the number of input variables grows. As the sample size changes, the values of RMSE, MRE, and R² change correspondingly but remain favorable, and in the optimization process of the ELM algorithm the relevant data can be further analyzed and optimized.
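The three indices can be computed directly; the small numeric example below is ours, for illustration only:

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean squared error."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mre(y, y_hat):
    """Mean relative error (relative to the measured values)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs(y - y_hat) / np.abs(y)))

def r2(y, y_hat):
    """Coefficient of determination R^2."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

y     = [100.0, 120.0,  90.0, 110.0]   # illustrative measured values
y_hat = [ 98.0, 123.0,  92.0, 108.0]   # illustrative predicted values
```

For these numbers the residuals are (2, −3, −2, 2), giving RMSE = sqrt(21/4) ≈ 2.29 and R² = 1 − 21/500 = 0.958.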

Conclusions
The characteristics of chemical gas emissions are affected by many factors, and the influence relationships are complex. To predict chemical gas emissions accurately, a prediction model based on an improved grey wolf optimization algorithm (GWO) and an extreme learning machine (ELM) is proposed. CNGWO is used to preselect the ELM model parameters to improve the accuracy and generalization capability of the predictive model. Taking a 330 MW pulverized coal-fired boiler as the test object, a CNGWO-ELM chemical emission prediction model was established, trained, and tested using relevant data collected by the DCS as training and test samples. Simulation experiments show that the CNGWO-ELM chemical emission prediction model has good accuracy, strong generalization ability, and high practical value. In future work, other optimization algorithms will be introduced to achieve fast and accurate prediction and further improve the global optimization effect.
Data Availability
The data used in this article are available at https://pan.baidu.com/s/1YHr7hRz25evFtIB1iNIpYw. Download code: dju4.

Conflicts of Interest
The authors declare that they have no conflicts of interest.