Research on the Impact of Interest Rate and Virtual Finance Reform on GDP Growth Based on Error Correction Model

In the context of the virtual economy, monitoring and controlling its operation to prevent excessive asset bubbles caused by inherent volatility, and to minimize harm to the macroeconomy, is an important task of economic development. This study applies the error correction model to the analysis of GDP growth factors, improves the algorithm to adapt it to the needs of GDP growth analysis, and constructs a model of the impact of interest rate and virtual financial reforms on GDP growth. Moreover, this study combines data analysis to verify the performance of the proposed model. The experimental analysis shows that the error correction model proposed in this study can play an important role in the analysis of GDP growth factors. At the same time, this study verifies that virtual financial reform and interest rate reform have a measurable impact on GDP growth and a certain degree of relevance to each other.


Introduction
Since the birth of currency, the development of human society and economy has entered a period of monetization. Currency began as physical barter and later took the form of a fixed medium of exchange. At this stage, currency had intrinsic value, and its effectiveness and credibility were guaranteed by that intrinsic value. Later, currency evolved from physical barter, with ever lower transaction costs, into paper money with little intrinsic value. At this stage, the effectiveness and credibility of currency are guaranteed by government law, and physical currency is transformed into credit currency. However, with the disintegration of the Bretton Woods system and the implementation of the noncash paper currency standard in various countries, the issuance of credit currency has become less and less constrained and is mainly determined by the monetary authorities. It is precisely under the current credit currency system that research on currency issuance becomes necessary. How much money should be supplied to make the economy run smoothly without generating economic and financial risks is a question worthy of research [1].
If we want to examine the relationship between the virtual economy and the real economy, we must first define both concepts.
Through an investigation of the theoretical background, the concepts of the virtual economy and the real economy are based on the discussions of Marx and Hilferding on virtual capital and real capital, Veblen on money capital and industrial capital, and Keynes on finance and industry [2]. Among them, Marx's understanding of virtual capital is the most comprehensive and objectively dialectical. Combining the analyses of Marx and many domestic and foreign scholars, we describe virtual capital as follows: it uses currency as a tool, relies on the financial system, and through capitalization pricing produces psychological expectations that have an important impact on price changes. Moreover, price changes are uncertain, asymmetric, and subject to positive feedback, and virtual capital can realize asset appreciation through market transactions. Its specific forms include debt certificates, ownership certificates, financial derivatives, and certain physical assets (such as real estate) [3].
Real capital is the form of capital corresponding to virtual capital; the core of both is the pursuit of monetized profit. Among them, virtual capital is an asset that realizes value appreciation through market transactions. Domestic scholars generally accept Marx's statement that the concept of the virtual economy is derived from virtual capital, so the understanding of the connotation of the virtual economy is mostly formed on the basis of the concept of virtual capital. Foreign scholars rarely mention the concept of the virtual economy; the current understanding mainly holds that the main body of the virtual economy is finance, and the virtual economy is mostly understood from a financial perspective. Based on the definition of virtual capital, we define the virtual economy as follows: the concept of the virtual economy is proposed relative to the real economy. It refers to the sum of the economic operation modes and economic system arrangements that take virtual assets as the price carrier and pursue monetary income independently of the real economy.
The virtual assets here mainly refer to stocks, bonds, funds, forwards, futures, options, swaps, asset securitization products, and so on. The real economy refers to the economic operation mode related to the circular movement of real capital.
Once the virtual economy runs out of control, it will cause very serious harm and negative effects on social and economic development and on people's normal lives. In the context of globally integrated financial markets, it is an urgent practical need to study further the intrinsic nature and operating laws of the virtual economy, to analyze in depth the transmission mechanism through which the virtual economy affects the real economy, to explore the statistical accounting of the virtual economy, to strengthen its management, to improve the formulation and implementation of macroeconomic policy, and to realize the stable and coordinated operation of the virtual economy and the real economy. Therefore, the research motivation of this article is grounded in theoretical research and empirical analysis of the relationship between the virtual economy and the real economy. Drawing on modern economic theories such as development economics, combined with the development status of the virtual economy and the background of economic globalization, a theoretical analysis of the development path of the virtual economy is carried out, and the development effect of the virtual economy is improved through the error correction model. The innovation of this article is to conduct a comprehensive and in-depth analysis of the structural levels of the real economy and the virtual economy, to find the intermediary transmission media through which they interact and penetrate each other, and to deduce the transmission mechanism of the virtual economy's influence on the real economy, thereby revealing the relationship between the virtual economy and the real economy and their law of development. The thesis also draws on modern economic theories such as financial innovation and new institutional economics to explore objectively and dialectically the essence of the evolution of the virtual economy.
Combining the characteristics of the virtual economy with the transmission mechanism through which it affects the real economy, and referring to the existing national economic accounting system and currency and financial statistics, this article defines the scope and caliber of virtual economic statistical accounting, designs and proposes corresponding accounting rules, and applies the error correction mechanism in a timely manner, so that existing problems in economic development are identified and corrected and the quality and effectiveness of economic development are improved.
Under such a general background, it is necessary to monitor and control the operation of the virtual economy while promoting its development, to prevent excessive asset bubbles from appearing due to its inherent volatility and thereby minimize its harm to the macroeconomy. This is of great significance for ensuring the stable development of the entire economy.

Related Work
Literature [4] used a revised model framework of the quantity theory of money to deduce that the growth rate of the money supply is a function of the growth rates of the real economy and the virtual economy and analyzed the relationship between the two. Literature [5] further pointed out that the trend of world economic virtualization has made the deviation between the virtual economy and the real economy a normal state; depending on whether the shares of the virtual and real economies in currency circulation match the ratio of their scales, the virtual economy, while promoting the growth of the real economy, may also bring negative effects. Literature [6] concluded through quantitative analysis that there is no long-term, stable cointegration relationship between the virtual economy and the real economy: the real economy is not the basis for the development of the virtual economy, and the two deviate from each other. Literature [7] used a probability model of resource transfer to analyze the relationship between the virtual economy and the real economy and concluded that, from the perspective of long-term economic growth, wasted resources in either sector do not benefit growth, while in the short term any investment imbalance will also cause large fluctuations in the macroeconomy. Literature [8] researched the time-varying characteristics of the correlation between the virtual economy and the real economy and found that the virtual economy is relatively independent as an economic system relative to the real economy. Literature [9] used the Granger causality test to verify that the interaction between the real economy and the virtual economy is dynamic and not bidirectionally balanced: the influence of the real economy on the virtual economy is getting weaker and weaker, while the influence of the virtual economy on the real economy is gradually strengthening.
Moreover, the more developed the economy, the weaker the impact of the real economy on the virtual economy, and the independence and virtuality of the virtual economy gradually increase. The asymmetric influence of the virtual economy and the real economy is an inevitable trend in the process of economic virtualization. Literature [10] proposed the concept of the financial correlation rate to observe the degree of deviation between financial assets and the real economy, that is, the ratio of total financial assets to the stock of physical assets (or national wealth). Literature [11] described in detail the value of financial assets and physical assets, showing the gradual increase in the ratio of financial assets in a broad sense and the tendency of financial assets to deviate from physical assets. Literature [12] pointed out that in today's world economy, less than 2% of daily financial transactions are related to physical transactions: virtual financial assets deviate from the scale of the real economy, and a large volume of virtualized financial assets keeps growing without limit. Literature [13] believed that the financial market and the real economy are increasingly separated in their development and pointed out that, in the study of macroeconomics, the real economy comes first and finance second. Literature [14] examined the reasons for the separation of finance and the real economy from the perspective of credit expansion; its research shows that the expansion of credit policies is a typical factor causing this separation. Literature [15] conducted research from the perspective of the independence of financialization and believed that financialization has, to a greater extent, begun to shift from being a booster of economic growth to pursuing its own growth. Literature [16] summarized the phenomenon of separation between financial markets and the real economy and proposed a typical "deviation hypothesis" argumentation model.
If the hypothesis holds, then the expansion of the virtual economy is like a mirror image, even while the real economy still occupies a fundamentally dominant position. In the past, the productive real economy was at the center of economic development, and the virtual economy sector existed in a role assisting the development of the real economy. However, with the rapid development of the virtual economy, this relationship has undergone major changes. The financial market has become relatively independent, operating according to its own logic and laws, while the real economy has to adapt itself to the operating laws of the virtual capital market.

Variable Selection of the Semi-Parametric Variable Coefficient Partial Linear Measurement Error Model
Traditional research models assume a specific parametric form first, but in many cases it is difficult to know or test the specific functional form linking the dependent variable and the covariates. If we still use a conventional parametric model to process real high-dimensional data, there will be a large model deviation and a loss of predictive effectiveness. In such cases, nonparametric regression models are often used. The nonparametric model makes no assumptions about the specific form of the unknown function, only about certain of its properties, so it has good robustness. Commonly used methods include local smoothing, spline approximation, and orthogonal series approximation. These methods have good statistical properties when the covariate is univariate, but when they are extended to multiple variables, the so-called "curse of dimensionality" appears and they are difficult to implement: the iterative process requires a great deal of data, the convergence speed is slow, and the estimated results are unstable.
These methods are not practical in financial data processing, so this article explores the application of error correction models in virtual financial data processing to improve the processing effect. The variable coefficient partial linear model is a direct extension of the classic linear model. It not only retains the easy interpretability of the parametric model but also enjoys the flexibility of the nonparametric model while avoiding the curse of dimensionality. Therefore, it has been studied extensively in recent years.
The model takes the form Y = X^T β + Z^T α(U) + e. Among them, X and Z are d-dimensional and p-dimensional covariates, β is the unknown d-dimensional regression coefficient vector, α(·) is the unknown p-dimensional regression function vector, U is the univariate scalar of the nonparametric part, and e is the model error with mean 0 and finite variance.
This article therefore first analyzes variable selection for the semi-parametric variable coefficient partial linear measurement error model and then, on this basis, analyzes the error correction model. Traditional optimal subset selection, stepwise regression, ridge regression, principal component regression, partial least squares, and Lasso estimation can each achieve only some of the goals of variable selection. The smoothly clipped absolute deviation (SCAD) penalty is defined through its derivative [17]:

p′_λ(θ) = λ { I(θ ≤ λ) + [(aλ − θ)_+ / ((a − 1)λ)] I(θ > λ) },  θ > 0,

where p′_λ(θ) is the derivative of the penalty function and θ is the variable parameter. Among them, a is usually taken as 3.7, and λ is a threshold parameter, as shown in Figure 1.
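As a concrete illustration, the SCAD derivative above and the penalty obtained by integrating it can be sketched in a few lines (a minimal NumPy sketch; the function names are ours, and the piecewise form and a = 3.7 follow the definition cited above):

```python
import numpy as np

def scad_derivative(theta, lam, a=3.7):
    """Derivative p'_lambda(theta) of the SCAD penalty for theta >= 0."""
    theta = np.abs(np.asarray(theta, dtype=float))
    # lam on [0, lam]; linear decay (a*lam - theta)/(a-1) on (lam, a*lam]; 0 beyond
    return np.where(theta <= lam, lam,
                    np.maximum(a * lam - theta, 0.0) / (a - 1.0))

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lambda(theta), the integral of the derivative above."""
    theta = np.abs(np.asarray(theta, dtype=float))
    return np.where(theta <= lam, lam * theta,
           np.where(theta <= a * lam,
                    -(theta**2 - 2*a*lam*theta + lam**2) / (2*(a - 1)),
                    (a + 1) * lam**2 / 2))
```

The penalty is linear near zero (like the Lasso), levels off to the constant (a + 1)λ²/2 beyond aλ, and is continuously differentiable in between, which is what leaves large coefficients unshrunk.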

Advances in Multimedia
This article extends the "correction for attenuation" method, often used in linear measurement error models, to this model to eliminate the influence of measurement errors on the estimation, and introduces a penalty function so that coefficients are estimated and variables selected at the same time. This yields the penalized least squares of this article. Considering that the penalized least squares contain an unknown nonparametric part α(·), this study proposes two methods to solve it. The first estimation method uses kernel estimation to convert the problem into ordinary penalized least squares.
This method is more straightforward and, in actual estimation, takes up fewer computing resources. However, when the nonparametric part has higher dimensionality, the troublesome "curse of dimensionality" phenomenon appears. The second method is based on local linear estimation: it performs a local linear fit of α(·), substitutes the fitted value into the penalized least-squares term, and then transforms the problem into ordinary penalized least squares. As an improvement on Method 1, Estimation Method 2 effectively avoids the curse of dimensionality and shows a better and more robust estimation effect.
To better show the differences among several penalty functions, when the columns of X are orthonormal, we set z = X^T y and ŷ = XX^T y and examine the following penalized least squares [18]:

(1/2)‖y − ŷ‖² + Σ_j [ (1/2)(z_j − θ_j)² + p_λ(|θ_j|) ].

Among them, the penalty term p_j(·) need not be equal for each term; for convenience of writing, we mark it as p(·) and further define λp(|·|) as p_λ(|·|). Minimizing the above equation is equivalent to minimizing each term separately.
(a) When q = 1, the Lasso estimate is obtained (Tibshirani (1996)):

θ̂_j = sgn(z_j)(|z_j| − λ)_+,

where sgn is the sign function and (z)_+ means that when z > 0 the value z is taken and otherwise 0. This penalty is called the soft-thresholding rule, proposed by Donoho and Johnstone (1994). (b) When q = 2, the ridge regression estimate is obtained [20]: θ̂_j = z_j/(1 + 2λ). (c) When q = 0, the ordinary least-squares estimate θ̂ = z is obtained. (3) SCAD: when p_λ(·) satisfies (2), the obtained SCAD estimate is as follows:

θ̂_j = sgn(z_j)(|z_j| − λ)_+ when |z_j| ≤ 2λ; θ̂_j = [(a − 1)z_j − sgn(z_j)aλ]/(a − 2) when 2λ < |z_j| ≤ aλ; θ̂_j = z_j when |z_j| > aλ.

From Figure 2, we can see that the purpose of selecting variables can be achieved by adding any of the three penalty terms hard, L1, and SCAD. However, the estimate obtained with the hard penalty term is not continuous, and this discontinuity may lead to instability in model selection.
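The thresholding rules can be written down directly; a small sketch (function names are ours; the SCAD rule follows Fan and Li (2001): the soft rule for |z| ≤ 2λ, a linear interpolation for 2λ < |z| ≤ aλ, and no shrinkage beyond aλ):

```python
import numpy as np

def soft_threshold(z, lam):
    # Lasso / soft-thresholding rule: sgn(z)(|z| - lam)_+
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def hard_threshold(z, lam):
    # hard-thresholding rule: keep z unchanged if |z| > lam, else 0
    return np.where(np.abs(z) > lam, z, 0.0)

def scad_threshold(z, lam, a=3.7):
    # SCAD thresholding rule: continuous, and an identity for large |z|
    z = np.asarray(z, dtype=float)
    az = np.abs(z)
    return np.where(az <= 2*lam, np.sign(z) * np.maximum(az - lam, 0.0),
           np.where(az <= a*lam, ((a - 1)*z - np.sign(z)*a*lam) / (a - 2), z))
```

Unlike the soft rule, which shrinks every coefficient by λ, the SCAD rule leaves coefficients with |z| > aλ untouched, which is the source of its near-unbiasedness for large effects.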
That is, small changes in some independent variable data may lead to large changes in the selected optimal model. The traditional parametric model first specifies the parametric form and then statistically infers the coefficients from the given data. In practice, however, the parametric form often cannot be determined or tested. A better approach in that case is not to specify the parametric form of the model, but only to assume that the unknown function has certain properties and let the data determine the functional form. Commonly used nonparametric regression estimates include local smoothing, spline approximation, and orthogonal series approximation; local smoothing methods include kernel estimation and local polynomial estimation. The data are (y_i, X_i), where y_i is a real number and X_i is a d-dimensional vector, and the regression function of y_i on X_i is [21]:

m(x) = E(y | X = x).

We want to estimate the value m(x_0) of the function m(x) at the point x_0. If there were a large number of observations with X = x_0, those observations could simply be averaged. However, when there is no observation at X = x_0, a natural idea is to approximate m(x_0) using the function values at other points in the domain. Since m(x) is usually assumed to be smooth, the function values at points closer to x_0 should be closer to m(x_0) and are therefore given larger weight, while points farther from x_0 are given smaller weight. The weights can be represented by a bounded-support kernel function, which yields the kernel estimate.
In particular, when h_{ni} = h_n, the Nadaraya-Watson estimate is obtained. When d = 1, it is

m̂(x_0) = Σ_i K_h(X_i − x_0) y_i / Σ_i K_h(X_i − x_0),

where K_h(·) = h^{−1}K(·/h) and h is the window width.
When d > 1, the estimate becomes

m̂(x_0) = Σ_i K_H(X_i − x_0) y_i / Σ_i K_H(X_i − x_0),

where K is a d-dimensional kernel function with bounded compact support and bounded Hessian matrix, ∫K(u)du = 1, and H is a d × d symmetric positive definite window width matrix. In practice, H is often taken as a diagonal matrix and K as the Cartesian product of d univariate kernel functions. The kernel function usually takes one of two types. One is the Gaussian kernel:

K(u) = (2π)^{−1/2} exp(−u²/2).

The other is the symmetric Beta family of kernels, proportional to (1 − u²)^γ_+. When γ = 0, it is the uniform kernel; when γ = 1, the Epanechnikov kernel; when γ = 2, the biweight kernel; and when γ = 3, the triweight kernel. We consider the model Y = m(X) + σ(X)ε, where E(ε) = 0, Var(ε) = 1, and X and ε are independent.
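For d = 1, the Nadaraya-Watson estimate is simply a kernel-weighted average and can be sketched as follows (NumPy; function names are ours, using the Epanechnikov kernel):

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel: 0.75 (1 - u^2) on [-1, 1], 0 elsewhere
    return 0.75 * np.maximum(1.0 - u**2, 0.0)

def nadaraya_watson(x0, X, Y, h, kernel=epanechnikov):
    # locally weighted average: sum K_h(X_i - x0) Y_i / sum K_h(X_i - x0)
    w = kernel((X - x0) / h) / h
    return np.sum(w * Y) / np.sum(w)
```

A quick sanity check: the estimator reproduces a constant response exactly, and on a symmetric design it recovers a linear trend at interior points.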
The sample is (X_i, y_i), i = 1, ..., n. If we assume that m(x) is differentiable up to order p + 1 at the point x_0, then by a Taylor expansion

m(x) ≈ Σ_{j=0}^{p} [m^{(j)}(x_0)/j!](x − x_0)^j.

We also consider that the local approximation is relatively accurate for points close to x_0, which are therefore given larger weight, while farther points are given smaller weight. The following weighted least squares is obtained:

min_β Σ_i [ y_i − Σ_{j=0}^{p} β_j (X_i − x_0)^j ]² K_h(X_i − x_0),

where h is the window width and K(·) is a symmetric bounded-support kernel density function.
We define X as the design matrix with rows (1, (X_i − x_0), ..., (X_i − x_0)^p), y = (y_1, ..., y_n)^T, β = (β_0, ..., β_p)^T, and W = diag{K_h(X_i − x_0)}; then, formula (17) can be written as min_β (y − Xβ)^T W (y − Xβ), and the corresponding solution is

β̂ = (X^T W X)^{−1} X^T W y.

The use of local polynomials involves the following issues. (1) The choice of window width: local polynomials are sensitive to the window width. Too large a window width causes a large estimation bias; too small a window width causes a large estimation variance. Two window widths are commonly used, the global optimal window width and the local optimal window width. When the estimated function m(x) has similar smoothness over the entire region, the global optimal window width is used; when its smoothness varies greatly over the region, the local optimal window width is used. Because the optimal window width involves unknown parameters, rule-of-thumb and plug-in methods are often used in practice, as are data-driven methods such as cross-validation and generalized cross-validation. (2) The choice of order p: for a fixed window width, a large p reduces the bias but increases the variance and the amount of computation, while a small p increases the bias. When estimating the jth derivative m^{(j)}(x), p = j + 1 or p = j + 3 is often used. In particular, when estimating m(x) itself, the expansion is often taken only to the first order; this special case of local polynomial estimation is local linear estimation. (3) The choice of kernel function: the kernel has little effect on the final estimate. Under the criterion of minimizing the MSE, the optimal kernel is the Epanechnikov kernel.
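The local linear special case is just the weighted least-squares solution above with design (1, X_i − x_0); a minimal sketch (names ours):

```python
import numpy as np

def local_linear(x0, X, Y, h):
    """Local linear fit at x0: weighted LS on the design [1, X_i - x0]."""
    u = (X - x0) / h
    w = 0.75 * np.maximum(1.0 - u**2, 0.0)          # Epanechnikov weights
    D = np.column_stack([np.ones_like(X), X - x0])  # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(D.T @ W @ D, D.T @ W @ Y)
    return beta[0]  # intercept = m_hat(x0); beta[1] estimates m'(x0)
```

A useful property for checking the code: local linear estimation reproduces any exactly linear m(x) with no bias, at interior and boundary points alike.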
We assume (Y_i, W_i, V_i, Z_i), i = 1, ..., n, to be a random sample from the following variable coefficient partial linear measurement error model:

Y = X^T β + Z^T α(V) + ε,  W = X + U.  (20)
Among them, V and Z are one-dimensional and p-dimensional covariates, respectively, measured without error; X is the d-dimensional covariate with measurement error, which is not actually observed; W is the observed surrogate for X; ε is the model error with E(ε | X, V, Z) = 0; and U is the measurement error with mean zero and covariance matrix Σ_uu. We assume that U is independent of X, V, Z, and Y.
Regardless of variable selection, since E(ε | V, Z) = E(E(ε | X, V, Z) | V, Z) = 0, taking conditional expectations in model (20) given (V, Z) and subtracting, we get

Y − E(Y | V, Z) = [X − E(X | V, Z)]^T β + ε.

Using common nonparametric smoothing methods, such as Nadaraya-Watson estimation, we obtain Ê(Y | V_i, Z_i) and Ê(X | V_i, Z_i). The natural way to estimate β is then least squares:

β̂ = argmin_β Σ_i { Y_i − Ê(Y | V_i, Z_i) − [X_i − Ê(X | V_i, Z_i)]^T β }².  (23)

Due to the measurement error in X, if the error is ignored and W is used directly in place of X, the estimates will inevitably be inconsistent. In linear models, the so-called "correction for attenuation" method is often used to correct the inconsistency caused by measurement errors. Applying this method to the above model gives

β̂ = [ Σ_i (W_i − Ŵ_i)(W_i − Ŵ_i)^T − nΣ_uu ]^{−1} Σ_i (W_i − Ŵ_i)(Y_i − Ŷ_i),  (25)

where Ŵ_i = Ê(W | V_i, Z_i) and Ŷ_i = Ê(Y | V_i, Z_i). From the previous discussion, we know that most variable selection procedures can be implemented by adding penalty functions, and the corresponding penalized least squares is

Σ_i [ Y_i − Ŷ_i − (W_i − Ŵ_i)^T β ]² − nβ^T Σ_uu β + n Σ_j p_λ(|β_j|).  (27)

From the variable selection discussion in the introduction, we know that among all penalty functions the SCAD penalty has good statistical properties, so this study chooses the SCAD penalty as the penalty term of (27), with the specific definition as in (2).
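The effect of the correction is easy to see in a simulated scalar example (a sketch under our own simulation settings: true β = 2, measurement error variance Σ_uu = 0.5 treated as known; the nonparametric centering step is omitted because the simulated covariates already have mean zero):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true, sig_u2 = 5000, 2.0, 0.5

X = rng.normal(size=n)                                 # true covariate (unobserved)
W = X + rng.normal(scale=np.sqrt(sig_u2), size=n)      # observed surrogate W = X + U
Y = beta_true * X + rng.normal(scale=0.3, size=n)

beta_naive = (W @ Y) / (W @ W)              # ignores the error: attenuated toward 0
beta_corr = (W @ Y) / (W @ W - n * sig_u2)  # correction for attenuation
```

The naive slope is attenuated toward zero by the factor σ_x²/(σ_x² + σ_u²) (here 1/1.5), while subtracting nΣ_uu from the Gram matrix restores consistency, mirroring the −nΣ_uu term in (25).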
Next, we give the theoretical properties of the proposed penalized least-squares estimation. We write the true parameter as β_0 = (β_{10}^T, β_{20}^T)^T. Without loss of generality, we assume that β_{20} = 0 and s = dim(β_{10}).
We know that when an appropriate window width is selected and n is large, formula (27) has a penalized least-squares estimate with root-n convergence rate; we further give the Oracle properties of the estimate.
If λ_n → 0, √n λ_n → ∞, and lim inf_{θ→0+} p′_{λ_n}(θ)/λ_n > 0, then with probability tending to 1 the local minimizer in Theorem 1 satisfies the following: (a) sparsity: β̂_2 = 0; (b) asymptotic normality: √n(β̂_1 − β_{10}) converges in distribution to a normal law. In the same way, m_2(V_0, Z_0) can be found.
Step 1. Since the SCAD penalty does not have a continuous second derivative, Newton's method cannot be used directly. Fan and Li proposed a local quadratic approximation (LQA) to solve this problem. It first gives an initial value β_0 of β; if a component β_{j0} is very close to 0, then β̂_j = 0; otherwise, the penalty is approximated by

p_λ(|β_j|) ≈ p_λ(|β_{j0}|) + (1/2){ p′_λ(|β_{j0}|)/|β_{j0}| }(β_j² − β_{j0}²).

At this point, minimizing (27) reduces to minimizing a quadratic form. Minimizing (33) yields the solution

β̂ = [ Σ_i (W_i − Ŵ_i)(W_i − Ŵ_i)^T − nΣ_uu + nΣ_λ(β_0) ]^{−1} Σ_i (W_i − Ŵ_i)(Y_i − Ŷ_i),

where Σ_λ(β_0) = diag{ p′_λ(|β_{j0}|)/|β_{j0}| }. We use this formula as the iteration of Newton's method and iterate until convergence.
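The iteration can be sketched for the error-free case (W = X, Σ_uu = 0), which keeps the code short; the ridge-like update matrix X^T X + nΣ_λ(β^{(k)}) is exactly the LQA quadratic form above (all names and the simulation settings are ours):

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    # derivative of the SCAD penalty, as in (2)
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a*lam - t, 0.0) / (a - 1.0))

def lqa_scad(X, y, lam, a=3.7, n_iter=100, eps=1e-6):
    """SCAD-penalized least squares via local quadratic approximation."""
    n = X.shape[0]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS initial value
    for _ in range(n_iter):
        ab = np.maximum(np.abs(beta), eps)            # floor to avoid division by 0
        Sigma = np.diag(scad_deriv(ab, lam, a) / ab)  # LQA weight matrix
        beta = np.linalg.solve(X.T @ X + n * Sigma, X.T @ y)
    beta[np.abs(beta) < 1e-4] = 0.0                   # set collapsed coefficients to zero
    return beta

# toy sparse-regression example (our settings)
rng = np.random.default_rng(2)
n, beta_true = 200, np.array([3.0, 1.5, 0.0, 0.0, 2.0])
X = rng.normal(size=(n, 5))
y = X @ beta_true + rng.normal(scale=0.3, size=n)
beta_hat = lqa_scad(X, y, lam=0.3)
```

Small coefficients are driven geometrically to zero by the growing LQA weight λ/|β_j|, while coefficients beyond aλ receive zero penalty weight and stay essentially at their least-squares values.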
From the preliminary knowledge above, we know that the choice of window width has a significant impact on the estimation of E(Y | V) and E(W | V). If the window width is too small, the estimated variance is too large; if it is too large, a bias is introduced. The choice of the threshold parameter also has a direct impact on the final variable selection: if the threshold parameter is set too large, some significant variables will be excluded from the model, resulting in underfitting, while if it is set too small, it will not serve the purpose of selecting variables. This study chooses cross-validation to select the window width and threshold parameters.
Among them, β̂^{(−i)} is the estimate of β obtained by removing the ith sample from the total sample and performing the regression on the rest.
When p is small, a grid search can be used to find the optimal parameter values. When p is large but the components of V behave similarly, for example, independent and identically distributed, a common window width can be used. When p is large and the components of V differ greatly, directly minimizing over the (p + 2)-dimensional space becomes very difficult.
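Leave-one-out cross-validation with a grid search can be sketched for the Nadaraya-Watson case (names and simulation settings ours; the same grid-search idea applies to the threshold parameter λ):

```python
import numpy as np

def nw_loocv(X, Y, h):
    """Leave-one-out CV score of a Nadaraya-Watson fit with bandwidth h."""
    U = (X[:, None] - X[None, :]) / h
    K = 0.75 * np.maximum(1.0 - U**2, 0.0)   # Epanechnikov weight matrix
    np.fill_diagonal(K, 0.0)                 # drop the ith sample from its own fit
    denom = K.sum(axis=1)
    m_loo = (K @ Y) / np.where(denom > 0, denom, np.nan)
    return np.nanmean((Y - m_loo) ** 2)

def select_bandwidth(X, Y, grid):
    # grid search: pick the candidate window width with the smallest CV score
    scores = [nw_loocv(X, Y, h) for h in grid]
    return grid[int(np.argmin(scores))]

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(size=200))
Y = np.sin(2 * np.pi * X) + rng.normal(scale=0.1, size=200)
h_best = select_bandwidth(X, Y, [0.02, 0.05, 0.1, 0.3, 1.0])
```

Oversmoothing (h near 1) collapses the fit toward the global mean and inflates the CV score, so the search settles on a small bandwidth matched to the curvature of the underlying function.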
Although the above method is relatively simple and intuitive, the process of estimating β does not make full use of the information in model (20) but relies on multidimensional nonparametric estimation. As a result, when the dimensionality of Z is high, the curse of dimensionality appears. In this section, the local polynomial method combined with penalized least squares is used to improve the above method.
When α(·) is known and X has no measurement error, we can use the following penalized least squares to estimate β:

Σ_i [ Y_i − X_i^T β − Z_i^T α(V_i) ]² + n Σ_j p_λ(|β_j|).  (36)
When X has measurement error, directly substituting W for X in the above formula leads to inconsistent estimates. Applying the "correction for attenuation" method again, we get

Σ_i [ Y_i − W_i^T β − Z_i^T α(V_i) ]² − nβ^T Σ_uu β + n Σ_j p_λ(|β_j|).  (37)

Since the specific functional form of α(·) is not given, the above formula cannot be minimized directly with respect to β. We use the local linear method to first estimate α(V_i), i = 1, ..., n, and then substitute the estimates into equation (37). Since the measurement error U has mean zero, W can be used directly in place of X in the local linear estimation, and no correction is needed there.
Specifically, for a point v in a neighborhood of u, the local linear approximation α_j(v) ≈ a_j + b_j(v − u) holds. We set a = (a_1, ..., a_p)^T and b = (b_1, ..., b_p)^T. Minimizing over a, b, and β, the following local least squares exists:

Σ_i [ Y_i − W_i^T β − Z_i^T (a + b(V_i − u)) ]² K_h(V_i − u),

where K(·) is the kernel function and K_h(t) = h^{−1}K(t/h). If {â, b̂} is the minimizing solution, then α̂(u) = â. We substitute the estimate of α into (37) to obtain the penalized least squares (41), which can also be written in matrix form. The above one-step estimation does not require iteration, so the calculation speed is faster. Considering that jointly minimizing over a, b, and β involves a high dimension, which may lead to instability of the estimate, the following full iterative algorithm can also be used.
Step 1. The algorithm gives an initial estimate β̂ of β. Step 2. Given β̂, a and b are obtained by minimizing the local least squares above. Step 3. Given α̂, β̂ is updated by minimizing the penalized least squares with respect to β. Step 4. The algorithm iterates Step 2 and Step 3 until convergence.
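The full iterative algorithm can be sketched for the simplest case Z ≡ 1 (so Z^T α(V) reduces to a single unknown function α(V)), error-free X, and no penalty term; all names and simulation settings are ours:

```python
import numpy as np

def local_linear_smooth(V, R, h):
    """Local linear estimate of E[R | V = v] at each sample point (Step 2)."""
    n = len(V)
    out = np.empty(n)
    for i in range(n):
        u = (V - V[i]) / h
        w = 0.75 * np.maximum(1.0 - u**2, 0.0)           # Epanechnikov weights
        D = np.column_stack([np.ones(n), V - V[i]])
        A = D.T @ (D * w[:, None])
        out[i] = np.linalg.solve(A, D.T @ (w * R))[0]    # intercept = fitted value
    return out

def iterate_fit(Y, X, V, h, n_iter=15):
    """Alternate: update alpha(.) given beta (Step 2), then beta given alpha (Step 3)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        alpha_hat = local_linear_smooth(V, Y - X @ beta, h)        # Step 2
        beta = np.linalg.lstsq(X, Y - alpha_hat, rcond=None)[0]    # Step 3
    return beta, alpha_hat

# toy example: Y = X beta + alpha(V) + noise (our settings)
rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 2))
V = rng.uniform(size=n)
Y = X @ np.array([1.0, -1.0]) + np.sin(2 * np.pi * V) + 0.1 * rng.normal(size=n)
beta_hat, alpha_hat = iterate_fit(Y, X, V, h=0.1)
```

Because X is independent of V here, the smoothing step absorbs essentially none of the linear part, and the alternation settles quickly on both β and the curve α(·).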
Similarly, we choose cross-validation to select the window width and threshold parameter values.
Among them, β̂^{(−i)} is the estimate of β obtained by removing the ith sample from the total sample and performing the regression on the rest.

Error Correction Mechanism
VECM is used for the test. VECM is derived on the basis of the VAR model and is often referred to as the cointegrated VAR model. When economic theory cannot fully reveal the inherent tight specification of a multivariable system, the VAR model is often used to describe the multivariable dynamic system and avoid the problem of simultaneity bias. In fact, VAR is a linear model with n variables and n equations: in VAR(p), each variable in the system is interpreted as a linear function of its own lag values of orders 1 to p and those of the other variables in the system. The main uses of VAR are causality tests and impulse response analysis. The VAR model that introduces an error correction mechanism is called the vector error correction model (VECM). VECM is a derivative model of VAR and is mainly used for the correlation analysis of cointegrated sequences. The general form of VECM is

ΔY_t = ΠY_{t−1} + Σ_{i=1}^{p−1} Γ_i ΔY_{t−i} + ε_t.  (47)
If Y_t ~ I(1) and is cointegrated, then the dynamic characteristics of Y_t can be described by equation (48). The left-hand side ΔY_t is an I(0) sequence, so the right-hand side must also be stationary. This requires that Y_{t−1} become stationary after multiplication by the matrix Π (i.e., after some linear transformation). Therefore, the matrix Π contains a mechanism that can make Y_t stationary without the difference operation.
Π is an n × n matrix. Assuming Rank(Π) = r (0 ≤ r ≤ n), Π can be decomposed into the product of two n × r matrices, Π = αβ′, with Rank(α) = Rank(β) = r. Whether Y_t ~ I(0) or Y_t ~ I(1), and, in the I(1) case, whether Y_t is cointegrated or not, β′Y_{t−1} ~ I(0) must hold; the only difference is that the rank of Π differs across these situations. ① When Y_t ~ I(1) and Y_t is cointegrated, β is the cointegration vector of Y_t, so β′Y_{t−1} ~ I(0) can be satisfied, and 0 < Rank(Π) = r < n. The reason time series are cointegrated is that they are driven by the same stochastic trend, so after a simple linear operation the common trend can be eliminated and the combination becomes stationary. In this case, r is called the cointegration rank of Y_t. ② When Y_t ~ I(1) and Y_t is not cointegrated, Rank(Π) = 0. The non-cointegration of Y_t means that there is no common I(1) trend, so the only way to satisfy β′Y_{t−1} ~ I(0) is β = 0, that is, Π = 0, and therefore Rank(Π) = 0. In this case, equation (48) degenerates into a VAR(p − 1) in differences. ③ When Rank(Π) = n, there must be Y_t ~ I(0). When Π is a full-rank matrix, the row and column vectors of β are uncorrelated with each other, and no I(1) trend is removed; therefore, β′Y_{t−1} ~ I(0) can be satisfied only when Y_t itself is stationary. In this case, the analysis of Y_t should adopt the VAR(p) model. In summary, only the case where Y_t ~ I(1) and Y_t is cointegrated is suitable for description by formula (48). Therefore, the empirical test must first determine the stationarity and cointegration of the sequences. VECM can well reflect the long-term and short-term adjustment relationships within the dynamic system, and equation (48) can also be written as

ΔY_t = αβ′Y_{t−1} + Σ_{i=1}^{p−1} Γ_i ΔY_{t−i} + ε_t.
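To make the mechanism concrete, a two-variable special case can be sketched with the Engle-Granger two-step procedure: the cointegrating regression produces β′Y_{t−1} as a residual, and regressing ΔY_t on its lag estimates the adjustment speed α (the VECM above generalizes this through Π = αβ′). The simulation settings are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = np.cumsum(rng.normal(size=n))      # I(1) common stochastic trend
u = np.zeros(n)                        # stationary AR(1) deviation, coefficient 0.5
for t in range(1, n):
    u[t] = 0.5 * u[t-1] + rng.normal(scale=0.5)
y = 2.0 * x + u                        # y and x cointegrated: y - 2x ~ I(0)

# Step 1: cointegrating regression; residuals are the error-correction term
b = (x @ y) / (x @ x)
ec = y - b * x

# Step 2: regress dy_t on ec_{t-1} and the short-run term dx_t
dy, dx = np.diff(y), np.diff(x)
D = np.column_stack([np.ones(n - 1), ec[:-1], dx])
coef = np.linalg.lstsq(D, dy, rcond=None)[0]
alpha = coef[1]                        # adjustment speed, expected negative
```

Here y_t − 2x_t follows an AR(1) with coefficient 0.5, so the estimated adjustment speed should be close to −0.5: roughly half of any deviation from equilibrium is removed each period.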
Once the term β′Yt−1 is nonzero, ΔYt at the next period receives feedback from this term. When the I(0) sequence β′Yt has zero mean, any value of β′Yt−1 above or below zero represents a deviation from the equilibrium state. Since ΔYt does not diverge, once β′Yt−1 deviates from equilibrium, the deviation is fed back into ΔYt and reduced at a speed of α times the deviation, so that β′Yt at the next period moves closer to its equilibrium state. For a sequence with a nonzero mean, the VECM adjustment likewise removes the influence of the mean within a short time, so that the value of β′Yt oscillates near zero. The β′Yt−1 term in equation (20) is often referred to as the error correction (EC) mechanism of the VECM; it represents the deviation of the system from its long-term equilibrium state at the previous period. An important use of the model is the analysis of causality. From the description above, there are two levels of causality in a VECM: long term and short term.
(1) Short-term causality. Δyit is affected by the lag values (from 1 to p−1) of Δy1, Δy2, ..., Δyn, and this influence is fed back into the value of Yt. These influences can be regarded as a kind of noise that may move Yt toward equilibrium or push it away from equilibrium. (2) Long-term causality. Since the cointegration relationship among the components of Yt holds in the long run, once ECt−1 feeds back a deviation from the long-term equilibrium state, the VECM adjusts ΔYt toward equilibrium at a speed of α times the deviation. Short-term causality is judged by the joint significance of the coefficients on the lagged difference terms (lags 1 to p−1) in the VECM estimates, and it expresses how ΔYt is affected by the lagged difference terms. Long-term causality is judged by the significance of the coefficient on the error correction term ECt−1, and it expresses how ΔYt is affected by Yt−1.
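The two channels of causality can be sketched in a single-equation estimate. The sketch below is an illustration on simulated data with a known adjustment speed α = −0.3 (the DGP, the lag length of one, and all numbers are assumptions, not the study's data): the coefficient on ECt−1 captures the long-term channel, and the coefficients on the lagged differences capture the short-term channel.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800

# Simulate a bivariate error-correction DGP with known adjustment speed:
# x_t is a pure random walk; y_t corrects toward x_t at speed alpha = -0.3.
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = x[t-1] + rng.normal()
    y[t] = y[t-1] - 0.3 * (y[t-1] - x[t-1]) + rng.normal(scale=0.5)

dy, dx = np.diff(y), np.diff(x)
ec = (y - x)[:-1]                    # error correction term EC_{t-1}

# Single equation for Δy_t: regress on EC_{t-1} (long-term channel) and one
# lag of each difference (short-term channel), as in the VECM equation.
Z = np.column_stack([np.ones(n - 2), ec[1:], dy[:-1], dx[:-1]])
coefs = np.linalg.lstsq(Z, dy[1:], rcond=None)[0]
alpha_hat = coefs[1]                 # estimated long-run adjustment speed
print(round(alpha_hat, 2))
```

In practice the significance of alpha_hat would be assessed with its t statistic, and the short-term channel with a joint test on the lagged-difference coefficients.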

The Impact of Interest Rate and Virtual Financial Reforms on GDP Growth Based on the Error Correction Model
The error correction mechanism of this article is as follows:

(1) Improve credit policy support and promote the rational allocation of financial resources
(2) Actively promote the marketization of interest rates and comprehensively improve the utilization of capital
(3) Speed up the development of multiple financing channels and gradually improve the financial structure dominated by indirect financing
(4) Attach importance to the establishment of a credit system and create excellent conditions for the optimal allocation of financial resources
(5) Grasp the development of the situation and ensure the stability of market expectations

When the nominal interest rate does not change, the real interest rate moves in the opposite direction to prices; when the real interest rate does not change, the nominal interest rate moves in the same direction as prices. Based on this, we can construct an interest rate fluctuation mechanism based on changes in the price level, as shown in Figure 3. The interest rate is determined by the supply of and demand for money: it is the price of money loans when money supply and demand reach equilibrium, so changes in the interest rate level depend on changes in money supply and demand. Based on this, we can construct an interest rate fluctuation mechanism based on changes in the money supply, as shown in Figure 4(a). The RMB exchange rate has a significant negative impact on interest rates, while interest rates have no significant impact on the RMB exchange rate. We can therefore construct an exchange-rate-based interest rate fluctuation mechanism, as shown in Figure 4(b). The strength of the investment effect of interest rate fluctuations directly affects how sensitively investment responds to interest rate changes and has an important impact on the effective implementation of interest rate policy.
Keynes's interest rate transmission theory shows that changes in the interest rate are negatively correlated with changes in investment. Based on this, we can construct the formation mechanism of the investment effect of interest rate fluctuations, as shown in Figure 4(c).
Changes in interest rates inevitably alter consumer behavior and thus the total consumption of the whole society. According to New Keynesian monetary transmission theory, a rise in interest rates causes a decline in investment and in the demand for consumer durables. Based on this, we can construct the formation mechanism of the consumption effect of interest rate fluctuations, as shown in Figure 4(d).
In the virtual economy market, while original virtual capital such as stocks and bonds has a clear connection with the real economy, the trading of financial derivatives such as options and swaps derived from stocks and bonds is a further manifestation of economic virtualization and the most obvious manifestation of the virtual economy. The trading of options, swaps, and other derivative products is intertwined with the banking market and the foreign exchange market and has become the main body of the virtual economic system. The emergence of financial derivatives and the development of stocks and bonds in the global market have increased the size of the virtual economy and further strengthened its influence on the real economy.
With the continuous expansion of the form and scope of credit relationships, financial innovation activities have gained sufficient sources and motivation, and financial derivatives have sprung up in the capital market. The margin system for transactions in the derivatives market gives transactions a multiplied creative capability, that is, financial leverage. As a result, the development of the financial derivatives market has advanced by leaps and bounds. Figure 5 shows the model of credit creation and financial virtual operation. The decline in market interest rates, the rise in foreign exchange reserves, and the rise in exchange rates have caused a significant increase in liquidity. Under conditions of slow technological progress and inadequate financial supervision, only a small share of the excess liquidity flows into the real economy, while a large share flows into the virtual economy, causing the prices of virtual assets to rise. When the macroeconomy is performing well, the public holds optimistic expectations, and supervision is not in place, asset prices are pushed still higher, and this cycle leads to the formation of a virtual economic bubble. Through the above analysis, the process of the virtual economy bubble can be represented as shown in Figure 6.
After constructing the above model, this study uses existing network data to study the model and, through the error correction model, analyzes the relevance of the impact of interest rate and virtual financial reform on GDP growth. The GDP growth rate and the lagged value of the virtual economy composite index are selected to represent macroeconomic operating conditions and public expectations, respectively. To quantitatively measure the level of science and technology, we must first distinguish two concepts: science and technology in the narrow sense and science and technology in the broad sense. Science and technology in the narrow sense refers to natural science, mainly the science and technology related to the production and operation activities of microeconomic agents. Science and technology in the broad sense refers to factors other than the input factors of production that affect output, such as the level of management. The research in this study concerns science and technology in the broad sense. The factor A in the commonly used Cobb-Douglas production function (the C-D production function) is also a reflection of the broad level of science and technology; therefore, the measurement in this article is of broad science and technology. Figure 7 and Table 1 present the analysis of the relevance between interest rate reform and GDP growth, and Figure 8 and Table 2 present the analysis of the relevance between virtual financial reform and GDP growth.
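The recovery of broad technology A from the C-D production function can be sketched as follows. With Y = A·K^a·L^(1−a), taking logs gives ln A = ln Y − a·ln K − (1−a)·ln L (the Solow residual). The capital share a = 0.4 and the input series below are illustrative numbers, not data from this study:

```python
import numpy as np

# Illustrative Solow-residual calculation under Y = A * K^a * L^(1-a).
a = 0.4                                   # assumed capital share
K = np.array([100.0, 110.0, 121.0])       # made-up capital series
L = np.array([50.0, 51.0, 52.0])          # made-up labor series
A_true = np.array([1.00, 1.03, 1.06])     # technology level to recover
Y = A_true * K**a * L**(1 - a)            # output implied by the C-D form

# Broad technology recovered as the residual of the log production function.
A_hat = np.exp(np.log(Y) - a * np.log(K) - (1 - a) * np.log(L))
print(np.round(A_hat, 2))
```

With real data, a would itself be estimated (e.g., from factor income shares) before the residual is computed.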
From the above experimental research, it can be seen that the error correction model proposed in this study can play an important role in the analysis of GDP growth factors; at the same time, the experiments verify that virtual financial reform and interest rate reform have a certain impact on GDP growth and a certain degree of relevance to it.

Conclusion
With the continuous development and deepening of the market economy, the virtual economy accounts for an increasing proportion of the economic system. However, although the virtual economy initially developed on the basis of, and in the service of, the real economy, it has acquired a certain degree of independence and has even departed from the real economy to some extent. The difference between the capitalized pricing of virtual assets and the cost- and technology-based pricing of physical assets has become the fundamental reason for the deviation between the virtual economy and the real economy. The capitalized pricing of virtual assets and people's limited ability to predict the market make the prices of virtual assets extremely unstable, which gives the virtual economy its inherent volatility and high risk. In addition, as the scale of the virtual economy continues to grow, this inherent volatility and high risk can easily cause severe damage to the entire economy. This study analyzes the relevance of the impact of interest rate and virtual financial reform on GDP growth through the error correction model and concludes that virtual financial reform and interest rate reform have a certain impact on GDP growth.

Data Availability
The labeled dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest
The authors declare no conflicts of interest.