Intelligent Recommendation Algorithm Combining RNN and Knowledge Graph

With the continuous application and development of big data and algorithm technology, intelligent recommendation algorithms are gradually affecting all aspects of people's daily lives. Their impact has both advantages and disadvantages: they can make life more convenient, but they also raise problems such as privacy invasion and information cocoons. How to optimize intelligent recommendation algorithms so that they serve society more safely and efficiently is therefore a problem that needs to be solved. We propose an intelligent recommendation algorithm combining a recurrent neural network (RNN) and a knowledge graph (KG) and analyze its performance through model construction and experiments. The results show that among five different recommendation models, the model combining RNN and knowledge graph achieves the highest AUC and ACC values on both the Book-Crossing and MovieLens-1M datasets. At the same time, the algorithm's rating-prediction errors are small (less than 2%) when extracting different users' ratings of different books. In addition, the proposed algorithm has the lowest RMSE and MAE values among three compared recommendation algorithms, indicating better performance and stability, which is important for improving the user recommendation effect.


Introduction
The continuous development of intelligent technology gives recommendation algorithms an increasingly important influence on people's lives. In daily life, shopping, travel, and watching videos are all closely related to intelligent recommendation algorithms. These algorithms mainly rely on big data: by extracting and analyzing a large amount of data, they learn the behavioral preferences of different users and make intelligent recommendations that meet each user's actual needs [1]. For example, in the short-video field, platforms extract and analyze users' past viewing habits to generate user profiles and, on this basis, push content or products the user is likely to enjoy. With their excellent performance, intelligent recommendation algorithms can accurately push different information to each user, reducing the cost and difficulty of accessing information. There are different types and forms of intelligent recommendation algorithms, such as content-based, model-based, and social relationship-based recommendations, and collaborative filtering. At the algorithm level, there are clustering algorithms, classification algorithms, association algorithms, and so on [2]. Different recommendation algorithms produce different recommendation effects. In this study, we conduct experiments and analysis on an intelligent recommendation algorithm combining a recurrent neural network (RNN) and a knowledge graph (KG) to study how to improve the performance and efficiency of recommendation algorithms.
There are two main innovations in this research. The first is to combine the knowledge graph with the recommendation algorithm to address the data sparsity and cold-start problems of intelligent recommendation. The second is to use a deep learning recurrent neural network to mine user preferences deeply and to propose a recommendation algorithm based on a recurrent neural network and a weighted knowledge graph, thus reducing the influence of low-correlation results on recommendations. The proposed method accounts for the fact that relationships between entities carry different weights when the knowledge graph is used as a heterogeneous information network for recommendation. The combination of knowledge graph and deep learning improves the recommendation effect and, to a certain extent, alleviates the data sparsity and cold-start problems of traditional recommendation algorithms.

Related Works
RNN and knowledge graphs have been widely studied and applied in different fields. For example, to address the proliferation of fake news, Pimpalkar et al. applied techniques such as recurrent neural networks to analyze, detect, and classify fake news, and the results showed that the method was effective in countering it [3]. Marquez et al. applied a recurrent neural network approach to predict ethanol consumption in Brazil and showed that it had good performance and prediction accuracy [4]. Ananthanatarajan et al. applied recurrent neural networks to the prediction of wind energy change and showed that the RNN model performs better than other traditional methods [5]. For problems such as stock market value prediction, Moghar and Hamiche applied recurrent neural network methods for model construction and analysis, which helped to improve prediction accuracy [6]. Zheng et al. applied recurrent neural networks to electricity consumption prediction and showed that the performance of household electricity consumption prediction can be significantly improved if prediction is performed at the granularity of individual appliances [7]. Laib et al. applied recurrent neural networks to predict natural gas consumption and, by building a model and comparing it with models based on other methods, found that their method possessed higher accuracy [8]. Wang et al. applied a recurrent neural network to the prediction of oil field production and verified through model construction that the method has a good prediction effect and higher accuracy [9]. To address the problem of frequent earthquakes in Taiwan, China, Chin et al. applied recurrent neural networks to earthquake early warning in the region and tested them with previously recorded seismograms, showing that the method can reduce processing time while maintaining high accuracy [10].
The study validated and analyzed the method through different scenarios and showed that it has better performance than traditional methods [11].
Shi applied a deep learning-based knowledge graph to an intelligent construction engineering Q&A system, and the study proved its good horizontal scalability [12]. Do et al. combined the knowledge graph with deep learning and applied it to the development of a Vietnam tourism Q&A system, and experiments showed its high accuracy and effectiveness [13]. For the problem of matching radiological imaging with medical reports, Hou et al. applied a knowledge graph to improve the original method and proved that the improvement possessed better performance [14]. Huang C. L. and Huang C. C. applied a knowledge graph to customize the flight training of student pilots, and the results showed that it could effectively reveal problems in course learning and provide a basis for subsequent work [15]. Fan et al. applied knowledge mapping to the analysis of sepsis-related literature to sort out its historical evolution and research content and thereby predict its development trend [16]. Zeng et al. applied knowledge graphs to drug discovery, using the method to facilitate drug repurposing and the prediction of adverse drug reactions [17]. In response to the problem of financial fraud in listed companies, Wu et al. constructed a knowledge graph framework to detect fraudulent companies and showed that it has high recognition ability, which helps to regulate companies effectively [18]. The knowledge graph has also been applied to the construction of a medical intelligent question-and-answer system for hepatitis B, providing a reference for doctors' diagnosis and treatment [19].
Combing through related studies shows that most researchers have applied RNNs or knowledge graphs to the prediction and analysis of different problems, often in combination with other methods and with good results, but few studies combine RNNs with knowledge graphs. To address this, the study proposes an intelligent recommendation algorithm combining RNN and knowledge graph and conducts experimental analysis of several optimization problems in user recommendation to improve the effect and performance of intelligent recommendation.

Intelligent Recommendation Algorithm Model Construction Combining RNN and Knowledge Graph

RNN Algorithm Model Construction
RNN is a neural network that takes sequential data as input and processes it in a recursive manner. It uses sequential relationships for input and output during operation, such that the output of the previous neuron can be used as the input of the next neuron, which gives the preceding and following data a certain causal relationship [20]. The structure of the RNN is shown in Figure 1.
As can be seen from the RNN structure diagram, each input layer performs the same internal computation on its input, so the parameters in this structure are shared and the required training time is greatly reduced. Given an input sequence x = {x_1, x_2, ⋯, x_t}, the hidden layer s_t is

s_t = f(U x_t + W s_{t−1}), (1)

where f is a nonlinear function, U and W are weight matrices, and s_0 is set to zero when the hidden state of the first step is computed. Depending on the task, the output at moment t is not always needed: in some tasks only the last output is of interest, while in others the output at each moment is required. The output value o_t is

o_t = g(V s_t), (2)

where g is the activation function, V is the output weight matrix, and the hidden layer s_t captures the input information of all previous moments. Because RNNs suffer from gradient vanishing and explosion, it is difficult for them to preserve long-term information effectively. RNNs have therefore evolved into two main variants: long short-term memory (LSTM) networks and gated recurrent units (GRU). An LSTM has three types of gates: input gates, forget gates, and output gates. During the operation of an LSTM recurrent neural network, data sampling and preprocessing are required first, and the received sequence is passed to the next layer only after normalization. The normalized data are then transferred layer by layer between the multilayer LSTM cells, which compensate for and reconstruct distorted signals before the estimated signal sequence is produced as the output. The gate structure of the LSTM unit is shown in Figure 2.
The gradient vanishing problem is alleviated by changing the chained multiplication of small gradient values into chained addition. The forget gate f_t of the LSTM determines whether the memory of the previous unit should be forgotten or retained:

f_t = σ(W_f · [h_{t−1}, x_t] + b_f), (3)

where W denotes a weight matrix, [h_{t−1}, x_t] denotes the input concatenated with the previous hidden state, and b denotes a bias. The input gate i_t extracts information from the current input:

i_t = σ(W_i · [h_{t−1}, x_t] + b_i). (4)

The memory cell C_t combines the current memory with the new candidate content C̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C):

C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t. (5)

Equation (5) is the update of the cell state C, which carries the memory signal forward and effectively mitigates the vanishing-gradient problem. The output gate o_t and the hidden state h_t are

o_t = σ(W_o · [h_{t−1}, x_t] + b_o), h_t = o_t ⊙ tanh(C_t). (6)

The gated recurrent unit removes the memory cell C, transmits information through the hidden state alone, and merges the forget and input gates into a single update gate, thus simplifying the internal structure of the unit for efficiency. The update gate z_t determines the degree to which current and previous information is retained:

z_t = σ(W_z · [h_{t−1}, x_t]), (7)

and the reset gate r_t determines the extent to which previous information is discarded:

r_t = σ(W_r · [h_{t−1}, x_t]). (8)

The hidden state h_t of the gated recurrent unit is a linear interpolation between the previous hidden state h_{t−1} and the candidate hidden state h̃_t = tanh(W · [r_t ⊙ h_{t−1}, x_t]):

h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t. (9)
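As a concrete sketch of the gate equations above, the following NumPy snippet implements a single LSTM step. The stacked weight layout W, the bias layout b, and the toy dimensions are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following Equations (3)-(6): every gate reads the
    concatenation [h_{t-1}, x_t]; W stacks the four gate weight matrices
    (forget, input, candidate, output) and b the four biases."""
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f_t = sigmoid(W[0] @ z + b[0])         # forget gate, Eq. (3)
    i_t = sigmoid(W[1] @ z + b[1])         # input gate, Eq. (4)
    c_tilde = np.tanh(W[2] @ z + b[2])     # candidate memory content
    c_t = f_t * c_prev + i_t * c_tilde     # cell-state update, Eq. (5)
    o_t = sigmoid(W[3] @ z + b[3])         # output gate, Eq. (6)
    h_t = o_t * np.tanh(c_t)               # hidden state, Eq. (6)
    return h_t, c_t

# toy usage: hidden size 2, input size 3
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2, 5))             # 4 gates, each maps R^5 -> R^2
b = np.zeros((4, 2))
h, c = lstm_step(rng.normal(size=3), np.zeros(2), np.zeros(2), W, b)
```

Because o_t lies in (0, 1) and tanh(C_t) in (−1, 1), every component of the hidden state stays strictly inside (−1, 1).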

Knowledge Graph Model Construction.
A knowledge graph is essentially a semantic network that describes various things and their relationships. In this semantic network, nodes represent different things or concepts, and edges are constituted by semantic relationships. Construction of a knowledge graph is mainly achieved by extracting a large amount of data and then performing knowledge fusion, storage, editing, and annotation [21]. The knowledge graph structure diagram is shown in Figure 3.
As can be seen from the structure diagram, knowledge mapping is generally divided into three parts: information extraction, knowledge fusion, and knowledge processing.
Figure 1: RNN structure diagram.

Journal of Applied Mathematics
Information extraction means extracting things and the relationships between them from various kinds of data, whose forms are mainly unstructured, semistructured, and structured. The extracted information is often incomplete, lacks hierarchy and logic, and may even contain many duplicated or wrong fragments, so knowledge fusion must be performed on it. To make the knowledge graph more accurate and of higher quality, knowledge processing is also needed after information extraction and knowledge fusion, laying the foundation for subsequent applications. Applications of knowledge graphs mainly fall into two categories: general knowledge graphs and industry knowledge graphs. The general knowledge graph is a horizontal application with broad expansion, widely used in knowledge Q&A and Internet search. The industry knowledge graph emphasizes vertical expansion: it is applied mainly within a certain industry or field, with in-depth mining and application. Intelligent recommendation is one such application scenario: knowledge graphs are constructed from a large amount of different users' information to form user portraits, laying the foundation for recommending preferred information to users. When applying the knowledge graph for intelligent recommendation, collaborative filtering information can be incorporated to build a knowledge graph with weights, reducing the influence of interference factors and improving recommendation accuracy. Building the weighted knowledge graph takes three steps: constructing the item similarity set, generating the explanation paths, and generating the weighted knowledge graph.
First, let the number of users be m with set {Q_1, Q_2, ⋯, Q_m}, and the number of items be n with set {G_1, G_2, ⋯, G_n}. The rating matrix between them is

R = (R_ij)_{m×n}, (10)

where R_ij is the rating of user Q_i on item G_j, indicating the preference of user i for item j. From the matrix, items a and b have rating vectors I_a = {R_{1a}, R_{2a}, ⋯, R_{ma}} and I_b = {R_{1b}, R_{2b}, ⋯, R_{mb}}, and the cosine similarity between them is

sim(a, b) = (I_a · I_b) / (‖I_a‖ ‖I_b‖). (11)

The rating matrix tends to be unevenly distributed: when few users have rated both items, spuriously high similarity can occur. To address this problem, the item similarity calculation is improved to

sim′(a, b) = (|Q_ab| / (|Q_ab| + β)) · sim(a, b), (12)

where sim′(a, b) is the improved similarity, |Q_ab| is the number of users who rated both items a and b, and β is a shrinkage parameter: when |Q_ab| is small, β shrinks the similarity, while when |Q_ab| ≫ β it has little effect on the original similarity. Equation (12) can then be assembled into an item–item similarity matrix

S = (S_ab)_{n×n}, (13)

where S_ab is the improved similarity between items a and b. A threshold k is set, and the similarity matrix is filtered to exclude values smaller than k, giving the item similarity set M = {(a, b) | a, b ∈ G, S_ab > k, a ≠ b}. The knowledge graph can describe the relatedness between similar items by constructing paths, which are called interpretable (explainable) paths.
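The shrunk similarity of Equation (12) can be sketched as follows; the function name, the convention that a zero entry means "not rated", and the default β are illustrative assumptions.

```python
import numpy as np

def shrunk_similarity(R, a, b, beta=25.0):
    """Cosine similarity between item columns a and b of rating matrix R
    (Eq. (11)), shrunk by the number of common raters |Q_ab| (Eq. (12)).
    Zeros in R mean 'not rated'; beta is the shrinkage parameter."""
    common = (R[:, a] > 0) & (R[:, b] > 0)
    n_common = int(common.sum())           # |Q_ab|
    if n_common == 0:
        return 0.0
    ia, ib = R[:, a], R[:, b]
    cos = ia @ ib / (np.linalg.norm(ia) * np.linalg.norm(ib))
    return n_common / (n_common + beta) * cos   # Eq. (12)

# toy rating matrix: 3 users x 3 items
R = np.array([[8., 7., 0.],
              [9., 8., 5.],
              [0., 6., 7.]])
s = shrunk_similarity(R, 0, 1)
```

With few common raters the factor |Q_ab|/(|Q_ab| + β) is small, so sparsely co-rated item pairs cannot dominate the similarity ranking.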
In the set of all explainable paths T, the weight of relation r between two items c and d is related to the number of paths passing between them:

w_c^{(r,d)} = |{T_{c∼(r,d)}}| / Σ_{(r,d)′ ∈ S_k} |{T_{c∼(r,d)′}}|, (14)

where |{T_{c∼(r,d)}}| is the number of paths in T that go from item c through relation r to item d, and the denominator sums the path counts over all relation–tail pairs (r, d)′ reachable from c. The final form of the weighted knowledge graph is therefore WKG = (E, R, S, W) with W = {w_c^{(r,d)} | c, d ∈ E, r ∈ R}.
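The path-count normalization of Equation (14) can be sketched as follows, under the assumption that each explainable path is represented by its (head, relation, tail) hop; the function name and input format are illustrative.

```python
from collections import Counter

def path_weights(paths):
    """Weight of each relation edge (r, d) out of a head entity c,
    following Eq. (14): the count of explainable paths through (r, d),
    normalized by the total path count out of c."""
    by_head = {}
    for c, r, d in paths:                  # each path contributes one (c, r, d) hop
        by_head.setdefault(c, Counter())[(r, d)] += 1
    weights = {}
    for c, counts in by_head.items():
        total = sum(counts.values())       # denominator of Eq. (14)
        for edge, n in counts.items():
            weights[(c,) + edge] = n / total
    return weights

# toy example: two paths through ("c", "r", "d"), one through ("c", "r2", "e")
w = path_weights([("c", "r", "d"), ("c", "r", "d"), ("c", "r2", "e")])
```

The weights out of each head entity sum to 1, so they can be read directly as the relative strength of each propagation direction.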

An Intelligent Recommendation Algorithm Model Combining RNN and Knowledge Graph

Firstly, the user's preference set is obtained through the knowledge graph, and the algorithm then learns its representation to obtain vector representations of the different items in the preference set. The preference set is input in hierarchical order, and an attention mechanism is used to capture the different degrees of preference, yielding the user's preference vector. Finally, the user vector and the candidate item vector are combined to output the user's click probability on the item and generate the recommendation results. The flow chart is shown in Figure 4.
Propagation of users' historical interests is performed through the weighted knowledge graph, with the weights controlling the direction of propagation. Let U be the set of users and V the set of items, with interaction matrix Y = {y_uv | u ∈ U, v ∈ V}. The implicit feedback of users is

y_uv = 1 if there is an interaction between user u and item v, and y_uv = 0 otherwise, (15)

where y_uv = 1 indicates implicit feedback between user u and item v. The set of k-hop preferences of user u in the weighted knowledge graph WKG is

E_u^k = {d | (c, r, d) ∈ WKG, c ∈ E_u^{k−1}, w_c^{(r,d)} > τ}, (16)

where k = 1, 2, ⋯, H, and E_u^0 = {v | y_uv = 1} is the set of items the user has historically clicked, which can be considered the initial set of user u in the knowledge graph. w_c^{(r,d)} denotes the relation weight between items c and d, and τ is the weight threshold, which excludes propagation directions with small weights. According to the related item datasets, the set of k-hop triples of user u is

S_u^k = {(c, r, d) | (c, r, d) ∈ WKG, c ∈ E_u^{k−1}, w_c^{(r,d)} > τ}. (17)

Propagating user preferences through the weighted knowledge graph achieves better results. After the user's preference set is obtained from the weighted knowledge graph, a recurrent neural network is used to learn it, representing the user's deeper preferences and thus achieving better prediction of user behavior. For each user's preference set {E_u^k}_{k=1}^H, the algorithm embeds it into the vector space and inputs it in hierarchical order, with the inner-layer data first and the outer-layer data second, so as to retain the inner-layer information. The hidden-layer base unit uses a long short-term memory (LSTM) network to mitigate the gradient problem.
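The k-hop propagation of Equation (16) can be sketched as follows, assuming the weighted knowledge graph is given as an adjacency map from each entity to its outgoing (relation, tail, weight) edges; this is an illustrative sketch, not the paper's exact implementation.

```python
def propagate_preferences(seed, wkg, hops=2, tau=0.06):
    """k-hop preference sets E_u^k over a weighted knowledge graph,
    following Eq. (16). `wkg` maps an entity to a list of
    (relation, tail, weight) edges; edges with weight <= tau are pruned."""
    layers = [set(seed)]                   # E_u^0: the user's clicked items
    for _ in range(hops):
        frontier = set()
        for c in layers[-1]:
            for r, d, w in wkg.get(c, []):
                if w > tau:                # keep only high-weight directions
                    frontier.add(d)
        layers.append(frontier)            # E_u^k
    return layers

# toy graph: the low-weight edge a -> c (0.01) is excluded by tau = 0.06
wkg = {"a": [("r", "b", 0.5), ("r", "c", 0.01)],
       "b": [("r", "d", 0.9)]}
layers = propagate_preferences(["a"], wkg, hops=2, tau=0.06)
```

The threshold τ is what turns the plain ripple-style propagation into weighted propagation: low-weight relations never enter any E_u^k, so low-correlation entities cannot influence the learned preference vector.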
Since users have varying preferences for different items, the attention mechanism is used to weight each item, and the different outputs are linearly combined into the final preference feature

u = Σ_{j=1}^{n} α_{nj} h_j, (18)

where α_{nj} denotes the match between the j-th and the n-th output, i.e., the relative importance of the j-th item, and n is the number of entities in the preference set E_u of user u. α_{nj} is obtained by softmax normalization of the match scores:

α_{nj} = exp(h_n^T h_j) / Σ_{j′=1}^{n} exp(h_n^T h_{j′}). (19)

Finally, the user's preference feature and the item feature are combined to predict the user's click probability on the item:

ŷ_uv = σ(u^T v), (20)

where σ is the sigmoid activation function. The loss function of the algorithm is

L = L_RS + λ L_KG, (21)

where L_KG is the loss of the knowledge graph entity embedding, L_RS is the click-through-rate prediction loss, computed as the cross-entropy between the actual and predicted probabilities, and λ balances the two terms.
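The attention pooling and click prediction steps can be sketched as follows. The use of plain dot-product match scores and of the candidate item vector as the attention query are assumptions made for illustration; the paper's exact score function is not specified here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())                # stable softmax
    return e / e.sum()

def predict_click(H, v):
    """Attention-weighted user preference followed by the sigmoid click
    probability. H holds the RNN outputs for the entities in the user's
    preference set (one row each); v is the candidate item vector."""
    scores = H @ v                         # match between each output and v
    alpha = softmax(scores)                # normalized attention weights
    u = alpha @ H                          # linear combination of outputs
    return 1.0 / (1.0 + np.exp(-(u @ v)))  # sigmoid click probability

# toy usage: two entity outputs, candidate aligned with the first one
H = np.array([[1.0, 0.0],
              [0.0, 1.0]])
v = np.array([1.0, 0.0])
p = predict_click(H, v)
```

Because the candidate item is aligned with the first entity output, that output receives the larger attention weight and the predicted click probability exceeds 0.5.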

Performance Analysis of the Intelligent Recommendation Algorithm Combining RNN and Knowledge Graph

In the experimental setup, the model used in the study is compared and analyzed against the following models: the CKE (collaborative knowledge base embedding) model [22], the DKN (deep knowledge-aware network for news recommendation) model [23], the wide and deep model [24], and the RippleNet model [25]. Firstly, the knowledge graph and collaborative filtering information are combined to construct the weighted knowledge graph. Then the threshold is set: in the book and movie datasets, data with a rating of at least 8 are set to 1, indicating that the user likes the book or movie. In the Book-Crossing [26] and MovieLens-1M [27] datasets, items the user has not seen or rated are set to 0, meaning the user does not like the book or movie. The longest interpretable path L is set to 4 and 3 respectively, the weight threshold τ to 0.06 and 0.1, and the number of hops H to 2. For the parameter settings of the other four models, see References [22–25]. The five models are compared on the book and movie datasets. The ROC comparison of the different models on the Book-Crossing dataset is shown in Figure 5.
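The rating binarization used in the experimental setup can be sketched as follows; the function name and the NumPy formulation are illustrative.

```python
import numpy as np

def binarize_ratings(R, threshold=8):
    """Implicit-feedback labels as described in the setup: ratings of at
    least `threshold` become 1 (liked); everything else, including unrated
    items stored as 0, becomes 0 (not liked)."""
    return (np.asarray(R) >= threshold).astype(int)

# toy usage: two users, three items (0 = not rated)
labels = binarize_ratings([[8, 7, 0],
                           [9, 3, 10]])
```

This reduces the explicit 1–10 rating scale to the binary click labels y_uv that the models are trained and evaluated on.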
As can be seen from Figure 5, the ROC curve of the model combining RNN and knowledge graph performs best on the Book-Crossing dataset, with the highest area under the curve (AUC) value of 0.931. The DKN model has the lowest AUC value, indicating that it is less effective at extracting information from the Book-Crossing dataset.
The ROC curves of the five models on the MovieLens-1M dataset were also compared, as shown in Figure 6. The ROC curve of the model combining RNN and knowledge graph again performs best, with the highest AUC value of 0.784, while the DKN model has the lowest AUC value of 0.617. However, performance is better overall on the Book-Crossing dataset, because it contains more knowledge graph relations, which produce better results when controlled by the weights. The accuracy of the five models on the two datasets is also compared; the comparison on the Book-Crossing dataset is shown in Figure 7.
As can be seen from Figure 7, the accuracy of the five models gradually increases and levels off as the number of samples grows. The CKE and DKN models have lower accuracy than the other three models, at 76.45% and 66.72% respectively, indicating a poorer extraction effect. The model combining RNN and knowledge graph achieves the highest accuracy, and its rising trend is smoother as the number of samples increases. With 1000 samples, its accuracy is 93.37%, which is 2.45% higher than the RippleNet model, 6.87% higher than the wide and deep model, 16.92% higher than the CKE model, and 26.65% higher than the DKN model. The accuracy of the five models on the MovieLens-1M dataset is compared in Figure 8.
As can be seen from Figure 8, the extraction accuracy of all five models on the MovieLens-1M dataset is lower than on the Book-Crossing dataset, for the same reason that the five models achieved lower AUC values on MovieLens-1M. As the sample size increases, the accuracy also shows a slowly increasing trend. When the number of samples reaches 1000, the model combining RNN and knowledge graph achieves the highest accuracy of 74.15%, which is 2.97% higher than the RippleNet model, 3.31% higher than the wide and deep model, 8.32% higher than the CKE model, and 11.43% higher than the DKN model. The F1 values of the five models on the two datasets were also compared; the comparison on the Book-Crossing dataset is shown in Figure 9.
As shown in Figure 9, the F1 values of the five models show a slowly increasing trend with the increase of the number of iterations in the Book-Crossing datasets, which is consistent with the trend of the accuracy rate. The F1 values of the CKE and DKN models are also lower than the other three models. Meanwhile, the F1 values of the five models in the MovieLens-1M datasets are compared, as shown in Figure 10.
As can be seen from Figure 10, the F1 values of all five models on the MovieLens-1M dataset are lower than on the Book-Crossing dataset. As the number of iterations increases, the F1 values of the five models rise slowly and tend to stabilize. Among them, the model combining RNN and knowledge graph achieves the highest F1 value. Taken together, the AUC and ACC values show that the recommendation algorithm combining RNN and knowledge graph has the best performance.

Performance Analysis of Intelligent Recommendation Algorithms Combining RNN and Knowledge Graph

Ten users in the Book-Crossing dataset were randomly selected for error analysis of their ratings of different books, and the results are shown in Table 1.
As can be seen from Table 1, the predicted ratings of the ten randomly selected users, extracted by the model combining RNN and knowledge graph, do not differ much from the true ratings, and the resulting errors are low (less than 2%). For the user numbered 289, the algorithm predicts ratings of 7.841 and 7.095 for books (numbered 112 and 153) that the user rated 8 and 7, corresponding to errors of 1.99% and 1.36%, so the algorithm's errors for the same user across different books are similar. Meanwhile, for the book numbered 178, two users (numbered 412 and 503) rated it 7 and 9, and the algorithm predicted 6.904 and 8.927, with errors of 1.37% and 0.81% respectively, so the algorithm's error for different users on the same book is also small, demonstrating the high performance of the algorithm used in the study. To further verify the effectiveness of the recommendation algorithm combining RNN and knowledge graph, two other recommendation algorithms are introduced: the LDA-ALS algorithm and the SVD algorithm. The RMSE and MAE values of the three algorithms in actual operation are compared in Figure 11.
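The percentage errors quoted above appear to be relative errors, |predicted − true| / true; under that assumption they can be reproduced from the Table 1 figures as follows.

```python
def relative_error(predicted, true):
    """Percentage rating-prediction error as reported in Table 1:
    |predicted - true| / true, expressed in percent."""
    return abs(predicted - true) / true * 100

# the four (predicted, true) pairs discussed in the text
pairs = [(7.841, 8), (7.095, 7), (6.904, 7), (8.927, 9)]
errors = [relative_error(p, t) for p, t in pairs]
```

Evaluating the four pairs reproduces the reported 1.99%, 1.36%, 1.37%, and 0.81% to two decimal places, all below the 2% bound claimed for the algorithm.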
As can be seen from Figure 11, the recommendation algorithm combining RNN and knowledge graph has the lowest RMSE value and MAE value among the three recommendation algorithms. Its RMSE value is 0.467, which is 13.2% lower than that of the LDA-ALS algorithm and 23.7% lower than that of the SVD algorithm. Its MAE value is 0.442, which is 6.2% lower than the LDA-ALS algorithm and 10.3% lower than the SVD algorithm. It can be seen that the RNWKG algorithm performs better and is more stable than the LDA-ALS algorithm and the SVD algorithm, and it produces better recommendation results.
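For reference, the two error metrics compared in Figure 11 are the standard root-mean-square error and mean absolute error, which can be computed as follows.

```python
import numpy as np

def rmse(pred, true):
    """Root-mean-square error: penalizes large rating errors quadratically."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def mae(pred, true):
    """Mean absolute error: average magnitude of the rating errors."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean(np.abs(pred - true)))

# toy usage
r = rmse([1, 2], [1, 4])   # sqrt((0 + 4) / 2) = sqrt(2)
m = mae([1, 2], [1, 4])    # (0 + 2) / 2 = 1.0
```

RMSE is always at least as large as MAE on the same predictions, which is consistent with the reported values of 0.467 (RMSE) and 0.442 (MAE).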

Conclusion
Intelligent recommendation algorithms use deep learning techniques to extract effective features from a large amount of user data, forming different user profiles and recommending users' preferred information based on historical preferences. This study combines recurrent neural networks with knowledge graphs, using the advantages of both to improve intelligent recommendation algorithms. The results show that the AUC and ACC values of the five recommendation models are higher on the Book-Crossing dataset than on MovieLens-1M, indicating that the five models extract information best from Book-Crossing. Among the five models, the recommendation algorithm combining RNN and knowledge graph has the highest AUC and ACC values. For ratings of several users randomly selected from the Book-Crossing dataset, the predicted ratings of the proposed algorithm were closer to the real ratings, with smaller errors (less than 2%), indicating better extraction and prediction performance. Comparing the RMSE and MAE values of three different recommendation algorithms, the algorithm combining RNN and knowledge graph has the lowest of both: its RMSE value is 0.467, which is 13.2% lower than that of the LDA-ALS algorithm and 23.7% lower than that of the SVD algorithm, and its MAE value is 0.442, which is 6.2% lower than the LDA-ALS algorithm and 10.3% lower than the SVD algorithm, indicating stable performance and a good recommendation effect. In summary, the recommendation algorithm combining RNN and knowledge graph has high performance and can deliver greater value in user recommendation.
However, there is still room for improvement: only book and movie datasets were used, and more application scenarios could be incorporated for analysis and validation. An examination of users' short-term preferences could also be added to achieve a more comprehensive effect.

Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest
The authors declare that this article is free of conflicts of interest.