Personalized Recommendation via Suppressing Excessive Diffusion

Efficient recommendation algorithms are fundamental to solving the problem of information overload in modern society. In physical dynamics, mass diffusion is a powerful tool for alleviating long-standing problems of recommendation systems. However, popularity bias and redundant similarity have not been adequately studied in the literature; both are essentially caused by excessive diffusion and lead to similarity estimation deviation and degraded recommendation performance. In this paper, we penalize popular objects by appropriately dividing their popularity and then leverage the second-order similarity to suppress excessive diffusion. Evaluation on three real benchmark datasets (MovieLens, Amazon, and RYM) by 10-fold cross-validation demonstrates that our method outperforms the mainstream baselines in accuracy, diversity, and novelty.


Introduction
With the rapid development of Internet technology and the explosion of information, information overload is increasingly exacerbated and cannot be ignored [1]. The fact that massive data makes it difficult for people to obtain the most relevant information promptly has become a hindrance to the development of Internet technology [2]. Personalized recommendation [3][4][5], which exploits records of previous interactions to extract users' interests, is fundamental to solving the problem of information overload. Despite the vast amount of work that has been done [5,6], it is still far from enough to satisfy the increasing needs of commodity information services [7][8][9][10].
Due to the urgent demands of the E-economy, various recommendation algorithms have been proposed: the content-based (CB) approach [10] captures users' preferences to recommend similar objects, but it does not apply well to audio, image, or video information; spectral analysis (SA) [4] is not suitable for huge systems because of its high computational complexity; collaborative filtering (CF) [11,12] is based on similarity and consequently suffers from the popularity bias problem; the network-based (NB) approach [13][14][15] constructs a network from the relationships between users and objects and then analyzes it to make recommendations, but it suffers from the "cold-start" problem. In this paper, motivated by mass diffusion [13,[16][17][18], we focus on recommendation methods that build directly on a network representation of the input data. This elementary approach has been modified and generalized many times in order to improve the accuracy and diversity of recommendations [19].
In this paper, a problem widely existing in mass diffusion-based recommendation algorithms is identified as "excessive diffusion" (defined in Section 2), and we propose a novel algorithm to relieve it. Popularity bias [19] and redundant similarity [20] can both be attributed to excessive diffusion, which leads to overestimated similarity and depresses recommendation performance. To improve recommendation performance, we first penalize the weight of popular objects while simultaneously considering the heterogeneity of users' degrees to restrain popularity bias, and then we leverage the second-order similarity to further eliminate redundant similarities. Extensive experiments on three real datasets (MovieLens, Amazon, and RYM) demonstrate the effectiveness of the proposed algorithm in improving recommendation accuracy, diversity, and novelty, making it applicable in practice.

Related Work
Theoretical physics has provided some useful tools to improve recommendation performance, such as mass diffusion (MD) [16][17][18] and heat conduction (HC) [21,22]. Network-Based Inference (NBI) [13] is a classical recommendation algorithm that mimics a mass-diffusion resource-allocation process between objects via their neighboring users and is biased towards popular objects. It has many variants: Heterogeneous Network-Based Inference (HNBI) [16] initializes the resource distribution heterogeneously; Redundant Eliminated Network-Based Inference (RENBI) [23] eliminates redundant correlations by considering higher-order correlations between objects; Corrected Similarity-based Inference (CSI) [24] combines forward and backward diffusion to correct the similarity estimation deviation; and so forth. HC [21] imitates the process of heat conduction and achieves high diversity but low accuracy. Combining the MD and HC processes is an intuitive idea, and many hybrid algorithms have been developed to decrease NBI's bias towards popular objects, such as hybrid heat-spreading and probabilistic-spreading (HHP) [25], balanced diffusion (BD) [26], preferential diffusion (PD) [27], and similarity-preferential hybrid processes (SPHY) [28], among others [19,23].

The Problem of Excessive Diffusion.
The essential point that hinders the improvement of recommendation performance lies in the bias towards some objects. Put in the diffusion paradigm, excessive resources are spread to them, which is why we call the problem excessive diffusion. Despite many successful improvements, excessive diffusion remains ubiquitous in realistic recommendation systems. It manifests in two main aspects. On the one hand, resource diffusion is excessively biased towards popular objects, making them more likely to be recommended; this leads to popularity bias, which does not necessarily improve accuracy but does undermine recommendation diversity and novelty. On the other hand, resources derived from the same user may be counted repeatedly in the diffusion process, so that excessive resources are distributed to an object, resulting in redundant similarity. An illustration of redundant similarity is shown in Figure 1. Excessive diffusion grants some objects far more resource than they deserve, seriously depressing recommendation performance, and should therefore be suppressed.

Method
Suppose that a recommendation system contains $m$ users and $n$ objects, where each user has collected some objects. Let $U = \{u_1, u_2, \ldots, u_m\}$ and $O = \{o_1, o_2, \ldots, o_n\}$ represent the users and objects, respectively. According to users' purchase histories, we can construct a user-object bipartite network with $m + n$ nodes. If $o_j$ is collected by $u_i$, there is a link between $o_j$ and $u_i$, and the corresponding element $a_{ij}$ in the adjacency matrix $A$ is set to 1, and to 0 otherwise. Mathematically speaking, the essential task of a recommendation system is to generate a ranking list of the target user's uncollected objects; the top $L$ objects are then recommended to this user.
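As a concrete illustration, the bipartite adjacency matrix can be assembled from a list of (user, object) interaction pairs. The following sketch uses a hypothetical toy log; the sizes and variable names are illustrative, not from the paper:

```python
import numpy as np

m, n = 3, 4  # toy sizes: 3 users, 4 objects
# hypothetical purchase history as (user index, object index) pairs
interactions = [(0, 0), (0, 2), (1, 2), (1, 3), (2, 1)]

A = np.zeros((m, n), dtype=int)
for u, o in interactions:
    A[u, o] = 1  # a_{uo} = 1 iff user u has collected object o

print(A.sum())  # number of links in the bipartite network
```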
Network-Based Inference (NBI) [13] simulates a mass-diffusion resource-allocation process: each object distributes its initial resource equally to all the users who have collected it, and then each user equally redistributes what he/she has received to all the objects he/she has collected. The transfer weight $w_{ij}$ can be defined as

$$w_{ij} = \frac{1}{k(o_j)} \sum_{l=1}^{m} \frac{a_{li} a_{lj}}{k(u_l)}, \qquad (1)$$

where $k(u_l)$ represents the number of objects collected by $u_l$ and $k(o_j)$ denotes the number of users who have collected $o_j$. For a specific user $u_i$, we initially assign one unit of resource to each object that has been collected by $u_i$, while the others are assigned 0; namely, $f_j = a_{ij}$. We then redistribute these resources via the transformation $f' = W f$, where $W$ is the transfer matrix, $f = (f_j)_{n \times 1}$, and $f'$ represents the final resource distribution over objects; the uncollected objects with the top-$L$ values of $f'$ are recommended to $u_i$.
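The two diffusion steps above can be sketched in a few lines of NumPy; this is a minimal illustration on toy data (matrix and function names are ours, not the paper's):

```python
import numpy as np

def nbi_scores(A, target):
    """NBI mass-diffusion scores for one target user.

    A: (m, n) binary user-object adjacency matrix.
    Returns f' = W f, with w_ij = (1/k(o_j)) * sum_l a_li * a_lj / k(u_l).
    """
    k_u = A.sum(axis=1)  # user degrees k(u_l)
    k_o = A.sum(axis=0)  # object degrees k(o_j)
    # Step 1: each object splits its resource equally over its users;
    # Step 2: each user splits what it holds equally over its objects.
    W = (A / k_u[:, None]).T @ (A / k_o[None, :])  # W[i, j] = w_ij
    f = A[target].astype(float)  # one unit on each collected object
    return W @ f

A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 0]])
scores = nbi_scores(A, target=0)
# recommend the top-L uncollected objects for user 0
uncollected = np.where(A[0] == 0)[0]
top = uncollected[np.argsort(-scores[uncollected])]
```

Note that the columns of $W$ sum to 1, so diffusion conserves the total resource initially placed on the target user's collected objects.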
Motivated by enhancing the algorithm's recommendation performance, we seek a suitable way to suppress excessive diffusion. We penalize popular objects by assigning more resource to low-degree objects in the last step (a user redistributes to a neighboring object $o_i$ an amount of resource proportional to $k(o_i)^{\epsilon}$) and simultaneously consider the heterogeneity of users' degrees [27] to mitigate popularity bias and strengthen the capability of finding unpopular and niche objects. In this case, (1) is transformed into

$$w_{ij} = \frac{1}{k(o_j)} \sum_{l=1}^{m} \frac{a_{li} a_{lj}\, k(o_i)^{\epsilon}}{k(u_l)\,\langle k(o)^{\epsilon} \rangle_l}, \qquad (2)$$

where $-1 \le \epsilon \le 0$ is a free parameter and $\langle k(o)^{\epsilon} \rangle_l$ denotes the average of $k(o)^{\epsilon}$ over all objects that have been collected by user $u_l$. As we know, some of the resource an object obtains may be derived from diverse users, while the rest may stem from the same user. Considering the example in Figure 1, from our point of view, A and B are independent of C in the sense that they appear in distinct transactions that contain C (no common user collects A, B, and C together), and therefore their adjacency relations with C in the graph are independent. The second-order similarity between A and C, derived from the path A→B→C, should then be insignificant because A and B are independent of C. On the contrary, D and F have strong second-order similarity in that they appear in the same transactions as F (D, E, and F are collected by the same user), which leads to a strong correlation between them. That is, if redundant similarity exists between two specific objects, their second-order similarity should be strong. Hence, we can subtract the second-order similarities in an appropriate way to eliminate redundant similarities to some extent [20]. As a result, we design an improved algorithm via suppressing excessive diffusion (SED), whose transfer matrix is modified as

$$W_{\mathrm{SED}} = W + \beta W^{2}, \qquad (3)$$

where $\beta \le 0$ is a free parameter and $W = (w_{ij})_{n \times n}$ is obtained via (2). Clearly, when $\epsilon = 0$ and $\beta = 0$, SED degenerates to NBI.
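A minimal sketch of the SED transfer matrix follows, assuming the reading above: users redistribute resource in proportion to $k(o_i)^{\epsilon}$, and second-order similarity is subtracted via $\beta W^2$. The parameter values and function name are illustrative, not the paper's reported optima:

```python
import numpy as np

def sed_matrix(A, eps, beta):
    """Sketch of the SED transfer matrix.

    Step 1 (popularity penalty): user u_l redistributes its resource to a
    collected object o_i in proportion to k(o_i)**eps, with -1 <= eps <= 0.
    Step 2 (redundancy removal): W_sed = W + beta * W @ W, with beta <= 0.
    With eps = 0 and beta = 0 this degenerates to plain NBI.
    """
    k_o = A.sum(axis=0).astype(float)          # object degrees k(o_i)
    pref = A * k_o[None, :] ** eps             # a_li * k(o_i)^eps
    pref = pref / pref.sum(axis=1, keepdims=True)  # normalize per user
    W = pref.T @ (A / k_o[None, :])            # W[i, j] = w_ij of (2)
    return W + beta * (W @ W)                  # subtract second-order term

A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 0]])
W_sed = sed_matrix(A, eps=-0.5, beta=-0.2)
```

With `beta = 0` every column of the matrix sums to 1 (resource is conserved); with `beta < 0` the column sums shrink to `1 + beta`, reflecting the subtracted redundant resource.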
Figure 2 illustrates how the transfer matrices of NBI and SED are calculated.

Evaluation Metrics.
Accuracy is critical in evaluating the performance of an algorithm. We introduce four indicators to assess an algorithm's accuracy.
(1) Averaged Ranking Score ($\langle r \rangle$) [25]. $\langle r \rangle$ evaluates the ability to rank users' preferred objects higher than disliked ones. For an arbitrary link $(u_i, o_j)$ in the probe set $E^P$, if $o_j$'s rank in $u_i$'s recommendation list is $\mathrm{rank}_{ij}$, then the averaged ranking score is defined as

$$\langle r \rangle = \frac{1}{|E^P|} \sum_{(u_i, o_j) \in E^P} \frac{\mathrm{rank}_{ij}}{n - k(u_i)},$$

where $|E^P|$ denotes the number of links in the probe set. The smaller $\langle r \rangle$, the better the algorithm's accuracy.
(2) Area under the ROC Curve (AUC) [24]. AUC measures the capacity to identify relevant objects among irrelevant ones. For $n$ independent experiments, each of which compares a relevant object and an irrelevant one, if there are $n_1$ times when the relevant object has a higher score than the irrelevant one and $n_2$ times when the scores are equal, then

$$\mathrm{AUC} = \frac{n_1 + 0.5\, n_2}{n}.$$

Clearly, the greater the AUC, the higher the algorithm's accuracy.
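The AUC comparison can be sketched by exhaustively pairing every relevant object with every irrelevant one (hypothetical toy scores, not from the paper):

```python
# Toy AUC sketch: compare the score of every relevant (probe-set) object
# against the score of every irrelevant object.
relevant = [0.8, 0.6, 0.9]    # scores of objects the user actually collected
irrelevant = [0.5, 0.7, 0.1]  # scores of objects the user did not collect

n1 = sum(r > i for r in relevant for i in irrelevant)   # relevant ranked higher
n2 = sum(r == i for r in relevant for i in irrelevant)  # ties
auc = (n1 + 0.5 * n2) / (len(relevant) * len(irrelevant))
```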
(3) Precision ($P$) [13]. Precision is the ratio of the number of $u_i$'s hidden links (objects collected by $u_i$ and present in the probe set) contained in the top-$L$ recommendation list, denoted $d_i(L)$, to $L$. Therefore, the precision of the whole system is

$$P = \frac{1}{m} \sum_{i=1}^{m} \frac{d_i(L)}{L}.$$

(4) Recall [24]. Recall is the proportion of all hitting links to the number of links in the probe set:

$$\mathrm{Recall} = \frac{\sum_{i=1}^{m} d_i(L)}{|E^P|}.$$

Diversity quantifies how different the recommended objects are from each other. It is measured from two aspects in this paper.
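For a single user, the precision, recall, and ranking-score computations can be sketched as follows. The scores are hypothetical toy data, and the ranking-score denominator is simplified to the total number of candidates rather than $n - k(u_i)$:

```python
import numpy as np

# hypothetical scores for 6 candidate objects; {2, 5} are the hidden
# probe-set links for this user
scores = np.array([0.9, 0.1, 0.8, 0.3, 0.2, 0.7])
probe = {2, 5}
L = 3

ranked = list(np.argsort(-scores))  # objects ordered by descending score
top_L = set(ranked[:L])

hits = len(top_L & probe)
precision = hits / L           # fraction of the top-L that are hidden links
recall = hits / len(probe)     # fraction of hidden links recovered
# simplified ranking score: average relative rank of the probe objects
rank_score = np.mean([(ranked.index(o) + 1) / len(scores) for o in probe])
```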
(1) Intrasimilarity ($I$) [24]. $I$ measures the diversity of the objects in one user's recommendation list. For an arbitrary user $u_i$, denote the recommended objects as $O_i = \{o_1, o_2, \ldots, o_L\}$. Then the whole system's intrasimilarity is written as

$$I = \frac{1}{m L (L-1)} \sum_{i=1}^{m} \sum_{j \ne l} s_{jl},$$

where $s_{jl}$ is the cosine similarity between $o_j$ and $o_l$, defined as

$$s_{jl} = \frac{\sum_{u=1}^{m} a_{uj} a_{ul}}{\sqrt{k(o_j)\, k(o_l)}}.$$

The lower the intrasimilarity, the higher the algorithm's diversity.
(2) Hamming Distance ($H$) [29]. $H$ refers to how different the recommendation lists of different users are. Let $C_{ij}$ denote the number of common objects in the recommendation lists of $u_i$ and $u_j$, and let $H_{ij} = 1 - C_{ij}/L$; then the averaged Hamming distance is

$$\langle H \rangle = \frac{2}{m(m-1)} \sum_{i < j} H_{ij}.$$

The larger $H$, the higher the algorithm's diversity. Novelty is closely related to personalization and measures the capacity to generate novel and niche recommendations.
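The averaged Hamming distance can be sketched as a pairwise loop over users' top-$L$ lists (toy lists and function name are illustrative):

```python
def hamming_diversity(rec_lists, L):
    """Averaged Hamming distance over all user pairs.

    rec_lists: one top-L recommendation list per user.
    H_ij = 1 - C_ij / L, where C_ij is the number of common objects.
    """
    m = len(rec_lists)
    total = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            common = len(set(rec_lists[i]) & set(rec_lists[j]))
            total += 1 - common / L
    return 2 * total / (m * (m - 1))

lists = [[0, 1, 2], [0, 3, 4], [5, 6, 7]]
h = hamming_diversity(lists, L=3)  # identical lists give 0, disjoint lists give 1
```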
(1) Average Degree ($\langle k \rangle$) [29]. Let $o_{ij}$ signify the $j$th recommended object for $u_i$; then the average degree of all recommended objects is

$$\langle k \rangle = \frac{1}{mL} \sum_{i=1}^{m} \sum_{j=1}^{L} k(o_{ij}).$$

The smaller $\langle k \rangle$, the higher the algorithm's novelty.

Benchmark Methods.
For comparison, we briefly introduce six recommendation algorithms.
(1) Collaborative Filtering (CF) [12]. CF is based on measuring the similarity between users or objects. For any two users $u_i$ and $u_j$, the cosine similarity is

$$s_{ij} = \frac{\sum_{l=1}^{n} a_{il} a_{jl}}{\sqrt{k(u_i)\, k(u_j)}}.$$

Thus the extent to which the target user $u_i$ will like $o_j$ is

$$p_{ij} = \frac{\sum_{l \ne i} s_{li}\, a_{lj}}{\sum_{l \ne i} s_{li}}.$$

(2) Network-Based Inference (NBI) [13]. NBI has been introduced above.
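User-based CF can be sketched as cosine similarity between users followed by a similarity-weighted vote over each object's collectors. This is a simplified, unnormalized version of the prediction above (toy matrix and names are illustrative):

```python
import numpy as np

def user_cf_scores(A, target):
    """User-based CF sketch: cosine similarity between users, then a
    similarity-weighted vote over the users who collected each object.
    (Unnormalized variant: the denominator of p_ij is dropped, which
    does not change the ranking for a fixed target user.)
    """
    k_u = A.sum(axis=1).astype(float)
    sim = (A @ A.T) / np.sqrt(np.outer(k_u, k_u))  # s_uv: cosine similarity
    sim[target, target] = 0.0                      # exclude self-similarity
    return sim[target] @ A                         # relevance of each object

A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 0]])
scores = user_cf_scores(A, target=0)
```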
(4) Corrected Similarity-Based Inference (CSI) [24]. CSI combines forward and backward diffusion to correct the similarity estimation deviation; its resource transfer weight is defined by combining the forward weight $w_{ij}$ with its backward counterpart $w_{ji}$, where $w_{ij}$ is obtained via (1).
(5) Redundant Eliminated Network-Based Inference (RENBI) [20]. RENBI considers higher-order correlations between objects, and its transfer matrix is

$$W_{\mathrm{RENBI}} = W + \beta W^{2},$$

where $\beta \le 0$ is a free parameter and $W$ is obtained according to (1).
(6) Preferential Diffusion (PD) [27]. PD's transfer weight $w_{ij}$ is given by (2). Although RENBI and PD are related to our proposed algorithm, neither fully addresses the problem of excessive diffusion, so each can only partially alleviate it. SED combines the advantages of RENBI and PD: it penalizes the popularity degrees of objects to relieve popularity bias and further reduces redundant similarities, and is therefore more effective at suppressing excessive diffusion.

Results
In order to obtain credible experimental results, 10-fold cross-validation is performed to decrease deviation. The results presented here are obtained by averaging over 10 independent $E^T/E^P$ divisions. The recommendation performance of the seven methods on the three datasets, measured by seven metrics, is summarized in Table 2.
As shown in Table 2, for the three datasets, SED performs the best on all seven metrics. Concretely speaking, SED surpasses the original mass diffusion-based algorithm NBI in all aspects: in MovieLens, $\langle r \rangle$ is reduced by 23.6%, $P$ increased by 24.3%, Recall increased by 24.3%, $H$ increased by 37.6%, and $\langle k \rangle$ reduced by 32.8%; in Amazon, Recall is increased by 20.1% and $\langle k \rangle$ reduced by 50%; and in RYM, $\langle r \rangle$ is reduced by 40.6%, $P$ increased by 25.8%, Recall increased by 26.4%, and $\langle k \rangle$ reduced by 40.8%. It also achieves varying degrees of improvement over all the other algorithms. In a word, the proposed algorithm exhibits outstanding accuracy, diversity, and novelty.
To examine our algorithm's performance under different recommendation list lengths, we fix the optimal parameters and then vary $L$ from 5 to 100 to obtain the recommendation performance on the three datasets; the results are shown in Figures 3, 4, and 5, respectively. Since $\langle r \rangle$ and AUC are constant and do not change with $L$, they are not shown in Figures 3-5. Each data point is obtained by averaging over ten independent runs with independent data divisions.
From the three figures, we can see that for the two "the smaller the better" metrics, $I$ and $\langle k \rangle$, SED's curves are at the bottom, while for the remaining three "the larger the better" metrics they are always on top. That is, the proposed method also performs best among the compared schemes under different recommendation list lengths, which further supports the results reported in Table 2. From this we can conclude that a moderate inhibition of excessive diffusion, ensuring the fairness of diffusion, is more conducive to effective recommendation.

Conclusion and Discussion
Motivated by preventing resources from excessively diffusing to popular objects and to objects with redundant similarity, in this paper we first penalized popular objects' degrees while taking the heterogeneity of users' degrees into account to restrain popularity bias, and we then eliminated redundant similarities to some extent by subtracting the second-order similarity. Extensive experiments on three real datasets consistently demonstrate the effectiveness of SED, given its improvements in accuracy, diversity, and novelty. SED also performs the best among the benchmarks under different recommendation list lengths, and thus it is applicable and versatile in practice.
Our method can more accurately match users with objects that comply with their preferences and, in a commercial sense, can further strengthen users' loyalty to promote substantial profit growth. Because of its effectiveness, SED can be applied in various recommendation environments, such as using purchase records to recommend books, using reading histories to recommend news, and recommending TV shows and movies on the basis of users' viewing patterns and ratings. Although SED has achieved good recommendation performance, some improvements are worthy of further investigation, such as considering the time dimension, which is indeed necessary because it can greatly affect recommendation performance and measures a method's ability to reflect the network topology, the system's natural growth patterns, and users' shifting interests [30].
In summary, we hope our method can enlighten readers to a certain extent.

Figure 1: Illustration of redundant similarities. Objects and users are marked with circles and squares, respectively. A solid line represents an object that has been collected by a user, and a dotted line denotes the similarity between objects. As we can see, the resources C obtains through diffusion along the paths A→user 1→C and B→user 2→C are independent, since no common user collects A, B, and C together in Figure 1(a); in Figure 1(b), however, the resources of F are obtained from the same user and are counted twice. If user 4 chooses F just because F has some similarity with D (or E), the similarity E-F (or D-F) is redundant when calculating the resource of F, which degrades the recommendation accuracy.

Table 1: Primary information of the three datasets.
Ratings range from 1 to 5 stars in MovieLens and Amazon and from 1 to 10 in RYM. A higher rating conveys stronger confidence in a user's preference for an object. Here we only consider objects collected by users with ratings of at least 3 stars in MovieLens and Amazon and no less than 5 stars in RYM, for the sake of capturing users' preferences more precisely. After processing, detailed information on the datasets is shown in Table 1. Before the experiments, each dataset is randomly divided into two parts: a training set $E^T$ containing 90% of all links and a probe set $E^P$ containing the rest.