A Novel Adaptive Conditional Probability-Based Predicting Model for User’s Personality Traits

With the pervasive increase in social media use, the explosion of user-generated data provides a potentially rich source of information that helps online researchers understand users' behaviors in depth. Since users' personality traits are the driving force behind their behaviors, in this paper we first extract linguistic features, emotional statistical features, and topic features from users' Facebook status updates, alongside social network features, and quantify the importance of each feature via the Kendall correlation coefficient. Then, on the basis of the weighted features and dynamically updated thresholds for the personality traits, we deploy a novel adaptive conditional probability-based predicting model that incorporates prior knowledge of the correlations between personality traits to predict users' Big Five personality traits. In the experimental work, we demonstrate the existence of correlations between users' personality traits, which provides theoretical support for the proposed method. Moreover, on the same Facebook dataset, our method achieves an F1-measure of 80.6% when taking these correlations into account, an improvement of 5.8% over other approaches.


Introduction
As a new medium for information dissemination, the social network has become a novel means of social interaction. Hence, users' individual behaviors have gradually become key factors in social network analysis. Moreover, although some users post idealized images and lives on social media to achieve self-presentation, reflecting a somewhat "untrue" self, users' contributions and activities, which can be instantly made available to the entire social network [1], still provide valuable insight into individual behaviors.
Psychologists believe that users' personality traits are the driving force behind their behaviors and that individual differences in personality traits may affect users' online activities [2, 3]. Because features revealing users' personality traits are accessible, and in order to avoid violations of basic rights or discrimination, some researchers carried out experiments with users' psychological features and profiles, recorded for research purposes with users' consent, to gain a deeper understanding of online social networks via personality traits. Personality traits can be used to predict early adoption of Facebook [4]: a conscientious person uses Facebook sparingly, an extroverted person has long sessions and abundant friendships, and a neurotic person has a high frequency of sessions. Moreover, personality traits may help optimize search results [5], manifest social influence [6], and distinguish individuals who share common properties in a crowd [7]. They also play an important role in relationship outcomes such as customer trust, satisfaction, and loyalty [8]. In summary, recognizing users' personality traits has important theoretical significance for mining behavior patterns and uncovering users' potential needs in different contexts. Hence, analyzing and forecasting users' personality traits by mining data from online social networking sites has become a research focus. Our work on predicting users' personality traits is motivated by this broad application prospect.

Mathematical Problems in Engineering
Nevertheless, confronted with the same problem as stated in [9], the accessibility of features revealing users' personality traits may vary. Consequently, users' personality traits are by nature difficult to predict. Generally, psychologists regard personality traits, which are reflected in a user's attitudes towards things and the actions the user takes [10], as a person's unique pattern of long-term thoughts, emotions, and behaviors [11, 12]. Therefore, leveraging the long-term mode of expression and emotion behind the numerous contents a user posts is a feasible way to predict his or her personality traits. Furthermore, users with diverse personality traits tend to pay attention to different types of things; in such a scenario, we expect that the topics a user is concerned about will enhance the performance of personality traits prediction. Following this line of research, in this paper we propose a new adaptive conditional probability-based framework for predicting users' personality traits. Our main contributions are summarized next.
(1) We demonstrate the existence of interdependencies between users' personality traits.
(2) Based on social network features, linguistic features, emotional statistical features, and topic features, we put forward a novel unsupervised adaptive conditional probability-based framework for predicting users' personality traits that takes prior knowledge of the correlations between personality traits into consideration.
(3) We exploit the correlations between features and personality traits via the Kendall correlation coefficient, so as to quantify the importance of each feature.
(4) We update the threshold of each personality trait dynamically rather than adopting a unified threshold.
The rest of the paper is organized as follows: Section 2 describes related work; Section 3 defines the proposed method; details of the experimental results and the dataset used in this study are given in Section 4; finally, conclusions appear in Section 5.

Related Work
As research on users' behaviors in social networks has become a hot spot, personality recognition has received a significant amount of attention in both theory and practice. Argamon et al. [13] and Mairesse et al. [14] were the first dedicated to this research field. Generally, two main approaches have been adopted for studying users' personality traits in social networks. The first uses machine learning algorithms to capture personality traits based solely on social network activities. Moore and McElroy [15] explored personality traits through questionnaires and log data of 219 college students; scale reliabilities for the five personality dimensions of the IPIP were acceptable, with Cronbach's alpha values of 0.90 for extraversion, 0.81 for agreeableness, 0.82 for conscientiousness, 0.83 for emotional stability, and 0.79 for openness to experience, which revealed that mining personality traits on Facebook was feasible. Kosinski et al. [5] first showed that there are psychologically meaningful links between users' personalities, their website preferences, and Facebook profile features, and then predicted individuals' personality traits via multivariate linear regression. The experimental results indicated that extroversion was most strongly expressed by Facebook features, followed by neuroticism, conscientiousness, and openness. Agreeableness was the hardest trait to predict (0.05 in terms of accuracy) using Facebook profile features and the simple model.
The second approach extends personality-related features with linguistic cues. On the basis of a corpus of essays written by students at the University of Texas at Austin [16], Argamon et al. [13] utilized SMO [17] to determine whether each author had high or low neuroticism or extraversion, respectively. For neuroticism, their experimental results clearly showed the usefulness of functional lexical features, in particular the appraisal lexical taxonomy; in the case of extraversion, the results were less clear, but examination of indicative function words pointed the way to developing more effective features by focusing on expressions related to norms, (in)completeness, and (un)certainty. Golbeck et al. [18] explored whether the publicly available information on users' Facebook profiles can predict personality traits. To predict the personality traits of 279 Facebook users, based on linguistic, structural, and semantic features, they used the profile data as a feature set and trained two machine learning algorithms, M5' Rules [19] and Gaussian Processes [20], to predict each of the five personality traits within 11% of their actual value. In addition, the experimental results showed that users' Big Five personality traits can be predicted from the public information they share on Facebook.
Mairesse et al. [14] leveraged classification, regression, and ranking models to recognize personality traits automatically via LIWC and MRC features. Experiments were carried out with the essays corpus and the EAR corpus, respectively. The results revealed that the LIWC features outperformed the MRC features for every trait, and the LIWC features on their own always performed slightly better than the full feature set. Concerning the algorithms, AdaBoostM1 [17] performed best for extraversion (56.3% correct classifications), while SMO produced the best models for all other traits. They also pointed out that features are likely to vary depending on the source of language and the method of personality assessment, through analyzing the correlations between different characters. To forecast the personality traits of 335 RenRen users, according to the number of a user's friends and a recently released status, Bai et al. [21] used many classification algorithms such as Naive Bayes (NB) [17], Support Vector Machine (SVM), and Decision Tree [17]. They found that the C4.5 Decision Tree achieved the best results (0.697 for agreeableness, 0.749 for neuroticism, 0.824 for conscientiousness, 0.838 for extraversion, and 0.811 for openness in terms of F1-measure). Oberlander and Nowson [22] achieved better results on classification of personality traits (ranking on raw accuracy: agreeableness > conscientiousness > neuroticism > extraversion; the best agreeableness accuracy was 30.4% absolute over the baseline, 77.2% relative) by leveraging a Naive Bayes model with differing sets of n-gram features. They also demonstrated that, with respect to feature selection policies, automatic selection generally outperformed "manual" selection. On the basis of automatically derived psycholinguistic and mood-based features of a user's textual messages, Nguyen et al.
[23] utilized an SVM classifier to examine two two-class classification problems: influential versus noninfluential and extraversion versus introversion. They experimented with three subcorpora of 10,000 users each and presented the most effective predictors for each category. The best classification result, at 80%, was achieved using psycholinguistic features. However, they did not predict personality traits at a finer grain. Bai et al. [24] proposed a multitask regression algorithm and an incremental regression algorithm to predict users' Big Five personality traits objectively from their usage of Sina microblog. The results indicated that personality traits can be predicted with high accuracy from online microblog usage. Moreover, the average mean absolute error of the multitask regression model was 13.84%, about 5 percentage points lower than that of incremental regression. Sun and Wilson [25] demonstrated that, without any significant addition or modification, a cognitive architecture can serve as a generic model of personality traits, and that integrating personality modeling with generic computational cognitive modeling is feasible and useful.
However, some drawbacks can be pointed out in previous work on personality traits prediction: (1) some researchers assumed that there is little or no correlation between users' personality traits [26]; however, many studies in personality psychology have revealed correlations between the Big Five dimensions instead [27-30]. (2) Although different features play different roles in predicting personality traits [18, 31, 32], only a few researchers considered the correlations between features and personality traits [33]. (3) In multilabel learning tasks, such as the PT5 method proposed by Tsoumakas and Katakis [34], thresholds are usually unified to the same value, which is not appropriate. In this light, we propose an adaptive conditional probability-based model to improve the performance of the personality traits prediction task.

Adaptive Conditional Probability-Based Predicting Model for User's Personality Traits
In this section, we present the predicting model adopted in our work. We first define the preliminary features (Section 3.1), followed by the measurement used to distribute weights to the features (Section 3.2). We then outline the framework of the adaptive conditional probability-based personality traits prediction model (Section 3.3). Finally, in Section 3.4, we describe our algorithm and analyze the time complexity of the proposed model.

Definition of Features.
As a person's unique pattern of long-term thoughts, emotions, and behaviors, personality traits are reflected in a user's attitudes towards things and the actions the user takes. Therefore, aside from the social network features provided directly in the Facebook dataset [35], we introduce linguistic features, emotional statistical features, and topic features to predict personality traits. These three kinds of features are measured via analysis of users' Facebook status updates.
Social Network Features.

The Facebook dataset includes the following social network features: date of user registration, network size, ego betweenness centrality, normalized ego betweenness centrality, density, brokerage, normalized brokerage, and transitivity, which reflect a user's behavior patterns purely through the network structure. Similar to the cluster assumption [36], in this work we took the above features into consideration to complete the prediction task, assuming that the more similar the network structures of two users are, the more likely they are to share the same label.

Linguistic Features.
Since each user shows a particular mode of expression, some researchers hold the view that the correlations between personality traits and spoken or written linguistic cues are significant [14, 16], so language-based assessments can constitute valid personality measures [28]. Hence, in this paper, we regard linguistic cues as factors for mining personality traits through a user's means of expression.
A natural language parser works out the grammatical structure of sentences, for example grouping words together as "phrases" and identifying the subject or object of a verb. Probabilistic parsers try to produce the most likely analysis of new sentences by leveraging knowledge of language gained from hand-parsed sentences. The Stanford Parser (http://nlp.stanford.edu/software/lex-parser.shtml#About) is an open source probabilistic natural language parser that implements a factored product model, with separate PCFG phrase structure and lexical dependency experts, whose preferences are combined by efficient exact inference using an A* algorithm; either component on its own yields a good statistical parsing system [37]. A GUI is provided for using it simply as an accurate unlexicalized stochastic context-free grammar parser and for viewing the phrase structure tree output. Thus, in order to learn the traits of the contents a user produces, we obtained word frequency statistics for 35 parts of speech with the Stanford Parser: conjunction, numeral, determiner, existential there, foreign word, modal auxiliary, singular or mass noun, plural noun, proper noun, plural proper noun, predeterminer, genitive marker, personal pronoun, plural personal pronoun, ordinal adverb, comparative adverb, superlative adverb, particle, symbol, interjection, verb in base form, verb in past tense, gerund, verb in past participle, verb in present tense (not 3rd person singular), verb in present tense (3rd person singular), subordinating conjunction, ordinal adjective, comparative adjective, superlative adjective, list item marker, WH-determiner, WH-pronoun, WH-plural pronoun, and WH-adverb. Besides, we defined another six linguistic features: the total number of words and the frequency statistics of punctuation, comma, period, exclamation, and question marks. We filtered hyperlinks out of users' contents, as blog services tend to produce many hyperlinks for navigation and advertisement [38], which have no relationship with personality traits prediction.
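The linguistic feature extraction described above can be sketched as follows. This is a minimal illustration, assuming the status updates have already been POS-tagged upstream (e.g., by the Stanford Parser); the function names are our own.

```python
from collections import Counter

def linguistic_features(tagged_tokens, text):
    """Build a linguistic feature vector from POS-tagged tokens.

    tagged_tokens: list of (word, tag) pairs produced by a parser
    (a hypothetical upstream step standing in for the Stanford Parser).
    text: the raw concatenated status updates, used for punctuation counts.
    """
    # Frequency of each part-of-speech tag observed in the user's posts.
    features = dict(Counter(tag for _, tag in tagged_tokens))
    # The six extra features defined in the text: total word count plus
    # frequencies of comma, period, exclamation, question, and punctuation.
    features["total_words"] = len(tagged_tokens)
    features["comma"] = text.count(",")
    features["period"] = text.count(".")
    features["exclamation"] = text.count("!")
    features["question"] = text.count("?")
    features["punctuation"] = (features["comma"] + features["period"]
                               + features["exclamation"] + features["question"])
    return features
```

In practice, hyperlinks would be stripped from `text` before counting, as described above.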

Emotional Statistical Features.
A user's attitudes towards things, which reveal different personality traits, reflect the user's unique pattern of long-term emotions. For instance, a neurotic person may tend to experience unpleasant emotions easily, such as anger, anxiety, depression, and vulnerability. Hence, a user's emotion statistics can serve as characteristics in the personality traits predicting model. In this paper, a user's emotional statistical characteristics comprise the proportions of positive and negative words used in the user's posts. On the basis of the adjectives and their variants obtained in Section 3.1.2, these characteristics were calculated with the corpus of HowNet Knowledge (http://www.keenage.com/download/sentiment.rar). HowNet Knowledge, which includes 8945 words and phrases, consists of six files: a positive emotional words list, a negative emotional words list, a positive review words list, a negative review words list, a degree level words list, and a proposition words list.
As the user's emotional statistical features, user u's positive and negative emotional statistical characteristics are defined as

PT(u) = P_u / sum_u,    (1)
NT(u) = N_u / sum_u,    (2)

where PT(u) and NT(u) represent the proportions of positive and negative words used in user u's posts, respectively, P_u and N_u represent the numbers of positive and negative emotional words used by user u that are included in HowNet Knowledge, and sum_u represents the total number of words in user u's contents.
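The two proportions above can be computed directly. A minimal sketch, with small in-memory sets standing in for the HowNet Knowledge word lists:

```python
def emotion_features(words, positive_lexicon, negative_lexicon):
    """Compute PT(u) and NT(u): the proportions of positive and negative
    words among all words in a user's posts.

    positive_lexicon / negative_lexicon: sets of sentiment words (here,
    placeholders for the HowNet Knowledge lists).
    """
    total = len(words)  # sum_u: total number of words in the user's contents
    if total == 0:
        return 0.0, 0.0
    pos = sum(1 for w in words if w in positive_lexicon)  # P_u
    neg = sum(1 for w in words if w in negative_lexicon)  # N_u
    return pos / total, neg / total
```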

Topic Features.
The things a user focuses on may affect the actions the user takes. Take openness, one of the Big Five personality traits, as an example: it reflects the degree of intellectual curiosity, creativity, and preference for novelty and variety a person has. Therefore, we mined a series of themes a user is concerned about from the user's status updates via LDA (Latent Dirichlet Allocation) [39] for predicting personality traits.
Since our purpose is to extract all themes a user is concerned about, rather than the specific themes of each post, we merged all microblogs of a user into one document and then extracted the user's themes; that is, each document corresponds to one user. The result of the LDA model is

DT = (DT_mk),    m = 1, 2, ..., M,    k = 1, 2, ..., K,

where DT stands for an M × K matrix used to store the distribution of themes over documents, M stands for the number of documents, K stands for the number of themes, and element DT_mk stands for the probability that the mth document belongs to the kth theme, that is, the degree of attention that user m pays to the kth theme.
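The merge-then-fit pipeline can be sketched as below. The paper does not name an LDA toolkit, so the `user_topic_matrix` function assumes scikit-learn's `LatentDirichletAllocation` as one possible implementation; the function names are our own.

```python
def merge_user_posts(posts_by_user):
    """Merge all status updates of each user into a single document,
    so that each LDA document corresponds to one user."""
    users = sorted(posts_by_user)
    docs = [" ".join(posts_by_user[u]) for u in users]
    return users, docs

def user_topic_matrix(docs, n_topics=10):
    """Fit LDA and return the document-topic matrix DT, where DT[m][k]
    is the probability that user m's document belongs to topic k.
    A sketch using scikit-learn; the paper's toolkit is unspecified."""
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    counts = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(counts)  # each row is a per-user topic mixture
```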

Weights of Features.

To quantify the importance of each feature, we measure the correlation between features and personality traits with the Kendall correlation coefficient. Treating a feature and a personality trait score as two random variables X and Y, the tie-corrected Kendall correlation coefficient is

τ(X, Y) = (P − Q) / sqrt((n3 − n1)(n3 − n2)),    (3)

where P and Q denote the numbers of concordant and discordant pairs, respectively. n1 is calculated as

n1 = Σ_i t_i (t_i − 1) / 2,    (4)

where i ranges over the elements of random variable X that have repeated values and t_i denotes the number of repetitions of the ith such element. Similarly, n2 is calculated as

n2 = Σ_j u_j (u_j − 1) / 2,    (5)

where j ranges over the elements of random variable Y that have repeated values and u_j denotes the number of repetitions of the jth such element. n3 denotes the total number of pairs in the merged sequence, which is calculated as

n3 = N (N − 1) / 2,    (6)

where N represents the number of tuples. Thus, according to the Kendall correlation coefficient between features and personality traits, the importance of the hth feature f_h is calculated as

I(f_h) = Σ_{T_j ∈ T} |τ(f_h, T_j)|,    (7)

where τ(f_h, T_j) stands for the Kendall correlation coefficient between the hth feature f_h and the jth personality trait T_j, and T stands for the set of Big Five personality traits, which will be introduced in Section 3.4. After normalizing the importance of the hth feature f_h, the weight of f_h is calculated as

w(f_h) = I(f_h) / Σ_{f_k ∈ F} I(f_k),    (8)

where F denotes the set of personality traits predicting features.
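The tie-corrected Kendall coefficient and the weighting scheme it feeds can be sketched in a few lines. This is a plain O(N²) implementation for illustration (production code would use `scipy.stats.kendalltau`); the function names are our own.

```python
from math import sqrt
from collections import Counter

def kendall_tau_b(x, y):
    """Kendall's tau with tie correction:
    tau = (P - Q) / sqrt((n3 - n1)(n3 - n2))."""
    n = len(x)
    P = Q = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                P += 1  # concordant pair
            elif s < 0:
                Q += 1  # discordant pair
    def tie_term(v):
        # sum of t*(t-1)/2 over each group of tied values
        return sum(t * (t - 1) // 2 for t in Counter(v).values() if t > 1)
    n1, n2 = tie_term(x), tie_term(y)
    n3 = n * (n - 1) // 2
    denom = sqrt((n3 - n1) * (n3 - n2))
    return (P - Q) / denom if denom else 0.0

def feature_weights(tau_by_feature):
    """Importance of a feature = sum of |tau(f, T_j)| over the Big Five
    traits; weights are the normalized importances."""
    imp = {f: sum(abs(t) for t in taus) for f, taus in tau_by_feature.items()}
    total = sum(imp.values())
    return {f: v / total for f, v in imp.items()}
```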

A Framework of Predicting User's Personality Traits.
Figure 1 shows the architecture of the adaptive conditional probability-based predicting model for personality traits. An analysis of the correlations between personality traits and features is conducted before assigning weights to the linguistic, emotional statistical, and topic features extracted from the contents a user produces, as well as to the social network features. Meanwhile, a correlation analysis of the personality traits themselves is carried out. Then, according to the weights of the features, the k nearest neighbors set is obtained. Finally, given the k nearest neighbors set, the dynamically updated thresholds of the personality traits, and the correlations between personality traits, the user's personality traits set is predicted.

Algorithm of Adaptive Conditional Probability-Based User's Personality Traits Prediction.

Before proposing our predicting algorithm, we conducted experiments to analyze the correlations between users' personality traits, which will be shown in detail in Section 4.2. The results of the correlation analysis revealed interdependent relationships between personality traits. Thus, given dynamically updated thresholds of personality traits, and considering the interdependencies between them, a novel adaptive correlation-based predicting model is proposed.
In psychology, the Big Five personality traits are five broad domains or dimensions of personality traits that are used to describe human personality.The theory based on the Big Five factors is called Five Factor Model (FFM) [40].The Big Five factors are extraversion (denoted by EXT), neuroticism (denoted by NEU), agreeableness (denoted by AGR), conscientiousness (denoted by CON), and openness (denoted by OPN).
Our method aims to predict user u's Big Five personality traits set PT_u. As stated in [41], each personality trait is classified into two degrees: extraversion is mapped onto extravert and shy, neuroticism onto neurotic and secure, agreeableness onto friendly and uncooperative, conscientiousness onto precise and careless, and openness onto insightful and unimaginative. For convenience, we denote the two degrees of each personality trait as the positive level and the negative level, respectively. First, we set the initial threshold of each personality trait to 0.5, denoting the threshold of the jth personality trait T_j by θ(T_j), and calculate the distance between user u and the other users with (9):

Dis(u, v) = sqrt( Σ_{h=1}^{d} w(f_h) (f(u, f_h) − f(v, f_h))²ₓ ),    (9)

where Dis(u, v) denotes the distance between user u and user v, f(u, f_h) and f(v, f_h) denote the value of the hth feature f_h for user u and user v, and d denotes the number of features. Secondly, we sort user u's distance set in ascending order and select the top k users as user u's neighbors, denoted N(u). Then, for each T_j, we record the number of users in N(u) who have the positive level of T_j as C_j.
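The weighted distance and neighbor selection above can be sketched as follows. The weighted Euclidean form is one plausible reading of (9), since the framework states that neighbors are obtained according to the feature weights; the function names are our own.

```python
from math import sqrt

def weighted_distance(u, v, weights):
    """Feature-weighted Euclidean distance between two users'
    feature vectors (a sketch of Eq. (9))."""
    return sqrt(sum(w * (a - b) ** 2 for a, b, w in zip(u, v, weights)))

def k_nearest(target, others, weights, k):
    """Return the indices of the k users in `others` closest to `target`,
    sorted by ascending weighted distance."""
    order = sorted(range(len(others)),
                   key=lambda i: weighted_distance(target, others[i], weights))
    return order[:k]
```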
Thirdly, we select a T_j from the unvisited personality traits set UP. If user u's personality traits set PT_u is empty, we calculate the probability that user u has the positive level of T_j as

P(u, T_j) = P(H_j) P(E_{C_j} | H_j) / ( P(H_j) P(E_{C_j} | H_j) + P(∼H_j) P(E_{C_j} | ∼H_j) ),    (10)

where E_{C_j} denotes the event that exactly C_j instances in N(u) have the positive level of T_j, and H_j and ∼H_j denote the events that user u has the positive level and the negative level of T_j, respectively. If user u's personality traits set PT_u is not empty, then, considering the correlations between personality traits, we calculate the probability that user u has the positive level of T_j on the basis of prior knowledge about the correlations between T_j and the personality traits that have already been visited in UP:

P(u, T_j) = P(H_j | T_{a1}, T_{a2}, ...) P(E_{C_j} | H_j) / ( P(H_j | T_{a1}, T_{a2}, ...) P(E_{C_j} | H_j) + P(∼H_j | T_{a1}, T_{a2}, ...) P(E_{C_j} | ∼H_j) ),    (11)

where T_{a1}, T_{a2}, ... ∈ PT_u denote the personality traits that have already been visited in UP. If P(u, T_j) is greater than or equal to θ(T_j), then the positive level of T_j is added to PT_u; otherwise the negative level of T_j is added to PT_u.
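One way to estimate the posterior in (10) from training-set neighbor statistics, in the spirit of ML-kNN-style estimation, is sketched below. The smoothing parameter `s` and the function name are our additions, not part of the paper.

```python
def posterior_positive(c, k, count_pos_given_c, count_neg_given_c, prior_pos, s=1.0):
    """Bayesian estimate of P(trait positive | exactly c of k neighbors
    positive), a sketch assuming Laplace smoothing with parameter s.

    count_pos_given_c[c]: number of training users with the positive level
    whose k-neighborhoods contained exactly c positive users; similarly
    count_neg_given_c for users with the negative level.
    """
    # Smoothed likelihoods over the k+1 possible neighbor counts 0..k.
    p_c_pos = (s + count_pos_given_c[c]) / (s * (k + 1) + sum(count_pos_given_c))
    p_c_neg = (s + count_neg_given_c[c]) / (s * (k + 1) + sum(count_neg_given_c))
    num = prior_pos * p_c_pos
    return num / (num + (1 - prior_pos) * p_c_neg)
```

For (11), the prior `prior_pos` would be replaced by the conditional prior of the trait given the levels of the traits already predicted.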
We then calculate the error rate of each personality trait; the error rate of T_j, denoted Err(T_j), is calculated as

Err(T_j) = 1 − R_j / S,

where R_j denotes the number of instances classified correctly and S denotes the total number of instances. Finally, if the unvisited personality traits set UP is empty and the error rate of each personality trait falls within a specified range, the algorithm terminates; if UP is not empty, we continue to predict another personality trait in UP; otherwise, if there is a personality trait T_j whose error rate does not reach the defined limit, we update θ(T_j) with (12), where η denotes a monotonically decreasing learning rate, and add T_j back to UP to predict it again. Our algorithm is summarized in Algorithm 1. Since the neighbor statistics can be computed offline and do not need to be recalculated every time, the overall online complexity of our proposed method is O(n log2 n), as k and the number of traits are far smaller than the number of users n. Hence our proposed method is feasible in a big data environment such as Facebook monitoring.
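The error check and the threshold adjustment can be sketched as follows. The exact update rule (12) is not reproduced in the text, so `update_threshold` shows one plausible form (nudging the threshold in proportion to the excess error, scaled by a decreasing learning rate); it is an assumption, not the paper's formula.

```python
def error_rate(correct, total):
    """Err(T_j) = 1 - R_j / S: fraction of misclassified instances."""
    return 1.0 - correct / total

def update_threshold(theta, err, tolerated_err, eta):
    """One plausible form of the dynamic threshold update (12):
    move theta by eta times the excess of the error over the tolerated
    level. eta should decrease monotonically across iterations.
    This rule is a reconstruction, not the paper's exact equation."""
    return theta + eta * (err - tolerated_err)
```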

Experimental Evaluation
In this section, we first describe the dataset used in our experiments. We then analyze the interdependent relationships between users' personality traits. Finally, we conduct experiments on different kinds of features and compare our method with others on the same dataset.

Dataset. The myPersonality database records users' psychological and Facebook profiles, with consent, for research purposes. Currently, the database contains more than 6,000,000 test results together with more than 4,000,000 individual Facebook profiles. In this paper, we adopted the dataset provided in the Workshop on Computational Personality Recognition (Shared Task) (http://mypersonality.org/wiki/lib/exe/fetch.php?media=wiki:mypersonality final.zip) [35]. The dataset is a subset of the myPersonality database comprising 250 users with both personality information (annotated with users' personality scores and gold standard personality labels, which were self-assessments obtained using a 100-item long version of the IPIP personality questionnaire (http://ipip.ori.org/newFindingLabeling IPIP Scales.htm)) and social network structure (network size, betweenness centrality, density, brokerage, and transitivity), along with their 9900 status updates in raw text. With the aid of this standard dataset, we can compare the performance of our proposed method with other personality recognition systems on a common benchmark.

Analysis of Correlations between Personality Traits.

Previous works usually predicted users' personality traits without considering the interdependencies between them. In this context, we investigated whether there are relationships between users' personality traits. Since not all users have the same level of interactions, we first grouped users according to the different degrees of their personality traits. For example, if one user has the positive level of agreeableness and another user also has the positive level of agreeableness, they are placed in the same group; if one has the positive level and the other the negative level, they are not. Then, we explored the cooccurrences between personality traits in each group, as shown in Figure 2.
It can be observed intuitively that a significant proportion of users have the negative level of extraversion, the positive level of openness, or the positive level of agreeableness. It is also noteworthy that the positive level of extraversion has little overlap with the negative levels of openness, conscientiousness, and agreeableness. Besides, the positive level of neuroticism has little overlap with the positive levels of conscientiousness and agreeableness and with the negative level of openness. Furthermore, few users have the positive level of extraversion and the positive level of neuroticism simultaneously.
However, the above cooccurrences may simply be due to the a priori statistics of each trait. For instance, if there are more users with the negative level of neuroticism, the positive level of openness, and the positive level of agreeableness than others, it is more likely to observe more cooccurrences among these levels, as shown in Figure 2. Therefore, since users were grouped according to the different degrees of their personality traits, in order to investigate whether the above phenomenon indicates positive or negative correlations between different degrees of personality traits, in each group we treated the scores of the personality trait by which users were divided into the group and the scores of another personality trait with a certain degree as two random variables, and then employed the Kendall correlation coefficient calculated in (3) to analyze the real correlations between them. The results are presented in Table 1.
As can be seen from Table 1, we observe a negative correlation between the negative level of extraversion and the positive level of neuroticism; in other words, people who score high on extraversion are less likely to score high on neuroticism as well. In psychology, a person with the negative level of extraversion can be described as solitary, and a person with the negative level of conscientiousness as careless. Consequently, the negative levels of extraversion and conscientiousness are not in line with the positive level of neuroticism, which tends to experience unpleasant emotions easily. Another factor is social desirability: agreeableness, extraversion, conscientiousness, and openness are more or less "desirable," whereas neuroticism is quite clearly negative. Thus, in general, there are negative correlations between neuroticism and the other personality traits.
Moreover, inconsistently with Figure 2, there is a positive correlation between the positive level of neuroticism and the positive level of extraversion. This may be explained by the fact that a person with the positive level of extraversion tends to be enthusiastic and talkative, which is compatible with the positive level of neuroticism. In addition, in line with Figure 2, since the negative level of openness indicates cautiousness, it has a positive correlation with the negative level of extraversion and with the positive level of agreeableness, which is a tendency to be compassionate. However, there is a positive correlation between the negative level of openness and the positive level of neuroticism, which is not in keeping with Figure 2.
Since there are some contradictions between Figure 2 and Table 1, we leveraged the Jensen-Shannon divergence [45] to further analyze the correlations between personality traits. The Jensen-Shannon divergence between two probability distributions is calculated as

D_JS(P ‖ Q) = (1/2) D_KL(P ‖ M) + (1/2) D_KL(Q ‖ M),

where P and Q denote two probability distributions, M = (P + Q)/2 denotes the average distribution of P and Q, and D_KL(P ‖ M) denotes the Kullback-Leibler divergence [46] between P and M, which is calculated with

D_KL(P ‖ Q) = Σ_i P(i) log( P(i) / Q(i) ),

where P(i) and Q(i) denote the ith value of P and Q. The results are presented in Table 2.
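The two divergences can be implemented directly from their definitions. A minimal sketch using base-2 logarithms (so the Jensen-Shannon divergence is bounded by 1); the paper does not specify the logarithm base:

```python
from math import log2

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i P(i) * log2(P(i) / Q(i)).
    Terms with P(i) = 0 contribute zero by convention."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """D_JS(P || Q) = (1/2) D_KL(P || M) + (1/2) D_KL(Q || M),
    where M = (P + Q) / 2 is the average distribution."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```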
In keeping with Figure 2 and Table 1, the Jensen-Shannon divergences between the negative level of neuroticism and the positive levels of agreeableness, conscientiousness, and openness are larger. This is because a person with the negative level of neuroticism shows confidence in everything, and the positive level of openness reflects intellectual curiosity and creativity. In addition, Soto et al. [29] demonstrated convergent and discriminant correlations for the Big Five Inventory. Soto and John [30] also illustrated strong convergence between each BFI facet scale and the corresponding NEO PI-R facet. Moreover, Park et al. [28] demonstrated convergence with self-reports of personality at the domain and facet level. In summary, although different Big Five measures were used, the correlations in the myPersonality dataset are in keeping with the findings from other samples. Hence, the correlations between personality traits are expected, which actually proves that the myPersonality results are valid. Leveraging prior knowledge about relationships between personality traits may therefore contribute to better prediction of personality traits. These results provide theoretical support for our proposed method.

Evaluation Metric.
Since most researchers exploring personality recognition leveraged different measures to evaluate their experiments on various datasets, it is hard to fully compare their performance and quality. However, a common dataset annotated with gold-standard personality labels was made available in the Workshop on Computational Personality Recognition (Shared Task); participants were free to split the training and test sets as they wished, and precision, recall, and F1-measure were suggested for evaluating prediction results. So in this paper, for more reliable results, the proposed method was also evaluated with stratified 5-fold cross-validation. Since there is no training set in unsupervised learning, the common dataset was split into 5 folds and, in order to avoid overfitting, samples from the same author were not shared across folds. Precision, recall, and F1-measure are calculated as

Precision = tp / (tp + fp), Recall = tp / (tp + fn), F1 = 2 · Precision · Recall / (Precision + Recall),

where tp stands for the number of positive samples that are classified correctly, fp stands for the number of negative samples that are incorrectly classified as positive, and fn stands for the number of positive samples that are incorrectly classified as negative.
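The three metrics can be computed directly from the counts tp, fp, and fn; a minimal sketch (the function name is ours):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1-measure from the raw counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts: 40 correct positives, 10 false alarms, 10 misses.
p, r, f1 = precision_recall_f1(tp=40, fp=10, fn=10)
print(p, r, f1)  # 0.8 0.8 0.8
```

In the cross-validation setting described above, these counts would be accumulated over the 5 folds before the final scores are computed.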

Analysis of Impacts of Different Factors
First, we carried out experiments with our proposed method. Scores of precision, recall, and F1-measure are shown in Table 3, including the mean scores over all personality traits.

Impacts of Different Categories of Feature Sets.
In this section, we conducted experiments on social network features, linguistic features, emotional statistical features, topic features, and all features together. Due to space restrictions, Figure 3 only illustrates the F1-measure of each personality trait along with the mean F1-measure over all of them. It can be observed that experiments conducted on linguistic features, emotional statistical features, and topic features alone result in unsatisfactory performance with respect to social network features, and that social network features give the best classification performance for extraversion, which is consistent with the conclusion in [33]. Since social network features reflect a user's pattern of social status and habits, which are relatively important determinants of user's personality traits, they provide more information than the other types of features. Nevertheless, each kind of feature, whether more or less informative, provides unique information for user's personality traits prediction. Therefore, experiments using the combination of social network features, linguistic features, emotional statistical features, and topic features achieve better performance.

Impact of Weighted Features.
Additionally, we conducted experiments on unweighted features. Here, we only present the most successful results in Table 4, including the mean scores over all personality traits. From Tables 3 and 4, we can observe that the best results on the myPersonality dataset are achieved after distributing weights to the features. This illustrates that considering correlations between features and user's personality traits brings a performance gain to the prediction task, since only a limited number of features remain useful for user's personality traits prediction.
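A minimal sketch of how correlation-based feature weights could enter the model's distance computation. The weighting scheme shown here, a weighted Euclidean distance with weights taken as absolute Kendall correlations, is an illustrative assumption rather than the paper's exact formulation:

```python
import math

def weighted_distance(x, y, weights):
    """Euclidean distance in which each feature dimension is scaled by its
    weight (here assumed to be the absolute Kendall correlation between
    that feature and a personality trait)."""
    return math.sqrt(sum(w * (xi - yi) ** 2
                         for xi, yi, w in zip(x, y, weights)))

# Hypothetical weights: the second feature is deemed the most informative,
# so disagreement on it contributes the most to the distance.
weights = [1.0, 2.0, 0.5]
print(weighted_distance([1, 0, 3], [1, 1, 3], weights))  # sqrt(2) ≈ 1.414
```

The effect is that users who differ on highly trait-correlated features end up farther apart, so the nearest-neighbor selection is dominated by the features that actually carry personality signal.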

Impact of Dynamically Updated Thresholds of Personality Traits.
In addition, we also predicted user's personality traits with unified (fixed) thresholds. The results are summarized in Table 5.
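For illustration only, one simple way to update a decision threshold from an observed error rate might look as follows. The update rule, learning rate, and clamping below are our assumptions for the sketch, not the exact rule used in the proposed model:

```python
def update_threshold(threshold, error_rate, target=0.0, lr=0.1):
    """Illustrative dynamic update: nudge the decision threshold in
    proportion to how far the current error rate is from a target rate
    (assumed rule). The threshold is clamped to [0, 1]."""
    threshold += lr * (error_rate - target)
    return min(max(threshold, 0.0), 1.0)

t = 0.5
t = update_threshold(t, error_rate=0.3)  # a higher error rate moves t more
print(round(t, 2))  # 0.53
```

Whatever the exact rule, the point of updating per trait is that each personality trait gets its own operating point instead of a single unified cutoff.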
The multilabel learning task is to predict one or more categories for each instance, as in the PT5 method proposed by Tsoumakas and Katakis [34]. The work in [17] used Naïve Bayes (NB) for automatic recognition of personality traits from users' Facebook status updates; even with a small training dataset, it could achieve better results than most baseline algorithms. On the basis of a set of features extracted from the Facebook dataset, Alam et al. [43] explored the suitability and performance of several classification techniques, namely SVM, Bayesian Logistic Regression (BLR) [47], and Multinomial Naïve Bayes (mNB) [48]. Tomlinson et al. [44] used ranking algorithms for feature selection and Logistic Regression (LR) [49] as the learning algorithm, which achieved high performance. In this paper, we tested our proposed method under the same setting as the participants in the Workshop on Computational Personality Recognition (Shared Task). In terms of precision, recall, and F1-measure, a comparison of the works described above, as well as our proposed method (denoted by CP), is summarized in Tables 6, 7, and 8. The experimental results in Tables 6, 7, and 8 indicate that openness is the personality trait most easily predicted from Facebook features with the above methods, because there are more users with a positive level of openness than of any other personality trait in the Facebook dataset. Since Verhoeven et al. [42] trained an ensemble SVM model on both the Facebook and Essays personality traits datasets, it can achieve better results than our proposed method when tested on the same Facebook dataset. Furthermore, when trained and tested on the same Facebook dataset, our method outperforms the methods in [33,43,44], which do not take weighted features, dynamically updated thresholds of personality traits, or correlations between personality traits into consideration.

Conclusion
In this paper, we studied the problem of exploiting interdependencies between user's personality traits for predicting those traits. First, after analyzing the importance of features, we conducted experiments on the Facebook dataset to demonstrate the existence of correlations between user's personality traits. Bearing this in mind, an unsupervised framework, the adaptive conditional probability-based predicting model, was then proposed to predict user's Big Five personality traits based on the importance of features, dynamically updated thresholds of personality traits, and prior knowledge about correlations between personality traits. Furthermore, we compared our results with those achieved by others in the Workshop on Computational Personality Recognition (Shared Task) on the same dataset. In general, the experimental results demonstrated the effectiveness of our proposed framework.
In future work, we will explore directions for improving the framework's time complexity so as to better apply it in big-data environments such as Facebook monitoring. Besides, in order to make our framework better applicable to dynamic networks, we will explore combining time series analysis with the personality traits predicting algorithm, to capture the dynamic evolution of information and network structure. Furthermore, since the social network features (described in Section 3.1.1) are specific to our dataset and may be difficult (or even impossible) to obtain in other datasets, we will conduct experiments on other datasets, such as the Twitter corpus of the PAN shared task on author profiling (http://pan.webis.de/), where personality is also considered, to further validate the effectiveness of the proposed framework with different features.

Figure 1 :
Figure 1: The architecture of adaptive conditional probability-based predicting model for user's personality traits.

Figure 3 :
Figure 3: F1-measures of the five personality traits, along with their mean F1-measure, for different categories of feature sets.
Distribution of Features.
Every feature has a different impact on user's personality traits prediction. It is of great importance to allocate feature weights reasonably so as to be able to perform good prediction based on only scant knowledge of personality traits. The Kendall test is a nonparametric hypothesis test which calculates a correlation coefficient to test the statistical dependence of two random variables. Since the values of features and the scores of personality traits in our dataset are numeric, in order to quantify the importance of each feature, we analyzed the relevance between user's personality traits and features via the Kendall correlation coefficient, in which the values of a feature and the scores of a personality trait are treated as two random variables. The Kendall correlation coefficient is calculated as

τ = (n_c − n_d) / (n(n − 1)/2),    (12)

where n_c and n_d denote the numbers of concordant and discordant pairs, respectively, and n is the number of samples.

Complexity Analysis.
Assume that the size of the dataset is n, the dimension of features is d, and the size of the nearest-neighbors set is k. The complexity of the conditional probability-based predicting model for user's personality traits is analyzed as follows. From step (10) to step (12), computing the Kendall correlation coefficients between features and personality traits takes O(dn²) time, calculating the weights of features takes O(d) time, and distributing weights to features takes O(d) time; the total time is O(n²), as the feature dimension d is far less than the dataset size n. Calculating the distances between user u and the other users from step (13) to step (15) takes O(n) time. In step (16), sorting the set of distances between user u and the other users takes O(n log₂ n) time. Steps (18) to (20) take O(k) time to select the k nearest neighbors of user u. If the algorithm goes from step (23) to step (25), it takes O(k²) time; else, if it goes from step (26) to step (28), it also takes O(k²) time.

Input: user u; users set U; size of the nearest-neighbors set k; unvisited personality traits set UP.
Output: user u's personality traits set PT_u.
(1) f ← false /* f stands for a flag: if any of the thresholds of personality traits is updated, then f equals true; otherwise f equals false */
...
(17) N(u) ← select the top k users in {Dis(u, u₁), Dis(u, u₂), ...}
(18) For each personality trait p_i ∈ UP do
(19) n_i ← the number of users who have a positive level of personality trait p_i in N(u)

Participants completing the personality measures are highly motivated to answer honestly and carefully. Additionally, users' psychological and Facebook profiles are "verified" by individuals' social circles; hence, it is hard to lie to the people that know you best. Also, there is no pressure, and one does not have to share his/her profile information; thus, myPersonality avoids deliberate faking.
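The Kendall coefficient used above for feature weighting can be sketched as follows. This is a naive O(n²) tau-a implementation without tie correction; in practice an optimized library routine such as scipy.stats.kendalltau would be used:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a): concordant minus discordant
    pairs, divided by the total number of pairs n(n-1)/2."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1   # the pair is ordered the same way in x and y
        elif s < 0:
            discordant += 1   # the pair is ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0  (perfect agreement)
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0 (perfect reversal)
```

Running this once per feature against a trait's scores yields the per-feature coefficients from which the weights are derived, matching the O(dn²) term in the complexity analysis above.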

Table 1 :
Kendall correlation coefficients between personality traits. P and N stand for the positive level and negative level of a certain personality trait. The bold number is the highest Kendall correlation coefficient, and the italic number is the lowest. The larger the Kendall correlation coefficient, the better the consistency. The underlying quantity is the proportion of users who have a certain pair of personality traits simultaneously. PEXT, PNEU, PAGR, PCON, and POPN stand for the positive level of extraversion, neuroticism, agreeableness, conscientiousness, and openness; NEXT, NNEU, NAGR, NCON, and NOPN stand for the corresponding negative levels.

Table 2 :
Jensen-Shannon divergence between personality traits. P and N stand for the positive level and negative level of a certain personality trait. The bold number is the highest Jensen-Shannon divergence, and the italic number is the lowest. A smaller Jensen-Shannon divergence means better consistency.

Table 3 :
Precision, recall, and F1-measure of the adaptive conditional probability-based predicting model (best performance in bold).

Table 4 :
Precision, recall, and F1-measure of the predicting model with unweighted features (best performance in bold).

Table 5 :
Precision, recall, and F1-measure of the predicting model with unified thresholds (best performance in bold).

Table 6 :
Precision of different methods on the Facebook dataset. The best performance per personality trait appears boldfaced.
If the threshold is set too high, not all category tags will be predicted; on the other hand, if it is set too low, too many classes will be predicted. Hence, in this paper, we updated the thresholds of personality traits dynamically according to their error rates. Compared to Table 3, our method achieves better results with the dynamically updated thresholds.

Table 7 :
Recall of different methods on the Facebook dataset. The best performance per personality trait appears boldfaced.

Table 8 :
F1-measure of different methods on the Facebook dataset. The best performance per personality trait appears boldfaced.