LSOT : A Lightweight Self-Organized Trust Model in VANETs

With advances in the automobile industry and wireless communication technology, Vehicular Ad hoc Networks (VANETs) have attracted the attention of a large number of researchers. Trust management plays an important role in VANETs. However, it is still at a preliminary stage, and the existing trust models do not entirely conform to the characteristics of VANETs. This work proposes a novel Lightweight Self-Organized Trust (LSOT) model that combines trust certificate-based and recommendation-based trust evaluations. Neither supernodes nor trusted third parties are needed in our model. In addition, we comprehensively consider three factor weights to ease the collusion attack in trust certificate-based trust evaluation, and in recommendation-based trust evaluation we utilize the testing interaction method to build and maintain the trust network and propose a maximum local trust (MLT) algorithm to identify trustworthy recommenders. Furthermore, a fully distributed VANET scenario is deployed based on the well-known Advogato dataset, and a series of simulations and analyses are conducted. The results illustrate that our LSOT model significantly outperforms the excellent experience-based trust (EBT) and Lightweight Cross-domain Trust (LCT) models in terms of evaluation performance and robustness against the collusion attack.


Introduction
Nowadays, an increasing number of vehicles are being equipped with positioning and wireless communication devices, which has given rise to an independent research area known as VANETs [1,2]. Furthermore, VANETs have become one of the most prominent branches of Mobile Ad hoc Networks (MANETs), as they contribute to increased road safety and passenger comfort [3][4][5].
In VANETs, the participating nodes (i.e., vehicles) can interact and cooperate with each other by exchanging messages through nearby roadside units (i.e., vehicle to infrastructure) and intermediate vehicles (i.e., vehicle to vehicle) [6]. However, due to the characteristics of VANETs, namely, being large, open, distributed, highly dynamic, and sparse, they are vulnerable to various malicious behaviors and attacks [7].
Traditional cryptography and digital signature technologies mainly focus on ensuring the verifiability, integrity, and nonrepudiation of messages among nodes, and little attention has been paid to evaluating the quality of messages and nodes so as to deal with false information from malicious nodes, which may compromise VANETs [13,14]. In fact, authenticated nodes may also send out false information or collude with others to cheat honest nodes for their own sake [15,16].
Trust management plays a significant role in VANETs, as it enables each node to evaluate the trust values of other nodes before acting on their messages, thereby avoiding the dire consequences caused by false messages from malicious nodes [17]. However, only a few trust models for VANETs have been proposed so far [8,9,11,[18][19][20][21][22], and they can be roughly divided into two categories, namely, infrastructure-based and self-organized models [7,23].
Infrastructure-based trust models (as shown in Figure 1(a)) [8,9,18,19] usually include hierarchical Certificate Authorities (CAs) which are supposed to be totally trusted and able to satisfy a variety of security needs, such as authentication, integrity, nonrepudiation, and privacy. However, this kind of trust model rests on overly strong assumptions. For example, in these models the CAs must be totally trusted and online at all times, and every vehicle must be able to access the CAs at any time. In reality, the CAs may break down or even collude with some malicious vehicles to cheat honest ones, and vehicles may not be able to connect to the CAs where roadside units are unavailable (e.g., outside the city).
Since self-organized models are more applicable to the distributed and highly dynamic environment of VANETs, most recent trust models are built in this manner (as shown in Figure 1(b)) [11,[20][21][22]. In these models, the CAs are not guaranteed to be available at all times, and each node evaluates the trust value of a target node based on the local knowledge obtained from its own past experiences and the recommendations of neighbor nodes during a short period of time. Though a few self-organized trust models have been proposed, they still exhibit the following drawbacks.
(a) Due to their highly dynamic nature, VANETs are in effect temporary networks and the connections among nodes are short-lived. In most cases, a node will not interact with the same node more than once [24]. As a result, a node's own past experiences are usually not available for trust evaluation.
(b) Most messages in VANETs are time-critical (e.g., reports about traffic jams or accidents), and nodes need to evaluate their trust quickly and decide whether to act on them, while collecting trust recommendations requires large amounts of time and bandwidth resources [12], which does not conform well to the nature of VANETs.
(c) Though trust management can effectively detect malicious nodes and false messages and promote node collaboration, the trust model itself may become the target of attacks, such as the notorious collusion attack, which remains an open problem in the area of trust and reputation systems [14]. The existing self-organized trust schemes rarely consider robustness against the collusion attack.
To the best of our knowledge, no existing trust model for VANETs has overcome all the above limitations. This is precisely the motivation of our work. In this paper, we introduce the trust certificate [10,12] and testing interaction [25,26] and propose a novel LSOT model for VANETs. The major characteristics and contributions of our proposed model are summarized as follows.
(a) Our LSOT Model Is Built in a Lightweight and Fully Distributed Manner. In our proposed model, the nodes are self-organized, and neither supernodes (e.g., nodes with special roles) nor trusted third parties (e.g., CAs) are needed. Moreover, as our LSOT model aggregates both trust certificate-based and recommendation-based trust evaluations, evaluations in our model can be made quickly and reach excellent performance in a lightweight manner.

(b) Our LSOT Model Has High Evaluation Performance.
To demonstrate the performance of our proposed model, we deploy a VANET scenario based on the well-known Advogato dataset (http://konect.uni-koblenz.de/networks/advogato) and conduct a series of simulations and analyses. The results demonstrate that our proposed model significantly outperforms the excellent EBT model [25] and LCT model [12] in terms of evaluation performance.

(c) Our LSOT Model Has Strong Robustness against the Collusion Attack.
In our LSOT model, we adopt the testing interaction method to build and maintain the trust recommendation network and combine trust certificate-based and recommendation-based trust evaluations. Thus our proposed model has stronger robustness against the collusion attack than the LCT model, which has been verified by the simulations and analysis.
The rest of this paper is organized as follows. Section 2 reviews related work and its limitations. Section 3 demonstrates the motivation and general evaluation procedure of our LSOT model, and the trust certificate-based and recommendation-based trust evaluations are detailed in Sections 4 and 5, respectively. Afterwards, Section 6 introduces the aggregation evaluation method. Comprehensive simulations and analysis are presented in Section 7, and Section 8 concludes this paper.

Related Work
In recent years, a great deal of research on VANETs has been conducted using digital signature and cryptography technologies. Security and privacy have received wide attention, and the architectures, challenges, requirements, attacks, and solutions in VANETs have been analyzed by several researchers [13,[27][28][29][30]. However, these schemes mainly pay attention to ensuring the verifiability, integrity, and nonrepudiation of messages among nodes, and little attention has been paid to evaluating the quality of messages and nodes. In fact, an authenticated node may also send out false messages for its own sake, and others cannot perceive this in advance.
Trust management has proved to be a very useful solution for mobile distributed environments, as it enables each node to evaluate the trust values of others in advance so as to avoid interacting with malicious or selfish nodes. A large number of trust models have been proposed for MANETs [31], Wireless Sensor Networks (WSNs) [32][33][34], and Mobile Peer-to-Peer networks (MP2Ps) [35]. However, these trust models are not suitable for VANETs due to the unique characteristics and requirements of this field.
Currently, trust management in VANETs is still at a preliminary stage and only a few trust models have been proposed.These trust models can mainly be classified into two categories, namely, infrastructure-based and self-organized models.
In the infrastructure-based schemes, CAs are tasked with maintaining the trust scores of vehicles. Wu et al. [18] proposed a Roadside-unit Aided Trust Establishment (RATE) model for VANETs. This model has three properties, namely, an infrastructure-based architecture, a data-centric pattern, and the integration of observation and feedback. Park et al. [8] introduced a simple Long-Term Reputation (LTR) scheme based on the fact that plenty of vehicles have predefined, constant daily trajectories. In this model, roadside units monitor the daily behaviors of vehicles and update their reputation values. To ensure the freshness of reputation scores, the users have to query the roadside units frequently. Gómez Mármol and Martínez Pérez [19] surveyed the deficiencies of existing trust models in VANETs and suggested a set of design requirements for trust schemes specifically suitable to VANETs. Furthermore, they also presented an original Trust and Reputation Infrastructure-based Proposal (TRIP) from a behavioral perspective, instead of an identity-based one. Li et al. [9] introduced a Reputation-based Global Trust Establishment (RGTE) scheme in which a reputation management center is responsible for collecting trust information from all legal nodes and calculating the reputation scores of nodes.
As mentioned earlier, the infrastructure-based schemes require overly strong assumptions and may lead to issues such as a single point of failure and high maintenance cost. Thus most of the recent trust models for VANETs are built in a self-organized manner. Yang [20] proposed a novel Trust and Reputation Management Framework based on the Similarity (TRMFS) between messages and between vehicles. They also presented a similarity mining technique to identify similarity and an updating algorithm to calculate the reputation values. Bamberger et al. [21] introduced an Inter-vehicular Communication trust model based on Belief Theory (ICBT). This model mainly focuses on the direct experiences among vehicles and utilizes a binary error and erasure channel to make a decision based on the collected data. Hong et al. [22] noticed that VANETs face many different situations and switch quickly among them; they then described a novel Situation-Aware Trust (SAT) model which includes three important components. Huang et al. [11] brought Information Cascading and Oversampling (ICO) into VANETs and proposed a novel voting scheme, in which each vote has a different weight based on the distance between sender and event.
Though the above schemes provide many brilliant ideas, there exist several limitations, as analyzed earlier. In our previous work [12], we improved the classic Certified Reputation (CR) model [10] and proposed the LCT model for mobile distributed environments. In this model, trust certificates are adopted as they can be carried by trustees and contribute to establishing trust relationships in a highly dynamic environment in a fast and lightweight manner. However, this model is intuitively vulnerable to the collusion attack. In addition, to tackle the sparsity issue of VANETs, Minhas et al. [25] introduced a novel EBT scheme, in which vehicles send testing requests to each other and interactively compute the trust values of others based on the quality of the responses. In this way, a trust network can be built and updated dynamically. However, supernodes with special roles are needed in this model; thus, in essence, this model is not built in a fully self-organized way.
Aiming at building a lightweight trust model for VANETs in a fully self-organized way, as well as overcoming the limitations of the aforementioned schemes, we propose a novel LSOT model in this paper; intuitive comparisons with some other trust models are illustrated in Table 1.

The Framework of Our LSOT Model
In this section, we first show the motivation of our work with a fully self-organized VANET scenario. Afterwards, we introduce the general evaluation procedure in our proposed model through a simple example.

The Motivation of Our Work.
Before introducing our LSOT model, we first illustrate our motivation with the following VANET scenario (as demonstrated in Figure 2). In the past interactions (as shown in Figure 2(a)), vehicle A interacted with several nearby vehicles (e.g., B∼F) and accumulated a certain trust level. In a potential interaction (as shown in Figure 2(b)), A and its new neighbors (e.g., G) are strangers to each other. Due to the highly dynamic feature of VANETs, the majority of the previous interaction partners of A (e.g., B, D, and F) are far from G, and there exists no reliable trust path between them. So G can merely collect trust information about A from a few previous interaction partners of A (e.g., C and E; in fact they may not exist), and most of the previous trust information of A (e.g., with B, D, and F) has to be ignored when building the new trust relationships between A and G.

(Table 1 appears here; its recoverable rows compare the models: LTR [8], infrastructure-based, high overhead; RGTE [9], infrastructure-based, high overhead; EBT, self-organized with supernodes, midterm overhead; ICO [11], fully self-organized, low overhead, weak robustness; LCT [12], fully self-organized. "√": support; "×": nonsupport; "-": without consideration.)

As a result, with the high-speed movement of A, its trust information is mostly discarded and rebuilt again and again. This is distinctly unreasonable and is precisely the motivation of this work. How to utilize the previous trust information to quickly build new trust relationships is the key focus of this paper.

The Evaluation Procedure in Our LSOT Model.
To deal with the above problem, we propose a novel LSOT model; a simple example is illustrated in Figure 3. It is assumed that previous interactions occurred between A and B∼F. At the end of the past interactions, B∼F provide A with their trust certificates (i.e., TC(B, A) ∼ TC(F, A)), which are generated with digital signatures by B∼F. Then A stores and updates the trust certificates in its local storage. In a potential interaction, A can release a message (i.e., MS(A)), which includes six parts, that is, the identification of A (ID), message type (MT), message content (MC), trust certificates (TCs), timestamp (TS), and digital signature (DS), to neighboring vehicles (e.g., G). When G receives the message, it can check the authentication and integrity of MS(A) through digital signature technology and compute the trust certificate-based trust value of A according to the trust certificates. Moreover, G can also collect trust recommendations (e.g., TR(C, A, G) and TR(E, A, G)) about A from its trustworthy neighbors (e.g., C and E) and then derive the recommendation-based trust value of A. Afterwards, G can calculate the final trust value of A and decide whether to trust the message content or not. In the above process, A and G are referred to as the trustee and trustor, respectively; B∼F are referred to as certifiers, and C and E are called recommenders.
Consistent with the above example, the general evaluation procedure in our LSOT model is illustrated in Figure 4. Generally speaking, it involves four kinds of roles, namely, trustor (i.e., the receiver of a message), trustee (i.e., the sender of a message), certifier (i.e., a vehicle which provides a trust certificate), and recommender (i.e., a vehicle which has past interactions with the trustee and provides a trust recommendation to the trustor). Moreover, it mainly includes four steps: (a) At the end of past interactions, the certifiers provide their TCs to the trustee. (b) At the beginning of a potential interaction, the trustee can send out a message with TCs when needed. (c) When the trustor receives this message, it can derive the trust certificate-based trust value of the trustee based on the TCs. Besides, it can also send requests to its trustworthy neighbors for TRs. (d) The trustworthy recommenders provide TRs to the trustor, and then the trustor can obtain the recommendation-based trust value of the trustee. Afterwards, the trustor can calculate the final trust value of the trustee and decide whether to trust the message content from the trustee or not.

It should be noted that we do not distinguish between the trust value of a node and that of a message in this paper, aiming at building a lightweight trust model for VANETs. That is to say, we utilize the trust value of a node to directly derive the trust value of a message sent by that node. In our proposed model, the trust certificates for a node are stored by the node itself; thus this part of the trust information can be carried with the movement of the node. Furthermore, the trust certificates include digital signatures, and any change to them can be easily detected [10,12]; thus a node cannot modify its trust certificates for self-praise. Besides, the message is also attached with a digital signature; thus it cannot be tampered with even when relayed by other nodes. Benefiting from trust certificates, the previous trust information can be carried and utilized to conduct the trust evaluation quickly in a fully self-organized way.

Trust Certificate-Based Trust Evaluation
In this section, we first introduce the formal representations of the trust certificate and the message. Moreover, we comprehensively consider three factor weights, that is, number weight, time decay weight, and context weight, for the trust certificate. Finally, we present the trust certificate-based trust calculation method in detail.

The Formal Expressions of Trust Certificate and Message.
In our LSOT scheme, the trust certificate generated by certifier c for trustee t is denoted as

TC(c, t) = (ID(c), ID(t), TY(c, t), RV(c, t), LC(c), TS(c, t), DS(c, t)), (1)

where ID(c) and ID(t) are the identifications of certifier c and trustee t, respectively. TY(c, t) denotes the type of the corresponding message, and RV(c, t) represents the rating value, a real number within the range [0, 1]. A larger RV(c, t) means a higher satisfaction degree and vice versa. LC(c) represents the location coordinate of certifier c, and TS(c, t) denotes the timestamp when the trust certificate was generated. DS(c, t) represents the digital signature. The message released by trustee t is denoted as

MS(t) = (ID(t), MY(t), MC(t), TCs(t), TS(t), DS(t)), (2)

where ID(t) denotes the identification of trustee t. MY(t) and MC(t) stand for the type and content of the message, respectively. TCs(t) denotes the set of trust certificates for trustee t. TS(t) and DS(t) represent the timestamp and digital signature, respectively.
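As a concrete illustration, the certificate and message tuples above can be sketched as plain data structures. This is only a sketch in Python: the field names mirror the notation in (1) and (2), and the signature fields are placeholder strings rather than real cryptographic signatures.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrustCertificate:
    certifier_id: str          # ID(c)
    trustee_id: str            # ID(t)
    msg_type: str              # TY(c, t): type of the rated message
    rating: float              # RV(c, t) in [0, 1]; larger = more satisfied
    location: Tuple[float, float]  # LC(c): location coordinate of the certifier
    timestamp: int             # TS(c, t): when the certificate was generated
    signature: str = ""        # DS(c, t): placeholder, not a real signature

    def __post_init__(self):
        if not 0.0 <= self.rating <= 1.0:
            raise ValueError("rating must lie in [0, 1]")

@dataclass
class Message:
    trustee_id: str            # ID(t)
    msg_type: str              # MY(t)
    content: str               # MC(t)
    certificates: List[TrustCertificate] = field(default_factory=list)  # TCs(t)
    timestamp: int = 0         # TS(t)
    signature: str = ""        # DS(t): placeholder
```

The rating-range check enforces the constraint RV(c, t) ∈ [0, 1] stated above.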

Three Factor Weights for Trust Certificate.

Due to the unique feature of our LSOT scheme, the trustee may provide only profitable trust certificates to the potential trustor, or even collude with others to improve its own trust value and slander its competitors (i.e., collusion attack). Besides, the trustee may first accumulate a high trust value by releasing authentic but unimportant (e.g., entertainment-related) messages and then cheat others by issuing important (e.g., security-related) but false messages (i.e., value imbalance attack). In order to ease these two kinds of attacks, we comprehensively consider three factor weights, that is, number weight, time decay weight, and context weight.

Number Weight.
To balance robustness against the collusion attack and bandwidth consumption, TCs(t) consists of merely the n(t) (n(t) ≤ N) most favorable trust certificates from distinct certifiers, where N is a system parameter which depends on the current network status in terms of the collusion attack. The number weight WN(t) corresponding to n(t) is denoted as a piecewise function [12]:

WN(t) = { 0, if n(t) < N,
          1, if n(t) = N.

If n(t) is less than N, the trust certificates are considered incredible; thus WN(t) is set to 0. Otherwise, the trust certificates are viewed as reliable, so WN(t) is set to 1.
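The piecewise number-weight rule can be sketched as a one-line function; the parameter names (`n_certs` for the number of supplied certificates and `n_required` for the system parameter) are our own:

```python
def number_weight(n_certs: int, n_required: int) -> int:
    """Number weight: the certificate set is trusted (weight 1) only when
    the trustee supplies the required number of certificates from distinct
    certifiers; fewer than that yields weight 0."""
    return 1 if n_certs >= n_required else 0
```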

Time Decay Weight.
As is well known, a relatively recent trust certificate is more convincing than a less recent one, and an outdated trust certificate may be entirely unreliable, as the behavior of the trustee may change from honest to malicious in VANETs. Thus the time decay weight WT(c, t) for TC(c, t) is denoted as [36]

WT(c, t) = { 0, if TN − TS(c, t) > τ,
             e^(−(TN − TS(c, t))/τ), otherwise,

where TN is the current timestamp and τ is a time window.
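A minimal sketch of this decay rule, assuming (as the formula above suggests) that the exponential decay constant equals the time window:

```python
import math

def time_decay_weight(now: int, ts: int, window: int) -> float:
    """Time decay weight: a certificate older than the time window gets
    weight 0; within the window the weight decays exponentially with age."""
    age = now - ts
    if age > window:
        return 0.0
    return math.exp(-age / window)
```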
(a) Message Type. A trust certificate earned on a message type at least as important as the current message is more reliable than one earned on a less important type; thus the message type weight WY(c, t) is denoted as

WY(c, t) = { 1, if I(TY(c, t)) ≥ I(MY(t)),
             β, otherwise,

where I(∗) is the importance function of message type and β is a constant within the range [0, 1). If the importance of TY(c, t) is no less than that of MY(t), TC(c, t) is considered reliable and WY(c, t) is set to 1. Otherwise, TC(c, t) is regarded as not entirely credible and WY(c, t) is set to β.
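The message-type rule can be sketched as follows; the importance values passed in are assumed to come from some importance function over message types, which the model leaves application-defined:

```python
def message_type_weight(cert_importance: float, msg_importance: float,
                        beta: float = 0.5) -> float:
    """Context (message-type) weight: a certificate earned on a message at
    least as important as the current one gets full weight 1; otherwise it
    is discounted to a constant beta in [0, 1). The importance values and
    the default beta here are illustrative."""
    return 1.0 if cert_importance >= msg_importance else beta
```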
(b) Location. As discussed in related work [1,7,14], location is also an important contextual property. From the trustor's point of view, a trust certificate from a nearby certifier is more reliable than one from a remote certifier, as the latter has a higher likelihood of colluding with the trustee than the former. Thus the location similarity weight WL(v, c) between trustor v and certifier c is defined as a decreasing function of the distance between them. It should be noted that in VANETs the messages are usually broadcast in a one-to-many manner; thus the rating RV(c, t) is independent of WL(v, c) in our scheme.
The trust certificate-based trust value CT(v, t) of trustee t in the view of trustor v is calculated as

CT(v, t) = { Σ_c WT(c, t) · WY(c, t) · WL(v, c) · RV(c, t) / Σ_c WT(c, t) · WY(c, t) · WL(v, c), if n(t) = N,
             δ, otherwise. (8)

If n(t) equals N, the trust certificates are viewed as reliable and CT(v, t) is calculated as the weighted average of the N ratings from distinct certifiers. Otherwise, the trust certificates are considered unreliable and CT(v, t) is set to a default low value δ (0 < δ < 1). From (8), we can easily find that CT(v, t) falls in the range 0∼1. In fact, newcomer trustees may not have sufficient trust certificates, and malicious trustees may also pose as newcomers and refuse to provide unfavorable trust certificates, so their trust certificate-based trust values equal δ.
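Putting the pieces together, here is a sketch of the certificate-based trust calculation. The exact way the factor weights combine is our reading of the scheme (one combined weight per certificate multiplying its rating), not a verbatim reproduction of the original equation:

```python
def certificate_trust(ratings, weights, n_required, default=0.1):
    """Trust certificate-based trust CT: when enough certificates are
    present, the weighted average of their ratings, where each entry of
    `weights` is assumed to be the product of that certificate's factor
    weights (time decay x context x location); otherwise a default low
    value is returned, as for newcomers without certificates."""
    if len(ratings) < n_required or sum(weights) == 0:
        return default
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)
```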

Recommendation-Based Trust Evaluation
In this section, we first present the formal representation of the trust recommendation. Next we introduce the formation of the trust network based on testing interactions. Moreover, we propose an effective MLT algorithm to identify all the trustworthy recommenders and introduce the details of the recommendation-based trust calculation method.

The Formation of Trust Network.
Due to the sparse and highly dynamic characteristics, there are no sufficient or long-term trust relationships among nodes in VANETs. In order to tackle this problem, we introduce the idea of allowing nodes to send several testing requests (to which the senders already know the corresponding answers) to each other and to calculate the trust values of the receivers according to the accuracy and timeliness of their responses. Inspired by previous work [25,26], we adopt and improve the classic experience-based trust evaluation scheme [37].
Let TV(s, r) ∈ [0, 1] be the trust value demonstrating the satisfaction degree of sender s with the responses of receiver r. If sender s does not receive any response from receiver r, TV(s, r) is set to 0. Whenever sender s receives a response from receiver r, it updates TV(s, r) based on the following rules:

TV(s, r) ← TV(s, r) + α · (1 − TV(s, r)), if the response is satisfactory, (10)
TV(s, r) ← TV(s, r) − β · TV(s, r), otherwise, (11)

where α and β are the increment and decrement factors, respectively, and both fall within (0, 1). Moreover, we set α < β due to the fact that trust is difficult to build up but easy to drop off. We can easily see that experience-based trust is accumulated and that the trust values of nodes can be updated recursively by (10) and (11). Moreover, the computational cost of the above calculations is very small, and each node can easily evaluate the trust values of other nearby nodes through testing interactions; thus the trust network can be generated and dynamically updated in a lightweight manner. A simple example is shown in Figure 5.
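The testing-interaction update can be sketched as follows. The precise forms of rules (10) and (11) are not fully legible in our copy, so this uses a common increase/decrease pattern consistent with the stated properties (trust kept in [0, 1], increment factor smaller than decrement factor):

```python
def update_trust(tv: float, satisfactory: bool,
                 alpha: float = 0.05, beta: float = 0.1) -> float:
    """Experience-based trust update after one testing interaction.
    alpha < beta reflects that trust is hard to build but easy to lose."""
    if satisfactory:
        tv = tv + alpha * (1.0 - tv)   # move toward 1 on a good response
    else:
        tv = tv - beta * tv            # shrink toward 0 on a bad response
    return min(max(tv, 0.0), 1.0)      # clamp to the valid range [0, 1]
```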

Trust Calculation Method.
In recommendation-based trust evaluation, only the ratings from trustworthy recommenders are considered. To identify trustworthy recommenders, we propose a novel MLT algorithm (i.e., Algorithm 1) to calculate the maximum local trust values of all recommenders from the viewpoint of the trustor.
As we know, the trust network in VANETs is highly dynamic, and the reliability of trust evaluation becomes very low when the trust path is too long [38]. Therefore, we consider trust decay in our MLT algorithm. Specifically, suppose v_0 → v_1 → ⋅⋅⋅ → v_h (where v_0 = v is the trustor, v_h = r, and recommender r has previous interactions with trustee t) is one of the optimal trust paths from trustor v to recommender r; then the maximum local trust value MT(v, r) (i.e., M[r] in Algorithm 1) of recommender r from the perspective of trustor v can be obtained from [39]:

MT(v, r) = λ^h · ∏_{i=1}^{h} TV(v_{i−1}, v_i), (12)

where h is the number of hops from trustor v to recommender r and λ is a parameter which controls the speed of trust decay. If MT(v, r) reaches the trust threshold TH(v) of trustor v, recommender r is viewed as trustworthy, and vice versa. Similarly, we can obtain all the elements of the trustworthy recommender set RS(v, t) and calculate the recommendation-based trust value RT(v, t) of trustee t in the view of trustor v as [40]

RT(v, t) = { Σ_{r∈RS(v,t)} MT(v, r) · RV(r, t) / Σ_{r∈RS(v,t)} MT(v, r), if RS(v, t) ≠ ∅,
             ν, otherwise. (13)

If RS(v, t) is not empty, RT(v, t) is calculated as the weighted average of the ratings from all the trustworthy recommenders. Otherwise, RT(v, t) is set to a default low value ν (0 < ν < 1). From (10)∼(13), we can find that the range of RT(v, t) is also 0∼1.

(The pseudocode listing of Algorithm 1, our MLT algorithm, appears here.)
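The MLT idea, finding for each recommender the best (maximum-trust) path from the trustor under per-hop decay and a hop limit, can be sketched as a best-first search. The per-hop discount and the `max_hops` cutoff are our assumptions matching the description of Algorithm 1, not a transcription of it:

```python
import heapq

def mlt(graph, trustor, decay=0.9, max_hops=3):
    """For every node reachable from `trustor` within `max_hops`, compute
    the maximum product of edge trust values along a path, discounted by
    `decay` per hop. `graph[u]` maps each neighbor v to TV(u, v)."""
    best = {trustor: 1.0}
    heap = [(-1.0, trustor, 0)]          # (negated trust, node, hops used)
    while heap:
        neg, u, hops = heapq.heappop(heap)
        trust = -neg
        if trust < best.get(u, 0.0) or hops >= max_hops:
            continue                     # stale entry or hop limit reached
        for v, tv in graph.get(u, {}).items():
            cand = trust * tv * decay
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v, hops + 1))
    del best[trustor]
    return best

def recommendation_trust(mlt_scores, ratings, threshold=0.5, default=0.1):
    """RT: weighted average of the ratings from recommenders whose maximum
    local trust reaches the trustor's threshold; default if none qualify."""
    rs = [r for r in ratings if mlt_scores.get(r, 0.0) >= threshold]
    if not rs:
        return default
    return sum(mlt_scores[r] * ratings[r] for r in rs) / sum(mlt_scores[r] for r in rs)
```

With `decay=1.0` the search reduces to a plain maximum-product path search, which makes its behavior easy to check by hand.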

Aggregation Trust Evaluation
As mentioned earlier, trust certificate-based and recommendation-based trust evaluations have different advantages and weaknesses: (a) Compared to recommendation-based trust evaluation, trust certificate-based evaluation can be conducted in a faster and more lightweight manner (the detailed analysis is provided in our previous work [12]), while it is intuitively more vulnerable to the collusion attack, as the certifiers are strangers to the trustor in most cases. (b) Recommendation-based trust evaluation seems more credible than trust certificate-based evaluation, as only the ratings of trustworthy recommenders are considered in the former. But collecting the opinions of trustworthy recommenders consumes large amounts of time and bandwidth resources, especially when MH is set to a relatively high value (e.g., 6).
Thus it is beneficial to aggregate these two kinds of trust evaluations to achieve a more accurate evaluation result. In our scheme, the final trust value FT(v, t) of trustee t in the sight of trustor v is calculated as

FT(v, t) = ω · CT(v, t) + (1 − ω) · RT(v, t),

where ω is a weight parameter within the range [0, 1] which controls the weights of the two kinds of trust evaluations in the aggregation. So the range of FT(v, t) is also 0∼1. Specifically, when ω equals 1 or 0, the aggregation trust evaluation reduces to mere trust certificate-based or mere recommendation-based evaluation, respectively. In other cases (i.e., 0 < ω < 1), the aggregation trust evaluation falls in between the trust certificate-based and recommendation-based evaluations.
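The aggregation step is a plain convex combination of the two trust values and can be sketched directly:

```python
def final_trust(ct: float, rt: float, omega: float = 0.5) -> float:
    """Aggregate the certificate-based value ct and the recommendation-based
    value rt; omega=1 reduces to ct alone, omega=0 to rt alone."""
    assert 0.0 <= omega <= 1.0
    return omega * ct + (1.0 - omega) * rt
```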

Simulations and Analysis
To demonstrate the performance of our LSOT model, we present a series of simulations and analyses in this section. Specifically, we first deploy a fully distributed VANET scenario based on the famous Advogato dataset. Then we validate the variations of both the average trust values and the average acceptance rates of the three kinds of messages. Moreover, we compare the evaluation performance of our proposed model with that of the EBT and LCT models. Finally, we analyze and verify the robustness of our LSOT model against the collusion attack in comparison with that of the LCT model.

Simulation Settings.
In this work, the comprehensive simulations are implemented in Java on an Ubuntu server with a 2.83 GHz CPU and 4 GB RAM. In concrete terms, we first deploy a fully distributed VANET scenario: the trust recommendation network is built based on the famous Advogato dataset, which includes 6541 nodes and 51127 directed edges denoting three kinds of trust relationships among nodes, namely, apprentice, journeyer, and master, whose corresponding trust values are 0.6, 0.8, and 1.0, respectively. As the nodes in the Advogato dataset do not contain location information, we set the distance parameters to ∞ in our simulations so as to ensure WL(v, c) ≡ 1. In addition, MH is set to a relatively low value (i.e., 3) due to the highly dynamic and time-critical features of VANETs. The nodes' trust thresholds are randomly generated. Three kinds of messages, namely, honest (i.e., authentic and helpful), general (i.e., authentic but valueless), and malicious (i.e., false and harmful) messages, are sent from different senders. In each test, a random node receives a message from a certain sender and evaluates its trust value by utilizing our LSOT scheme. If the message's derived trust value reaches the node's trust threshold, the node accepts the message and provides a new trust certificate to the sender according to its satisfaction degree with the message. After each test, the timestamp increases by 1. The parameters in our simulations are set as illustrated in Table 2.
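One simulation step as described above can be sketched as follows; `evaluate` stands in for the full LSOT evaluation, and the certificate dictionary is a hypothetical simplification of the real trust certificate:

```python
def run_test(evaluate, threshold, message):
    """One test: the receiving node evaluates the message's trust value and
    accepts it only if the value reaches the node's trust threshold; an
    accepted message earns the sender a fresh trust certificate."""
    trust = evaluate(message)
    accepted = trust >= threshold
    new_certificate = {"rating": trust, "trustee": message["sender"]} if accepted else None
    return accepted, new_certificate
```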

Validating the Evaluation Performance.
In this part, we mainly validate the average trust value variations of the three kinds of messages in an honest environment, and we also reveal the variations of their average acceptance rates. In concrete terms, we divide the 500 tests into 5 equal intervals (i.e., I1∼I5) and then calculate the average acceptance rate in each interval. The simulation is repeated 1000 times for each kind of message, and the average results are shown in Figures 6 and 7.
We first analyze the variations of the average trust values as shown in Figure 6. In the initial stage, the three kinds of messages have the same trust value (i.e., 0.10). As the number of tests increases (0∼300 tests), the average trust value of honest messages rises rapidly from 0.10 to 0.64 due to their excellent quality, while that of general messages grows slowly from 0.10 to 0.36. Besides, the average trust value of malicious messages remains at about 0.10 on account of their terrible performance. In the later tests (300∼500 tests), all three kinds of messages maintain essentially constant average trust values (i.e., 0.64, 0.36, and 0.10, resp.).
Next, we analyze the variations of the average acceptance rates as shown in Figure 7. In the first three intervals (i.e., I1∼I3), the average acceptance rate of honest messages grows from 27.46% to 63.01% and that of general messages rises from 18.60% to 36.49%, while that of malicious messages basically stays unchanged at 11.43%. In the later intervals (i.e., I4 and I5), all three kinds of messages maintain almost constant average acceptance rates (i.e., 64.65%, 37.40%, and 11.43%, resp.).
As we know, honest messages bring benefits and malicious messages entail risks; thus the higher the average trust value and average acceptance rate of honest messages, the better, and the lower those of malicious messages, the better. Therefore, the above results show that our LSOT model significantly improves the average trust value and average acceptance rate of honest messages without increasing the risks caused by malicious messages.

Comparing the Evaluation Performance.
In this simulation, we mainly compare the evaluation performance of our LSOT model with that of the EBT and LCT models, as they are similar to our model. Moreover, we deploy and, where necessary, modify these two models in our VANET scenario. As the trust ranges in the EBT and LCT models are [−1, 1] and [0, 100], respectively, different from that in our proposed model (i.e., [0, 1]), they are all converted to [0, 1] for comparison. Besides, role-based trust is removed from the EBT model as it is not consistent with the fully self-organized setting. This simulation is also repeated 1000 times for each kind of message in the EBT and LCT models, and the average results are shown in Figure 8. Moreover, we also compare the average acceptance rates of honest and general messages in every interval (i.e., I1∼I5) in the three models, as illustrated in Figure 9.
We first analyze the variations of the average acceptance rate of honest messages in the three trust models, as shown in Figure 9(a). In the first interval (i.e., I1), the LCT model has a distinctly lower average acceptance rate (i.e., 10.99%) than the EBT model (i.e., 30.74%) and our LSOT model (i.e., 27.46%). This is because the LCT model includes only the trust certificate-based evaluation, and the senders of honest messages cannot yet provide sufficient trust certificates to improve their own trust values, whereas the EBT model places no restriction on the number of recommenders in the recommendation-based trust evaluation, so the average trust value of honest messages rises with the increasing test times. Our LSOT model absorbs the merits of the recommendation-based evaluation; thus in I1 its average acceptance rate is much higher than that of the LCT model and only slightly lower than that of the EBT model.
In the later intervals (i.e., I2∼I5), the EBT model falls behind: it contains only the recommendation-based evaluation, and a portion of the recommenders cannot be reached within the maximum allowable hop count (i.e., 3), whereas in the LCT model the trust certificates attached to the messages help improve the trust values of honest messages. Our LSOT model includes both the trust certificate-based and recommendation-based trust evaluations; thus in I2∼I5 its average acceptance rate is much higher than that of the EBT model and generally higher than that of the LCT model.
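The hop restriction mentioned above can be illustrated with a small breadth-first search over a toy trust graph. The function and the example edges below are hypothetical and do not reproduce the paper's MLT algorithm; they only show why some recommenders become unreachable under a 3-hop limit:

```python
from collections import deque

def reachable_recommenders(trust_edges, source, max_hops=3):
    """Return the nodes reachable from `source` within `max_hops`
    hops over directed trust edges, via breadth-first search."""
    seen = {source: 0}  # node -> hop distance from source
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # do not expand beyond the hop limit
        for nxt in trust_edges.get(node, ()):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    seen.pop(source)
    return set(seen)

# Toy chain: "e" sits 4 hops from "a", so it cannot serve as a recommender.
edges = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["e"]}
```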
Next, we analyze the variations of the average acceptance rate of general messages in the three trust models, as shown in Figure 9(b). In the first interval (i.e., I1), the average acceptance rate in our LSOT model (i.e., 18.60%) is much higher than that in the LCT model (i.e., 10.98%) and slightly lower than that in the EBT model (i.e., 22.41%). In the later intervals (i.e., I2∼I5), the average acceptance rate in our LSOT model rises rapidly and then stays essentially unchanged at a higher rate (i.e., 37.09%) than those in the EBT model (i.e., 29.84%) and the LCT model (i.e., 35.30%). The detailed analysis is omitted, as it is similar to that for honest messages.
We also analyze the average acceptance rate of malicious messages in the three trust models (as it remains at about 11.46% in every model, the comparison chart is omitted for space reasons). In the LCT model, the senders of malicious messages act as newcomers and refuse to provide any unfavorable trust certificates; thus both their average trust value and average acceptance rate remain largely constant. In the EBT model, owing to the malicious behaviors and the "reentry" strategy [41], the average trust value and average acceptance rate of malicious messages also remain essentially unchanged. Our LSOT model combines both kinds of evaluation; thus the average acceptance rate of malicious messages likewise remains largely unchanged.
From the above analysis, it is clear that our LSOT model not only limits the risks caused by malicious messages as effectively as the EBT and LCT models do but also greatly raises the average acceptance rate of honest messages and improves that of general messages to some extent. Our LSOT model therefore offers better overall evaluation performance than the EBT and LCT models.

Comparing the Robustness Characteristics.
In the previous parts, we considered the performance of our model in an honest environment; in this part, we verify and analyze the robustness of our model against the collusion attack by comparing it with the LCT model. The comparison with the EBT model is omitted, as that model does not consider the collusion attack. Owing to the distributed nature of VANETs, malicious nodes may collude with other nodes to raise their own trust values (i.e., ballot stuffing) or to slander their honest competitors (i.e., bad mouthing) [42], which brings risks to message receivers. A good trust model for VANETs should therefore be able to detect and filter out such attacks.
In the trust certificate-based trust evaluation the certifiers are strangers to the active trustor, whereas in the recommendation-based trust evaluation the recommenders are trustworthy from the perspective of the active trustor. The certifiers are therefore more likely to collude with malicious senders than the recommenders are. The LCT model consists solely of the trust certificate-based trust evaluation and is thus intuitively vulnerable to the collusion attack, whereas our LSOT model aggregates the trust certificate-based and recommendation-based trust evaluations and hence has relatively strong robustness against it.
Next, we validate the above analysis through two simulations in which the recommenders are assumed to be trustworthy and a certain percentage of the certifiers (e.g., 0%, 25%, 50%, 75%, or 100%) may be collusive.
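The intuition behind the two simulations can be sketched in a few lines. The rating values, the 50/50 blending weight, and the certifier count below are assumptions for illustration only; they mimic, rather than reproduce, the LCT and LSOT evaluations:

```python
def certificate_trust(honest_rating, collusive_rating, pcc, n=20):
    """Mean rating over n certifiers, a fraction pcc of whom are collusive."""
    k = round(pcc * n)
    return (k * collusive_rating + (n - k) * honest_rating) / n

def lct_trust(cert_trust):
    """LCT-style evaluation: trust certificates only."""
    return cert_trust

def lsot_trust(cert_trust, rec_trust, w=0.5):
    """LSOT-style evaluation: blend certificate-based and
    recommendation-based trust (w is an assumed weight)."""
    return w * cert_trust + (1 - w) * rec_trust

# Ballot stuffing: collusive certifiers rate a malicious message 1.0,
# honest certifiers rate it 0.1; trustworthy recommenders report 0.1.
stuffing_gap = [lct_trust(certificate_trust(0.1, 1.0, p))
                - lsot_trust(certificate_trust(0.1, 1.0, p), 0.1)
                for p in (0.0, 0.25, 0.5, 0.75, 1.0)]

# Bad mouthing: collusive certifiers rate an honest message 0.0,
# honest certifiers rate it 0.9; trustworthy recommenders report 0.9.
mouthing_gap = [lsot_trust(certificate_trust(0.9, 0.0, p), 0.9)
                - lct_trust(certificate_trust(0.9, 0.0, p))
                for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

In both attacks the gap between the two evaluations widens as PCC grows, because the trustworthy recommendations damp the distortion injected through the certificates.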

Ballot Stuffing.
In this part, we compare the robustness of our LSOT model against ballot stuffing with that of the LCT model. In ballot stuffing, the collusive certifiers provide favorable trust certificates with high rating values for malicious messages in spite of their bad performance. In each simulation, we vary the Percentage of Collusive Certifiers (PCC) and then calculate the average trust value of malicious messages in each case. The simulation is repeated 1000 times, and the average results are illustrated in Figure 10.
In the ideal case (i.e., PCC = 0%), shown in Figure 10(a), the curves of the average trust values of malicious messages in the two trust models are very close to each other. As PCC increases, the curve for the LCT model becomes steeper and steeper while that for our LSOT model rises slowly, so the gap between the two curves gradually grows. In the extreme case (i.e., PCC = 100%), shown in Figure 10(e), the gap between the two curves reaches its maximum, and the average trust value of malicious messages in our LSOT model is significantly lower than that in the LCT model.
As mentioned earlier, the lower the average trust value of malicious messages, the better; thus the above results demonstrate that our LSOT model is more robust against ballot stuffing than the LCT model.

Bad Mouthing.
In this part, we validate the robustness of our LSOT model against bad mouthing by comparing it with the LCT model. In bad mouthing, the collusive certifiers provide adverse trust certificates with low rating values for honest messages in spite of their good quality. In each simulation, we vary PCC and compute the average trust value of honest messages in each case. The simulation is also repeated 1000 times, and the average results are shown in Figure 11.
In the ideal case (i.e., PCC = 0%), shown in Figure 11(a), the curve of the average trust value of honest messages in our LSOT model is approximately consistent with that in the LCT model. As PCC increases, the curve for the LCT model grows more and more slowly while that for our LSOT model continues to rise relatively fast; thus the gap between the two curves progressively widens. In the extreme case (i.e., PCC = 100%), shown in Figure 11(e), the gap between the two curves reaches its maximum, and the average trust value of honest messages in our LSOT model is much higher than that in the LCT model.
As mentioned earlier, the higher the average trust value of honest messages, the better; thus the above results illustrate that our LSOT model significantly outperforms the LCT model in terms of robustness against bad mouthing.

Conclusion
In this work, we have proposed a novel LSOT model for VANETs that operates in a fully self-organized way and requires neither supernodes nor trusted third parties. It combines trust certificate-based and recommendation-based trust evaluations, so evaluations can be made quickly and accurately in a lightweight manner. In the trust certificate-based trust evaluation, we have comprehensively considered three factor weights, namely, the number weight, the time decay weight, and the context weight, to ease the collusion attack and make the evaluation result more accurate. In the recommendation-based trust evaluation, we have utilized the testing interaction method to build and maintain the trust network and proposed an effective MLT algorithm to identify trustworthy recommenders. Moreover, we have deployed a fully distributed VANET scenario based on the celebrated Advogato dataset and conducted a series of simulations; the results illustrate that our LSOT model outperforms the EBT and LCT models in terms of both evaluation performance and robustness against the collusion attack.

Figure 1: Classic trust models in VANETs (where A∼C denote CAs and a∼f represent vehicles).

Figure 4: General evaluation procedure in our LSOT model.

Figure 5: Trust network formation based on testing interactions.

Figure 6: Average trust value variations of three kinds of messages in our LSOT model.

Figure 7: Average acceptance rate variations of three kinds of messages in our LSOT model.

Figure 8: Average acceptance rate variations of three kinds of messages in the EBT and LCT models.

Figure 9: Average acceptance rate comparisons of honest and general messages in the three trust models.

Figure 10: Average trust value comparisons of malicious messages with different PCC values.

Figure 11: Average trust value comparisons of honest messages with different PCC values.

Table 1: Intuitive comparisons between our LSOT model and some other trust models in VANETs.
is a time unit that controls the speed of the time decay. If the time difference between TN and the timestamp TS(·,·) exceeds it, the trust certificate TC(·,·) is considered unreliable and the time decay weight WT(·,·) is set to 0; otherwise, WT(·,·) is an exponential decay function of the time difference.

4.2.3. Context Weight. Last but not least, we also take the context weight into account for TC(·,·). Specifically, we consider the two most important contextual properties, namely, message type and location.

(a) Message Type. As mentioned earlier, a node may first accumulate a high trust value by releasing authentic but unimportant messages and then cheat other nodes by issuing important but false messages (i.e., the value imbalance attack); thus we consider the message type similarity weight WY(·,·) for TC(·,·): WY(·,·) equals 1 if the importance of the certified message type TY(·,·) is no less than that of the message type MY(·) to be released, and a smaller constant otherwise.

(b) Location. A distance threshold and a constant control the speed of the distance decay. If the distance between the certifier and the trustor exceeds the threshold, TC(·,·) is viewed as unreliable and the location weight WL(·,·) is set to 0; otherwise, WL(·,·) is an exponential decay function of the distance.

4.3. Trust Calculation Method. Next, we detail the trust certificate-based trust calculation method. At the end of each past interaction, the certifier generated a trust certificate TC(·,·) and sent it to the trustee. When the trustee needs to release a message MS(·), it first chooses its most advantageous trust certificates from its local storage based on the weighted rating value RW(·,·), which can be derived from RW(·,·) = RV(·,·) * WT(·,·) * WY(·,·).
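A minimal sketch of how such a weighted rating could be computed follows. The parameter names, the penalty constant, and the exact decay forms are assumptions made for illustration; the paper defines them precisely in equations omitted from this excerpt:

```python
import math

def time_decay_weight(t_now, t_cert, validity, unit):
    """WT: exponential decay with certificate age; 0 beyond `validity`."""
    age = t_now - t_cert
    if age > validity:
        return 0.0  # certificate too old: considered unreliable
    return math.exp(-age / unit)

def location_weight(distance, d_max, gamma):
    """WL: exponential decay with certifier-trustor distance; 0 beyond d_max."""
    if distance > d_max:
        return 0.0  # certifier too far away: considered unreliable
    return math.exp(-gamma * distance)

def message_type_weight(cert_importance, msg_importance, penalty=0.5):
    """WY: full weight only if the certified message type is at least as
    important as the message being released (guards against the
    value imbalance attack); `penalty` is an assumed constant."""
    return 1.0 if cert_importance >= msg_importance else penalty

def weighted_rating(rv, wt, wy):
    """RW = RV * WT * WY, as in the formula above."""
    return rv * wt * wy
```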

Table 2: Parameter settings in our simulations.