Anomaly Detection for Internet of Vehicles: A Trust Management Scheme with Affinity Propagation

Anomaly detection is critical for intelligent vehicle (IV) collaboration. By forming clusters/platoons, IVs can work together to accomplish complex jobs that they are unable to perform individually. To improve the security and efficiency of the Internet of Vehicles, IV anomaly detection has been extensively studied and a number of trust-based approaches have been proposed. However, most of these proposals either pay little attention to leader-based detection algorithms or ignore the utility of networked Roadside Units (RSUs). In this paper, we introduce a trust-based anomaly detection scheme for IVs, where some malicious or incapable vehicles exist on the roads. The proposed scheme works by allowing IVs to detect abnormal vehicles, communicate with each other, and finally converge to some trustworthy cluster heads (CHs). Periodically, the CHs take responsibility for intracluster trust management. Moreover, the scheme is enhanced with a distributed supervising mechanism and a central reputation arbitrator to assure robustness and fairness in the detecting process. The simulation results show that our scheme achieves a detection failure rate below 1%, demonstrating its ability to detect and filter out abnormal vehicles.


Introduction
The Internet of Vehicles (IoV) is an open converged network system supporting human-vehicle-environment cooperation [1]. Fusing multiple advanced technologies, such as VANET [2], autonomous driving [3], cloud computing [4], and multiagent systems (MAS) [5], this hybrid concept plays a fundamental role in building a cooperative and effective intelligent transport system. An anomaly detection scheme is desirable in an environment filled with uncertainty. Primarily, the security problem is motivated by the question [6] "How can I trust the information content I receive?" This issue is then decomposed into two subquestions: "Is the communication channel via which I receive messages from a sender secure?" and "How can I trust the sender of the messages I receive?" The decomposition allows us to tell the difference between computational trust and behavioral trust. Being a complementary part of computational trust (such as encryption and tamper-proofing), the model of behavioral trust admits information's imperfection in an open system; therefore, individuals need extra trust-related information in decision-making [7]. Extra trust-related information can be extracted from historical reputation or elicited from interaction experience between two individuals. Being capable of providing a measurement of trustworthiness, behavior-based trust management enables intelligent vehicles to improve collaboration by reducing false or malicious behaviors. Anomaly detection technology is the key method for building behavioral trust.
This paper aims to detect anomalous vehicles in an autonomous driving environment. As commercial IVs draw near, we have to face the fact that vehicles are becoming more and more intelligent. Meanwhile, cyber vehicles are unprecedentedly vulnerable when supported by an uncertain and dynamic network [8]. Malicious attacks and information tampering, along with system failures, will directly threaten human lives and property. Anomalous vehicles include malicious vehicles and incapable vehicles. Malicious vehicles are entities with intentions to cause damage in the driving environment. Incapable vehicles do not intend to exert negative influence; however, they may disturb order due to their limited capability. For example, an incapable intelligent vehicle may not behave properly in a rigid and accurately ordered automatic driving platoon but may behave well in a normal driving pattern. On the other hand, a malicious intelligent vehicle should be forbidden in any situation. To highlight our motivations, we present the following two illustrative scenarios. Scenario 1: in cluster/platoon-based driving, IVs frequently communicate with each other to maintain lateral/longitudinal control. Vehicles with incapability or malicious intentions may join the cluster or platoon. Their malicious or false behaviors are very likely to tamper with or disturb collaboration. In this safety-oriented case, local vehicles should be able to maintain robust intracluster trust to wipe out unqualified vehicles. Scenario 2: in the efficiency-oriented case, where IVs need to collaborate over a broad area, they exchange messages to report traffic conditions, request parking slot information through VANET, and even negotiate routes to prevent traffic congestion. None of these three functions would be efficient without trustworthy collaboration. The above two scenarios suggest that a trust management scheme with anomaly detection is urgently needed.
Solutions for IoV anomaly detection still face many challenges raised by mobility, including dynamic vehicle groups, real-time constraints, and the intrinsically dynamic property of trust itself, which make single or static trust measurements ineffective. Considering the mobile nature of vehicles, the topology changes so rapidly that preestablished trust relationships are likely to become invalid. As a result, two nodes need to build up trust in a timely fashion. Moreover, trust is not constant but changes along with different driving situations. An accurate trust measure should capture the context of interaction and historical reputation. For example, a car with a good reputation may not be trustworthy when it is speeding. A trust management system therefore calls for the ability to synthesize multiple resources, either from roads or from the cloud. The essence of the Internet of Vehicles is to obtain more safety and efficiency by integrating multiple infrastructures, networks, and vehicle intelligence. In accordance with this idea, we propose a hybrid approach called Cluster-Based Anomaly Detection (CAD). Figure 1 describes the framework of CAD. CAD is composed of two main components, namely, the cluster-based trust component and the central reputation component. The cluster-based trust component builds time-sensitive trust to reflect the dynamic situation, while the central reputation component evaluates a vehicle's trust from a long-term perspective. These two components interact through evidence uploading and reputation provision. The cluster-based trust component has two major functions, namely, trust-based AP clustering and mutual supervision, to maintain the robustness of dynamic trust.
The major contribution of this paper lies in the following two aspects: (i) we identify cluster-based trust and reputation as the two major components of anomaly detection; (ii) we adopt a sparse RSU-enhanced reputation provision scheme, in which a Central Arbitrator (CA) collects evidence from sparse RSUs and a reputation system is established to evaluate global and historical reputation from the accumulated data.

Related Work
Trust issues stem from the security and social psychology fields and have been developed theoretically in organization management. More recently, as network technology constantly changes the way people interact, formerly stable and well-structured organizations are likely to transform into another paradigm featured by agile structures and ad hoc groups.
IoV, for example, is a typical agile structure that calls for collaboration among agents. Ramchurn et al. [9] pointed out that "trust pervades multiagent interaction at all levels," generally including (1) individual-level trust, whereby an agent has some beliefs about the honesty or reciprocative nature of its interaction partners, and (2) system-level trust, whereby the agents in the system are forced to be trustworthy by the rules of encounter that regulate the system. Although various schemes have been investigated, the authors noticed that trust at these two levels has been dealt with separately in most cases. This insight inspired us to develop a hybrid framework that takes both levels of trust into consideration. Most existing systems in VANETs use a distributed approach. Raya et al. [10] argue that trust should be attributed to the data per se in ephemeral ad hoc networks and propose a framework for data-centric trust establishment. Their scheme shows high resilience to attackers and can converge to a stable, correct decision. However, Raya's trust mechanism may contribute little to reducing attackers at the system level; since there is no punishment for cheating, attackers are seldom suppressed. Chen et al. [11] present a decentralized framework combining message propagation and trust evaluation in VANET. Specifically, trust measurement consists of role-based trust and experience-based trust. It is a good attempt to synthesize static a priori trust (role-based trust) with dynamic situational trust (experience-based trust). Nonetheless, they did not take historical reputation into consideration. Rostamzadeh et al. [12] focus on trustworthy information dissemination by assigning a trust value to each road segment. The dissemination task is to find a path that consists of a series of safe road segments. Their work features good scalability and thus has potential in many applications. DTM2 [13] is a distributed trust model inspired by the Job Market model. With the help of third-party hardware, the system can incentivize good behaviors and punish malicious behaviors by changing each vehicle's signal value. To conclude, the decentralized approach is developed under the assumption that there is no centralized third party to evaluate and maintain trust values.
Recently, RSU deployment has been promoted by intelligent transport system groups. With the help of RSUs, centralized trust management is no longer an unrealistic goal. A centralized approach is able to evaluate trust values from a global and historical view. Therefore, many works have preliminarily exhibited a centralized trend as a complement to distributed systems. Wang et al. [14] proposed a vertical handoff method, which improves the availability of network access. Their method therefore contributes to building centralized trust management systems. Machado and Venkatasubramanian [15] aim to aggregate the advantages of both centralized and distributed trust computation. The authors categorize the messages exchanged in VANET into alerts and reports; alerts are time-critical responses to an incident, while reports are evidence used to evaluate the quality of alerts. RSUs play the role of a Central Authority (CA) that keeps track of messages and accordingly maintains a global reputation for each vehicle. Their central grading system can efficiently distinguish dishonest nodes in real-life scenarios. Huang et al. [16] utilize identity-based cryptography to integrate entity-based trust and social trust in a proxy server. The email interactions among individuals are mined to obtain social trust. Trust measurements must be requested and acquired from this server. One disadvantage of this system, as the authors mention, is that the service may experience long delays due to network latency and the management entities that mine the email source. Such latency problems trouble centralized reputation systems. The authors then propose a situation-aware trust architecture for VANETs [17]. A predictive trust setup system is designed to reduce on-the-scene trust setup latency. They also envision that roadside infrastructure deserves more attention and research.

Trust Establishment by Peer Detecting
In this section, we illustrate the establishment of cluster-based trust. To establish trust among IVs, the key is to generate a trustworthy CH. A cluster and its head are generated after several rounds of iteration. The generated CH is an authoritative node managing intracluster trust. One clustering algorithm that works by passing messages between nodes is Affinity Propagation (AP). To start, measures of similarity are calculated for each pair; real-valued messages are then exchanged between pairs of nodes until high-quality exemplars and corresponding clusters gradually emerge. The schematic is shown in Figure 2.
AP works by passing messages between nodes, which makes it naturally more suitable for trust establishment than other clustering algorithms because of the following characteristics: (1) transitivity: in trust theory, if A has no direct trust relation with B, it can still build an indirect trust relation with B via C; likewise, in our AP, a vehicle makes a judgement about another vehicle with the help of indirect judgements from other nodes; the primitive AP clustering algorithm therefore reflects transitivity well, making it fit for trust establishment; (2) asymmetry: trust is not symmetric; that A trusts B does not guarantee that B trusts A, and AP has the ability to cluster with asymmetric "distance measurements"; (3) distributed manner: AP runs in a completely distributed manner, increasing robustness to attacks; (4) moreover, it achieves a much lower average squared error than normal clustering methods [18].
The AP algorithm works iteratively. The similarity s(i, k) measures the "distance" between a pair of nodes i and k. The responsibility r(i, k), sent from i to k, tells how eager i is for k to be its CH. The availability a(i, k), sent from k to i, tells how eager k is to be i's CH. The self-responsibility r(k, k) and self-availability a(k, k) both represent accumulated evidence reflecting whether k is suitable to be a CH. The updating process for responsibility and availability in every iteration is illustrated below. More detailed treatments are given in [18, 19], which have laid the foundation of our work.
Primitive AP Iteration Process. In each iteration, the standard updates [18] are

r(i, k) ← s(i, k) − max_{k′ ≠ k} {a(i, k′) + s(i, k′)},
a(i, k) ← min{0, r(k, k) + Σ_{i′ ∉ {i, k}} max(0, r(i′, k))},
a(k, k) ← Σ_{i′ ≠ k} max(0, r(i′, k)).

To make the real-valued messages converge, messages are damped by a factor λ: m_new = λ · m_old + (1 − λ) · m_computed, where λ is a weighting factor ranging from 0 to 1. When the messages have converged, the CH for node i is chosen as argmax_k {a(i, k) + r(i, k)}.

3.1. UntrustDegree. Our proposed scheme uses the fundamental idea of Affinity Propagation from a trust perspective.
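For illustration, the primitive iteration above can be sketched as a minimal NumPy implementation. This is our own sketch of the standard AP updates, not the scheme's exact code; function and variable names are ours.

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    """Minimal Affinity Propagation sketch.
    S: (n, n) similarity matrix (in our scheme, negated UntrustDegrees,
    with preferences on the diagonal). Returns each node's exemplar (CH)."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibility r(i, k)
    A = np.zeros((n, n))  # availability a(i, k)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first_max = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second_max = AS.max(axis=1)
        R_new = S - first_max[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second_max
        R = damping * R + (1 - damping) * R_new  # damped update
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = np.diag(A_new).copy()          # a(k,k) has no min(0, .) clamp
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    return np.argmax(A + R, axis=1)  # exemplar (CH) chosen per node
```

Nodes sharing the same exemplar index form one cluster, with the exemplar acting as CH.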
In general, AP can detect anomalous vehicles in a group. We design an UntrustDegree function u(i, j) as the "distance measurement" for the AP algorithm to find "the most trustworthy node," that is, the node which minimizes the overall UntrustDegree. The function u(i, j) is automatically calculated by each IV. An IV can observe other vehicles' behaviors and assign an UntrustDegree according to its knowledge. Identity is one item from a set of real vehicle identities, each of which can be represented by a unique digital number.
The environment vector is predefined with basic values that give the environmental context (e.g., the weather).
The behavior vector records the basic actions that a vehicle has performed recently. With the help of behavior detection technologies [20] or interactive gaming [21], we reasonably assume that IVs are intelligent enough to evaluate each other. The value u(i, j) ∈ [0, 1] is primarily positive but is negated, namely, −u(i, j) ∈ [−1, 0], to fit the AP algorithm.
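As an illustration only, an UntrustDegree of this shape could be computed as below; the weighting of the environment context and the behavior scores is a hypothetical choice of ours, since the paper does not fix a concrete formula.

```python
def untrust_degree(env, behaviors):
    """Hypothetical UntrustDegree u(i, j) in [0, 1].
    env: dict of contextual severities in [0, 1] (e.g. 'weather').
    behaviors: recently observed action scores in [0, 1],
               where 1 means a fully anomalous action."""
    if not behaviors:
        return 0.0
    base = sum(behaviors) / len(behaviors)         # average observed anomaly
    context = 1.0 + 0.5 * env.get("weather", 0.0)  # harsher context raises distrust
    return min(1.0, base * context)

def ap_similarity(env, behaviors):
    # AP maximizes similarity, so the UntrustDegree is negated: -u in [-1, 0].
    return -untrust_degree(env, behaviors)
```

The negated value is what enters the similarity matrix of the AP iteration.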
The self-UntrustDegree u(k, k) is initialized to the same value for every vehicle. It should be noted that a higher self-trust degree makes a node more likely to become the cluster head.
To ensure a stable and honest supervisor, we apply Algorithm 1. From this algorithm, we see that a vehicle may supervise another vehicle only when (1) it does not tend to believe that vehicle and (2) the two vehicles have small relative mobility, that is, they are stable driving companions. This mechanism builds a mutual supervision relationship between two adversary nodes so that supervisor and supervisee are unlikely to collude. More importantly, it can identify cheating nodes in the message passing process. The input of Algorithm 1 is the IVs' state tuples (position, speed). For a supervisor, running this algorithm yields a supervisee. For each neighbor in DSRC range, the supervisor calculates a mobility metric (lines (1)-(2)); the smaller the metric, the more similar the two motions. Thus, a small metric indicates a stable driving companion. If a neighbor has no supervisor and its similarity is below a threshold (indicating that the supervisor does not tend to trust it), the supervisor adds the neighbor to its Supervisee Candidate List (lines (3)-(4)). After that, the supervisor chooses the most stable candidate (with the smallest mobility metric) to be the supervisee (lines (6)-(8)). Finally, a (supervisor, supervisee) pair is returned (line (9)).
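The matching procedure above can be sketched as follows. This is a reading of Algorithm 1 under stated assumptions: the exact mobility metric and threshold are not fully specified in the text, so the Euclidean-distance-plus-speed-difference metric and the field names here are hypothetical.

```python
import math

def match_supervisee(me, neighbors, trust_threshold=-0.3):
    """Sketch of Algorithm 1 (Supervisor Matching).
    me: dict with 'id', 'pos' (x, y), 'speed'.
    neighbors: dicts with 'id', 'pos', 'speed', 'similarity'
               (the negated UntrustDegree, in [-1, 0]) and 'has_supervisor'."""
    candidates = []
    for nb in neighbors:  # every vehicle in DSRC range
        # Mobility metric: small value => stable driving companion.
        mobility = math.dist(me["pos"], nb["pos"]) + abs(me["speed"] - nb["speed"])
        # Candidate only if unsupervised and not trusted by me.
        if not nb["has_supervisor"] and nb["similarity"] <= trust_threshold:
            candidates.append((mobility, nb["id"]))
    if not candidates:
        return None
    _, supervisee = min(candidates)  # most stable candidate wins
    return (me["id"], supervisee)
```

A distrusted but stable companion is thus paired with the supervisor, keeping the two adversarial and unlikely to collude.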

Generating CH by Message Passing.
We use a distributed algorithm to reach a consensus among a large number of opinions. Each vehicle maintains a neighbor list. As Table 1 shows, the list contains an entry for each neighbor. Additionally, each vehicle also maintains a supervision field for its supervisee.
Generating a CH needs several iterations, which are periodically triggered by time. Besides, broadcasting and supervising also need a synchronous clock. Hello beacons are broadcast and received to maintain local awareness.
(2) Each receiving neighbor calculates the negated UntrustDegree −u(i, j) if the two vehicles are traveling in the same direction.
(3) The receiver adds/updates the sender's entry in its neighbor list, including the sender's identity, position, speed, and UntrustDegree. Availability and responsibility messages are broadcast periodically; we define this period as T_b. Each vehicle calculates a(i, j) and r(i, j) for each neighbor. These values are damped with the previous values stored in the neighbor list. The vehicle then broadcasts a(i, j) and r(i, j) for all its neighbors.
According to the mutual supervisor model, the process of calculating a(i, j) and r(i, j) should be supervised. Each IV automatically chooses a supervisee by the Supervisor Matching algorithm. The supervisor checks the supervisee's calculation results and releases an alert when the supervisee's message is suspicious. The process enhanced by the mutual supervisor model is illustrated below. The iteration cycle must be small enough to allow the algorithm to converge within one period T_b. We have injected a supervision mechanism into the clustering process, so any node that broadcasts false availability and responsibility will very likely be discovered. The punishment for malicious nodes is twofold: first, its messages are ignored by neighbors upon an alert; second, the malicious behavior is reported to the CA.
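The supervision check can be sketched as below: the supervisor, hearing the same broadcasts, repeats the supervisee's availability/responsibility calculation and alerts on a mismatch. The tolerance and data layout are hypothetical details of ours.

```python
def check_broadcast(recomputed, broadcast, tol=1e-3):
    """Hypothetical mutual-supervision check.
    recomputed: supervisor's own recomputation, neighbor id -> (r, a).
    broadcast: the (r, a) values the supervisee actually broadcast.
    Returns ('OK', None) or ('ALERT', offending neighbor id)."""
    for nid, (r_true, a_true) in recomputed.items():
        pair = broadcast.get(nid)
        if pair is None or abs(pair[0] - r_true) > tol or abs(pair[1] - a_true) > tol:
            # Neighbors will ignore the supervisee's messages this period,
            # and the behavior is reported to the CA.
            return ("ALERT", nid)
    return ("OK", None)
```

Because both parties receive the same inputs over the shared wireless channel, any forged r(i, k) or a(i, k) diverges from the supervisor's recomputation and triggers the alert.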

Supervising and Message Passing
In any period, there is a CH generated during the previous period. This CH claims its role and broadcasts a Final Message, which represents the CH's final evaluation of each cluster member. The Final Message is trustworthy since it is sent from the CH, which has been elected as "the most trustworthy node" by all group members. Built upon the Final Message, intracluster trust management is sufficiently reliable to support IV collaboration.

Degrading Anomaly by Evidence Evaluation
Reputation-based methods have been widely used in web services [22, 23] and cloud computing [24] to enhance system reliability and robustness. We believe this method can also improve system performance in the Internet of Vehicles. In this scenario, IVs observe and evaluate the quality of each other's behavior. Moreover, they form evidence and report it to the CA. The CA is supported by strong storage and computational resources and is thus capable of computing reputation from a global view. A global reputation is valuable for on-the-road IVs choosing potential collaborators. More importantly, reputation can be increased or degraded, as a system-level enforcement, to incentivize good behaviors as well as to punish bad ones.
IVs leverage a "store-upload" mechanism to deliver evidence to the CA. Since RSUs are sparsely deployed, each IV first stores evidence in its local storage and then uploads it when moving into an RSU's service range. Evidence evaluation lies at the core of reputation. The CA is able to reach a conclusion on a certain behavior by evaluating and merging different pieces of evidence from different individuals. Note that not all evidence is consistent, and not all evidence is trustworthy. For instance, in order to disturb the reputation system, a malicious node may report false evidence.
To mathematically model evidence evaluation, assume the CA has to decide among several basic behaviors b ∈ Ω, based on n pieces of evidence about a vehicle uploaded from n different IVs. Let B denote the final judgement on the behavior type. The following three methods can be leveraged to reach a consensus evaluation, with the ability to filter false evidence.
(A) Majority Voting. The final evaluation accords with the majority: given the counts of each type of observed behavior, the type with the largest count is chosen. (B) Weighted Voting. For each behavior, this method sums up the values of all votes supporting that behavior, with each vote weighed by the reporter's trust level; the type with the highest total is the final evaluation. (C) Bayesian Inference. Among data fusion techniques, Bayesian Inference (BI) is the most popular one used for trust building and management. To use BI, the a priori probability of each behavior is first assigned. A posterior probability of each behavior is then calculated given the set of evidence, and the final consensus is the behavior type with the maximum posterior probability. Besides evidence evaluation, the reputation evolution rule is another critical issue: an effective reputation system requires appropriate reputation evolving rules. We discuss these rules in Section 5.1.
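The three fusion rules can be sketched compactly as follows; this is a generic illustration of the named techniques, not the CA's exact implementation, and the label/likelihood structures are assumptions of ours.

```python
from collections import Counter

def majority_vote(evidences):
    """(A) evidences: list of reported behavior labels; majority wins."""
    return Counter(evidences).most_common(1)[0][0]

def weighted_vote(evidences, weights):
    """(B) Each vote weighed by the reporter's trust level."""
    totals = {}
    for label, w in zip(evidences, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

def bayesian_inference(prior, likelihood, evidences):
    """(C) prior: P(b) per behavior; likelihood[b][report] = P(report | b).
    Returns the maximum-a-posteriori behavior."""
    posterior = {}
    for b, p in prior.items():
        for e in evidences:
            p *= likelihood[b].get(e, 1e-9)  # unseen report gets tiny mass
        posterior[b] = p
    return max(posterior, key=posterior.get)
```

Note how (B) and (C) discount low-trust or improbable reports, which is what lets the CA filter false evidence from malicious reporters.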

Performance and Analysis
To evaluate the performance of our scheme, we ran an extensive simulation in TransModeler with a real map and high-fidelity data. We use a map of the urban area of San Antonio, USA. We feed real macroscopic traffic data, measured on critical roads and sections, to reconstruct a realistic traffic scenario. We believe that macroscopic data can reflect traffic dynamics to a high extent. We do not simulate the wireless medium in this case since it is orthogonal to our evaluation. All simulations were performed with approximately 400 vehicles on a 6-mile expressway. Five RSUs are sparsely deployed along the expressway, as shown in Figure 4. The DSRC range is set at 300 m. Each simulation ran for 600 s; however, only the last 400 s were used for performance metric calculations.
As noted earlier, an IV is observed and evaluated by neighboring IVs. We use an example with three Basic Behaviors, listed in Table 2. Each behavior causes a different interactive trust, and, according to the reputation evolution rules, each behavior deserves a corresponding change in reputation.
To depict complex malicious/inappropriate behaviors, which are often mixtures of different basic behaviors, we simulate several behavior patterns, listed in Table 3. An anomaly node produces one behavior in every period. These patterns are simplified to make simulation feasible; we believe they can still reflect the validity of our designed scheme.

The Effect of AP Algorithm.
Ideally, AP clustering would assign a CH to every on-the-road vehicle. However, a small portion of vehicles can be left alone when the iterations finish. There are two major reasons for these lone nodes: (1) the node cannot find a converged CH candidate in its neighborhood, or (2) the node itself is a CH but is the only member of its cluster. Besides the lone nodes, the remaining nodes form normal clusters. For the anomaly-node-free simulation, several results are shown in Table 2. The Covered Ratio is a parameter describing how much of the whole set of participants the clustering results cover, that is, the fraction of nodes belonging to normal clusters. For the anomaly simulation, several results are shown in Table 3. The simulations were run several times, so Figures 5, 6, 7, and 8 are averaged. A trade-off between Covered Ratio and Cluster Member Number can be found in Table 4: the higher the Covered Ratio, the lower the Cluster Member Number.
According to [18], the Damping Factor is critical for convergence. Different Damping Factors result in different cluster outcomes. In general, a bigger Damping Factor leads to a relatively higher Covered Ratio and a lower Cluster Member Number. We recommend setting the Damping Factor in [0.6, 0.8] so that the algorithm tends to produce an approximate but stable solution.
Another important parameter, not mentioned in [18], is the Iteration Cycle. Mathematically, the convergence of AP clustering is influenced only by the Damping Factor. We use two metrics to measure the effectiveness of the modified AP algorithm.
(1) Direct Influence. We define a Failure as an anomaly node being elected CH; the Failure Rate, also called the unsuccessful anomaly detection rate, measures the direct influence of anomaly nodes. (2) Indirect Influence. We define the Risk Degree to capture how much potential influence an anomaly node has when it is inside a cluster. If an anomaly node becomes CH, its UntrustDegree is 0; otherwise, its UntrustDegree is taken from the CH's Final Message, which expresses the CH's opinion of each node. The Risk Degree thus captures the indirect influence of an unqualified node according to its role (CH or member) and UntrustDegree. As a converged model, CAP combines historical reputation with real-time cluster-based trust. The CA collects uploaded evidence from on-road vehicles and uses three techniques to fuse it: (1) Majority Voting, (2) Weighted Voting, and (3) Bayesian Inference. For Bayesian Inference, the prior distribution of behaviors and the observed results are defined in Table 5.
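The two metrics can be sketched as follows. The paper's formula (13) is not reproduced here, so the Risk Degree below is a hypothetical form of ours that matches the stated qualitative behavior: risk grows with cluster size and shrinks with the distrust the CH's Final Message assigns to the anomaly node (an anomaly elected CH, with UntrustDegree 0, is the worst case).

```python
def failure_rate(num_failures, num_runs):
    """Direct influence: fraction of runs in which an anomaly
    node was elected CH (unsuccessful detection rate)."""
    return num_failures / num_runs

def risk_degree(untrust, cluster_size):
    """Hypothetical indirect-influence sketch.
    untrust: UntrustDegree of the anomaly node in [0, 1]
             (0 if the anomaly node itself became CH).
    cluster_size: number of members in its cluster."""
    return cluster_size * (1.0 - untrust)
```

Under this form, one poorly distrusted anomaly node inside a big cluster yields a big Risk Degree, matching the discussion of the Anomaly Percentage-Risk Degree curves.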
We assume that anomaly nodes use a random reporting strategy, meaning they generate evidence randomly regardless of what other nodes have really done. Normal nodes always report true evidence. Figure 5 describes the effects of the different evidence merging techniques. In this simulation, the three techniques are almost equally effective. However, MV and WV are more suitable for data merging since they have less computational overhead. Figure 6 shows how four anomaly nodes' reputations evolve in the system. Anomaly nodes are distinguished and punished by the CA.
In the iteration process, IVs whose self-similarity values are larger are more likely to be chosen as CH; these values are the "preferences." In PAP/TAP/TSAP, each vehicle's preference is set as the median of the similarities. However, in CAP, where historical reputation is considered by the algorithm, a vehicle's preference is additionally scaled by a reputation factor h in [1, 2]: when the vehicle's reputation is low, h is big, and the preference therefore becomes small (the preference is a negative real number), indicating that the vehicle is not suitable to be CH.
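The reputation-scaled preference can be sketched as below. The mapping from reputation to h is a hypothetical choice of ours (any monotone map onto [1, 2] with low reputation giving h near 2 fits the description).

```python
def reputation_preference(similarities, reputation, rep_max=1.0):
    """Hypothetical CAP preference.
    similarities: negated UntrustDegrees (negative reals) seen by the node.
    reputation: the node's historical reputation in [0, rep_max]."""
    s = sorted(similarities)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    h = 2.0 - reputation / rep_max  # low reputation -> h near 2
    return median * h               # more negative => less likely to be CH
```

A fully reputable node keeps the plain median preference (h = 1), as in PAP/TAP/TSAP, while a disreputable node's preference is doubled in magnitude and it is effectively barred from the CH role.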
Figure 7 shows the comparison of the four models. We use the unqualified node percentage as the variable, simulating with percentages ranging over [0, 0.25], because a higher percentage is not realistic.
Generally, when the anomaly node percentage is low (≤5%), the Failure Rate is 0%. As the percentage goes up, the Failure Rate also rises. TAP is a model with tampering/disturbing and no supervision mechanism, so it performs worse than PAP (no tampering/disturbing) and TSAP (tampering/disturbing with the supervision model). In contrast, CAP is a converged model (tampering/disturbing, supervision model, and reputation) with a strong defense against anomaly nodes, so it shows the highest robustness among the four models. Furthermore, every model limits the failure rate below 1% even when the anomaly node percentage is up to 25%.
The Risk Degree captures how much potential influence an anomaly node has when it is inside a cluster. Figure 8 shows the Anomaly Percentage-Risk Degree curves for the four models. The Risk Degree is low at first when the Anomaly Node Percentage is low. However, it suddenly peaks when the percentage slightly increases. Finally, it declines steadily with increasing percentage. The explanation for this curve is as follows: (1) when the percentage is very low (≤1%), tampering/disturbing is rare, and anomaly nodes are therefore easily distinguished by normal nodes. As a result, anomaly nodes are very likely to be left alone; that is, they are excluded from big clusters by the AP algorithm, so the overall Risk Degree is low. (2) When the percentage goes higher but is still modest (≤5%), it still indicates a "safe environment," and IVs tend to form "big clusters." However, with a higher anomaly node percentage, more anomaly nodes have chances to join big clusters through more tampering/disturbing. According to formula (13), even one anomaly node in a big cluster causes a big Risk Degree. (3) When the percentage increases beyond 5%, our algorithms tend to become conservative and form "small clusters" with fewer members. Fewer members yield a lower Risk Degree. According to Figure 8, CAP limits the Risk Degree under 4, demonstrating that our trust management is effective for risk control.

Conclusion and Future Work
Our system aims to build a trustworthy platform to detect abnormal vehicles. To this end, we modified Affinity Propagation to elect the most trustworthy node among vehicles, called the cluster head. The CH maintains trust management during a period until a new CH is elected. We also considered that AP is executed in a distributed manner and is thus easily tampered with by malicious nodes, so we presented a mutual supervision model to tackle tampering behaviors. Lastly, we blended another component, the CA, into our system. The CA consists of servers and sparse RSUs and is able to provide historical reputation for better decision-making. Overall, this trust management system can detect and filter anomaly nodes.
In the future, great efforts are needed on both the in-vehicle system and the RSUs to strengthen our secure system. These efforts include deploying mobile and local CAs using cloud computing techniques and improving the intelligence of

Figure 2: Four steps of Peer Detecting-based trust establishment.

Figure 5: Effects of three merging techniques.
We proposed a supervisor model to alleviate cheating/mistakes in this process. The core of the mutual supervisor model is to match each vehicle with a supervisor. Among the moving companions of one vehicle, a supervisor is another IV which can receive almost the same broadcast information by sharing the same wireless channel. A supervisor therefore listens to the supervisee-related messages to validate availability/responsibility by repeating the calculation of a suspicious supervisee; the result calculated by the supervisee itself is then compared with the supervisor's recomputed result. In our final model (discussed in Section 5.2), the valid self-trust is set at a value which balances the IV's own evaluation and historical reputation. Historical reputation can only be legally announced by the CA. When a group of IVs passes by an RSU, the RSU proactively downloads/broadcasts reputations to the IVs.

3.2. Mutual Supervisor Model. Each vehicle receives responsibility r(i, k) from its neighborhood. It also broadcasts availability a(i, k) to the neighborhood to claim how suitable it is to become a CH. However, a malicious/incapable node can cheat or err in this message passing process by broadcasting a false a(i, k). For example, if a node broadcasts a very high a(i, k) to other nodes, it is more likely to be elected CH according to the AP algorithm. We need a mechanism to prevent nodes from broadcasting false availability or responsibility.

Table 1: Neighbor and supervision fields. For each neighbor, the list stores its position, speed, UntrustDegree, the last availability and responsibility received from and sent to it, a cluster head convergence flag, and the supervisee field.

Algorithm 1 (Supervisor Matching).
Input: a supervisor, nearby nodes' states (position, speed)
Output: pair (supervisor, supervisee)
(1) For each node in DSRC (Dedicated Short Range Communication) range
(2)   compute the mobility metric from the position and speed differences
(3)   If the node has no supervisor and its similarity is below the threshold
(4)     add the node to the Supervisee Candidate List

Passing Process. For every period, each vehicle does the following: if it hears an alert about a node, it ignores that node's messages in this period. The supervisor supervises its supervisee by recomputing the supervisee's availability and responsibility values.

Table 4: The results of primary AP clustering.

The Iteration Cycle should therefore be regarded as a critical parameter. If it is too short, the clustering process will not be able to produce enough CHs to cover most nodes. On the other hand, if the cycle is too long, over-iteration will generate too many CHs.

Table 5: Prior distribution of behaviors and observed behaviors.