Rational Uniform Consensus with General Omission Failures

Generally, system failures such as crash failures and Byzantine failures are regarded as the main causes of inconsistency in distributed consensus and have been studied extensively. In fact, strategic manipulation by rational agents cannot be ignored when reaching consensus in a distributed system. In this paper, we extend the game-theoretic analysis of consensus and design a rational uniform consensus algorithm under general omission failures, assuming that processes are controlled by rational agents that prefer consensus. Unlike a crashing agent, an agent with omission failures may crash or omit to send or receive messages when it should, which makes faulty agents difficult to detect. By combining the possible failures of the agents at both ends of a link, we convert the omission failure model into a link state model that makes fault detection possible. By analyzing the message passing mechanism in a distributed system with n agents, among which t agents may commit omission failures, we establish an upper bound on the message passing time needed to reach consensus on a link state among nonfaulty agents, together with a message chain mechanism for validating messages. We then prove that our rational uniform consensus is a Nash equilibrium when n > 2t + 1 and failure patterns and initial preferences are blind (an assumption of randomness). Thus, agents have no motivation to deviate from the consensus, which provides interpretable stability for the algorithm in multiagent systems such as distributed energy systems. Our research strengthens the reliability of consensus under omission failures from the perspective of game theory.


Introduction
How to reach consensus despite failures is a fundamental problem in distributed computing. In consensus, each process proposes an initial value and then independently executes the same consensus algorithm. Eventually, all processes must agree on a single decision chosen from the set of initial values, even in the presence of system failures such as crash failures, omission failures, and Byzantine failures [1]. In the crash model, a process fails by halting and executing no further steps of the protocol. In the omission model, a process fails by omitting to send or receive messages. In the Byzantine model, a process fails by exhibiting arbitrary behavior. Extensive studies have been conducted on fault-tolerant consensus.
Moreover, two kinds of consensus problems are usually distinguished. One is the non-uniform version (usually just called "consensus"), where no two nonfaulty processes decide differently. The other is the uniform version (called "uniform consensus"), where no two processes (whether correct or not) decide on different values. We believe that consensus protocols cannot simply substitute for uniform consensus protocols, because the non-uniform agreement condition is inadequate for many applications [2]. As shown in [3], uniform consensus is harder than consensus because one additional round is needed to decide. Also, uniform consensus is meaningless with Byzantine failures.
Game theory provides interpretable equilibria by analyzing the game among intelligent players. We argue that its incentive and punishment mechanisms can be effectively applied in distributed systems. Recently, there has been increasing interest in distributed game theory, especially in fields such as peer-to-peer networks, biological systems, cryptocurrencies, and e-commerce, in which processes are selfish and are called rational agents (or intelligent agents). Combining distributed computing with algorithmic game theory is an interesting research area that enriches the theory of fault-tolerant distributed computing. In this framework, agents may deviate from protocols with arbitrary behaviors in order to increase their own profits according to their utility functions, which can be regarded as a form of general artificial intelligence. In [4], this kind of deviation is referred to as strategic manipulation of a distributed protocol. Such research is necessary in practical scenarios in which each process has selfish incentives. We also argue that the fairness of algorithms can be promoted by game theory. Clearly, the goal of distributed computing in the context of game theory is to design algorithms that reach a Nash equilibrium, in which no agent has an incentive to deviate from the algorithm. This framework was perhaps first investigated and formalized in the context of secret sharing and multiparty computation [5][6][7][8]. More recently, fundamental tasks in distributed computing such as leader election and consensus have been studied from the perspective of game theory [9][10][11][12][13][14][15][16][17].
Following this new line of research, we combine fault-tolerant consensus with rational agents and study the rational uniform consensus problem in a synchronous round-based system, where every agent has its own preference over consensus decisions. Thus, an algorithm for rational uniform consensus needs to be constructed such that, for each agent, its utility from following the consensus algorithm is no less than its utility from deviating; this yields a Nash equilibrium. It is easy to see that standard consensus algorithms cannot reach equilibrium, as they can be manipulated by even a single rational agent. Several studies on rational consensus have been conducted [4, 12-16, 18, 19], but none of them consider the uniform property. Moreover, most studies on rational consensus assume only crash failures or no system failures at all. We argue that omission failures, which are more subtle and complicated than crash failures, cannot be ignored when reaching uniform consensus. In this paper, we consider a distributed system with n agents, among which t agents may experience omission failures. In this setting, we extend the game-theoretic analysis of consensus. Specifically, our contributions in this paper include the following: (i) We utilize a punishment mechanism to convert the omission failure model into a link state model, which makes fault detection more direct. In the link state model, faulty links never recover, whether or not omission failures recover. This provides a way to simplify the problem of fault recovery in distributed computing. (ii) We give an almost complete analysis of the message passing mechanism in a distributed system with general omission failures. We establish the upper bound x + 1 on the message passing time needed to reach consensus on a link state, which determines the round complexity of our algorithm. We then introduce a message chain mechanism for validating messages.
(iii) We present an algorithm for rational uniform consensus with agent omission failures for any n > 2t + 1, together with a complete formal proof of correctness showing that the consensus is a Nash equilibrium.
The rest of the paper is organized as follows. Section 2 introduces the related work. Section 3 describes the model that we are working on. Section 4 presents the algorithm of rational uniform consensus for achieving Nash equilibrium and proves it correct. Section 5 concludes the paper.

Related Work
From the viewpoint of how agents are modeled, the research on distributed game theory in the literature may be divided into three categories. In the first category, all agents in the distributed system are rational agents that prefer consensus, and some of them may randomly fail due to system failures. Bei et al. [4] studied distributed consensus tolerating both unexpected crash failures and strategic manipulation by rational agents. However, the correctness of their protocols requires the strong condition that agreement be achieved even if agents deviate. Afek et al. [18] proposed two basic rational building blocks for distributed systems and used them to construct several fundamental distributed algorithms. However, their protocol is not robust against even crash failures. Halpern and Vilaça [12] presented a rational fair consensus with rational agents and crash failures, using a failure pattern to describe the random crashes of agents. Clementi et al. [13] studied rational consensus with crash failures in the synchronous gossip communication model. The protocols of Halpern et al. and Clementi et al. do not tolerate omission failures, which we consider a necessary concern. Harel et al. [15] studied equilibria of consensus resilient to coalitions of n − 1 and n − 2 agents and gave a separation between binary and multi-valued consensus; however, they assumed that there are no faulty agents.
The second category involves a rational adversary. Groce et al. [19] studied Byzantine agreement with a rational adversary. Unlike the first category, they assumed two kinds of processes: honest processes, which follow the protocol without question, and a rational adversary, which prefers disagreement. Amoussou-Guenou et al. [14] studied Byzantine fault-tolerant consensus from a game-theoretic viewpoint, modeling processes as rational or Byzantine players and consensus as a committee coordination game. In [14], the Byzantine players have utility functions and strategies and can be regarded as rational adversaries similar to [19]. In our opinion, this framework limits the scope of the Byzantine problem.
Finally, the BAR framework (Byzantine, Altruistic, and Rational) was proposed in [20]. In [16], Ranchal-Pedrosa and Gramoli studied the gap between rational agreements robust against Byzantine failures and those robust against crash failures. Their model consists of four types of players: correct, rational, crashed, and Byzantine, similar to the BAR model. They assume that rational players prefer causing disagreement to satisfying agreement, which we view as somewhat limited, since treating rational players only as rational adversaries is one of the open questions in the Byzantine model. Moreover, no protocols are proposed in [16].

Model
We consider a synchronous system with n agents, each of which has a unique and commonly known identity in N = {1, . . . , n}. Execution time is divided into a sequence of rounds, identified by consecutive integers starting from 1. Each round has three successive phases: a send phase, in which each agent sends messages to the other agents in the system; a receive phase, in which each agent receives the messages sent by other agents in the send phase of the same round; and a computation phase, in which each agent verifies and updates its local variables and executes local computation based on the messages sent and received in that round. We assume that every pair of agents i and j in N is connected by a reliable communication link, denoted link_ij. From the perspective of an agent i, the links in the system are of two types: direct links link_ij with j ∈ N, and indirect links link_kp where neither k nor p equals i.
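The three-phase round structure above can be sketched as a toy simulation. This is purely illustrative, not the paper's protocol: the class and function names and the min-adoption rule in the computation phase are our choices.

```python
class Agent:
    def __init__(self, i, value):
        self.id, self.value, self.inbox = i, value, []

    def send_phase(self, peers):
        # each agent sends its current value to every other agent
        return [(j, self.value) for j in peers if j != self.id]

    def compute_phase(self):
        # toy computation: adopt the smallest value seen this round
        self.value = min([self.value] + self.inbox)
        self.inbox = []

def run_round(agents):
    msgs = []
    for a in agents.values():                 # send phase
        msgs += a.send_phase(agents.keys())
    for j, v in msgs:                         # receive phase (synchronous)
        agents[j].inbox.append(v)
    for a in agents.values():                 # computation phase
        a.compute_phase()

agents = {i: Agent(i, v) for i, v in enumerate([3, 1, 2])}
run_round(agents)
# every agent now holds 1, the minimum of all initial values
```

Because delivery is synchronous, every message sent in a round is received in the same round, which is the property the protocol's per-round fault detection relies on.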

Failure Model.
We consider general omission failures [21], which occur in agents and not in communication links [22]. That is, an agent may crash or experience send omissions or receive omissions. A send omission means that the agent omits sending messages it is supposed to send; a receive omission means that the agent omits receiving messages it should receive. We assume that agent omission failures never recover. We believe our protocol also works if failures could recover, but proving this appears more complicated. It is easy to see that a crash failure can be converted to omission failures: once an agent crashes, it omits sending and receiving messages with all other agents thereafter. We assume that there are t agents undergoing general omission failures.
Based on the failure model, we divide the agents in the system into three types: good agents, which experience no omission failures; risk agents, which experience omission failures with at least one but at most t agents; and faulty agents, which experience omission failures with more than t agents. It is easy to see that t is the sum of the numbers of risk and faulty agents. We treat good agents and risk agents uniformly as nonfaulty agents. Send omissions and receive omissions are symmetric: for example, the case where i omits to send messages to j and the case where j omits to receive messages from i present the same view to i and j. Therefore, we may not be able to directly detect the states of some agents with omission failures; we call these risk agents and consider them correct. In contrast, an agent that has omission failures with more than t agents must have omission failures with at least one good agent, so we can clearly identify it as a faulty agent.
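The classification above can be sketched as a small helper. The thresholds follow the text; the function name and its input are our illustrative choices.

```python
def classify(num_omission_partners, t):
    # num_omission_partners: number of agents toward which this agent
    # exhibits omission failures; t: maximum number of faulty-prone agents
    if num_omission_partners == 0:
        return "good"
    if num_omission_partners <= t:
        return "risk"    # symmetric failures: not directly detectable
    return "faulty"      # must fail with at least one good agent
```

With t = 2, an agent failing toward 3 partners is detectably faulty, while one failing toward 2 partners remains indistinguishable from a correct agent.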
Due to the symmetry of agent omission failures, we model them as a link state problem via a punishment mechanism. Specifically, in our protocol, if an agent i receives no message from j in a round, then in all following rounds, i sends no messages to j and does not receive messages from j [23]. Thus, either a send omission or a receive omission causes the link to be interrupted. In each round, we therefore divide each link link_ij into three types: a correct link, where neither i nor j experiences omission failures with the other in this round; a faulty link, where at least one of i and j has omission failures with the other in this round; and an unknown-state link, where the state of link_ij in this round is unknown to a third agent k. It is easy to see that we can determine the type of an agent from the number of its correct direct links, which is the fault detection method in our protocol. Under this punishment mechanism, faulty links never recover, whether or not the underlying omission failures recover.
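A minimal sketch of the punishment mechanism follows, assuming a fixed agent set and a per-round `delivered` set; both names are ours, not the paper's.

```python
ALL_AGENTS = {0, 1, 2, 3}

def receive_phase(i, lost_i, delivered):
    # delivered: agents whose message actually reached i this round
    expected = ALL_AGENTS - {i} - lost_i
    for j in expected:
        if j not in delivered:
            lost_i.add(j)   # link i-j is marked faulty, permanently
    return lost_i

lost = receive_phase(0, set(), delivered={1, 2})    # agent 3 omits sending
lost = receive_phase(0, lost, delivered={1, 2, 3})  # 3 "recovers": still excluded
```

Note that in the second round agent 3's message is ignored because 3 is no longer expected, which is exactly why faulty links never recover under this mechanism.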

Consensus.
In the consensus problem, we assume that every agent i has an initial preference v_i from a fixed value set V (we follow the concept of initial preference in [12]). We are interested in uniform consensus in this paper. A protocol solving uniform consensus must satisfy the following properties [3]: termination (every nonfaulty agent eventually decides), validity (any decided value is the initial preference of some agent), and uniform agreement (no two agents, whether correct or not, decide on different values). To solve uniform consensus in the presence of agent omission failures, we assume that n > 2t + 1 and n ≥ 3.
In uniform consensus, an agent's final decision must be one of the following types: (i) ⊥, meaning that there is no consensus; ⊥ serves as a punishment for inconsistency. (ii) ‖, meaning no decision. Deciding ‖ does not conflict with validity, as ‖ cannot be proposed [23], and it does not affect the final consensus outcome. (iii) v ∈ V, which satisfies the validity property: it must be the initial preference of some agent.

Rational Agent.
We consider distributed processes that act as rational agents in the sense of game theory. Each agent i has a utility function u_i. We assume that agents have solution preference [18]: an agent's utility depends only on the consensus value achieved. Thus, for each agent i, u_i takes one of three values: (i) β_0 is i's utility if the consensus value equals i's initial preference; (ii) β_1 is i's utility if the consensus value is not equal to i's initial preference; (iii) β_2 is i's utility if there is no consensus. It is easy to see that β_0 > β_1 > β_2, and our results extend easily to an independent utility function for each agent. The strategy of an agent i is a local protocol σ_i satisfying the system constraints; i takes actions according to σ_i in each round. That is, σ_i is a function from the set of messages received to actions. Each agent chooses its protocol to maximize its expected utility. The n local protocols chosen by the agents form a strategy profile σ→ in game-theoretic terms. An equilibrium is a strategy profile in which no agent can increase its utility by deviating while the other agents keep their strategies fixed. For each agent i, if the local protocol σ_i in such an equilibrium is our consensus algorithm, then we say that the consensus is a Nash equilibrium, and a consensus reaching a Nash equilibrium is called rational consensus. Formally, if a strategy profile (or consensus) σ→ is a Nash equilibrium, then for every agent i and every strategy σ_i′ of i, it must hold that u_i(σ→) ≥ u_i(σ_i′, σ→_{−i}).
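The solution-preference utility can be sketched as follows. The numeric β values are placeholders we chose; only their ordering β_0 > β_1 > β_2 matters.

```python
BETA0, BETA1, BETA2 = 2.0, 1.0, 0.0   # placeholders; only the order matters

def utility(decision, preference):
    # decision is None when no consensus (⊥) is reached
    if decision is None:
        return BETA2
    return BETA0 if decision == preference else BETA1
```

Under this ordering, any consensus, even on another agent's preferred value, pays more than no consensus, which is what makes the ⊥ punishment effective against deviation.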

Notation Description.
The main notations used in following sections are summarized in Table 1.

A Rational Uniform Consensus in Synchronous Systems.
In order to reach a rational uniform consensus that tolerates omission failures, our protocol adopts a simple idea from an early consensus protocol [23]: an agent does not send or receive any messages to or from agents that did not send messages to it previously. We thereby convert the omission failure model, in which failures cannot be detected, into the link state model, in which they can be detected by agents in each round. However, the presence of rational agents makes the protocol more complicated: it must prevent manipulation by rational agents. Hence, the security of the algorithm is reinforced in three ways. The first is exchanging the latest network link states and message sources in each round. The update of the latest link states within each agent depends on complete message chains, and a unified decision round and decision set are obtained from the message passing mechanism in the omission failure environment. The second is using secret sharing for the agents' initial preferences [24]; it encrypts the initial preferences so as to prevent an agent from learning the values of other agents in advance. The third is signing each message with a random number and marking faulty links by faulty random numbers [4], which increases the difficulty for a rational agent to cheat. The protocol is described in Algorithm 1. In more detail, we proceed as follows.
Initially, each agent i generates a random number proposal_i, which is used later for the consensus election (line 2). Then i computes two random degree-1 polynomials q_i and b_i with q_i(0) = v_i and b_i(0) = proposal_i, respectively (line 3). They satisfy a (2, n) threshold, meaning that an agent j ≠ i can restore v_i or proposal_i only if it knows at least two pieces of q_i or b_i. Then i initializes the sets lost_i, NS_i^0, HS_i, decision_i, and consensus_i (line 4); we discuss these in more detail below. Next, i generates the faulty random number X-random_i^1[k][l_ij] for each agent k ≠ i and each direct link l_ij, where l_ij abbreviates link_ij, the link between i and j (lines 5-7; l_ij^r denotes link_ij in round r). The message random number random_i^1 for round 1 is chosen uniformly at random from {0, . . . , n − 1} (line 8). For each link, i generates n − 1 faulty random numbers and sends them to the other agents, respectively, in round 1; hence X-random_i^1 contains (n − 1)^2 faulty random numbers in total. Then i puts X-random_i^1 and random_i^1 into X-RANDOM and RANDOM, respectively (lines 9 and 10), where X-RANDOM is a function storing all faulty random numbers known to i and RANDOM stores all message random numbers. Agents can invoke these two functions to verify random numbers: X-RANDOM is invoked with an id, link, and round, and RANDOM with an id and round.
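The (2, n) threshold sharing with degree-1 polynomials can be sketched as below. The prime modulus and the function names are our assumptions; the paper does not specify the underlying field.

```python
import random

P = 2_147_483_647  # prime modulus (our assumption; the paper fixes no field)

def share(secret, n):
    # random degree-1 polynomial q(x) = secret + a*x over Z_P; agent j gets q(j)
    a = random.randrange(P)
    return {j: (secret + a * j) % P for j in range(1, n + 1)}

def reconstruct(j1, s1, j2, s2):
    # two shares determine the line; evaluate it at x = 0 (Lagrange at 0)
    a = (s2 - s1) * pow(j2 - j1, -1, P) % P
    return (s1 - a * j1) % P

shares = share(42, 5)
assert reconstruct(1, shares[1], 2, shares[2]) == 42
assert reconstruct(3, shares[3], 5, shares[5]) == 42
```

A single share reveals nothing about the secret, which is why no agent can learn another agent's initial preference before the shares are pooled in round t + 3.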
There are t + 4 rounds in total, each with three phases. In phase 1 of round r, 1 ≤ r ≤ t + 4, i sends messages only to each agent j not in lost_i, the set of agents that i has detected to have omission failures with i (line 15).
The message sent to j contains n − 1 random numbers (lines 16 and 17). If r = 1, i also sends j the piece q_i(j) of q_i and the piece b_i(j) of b_i (line 16). If r = t + 3, i also sends all the secret shares q_l(i) and b_l(i) (l ≠ j) that it has received from other agents (line 18). It is easy to see that the pieces q_l(i) and b_l(i) always come in pairs; that is, if i can restore v_j, then it can also restore proposal_j. Finally, if r = t + 4, i sends only consensus_i to j (line 19). For each agent i, consensus_i is the set of all consensus values computed and received by i. Hence, if the algorithm is executed validly, |consensus_i| must equal 1.
In phase 2 of round r, 1 ≤ r ≤ t + 4, i receives messages only from agents not in lost_i (line 22). If no message is received from an agent j, j ∉ lost_i, then i adds j to lost_i. i stores the link states NS_j^{r−1} and the message random numbers random_j^r received from each agent j ∉ lost_i in round r (line 26); correspondingly, the elements of NS^{r−1} and {random^r} are in one-to-one correspondence. In particular, if |lost_i| > t, i knows that it has become a faulty agent; it then decides ‖ directly and no longer participates in later rounds (line 31). As noted above, ‖ means that agent i does not decide in the end, which has no influence on the solution.
In phase 3 of round r ≤ t + 3, i first uses NS_i^{r−1} to update NS_i^r, which is useful for updating and verifying link states (line 35). Then i invokes the function VERIFYANDUPDATE to verify and update NS_i^r and HS_i using NS^{r−1} and {random^r} (line 36; see Algorithm 2 for details). NS_i^r is the latest state known to i of all links in the system in round r. HS_i is the historical link state covering t + 3 rounds in total. If r ≤ t + 2, i generates the message random number random_i^{r+1} and the faulty random numbers X-random_i^{r+1} for round r + 1, which will be sent to other agents in round r + 1, and puts these random numbers into X-RANDOM and RANDOM, respectively (lines 37-43). If r = t + 3, i updates HS_i one last time using NS_i^{t+3} (line 45). Specifically, if a link l_kp is faulty in round m in NS_i^{t+3}, then the state of l_kp is changed to faulty from round m through round t + 3 in HS_i; this is the last modification of HS_i. Following that, i uses HS_i to find the decision round m*, the first reliable round in HS_i, searching from round 1 to round t + 2 (lines 46-48). We follow the concept of a clean round from [12]: the number of faulty agents does not increase in a clean round, and the round preceding a clean round is a reliable round. In particular, we require that a reliable round cannot be round 0, so if the first clean round is round 1, the first reliable round is the round preceding the second clean round. In HS_i^r, if fewer than n − t − 1 links to agent j are correct, then j is a faulty agent; otherwise, j is a nonfaulty agent. We stipulate that explicitly faulty agents are excluded when computing the state of j in round r from HS_i. Then i computes the decision set D, the set of nonfaulty agents in HS_i^{m*} (line 49). Finally, i uses all the secret shares received in round t + 3 to try to restore the initial preference and proposal of each agent j ∈ D (lines 50-53).
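The search for the decision round can be sketched under an assumed encoding, where fault_counts[r − 1] is the number of agents that HS shows as faulty in round r; the function name and this encoding are ours.

```python
def decision_round(fault_counts):
    # fault_counts[r-1]: agents shown faulty in round r of HS (r = 1, 2, ...)
    prev, clean = 0, []
    for r, c in enumerate(fault_counts, start=1):
        if c <= prev:          # faulty count did not increase: clean round
            clean.append(r)
        prev = c
    for r in clean:            # decision round = round before a clean round,
        if r - 1 >= 1:         # but round 0 is excluded
            return r - 1
    return None

assert decision_round([1, 1, 2, 2]) == 1
assert decision_round([0, 1, 1]) == 2   # first clean round is 1, so skip it
```

The second example shows the special case from the text: when the first clean round is round 1, the decision round is taken before the second clean round instead.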
If i can reconstruct the values of all agents in D, then i knows all the proposals of these agents and computes the consensus proposal (lines 54-63). First, i sorts all the proposals in D and finds the set C of agents holding the second maximum proposal value (line 55). If there is only one agent in C, i puts that agent's initial preference into consensus_i (line 56). If more than one agent in C has the same second maximum proposal value pr in D (the probability of which is extremely low), then i computes S = pr mod |C| (lines 60-62) and finally puts v_j into consensus_i, where j is the agent indexed by S in C. The detailed implementation of the verification and update protocol in phase 3 is given in Algorithm 2.
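The election step can be sketched as follows. Reading the tie-break as picking the agent of index pr mod |C| in ascending id order is our interpretation of the text, and the function name is ours.

```python
def elect(proposals):
    # proposals: {agent_id: proposal} for every agent in the decision set D
    pr = sorted(proposals.values())[-2]                     # second max proposal
    C = sorted(j for j, p in proposals.items() if p == pr)  # tied agents, by id
    return C[pr % len(C)]   # unique agent, or tie-break by pr mod |C|

assert elect({1: 5, 2: 9, 3: 7}) == 3   # unique second max: agent 3
assert elect({1: 7, 2: 9, 3: 7}) == 3   # tie on 7: C = [1, 3], 7 % 2 = 1
```

The elected agent's initial preference v_j then becomes the consensus value.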
Basically, for each link l_kp, NS_i^r[l_kp] is a tuple containing two tuples, t_A and t_B. The first tuple t_A represents the state of l_kp and has three types: Msg-R, Msg-X, and Msg-O, representing a correct link, a faulty link, and an unknown-state link, respectively. The format of type Msg-R is (m, k, random_p^m), where m is the round of the link state, k is the agent reporting the link state, and random_p^m is the message random number sent by p in round m; it is easy to see that k thereby reports the state of its direct link l_kp as correct. The format of type Msg-X is (m, k, X, X-random_k^m[l_kp]), where m and k are the same as in Msg-R, X is an identifier, and X-random_k^m[l_kp] is the sorted set of faulty random numbers on l_kp generated by k in round m − 1; specifically, the set is sorted by agent id in ascending order. The format of type Msg-O is ∅, because the state of l_kp is unknown to i. The second tuple t_B describes the source of t_A and has the form (j, m), where j is the agent sending the link state t_A to i, and m is the round in which j sends it. In particular, for a direct link, when i first updates the state in round r, t_B is ϕ, meaning that the message source is i itself. HS_i^r[l_kp] denotes the state of link l_kp^r known to i, where r ranges from 1 to t + 3. It contains at most two different tuples, because the state of l_kp in round r can only be detected and reported by k and p. The form of each tuple is similar to t_A, but the round in t_A must be r, and the agents in the two tuples must be different, namely k and p, respectively.
In particular, if the types of the two tuples are Msg-R and Msg-X, we consider the state of l_kp^r to be faulty. If both are Msg-O, then l_kp^r is an unknown-state link, which is treated as a correct link when computing the decision round and decision set in round t + 3.
The pseudocode in Algorithm 2 is explained in detail as follows.
i initially generates T from NS^{r−1}; clearly, i must have received the messages sent by each agent j ∈ T in round r (line 2). i also computes the set S, which equals lost_i (line 3).
First, in phase 1, i updates the states of its direct links in round r. For each agent j ∈ T, i has received j's messages in round r, so i updates the t_A of NS_i^r[l_ij] to type Msg-R (line 7), and i must be able to obtain the message random number random_j^r from {random^r}. Then i invokes APPENDHS to append the state (r, i, random_j^r) to HS_i^r[l_ij] (line 8). We stipulate that APPENDHS must guarantee that the appended state satisfies the properties of HS discussed above; for example, each link l_kp has at most two different tuples in each round, and they come from the different agents k and p. If a state violates the properties of HS, APPENDHS decides ⊥ and terminates the protocol early. Next, each agent j ∈ S has omission failures detected by i, because i received no message from it. If the type of link l_ij is already faulty in NS_i^r as inherited from NS_i^{r−1}, i does nothing, because for each link NS records only the earliest round in which the link fails (lines 10-11). Otherwise, i updates t_A to type Msg-X and appends the new state to HS_i (lines 13-14).
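The APPENDHS invariant, at most two tuples per link and round, one from each endpoint, can be sketched as follows; the function signature and the dict encoding of HS are our assumptions.

```python
def append_hs(HS, link, rnd, state, reporter):
    # enforce: per (link, round), at most two tuples, one from each endpoint
    k, p = link
    entry = HS.setdefault((link, rnd), {})
    if reporter not in (k, p) or reporter in entry:
        raise ValueError("inconsistency: decide ⊥")
    entry[reporter] = state

HS = {}
append_hs(HS, ("k", "p"), 2, "Msg-R", "k")  # k reports l_kp correct in round 2
append_hs(HS, ("k", "p"), 2, "Msg-X", "p")  # p reports it faulty: both kept
```

A duplicate report from the same endpoint, or a report from a non-endpoint, violates the invariant and triggers the early ⊥ decision.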
Then, in phase 2, i uses the message chain mechanism to verify the correctness of the messages NS^{r−1} received in the receive phase (lines 17-18). Message Chain Mechanism. For each agent j, its message NS_j^m has the following properties. Let S_j^m be the set of agents that disconnected from j in or before round m, and T_j^m the set of agents still connected to j in round m. Let X(r) denote the Msg-X tuple whose round number equals r. Claim 1. For a link l_kp in NS_j^m, where k = j and p ∈ S_j^m ∪ T_j^m, its state in round m must be known, and the number of correct links of l_jp is greater than or equal to n − t − 1.

Claim 2. For a link l_kp in NS_j^m, where k = j and p ∈ S_j^m ∪ T_j^m, its state in round m + 1 and later must be unknown. Claim 9. In Case 1, there must be r′ < r.
If the verification fails, then i detects an inconsistency and decides ⊥ (lines 41-42). If i detects an inconsistency, it decides ⊥ (line 27); otherwise, i updates the states as previously discussed.

4.2. Proof of the Protocol. The proof assumes n > 2t + 1. We define the following variables. Definition 3. Nf^r denotes the set of nonfaulty agents in round r; F^r the set of faulty agents in round r; F^{Δr} the set of faulty agents newly detected in round r; and x^r the number of risk agents in round r.
We first prove the upper bound on message passing time, which gives the round complexity of the algorithm. Theorem 1 (message passing mechanism). If i, j ∈ Nf^{r+t+1}, then consensus on all link states of round r can be reached between i and j at the latest in round r + t + 1.
Proof. Consider the state of link l_kp in round r, where k, p ∈ N. We may treat the messages sent by k and p as independent of each other; this does not affect the final consensus outcome. For example, suppose Type(State_k[l_kp^r]) = Msg-X is received by all nonfaulty agents in round m_1 (< r + t + 1), and Type(State_p[l_kp^r]) = Msg-R is received by all nonfaulty agents in round m_2 (m_1 < m_2 < r + t + 1). Even if the detection result of p may no longer be forwarded after round m_1, we still obtain the correct consensus state in round r + t + 1 when the two detection results are considered independently. We have the following cases: (i) Case 1. k and p are good agents in round r. In round r + 1, k and p send their detection results to all good agents. So if t = 0, all agents reach consensus on the state of l_kp in round r + 1; if t > 0, all nonfaulty agents reach consensus in round r + 2. Therefore, consensus on all link states of round r among good agents is reached by round r + t + 1. (ii) Case 2. k is a risk agent or faulty agent and p is not equal to k. Since a receive omission can be converted to a send omission, each risk or faulty agent has at most the following three choices when sending messages in each round: (1) send its detection result to at least one good agent; (2) send it to no other agent; (3) send it only to risk or faulty agents.

(3) Case 2.3. k chooses 3, so no good agent knows State_k[l_kp^r] in round r + 1, and k is detected as faulty in round r + 1. Suppose that only one risk agent receives the state; since agents are independent of each other, it is easy to scale the number of receiving agents from one to many. We divide this case into two subcases. (b) Case 2.3.2. k has no send omissions with some risk or faulty agents other than p. Then in round r + 2, the risk and faulty agents that have received messages from k also have three choices. Take one such risk agent l as an example. If l chooses 1 or 2 in round r + 2, the results are the same as in Cases 2.1 and 2.2, where the theorem holds. If l chooses 3, no good agent knows State_k[l_kp^r] in round r + 2. We may assume that when risk and faulty agents choose 3, they send State_k[l_kp^r] to risk or faulty agents other than the source agent of the state, because if they only send the state back to the source agent, the final result depends only on the source agent, not on themselves. Now consider round r + t. If from round r + 1 to round r + t − 1 all risk and faulty agents that have received State_k[l_kp^r] choose 3, then the risk (or faulty) agent z holding the state in round r + t must be the last risk (or faulty) agent in the system. At this point, z has only two choices, 1 and 2, and it is easy to see that consensus on State_k[l_kp^r] must be reached in round r + t + 1. If instead some risk or faulty agent that received State_k[l_kp^r] chooses 1 or 2 between rounds r + 1 and r + t − 1, the results are the same as in Cases 2.1 and 2.2.
In summary, the theorem holds. □

Corollary 1. If i, j ∈ Nf_{r+x_r+1}, all link states in round r can reach a consensus between i and j in round r + x_r + 1.
Proof. There are t − x_r faulty agents in round r. Since the agents that became faulty before round r do not send any messages in round r + 2, it is equivalent to Case 2.1 that k sends messages to these faulty agents in round r + 1. The total number of risk and faulty agents in Case 2.3 can thus be reduced to x_r. Therefore, in Case 2.3.2, if choice 3 keeps being taken, there are no risk or faulty agents left by round r + x_r, and all link states in round r can reach a consensus between i and j in round r + x_r + 1. □
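To make the bound of Corollary 1 concrete, the worst case can be abstracted as a relay chain: the state is passed one hop per round along the x_r risk/faulty agents (each taking choice 3) until the last one must send to good agents. The sketch below is our simplified abstraction, not the paper's algorithm:

```python
# Minimal abstraction (a sketch, not the paper's protocol) of the worst case
# behind Theorem 1 / Corollary 1: a chain of x_r risk/faulty agents delays a
# link state of round r as long as possible before good agents learn it.

def consensus_round(r: int, x_r: int) -> int:
    """Round by which the state of round r reaches consensus among nonfaulty
    agents, assuming the adversarial chain delays maximally."""
    rnd = r
    for _ in range(max(x_r - 1, 0)):  # rounds r+1 .. r+x_r-1: chain keeps choosing 3
        rnd += 1
    rnd += 1                          # round r+x_r: last chain agent must choose 1 or 2
    return rnd + 1                    # one more round: all nonfaulty agents agree

# matches the bound r + x_r + 1 of Corollary 1 (for x_r >= 1)
for r in range(1, 4):
    for x_r in range(1, 6):
        assert consensus_round(r, x_r) == r + x_r + 1
```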

Lemma 1 (round complexity). The link states HS of the second clean round and all previous rounds can reach a consensus among all nonfaulty agents at the latest in round t + 3.
Proof. By Theorem 1, the smaller the round r, the smaller the supremum of the round in which the link states of round r can reach a consensus. Hence, we directly consider the second clean round. Suppose the second clean round is y. Then there are already at least y − 2 faulty agents in round y; that is, x_y ≤ t − y + 2. By Corollary 1, the link states in round y can reach consensus in round y + x_y + 1. Since y + x_y + 1 ≤ t + 3, the lemma holds. □
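The arithmetic behind this bound can be checked numerically. The snippet below is only a sanity check of the inequality y + x_y + 1 ≤ t + 3 under the stated relation x_y ≤ t − y + 2:

```python
# Numeric sanity check (sketch) of the bound in Lemma 1: if the second clean
# round is y, at most x_y = t - y + 2 risk agents remain, so the consensus
# round y + x_y + 1 never exceeds t + 3.
for t in range(0, 12):
    for y in range(2, t + 3):       # the second clean round is at least round 2
        x_y = t - y + 2             # worst-case number of remaining risk agents
        if x_y < 0:
            continue
        assert y + x_y + 1 <= t + 3
```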
Next, we prove that the algorithm satisfies all the properties of uniform consensus with general omission failures.

Lemma 2. If i is a nonfaulty agent, then Type
Proof. Link l_{kp} cannot recover after a fault occurs. So if l_{kp} is a faulty link in round r, then its state must also be Msg-X in all subsequent rounds. Moreover, HS also expands all Msg-X states backwards in LASTUPDATE. □

Lemma 3. If i is a nonfaulty agent, then Type
Proof. Since Type(HS^r_i[l_{kp}]) ≠ Msg-O, there must be an agent among k and p (say p) that reported State_p[l^r_{kp}] in round r + 1, and the state was eventually transmitted to i. Suppose that i receives the state in round r′. Then we have C^i_p(r, r′). Since a link omission is irreversible, C^i_p(r, r′) must be nonfaulty from round 1 to round r − 1. □

Proof. For a nonfaulty agent i, |lost_i| must be less than or equal to t. So it is easy to see that i has correct links with at least n − t − 1 agents. □
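The counting underlying this link bound and Lemma 6 can be verified directly. The check below is our sketch of the arithmetic, assuming n > 2t + 1:

```python
# Counting sketch: with n > 2t + 1, a nonfaulty agent keeps correct links with
# at least n - t - 1 other agents, which exceeds t; excluding the at most
# t - 1 other risk/faulty agents still leaves at least 2 good agents (Lemma 6).
for t in range(1, 12):
    for n in range(2 * t + 2, 2 * t + 9):    # every n with n > 2t + 1 in this range
        correct_links = n - t - 1
        assert correct_links > t             # more correct links than faulty agents
        assert correct_links - (t - 1) >= 2  # at least 2 links go to good agents
```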

Lemma 6. A nonfaulty agent must have correct links with at least 2 good agents other than itself in a round.
Proof. We analyze the nonfaulty agent i in two cases, according to whether i is a good agent or a risk agent.
Therefore, the lemma holds. □

Remark 1. For a faulty agent f in round r, since f has faulty links with more than t agents in round r, it does not send messages to any agent after at most 2 rounds. Hence, from round r onwards, the faulty agent f must be removed when computing the number of connections of the other agents.

Lemma 7. Suppose that the direct link state information of j in round r can be agreed on by all nonfaulty agents in round m. If i is a nonfaulty agent in round m and agent j is considered an uncertain agent in HS^r_i, then j must be a faulty agent in HS^{r+1}_i.

Computational Intelligence and Neuroscience
Proof. The proof is by contradiction. Assume that j is considered a nonfaulty agent or an uncertain agent in HS^{r+1}_i.
(i) Case 1. j is a nonfaulty agent in HS^{r+1}_i while it is considered an uncertain agent in HS^r_i. j must send NS^r_j to at least 2 good agents in round r + 1 (Lemma 6). Then these good agents send the direct link states of j to all nonfaulty agents. Hence, j must be a certain agent in HS^r_i, a contradiction.
(ii) Case 2. j is an uncertain agent in HS^{r+1}_i while it is considered an uncertain agent in HS^r_i. It is easy to see that the link states between j and the good agents cannot be unknown-state in rounds r and r + 1. Since the number of good agents, n − t, must be greater than t, j cannot have faulty links with all good agents. Then j must send NS^r_j to some good agents in round r + 1. Likewise, j must be a certain agent in HS^r_i, a contradiction.
Thus, we reach contradictions in all cases, which proves the lemma. □

to some good agents (denoted by U) and risk agents in round r. Then the two results are sent to all good agents by the agents in U in round r + 1. So every good agent knows the uniform state of l^{r−1}_{ij} in round r + 1. Therefore, all nonfaulty agents reach a consensus on the state in round r + 2.
(iii) Case 3. i is a good agent and j is a risk agent.
Similarly, it is easy to see from Cases 1 and 2 that all good agents have the uniform state of l^{r−1}_{ij} in round r + 1, which is what we want. Thus, the lemma holds. □

Proof. Assume, without loss of generality, that the messages of p have no effect on the state of l_{kp}. Since k is a nonfaulty agent, by Lemma 8, State_k[l^{r−1}_{kp}] is received by all nonfaulty agents in round r + 2. Hence, i must know the state of l^{r−1}_{kp}. The lemma holds. □

Corollary 2. If round r is a reliable round and the total number of rounds is greater than r + 3, there are no uncertain agents in round r.
Proof. For a contradiction, let i be an uncertain agent in round r. By Lemma 7, we have two cases. For both Case 1 and Case 2, by Lemma 8, there are contradictions to the assumption. Hence, i must be a faulty agent in HS^{r+1}. The unknown-state link is regarded as a correct link, so that i is regarded as a nonfaulty agent in HS^r. This contradicts the assumption that r is a reliable round. Thus, the corollary holds. □

Proof. Suppose, for a contradiction, that there is no clean round in t + 1 rounds. Then there must be a new faulty agent added in each round, so there are at least t + 1 faulty agents in t + 1 rounds. This contradicts the assumption that there are at most t faulty agents. □

Corollary 3. There are at least two clean rounds in t + 2 rounds.
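The pigeonhole reasoning behind the clean-round claims can be checked exhaustively for a small t. The snippet below is our sketch, enumerating every way at most t rounds can each introduce a new faulty agent:

```python
# Pigeonhole sketch for the clean-round claims: if at most t rounds can each
# introduce a new faulty agent, then any t + 1 consecutive rounds contain a
# clean round, and any t + 2 rounds contain at least two (Corollary 3).
from itertools import combinations

t = 4
for k in range(t + 1):                        # k <= t rounds get a new faulty agent
    for bad in combinations(range(t + 2), k):
        clean = [r for r in range(t + 2) if r not in bad]
        assert len(clean) >= 2                # Corollary 3
        # every (t+1)-round sub-window also has a clean round
        for start in (0, 1):
            assert any(r not in bad for r in range(start, start + t + 1))
```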

Corollary 4. In t + 2 rounds, there must be one reliable round r in which at most one new faulty agent is detected.
Proof. Suppose that there are a clean rounds in t + 2 rounds. We prove the corollary in two cases:
(i) Case 1. All clean rounds are later than round 1. For a contradiction, suppose two new faulty agents are detected in each reliable round. Then there are 2a faulty agents, and t − 2a faulty agents remain in the t + 2 − 2a remaining rounds. It is easy to see that t + 2 − 2a > t − 2a. Therefore, there must be clean rounds among the remaining t + 2 − 2a rounds, contradicting the assumption that there are a clean rounds in t + 2 rounds.
(ii) Case 2. Round 1 is a clean round. For a contradiction, suppose two new faulty agents are detected in each reliable round. Then there are 2(a − 1) faulty agents, and t − 2a + 2 faulty agents remain in the t + 2 − 2a + 1 remaining rounds. Since t + 2 − 2a + 1 > t − 2a + 2, there must be clean rounds among the remaining t + 2 − 2a + 1 rounds, a contradiction.
Thus, we reach a contradiction in every case, which proves the corollary. □

Lemma 11. In round t + 3, if a faulty agent i ∈ F_{Δt+3} can receive messages from at least one good agent, the link states of the second clean round and all previous rounds can also reach a consensus among i and all nonfaulty agents.
Proof. By Corollary 4, from the second clean round y to round t + 3, there must be a reliable round (suppose the first is r) in which at most one new faulty agent is detected, because the total number of risk and faulty agents is t − y + 2 and the total number of rounds is t − y + 4. Let y′ = r + 1, which is a clean round. Since F_{Δt+3} ≠ ∅, y < y′ < t + 3. Since no new faulty agents are detected in round y′, risk agents can only choose 2 in round y′ (see the details in Theorem 1). Hence, in round y′ + 1, all good agents reach a consensus on the link states HS of round y and all previous rounds. We divide y′ into two cases:
(i) Case 1. y < y′ < t + 2. Then y′ + 2 ≤ t + 3. In round y′ + 2, all good agents send the latest, uniform link states of round y and all previous rounds to all agents. Thus, i must reach a consensus.
(ii) Case 2. y′ = t + 2. Assume that, for the reliable rounds in which two or more faulty agents are detected, the extra faulty agents can be averaged over to the next round, so that a clean round can also be regarded as a normal round. Then the number of faulty agents keeps increasing in each round from round y + 1 to round t + 1. Thus, at least t + 1 − y faulty agents have been added by round y′. Since there are x_y (≤ t − y + 2) risk agents in round y, at most one risk agent remains in round y′, and it must be i. It is then easy to see that i must reach a consensus in round t + 3.
Thus, the lemma holds.

□
Theorem 2. CONSENSUS solves uniform consensus if at most t agents omit to send or receive messages, n > 2t + 1, and all agents are honest.
Proof. Since n > 2t + 1, it is easy to see that no inconsistency is detected.
Termination. From Algorithm 1, nonfaulty agents must decide in round t + 4 and faulty agents decide before round t + 4.
Validity. Since no inconsistency is detected, all agents make decisions different from ⊥. For agent i, if i decides a value decision_i, then decision_i must be the initial preference of an agent in the decision set D. Since D depends on HS^{m*}_i, we must have D ⊆ N. Therefore, decision_i satisfies the validity property. If i decides ‖, then i has no decision and ‖ does not affect the final consensus outcome; thus, it also conforms to validity.
Uniform Agreement. We prove this in the following cases:
(i) Case 1. Agents i, j ∈ Nf_{t+4}. By Corollary 3, there must be a decision round m* within t + 3 rounds. And by Lemma 1, we have HS^{m*}_i = HS^{m*}_j. Then D_i = D_j = D*. Since the pieces of the preferences and proposals of all the agents in D* must be saved by at least 2 good agents (Lemma 6), all good agents and some risk agents can restore all initial values and proposals of the agents in D* in round t + 3. We denote these agents by

To achieve the Nash equilibrium, we make some appropriate assumptions about initial preferences and failure patterns. A failure pattern represents the set of failures that occur during an execution of the consensus protocol [12]. Specifically, we assume that initial preferences and failure patterns are blind.

Definition 4. Blind initial values mean that each agent cannot guess the preferences of other agents, and the probability of its own preference becoming the consensus cannot be improved by trusting others.
By Definition 4, if an agent wants to improve its own utility, it can only rely on itself, for example, by increasing the probability of entering the decision set or reducing the number of agents in the decision set.

Definition 5. Blind failure patterns mean that before t faulty agents appear, an agent cannot guess the link states in the following rounds. Then we have that:
(i) If agent i does not know the link states of round m in round r and j is a nonfaulty agent in round r, then P(i ∈ F_Δm | link_ij is faulty) = P(j ∈ F_Δm | link_ij is faulty) ≤ α.
(ii) For rounds m_1 and m_2, if m_1 < m_2 and i does not know the link states of round m_2, then P(v_i becomes the consensus | m_1 is the decision round) = P(v_i becomes the consensus | m_2 is the decision round).
(iii) For t + 1 rounds, if the link states of each round in the t + 1 rounds are unknown to agent i, then for a round r among the t + 1 rounds, P(r is a clean round) ≥ 1/(t + 1).

Theorem 3. If n > 2t + 1, at most t agents have omission failures at the same time, agents prefer consensus, and failure patterns and initial preferences are blind, then σ⃗^CONSENSUS is a Nash equilibrium.
Proof. To prove the Nash equilibrium, we need to show that no agent i ∈ N can increase its utility u_i by any possible deviation σ_i′. That is, we prove that for each agent i,

u_i(σ⃗^CONSENSUS) ≥ u_i(σ_i′, σ⃗^CONSENSUS_{−i}). (1)

We use the same deduction method as in [12]. Consider all the ways that i can deviate from the protocol to affect the outcome:
(1) i generates a different value v_i′ ≠ v_i (or proposal_i′ ≠ proposal_i) and sends q_i′(j) (or b_i′(j)) to some agents j ≠ i.
(2) q_i(j) (or b_i(j)) sent by i cannot restore v_i (or proposal_i).
i sends an incorrect random or X-random of l_{kp} to j in round m.
(8) i sends an incorrect q_l(i) (or b_l(i)) to j ≠ l, different from the q_l(i) (or b_l(i)) that i received from l in round 1.
(9) i sends an incorrect consensus_i to j ≠ i in round t + 4.
(10) i pretends to crash in round m.
We consider these deviations one by one and prove that i does not gain by any of them; that is, equation (1) holds if i deviates from the protocol in any of the ways listed above.
(i) Type 1. (i) If i sends q_i′(j) to some agents, then either an inconsistency is detected because of a secret-restoring error, or i does not gain. Specifically, if i is the agent whose value is chosen, then i is worse off if it lies than if it does not, since some agents cannot restore v_i, whereas they could restore it if i followed the protocol. If i is not the agent whose preference is chosen, then the deviation does not affect the outcome. (ii) If i sends b_i′(j), then an inconsistency is detected if there is a polynomial-restoring error or different consensus values are generated in the system. And if no inconsistency is detected, then either all agents that receive b_i or b_i′ are faulty, or both b_i and b_i′ do not affect the final outcome. Since changing the proposal cannot increase i's utility, i does not gain. Therefore, in both (i) and (ii), i does at least as well using the strategy σ^CONSENSUS_i as it does deviating from the protocol according to Type 1. So equation (1) holds.
(ii) Type 2. It is easy to see that either an inconsistency is detected or there is no benefit, because there is no increase in the probability that v_i becomes the consensus. Thus, equation (1) holds.
(iii) Type 3. (i) Since the other agents follow the protocol, the deviation does not affect the final outcome, because the two kinds of random numbers are only used for verification. (ii) Since i does not know the proposals of the other agents in round 1, using different proposals cannot improve the probability that v_i becomes the consensus. Thus, equation (1) holds.
(iv) Type 4. If i sends an incorrectly formatted message to j, then either an inconsistency is detected by j or the message does not affect the outcome, since j omits to receive messages from i in round m. Thus, i does not gain, so equation (1) holds.
(v) Type 5. Since |lost_i| > t, i does not receive messages from at least t + 1 agents in round m; that is, i has receiving omission failures with at least two good agents.
(1) Case 1. i does not guess the message random numbers in round m + 1. Then by Claim 1, an inconsistency is detected. (2) Case 2. i guesses the message random number in round m + 1 and has correct links with the remaining n − t − 2 agents, and these agents are all nonfaulty. Then by Claim 1, i can successfully send messages in round m + 1 iff i can guess a message random number random^m_j from a nonfaulty agent j and i has no sending omission failures with j. That is, the random guessing does not change the detection result of the state of i by other agents in
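The share-consistency checks invoked in Types 1 and 2 can be illustrated with a generic Shamir-style sharing. The sketch below is ours, not the paper's exact scheme; the field size, polynomial degree, and coefficients are illustrative assumptions. It shows why a tampered share q_i′(j) leads to a detectable inconsistency: reconstructions from different share subsets disagree.

```python
# Illustrative sketch only: the paper's shares q_i(j)/b_i(j) are modeled here
# as generic Shamir-style shares over a small prime field (all parameters are
# our assumptions). A deviating agent altering one share makes reconstructions
# from different subsets disagree, so the deviation is detected.
P = 2**13 - 1  # a small Mersenne prime

def make_shares(secret, coeffs, ids):
    """Evaluate the polynomial secret + c1*x + c2*x^2 + ... at each agent id."""
    poly = [secret] + coeffs
    return {j: sum(c * pow(j, k, P) for k, c in enumerate(poly)) % P for j in ids}

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    s = 0
    for j, y in shares.items():
        num = den = 1
        for m in shares:
            if m != j:
                num = num * (-m) % P
                den = den * (j - m) % P
        s = (s + y * num * pow(den, P - 2, P)) % P
    return s

ids = [1, 2, 3, 4, 5]
shares = make_shares(42, [7, 11], ids)         # degree-2 polynomial: 3 shares suffice
assert reconstruct({j: shares[j] for j in (1, 2, 3)}) == 42

tampered = dict(shares)
tampered[3] = (tampered[3] + 1) % P            # a deviating agent alters its share
r1 = reconstruct({j: tampered[j] for j in (1, 2, 3)})
r2 = reconstruct({j: tampered[j] for j in (1, 2, 4)})
assert r1 != r2                                # subsets disagree: deviation detected
```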