Games Based Study of Nonblind Confrontation

1 Guizhou Provincial Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
2 Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
3 National Engineering Laboratory for Disaster Backup and Recovery, Beijing University of Posts and Telecommunications, Beijing 100876, China


Introduction
The core of all security issues represented by cyberspace security [1], economic security, and territorial security is confrontation. Network confrontation [2], especially in the big data era [3], has been widely studied in the field of cyberspace security. There are two strategies in network confrontation: blind confrontation and nonblind confrontation. The so-called "blind confrontation" is a confrontation in which the attacker and the defender are each aware only of their own assessment results and know nothing about the enemy's assessment results after each round of confrontation. Superpower rivalry, battlefield fights, network attack and defense, espionage wars, and other brutal confrontations usually belong to blind confrontation. The so-called "nonblind confrontation" is a confrontation in which both the attacker and the defender know the same, consistent result after each round. The games studied in this paper all belong to nonblind confrontation.
"Security meridian" is the first cornerstone of the General Theory of Security and has been well established [4,5]. Security confrontation is the second cornerstone of the General Theory of Security, where we have studied blind confrontation and given the precise limit of hacker attack ability (honker defense ability) [4,5]. Compared with blind confrontation, the winning and losing rules of nonblind confrontation are more complex and harder to study. In this paper, based on Shannon's Information Theory [6], we study several well-known games of nonblind confrontation from a novel point of view: "rock-paper-scissors" [7], "coin tossing" [8], "palm or back," "draw boxing," and "finger guessing" [9]. The famous game "rock-paper-scissors" has been played for thousands of years, yet there are few related analyses of it. An interdisciplinary team from Zhejiang University, the Chinese Academy of Sciences, and other institutions, in cooperation with more than three hundred volunteers, spent four years playing "rock-paper-scissors" and analyzing the game, and the findings were selected as a "Best of 2014" by MIT Technology Review. We obtain some significant results. The contributions of this paper are as follows: (i) Channel models of all the above games are established.
(ii) We find that the attacker or the defender winning one round is equivalent to one bit being transmitted successfully over the corresponding channel.
(iii) Unified solutions for all the nonblind confrontations are given.
The rest of the paper is organized as follows. The model of rock-paper-scissors is introduced in Section 2, the models of coin tossing and palm or back in Section 3, the models of finger guessing and draw boxing in Section 4, and the unified model of linearly separable nonblind confrontation in Section 5; Section 6 concludes this paper.
Model of "Rock-Paper-Scissors"

The Law of Large Numbers indicates that frequency tends to probability in the limit; thus the choice habits of A and B can be represented by the probability distributions of the random variables X and Y: Pr(X = 0) = p is the probability that A shows "scissors"; Pr(X = 1) = q is the probability that A shows "rock"; Pr(X = 2) = 1 − p − q is the probability that A shows "paper," where 0 < p, q and p + q < 1. Likewise, Pr(Y = 0) = s is the probability that B shows "scissors"; Pr(Y = 1) = t is the probability that B shows "rock"; Pr(Y = 2) = 1 − s − t is the probability that B shows "paper," where 0 < s, t and s + t < 1.
Construct another random variable Z = [2(1 + X + Y)] mod 3 from X and Y. Because any two random variables can form a communication channel, we obtain a communication channel (X; Z) with X as the input and Z as the output, called "Channel A," which is shown in Figure 1. If A wins, then there are only three cases: (X = 0, Y = 2), (X = 1, Y = 0), and (X = 2, Y = 1); in each of them Z = [2(1 + X + Y)] mod 3 = X, so the output of "Channel A" equals its input and one bit is transmitted successfully.
In contrast, if "Channel A" sends one bit from the sender to the receiver successfully, that is, Z = X, then only the same three cases are possible, and in each of them A wins.
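This equivalence can be checked exhaustively. The sketch below (Python), assuming the paper's encoding 0 = scissors, 1 = rock, 2 = paper, enumerates all nine action pairs and confirms that Z = [2(1 + X + Y)] mod 3 equals the input X exactly in the three rounds that A wins:

```python
# Encoding assumed from the text: 0 = scissors, 1 = rock, 2 = paper.
# BEATS holds the (winner, loser) gesture pairs: rock beats scissors,
# paper beats rock, scissors beats paper.
BEATS = {(1, 0), (2, 1), (0, 2)}

wins = 0
for x in range(3):          # A shows x
    for y in range(3):      # B shows y
        z = (2 * (1 + x + y)) % 3      # constructed output of "Channel A"
        a_wins = (x, y) in BEATS
        assert (z == x) == a_wins      # a bit crosses Channel A iff A wins
        wins += a_wins
assert wins == 3                       # exactly the three winning cases
```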

Lemma 1. A wins once if and only if "Channel A" sends one bit from the sender to the receiver successfully.

Now we can construct another channel (Y; Z) from the random variables Y and Z, with Y as the input and Z as the output, which is called "Channel B." Similarly, we get the following lemma.

Lemma 2. B wins once if and only if "Channel B" sends one bit from the sender to the receiver successfully.
Thus, the winning and losing problem of "rock-paper-scissors" played by A and B is converted into the problem of whether information bits can be transmitted successfully over "Channel A" and "Channel B." According to Shannon's second theorem [6], the channel capacity equals the maximal number of bits that a channel can transmit successfully per use. Therefore, the problem is transformed into a channel capacity problem. More accurately, we have the following theorem. Here, we calculate the channel capacities of "Channel A" and "Channel B" as follows.
For the channel (X; Z) of A, let Q denote its 3 × 3 transition probability matrix, with entries q_ij = Pr(Z = j | X = i). The transition probability matrix is used to calculate the channel capacity: solve the equations Qβ = h, where β = (β_0, β_1, β_2)^T is a column vector and h_i = Σ_j q_ij log q_ij. Consider the transition probability matrix Q.
(1) If Q is invertible, the system has a unique solution; the channel capacity is C = log Σ_j 2^{β_j}, the output distribution is Pr(Z = j) = 2^{β_j − C}, and from it the probability distribution of X is obtained. If Pr(X = i) ≥ 0, i = 0, 1, 2, the channel capacity is confirmed as C.
(2) If Q is not invertible, the system has multiple solutions. Repeating the above steps, we get multiple candidate values C and the corresponding distributions Pr(X = i). If a solution does not satisfy Pr(X = i) ≥ 0, i = 0, 1, 2, we delete the corresponding C.
For the channel (Y; Z) of B, let R denote its 3 × 3 transition probability matrix, with entries r_ij = Pr(Z = j | Y = i). The transition probability matrix R is used to calculate the channel capacity C_B. Solve the system Rβ = h, where β and h are column vectors with h_i = Σ_j r_ij log r_ij. Consider the transition probability matrix R.
(1) If R is invertible, there is a unique solution; the channel capacity is C_B = log Σ_j 2^{β_j}, and the probability distribution of Y is obtained from the output distribution Pr(Z = j) = 2^{β_j − C_B}. If Pr(Y = i) ≥ 0, i = 0, 1, 2, the channel capacity is confirmed as C_B.
(2) If R is not invertible, the system has multiple solutions. Repeating the above steps, we get multiple candidate values C_B and the corresponding distributions Pr(Y = i). If a solution does not satisfy Pr(Y = i) ≥ 0, i = 0, 1, 2, we delete the corresponding C_B.
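The invertible case of this procedure can be sketched numerically. The following Python sketch assumes the standard closed-form method for a discrete memoryless channel with an invertible square transition matrix (the function names and the binary test channel are illustrative, not part of the paper):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def capacity_invertible(Q):
    """Capacity (bits) of a DMC with invertible transition matrix Q,
    Q[i][j] = Pr(output j | input i): solve Q beta = h with
    h_i = sum_j Q_ij log2 Q_ij, then C = log2(sum_j 2**beta_j)."""
    h = [sum(p * math.log2(p) for p in row if p > 0) for row in Q]
    beta = solve(Q, h)
    C = math.log2(sum(2.0 ** b for b in beta))
    p_out = [2.0 ** (b - C) for b in beta]        # optimal output distribution
    QT = [[Q[i][j] for i in range(len(Q))] for j in range(len(Q))]
    p_in = solve(QT, p_out)                       # input law from p_out = p_in Q
    return C, p_in

# Sanity check on a binary symmetric channel with crossover probability 0.1:
C, p_in = capacity_invertible([[0.9, 0.1], [0.1, 0.9]])
```

For this symmetric test channel the computed C matches the textbook value 1 − H(0.1), with the uniform input distribution, which is exactly the consistency check ("Pr ≥ 0") described above.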
The above analysis solves the "rock-paper-scissors" game completely, but it is complex. Here, we give a more abstract and simpler solution.
The Law of Large Numbers indicates that frequency tends to probability in the limit; thus the choice habits of A and B can be represented by the probability distributions of the random variables X and Y: Pr(X = i) = p_i and Pr(Y = j) = q_j, i, j = 0, 1, 2 (5). The winning and losing rule of the game is as follows: if X = i and Y = j, then the necessary and sufficient condition for A (X) to win is (j − i) mod 3 = 2. Now construct another random variable Z = (X + 2) mod 3. Consider the channel (Z; Y) formed by Z and Y, that is, a channel with Z as the input and Y as the output; then we have the following event equations.
Conversely, if "one bit is successfully transmitted from the sender to the receiver in the channel," it means that the input (Z) of the channel (Z; Y) always equals its output (Y). That is, Y = Z = (X + 2) mod 3, which is exactly the necessary and sufficient condition for A to win.
Based on the above discussion, A (X) winning once means that the channel (Z; Y) sends one bit from the sender to the receiver successfully, and vice versa. Therefore, the channel (Z; Y) can also play the role of "Channel A" defined above.
Similarly, if the random variable W = (Y + 2) mod 3, then the channel (W; X) can play the role of the above "Channel B." And now the channel capacities take a simpler form. We have C(Z; Y) = max I(Z; Y), where the maximal value is taken over all possible q_0 and q_1; so C(Z; Y) is actually a function of p_0, p_1, and p_2.

Similarly, C(W; X) = max I(W; X), where the maximal value is taken over all possible p_0 and p_1; so C(W; X) is actually a function of q_0, q_1, and q_2.
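These maximizations can also be carried out numerically for an arbitrary transition matrix, including the non-invertible case, with the standard Blahut–Arimoto iteration. A self-contained Python sketch (the iteration count and the test channels are illustrative assumptions):

```python
import math

def blahut_arimoto(Q, iters=500):
    """Capacity (bits) of a DMC with transition matrix Q[i][j] = Pr(out j | in i),
    found by iteratively maximizing I over the input distribution."""
    n, m = len(Q), len(Q[0])
    p = [1.0 / n] * n                       # start from the uniform input law
    for _ in range(iters):
        q = [sum(p[i] * Q[i][j] for i in range(n)) for j in range(m)]
        # D[i]: relative entropy between row i and the induced output law
        D = [sum(Q[i][j] * math.log2(Q[i][j] / q[j])
                 for j in range(m) if Q[i][j] > 0) for i in range(n)]
        w = [p[i] * 2.0 ** D[i] for i in range(n)]
        s = sum(w)
        p = [x / s for x in w]              # multiplicative update of the input law
    q = [sum(p[i] * Q[i][j] for i in range(n)) for j in range(m)]
    return sum(p[i] * Q[i][j] * math.log2(Q[i][j] / q[j])
               for i in range(n) for j in range(m) if Q[i][j] > 0)
```

On a noiseless ternary channel (the 3 × 3 identity matrix) the iteration returns log2 3 bits, the natural upper bound for a three-gesture game.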

2.2. The Strategy of Winning.
According to Theorem 3, once the probabilities of the specific actions are determined, the outcome of the "rock-paper-scissors" game is statistically determined as well. In order to win with higher probability, a player must adjust his strategy.

2.2.1. The Game between Two Fools. The so-called "two fools" means that A and B are entrenched in their habits; that is, they choose their actions according to their established habits no matter who won in the past. According to Theorem 3, statistically, if C_A < C_B, then A will lose; if C_A > C_B, then A will win; and if C_A = C_B, then the two parties are well-matched in strength.
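Under fixed habits, the statistical outcome can be computed directly. A sketch, assuming the two fools choose independently with habit distributions p (for A) and q (for B) over the encoding 0 = scissors, 1 = rock, 2 = paper (the concrete numbers are illustrative):

```python
# A wins a round exactly when B's gesture is the one A's gesture beats:
# rock(1) beats scissors(0), paper(2) beats rock(1), scissors(0) beats paper(2),
# i.e. A wins iff Y = (X + 2) mod 3.
def win_probability(p, q):
    """Pr(A wins a round) for habit distributions p (A) and q (B)."""
    return sum(p[x] * q[(x + 2) % 3] for x in range(3))

p = (0.2, 0.5, 0.3)        # an entrenched habit for A (illustrative numbers)
q = (1 / 3, 1 / 3, 1 / 3)  # B plays uniformly
pa, pb = win_probability(p, q), win_probability(q, p)
# Against a uniform opponent, every fixed habit wins exactly 1/3 of the rounds.
assert abs(pa - 1 / 3) < 1e-12 and abs(pb - 1 / 3) < 1e-12
```

A degenerate habit is fully exploitable: `win_probability((0, 1, 0), (1, 0, 0))` is 1, since A always shows rock against B's constant scissors.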

2.2.2. The Game between a Fool and a Sage. If A is a fool, he insists on his inherent habit. After a sufficient number of rounds, B can then estimate the distribution probabilities p and q of the random variable X corresponding to A, and from the related conditional probability distributions B can obtain A's channel capacity. By adjusting his own habit (i.e., the probability distribution of the random variable Y and the corresponding conditional probability distributions), B enlarges his own channel capacity to make the rest of the game more beneficial to himself. Once B's channel capacity is large enough that C_B > C_A, B will win in the end.

2.2.3. The Game between Two Sages. If both A and B keep summarizing each other's habits, adjusting their own habits, and enlarging their channel capacities, the two parties will eventually reach equal channel capacities; that is, the competition between them will tend to a balance, a dynamically stable state.

Models of "Coin Tossing" and "Palm or Back"

3.1. The Channel Capacity of the "Coin Tossing" Game. "Coin tossing" game: the "banker" covers a coin under his hand on the table, and the "player" guesses the head or tail of the coin. The "player" wins when he guesses correctly.
Obviously, this game is a kind of "nonblind confrontation." We will use the method of channel capacity to analyze the winning and losing of the game.
Based on the Law of Large Numbers in probability theory, frequency tends to probability. Thus, according to the habits of the "banker" and the "player," that is, the statistical regularities of their actions in the past, we can give the probability distribution of their actions.
We use the random variable X to denote the state of the "player": X = 0 (X = 1) means that he guesses head (tail). Similarly, the random variable Y denotes the state of the coin covered by the "banker": Y = 0 (Y = 1) means head (tail).
Because {X guesses correctly} = {X = 0, Y = 0} ∪ {X = 1, Y = 1} = {one bit is successfully transmitted from the sender X to the receiver Y in "Channel X"}, "X wins one time" is equivalent to transmitting one bit of information successfully over "Channel X." Based on the channel coding theorem of Shannon's Information Theory, if the capacity of "Channel X" is C, then for any transmission rate k/n ≤ C, we can receive k bits successfully by sending n bits with an arbitrarily small probability of decoding error. Conversely, if "Channel X" can transmit k bits to the receiver by sending n bits without error, there must be k/n ≤ C. In a word, we have the following theorem.
Theorem 4 (banker theorem). Suppose that the channel capacity of "Channel X" composed of the random variables (X; Y) is C_X. Then one has the following: (1) if X wants to win k times, he has a certain skill (corresponding to Shannon coding) with which to achieve the goal, with probability arbitrarily close to 1, within about k/C_X rounds; conversely, (2) if X wins k times in n rounds, there must be k/n ≤ C_X.
Thus, the mutual information I(X, Y) of X and Y can be computed, and the channel capacity C_X of "Channel X" is equal to max[I(X, Y)] (the maximal value here is taken over all possible binary random variables Y). In a word, C_X = max[I(X, Y)], 0 < x, y < 1, where I(X, Y) is the mutual information above, x = Pr(X = 0), and y = Pr(Y = 0). Thus, the channel capacity C_X of "Channel X" is a function of x, which is denoted as C(x).
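The mutual information in question can be computed directly from the recorded joint behaviour of the two sides. A sketch (the joint tables are illustrative assumptions, not data from the paper):

```python
import math

def mutual_information(joint):
    """I(X; Y) in bits from a joint table joint[x][y] = Pr(X = x, Y = y)."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(
        joint[x][y] * math.log2(joint[x][y] / (px[x] * py[y]))
        for x in range(len(px)) for y in range(len(py))
        if joint[x][y] > 0
    )

# If guess and coin are statistically independent, the channel carries nothing:
independent = [[0.25, 0.25], [0.25, 0.25]]
assert abs(mutual_information(independent)) < 1e-12

# A correlated record of past rounds gives a strictly positive I(X, Y):
correlated = [[0.4, 0.1], [0.1, 0.4]]
assert mutual_information(correlated) > 0.2
```

The first check makes the underlying point of the model concrete: a player can only "transmit bits" (win systematically) to the extent that the recorded habits of the two sides are statistically dependent.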
Suppose the random variable Z = (X + 1) mod 2. Taking Z as the input and Y as the output, we obtain the channel (Z; Y), which is called "Channel Y" in this paper.
Because {Y wins} = {X = 0, Y = 1} ∪ {X = 1, Y = 0} = {Z = 0, Y = 0} ∪ {Z = 1, Y = 1} = {one bit is successfully transmitted from the sender Z to the receiver Y in "Channel Y"}, "Y wins one time" is equivalent to transmitting one bit of information successfully over "Channel Y." Based on the channel coding theorem of Shannon's Information Theory, if the capacity of "Channel Y" is C, then for any transmission rate k/n ≤ C, we can receive k bits successfully by sending n bits with an arbitrarily small probability of decoding error. Conversely, if "Channel Y" can transmit k bits to the receiver by sending n bits without error, there must be k/n ≤ C. In a word, we have the following theorem.
Theorem 5 (player theorem). Suppose that the channel capacity of "Channel Y" composed of the random variables (Z; Y) is C_Y. Then one has the following: (1) if Y wants to win k times, he has a certain skill (corresponding to Shannon coding) with which to achieve the goal, with probability arbitrarily close to 1, within about k/C_Y rounds; conversely, (2) if Y wins k times in n rounds, there must be k/n ≤ C_Y.
According to Theorem 4, we can determine the winning limit of X as long as we know the channel capacity C_X of "Channel X." Similarly, we can get the channel capacity C_Y = max[I(Z, Y)], 0 < x, y < 1, of "Channel Y." Thus, the channel capacity C_Y of "Channel Y" is a function of y, which is denoted as C(y).

From Theorems 4 and 5, we can obtain quantitative results on "the statistical results of winning and losing" and "the game skills of banker and player."

Theorem 6 (strength theorem). In the game of "coin tossing," if the channel capacities of "Channel X" and "Channel Y" are C(x) and C(y), respectively, one has the following. Case 1. If neither X nor Y tries to adjust his habit in the process of the game, that is, x and y are constant, then, statistically, if C(x) > C(y), X will win; if C(x) < C(y), Y will win; and if C(x) = C(y), the final result of the game is a "draw." Case 2. If X implicitly adjusts his habit and Y does not, that is, X changes the probability distribution x of the random variable X to enlarge the capacity C(x) of "Channel X" such that C(x) > C(y), then, statistically, X will win. On the contrary, if Y implicitly adjusts his habit and X does not, so that C(x) < C(y), then Y will win. Case 3. If both X and Y continuously adjust their habits and make C(x) and C(y) grow simultaneously, they will reach a dynamic balance when x = y = 0.5, and there is no winner or loser in this case.

3.2. The Channel Capacity of the "Palm or Back" Game. The "palm or back" game: three participants (A, B, and C) choose their actions of "palm" or "back" at the same time; if one of the participants chooses the action opposite to the other two (e.g., the others choose "palm" when he chooses "back"), he wins the round.
Obviously, this game is also a kind of "nonblind confrontation." We will use the method of channel capacity to analyze the winning and losing of the game.
Based on the Law of Large Numbers in probability theory, frequency tends to probability. Thus, according to the habits of A, B, and C, that is, the statistical regularities of their actions in the past, we have the probability distribution of their actions: with 0 denoting "palm" and 1 denoting "back," let a = Pr(X = 0), b = Pr(Y = 0), and c = Pr(Z = 0), where the random variables X, Y, and Z denote the actions of A, B, and C, respectively.
Similarly, according to the Law of Large Numbers, we can obtain the joint probability distribution of the random variables (X, Y, Z) from the records of their game results after some rounds. Construct the output random variable U_A = (Y + 1) mod 2 on the rounds in which Y = Z. Taking X as the input and U_A as the output, we obtain the channel (X; U_A), which is called "Channel A" in this paper.
Conversely, after removing the rounds in which the three participants choose the same action, if {one bit is successfully transmitted from the sender (X) to the receiver (U_A) in "Channel A"}, then {X = 0, U_A = 0} ∪ {X = 1, U_A = 1} = {X = 0, Y = 1, Z = 1} ∪ {X = 1, Y = 0, Z = 0} = {A for palm, B for back, C for back} ∪ {A for back, B for palm, C for palm} = {A wins}. Thus, "A wins one time" is equivalent to transmitting one bit successfully from the sender X to the receiver U_A in "Channel A." From the channel coding theorem of Shannon's Information Theory, if the capacity of "Channel A" is C, then for any transmission rate k/n ≤ C, we can receive k bits successfully by sending n bits with an arbitrarily small probability of decoding error. Conversely, if "Channel A" can transmit k bits to the receiver by sending n bits without error, there must be k/n ≤ C. In a word, we have the following theorem.
Theorem 7. Suppose that the channel capacity of "Channel A" composed of the random variables (X; U_A) is C_A. Then, after removing the rounds in which the three participants choose the same action, one has the following: (1) if A wants to win k times, he has a certain skill (corresponding to Shannon coding) with which to achieve the goal, with probability arbitrarily close to 1, within about k/C_A rounds; conversely, (2) if A wins k times in n rounds, there must be k/n ≤ C_A.
In order to calculate the capacity of the channel (X; U_A), we first calculate the joint probability distribution of the random variables (X, U_A) from the joint distribution of (X, Y, Z). The channel capacity of "Channel A" is then C_A = max[I(X, U_A)], and it is a function of b and c, which is denoted as C_A(b, c).
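The win–bit equivalence behind this construction can be checked mechanically from any joint distribution of (X, Y, Z). A sketch, assuming the encoding 0 = palm, 1 = back; the output variable U used here is one concrete choice that realizes the equivalence (the text leaves the mixed rounds, where B and C differ, implicit, so in those rounds U is chosen so that no bit crosses the channel):

```python
from itertools import product

# Encoding assumed: 0 = palm, 1 = back.  A wins a round iff Y == Z != X.
def channel_a_success_rate(joint):
    """joint[(x, y, z)] = Pr of that action triple.  Returns
    (Pr A wins, Pr one bit crosses "Channel A"), after removing the
    all-identical rounds, with output U = 1 - y if y == z else 1 - x."""
    kept = {t: p for t, p in joint.items() if len(set(t)) > 1}
    total = sum(kept.values())
    a_wins = sum(p for (x, y, z), p in kept.items() if y == z != x) / total
    success = sum(
        p for (x, y, z), p in kept.items()
        if x == ((1 - y) if y == z else (1 - x))
    ) / total
    return a_wins, success

# Illustrative habit: all three players act as independent fair coins.
joint = {t: 1 / 8 for t in product((0, 1), repeat=3)}
wins, bits = channel_a_success_rate(joint)
assert abs(wins - bits) < 1e-12      # the two events coincide
assert abs(wins - 1 / 3) < 1e-12    # each player is the odd one out equally often
```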
Construct U_B = (Z + 1) mod 2 on the rounds in which X = Z. Taking Y as the input and U_B as the output, we obtain the channel (Y; U_B), which is called "Channel B." Similarly, we have the following.

Theorem 8. Suppose that the channel capacity of "Channel B" composed of the random variables (Y; U_B) is C_B. Then, after removing the rounds in which the three participants choose the same action, one has the following: (1) if B wants to win k times, he has a certain skill (corresponding to Shannon coding) with which to achieve the goal, with probability arbitrarily close to 1, within about k/C_B rounds; conversely, (2) if B wins k times in n rounds, there must be k/n ≤ C_B.
The channel capacity C_B can be calculated in the same way as C_A. Here, the capacity of "Channel B" is a function of a and c, which is denoted as C_B(a, c).
Similarly, constructing U_C = (X + 1) mod 2 on the rounds in which X = Y and taking Z as the input and U_C as the output, we obtain the channel (Z; U_C), which is called "Channel C." So we have the following.

Theorem 9. Suppose that the channel capacity of "Channel C" composed of the random variables (Z; U_C) is C_C. Then, after removing the rounds in which the three participants choose the same action, one has the following: (1) if C wants to win k times, he has a certain skill (corresponding to Shannon coding) with which to achieve the goal, with probability arbitrarily close to 1, within about k/C_C rounds; conversely, (2) if C wins k times in n rounds, there must be k/n ≤ C_C.
The channel capacity C_C can be calculated in the same way as C_A. Now the capacity of "Channel C" is a function of a and b, which is denoted as C_C(a, b).
From Theorems 7, 8, and 9, we can qualitatively describe the winning and losing situations of A, B, and C in the "palm or back" game.
Theorem 10. If the channel capacities of "Channel A," "Channel B," and "Channel C" are C_A, C_B, and C_C, respectively, then the statistical results of winning and losing depend on the values of C_A, C_B, and C_C: the one who has the largest channel capacity gains the advantage. Note that the three channel capacities cannot be adjusted by any one participant individually. It is therefore difficult to change the final result by adjusting one's own habit alone (namely, by changing only one of a, b, and c), unless two of the participants cooperate secretly.

Models of "Finger Guessing" and "Draw Boxing"

4.1. Model of "Finger Guessing." "Finger guessing" is a game between the host and a guest at a banquet. The rules of the game are as follows. The host and the guest each choose one of the following four gestures at the same time in a round: bug, rooster, tiger, and stick. Then the winner is decided by the following rules: "bug" is inferior to "rooster"; "rooster" is inferior to "tiger"; "tiger" is inferior to "stick"; and "stick" is inferior to "bug." In all other cases, the round ends in a draw and nobody is punished.
The "host A" and the "guest B" play the "finger guessing" game again after the complete end of a round. The mathematical expression of the "finger guessing" game is as follows: suppose the actions of A and B are denoted by the random variables X and Y, respectively; each has 4 possible values. Specifically, X = 0 (or Y = 0) when A (or B) shows "bug"; X = 1 (or Y = 1) when A (or B) shows "rooster"; X = 2 (or Y = 2) when A (or B) shows "tiger"; X = 3 (or Y = 3) when A (or B) shows "stick."
If A shows i (namely, X = i, 0 ≤ i ≤ 3) and B shows j (namely, Y = j, 0 ≤ j ≤ 3) in a round, then the necessary and sufficient condition for A to win the round is (i − j) mod 4 = 1, and the necessary and sufficient condition for B to win the round is (j − i) mod 4 = 1. Otherwise, the round ends in a draw and the game proceeds to the next round.
Obviously, the "finger guessing" game is also a kind of "nonblind confrontation." Who is the winner, and how many times does the winner win? How can each side win more? We will use the "channel capacity method" of the General Theory of Security to answer these questions.
Based on the Law of Large Numbers in probability theory, frequency tends to probability. Thus, according to the habits of the "host (A)" and the "guest (B)," that is, the statistical regularities of their actions in the past (if they meet for the first time, we can ask them to play a "warm-up game" and record their habits), we can give the probability distributions of X and Y and the joint probability distribution of (X, Y), respectively. In order to analyze the winning situation of A, we construct a random variable Z = (Y + 1) mod 4. Then we use the random variables X and Z to form a channel (X; Z), which is called "channel A"; namely, the channel takes X as the input and Z as the output. Then we analyze some equations of events. If A shows i (namely, X = i, 0 ≤ i ≤ 3) and B shows j (namely, Y = j, 0 ≤ j ≤ 3) in a round, one has the following.
If A wins the round, then (i − j) mod 4 = 1; that is, j = (i − 1) mod 4, so we have Z = (j + 1) mod 4 = [(i − 1) + 1] mod 4 = i mod 4 = i. In other words, the output Z of "channel A" is equal to the input X at this time. That is, a "bit" is successfully transmitted from the input X to the output Z.
In contrast, if a "bit" is successfully transmitted from the input X to the output Z in "channel A," then "the output Z is always equal to the input X; namely, Z = X" holds. Then (X − Y) mod 4 = (Z − Y) mod 4 = [(Y + 1) − Y] mod 4 = 1 mod 4 = 1. Hence, we can judge that "A wins" according to the rules of the game.
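Both directions of this argument can be verified by enumerating all sixteen gesture pairs. A short Python sketch, using the encoding 0 = bug, 1 = rooster, 2 = tiger, 3 = stick from the text:

```python
# A shows i, B shows j; by the rules above, A wins iff (i - j) % 4 == 1.
for i in range(4):
    for j in range(4):
        z = (j + 1) % 4                  # constructed output of "channel A"
        a_wins = (i - j) % 4 == 1
        # A bit crosses channel A (Z equals the input X) exactly when A wins.
        assert (z == i) == a_wins
```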
Combining the situations above, one has the following.

Lemma 11. In a "finger guessing" game, "A wins one time" is equivalent to one bit being successfully sent from the input of "channel A" to its output.
Then, to analyze the winning situation of B, we use the random variables Y and W = (X + 1) mod 4 to form a channel (Y; W), which is called "channel B" and takes Y as the input and W as the output.
We now turn to the "draw boxing" game, in which each of the two players simultaneously shows a number with his fingers and shouts a guess of the two numbers' sum; a player wins a round when his own guess equals the actual sum while the opponent's does not. A channel "channel B" for B can be constructed just as above. In contrast, if one bit is successfully sent from the input of "channel B" to its output, the output is always equal to the input; that is, B's guess equals the actual sum shown while A's does not. Thus, B wins the round according to the evaluation rules.
Combining the cases above, we have the following.

Lemma 13. In a "draw boxing" game, "B wins one time" is equivalent to one bit being successfully sent from the input of "channel B" to its output.
Combining Lemma 13 with the channel coding theorem of Shannon's Information Theory: if the capacity of "channel B" is C, then for any transmission rate k/n ≤ C, we can receive k bits successfully by sending n bits with an arbitrarily small probability of decoding error. Conversely, if "channel B" can transmit k bits to the receiver by sending n bits without error, there must be k/n ≤ C. In a word, we have the following theorem.
Theorem 14. Suppose that the channel capacity of "channel B" is C_B. Then, after removing the rounds that end in a "draw," one has the following: (1) if B wants to win k times, he has a certain skill (corresponding to Shannon coding) with which to achieve the goal, with probability arbitrarily close to 1, within about k/C_B rounds; conversely, (2) if B wins k times in n rounds, there must be k/n ≤ C_B.
Similarly, we can analyze the situation of "A wins." We can see that the numbers of wins depend on the habits of both A and B. If both A and B stick to their habits, the winning and losing are determined; if either A or B adjusts his habit, he can win statistically once his channel capacity is larger; if both A and B adjust their habits, the situation will eventually reach a dynamic balance.
Remark 15. In the following, we will pass freely between "the actions" and "the numbers that encode them" as needed. This transformation makes the problem clear in interpretation and simple in form.

Figure 1: Block diagram of the channel model.
Theorem 3 ("rock-paper-scissors" theorem). If one does not consider the case that both A and B show the same gesture, then, statistically, the winning and losing of A and B are determined by the channel capacities of "Channel A" and "Channel B," respectively.