Opinion Dynamics with Bayesian Learning

Bayesian learning is a rational and effective strategy in the opinion dynamics process. In this paper, we theoretically prove that individual Bayesian learning can realize asymptotic learning, and we test this result by simulations on the Zachary network. Then, we propose a Bayesian social learning model with a signal update strategy and apply the model to the Zachary network to observe opinion dynamics. Finally, we contrast the two learning strategies and find that Bayesian social learning leads to asymptotic learning faster than individual Bayesian learning.


Introduction
We all have our own opinions on various social issues. These opinions are formed by an evolutionary process in a social context [1][2][3]. Opinion dynamics is the study of the opinion fusion process through interactions among a group of agents [4]. Some interesting models have been proposed, such as the DeGroot model [5], the voter model [6,7], the bounded confidence model [8][9][10], and many others [11][12][13][14]. Among these models, a key element is the opinion update strategy in the dynamic process. All opinion update strategies can be classified into two categories according to whether or not they use the Bayesian update rule: Bayesian update strategies and non-Bayesian strategies. Non-Bayesian learning refers to individuals updating their opinions by non-Bayesian rules, such as linear combinations of the opinions of their neighbors [5,15,16] and various game-theoretic mechanisms in social networks [17,18]. Most of these models try to explore how society can achieve group consensus. But the consensus opinion is not necessarily the truth; in other words, the true state might not be realized. Bayesian learning, in contrast, assumes that individuals update their opinions according to Bayes' rule, so individuals can learn the truth in the long run from the prior information [19][20][21]. Therefore, Bayesian learning can integrate prior information and lead to the truth more rationally and effectively than non-Bayesian learning.
Though Bayesian learning can achieve the truth, there is no systematic proof of this in the existing literature [22,23]. A proof for individual Bayesian learning lays a theoretical foundation for the learning process, so in this paper, we deduce the truthfulness of individual Bayesian learning by theoretical derivation. Moreover, considering people's crowd psychology, we put forward a signal update strategy under which people adjust their observations to agree with the majority. Combined with this signal update strategy, we propose a Bayesian social learning model to study opinion dynamics in a social environment. Furthermore, we conduct simulations on the Zachary network to observe the learning results. The rest of the paper is organized as follows. In Section 2, we give a theoretical proof of individual Bayesian learning. In Section 3, we propose a Bayesian social learning model with a signal update strategy and test it by simulations on the Zachary network. In Section 4, we draw conclusions.

Individual Bayesian Learning
Model. Let the state space be Θ = {θ_1, θ_2, ..., θ_m} and the underlying true state be θ* ∈ Θ, where m is a finite integer. Individual i's opinion on state θ_k at time t is denoted by the probability distribution μ_{i,t}(θ_k), k = 1, 2, ..., m. At each time period, the signal profile S_t = (S_t^1, S_t^2, ..., S_t^n) ∈ S^1 × ⋯ × S^n is generated by the likelihood function P(S_t | θ_k) conditional on state θ_k, where S_t^i ∈ S^i denotes the signal privately observed by individual i at time t and S^i denotes individual i's signal space. The ith marginal of P(· | θ_k) is denoted by P^i(· | θ_k) and is known as individual i's signal structure conditional on state θ_k. We assume that each individual's private signal structure is commonly known. At each time, individual i receives his private signal S_t^i and updates his prior opinion to a posterior opinion by Bayes' law. Bayesian statistics, by incorporating prior information, can make the inference results more accurate and effective [24]. As time goes by, the individual's opinion exhibits interesting dynamics during the evolution process. Next, we give the definition of asymptotic learning, an important concept in opinion dynamics.

Definition 1 (asymptotic learning). Individual i, receiving the signals generated by the true state θ* ∈ Θ, achieves asymptotic learning along a path {S_t^i}_{t=1}^∞ if, along that path, μ_{i,t}(θ*) → 1 with probability one as t → ∞.
At each time step t, individual i holds his prior opinion μ_{i,t}(θ) on state θ; after receiving the signal S_{t+1}^i generated by the true state θ* ∈ Θ, his opinion at the next time step t + 1 is updated to the posterior probability by Bayes' law:

μ_{i,t+1}(θ) = P^i(S_{t+1}^i | θ) μ_{i,t}(θ) / Σ_{k=1}^m P^i(S_{t+1}^i | θ_k) μ_{i,t}(θ_k).    (1)

Then μ_{i,t}(θ*) → 1 holds with probability one as t → ∞, and we say that individual i realizes asymptotic learning.
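As a minimal sketch of the update (1) for a single individual (the two-state prior and signal structure below are illustrative placeholders, not values from the paper):

```python
import numpy as np

def bayes_update(prior, likelihoods, signal):
    """One Bayesian opinion update, as in equation (1).

    prior       : shape (m,), the opinion mu_{i,t} over the m states
    likelihoods : shape (m, n_signals), row k is P^i(. | theta_k)
    signal      : index of the privately observed signal S^i_{t+1}
    """
    posterior = prior * likelihoods[:, signal]   # P(S | theta) * mu_t(theta)
    return posterior / posterior.sum()           # normalize over the states

# Two states; under theta_1 the first signal is the more likely one.
prior = np.array([0.5, 0.5])
lik = np.array([[0.7, 0.3],    # P(s | theta_1)
                [0.3, 0.7]])   # P(s | theta_2)
post = bayes_update(prior, lik, signal=0)
print(post)   # -> [0.7 0.3]: mass shifts toward theta_1
```

Repeating this update on a stream of private signals is exactly the individual learning process analyzed below.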
Proof. Let the state space be Θ = {θ_1, θ_2, ..., θ_m} and the true state be θ* = θ_1. Suppose that the signals S_t^i (t = 1, 2, ...) are independent and identically distributed given θ_k. At time t + 1, individual i updates his opinion and obtains his posterior opinion by Bayes' rule (1). Next, we compare the posterior probabilities of two states of the partition, say θ_1 and θ_2, in the light of S_{t+1}^i for individual i at time t + 1. Let δ(S_r^i) = P^i(S_r^i | θ_1)/P^i(S_r^i | θ_2) denote the likelihood ratio of θ_1 to θ_2 given S_r^i, let R(S_{t+1}^i) = ∏_{r=1}^{t+1} δ(S_r^i) denote the likelihood ratio of θ_1 to θ_2 given the whole signal sequence, and set φ = P(δ(S_r^i) < ∞ | θ_1). In the first case, suppose φ < 1; then P(R(S_t^i) = ∞ | θ_1) = 1 − φ^t, which obviously approaches 1 with increasing t. Stated differently, the probability that R(S_t^i) given θ_1 exceeds any preassigned number tends to 1; this is statement (5). The second case is φ = 1. Since much is known about sums of independent identically distributed random variables, it is natural to investigate log R(S_t^i) = Σ_{r=1}^t log δ(S_r^i), thereby replacing a product by a sum. It is easily seen from the definition of δ(S_r^i) that P(δ(S_r^i) > 0 | θ_1) = 1, so in the case at hand each log δ(S_r^i) is an independent, bounded real random variable.
Letting I = E(log δ(S_r^i) | θ_1), the weak law of large numbers implies that, for any ε > 0, P(|(1/t) Σ_{r=1}^t log δ(S_r^i) − I| > ε | θ_1) → 0 as t → ∞; equivalently, with probability approaching one, log R(S_t^i) lies between t(I − ε) and t(I + ε). By the expectation (Jensen) inequality [25], I = E(log δ(S_r^i) | θ_1) ≥ −log E(δ^{−1}(S_r^i) | θ_1), and equality can hold if and only if δ^{−1}(S_r^i) is constant with probability 1, given θ_1. Since the expected value of δ^{−1}(S_r^i) equals 1, equality would require δ(S_r^i) = 1 with probability one. This would mean that state θ_2 is observationally equivalent to θ_1, which contradicts the conditions of the theorem. So I > 0, and log R(S_t^i) → ∞ with probability one; hence (5) holds in this case as well.
Therefore, under the assumptions of the theorem, R(S_t^i) → ∞ holds with probability one; consequently, the ratio of individual i's posterior probability of the real state θ* = θ_1 to that of the other state θ_2 tends to infinity. By the same method, it can be proved that the likelihood ratios of the real state θ* to every other state θ_k, k = 3, 4, ..., m, also tend to infinity. Since μ_{i,t+1}(θ*) + Σ_{k=2}^m μ_{i,t+1}(θ_k) = 1 and 0 ≤ μ_{i,t+1}(θ) ≤ 1 for all θ ∈ Θ, we have μ_{i,t+1}(θ*) → 1 and μ_{i,t+1}(θ_k) → 0 for k = 2, ..., m. In summary, the individual becomes highly convinced of the truth by Bayes' law and achieves asymptotic learning after abundant observations. □

2.3. Simulation. We have theoretically proved that individuals can use Bayes' law to update their opinions and achieve asymptotic learning. Next, we verify this individual Bayesian learning model by simulations on the individuals of the Zachary network.
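Before the network simulation, the mechanism of the proof can be checked numerically: under i.i.d. signals drawn from the true state, log R(S_t^i) grows roughly linearly at rate I > 0, so the posterior on θ_1 tends to 1. A sketch with a hypothetical two-signal structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state signal structure.
p1 = np.array([0.7, 0.3])     # P(s | theta_1), the true state
p2 = np.array([0.3, 0.7])     # P(s | theta_2)

T = 2000
signals = rng.choice(2, size=T, p=p1)                # i.i.d. draws under theta_1

# log R(S_t) = sum_r log delta(S_r), with delta(s) = P(s|theta_1)/P(s|theta_2)
log_R = np.cumsum(np.log(p1[signals] / p2[signals]))

# With a uniform prior, mu_t(theta_1) = R_t / (R_t + 1) = sigmoid(log R_t).
mu = 1.0 / (1.0 + np.exp(-log_R))
print(log_R[-1], mu[-1])   # the log-ratio has diverged; the posterior is near 1
```

Here I = 0.4 · log(7/3) ≈ 0.34 > 0, so after 2000 draws the cumulative log-ratio is far above zero and the posterior is numerically indistinguishable from 1.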

Initial Values Setting.
Before conducting the numerical simulations, some assumptions are given as follows:
(1) Different signals are independent of each other.
(2) The connections between individuals are undirected.
(3) In the initial state, all individuals hold the same opinions on the different states.
(4) Each individual observes only one signal at a time.
(5) Different individuals have the same signal structure.
This simulation experiment mainly concerns social learning over two states. We suppose that Θ = {θ_1, θ_2}, the true state is θ* = θ_1, and the signal space is S = {S_1, S_2, ..., S_5}. The individuals and their relationships are shown in Figure 1; this is the well-known Zachary network [26].
At time t = 0, individual i's opinions on θ_1 and θ_2 are μ_{i,0}(θ_1) and μ_{i,0}(θ_2), respectively. Under assumption (3), the initial opinions are set to be identical across individuals; under assumption (5), the same signal structure is given to every individual i, and it remains unchanged during the learning process. In the individual Bayesian learning model, it is assumed that the evolution of an individual's opinion is driven by his prior knowledge and the signal characteristics. When individual i's opinion on the underlying true state, μ_{i,t}(θ*), exceeds 0.9999, we consider that he has reached asymptotic learning. If every individual realizes asymptotic learning, the whole society forms a social consensus and finds the truth.
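As a sketch of this setup (the karate-club graph ships with networkx; the uniform priors and the 5-signal structure below are hypothetical placeholders, since the paper's exact values are not reproduced here):

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()             # the Zachary network of Figure 1 [26]
n = G.number_of_nodes()                # 34 individuals

m = 2                                  # states theta_1 (true) and theta_2
mu = np.full((n, m), 1.0 / m)          # assumption (3): identical (here uniform) priors

# Assumption (5): one shared signal structure over the 5 signals S_1..S_5.
# These probabilities are illustrative, not the paper's actual matrix.
P = np.array([[0.30, 0.25, 0.20, 0.15, 0.10],   # P(s | theta_1)
              [0.10, 0.15, 0.20, 0.25, 0.30]])  # P(s | theta_2)

print(n, mu.shape, P.sum(axis=1))      # each row of P is a probability distribution
```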

Simulation
Results. Under the above initial conditions, we conduct simulations on the Zachary network; the results in Figure 2 show that μ_{i,t}(θ_1) → 1 and μ_{i,t}(θ_2) → 0 as t → ∞ for all i, which means that all individuals achieve asymptotic learning and realize the true state.
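A compact simulation of the individual learning dynamic (illustrative two-signal structure; 0.9999 is the stopping threshold defined above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 34, 2                        # 34 individuals, 2 states; theta_1 is true
P = np.array([[0.7, 0.3],           # illustrative signal structure P(s | theta_k)
              [0.3, 0.7]])
mu = np.full((n, m), 0.5)           # identical initial opinions (assumption (3))

t = 0
while mu[:, 0].min() < 0.9999 and t < 10_000:
    s = rng.choice(m, size=n, p=P[0])    # each i draws a private signal under theta_1
    mu *= P[:, s].T                      # Bayes numerator: prior * likelihood
    mu /= mu.sum(axis=1, keepdims=True)  # normalize over the states
    t += 1

print(t, mu[:, 0].min())   # step at which the last individual crossed the threshold
```

Each individual's trajectory fluctuates with his private signal draws, but all opinions on θ_1 eventually exceed the threshold, matching Figure 2 qualitatively.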

Bayesian Social Learning.
In the individual Bayesian learning process, individuals update their opinions by Bayes' law and achieve asymptotic learning. But society as a whole is a complex social network in which individuals are connected and influence one another. By communicating with his neighbors, an individual learns his neighbors' received signals and opinions. So the individual continuously adjusts his opinion according to the signals received not only by himself but also by his neighbors. Therefore, we consider opinion dynamics with Bayes' law in a social network setting, which we call Bayesian social learning.
Here, the social network is abstracted as a graph G = (V, E) composed of individuals and their interactions, where V represents the set of all individuals and E represents the set of relationships between pairs of individuals. At time t + 1, each individual i receives a signal S_{t+1}^i whose distribution follows his signal structure P^i(· | θ*). Because individuals influence one another and people have a herd mentality, individual i then adjusts his signal to the signal S_{t+1} received by the most people.
Therefore, individuals update their signals by the majority rule (15). After updating signals, individual i updates his own opinion about state θ by Bayes' law, rule (16). Next, we explore the social learning results of the opinion dynamics process under the signal update rule (15) and the opinion update rule (16) by simulations.
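The two rules can be sketched as follows (a minimal Python sketch with a hypothetical two-signal structure; rule (15) is rendered here as adopting the single most commonly received signal, and rule (16) as the standard Bayes update on that shared signal):

```python
import numpy as np

def social_learning_step(mu, P, signals):
    """One round of Bayesian social learning:
    rule (15): everyone adopts the most commonly received signal;
    rule (16): everyone Bayes-updates on that shared signal."""
    s_major = np.bincount(signals).argmax()      # majority signal this round
    post = mu * P[:, s_major]                    # prior * likelihood, same for all
    return post / post.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
n = 34                                           # size of the Zachary network
P = np.array([[0.7, 0.3],                        # hypothetical P(s | theta_1)
              [0.3, 0.7]])                       # hypothetical P(s | theta_2)
mu = np.full((n, 2), 0.5)                        # identical initial opinions

t = 0
while mu[:, 0].min() < 0.9999 and t < 1000:
    signals = rng.choice(2, size=n, p=P[0])      # private signals under theta_1
    mu = social_learning_step(mu, P, signals)
    t += 1
print(t, mu[:, 0].min())   # all opinions on theta_1 cross the threshold
```

Since every individual updates on the same majority signal, all opinions move in lockstep; the majority vote also denoises the signal stream, which is why convergence is smoother than in the individual model.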

Simulation Results.
We also take the Zachary network as the example society. The state space is Θ = {θ_1, θ_2}, and the signal structure and initial opinions μ_{i,0}(θ) are set to be the same as in the previous simulations. The Bayesian social learning results are shown in Figure 3.
It can be clearly seen that individuals realize asymptotic learning under Bayesian social learning with the signal update strategy. The strong connectivity of the network helps the group reach social consensus quickly and with little fluctuation. Furthermore, we compare the average consensus times of Bayesian social learning and individual Bayesian learning, as shown in Table 1.
We can see that Bayesian social learning with the signal update strategy has a significantly faster learning speed than the individual Bayesian learning model, and it shows less fluctuation. So, we can speculate that the interactions among individuals accelerate the learning process.
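The speed comparison in Table 1 can be reproduced in spirit with a small experiment (hypothetical two-signal structure; consensus times averaged over repeated runs of both update schemes):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 34                                   # size of the Zachary network
P = np.array([[0.7, 0.3],                # hypothetical signal structure
              [0.3, 0.7]])

def consensus_time(social, runs=50, tol=0.9999, cap=5000):
    """Average number of steps until every opinion on theta_1 exceeds tol."""
    times = []
    for _ in range(runs):
        mu = np.full((n, 2), 0.5)
        t = 0
        while mu[:, 0].min() < tol and t < cap:
            s = rng.choice(2, size=n, p=P[0])           # signals under theta_1
            if social:                                   # rule (15): majority signal
                s = np.full(n, np.bincount(s).argmax())
            mu *= P[:, s].T                              # rule (16): Bayes update
            mu /= mu.sum(axis=1, keepdims=True)
            t += 1
        times.append(t)
    return float(np.mean(times))

t_ind = consensus_time(social=False)
t_soc = consensus_time(social=True)
print(t_ind, t_soc)   # social learning reaches consensus in fewer steps on average
```

In the individual scheme, consensus waits for the slowest of the 34 private random walks; in the social scheme, the shared majority signal removes that worst-case straggler, which is consistent with the faster average time reported in Table 1.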

Conclusion
In this paper, we study two Bayesian learning models. The first is the individual Bayesian learning model, for which we deduce the truthfulness of individual Bayesian learning by theoretical derivation. Moreover, the numerical simulations also show that individuals who update their opinions by Bayes' law can realize asymptotic learning.
Furthermore, we propose the Bayesian social learning model with a signal update strategy and test it by simulations on the Zachary network. The results show that individuals adopting the signal update strategy proposed in this paper can likewise realize asymptotic learning.
We compare the results of the two models and find that the Bayesian social learning model achieves asymptotic learning faster under the same conditions. In future work, we will explore theoretical support for the Bayesian social learning model.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.