
We consider games of strategic substitutes and complements on networks and introduce two evolutionary dynamics in order to refine their multiplicity of equilibria. Within mean field, we find that for the best-shot game, taken as a representative example of strategic substitutes, replicator-like dynamics does not lead to Nash equilibria, whereas it leads to a unique equilibrium for complements, represented by a coordination game. On the other hand, when the dynamics becomes more cognitively demanding, predictions are always Nash equilibria: for the best-shot game we find a reduced set of equilibria with a definite value of the fraction of contributors, whereas, for the coordination game, symmetric equilibria arise only for low or high initial fractions of cooperators. We further extend our study by considering complex topologies through heterogeneous mean field and show that the nature of the selected equilibria does not change for the best-shot game. However, for coordination games, we reveal an important difference: on infinitely large scale-free networks, cooperative equilibria arise for any value of the incentive to cooperate. Our analytical results are confirmed by numerical simulations and open the question of whether there can be dynamics that consistently leads to stringent equilibria refinements for both classes of games.

Strategic interactions among individuals located on a network, be it geographical, social, or of any other nature, are becoming increasingly relevant in many economic contexts. Decisions made by our neighbors on the network influence ours and are in turn influenced by their other neighbors to whom we may or may not be connected. Such a framework makes finding the best strategy a very complex problem, almost always plagued by a very large multiplicity of equilibria. Researchers are devoting much effort to this problem, and an increasing body of knowledge is being consolidated [

Our work belongs to the literature on strategic interactions in networks and its applications to economics [

In the remainder of this introduction we present the games we study and the dynamics we apply for equilibrium refinement in detail, discuss the implications of such a framework on the informational settings we are considering, and summarize our main contributions.

We consider a finite set of agents

Each player can take one of two actions

In what follows we concentrate on two games, the best-shot game and a coordination game, as representative instances of strategic substitutes and strategic complements, respectively. We choose specific examples for the sake of being able to study analytically their dynamics. To define the payoffs we introduce the following notation:
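As a concrete illustration of the two payoff schemes, the following Python sketch encodes a standard formulation of each game. The specific parametrization is an assumption of this sketch (benefit normalized to 1 and a cost c in (0,1) for the best-shot game; one unit per cooperating neighbor minus c for the coordination game) and may differ in detail from the notation used in the text.

```python
def best_shot_payoff(action, neighbor_actions, c):
    """Best-shot game (strategic substitutes), illustrative form: a player
    receives benefit 1 if she or at least one neighbor contributes, and
    pays cost c if she herself contributes."""
    provided = action == 1 or any(a == 1 for a in neighbor_actions)
    return (1.0 if provided else 0.0) - c * action

def coordination_payoff(action, neighbor_actions, c):
    """Coordination game (strategic complements), illustrative form: a
    cooperator earns one unit per cooperating neighbor minus cost c;
    a defector earns nothing."""
    if action == 1:
        return sum(neighbor_actions) - c
    return 0.0
```

Note how the strategic structure appears directly: in the best-shot game a player's incentive to contribute vanishes as soon as a neighbor contributes, while in the coordination game it grows with the number of cooperating neighbors.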

Within the two games we have presented above, we now consider evolutionary dynamics for players’ strategies. Starting at

The reason to study these two dynamics is that they may lead to different results, as they represent very different evolutions of the players’ strategies. In this respect, it is important to mention that, in the case
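To make the difference between the two revision rules concrete, here is a minimal sketch of a single update step for each dynamics. The normalization constant phi (keeping the imitation probability in [0, 1]) and the mistake probability eps are modeling choices of this illustration, not necessarily the paper's exact parametrization.

```python
import random

def pi_update(payoff_i, payoff_j, action_i, action_j, phi):
    """Proportional imitation (PI): player i compares herself with a
    randomly chosen neighbor j and copies j's action with probability
    proportional to the payoff difference, when that difference is
    positive; phi normalizes the probability into [0, 1]."""
    if payoff_j > payoff_i and random.random() < (payoff_j - payoff_i) / phi:
        return action_j
    return action_i

def br_update(neighbor_actions, payoff_fn, c, eps=0.0):
    """Myopic best response (BR): choose the action maximizing the current
    payoff given the neighbors' actions; with probability eps the player
    makes a mistake and plays the other action instead."""
    best = max((0, 1), key=lambda a: payoff_fn(a, neighbor_actions, c))
    if random.random() < eps:
        return 1 - best
    return best
```

PI only requires observing one neighbor's action and payoff, whereas BR requires knowing the payoff structure and all neighbors' actions, which is why BR is the more cognitively demanding rule.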

We study how the system evolves by either of these two dynamics, starting from an initial random distribution of strategies. In particular, we are interested in the global fraction of cooperators

As an extension of the results obtained in the above context, we also study the case of highly heterogeneous networks, that is, networks with broad degree distribution

Within this framework, our main contribution can be summarized as follows. In our basic setup of homogeneous networks (described by the mean field approximation): for the best-shot game, PI leads to a stationary state in which all players play

The paper is organized into seven sections including this introduction. Section

We begin by considering the case of strategic substitutes when imitation of a neighbor is only possible if he/she has obtained a better payoff than the focal player; that is,

Within the mean field formalism, under PI dynamics, when

Working in a mean field context means that individuals are well mixed; that is, every player effectively interacts with the average player. In this case the differential equation for the density of cooperators
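As an illustration of how such a mean field equation can be handled numerically, the sketch below performs a forward Euler integration of a generic replicator-type drift, dx/dt = x(1-x) gain(x). The constant negative gain used here is only a placeholder, standing in for the cooperators' payoff disadvantage in the best-shot game; it is consistent with the evolution toward full defection described in the text, but it is not the exact expression derived there.

```python
def integrate_mf(x0, gain, dt=0.01, steps=20000):
    """Forward Euler integration of a replicator-type mean field equation
    dx/dt = x * (1 - x) * gain(x), where gain(x) stands for the (suitably
    normalized) expected payoff difference between cooperating and
    defecting at cooperator density x."""
    x = x0
    for _ in range(steps):
        x += dt * x * (1.0 - x) * gain(x)
        x = min(max(x, 0.0), 1.0)  # keep the density in [0, 1]
    return x

# Placeholder gain: cooperating always costs c relative to defecting,
# as happens in the best-shot game when a neighbor already contributes.
c = 0.4
x_final = integrate_mf(x0=0.5, gain=lambda x: -c)
```

With any strictly negative gain the density decays to the full-defection state x = 0, matching the qualitative outcome of PI for the best-shot game.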

As discussed above, PI does not necessarily lead to Nash equilibria as asymptotic stationary states, and this case makes that point clear. For any

Within the mean field formalism, under PI dynamics, when

Equation (

As before, PI does not drive the population to a Nash equilibrium, independently of the probability of making a mistake. However, mistakes do introduce a bias towards cooperation and thus a new scenario: when their probability exceeds the cost of cooperating, the whole population ends up cooperating.

We now turn to the case of the best response dynamics, which (at least for

Indeed, for BR dynamics without mistakes, the homogeneous mean field equation for

To evaluate the two probabilities, we can recall that

The precise asymptotic value for the density of cooperators,
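For illustration, a natural self-consistency condition for the stationary cooperator density under BR in a homogeneous population is x = (1 - x)^kbar: a player ends up cooperating exactly when none of her roughly kbar neighbors does. We stress that this closed form is an assumption of the sketch below, not necessarily the exact expression derived in the text; it can be solved by simple bisection.

```python
def br_fixed_point(kbar, tol=1e-12):
    """Bisection solve of the illustrative self-consistency condition
    x = (1 - x)**kbar for the stationary cooperator density under BR.
    f(x) = (1 - x)**kbar - x is decreasing and changes sign on [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (1.0 - mid) ** kbar - mid > 0.0:
            lo = mid  # root lies above mid
        else:
            hi = mid  # root lies below mid
    return 0.5 * (lo + hi)

# The density decreases with the mean degree: with more neighbors it is
# more likely that someone else already provides the good.
x4 = br_fixed_point(4.0)
x16 = br_fixed_point(16.0)
```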

How is the above result modified by mistakes? When

To gain some insight on the cooperation levels arising from BR dynamics in the Nash equilibria, we have numerically solved (

Best-shot game under BR dynamics in the mean field framework. Shown are the asymptotic cooperation values

From Figure

We now turn to the case of strategic complements, exemplified by our coordination game. As above, we start from the case without mistakes, and we subsequently see how they affect the results.

Within the mean field formalism, under PI dynamics, when

Still within our homogeneous mean field context, the differential equation for the density of cooperators
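As a purely illustrative numerical counterpart, assume a replicator-type drift in which the payoff advantage of cooperation grows linearly with the cooperator density, x(1-x)(kbar*x - c); this linear form is an assumption of the sketch, chosen only to reproduce the bistable behavior discussed in the text, with an unstable threshold at c/kbar separating the two basins of attraction.

```python
def euler(x0, drift, dt=0.001, steps=50000):
    """Forward Euler for dx/dt = drift(x), clipped to the unit interval."""
    x = x0
    for _ in range(steps):
        x = min(max(x + dt * drift(x), 0.0), 1.0)
    return x

# Assumed drift for the coordination game: cooperation pays off only when
# enough others cooperate, so trajectories starting below the threshold
# c/kbar fall to full defection and those above rise to full cooperation.
kbar, c = 4.0, 1.0  # threshold at c/kbar = 0.25
low = euler(0.1, lambda x: x * (1 - x) * (kbar * x - c))
high = euler(0.4, lambda x: x * (1 - x) * (kbar * x - c))
```

Starting below the threshold the population is absorbed into full defection, and starting above it into full cooperation, mirroring the dependence on initial conditions found for PI in the coordination game.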

It is easy to see that

The same (but opposite) intuition we discussed in Remark

In a system where players may have different degrees, while full defection is always a Nash equilibrium for the coordination game, full cooperation becomes a Nash equilibrium only when

When

When

When

Finally when

In summary, the region

The intuition behind the result above could be that mistakes can take a number of players away from the equilibrium, be it full defection or full cooperation, and that this happens in a range of initial conditions that grows with the likelihood of mistakes.

Considering now BR dynamics, the coordination game is no different from the best-shot game: we cannot find rigorous proofs for our results, although we believe we can substantiate them on firm grounds. To proceed, for this case, (

If

If

Finally if

As one can see, even without mistakes, BR equilibria with intermediate values of the density of cooperators can be obtained in a range of initial densities. Compared to the situation with PI, in which we only found the absorbing states as equilibria, this points to the fact that more rational players would eventually converge to equilibria with higher payoffs. It is interesting to note that such equilibria could be related to those found by Galeotti et al. [

A similar approach allows some insight on the situation

If

If

If

Adding mistakes to BR does not change the results as dramatically as it did with PI. The only relevant change is that equilibria for low or high densities of cooperators are never homogeneous, since a fraction of the population chooses the wrong action. Other than that, the situation is basically the same, with a range of densities converging to an intermediate number of cooperators.

Having found the equilibria selected by different evolutionary dynamics, it is interesting to inspect their corresponding welfare (measured in terms of average payoffs). We can again resort to the mean field approximation to approach this problem.
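As a sketch of such a welfare computation, assuming standard payoff conventions (benefit 1 and cost c for the best-shot game; one unit per cooperating neighbor minus c for cooperators in the coordination game, defectors earning zero), the mean field average payoffs at cooperator density x and mean degree kbar can be written as follows. These closed forms are assumptions of the illustration, not necessarily the exact expressions used in the text.

```python
def welfare_best_shot(x, kbar, c):
    """Mean field average payoff for the best-shot game (illustrative):
    cooperators earn 1 - c; defectors earn the benefit 1 only when at
    least one of their ~kbar neighbors cooperates, i.e. 1 - (1-x)**kbar."""
    return x * (1.0 - c) + (1.0 - x) * (1.0 - (1.0 - x) ** kbar)

def welfare_coordination(x, kbar, c):
    """Mean field average payoff for the coordination game (illustrative):
    only cooperators earn, one unit per cooperating neighbor minus c."""
    return x * (kbar * x - c)
```

Evaluating these functions at the equilibria selected by each dynamics gives a direct welfare comparison between the refined equilibrium sets.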

In the two previous sections we have confined ourselves to the case in which the only information about the network we use is the mean degree, that is, how many neighbors players interact with on average. However, in many cases we may consider information on details of the network, such as the degree distribution, and this is relevant because most networks of a given nature (e.g., social) are usually more complex and heterogeneous than Erdös-Rényi random graphs. The heterogeneous mean field (HMF) [

Note that in the following HMF calculations we always assume that our network is uncorrelated; that is,
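Concretely, the uncorrelated-network assumption means that the degree of a randomly reached neighbor follows the excess distribution q_k = k P(k) / <k>, independently of the degree of the focal player. A short sketch:

```python
def neighbor_degree_dist(pk):
    """Given a degree distribution {k: P(k)}, return the distribution of
    the degree found at the end of a randomly chosen edge,
    q_k = k * P(k) / <k>, which is what the uncorrelated (HMF)
    assumption relies on."""
    kmean = sum(k * p for k, p in pk.items())
    return {k: k * p / kmean for k, p in pk.items()}

pk = {1: 0.5, 4: 0.5}         # half the nodes have degree 1, half degree 4
qk = neighbor_degree_dist(pk)  # neighbors are biased toward high degree
```

The high-degree bias of q_k is precisely why heterogeneous (e.g., scale-free) networks can behave so differently from homogeneous ones in the HMF calculations that follow.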

In this framework, considering more complex network topologies does not change the results we found before, and we again find a final state that is not a Nash equilibrium, namely, full defection.

In the HMF setting, under PI dynamics, when

The HMF technique proceeds by building the
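Two bookkeeping quantities recur in this construction: the global cooperator density, obtained by averaging the degree-class densities rho_k over P(k), and the probability theta that a randomly chosen neighbor cooperates, which under the uncorrelated assumption weights rho_k by k P(k) / <k>. A minimal sketch:

```python
def global_density(pk, rho_k):
    """HMF bookkeeping: the global cooperator density is the average of
    the degree-class densities rho_k over the degree distribution P(k)."""
    return sum(pk[k] * rho_k[k] for k in pk)

def hmf_theta(pk, rho_k):
    """Probability that a randomly chosen neighbor cooperates under the
    uncorrelated assumption: theta = sum_k [k * P(k) / <k>] * rho_k."""
    kmean = sum(k * p for k, p in pk.items())
    return sum(k * pk[k] * rho_k[k] for k in pk) / kmean
```

When cooperation concentrates on high-degree classes, theta can exceed the global density, which is the mechanism behind the qualitative differences found on scale-free networks.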

For the best-shot game with PI, the particular form of the degree distribution does not change anything. The outcome of evolution is still full defection, thus indicating that the failure to reach a Nash equilibrium arises from the (boundedly rational) dynamics and not from the underlying population structure. Again, this suggests that imitation is not a good procedure for players to decide in this kind of game.

In the HMF setting, under PI dynamics, when

Equation (

Still within the deterministic scenario with

In order to assess the effect of degree heterogeneity, we have plotted in Figure

Asymptotic value of the cooperator density for the best-shot game with BR dynamics for Erdös-Rényi and scale-free random graphs (with

If we allow for the possibility of mistakes, the starting point of the analysis is—for each of the

Unfortunately, for the coordination game, working in the HMF framework is much more complicated, and we have been able to gain only qualitative but important insights on the system’s features. For the sake of clarity, we illustrate only the deterministic case in which no mistakes are made (

The average payoffs of cooperating and defecting for players with degree

We then use our starting point for the HMF formalism, (

While it is difficult to solve (

Coordination game with PI dynamics for scale-free networks (with

For BR dynamics, we would have to begin again from the fact that the differential equation for each of the

Before discussing and summarizing our results, one question that arises naturally is whether, given that mean field approaches are approximations insofar as they assume interactions with a typical individual (or classes of typical individuals), our results are accurate descriptions of the real dynamics of the system. Therefore, in this section we present a brief comparison of the analytical results obtained above with those arising from an extensive program of numerical simulations of the system that we recently carried out, whose details can be found in [
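As a much smaller stand-in for such a simulation program, the following Monte Carlo sketch runs asynchronous best response for the best-shot game on an Erdös-Rényi graph, assuming the standard best-shot best response (for 0 < c < 1, cooperate exactly when no neighbor does, so the cost drops out of the update rule). The resulting stationary density can then be compared with mean field predictions.

```python
import random

def er_graph(n, kbar, rng):
    """Erdös-Rényi random graph as adjacency lists, edge prob kbar/(n-1)."""
    p = kbar / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def simulate_best_shot_br(n=1000, kbar=4.0, sweeps=100, seed=0):
    """Random sequential (asynchronous) best response for the best-shot
    game: a player cooperates iff no neighbor currently cooperates.
    Returns the final cooperator density."""
    rng = random.Random(seed)
    adj = er_graph(n, kbar, rng)
    state = [rng.random() < 0.5 for _ in range(n)]
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):  # one full asynchronous sweep
            state[i] = not any(state[j] for j in adj[i])
    return sum(state) / n
```

The dynamics converges to configurations in which cooperators form a maximal independent set; for kbar = 4 the resulting density lies in the same ballpark as the mean field estimate (roughly a quarter of the population), up to finite-size fluctuations.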

Concerning the best-shot game, numerical simulations fully confirm our analytical results. With PI, the dynamical evolution is in perfect agreement with that predicted by both MF and HMF theory—which indeed coincide when (as in our case)

The agreement between theory and simulations is likewise good for coordination games with PI dynamics. On homogeneous networks, numerical simulations show an abrupt transition from full defection to full cooperation as

We can conclude that the set of analytical results we are presenting in this paper provides, in spite of its approximate character, a very good description of the evolutionary equilibria of our two prototypical games, particularly so when considering the more accurate HMF approach.

In this paper, we have presented two evolutionary approaches to two paradigmatic games on networks, namely, the best-shot game and the coordination game as representatives, respectively, of the wider classes of strategic substitutes and complements. As we have seen, using the MF approximation we have been able to prove a number of rigorous results and otherwise to get good insights on the outcome of the evolution. Importantly, numerical simulations support all our conclusions and confirm the validity of our analytical approach to the problem.

Proceeding in order of increasing cognitive demand, we first summarize what we have learned about PI dynamics, equivalent to replicator dynamics in a well-mixed population. For the case of the best-shot game, this dynamics has proven unable to refine the set of Nash equilibria, as it always leads to outcomes that are not Nash. On the other hand, the asymptotic states obtained for the coordination game are Nash equilibria and indeed constitute a drastic refinement, selecting a restricted set of values for the average cooperation. We believe that the difference between these results arises from the fact that PI is an imitative dynamics: in a context such as the best-shot game, in which equilibria are not symmetric, it leads to players imitating others who are playing “correctly” in their own context but whose action is not appropriate for the imitator. In the coordination game, where the equilibria should be symmetric, this is not a problem, and we find equilibria characterized by a homogeneous action. Note that imitation is quite difficult to justify for rational players (as humans are supposed to act), because it assumes bounded rationality or a lack of information, leaving players no choice but to copy others’ strategies [

When going to a more demanding evolutionary rule, BR does lead by construction to Nash equilibria—when players are fully rational and do not make mistakes. We are then able to obtain predictions on the average level of cooperation for the best-shot game but still many possible equilibria are compatible with that value. Predictions are less specific for the coordination game, due to the fact that—in an intermediate range of initial conditions—different equilibria with finite densities of cooperators are found. The general picture remains the same in terms of finding full defection or full cooperation for low or high initial cooperation, but the intermediate region is much more difficult to study.

In addition, we have probed into the issue of degree heterogeneity by considering more complex network topologies. Generally speaking, the results do not change much, at least qualitatively, for any of the dynamics applied to the best-shot game. The coordination game is more difficult to deal with in this context, but we were able to show that when the number of connections is very heterogeneous, cooperation may be obtained even if the incentive for cooperation vanishes. This vanishing of the transition point is reminiscent of what occurs for other processes on scale-free networks, such as percolation or epidemic spreading [

Finally, a comment is in order about the generality of our results. We believe that the insight into how PI dynamics drives the two types of games studied here should be applicable in general; that is, PI should lead to dramatic reductions of the set of equilibria for strategic complements, but is likely to be off and produce spurious results for strategic substitutes, due to imitation of inappropriate choices of action. On the other hand, BR must produce Nash equilibria, as already stated, leading to significant refinements for strategic substitutes but only moderate ones for strategic complements. This conclusion hints that different types of dynamics should be considered when refining the equilibria of the two types of games, and raises the question of whether a consistently better refinement could be found with a single dynamics. In addition, our findings hint at the limited relevance of the particular network considered for the ability of the dynamics to cut down the number of equilibria. In this respect, it is important to clarify that while our results should apply to a wide class of networks, ranging from homogeneous to extremely heterogeneous, networks with correlations, clustering, or other nontrivial structural properties might behave differently. These are relevant questions for network games that we hope will attract more research in the near future.

Proportional imitation

Best response

Homogeneous mean field

Heterogeneous mean field.

The author declares that there are no conflicts of interest regarding the publication of this paper.

The author is thankful to Antonio Cabrales, Claudio Castellano, Sanjeev Goyal, Angel Sánchez, and Fernando Vega-Redondo for their feedback on early versions of the manuscript and advice on the presentation of the results. This work was supported by the Swiss National Science Foundation (Grant no. PBFRP2_145872) and the EU project CoeGSS (Grant no. 676547).