Network and Agent Dynamics with Evolving Protection against Systemic Risk

The dynamics of protection processes is a fundamental challenge in systemic risk analysis. The conceptual principles and methodological techniques behind such dynamics have proved harder to grasp than researchers expected. In this paper, we show how to construct a large variety of behaviors by applying a simple algorithm to networked agents, which could, conceivably, offer a straightforward way out of the complexity. The model starts with the probability that systemic risk spreads. Even in a very random social structure, the propagation of risk is guaranteed by an arbitrary network property of a set of elements. Despite intense systemic risk, failure can still be absent when there has been a strong investment in protection through a heuristically evolved protection level. Interestingly, many applications are still seeking the mechanisms through which networked individuals build such protection processes based on fitness under evolutionary drift. Our implementation still needs to be polished against what happens in the real world, but in general, the approach could be useful for researchers and practitioners who need protection dynamics to guard against systemic risk under intrinsic randomness in artificial settings.


Introduction
Contemporary social elements are connected through a system of formal and informal flows of risk driven by complex interactions among their structures. It became clear, when seeking fundamental drivers and developing predictive models to capture the evolution of systemic risk, that research has moved to a new level [1]. As research for this paper expanded into different areas in our attempt to identify universal and domain-specific patterns of systemic risk, profound implications for our understanding of real-world dynamic behavior, ranging from protection processes to risk diffusion, became increasingly apparent [2]. The classical social network metaphor places individuals at the nodes of a network, with the network links representing interactions or connections between those individuals. Networks in any social system, however, are dynamical entities, and in this sense the practical information of networks is continuously evolving [3]. Here, we develop a model of protection processes against risk diffusion, whose dynamics is presented in the following sections.

Systemic Risk across a Network.
Systemic risk is a property of systems of interconnected components and can be described as system instability, caused or exacerbated by idiosyncratic events, resulting in potential catastrophe. Notably, in various studies, systemic risk has been blamed for high-profile disasters (e.g., for making a significant contribution to the financial crisis of 2008) and identified as a likely cause of cascading failures [4]. A distinguishing feature of such risks, sometimes called network risk, is that they emerge from the complex interactions among individual elements in a system or from their association with each other [3]. The context-varying flow of risk through a system is actually very complex [5]. In the face of all these distortions and patterns of influence, methods to quantify systemic risk and capture its size need to be established [6]. In particular, a random connectivity pattern proved to be the key to understanding the structure of the network and how elements communicate with each other [7]. The establishment of a simple mechanism (i.e., two types of agents interacting with each other within and across their respective social groups) was proposed [8,9]. At the same time, it was suggested that a strategic decision process be added to explore the influence of networked interaction on the agents [10]. Other network properties, such as the concept of evolutionary dynamics, have also been applied, all of which helped to characterize and understand the architecture of artificial systems from the network perspective [11,12].

Cognitive Bias and Heuristics.
Individuals have fundamental limitations in their ability to assess probabilities and situations [13]. Models based on this underlying assumption suggest that individuals perceive their capabilities as becoming more biased when external uncertainty exceeds expected individual capacity. This perception, when distributed in the population, is not static [14]. For example, optimal profitability changes if an option, which is present at a given time, has a probability of disappearing or not reappearing in the next unit of time [15]. If there is a low-variance and a high-variance option, the low-variance option is chosen at low reserves and the high-variance option at high reserves [16]. There are several hypotheses as to why individuals do not always perceive risk accurately [17]. Evolutionary heuristics, used for the recognition principle, take the best anchoring and adjustment [18]. The patterns of information, to which decision mechanisms may be matched, can arise from a variety of processes [19]. Behavior must be explained by an interaction between a heuristic and its social, institutional, or physical environment: an intuitive mode in which judgments and decisions are made automatically and rapidly [20]. Many papers outline an approach related to research on individual rules of thumb, which has the advantage of fast decision-making based on little information, as well as of avoiding overfitting [21]. Individual rules of thumb provide especially explicit examples of adaptation because individuals can be studied in the environments in which they evolved [22].

Imitation and Social Learning.
A social tool is essential [23]. Its character is vital for social well-being and resilience, as it could be damaged or exploited; thus, formulating such a tool is challenging [24]. An unlimited variety of network dynamics and potentials are robust amplifiers of a cascade. It is therefore necessary to consider how adaptive heuristics for inference and preference tie into a given interconnected social, institutional, and physical tool to produce adaptive behavior. To assess the process adequately, progress must be made computationally, for example, by performing agent-based simulations of evolving properties in which macroscale network behavior coexists with microlevel interconnections among individuals [25]. As the agents represent individuals built from the bottom up, the actual state of their behavior tends to be more informative [26], allowing sensible decisions to be made by studying the interactions that occur in the artificially designed system [27]. Moreover, sophisticated behavioral adaptations by individuals are thought to reflect the power of the cultural evolutionary process, in that successful behaviors are copied by other individuals and then propagated strategically through imitation and social learning, rather than through individuals' inherited traits [28]. We should expect individuals to have evolved a set of learning mechanisms that typically enables them to perform well across a range of different circumstances [29]. These mechanisms encompass imitation and exploration in response to current stimuli, subject to sensory biases, and learning rules for adjusting behavior in response to nearby individuals [30].

Gap Statement.
Standard evolutionary models in complex environments produce potentially different biases in decision-making [14], exposing different experimental groups to different transition probabilities [31]. Current computational modeling techniques, however, lack a bridge between the dynamics of agent nodes (whose fundamental element is a vertex) and the emergent properties of networks. As most tools for laying out networks are variants of the same algorithm, it is hard to use them to explore how the conditions of a network affect its dynamics [32]. While the assessment process is indeed capable of observing input performance at the macroscale, approaches that address the microscale, to simultaneously obtain more detailed insight, need to be treated within the structure of the network itself [33]. This requires large data repositories to be combined to construct representations of trajectories that can be analyzed from different scales and perspectives. Indeed, the mechanisms and the serial algorithm that underpin our understanding of systemic risk in networked agents are still a work in progress. These facts might lead us to find common ground regarding the integration of knowledge and methodologies, agreement on definitions, and reconciliation of the approaches that many fields have adopted to study networks, all of which present the difficulties and traps inherent in interdisciplinary work.

Purpose and Value.
Factors influencing biases in the assessment of systemic risk are mentioned above; thus, the primary purpose of this study is to examine mechanisms for the evolutionary origin of bias and the heuristic assessment of protection against systemic risk in a contagious network. We established a modeling framework that accounts for quantitative measurement in agent-based networks, which allows us to explore at both the macroscale and the microscale how protection potential affects risk potential. The mechanism tests what we can state for different values of probability and how protection can be built up by a set of entities against a cascading failure. To reach a better assessment of the risk and how to reduce it, this model not only enables us to directly observe the spread of failure in an agent-based network but also to understand how evolutionary heuristics protect against that spread.

Operating Principle of the Model. (1) Network Properties.
We consider an Erdös-Rényi network [34] with a given number n of nodes, connection probability p c , and resultant adjacency matrix A. Each node can be in one of the two states: not failed or failed. All nodes are initially without failure.
(2) Agent Properties. One agent is associated with each node and is characterized by its capital and strategy (see below). (3) Payoff Dynamics. In each time step, each agent receives one unit of payoff, which is added to its capital c, of which fractions f_m and f_p are spent on maintenance and protection, respectively, resulting in the updated capital c → 1 + (1 − f_m − f_p)c. (4) Failure Dynamics. In each time step, a failure potential can originate at each node with probability p_n and can propagate along each link with probability p_l. A failure potential turns into a failure with probability 1 − p_p, depending on an agent's investment into protection; a possible choice is p_p = p_p,max/(1 + c_p,1/2/(f_p c)). A failure lasts for one time step and causes the loss of an agent's capital. (5) Strategy Dynamics. Each agent chooses its protection level according to the heuristic f_p = f_p0 + f_p1 C, where C is a measure of the centrality of the agent's node normalized to the interval (0, 1). The strategy values f_p0 and f_p1 evolve through social learning and strategy exploration as follows. In each time step, each agent with probability p_r randomly chooses another agent as a role model and imitates that agent's strategy values with probability p_i = 1/(1 + exp(−sΔc)), where s is the strength of selection and Δc is the difference between the role model's capital and the focal agent's capital. In each time step, each agent with probability p_e randomly chooses one of its two strategy values and alters it by a normally distributed increment with mean 0 and standard deviation σ_e (see Supplementary Information for more detail).
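A minimal Python sketch of the per-agent update rules above may make them concrete. The function names and the constants p_p,max = 1 and c_p,1/2 = 0.5 are our choices for illustration (the constants match the values used later in the simulations):

```python
import math

# Per-agent update rules from the operating principle of the model.
# P_P_MAX and C_HALF stand in for p_p,max and c_p,1/2 (illustrative values).
P_P_MAX = 1.0
C_HALF = 0.5

def update_capital(c, f_m, f_p):
    """Payoff dynamics: earn one unit, spend fractions f_m and f_p of capital."""
    return 1.0 + (1.0 - f_m - f_p) * c

def protection_probability(f_p, c):
    """p_p = p_p,max / (1 + c_p,1/2 / (f_p * c)); zero if nothing is invested."""
    if f_p * c <= 0.0:
        return 0.0
    return P_P_MAX / (1.0 + C_HALF / (f_p * c))

def imitation_probability(delta_c, s):
    """p_i = 1 / (1 + exp(-s * delta_c)); delta_c = role model's capital minus own."""
    return 1.0 / (1.0 + math.exp(-s * delta_c))
```

For example, an agent with unit capital investing the fraction f_p = 0.5 withstands a failure potential with probability p_p = 0.5, and an agent facing an equally wealthy role model imitates with probability 1/2.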

Results
To observe the dynamics, the model uses an array of failure probabilities with a given number of initially influenced nodes. With the protection dynamics applied according to the operating principle of the model, the simulation results are organized as follows: Part 1 describes the fundamental characteristics of risk diffusion in a random networked system. Part 2 investigates the framework that allows us to examine the assumptions in a tractable way while imposing realistic protection against the failure probability through social learning on agents. Part 3 characterizes the stationarity of the applied dynamics over time to see how the spread of the failure happens.

Part 1: Fundamental Structure (in Random Network).
To start with, individuals in the model are considered as vertices (fundamental elements drawn as nodes), and a set of two elements is drawn as a line connecting two vertices (the lines are called edges), depending on the information in the graph, usually controlled by (n, p). There are two parameters: the number of nodes (n) and the probability that an edge is present (p). The results in Figure 1 follow from a standard representation in graph theory. It takes into account the fact that a higher-degree node has a higher chance of being connected to an agent, in a simple way through the degree distribution. Consider a network artificially produced as G(n, p) with n = 10 and p = 0.9 as an example. The number of possible edges is n(n − 1)/2 = 10 · 9/2 = 45. The expected average node degree is p(n − 1) = 0.9 · 9 = 8.1. To quantify the probability that a node has degree d for all 0 ≤ d ≤ (n − 1), note that a node has degree zero if nothing is connected to it, and degree (n − 1) if all other nodes are connected to it. For a node to have degree d in a network with n nodes, there must be d connections and (n − 1 − d) nodes that are not connected to it. Since the probability of a connection is p, the probability of no connection is (1 − p). The outcome of d "heads" and (n − 1 − d) "tails" occurs with probability p^d (1 − p)^(n−1−d), but there are "(n − 1) choose d" ways in which this outcome can occur (the order of the flip results does not matter). The probability that a given node has degree d is therefore given by the binomial distribution P(d) = ((n − 1) choose d) p^d (1 − p)^(n−1−d). With p = 0.9, the graph is almost complete, and the path length and time shorten. In network analysis, indicators of centrality identify the most important vertices (nodes) within a graph; applications include identifying the most influential node(s) in a network. The eigenvector centrality solves Ax = λx, where A is the adjacency matrix of the network and λ its leading eigenvalue. The principal eigenvector has an entry for each of the n vertices.
The larger the entry for a vertex, the higher its ranking with respect to eigenvector centrality (see Figure 1). With respect to the fundamental characteristics of the model, we implement the rule that an individual (node) can catch a failure if one of its neighbors is infected, in order to measure how cascades of failure can propagate through the network. The elementary level of risk depends on the network units, which depend on the cooccurrence of nodes i and j. This reflects that individuals are more biased when an individual is highly linked in its network (Figure 2).
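The worked example above can be checked directly; a short Python sketch of our own, using only the standard library:

```python
from math import comb

# Degree distribution of an Erdős–Rényi graph G(n, p),
# for the example above: n = 10, p = 0.9.
def degree_probability(n, p, d):
    """P(degree = d) = C(n - 1, d) * p^d * (1 - p)^(n - 1 - d)."""
    return comb(n - 1, d) * p**d * (1 - p) ** (n - 1 - d)

n, p = 10, 0.9
possible_edges = n * (n - 1) // 2     # n(n-1)/2 = 45
expected_degree = p * (n - 1)         # p(n-1) = 8.1
distribution = [degree_probability(n, p, d) for d in range(n)]
mean_degree = sum(d * pd for d, pd in enumerate(distribution))
```

The mean of the binomial distribution recovers the expected degree p(n − 1) = 8.1, as stated in the text.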
In what follows, the probability of failure is determined by the number of links of a node, scaled by R/S, when we consider an individual characteristic K as constant. R/S is the risk (failure probability p ∈ (0, 1)) as a function of connectivity, which is created by the eigenvector centrality of the nodes (λx), and R is equal to K over S (R = K/S). Nodes with fewer (more) links have a lower (higher) connectivity and correspondingly lower (higher) risk. In other words, if we remove nodes from the network, the bias is reduced where links have decreased, even if the nodes have retained their individual characteristics throughout the entire process. The intuition behind these results is that higher-degree agents are more exposed to cascading failure risk than lower-degree agents; this increases the potential for cascading failure.

Part 2: Protection against the Failure (Imposing Realistic Dynamics).
In line with this proposition, protection dynamics is applied against the risk of failure. The model allows an agent to make a costly investment into protection. Note that we assume the systemic risk, as a failure potential arising from the dynamics, turns into a failure with probability 1 − p_p, depending on an agent's investment into protection; p_p = p_p,max/(1 + c_p,1/2/(f_p c)), and a failure lasts for one time step (Figure 3).
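Numerically, the residual failure probability 1 − p_p falls monotonically with the protection investment f_p c. A small sketch, using the constants p_p,max = 1 and c_p,1/2 = 0.5 that appear later in the text (the sample investment values are ours):

```python
# Residual failure probability 1 - p_p as a function of the amount
# invested into protection (invest = f_p * c).
def failure_probability(invest, p_p_max=1.0, c_half=0.5):
    p_p = p_p_max / (1.0 + c_half / invest) if invest > 0.0 else 0.0
    return 1.0 - p_p

# As the investment grows, the residual risk shrinks monotonically.
risks = [failure_probability(i) for i in (0.1, 0.5, 1.0, 5.0)]
```

At the reference investment f_p c = c_p,1/2 = 0.5, the residual risk is exactly 1/2, which is why c_p,1/2 is called a reference point.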

Protection Level.
Inspired by the plausible scenarios presented above, we focus on the parameter (f_p c) with many individuals and time steps (nodes = 100, t = 100). As will be noted, this is because the applied function p_p = p_p,max/(1 + c_p,1/2/(f_p c)) is decided by [p_r, p_e, C → (f_p c)] when we consider p_p,max, c_p,1/2, and the time delay (= 1) as constants. This refers to an investment made by an agent (we define nodes as agents, as agents make decisions regarding investment in protection) in order to protect itself against the risk of failure, in terms of how the failure propagation mechanism influences the agents' decisions. Figure 4 shows different evolutionary observations with p_p,max = 1 and c_p,1/2 = 0.5 held constant; outcomes change only when the protection values control the imitation and exploration probabilities (p_r, p_e) together with the eigenvector centrality (C). A possible scenario of the different conditions is, for example, scenario A: strong connection with strong imitation and exploration. The patterns observed from the four parametrizations give us different evolutionary patterns in the time series. For small links (connection p = 0.1) among the individuals, a weakly interacting pattern was observed in which the centralities were widely distributed; with strong interactions under large links (connection p = 0.9), the centralities were tightly distributed.

Figure 1: A prototype of the random network with its properties. Number of nodes n = 10; connection probabilities p = 0.9 (a), p = 0.2 (b). In each section, the plots on the left side show the random (Erdős–Rényi) network created. A circle represents each node, with an arbitrarily assigned label from 0 to 9, and each line represents a link. The plots in the middle show the adjacency matrix, with its entry in row m, column n (either 1 = red or 0 = blue), corresponding to its eigenvector centrality (right side of each section).

We also discovered that once the link thickness was considered, a strongly interacting regime emerges, in the sense that the large connection probability caused a symmetric pattern between the centrality and the protection level, while the small connection probability did not. These results give us some insight into possible patterns, with the artificially designated parameters being a plausible concept of the protection factors against systemic risk.

Strategies.
Following the previous observation, we now present another analytical characterization to see whether an alternative strategy could affect the failure trend, based on how the strategy values f_p0 and f_p1 evolve through social learning and exploration. We start this simulation with the assumption that the environment has high protection (p_p,max = 1), strong centrality (p = 0.9), and imitation (p_r = 0.9), but that an agent does not have enough opportunity to explore the other strategy values (p_e = 0.1). As the exploration probability can cause different protection levels in f_p, the parameter set assumes that a smaller value of p_e is incapable of producing the dynamics expected to result in a protection level. In other words, even if every agent could choose another agent as a role model and imitate that agent's strategy values in each time step with probability p_i = 1/(1 + exp(−sΔc)), each agent could rarely alter its strategy value because there is little chance of exploring the other strategy. Figure 5 shows different strategies for different evolutions in networks with failure. For the most discussed dilemmas of social networks, we consider the fraction f_p0 as one strategy and the fraction f_p1 as the other, averaged over 100 individuals and time steps. We found a quite clear correlation between the parameter values (capital and failure, as well as the strategies f_p0 and f_p1). The strategy f_p1 multiplied by the eigenvector centrality C in particular shows different behavior. Although a relative score is assigned to the f_p1 individuals, based on the concept that a high eigenvector-centrality score (C ∈ (0, 1)) contributes more than a relatively lower score, this feature does not cause the same trend, and this results in an exponential decay of the trend in the applied time series (plot on the left-hand side in Figure 5).
The simulation results with a small exploration rate reflect that the centrality relations of parameters, conceptually matched with the neighbors in the network, expand nonlinearly with time; these densifications suggest that the existing structure of the network may constrain what happens in the next time step. Evidence shows that biases between parameters occur at very early stages. Hence, the probability of exploration (social learning) might be another crucial factor in providing resources to violate expectations and lead to novel trends with high impact (plot of Figure 6).
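A toy Python sketch of how a low exploration probability (p_e = 0.1, as assumed above) keeps a strategy value nearly frozen; the increment size σ_e = 0.02 and the seed are our illustrative choices:

```python
import random

def explore(value, sigma_e, rng):
    """Strategy exploration: alter a value by a Normal(0, sigma_e) increment."""
    return value + rng.gauss(0.0, sigma_e)

rng = random.Random(0)
p_e, sigma_e = 0.1, 0.02
f_p1 = 0.3
trajectory = [f_p1]
for _ in range(1000):
    if rng.random() < p_e:        # exploration happens only rarely
        f_p1 = explore(f_p1, sigma_e, rng)
    trajectory.append(f_p1)

# Count the steps at which the strategy value actually changed.
changes = sum(a != b for a, b in zip(trajectory, trajectory[1:]))
```

With p_e = 0.1, roughly one step in ten alters the trait; the other nine in ten leave it untouched, which is the "frozen" regime discussed above.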

Part 3: Coevolutionary Outcomes (Stationary Case).
Regarding the influence of imitation and exploration, we expected to need to observe the evolutionary state of the traits f_p0 and f_p1. We reduced the exploration probability and its normally distributed increment so that the trait variabilities in f_p0 and f_p1 become much smaller; we measured these trait variabilities using trait ranges with a coefficient of variation (standard deviation divided by the mean) of, at most, 10% in either trait. The case in Figure 7 appears to show convergence or divergence to an evolutionary trajectory in the trait space (f_p0, f_p1).
This makes the evolutionary phase portrait on the convergent side, Figure 7(b), meaningful and interesting: even though each new random seed produces a different evolutionary trajectory, the time series of outcomes appears to converge. Note that convergence did not occur every time; when it did not (the evolutionary phase portrait on the left-hand side), there was usually a large variance (or fluctuation). Thus, only when convergence happens do we continue to use the coefficients of variation for the horizontal and vertical lines, indicating the degrees of polymorphism in the evolutionary phase portraits, as observed in the plot of the convergent side (b), and more initial conditions are added to the evolutionary phase portraits.
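The convergence criterion above reduces to a one-line check; a short sketch (the example series are invented for illustration):

```python
import statistics

def coefficient_of_variation(series):
    """Standard deviation divided by the mean, as used for the 10% criterion."""
    m = statistics.mean(series)
    return statistics.pstdev(series) / m if m else float("inf")

settled = [0.50, 0.52, 0.49, 0.51]   # small fluctuations around a mean
wandering = [0.1, 0.4, 0.8, 1.2]     # still drifting
is_converged = coefficient_of_variation(settled) <= 0.10
```

Only trait series passing this 10% threshold are carried into the phase-portrait analysis.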
In Figure 8, we keep the same nonevolutionary part of the model (seven parameters: nodes (n), connection probability (p), maintenance (f_m), propagation probability at each node (p_n), propagation through each link (p_l), protection maximum (p_p,max), and reference point (c_p,1/2)), except for the propagation probability through a link (p_l) as the control parameter. We also keep the same evolutionary part of the model (four parameters: imitation probability (p_r), selection intensity (s), exploration probability (p_e), and normally distributed increment of the exploration (σ)), but further reduce the exploration rate (to p_e = 0.05) and its normally distributed increment (to σ = 0.015), with some additional simulations from other initial conditions and time steps (see the initialized parameters in Figure 8). The coevolutionary characteristics of the fixed points observed here are based on communities with 0 < S < N (S = strategies, N = population) with a couple of different initial points. The phase portrait shows how the dynamical stability of the species (f_p0 and f_p1) evolves in the space generated by the two-dimensional system.
From the observations above, we draw basic intuitions with respect to the following statements. First, for coevolutionary communities with S > 1, comprising several initial conditions, the notion of convergence proved useful in the classification of fixed points for the general identification of dynamical stability: (i) If each strategy (= species) is convergent, the fixed point might be an evolutionary attractor. (ii) If one strategy is convergent and the other divergent, the fixed point might be an evolutionary repeller. (iii) In all cases not covered by (i) and (ii), the local stability of the fixed point can be changed just by varying the ratio of the evolutionary rate coefficients. Next, according to the obtained coevolutionary trajectories (upper side of Figure 8), the strategy values may still evolve in time; however, outcomes such as failure (or capital) reach a stationary state (bottom side of Figure 8) even if the trajectories of the strategies exhibit different behaviors. Therefore, the presence of convergence indicates the possibility of stationarity of the coevolutionary process, which can depend critically on detailed dynamical features of the system (payoff, failure, and strategies). Finally, the depicted evolutionary trajectories can be adaptive not only under a nonevolutionary (macroscale) controller (i.e., propagation of the failure through each link of the random network, p_l) but also under an evolutionary (microscale) controller (i.e., the normally distributed increment of the exploration, f(x | μ, σ²)), when the polymorphism is held constant (i.e., the case of coexistence of failure and absence of failure from the eigenvector centrality (λx), with imitation (p_i) high enough).
Thus, we extract the state values of f_p0 and f_p1, taking the average over a suitable time interval that only contains fluctuations around the asymptotic state.

Although the existence of an evolutionary attractor seems to imply convergence to a certain point, we cannot say this is obvious, because the trajectories do not have the same end point (upper side of Figure 9). However, their coevolutionary trajectories fluctuate around a fixed mean value and converge, and at the same time the averaged capital and failure proportion reach a stationary state, as in the lower side of Figure 9 in terms of the extracted time series (t = 1–4,000). Concerning the outcome according to the time series, we suggest numerical tests to explain the results of the stationarity obtained above.

Proposition 1.
Stationarity implies that if we shift time by an arbitrary finite interval, the process properties do not change (they neither grow nor decay). For any initial distribution, the process converges to the stationary probability. A simple potential dynamics is dx/dt = h(x) + σξ(t), where h(x) is the force term, σ is the noise amplitude, and ξ(t) is white noise. By stationarity, all variables are almost constant, with a fluctuating force around their mean value. We considered a system with a potential that exhibits two state levels, A and B, defined by the failure.
Proofs of the Capital. Now we calculate the average value of capital at the stationary state. First, we consider the case p_l p_ER = 1. It can easily be seen that the average value of capital in stationary states follows c = 1 + p_p(1 − f_p − f_m)c, where c is the average value of capital among individuals; the updated-capital equation and the protection probability p_p = p_p,max/(1 + c_p,1/2/(f_p c)) are combined. Then, if we consider p_l < 1, the protection probability p_p must be replaced by the effective value p_p′ = 1 − (1 − p_p)(1 − (1 − p_l p_ER)^N_f). Thus, as we discovered that the protection probability depends on the failure propagation (p_l) through the random network (p_ER), the capital function should be modified as well: c = 1 + p_p′(1 − f_p − f_m)c. This shows that when the replaced protection p_p′ becomes small because of a high enough p_l p_ER = 0.9, the capital at stationarity is lower than when p_p′ becomes large because of weak failure propagation, p_l p_ER = 0.1. The intuition from the results at stationarity is that the behaviors of the strategy f_p0 and the strategy f_p1 are not the same: as can be noticed in the trajectory representation in Figure 9, while the strategy f_p0 increases its states according to the controlled parameter value of p_l in the applied random network p_ER, the other strategy f_p1, multiplied by the eigenvector centrality, behaves differently.
The plot on the right-hand side represents the individuals' dynamics on a random network; node color = states (failure (red) ⟷ (blue) absence of failure; green = protection potential; yellow = initial structure of the state, without failure or protection).
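For the case p_l p_ER = 1, the stationary capital solves the self-consistent equation c = 1 + p_p(c)(1 − f_m − f_p)c, since p_p itself depends on c. A small Python sketch solving it by fixed-point iteration (the spending fractions f_m = 0.1 and f_p = 0.2 are illustrative values of ours; the text solves the equation analytically):

```python
def stationary_capital(f_m, f_p, p_p_max=1.0, c_half=0.5, iters=1000):
    """Fixed point of c = 1 + p_p(c) * (1 - f_m - f_p) * c, case p_l * p_ER = 1."""
    c = 1.0
    for _ in range(iters):
        p_p = p_p_max / (1.0 + c_half / (f_p * c))   # protection probability
        c = 1.0 + p_p * (1.0 - f_m - f_p) * c        # updated average capital
    return c

c_bar = stationary_capital(f_m=0.1, f_p=0.2)
# a smaller effective protection p_p' would lower this stationary value
```

Replacing p_p by the effective p_p′ in the loop reproduces the comparison in the text: strong propagation (small p_p′) yields a lower stationary capital than weak propagation (large p_p′).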

Generalization of the Obtained Outcomes.
This characteristic is of course not restricted to this simulation controlled by p_l; it is an instance of the more general network effect: every time someone links to a particular node of the network p_ER, determined by the connection probability, it becomes a bit more likely that someone else will also link to it. We extend our interest specifically to p_c (keeping the propagation probability p_l = 0.1 constant), because the interpreted function p_p′ = 1 − (1 − p_p)(1 − (1 − p_l p_ER)^N_f) depends not only on the failure propagation (p_l) but also on the random network (p_ER) property (from the eigenvector centrality (λx), controlled by the connection probability (p_c)). We observed a trend of outcomes quite similar to the simulation results controlled by p_l. Figures 10 and 11 show that the scatter plots constructed with the connection probability (p_c) obey strong power-law relationships, like the previous observation for p_l (see the fixed mean values of the failure in Tables 1 and 2). In order to interpret the potential state variable (expressed as a probability) according to the control parameter p_c compared with p_l, let us use a simple approach (going back to the primary feature of the failure mentioned in the section on cascading failure in the network, R = K/S) to specify this detail with a numerical test. Intuitively, we assume that the parameter value of p_l or p_c can be considered as the main variable (x = p_l p_ER) in this case; it cannot equal zero, since p_l ∈ (0, 1) and p_c ∈ (0, 1). The resulting function is f(x) = k/x, x ≠ 0, as simple as possible to explain what happens.
We take the denominator of the fraction as the domain (greater than zero, x > 0), which comes from the probability of the failure propagating through each link (p_l in the case of Figure 8) and, at the same time, from the probability of connection in the random network (p_ER determined by p_c in the case of Figure 10). If we keep the numerator of the fraction as a constant (k), which can be obtained from an inherited fitness for every individual at the time, and note that each agent receives one unit of payoff in each time step (c = 1, corresponding to the model's payoff dynamics), the function becomes f(x) = 1/x. In what follows in Table 3, the range of the derivative f′(x) is decided by the failure propagated through each link in the random network (p_l p_ER), which is determined by both parameters (p_l and p_c) with the same weight. We verified this numerical trend for the defined protection probability, which must be similar for p_l (when p_ER is constant, Table 1) as well as for p_c (when p_l is constant, Table 2).
Thus, the failure influences in these simulations are almost identical, as can be seen in Tables 1 and 2 for these changing parameters, including where increases or decreases occur following the nonlinear curvature of the power law.
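A minimal numerical sketch of the inverse relation f(x) = k/x used above (with k = c = 1), whose derivative shows that the response is steepest near small propagation probabilities:

```python
# f(x) = k / x on the probability domain x in (0, 1), with k = c = 1.
def f(x, k=1.0):
    return k / x

def f_prime(x, k=1.0):
    """Derivative f'(x) = -k / x**2: steepest (most negative) as x -> 0."""
    return -k / x**2

xs = [0.1, 0.5, 0.9]
values = [f(x) for x in xs]        # decreasing in x
slopes = [f_prime(x) for x in xs]  # increasingly shallow as x grows
```

This is the power-law curvature referred to in Tables 1 and 2: small changes in a small p_l p_ER move the outcome far more than the same changes at larger values.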
This interpretation helps to illustrate the dynamics behind how these propagations move or develop in a particular direction, as many real-world networks (in finance, supply chains, and disease, for example) exhibit such power-law relationships between size and quantity. We note again that the applied potential can shift over a short period of time, which we might describe as a bias or as bounded rationality. The lesson from this observation and interpretation is a sense of the qualitatively different nature of propagation within such systems: failure is not simply growth or decay; owing to the network effect over time, there is also another rate that itself increases the risk.
This statistical core of the phenomena also played a part in the underlying phase transitions observed across a wide range of the system in this model.

Part 4: Summary of the Results
We tested the model by conducting simulations on random network graphs generated as follows: each element occupies a vertex (drawn as a node), and an edge (illustrated as a line connecting two vertices) marks the nearby sites where a reproducing individual can place an offspring. With respect to the network's properties, the rest of the graph is organized as follows: the size of a node denotes the number of edges incident on it (its degree), and the width of an edge represents the influence carried by that connection (its eigenvector centrality), with each neighbor chosen at random. The potential of the systemic risk then propagates through the failure dynamics, according to the artificially designated probability, from the initial invading-failure event. Protection against the systemic risk evolves heuristically through the strategy dynamics (social learning and exploration), as a potential for the absence of failure.
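The simulation substrate described above can be sketched in a few lines. The following is a minimal illustration (not the authors' implementation): an Erdős-Rényi random graph built edge-by-edge with connection probability p_c, plus a power-iteration estimate of eigenvector centrality for the influence weights; all function names are assumptions:

```python
# Minimal sketch of the model substrate: a G(n, p_c) random graph whose nodes
# carry a degree and an eigenvector-centrality influence weight.
import random

def erdos_renyi(n, p_c, seed=0):
    """Adjacency sets of an undirected Erdos-Renyi G(n, p_c) graph."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p_c:        # each possible edge with prob. p_c
                adj[u].add(v)
                adj[v].add(u)
    return adj

def eigenvector_centrality(adj, iters=100):
    """Power iteration approximating the leading eigenvector of A (A x = lambda x)."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: sum(x[u] for u in adj[v]) for v in adj}
        norm = max(sum(w * w for w in nxt.values()) ** 0.5, 1e-12)
        x = {v: w / norm for v, w in nxt.items()}
    return x

if __name__ == "__main__":
    g = erdos_renyi(50, 0.1, seed=1)
    cent = eigenvector_centrality(g)
    degrees = {v: len(nbrs) for v, nbrs in g.items()}
    print(max(degrees.values()), round(max(cent.values()), 3))
```

Node size would then be drawn from `degrees` and edge width from `cent`, matching the visual conventions stated above.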

Results of Part 1. Notice how these results came about and what they mean to us. The model mechanism assumes that any individual that fails would devastate the whole system: every individual is, in this sense, systemically essential, and each failure can cascade systemically. Let us specify how the simulation mechanism reflects this point. First, there is (a) contagion: if one individual (= node) fails, then because of its interconnected relationships in a network (for example, an industry), its failure will have an impact on other individuals in that industry; there is thus a contagion-type effect (reflected by the links). Second, there is (b) concentration: if even one player or small entity in the industry carries a potential far more extensive than all the others, there will be significant concentration (reflected by the eigenvector centrality). Third, there is (c) context, namely, what usually happens to an individual's function in an ordinarily operable environment: if circumstances take a turn for the worse, perhaps so too will all other individuals, as all of these individuals and institutions share the same context (reflected by social learning and imitation). Every individual in the industry functions in the same way and takes the same kinds of risks as the others; thus, if one goes down, they all go down. Another critical fact is that we observe a significant correlation between capital and failure at both the microscale and the macroscale. Note that unless individuals move toward eliminating the risk through investment in protection, every individual in that context will eventually fail.
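The contagion step (a) amounts to a probabilistic spread along links. A minimal sketch of that mechanism, under the assumption (consistent with the text) that a failing node infects each neighbor independently with the per-link probability p_l:

```python
# Sketch of contagion: a seeded failure propagates along each link with
# probability p_l, breadth-first, until no new node fails.
# Names and the ring-network example are illustrative assumptions.
import random
from collections import deque

def cascade(adj, seed_node, p_l, rng):
    """Return the set of failed nodes after one cascade from seed_node."""
    failed = {seed_node}
    frontier = deque([seed_node])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in failed and rng.random() < p_l:
                failed.add(v)            # failure travels link by link
                frontier.append(v)
    return failed

if __name__ == "__main__":
    # tiny ring network: every node linked to its two neighbours
    n = 10
    adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
    print(len(cascade(adj, 0, 0.5, random.Random(2))))
```

Concentration (b) enters when the adjacency structure carries a high-centrality hub, and context (c) when all nodes update strategies from the same neighbors, so the same sketch supports all three effects.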

Results of Part 2.
Another potentially important factor in these results is cultural evolution and the dilemma it poses for imitation and exploration through social learning. A random probability controls the influence, and the impact depends on the choice between the arbitrarily designated strategies (f_p0 and f_p1) made by individuals through social learning and exploration. The output suggests that failure might be a critical driver of bias for a given centrality, social learning, and exploration. Evolutionary heuristics remain the dominant measurable unit of credit in these dynamics. Given the reliance of most plausible principles on the network, the dynamics of accumulated strategies has been scrutinized over generations of explorations. From the fundamental work connected with this model, we know that the normally distributed increments of exploration with respect to the protection level produce a highly skewed outcome. Many such claims are never proved, but this uneven exploration distribution is a robust, emergent property of the dynamics of systemic risk propagation. This means that we can compare the impact of the protection level under different strategies by looking at their relative exploration rates.
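The update rule described above, imitation of better-performing neighbors plus occasional normally distributed exploration, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact rule; the exploration rate, increment width, and payoff comparison are assumptions:

```python
# Sketch of one strategy update: with probability `explore` the agent perturbs
# its protection level by a Gaussian increment (exploration); otherwise it
# copies a neighbour's level when that neighbour earned a higher payoff
# (social learning / imitation). All parameter values are illustrative.
import random

def update_strategy(own_level, own_payoff, nbr_level, nbr_payoff,
                    explore=0.05, sigma=0.05, rng=random):
    if rng.random() < explore:
        # exploration: normally distributed increment, clipped to [0, 1]
        return min(1.0, max(0.0, own_level + rng.gauss(0.0, sigma)))
    # social learning: imitate the neighbour only if it did better
    return nbr_level if nbr_payoff > own_payoff else own_level

if __name__ == "__main__":
    rng = random.Random(3)
    level = 0.2
    for _ in range(100):
        level = update_strategy(level, 1.0, 0.8, 1.5, rng=rng)
    print(round(level, 3))
```

Because imitation copies existing levels while exploration adds small Gaussian steps, long-run trait distributions produced by such a rule can indeed become skewed even though each increment is symmetric.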

Results of Part 3.
We used the same parameters to picture the combined dynamics of trait substitution sequences in two coevolving strategies originating from different initial conditions. At each moment in time, two coefficients of variation were calculated from the simulations by dividing the standard deviations of f_p0 and f_p1 by their respective means. We plot these as functions of time, which yields two curves per initial condition, one for f_p0 and one for f_p1, with time on the horizontal axis and the coefficients of variation on the vertical axis. We added further simulations from other initial conditions. The main point of interest is whether they all converge to stationarity. Each new random seed produces a different evolutionary trajectory, as mentioned above, but just as our earlier time series of failure proportions appear to converge, so the runs with additional initial conditions converge to an asymptotic value for a given parameter setting. The results show that the individual outcomes (failure and bias) at stationarity follow a power law whose tails can be attributed to preferential attachment in the network dynamics [35].
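The convergence diagnostic used here is simply the coefficient of variation (standard deviation divided by the mean) of each strategy's trait values at a time step. A minimal sketch, with synthetic sample values standing in for simulation output:

```python
# Sketch of the convergence diagnostic: coefficient of variation (std / mean)
# of a strategy's trait values at one time step. The sample values below are
# synthetic placeholders, not actual simulation output.

def coefficient_of_variation(values):
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)  # population variance
    return (var ** 0.5) / m if m else float("inf")

if __name__ == "__main__":
    # two synthetic trait samples, standing in for f_p0 and f_p1
    f_p0 = [0.50, 0.52, 0.48, 0.51]
    f_p1 = [0.20, 0.30, 0.10, 0.25]
    print(round(coefficient_of_variation(f_p0), 4),
          round(coefficient_of_variation(f_p1), 4))
```

Tracking this quantity over time for each initial condition yields exactly the curves described above; flattening of all curves toward a common value is the signature of stationarity.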

Discussion
We present a simple general model to quantify the protection used to mitigate systemic risk, with results as follows: (i) first, the model introduces the nature of the network property; (ii) then, it suggests a prototypical failure impact, which is needed for projecting the common risk exposure onto the set of individuals; (iii) in a last step, the model implies how protection can be applied to the network; and (iv) it then combines these coevolutionary features simultaneously. As detailed in the Supplementary Information, the present model treated a couple of results as factors in a fully undirected random design, specified through the model structure and the dynamics of these simulations together with the corresponding diversifications. Based on this simple set of properties, the observations from the model highlight the fact that the probability describes the portion of protection between nodes in the networks that characterizes how systemic risk should be coped with, rather than predicting the probability of failure [36]. The broad spectrum of emergent behavior encapsulated in networks exhibiting connectedness and spreading may be explained as follows.

Systemic Risk.
First, the simulations show a vast variety of phenomena in which random connections arbitrarily act as robust amplifiers of the initial failure. Many of the original properties are structurally simple (specifically, there are certain subsets of vertices we could call a small world because of their topology) but also strong (owing to the influence carried by those connections), and they could be realized in other network structures, such as regular, complete, small-world, and cycle graphs, within the fixation time of mutants [37]. The result provides an explicit procedure for these properties, proving the existence of such structures. The arrangement guarantees that, with high probability, the fundamental properties or influences spread along a branch corresponding to their universal characteristics at the macroscale. The connection of all edges is then intensified so that the spreading repeatedly invades, and eventually, failure spreads through the links one by one to all the branches until a cascade occurs at the microscale. Intuitively, the degree assignment creates a sense of primary flow directed toward the initial failure, which demonstrates that once the failure reaches a node of high centrality, the agents are highly likely to keep invading more neighbors. Thus, if we know the starting and finishing links of a node, we may be able to assess the risk in the network system. We suggest that the risk emerges through common conditions caused by the relations of the nodes. From this perspective, we argue that systemic failures are consequences of the highly interconnected systems (connection probability) and networked risks (failure probability) that individuals have created. Such interdependencies will inevitably get out of control, and a network-oriented view can make the instabilities intelligible.

Protection Potential.
Second, the simulation model explicitly regulates the potential of protection against systemic risk through interconnected dynamics. As can be seen in part 1 of the results (a feature of systemic risk), it is not possible to predict or control the potential for catastrophic failure even when all the information is embedded in the system at the macroscale (= universal application of the parameters). Such problems might be solved by suitable management applications [and proper (re)design of the structure]. We constructed a suite of plausible, decentralized bottom-up mechanisms by establishing appropriate "rules of the interaction," within which the system components can self-organize, including mechanisms ensuring rule compliance [a vectorized microscale implementation]. Evolutionary dynamics, for example, can often promote a well-balanced situation with respect to the interactions. When the investment in protection is weak (low investment), a pattern of strong systemic risk emerges as agents fail under networked conditions. In contrast, when the investment in protection is strong (high investment), a pattern of protection emerges, with little diversification against any challenge. The results cast light on the modes of propagation. The observed contagion and persistence patterns should be viewed not as a direct causal link, but rather as the product of an accumulated rationale driven by interconnectedness [20]. Given the evolutionary mechanisms implemented in this simulation model, we ran the evolutionary response many times in order to obtain a critical value for the plausible protection potential. We demonstrated that although the structures have high failure potential in terms of systemic risk, the interconnection turns them into weak amplifiers, where a profitable investment with high protection is attained.
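The contrast between weak and strong investment can be illustrated with a toy experiment. The sketch below is not the paper's exact mechanism: it simply assumes that a node's protection level scales down the per-link propagation probability p_l, and compares mean cascade sizes under low versus high uniform investment:

```python
# Illustrative (assumed) damping rule: a node's protection level scales down
# the chance that an arriving failure takes hold, so high uniform investment
# shrinks cascades while low investment lets them spread system-wide.
import random
from collections import deque

def protected_cascade(adj, protection, seed_node, p_l, rng):
    failed = {seed_node}
    frontier = deque([seed_node])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            # higher protection -> lower effective propagation probability
            if v not in failed and rng.random() < p_l * (1.0 - protection[v]):
                failed.add(v)
                frontier.append(v)
    return failed

if __name__ == "__main__":
    n = 30
    adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}  # ring network
    for invest in (0.0, 0.9):            # weak vs strong investment
        protection = {v: invest for v in range(n)}
        sizes = [len(protected_cascade(adj, protection, 0, 0.8,
                                       random.Random(s))) for s in range(50)]
        print(invest, sum(sizes) / len(sizes))
```

Under this assumed rule, the two regimes described above reappear: near-zero investment yields large cascades, while high investment confines failure to the seed's neighborhood.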

Strategy Dynamics.
Third, the simulations show that cultural evolution is a key feature of investment in protection against the spread of failure. Namely, the results confirmed that without exploration through social learning, no strategy is a robust amplifier under the propagation of failure, and no investment is high enough to achieve an absence of failure under the contagion. We note that a clear potential, such as cultural evolution, can be turned into arbitrarily powerful amplifiers, and the results of the experiments on the strategy dynamics vary the fitness advantages of protection. Depending on the investment in protection through the bottom-up dynamics, it is possible to find individuals with a strong bias [38] who alter their competitive strategy [14], who have a large potential for protection, and who reconcile the networked agents' broad range of systemic risk values as the basis of their interconnected interactions. In particular, inspired by the plausible scenarios presented here, we focused on the parameter (f_pc) with more individuals and time steps. The results suggest that a regime that is strongly centralized but weak in the interactions between individuals is not recommendable under the current model mechanism, and that biases might be mitigated by exploration if there is enough social learning. Following these observations, we can assume that successful outcomes require a delicate balance across the essential components (structure, centrality, social learning, and exploration). The observations in this simulation may lead to a subsequently beneficial network structure engendered in social relations that is jointly supported by governance.

Coevolutionary Features.
Finally, there has been some interest in the general question of whether phenotypes evolve as an evolutionarily stable strategy [39, 40]. According to research [41], the interaction between strategies prevents the attainment of a converging point, such that there is continuous evolutionary change in their phenotypes [42]. Inspired by the investigation process of this model, we thus utilized the three coevolving dynamics (payoff, failure, and strategy) to investigate the variety of possible evolutionary traits in a random network. In particular, we focused on the potential for stationarity and observed that this mode of coevolution is a feasible outcome. In part 3 of the results, it can be seen that the adaptive trait values tend toward convergence and that, once this is reached, no further fluctuations occur around the fixed mean value.
This finding corroborates speculations regarding the necessity of such interactions to motivate a variety of phenotypic coevolution. Such dynamics are interpreted as indicating the continuous deterioration of a strategy's environment owing to the continual evolution of other strategies [43]. Analysis of these interpretations suggests that, through evolution, the phenotypes can tend either to convergent or to divergent asymptotic states [44]. In the case of our simulations, we can immediately infer from the observations that there is a region in the monomorphic trait space where the strategies can coexist [45]. Such volatility can be governed not only through the quantified nonevolutionary part but also by identifying the evolutionary part of the strategies, such as individuals' imitation and exploration as they aim to increase their protection [46]. Furthermore, the emergent phenomena, measured by the interconnected contributions, obey a power law as other network systems do [47], for which a mathematical formula fits well enough to explain the likely outcomes plausibly [35], and the probability distributions based on individual characteristics can play a prominent role in discourses about their potentials [48].
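A standard way to check the power-law claim is to fit a straight line on log-log axes, since y = k·x^a implies log y = log k + a·log x. The sketch below (illustrative; the synthetic data are exact points on f(x) = k/x, not simulation output) recovers the exponent by ordinary least squares:

```python
# Sketch of a power-law check: OLS slope of log(y) on log(x) recovers the
# exponent a of y = k * x**a. Data here are synthetic points on f(x) = k / x,
# i.e. an exact power law with exponent -1; they are not simulation output.
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

if __name__ == "__main__":
    xs = [0.01 * i for i in range(1, 100)]
    ys = [2.0 / x for x in xs]           # k / x with k = 2
    print(round(loglog_slope(xs, ys), 6))
```

For heavy-tailed empirical data a maximum-likelihood fit is generally preferred over OLS on log-log axes, but the slope check above suffices to illustrate the relationship invoked in the text.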

Concluding Remarks.
Research on systemic risk has yielded many remarkable findings. Our observations indicate that successful outcomes do indeed seem to require a balance across the essential components. From empirical results, there is also a consensus that embeddedness in a network of interrelations matters for the network's payoff and performance [49]. In contrast, only a partially oriented confirmation has been conducted for network dynamics, and that report alone does not tell the full story [50]. Critically scanned uncertainty needs to be addressed to answer questions of how network relationships, governance, and structure emerge over time [51]. With a random network-agent model, we observe that the proposed "protection dynamics" leads to a restructuring of the systemic risk that is practically free of failure. The principle, being simple but fundamental, does not ordinarily change across individuals; hence, we suggest that this is a straightforward model that can be used for finding invariant properties, and we provide it as an example to be applied when needed. The rules and processes implemented in this study offer decision makers (or a social planner) a fresh way to gain a better perspective on the dynamics of systemic risk. Given the simple network behind this model, consisting of two types of agents, an interesting direction for future research is whether comparable results can be achieved for systemic risk more broadly. We could enhance the prospects of addressing systemic risk as a whole [12] by attempting to eradicate an infection through optimal decision-making actions.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Ethical Approval
All participants provided written informed consent to the study, which was approved by the local ethics committee (SNUIRB No. 1509/002-002) and conformed to the ethical standards of the 1964 Declaration of Helsinki (Collaborative Institutional Training Initiative Program, report ID 20481572).

Supplementary Materials
Figure S1: schematic representation of the nodes and lines. Figure S2: schematic representation of the connectivity. Figure S3: schematic representation of the structure. Table 1: evolutionary part (four parameters). Table 2: nonevolutionary part (seven parameters).