The standard assumptions that underlie many conceptual and quantitative frameworks do not hold for many complex physical, biological, and social systems. Complex systems science clarifies when and why such assumptions fail and provides alternative frameworks for understanding the properties of complex systems. This review introduces some of the basic principles of complex systems science, including complexity profiles, the tradeoff between efficiency and adaptability, the necessity of matching the complexity of systems to that of their environments, multiscale analysis, and evolutionary processes. Our focus is on the general properties of systems as opposed to the modeling of specific dynamics; rather than provide a comprehensive review, we pedagogically describe a conceptual and analytic approach for understanding and interacting with the complex systems of our world. This paper assumes only a high school mathematical and scientific background so that it may be accessible to academics in all fields, decision-makers in industry, government, and philanthropy, and anyone who is interested in systems and society.
How can we scientifically approach the study of complex systems—physical, biological, and social? Empirical studies, while useful, are by themselves insufficient, since all experiments require a theoretical framework in which they can be interpreted. While many such frameworks exist for understanding particular components or aspects of systems, the standard assumptions that underlie most quantitative studies often do not hold for systems as a whole, resulting in a mischaracterization of the causes and consequences of large-scale behavior.
This paper provides an introduction to complex systems science, demonstrating a few of its applications and its capacity to help us make more effective decisions in the complex systems of our world. It focuses on some general properties of complex systems, rather than on the modeling of specific dynamics as in the subfields of dynamical systems, agent-based modeling and cellular automata, network science, and chaos theory. Section
Complex systems science considers systems with many components. These systems could be physical, biological, or social. Given this diversity of systems, it may seem strange to study them all under one framework. But while most scientific disciplines tend to focus on the components themselves, complex systems science focuses on how the components within a system are related to one another [
Each column contains three examples of systems consisting of the same components (from left to right: molecules, cells, and people) but with different relations between them. Each row contains systems representing a certain kind of relationship between components. For random systems, the behavior of each component is independent from the behavior of all other components. For coherent systems, all components exhibit the same behavior; for example, the behavior (location, orientation, and velocity) of one part of the cannonball completely determines the behavior of the other parts. Correlated systems lie between these two extremes, such that the behaviors of the system’s components do depend on one another, but not so strongly that every component acts in the same way; for example, the shape of one part of a snowflake is correlated with but does not completely determine the shape of the other parts. Implicit in these descriptions is the necessity of specifying the set of behaviors under consideration, as discussed in Section 2.2. (Image source: [
Systems may differ from each other not because of differences in their parts but because of differences in how these parts depend on and affect one another. For example, steam and ice are composed of identical water molecules but, due to differences in the interactions between the molecules, have very different properties. Conversely, all gases share many behaviors in common despite differences in their constituent molecules. The same holds for solids and liquids. The behaviors that distinguish solids from liquids from gases are examples of
A full description of all the small-scale details of even relatively simple systems is impossible; therefore, sound analyses must describe only those properties of systems that do not depend on all these details. That such properties exist is due to
We define the complexity of a behavior as equal to the length of its description. The length of a description of a particular system’s behavior depends on the number of possible behaviors that system could exhibit [
It is important to note that one must carefully define the space of possible behaviors. For instance, if we are interested in a light bulb already in a socket, the light bulb has two possible behaviors, as above, but if we are instead interested in the complexity of building a light bulb, the space of possible behaviors might include all of the ways in which its parts could be arranged. As another example, consider programming a computer to correctly answer a multiple-choice question with four choices. At first glance, this task is very simple: since there are four possible behaviors, only two bits are required. Nonetheless, we have the sense that programming a computer to score perfectly on a multiple-choice test would be quite difficult. This apparent paradox is resolved, however, when we recognize that such a task is difficult only because we do not
Consider a human, and then consider a gas containing the very same molecules that are in the human but in no particular arrangement. Which system is more complex? The gas possesses a greater number of possible arrangements of the molecules (i.e., has more entropy, or disorder) and thus would take longer to describe at a microscopic level. However, when we think of a complex system, we think of the behaviors arising from the ordered arrangement of molecules in a human, not the behaviors arising from the maximally disordered arrangement of molecules in a gas. It therefore may be tempting to conclude that complex systems are those with reduced disorder. But the systems with the least disorder are those in which all components exhibit the same behavior (coherent systems in Figure
To resolve this apparent paradox, we must consider that the length of a system’s description depends on the level of detail used to describe it. Thus, complexity depends on scale. On a microscopic scale, it really is more difficult to describe the positions and velocities of all the molecules of the gas than it is to do the same for all the molecules of the human. But at the scale of human perception, the behaviors of a gas are determined by its temperature and pressure, while the behaviors of a human remain quite complex. Entropy corresponds to the amount of complexity at the smallest scale, but characterizing a system requires understanding its complexity across multiple scales. A system’s
As shown in Figure
Representative complexity profiles for random, coherent, and correlated systems (see Figure
The intuition that complex systems require order is not unfounded: for there to be complexity at larger scales, there must be behaviors involving the coordination of many smaller-scale components. This coordination suppresses complexity at smaller scales because the behaviors of the smaller-scale components are now limited by the interdependencies between them. The tension between small-scale and large-scale complexity can be made precise: given a fixed set of components with a fixed set of potential individual behaviors, the area under the complexity profile will be constant, regardless of the interdependencies (or lack thereof) between the components. More precisely, the sum of a system’s complexity at each scale (i.e., the area under its complexity profile) will equal the sum of each individual component’s complexity [
For instance, consider a factory consisting of many workers [
The complexity profile of a factory that can produce a large number of copies of a few types of goods, and the complexity profile of a factory that can produce many types of goods but not in large numbers. The number of copies of a good produced is a proxy for scale since, given a fixed technology, mass production requires larger-scale coordinated action in the factory (e.g., an assembly line). The number of different types of goods that can be produced at a given scale is a proxy for the number of different possible behaviors of the factory—and thus its complexity—at that scale.
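The conservation of the area under the complexity profile can be illustrated with a toy calculation. The model below is our own illustrative sketch (the function names and the assumption that each of n components has b equally likely behaviors are not from the original analysis): a fully random system concentrates all of its description length at the smallest scale, while a fully coherent system spreads the same description length across every scale, and the total is identical in both cases.

```python
import math

def profile_random(n, behaviors):
    """Complexity profile of n independent components, each with
    `behaviors` equally likely states: all complexity sits at scale 1."""
    return [n * math.log2(behaviors)] + [0.0] * (n - 1)

def profile_coherent(n, behaviors):
    """Fully coordinated system: the same log2(behaviors) bits of
    description apply at every scale from 1 up to n."""
    return [math.log2(behaviors)] * n

n, b = 100, 4
random_sys = profile_random(n, b)
coherent_sys = profile_coherent(n, b)

# The area under each profile (complexity summed over scales) is equal,
# reflecting the tradeoff: coordination moves complexity to larger
# scales but cannot create it for free.
print(sum(random_sys))    # 200.0
print(sum(coherent_sys))  # 200.0
```

A correlated system would lie between these two extremes, with a profile that decreases gradually with scale but encloses the same area.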
A corollary of the tradeoff between complexity and scale is the tradeoff between adaptability and efficiency [
Due to the tradeoff between complexity and scale, any mechanism that creates larger-scale complexity—whether market or government or otherwise—will necessarily reduce individual complexity. This is not to say that larger-scale complexity is always harmful; it is often worth trading some individual-level freedoms for larger-scale cooperation. When, then, is complexity at a particular scale desirable?
A determination of when complexity is desirable is provided by the
Since complexity is defined only with respect to a particular scale, we can refine the Law of Requisite Variety: to be effective, a system must match (or exceed) the complexity of the environmental behaviors to which it must differentially react at all scales for which these behaviors occur [
Schematic complexity profiles of militaries in conflict. (a) If two armies are operating with the same number of possible behaviors but at different scales, the larger-scale one is favored. (b) If two armies are operating at the same scale but with different numbers of possible behaviors, the higher-complexity one is favored. (c) If two armies are operating at different scales and with different numbers of possible behaviors, which one is favored depends on the terrain (see text). Note that these profiles are simplified to highlight the key concepts; actual militaries operate at multiple scales. More generally, (a) and (b) depict conflicts in which one army has at least as much complexity as the other at every scale.
As another example, healthcare involves both small-scale tasks with high overall complexity such as case management and large-scale, lower-complexity tasks such as manufacturing and delivering vaccines [
The eurozone provides a potential illustration of a multiscale complexity mismatch. Fiscal policy is made predominantly at the scale of individual countries and thus has a higher complexity at the country scale but relatively little complexity at the scale of the entire eurozone, while monetary policy is made at the scale of the entire eurozone and thus has some complexity at the scale of the eurozone but lacks the ability to vary (i.e., lacks complexity) at the scale of individual countries. Many have argued that economic difficulties within the eurozone have arisen because this mismatch has precluded effective interactions between fiscal and monetary policy [
Problems arise not from too much or too little complexity (at any scale) per se but rather from mismatches between the complexities of a task to be performed and the complexities of the system performing that task. (Incidentally, human emotions appear to reflect this principle: we are bored when our environment is too simple and overwhelmed when it is too complex [
Even if the complexity of the system matches that of its environment at the appropriate scales, there is still the possibility of a complexity mismatch. Consider two pairs of friends—four people total, each of whom can lift 100 pounds—and consider two 200-pound couches that need to be moved. Furthermore, assume that each person is able to coordinate with her friend but not with either of the other two people. Overall then, the system of people has sufficient complexity at the appropriate scales to move both couches since each pair of friends can lift one of the 200-pound couches. However, were one person from each pair of friends to be assigned to each couch, they would not be able to lift the couches because the two people lifting each couch would not belong to the same pair of friends and thus would not be able to coordinate their actions. The problem here is that while the pairs of friends possess enough overall complexity at the right scales to lift the couches, the subdivision within the system of friends is not matched to the natural subdivision within the system of couches. The mismatch in complexity can be seen if we focus our attention on just a single couch: while the couch requires coordinated action at the scale of 200 pounds, the two people lifting it are capable only of two independent actions, each at the scale of 100 pounds.
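The couch example can be written out as a small consistency check. The names and the coordination rule below are illustrative assumptions: a couch moves only if all of its lifters belong to the same friend group (so their actions can be coordinated into one larger-scale action) and their combined strength meets the couch's weight.

```python
def can_move(assignment, friend_group, strength=100, weight=200):
    """Each element of `assignment` is the set of people lifting one
    couch. Uncoordinated lifters act independently at the 100-pound
    scale, so a mixed group cannot produce one 200-pound action."""
    for lifters in assignment:
        groups = {person: friend_group[person] for person in lifters}
        if len(set(groups.values())) > 1:   # cannot coordinate
            return False
        if strength * len(lifters) < weight:
            return False
    return True

friend_group = {"Ann": 0, "Bea": 0, "Cal": 1, "Dot": 1}

# Matched subdivision: each pair of friends takes one couch.
print(can_move([("Ann", "Bea"), ("Cal", "Dot")], friend_group))  # True
# Mismatched subdivision: one person from each pair per couch.
print(can_move([("Ann", "Cal"), ("Bea", "Dot")], friend_group))  # False
```

Both assignments use the same four people with the same total strength; only the match between the system's subdivisions and the task's subdivisions differs.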
The way in which academic departments are organized provides a more realistic example of the potential for subdivision mismatch. Academia has multiple levels of subdivision (departments, subfields, etc.) in order to organize knowledge and coordinate people, resulting in a high overall degree of complexity across multiple scales, where scale could refer to either the number of coordinated people or the amount of coordinated knowledge, depending on which aspect of the academic system is under consideration. Similarly, there are multiple levels of natural subdivision in the set of problems that academia can potentially address, with each subdivision of problems requiring particular types of coordinated knowledge and effort in order to be solved. Academia’s complexity across multiple scales allows it to effectively work on many of these problems. However, there may exist problems that academia, despite having sufficient overall multiscale complexity, is nonetheless unable to solve because the subdivisions within the problem do not match the subdivisions within academia. The increase in interdisciplinary centers and initiatives over the past few decades suggests the perception of such a mismatch; however, the structure of the academic system as a whole may still hinder progress on problems that do not fall neatly within a discipline or subdiscipline [
The above examples provide an illustration of the principle that in order for a system to differentially react to a certain set of behaviors in its environment, not only must the system as a whole have at least as much complexity at all scales as this set of environmental behaviors (as described in Section
A common way in which systems are organized is through hierarchies. In an idealized hierarchy, there are no lateral connections: any decision that involves multiple components of the hierarchy must pass through a common node under whose control these components all (directly or indirectly) lie. The complexity profile of such a hierarchy depends on the rigidity of the control structure (Figure
Complexity profiles of two hierarchies, each with the same number of people. Here, the scale is the number of coordinated man-hours. In one hierarchy, all decisions, regardless of the scale, are made by a single person, while in the other, different decisions are made at various levels of the hierarchy.
No type of hierarchy is inherently better than any other. For a particular environment, the best hierarchy is one for which the complexity profile matches that of the tasks needed to be performed. A tightly controlled (top-heavy) hierarchy is not well suited to environments in which there is a lot of variation in the systems with which the lower levels of the hierarchy must interact; neither is a very loosely controlled hierarchy well suited to environments that require large-scale coordinated action. For example, centralizing too much power within the US governance system at the federal (as opposed to the local or state) level would not allow for sufficient smaller-scale complexity to match the variation among locales; too decentralized a governance system would not allow for sufficient larger-scale complexity to engage with problems that require nationally coordinated responses. Assigning decisions to higher levels in hierarchies allows for more efficiency and scale but less adaptability and variation.
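The two hierarchy profiles described above can be caricatured numerically. This is a toy sketch only; the per-person bit budget and the profile function are our own illustrative assumptions, not a measurement of any real organization.

```python
def hierarchy_profile(decision_makers_per_scale, bits_per_person=10):
    """Toy complexity profile of a hierarchy: the complexity available
    at each scale is the number of people making decisions at that
    level times the decision-making capacity (in bits) of each."""
    return [n * bits_per_person for n in decision_makers_per_scale]

# Scales run from smallest (many local decisions) to largest (one
# organization-wide decision).
top_heavy   = hierarchy_profile([1, 1, 1, 1])   # one person decides everything
distributed = hierarchy_profile([8, 4, 2, 1])   # decisions made at every level

print(top_heavy)     # [10, 10, 10, 10]
print(distributed)   # [80, 40, 20, 10]
```

The top-heavy hierarchy has the same limited complexity at every scale (capped by its single decision-maker), while the distributed hierarchy has far more smaller-scale complexity to match local variation while retaining the same largest-scale capacity.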
We should also consider not just the overall complexity profile of governance systems but how well the subdivisions in governance systems match those within their territories (Section 2.6.). Metropolitan areas are in some ways more similar to one another than they are to the rural areas of their respective states. So while dividing the US into 50 states provides substantial lower-scale governmental complexity, this complexity is not necessarily well matched to natural urban-rural divides. To the extent that such a mismatch exists, there may be issues currently handled at the state level that would be better handled at the local level, thereby allowing for different policies in urban and rural areas (and likewise, perhaps some of the powers that some argue should be devolved from the federal to the state level should in fact be devolved to the local level).
It is important to distinguish between the complexity of a hierarchy and the complexity of the decisions that the people within the hierarchy are capable of making. For instance, one could design a tightly controlled hierarchy that could take a large number of large-scale actions (i.e., high complexity at its largest scale), but since the decision-making abilities of even the most capable humans are of finite complexity, the individuals at the top may be fundamentally unable to correctly choose from among these actions. This brings us to an important limitation of hierarchies: the complexity of the decisions concerning the largest-scale behaviors of a hierarchy—the behaviors involving the entire organization—is limited by the complexity of the group of people at the top [
We began by considering idealized hierarchies with only vertical connections, but lateral connections provide another mechanism for enabling larger-scale behaviors. For instance, cities can interact with one another (rather than interacting only with their state and national governments) in order to copy good policies and learn from each other’s mistakes. Through these sorts of evolutionary processes (described further in Section
The previous section has examined some of the general properties of systems with many components. But how do we study particular systems? How do we analyze data from complex systems, and how do we choose which data to analyze?
In a sense, it is surprising that we can understand any macroscopic system at all, as even a very simple mechanical system has trillions upon trillions of molecules. We are able to understand such systems because they possess a
A complexity profile of a system with a separation of scales. A separation of scales implies that the behaviors occurring below a certain scale (
More generally, the approach described above is an example of a
The systems for which mean-field theory applies exhibit large-scale behaviors that are the average of the behaviors of their components. They must possess a separation of scales, which arises when the statistical fluctuations of their components are sufficiently independent from one another above a certain scale. Mean-field theory may hold even in the presence of strong interactions, so long as the effect of those strong interactions can be captured by the average behavior of the system—that is, so long as each component of the system can be modeled as if it were interacting with the average (i.e., mean field) of the system. For example, the large-scale motion of solids is well described by mean-field theory, even though the molecules in a solid interact with one another quite strongly, because the main effect of these interactions is to keep each molecule at a certain distance and orientation from the average location (center of mass) of the solid. Likewise, under some (but certainly not all) conditions, economic markets can be effectively described by modeling each market actor as interacting with the aggregate forces of supply and demand rather than with other individual market actors.
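Why averaging works for independent components, and fails for strongly correlated ones, can be seen in a small simulation (an illustrative sketch with assumed parameters): when components fluctuate independently, the system's mean barely fluctuates at all, so describing each component as interacting with that stable mean is reasonable; when every component copies a single shared fluctuation, the mean fluctuates as much as any one component.

```python
import random
import statistics

random.seed(1)
n_components, trials = 2500, 400

# Independent components: their +/-1 fluctuations average out, so the
# large-scale (mean) behavior is nearly constant across trials.
independent = [
    statistics.fmean(random.choice((-1, 1)) for _ in range(n_components))
    for _ in range(trials)
]

# Perfectly correlated components: all copy one coin flip, so the mean
# fluctuates as much as a single component does.
correlated = [random.choice((-1, 1)) for _ in range(trials)]

print(statistics.stdev(independent))  # small, on the order of 1/sqrt(2500)
print(statistics.stdev(correlated))   # on the order of 1
```

The first case is the regime in which a mean-field description is adequate; the second previews the breakdown discussed next, in which correlations make the mean itself an unreliable summary.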
However, when there are sufficiently strong correlations between the components of the system, i.e., when the interactions between a component of the system and a specific set of other components (as opposed to its general interaction with the rest of the system) cannot be neglected, mean-field theory will break down. These systems will instead exhibit large-scale behaviors that arise not solely from the properties of individual components but also from the relationships between components. For example, while the behavior of a muscle can be roughly understood from the behavior of an individual muscle cell, the behavior of the human brain is fundamentally different from that of individual neurons, because cognitive behaviors are determined largely by variations in the synapses
Because their small-scale random occurrences are not statistically independent, complex systems often exhibit large-scale fluctuations not predicted by mean-field theory, such as forest fires, viral content on social media, and crashes in economic markets. Sometimes, these large-scale fluctuations are adaptive: they enable a system to collectively respond to small inputs [
When the components of a system are independent from one another above a certain scale, then at much larger scales, the magnitudes of the fluctuations of the system follow a normal distribution (bell curve), for which the mean and standard deviation are well defined and for which events many standard deviations above the mean are astronomically improbable. Interdependencies, however, can lead to a distribution of fluctuations in which the probability of an extreme event, while still small, is not astronomically so. Such distributions are characterized as
A normal distribution (thin-tailed) and a distribution with a power-law decay (fat-tailed). The fat-tailed distribution may appear more stable, due to the lower probability of small-scale fluctuations and the fact that samples from the distribution may not contain any extreme events. However, sooner or later, a fat-tailed distribution will produce an extreme event, while one could wait thousands of lifetimes of the universe before a normal distribution produces a similarly extreme event. Note that the axes of this graph are truncated; the illustrated fat-tailed distribution can, with small but nonnegligible probability (0.04%), produce events with a scale of one million or more.
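The contrast between the two tails can be made concrete with a short calculation. The Pareto exponent below is an arbitrary illustrative choice, not fitted to the figure; the point is the qualitative gap between the tail probabilities.

```python
import math

def normal_tail(x, sigma=1.0):
    """P(X >= x) for a zero-mean normal distribution with std dev sigma."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2)))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X >= x) for a Pareto (power-law, fat-tailed) distribution."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

# An event at 10 standard deviations / 10 units of scale:
print(normal_tail(10))   # astronomically improbable
print(pareto_tail(10))   # about 1 in 100: rare, but not astronomically so
```

For the normal distribution, such an event is so improbable that it would essentially never be observed, while the fat-tailed distribution produces it routinely on long enough timescales.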
One danger of interdependencies is that they may make systems appear more stable in the short term by reducing the extent of small-scale fluctuations, while actually increasing the probability of catastrophic failure [
Because it is usually easier to collect data regarding components of a system than it is to collect data regarding interactions between components, studies often fail to capture the information relevant to complex systems, since complex large-scale behaviors critically depend on such interactions. Furthermore, as discussed in Section
How can we understand the systems for which these standard approaches do not apply? Our understanding of
Sound is another example: all materials, regardless of their composition, allow for the propagation of sound waves. Sound behaves so similarly in all materials because at the length scales relevant to sound waves, which are far larger than the sizes of individual atoms and molecules, the effect of the microscopic parameters is merely to set the speed of the sound. Note that sound waves cannot be understood as a property of the average behavior—in this case, average density—of a material, since it is precisely the systematic correlations in the deviations from that average that give rise to sound. Nor is sound best understood by focusing on the small-scale details of atomic motion: scientists understood sound even before they learned what atoms are. The key to understanding sound waves is to recognize that they have a multiscale structure—with larger-scale fluctuations corresponding to lower frequencies and smaller-scale fluctuations corresponding to higher frequencies—and to model them accordingly.
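The multiscale description of a wave can be sketched computationally: a naive discrete Fourier transform reduces hundreds of sample values to a handful of frequency components, each corresponding to fluctuations at a particular scale. The signal parameters below are illustrative assumptions.

```python
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform: normalized magnitude of each
    frequency component up to the Nyquist frequency."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im) / n)
    return mags

n = 256
# A toy "sound wave": one large-scale (low-frequency) component plus one
# smaller-scale (high-frequency) component.
signal = [
    math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 40 * t / n)
    for t in range(n)
]

mags = dft_magnitudes(signal)
peaks = [k for k, m in enumerate(mags) if m > 0.1]
print(peaks)  # [3, 40]: 256 samples reduce to two frequency components
```

Note that the frequency description captures exactly the systematic correlations in the deviations from the average: the average of this signal is zero, yet its multiscale structure is fully specified by two numbers.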
Lim et al. apply this approach to studying ethnic violence [
Understanding all the details of any complex system is impossible, just as it is for most systems with a separation of scales; there is just too much complexity at the smallest scale. However, unlike the behaviors of systems with a separation of scales, the important large-scale behaviors of complex systems are not simply the average of their small-scale behaviors. The interdependencies at multiple scales can make it difficult or impossible to precisely understand how small-scale behaviors give rise to larger-scale ones, but even for complex systems, there is much less complexity at the larger scales than there is at the smaller scales. Thus, there will always be large-scale behaviors that do not depend on most of the system’s details (see Figure
A figure from Lim et al.’s paper on ethnic violence [
A representative complexity profile of a complex system. Understanding all the details (i.e., all of the small-scale behaviors) is impossible and unnecessary; the most important information is contained in the large-scale behaviors. However, for systems for which mean-field theory does not apply, characterizing these behaviors will involve more than a simple average.
Although the principles discussed throughout Sections
Given the absence of perfect knowledge, how can the success of systems we design or are part of be assured? While the success of many systems rests on the assumption that good decisions will be made, some systems do not depend on individual understanding and can perform well in spite of the fallibility of decision-makers (whether due to corruption, subconscious bias, or the fundamental limitations of human minds). The study of complex systems approaches this observation scientifically by (implicitly or explicitly) considering the decision-makers themselves as part of the system and of limited complexity/decision-making ability. The question thus becomes: how do we design systems that exceed the complexity of the decision-makers within them?
While uncertainty makes most systems weaker, some systems benefit from uncertainty and variability [
Competitive market economies provide another example of how systems can thrive on uncertainty. Due to our ignorance of which will succeed, many potential innovations and businesses must be created and improved upon in parallel, the successful ones expanding and the unsuccessful ones failing. The successful among these can then be improved upon in the same manner—with many approaches being applied at once—and so on. (However, without effectively regulated multiscale cooperative frameworks—see Section 4.2.—large-scale parts of the economic system may optimize for the wrong goals, settling into harmful societal equilibria [
Likewise, the internal processes of large organizations may follow an evolutionary pattern in which small parts of the organization can fail and thus be improved upon; without such flexibility, the entire organization may fail at once in the face of a changing internal or external environment. In some cases, the failure of the entire organization makes room for more effective organizations to take its place (assuming the economy is sufficiently decentralized and competitive so that the organization in question is not “too big to fail”). The collapse of government is generally not one of those cases, however [
In order to thrive in uncertainty and exceed the complexity of individual decision-making, systems can incorporate evolutionary processes so that they, even if very limited at first, will naturally improve over time. The first step is to allow for enough variation in the system, so that the system can explore the space of possibilities. Since a large amount of variation means a lot of complexity and complexity trades off with scale (Section
The second step is to allow for a means of communication between various parts of the system so that successful choices are adopted elsewhere and built upon (e.g., cities copying the successful practices of other cities). Plans will always have unintended consequences; the key is to allow unintended consequences to work for rather than against the system as a whole. Systems can explicitly design only systems of lesser complexity since an explicit design is itself a behavior of the first system. However, systems that evolve over time can become more complex than their designers. The desire for direct control must therefore be relinquished in order to allow complexity to autonomously increase over time.
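These two steps, variation and the copying of successful choices, can be sketched as a minimal evolutionary process. The objective function and all parameters below are hypothetical; the point is that the system improves over time even though no individual component ever computes the optimum directly.

```python
import random

random.seed(0)

def fitness(practice):
    """Hypothetical measure of how well a practice works; no agent in
    the system needs to understand this function explicitly."""
    return -sum((x - 0.7) ** 2 for x in practice)

# Step 1: variation -- many parts of the system try different practices
# in parallel.
population = [[random.random() for _ in range(5)] for _ in range(20)]

for generation in range(200):
    # Step 2: communication -- less successful parts copy a successful
    # practice, then add their own small variation to it.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [
        [x + random.gauss(0, 0.05) for x in random.choice(survivors)]
        for _ in range(10)
    ]

best = max(population, key=fitness)
print(-fitness(best))  # near zero: improved without any central designer
```

Because unsuccessful variants fail at a small scale while successful ones spread, the system as a whole accumulates complexity that none of its parts, nor its designer, had to specify in advance.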
Successful evolutionary processes generally do not consist of unbridled competition but rather contain both competition and cooperation, each occurring at multiple scales [
An illustration from Chapter 7 in [
In order to promote effective group cooperation, competition must be properly structured. A soccer team in which the players compete with their own team members to score goals will not be effective, but one in which the players compete for the title of the most fit may be. The framework in which competition occurs must be structured so that the competitors are incentivized to take actions that are net good for the group; otherwise, a kind of tragedy-of-the-commons situation occurs. The potential for competition to go awry highlights the importance of having a multiscale structure with competition occurring on multiple levels, rather than having everyone in the system compete with everyone else. With the multiscale structure, groups with unhealthy evolutionary dynamics are selected against, while groups with a healthy mix of competition and cooperation that benefits the entire group are selected for. There is evidence that the geographic nature of evolution—in which organisms evolve in somewhat separated environments and mean-field theory does not apply—has resulted in precisely this multiscale structure and has therefore allowed for the evolution of genuine (e.g., not reciprocal) altruistic behavior [
Complex systems science, also known as complexity science, contains many subfields. One starting point for exploring complex systems more broadly is this clickable map [
Systems with many components often exhibit emergent large-scale behaviors that cannot be directly inferred from the behaviors of their components. However, an early insight of statistical physics is that in spite of the impossibility of describing the details of trillions of molecules, the macroscopic properties of the molecules can be well understood by analyzing their space of possible behaviors, rather than their specific configurations and motions. While many macroscopic properties can be described in terms of the average behaviors of the molecules, the macroscopic properties of certain physical phenomena, such as phase transitions, cannot be understood by averaging over system components; accordingly, physicists were forced to develop new, multiscale methods. Likewise, while standard statistical methods—which infer the average properties of a system’s many components—can successfully model some biological and social systems, they fail for others, sometimes spectacularly so.
Taking a systemic view by considering the space of possible behaviors can yield insights that cannot be gleaned by considering only the proximate causes and effects of particular problems or crises. A system’s complexity—which depends on its number of distinct potential behaviors (i.e., on the space of possibilities)—is a starting point from which to get a handle on its large-scale properties, in the same way that entropy is the starting point for statistical physics. Because the number of distinct behaviors of a system depends on the level of detail (behaviors that appear the same at lower resolution may be distinct at higher resolution), complexity depends on scale. Interdependencies between components reduce complexity at smaller scales by restricting the freedom of individual components while creating complexity at larger scales by enabling behaviors that involve multiple components working together. Thus, for systems that consist of the same components, there is a fundamental tradeoff between the number of behaviors at smaller and larger scales. This tradeoff among scales is related to the tradeoff between a system’s adaptability, which depends on the variety of different responses it has to internal and external disturbances, and its efficiency, which depends on its operating scale. There is no ideal scale at which a system should possess complexity; rather, the most effective systems are those that at each scale match the complexity of their environments.
When analyzing data or creating organizational structures, standard methods fail when they underestimate the importance of interdependencies and the complexity that arises from these interdependencies. To some extent, these problems can be mitigated by matching the data analysis or organizational structure to natural divisions within the system of interest. Since complex systems are those for which behaviors occur over multiple scales, successful organizations and analyses for complex systems must also be multiscale in nature. However, even when armed with all the proper information and tools, human understanding of most complex systems will inevitably fall short, with unpredictability being the best prediction. To confront this reality, we must design systems that are robust to the ignorance of their designers and that, like evolution, are strengthened rather than weakened by unpredictability. Such systems are flexible with multiple processes occurring in parallel; these processes may compete with one another within a multiscale cooperative framework such that effective practices are replicated. Only these systems—that grow in complexity over time from trial and error and the input of many—exhibit the necessary complexity to solve problems that exceed the limits of human comprehension.
The authors declare that they have no conflicts of interest.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant no. 1122374 and by the Hertz Foundation. The authors thank Uyi Stewart for discussions that led to the writing of this paper, Gwendolyn Towers for editing early drafts of the manuscript, and Robi Bhattacharjee for helpful discussions regarding complexity and scale.