A complex network framework to model cognition: unveiling correlation structures from connectivity

Several approaches to cognition and intelligence research rely on statistical model testing, most notably factor analysis. In the present work we exploit the emerging dynamical systems perspective and focus on the role of the network topology underlying the relationships between cognitive processes. We examine two models of distinct cognitive phenomena and find the conditions under which they are mathematically equivalent. We find a non-trivial attractor of the system that corresponds to the exact definition of a well-known network centrality, and hence stress the interplay between the dynamics and the underlying network connectivity, showing that both are relevant. The connectivity structure between cognitive processes is not known, but it is not arbitrary either. Regardless of the network considered, it is always possible to recover a positive manifold of correlations. However, we show that different network topologies lead to different plausible statistical models of the correlation structure, ranging from single- to multiple-factor models and richer correlation structures.


Introduction
Individuals differ from one another in their ability to learn from experience, to adapt to new situations and overcome challenges, to understand simple to complex ideas, to solve real-world and abstract problems and to engage in different forms of reasoning and thinking. Such differences in performance occur even in the same person, in different domains, across time and distinct yardsticks [1][2][3][4].
The concept of intelligence is intended to clarify and bring together these complex cognitive processes. Although many advances have been made, open questions regarding its building blocks and nature remain unsolved [5][6][7].
There is a substantial amount of research, past and ongoing, on the theory of intelligence, and few statements have been unequivocally established. Furthermore, it is worth noting how easily ongoing research translates into public policy, despite prevailing disputes and great unknowns [6,8,9]. For this reason, there is an urgent need to understand the most important root causes, validate existing theories and shed light for those responsible for educational, social and health decision-making.
Nowadays, there is a significant number of approaches to intelligence. Developmental psychologists are often more concerned with intelligence as a subset of evolving processes throughout life than with individual differences [10]. Several theorists stress the role of culture in the very conceptualization of intelligence and its influence on individuals [11], while others point to the existence of different intelligences, either measurable or not [12]. There is also increasing interest in contributions coming from biology and neuroscience [13][14][15][16][17]. Yet the most influential approach so far is based on psychometric testing [18][19][20][21][22][23][24]. Psychometrics has enabled successful and systematic measurement of a wide range of cognitive abilities, such as verbal, visual-spatial, fluid reasoning, working memory and processing speed, through standardized tests [25,26]. Even if distinct, these assessed abilities turn out to be intercorrelated rather than autonomous prowesses. That is, people who perform well on a given test tend to obtain higher scores on the others as well. This well-documented evidence of positive correlations between tests, regardless of their nature, is called the positive manifold. Precisely because of the existence of such complex relations, one of the main aims of this approach is to unveil the structure that best describes the relationships between the distinguishable factors or aptitudes that may exist. On this basis, many studies use exploratory and confirmatory factor analysis techniques, starting from between-test correlation matrices.
Furthermore, the complex correlation structure between abilities may unveil the underlying connections between cognitive processes. Factor analysis may help clarify such patterns, yet it also raises discussion on the meaning of the outcome.
A brief historical overview of intelligence research since its early days may help us understand the spectrum of existing models. Some theorists relied on the shared variance among abilities, which Charles Spearman, pioneer of factor analysis, called the g factor or general intelligence [27]: one common factor that explains most of the variance within a population and acts as the source of improvement or decline of all other abilities. It remains a cause of controversy.
Alternatively, hierarchical models of intelligence, in which each layer accounts for the variations in the correlations within the previous one, were also well accepted [18,28,29].
Nevertheless, a fair number of scholars argued against theories of cognitive abilities or intelligence drawn upon the concept, measurement and meaning of general intelligence. Notably, Howard Gardner stated that an individual has a number of relatively autonomous intellectual capacities, with a degree of correlation empirically yet to be determined, called multiple intelligences, among which non-cognitive abilities are included [12].
Two different approaches to the relationship between observable variables and attributes or constructs prevail in present research and theorizing in psychology, but also in clinical psychology, sociology and business research, among others: formative and reflective models [30]. In the former conceptualization, observed scores define the attribute, whereas in the latter, the attribute is considered the common cause of all observables. As an example, the classic definition of general intelligence falls into the reflective class. Likewise, in clinical psychology, a mental disorder may be thought of as a reflective construct that brings about its observable symptoms [31]. Correlations between observables might therefore be due to their underlying common cause. Conversely, socio-economic status (SES), understood as the aggregate outcome of education, job, neighbourhood and salary, is a standard example of a formative model.
A more recent approach aims to combine distinct possible factor models by using only the information about the factorial structure found by each study [32].
Both formative and reflective models, along with similar alternatives, elicit discussion regarding two different issues: a first source of debate is rooted in the meaning and interpretation of such models, while a second stems from disregarding the role of time; that is, the dynamics of the system is not explicitly considered.
The above-mentioned problems can potentially be overcome if we consider that variables, i.e. observables, scores or indicators, are characteristics of nodes in a network. These nodes are directly connected through edges, which reflect the coupling between variables. Dynamical systems theory is therefore the proper framework to formalize and study the behaviour of such systems [33]. Starting from an initial state, the system evolves in time according to a system of coupled differential equations and eventually reaches an attractor state. Noteworthy, a substantive piece is often missing: the topology of the network on which the process takes place may be a determining factor that enables nodes to communicate with each other and brings about correlations not explicitly enforced in the model. Therefore, the objective and contribution of the present work is to explore the significant role of the network topology, or connectivity structure, between the variables deemed meaningful in models of cognitive abilities or intelligence.
In this work we evince the tight connection between a centrality measure of the network and the stable solution of the studied models. Moreover, we show that distinct network topologies may explain different correlation structures.
The paper is organized as follows: Section 2 introduces basic notions of networks and the explored topologies. Section 3 describes and formalizes the two studied models of cognition. Sections 4 and 5 go through the main results, concerning dynamics and correlations, respectively. The final discussion and conclusions are presented in the last section. Further mathematical methods can be found in the Appendix.

Network topology
A network, G(V, E), is a collection of vertices or nodes, V (G), linked by edges, E(G), which are given meaning and attributes. Networks can describe complex interconnected systems such as social relationships, transportation maps, economic, biological and ecological systems among others. We consider networks that have neither self-edges nor multiedges, called simple networks [34].
The adjacency matrix of G, written A(G), is the N-by-N matrix whose entry A_ij equals 1 if node i is linked to node j, and 0 otherwise. Networks can be directed or undirected, although we restrict ourselves to the latter case.
The topology of a network characterizes its shape or structure and the distribution of connections between nodes. Besides the attributes of nodes and edges, the topology of a network determines its main properties and makes it distinguishable from others. One main property is the degree of a node i, k_i, which is the number of edges connected to it. Although networks may describe particular real systems, regardless of their nature they can be classified into one of a few well-known families of networks. Below we briefly describe the four network models explored in the present work.

(a) Complete network
Within the family of deterministic networks, a complete network is characterized by its nodes being fully connected, that is, each node is connected to all the others, such that all off-diagonal elements of the adjacency matrix are equal to 1, A_ij = 1 ∀ i ≠ j.
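As a minimal illustration (the size N is an arbitrary choice), the defining property and the resulting degrees can be checked with networkx:

```python
import networkx as nx
import numpy as np

N = 5
G = nx.complete_graph(N)              # simple network: no self- or multi-edges
A = nx.to_numpy_array(G)

# all off-diagonal entries equal 1, the diagonal is 0, and every degree is N - 1
assert np.all(A[~np.eye(N, dtype=bool)] == 1)
assert np.all(np.diag(A) == 0)
assert all(d == N - 1 for _, d in G.degree())
```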

(b) Erdös-Rényi network
One of the most renowned random network models is the Erdös-Rényi (ER) model [35]. Given the number of nodes, N, and the probability of an edge, p, this model, G(N, p), includes each of the possible edges with probability p. However, real networks are generally better described by heterogeneous rather than ER networks. Therefore, ER networks are often used as a null model to reject or accept hypotheses concerning more complex situations.
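A quick sketch of the construction (N and p are illustrative values), checking that the mean degree is close to its expectation ⟨k⟩ = p(N − 1):

```python
import networkx as nx
import numpy as np

N, p = 1000, 0.05
G = nx.fast_gnp_random_graph(N, p, seed=0)   # the G(N, p) model

degrees = np.array([d for _, d in G.degree()])
print(degrees.mean())                        # close to p * (N - 1) = 49.95
```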

(c) Heterogeneous network
There is a wide range of networks coming from real systems (either found in nature or human-made) whose topology is far from homogeneous; rather, they exhibit degree distributions characterized by a power law, also called scale-free when the networks are large enough [36]. The Internet, protein regulatory networks, research collaborations, online social networks, airline systems, cellular metabolism and interlinked companies and industries are a few examples [37,38].
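One common way to obtain such a network (a sketch; the exponent, minimum degree and size are illustrative assumptions, not values from the text) is the configuration model with power-law distributed degrees, discarding self-loops and multiedges afterwards to keep the network simple:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
N, gamma, k_min = 2000, 2.5, 2

# approximate discrete power-law sample, P(k) ~ k^(-gamma), via inverse transform
k = np.floor(k_min * (1 - rng.random(N)) ** (-1 / (gamma - 1))).astype(int)
if k.sum() % 2:                      # the configuration model needs an even stub count
    k[0] += 1

G = nx.configuration_model(k.tolist(), seed=1)
G = nx.Graph(G)                      # collapse multiedges
G.remove_edges_from(nx.selfloop_edges(G))

degs = [d for _, d in G.degree()]    # heavy tail: a few hubs dominate
```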

(d) Newman modular network
In addition to the degree distribution, another important feature is the presence of communities or modules within a network, mainly in social, but also in metabolic or economic networks [39,40]. A module or community can be defined as a subset of nodes which is more densely linked within it than with other subsets of nodes.
One particular method to generate such modules within a network is the Newman model, which distributes the nodes into a number, N_modules, of modules that are not necessarily isolated from the others [41][42][43]. Similarly to ER networks, an edge between a pair of nodes belonging to the same community is created with probability p_in, whereas pairs belonging to different communities are linked with probability p_out. In the model, the number of nodes, N, the total average degree, k, and k_in, the average degree within a community, are fixed. Hence, p_in and p_out are given by:
\[ p_{in} = \frac{k_{in}}{n_{in} - 1}, \qquad p_{out} = \frac{k - k_{in}}{n_{out}}, \tag{1} \]
where n_in ≡ N/N_modules and n_out ≡ N − n_in.
As k in grows the network modularity increases [44], that is, the communities become easier to identify.
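A sketch of this generator, under the parametrization described above (p_in = k_in/(n_in − 1) and p_out = (k − k_in)/n_out are taken as assumptions consistent with the fixed N, k and k_in; all numeric values are illustrative):

```python
import numpy as np
import networkx as nx

def newman_modular(N, N_modules, k, k_in, seed=0):
    """Modular random network: denser within communities than between them."""
    rng = np.random.default_rng(seed)
    n_in = N // N_modules
    n_out = N - n_in
    p_in = k_in / (n_in - 1)
    p_out = (k - k_in) / n_out
    community = np.repeat(np.arange(N_modules), n_in)
    G = nx.Graph()
    G.add_nodes_from(range(N))
    for i in range(N):
        for j in range(i + 1, N):
            p = p_in if community[i] == community[j] else p_out
            if rng.random() < p:
                G.add_edge(i, j)
    return G, community

G, community = newman_modular(N=200, N_modules=4, k=12, k_in=9)
degs = [d for _, d in G.degree()]    # mean degree should be close to k = 12
```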

Models of cognition
Within the framework of dynamical systems and network theory, there exists a one-to-one map between variables and nodes, such that variable i is represented by node i. The value of variable i, x_i, is set as an attribute of its corresponding node. In this space, the adjacency matrix, which maps the interactions between variables onto a network, can lump exogenous effects together in a very compact way:
\[ \dot{x}_i = F_i(x_i, t) + \sum_{j} A_{ij}\, G_{ij}(x_i, x_j, t). \tag{2} \]
Equation (2) is the most general expression whose integration determines the temporal evolution of each variable, x_i(t). F_i(x_i, t) accounts for endogenous effects, i.e. a function that depends only on variable x_i. G_ij(x_i, x_j, t) takes into account exogenous effects on i, i.e. a function that describes the influence of its neighbouring variables, x_j. The intensity of each individual interaction is included in G_ij in the form of weights. The adjacency matrix, A, determines whether variables are coupled: if variables i and j are directly linked, then the corresponding element A_ij = 1; otherwise, A_ij = 0. In addition, A, F and G can, in general, depend explicitly on time.
Two models are addressed in this work: a networked dynamical model to explain the development of excellent human performance [45] and a dynamical model of general intelligence [46], both sharing great resemblance (Section 4).

Model A: a networked dynamical model to explain the development of excellent human performance
Ruud J. R. Den Hartigh et al. [45] were interested in the excellent level of performance achieved by some individuals across different domains. They argued that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among them, such that excellence emerges out of the network formed by genetic endowment, motivation, practice and coaching, inter alia. They attempted to render well-known characteristics of abilities leading to excellence: the absence of early indicators of ultimate exceptional abilities, the fact that a similar ability level may be shifted in time between individuals, the change of abilities during a person's life span, and the existence of unique pathways to excellence, that is, individuals may have diverse ways to achieve it.
They considered a networked dynamical model which can be mathematically defined as a set of coupled logistic growth equations, each of which represents the growth of a single variable, one of them being the domain-specific ability. The growth of a variable depends on the level already attained, available resources that remain relatively constant during development (K_i), resources that vary on the time scale of ability development, the degree to which a variable profits from the constant resources (r_i) and a general limiting factor (C): the ultimate carrying capacity, which captures the physical limits of growth. Moreover, W_ij accounts for the effect of variable x_i on x_j.
Using (2), model A can be written as:
\[ \dot{x}_i = x_i \left[ r_i \left(1 - \frac{x_i}{K_i}\right) + \sum_j A_{ij} W_{ij}\, x_j \right] \left(1 - \frac{x_i}{C}\right). \tag{3} \]
Equation (3) is better understood as a modified logistic growth:
\[ \dot{x}_i = \frac{r_i}{K_i}\, x_i \left( K_i + \frac{K_i}{r_i} \sum_j A_{ij} W_{ij}\, x_j - x_i \right) \left(1 - \frac{x_i}{C}\right). \tag{4} \]
Figure 1 shows three different possible temporal evolutions of the system, determined by both the topology of the connections between variables and the parameters of the dynamical model.
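A minimal numerical sketch of model A in the form written above (coupled logistic growth plus mutualistic input, capped by the ultimate capacity C); the network, weights and parameter values are illustrative assumptions, not the published ones:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 10
rng = np.random.default_rng(2)
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # simple undirected network
W = 0.02 * A                                  # homogeneous positive weights
r, K, C = 1.0, 1.0, 5.0

def rhs(t, x):
    growth = r * (1 - x / K) + W @ x          # logistic term plus mutualistic input
    return x * growth * (1 - x / C)           # capped by the ultimate capacity C

sol = solve_ivp(rhs, (0, 200), np.full(N, 0.05), rtol=1e-8)
x_final = sol.y[:, -1]
```

For couplings this weak the trajectories settle below C, on the state solving x_i = K + (K/r) Σ_j W_ij x_j.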

Model B: dynamical model of general intelligence
Han L. J. van der Maas et al. [46] were concerned with the conceptualization and models of intelligence, or the system of cognitive abilities, by means of a general latent factor, as widely stated. They proposed an alternative explanation of the positive manifold based on a dynamical model built upon mutualistic interactions between cognitive processes, such as perception, memory, decision and reasoning, which are captured by psychometric test scores to some extent. Such connections between items bring about another plausible explanation of the existence of one common factor, which thus need not correspond to an imposed latent process or actual quantitative variable, such as speed of processing or brain size. Inspired by the Lotka-Volterra models commonly used in population dynamics [47,48], they proposed to model the cognitive system as a developing ecosystem with primarily cooperative relations between cognitive processes. Variables (x_i) represent the distinct cognitive processes, whose growth function is parametrized by the steepness of the growth (r_i) and the limited resources available to each process (K_i). The matrix W contains the relations between pairs of processes, which they assume positive, i.e. the cognitive processes involved have mutually beneficial interactions.
Starting from uncorrelated initial conditions and parameters, that is, following uncorrelated random distributions, the dynamical connections between variables gradually lead the system to specific correlation patterns.
Using (2), model B can be written as:
\[ \dot{x}_i = r_i x_i \left(1 - \frac{x_i}{K_i}\right) + \frac{r_i x_i}{K_i} \sum_j A_{ij} W_{ij}\, x_j. \tag{5} \]
Equation (5) is better understood as a modified logistic growth:
\[ \dot{x}_i = \frac{r_i}{K_i}\, x_i \left( K_i + \sum_j A_{ij} W_{ij}\, x_j - x_i \right). \tag{6} \]
Section 4 stresses the resemblance between (4) and (6).
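A matching sketch for model B's mutualism dynamics, checking that the trajectories converge to the linear stable state x = (I − W)⁻¹K discussed later; network and parameters are again illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 10
rng = np.random.default_rng(3)
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # simple undirected network
W = 0.05 * A
r, K = 1.0, 1.0

def rhs(t, x):
    # logistic growth plus mutualistic interactions scaled by K
    return r * x * (1 - x / K) + r * x * (W @ x) / K

sol = solve_ivp(rhs, (0, 200), np.full(N, 0.05), rtol=1e-8)
x_final = sol.y[:, -1]
x_pred = np.linalg.solve(np.eye(N) - W, np.full(N, K))   # solves x = K + W x
```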

Interplay between dynamics and network topology
A dynamical model which captures the network structure of the connections between variables using an expression similar to (2) enables further analysis of the process considering the effect of topology, embodied in the adjacency matrix, A. Neither of the two described models is geared toward a particular cognitive architecture or brain model with regard to connectivity structure. Rather, much effort is devoted to understanding the effect of non-zero correlations between the parameters of the models, or of a heterogeneous landscape of parameters. The former approach, however, requires certain constraints which, in general, may not be easy to prove. Alternatively, we consider the dynamical model to be parametrized by a homogeneous configuration, i.e. all nodes with equally fixed parameters, and explore the role of different connectivity structures between variables, which can be mapped onto a network. Although a dynamical model describes the temporal evolution of several variables, we are usually concerned with the final state.

Mapping between models through weights rescaling
The space of parameters is large and hence so is the number of possible stable states. However, we focus our interest on solutions given by a unique analytical expression. Therefore, depending on the stability conditions (Section 4.3), we can distinguish two such stable states. For model A, an optimal stable solution, x(C), is achieved when all variables reach the maximum allowed value:
\[ x_i(C) = C \quad \forall i, \tag{7} \]
where we assume K_i < C. Otherwise, the final state, x(W_d), is determined by the matrix expression
\[ x(W_d) = (\mathbb{1} - W_d)^{-1} K. \tag{8} \]
Equation (8) captures the entanglement between the network topology, W, and the parameters of the dynamical model. The influence between variables i and j is thus rescaled by the carrying capacity, K_i, and growth rate, r_i, as follows:
\[ (W_d)_{ij} \equiv \frac{K_i}{r_i}\, W_{ij}. \tag{9} \]
All intermediate states, which lie in the transition between the metric and optimal stable states, are called mixed stable states. Analogously, for model B, there is one unique stable state, x(W):
\[ x(W) = (\mathbb{1} - W)^{-1} K. \tag{10} \]
Expressions (8) and (10), referring to the stable states of models A and B, respectively, are equivalent under rescaling (9). Moreover, we highlight the absence of the initial conditions in the attractor state. The case where parameters are constant across variables, i.e. K_i ≡ K ∀i, r_i ≡ r ∀i and W_ij ≡ w ∀(i, j), enables an explicit average solution to (10). An individual can be characterized by the average of the achieved values of all variables. We follow the notation
\[ \bar{x} \equiv \frac{1}{N} \sum_{i=1}^{N} x_i. \tag{11} \]
For a complete network,
\[ \bar{x} = \frac{K}{1 - w\,(N-1)}. \tag{12} \]
For an Erdös-Rényi network (Appendix A), the mean-field result replaces N − 1 by the average degree,
\[ \bar{x} \approx \frac{K}{1 - w\,\langle k \rangle}, \tag{13} \]
with a correction that grows with Var[k], the variance of the node degrees. Average solutions to (8) for a complete network and an Erdös-Rényi network are equivalent to (12) and (13), respectively, with w_d ≡ (K/r) w.
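The complete-network average (12) can be checked numerically against the linear stable state (10); N, K and w below are illustrative values satisfying w(N − 1) < 1:

```python
import numpy as np

N, K, w = 20, 1.0, 0.02
W = w * (np.ones((N, N)) - np.eye(N))          # complete network, no self-coupling
x = np.linalg.solve(np.eye(N) - W, np.full(N, K))

# by symmetry every component equals the average K / (1 - w (N - 1))
x_mean_pred = K / (1 - w * (N - 1))
print(x.mean(), x_mean_pred)
```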

Katz-Bonacich centrality as stable state
Centrality measures seek the most important or central nodes in a network [34]. Among the many possible centralities, the generalized Katz-Bonacich centrality [49] is given by the solution of
\[ x_i = \alpha \sum_j A_{ij}\, x_j + \beta_i. \tag{14} \]
Solving (14), the vector x of centralities is given by
\[ x = (\mathbb{1} - \alpha A)^{-1} \beta. \tag{15} \]
Unlike eigenvector centrality, Katz-Bonacich centrality solves the issue of zero centrality values for acyclic or not strongly-connected networks by introducing a constant term β_i for each node. Therefore, Katz-Bonacich centrality gives each node a score proportional to the sum of the scores of its neighbours plus a constant value. The parameter α rules the balance between the first term in (14), which is the usual eigenvector centrality [44], and the second. The longest walks become more significant as α increases and hence the global topology of the network is considered, resembling eigenvector centrality. On the contrary, small values of α make Katz-Bonacich centrality a local measure which approaches degree centrality. When α → 0, x = β, and as α increases so do the centralities, until they diverge when
\[ \alpha = \frac{1}{\lambda(A)_{max}}, \tag{16} \]
where λ(A)_max is the maximum eigenvalue of the matrix A. Hence, Katz-Bonacich centrality is defined as long as α < λ(A)_max^{-1} [34]. Equivalently, for weighted networks, the generalized Katz-Bonacich centrality is defined as
\[ x = (\mathbb{1} - \alpha W^T)^{-1} \beta, \tag{17} \]
with α < λ(W)_max^{-1}. Equation (17) can also be expanded to
\[ x = \sum_{p=0}^{\infty} (\alpha W^T)^p\, \beta. \tag{18} \]
The element [(W^T)^p]_ij stands for the number of walks of length p from node j to node i, taking the strength of the connections into account. This value is attenuated by a factor α^p, and hence [Σ_{p=0}^{∞} (αW^T)^p]_ij accounts for the strength of all walks from node j to node i, with greater weakening as p grows larger.
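A sketch of the computation: solving the linear system directly and cross-checking against networkx's Katz centrality, which returns a normalized vector (the graph, α and β are arbitrary illustrative choices; the network is undirected, so the transpose is immaterial):

```python
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=4)
A = nx.to_numpy_array(G)
lam_max = np.linalg.eigvalsh(A)[-1]           # largest eigenvalue (A is symmetric)

alpha, beta = 0.5 / lam_max, 1.0              # safely below the divergence point
x = np.linalg.solve(np.eye(len(A)) - alpha * A, np.full(len(A), beta))

# networkx solves the same system and then normalizes the result
x_nx = np.array(list(nx.katz_centrality_numpy(G, alpha=alpha, beta=beta).values()))
print(np.allclose(x / np.linalg.norm(x), x_nx, atol=1e-6))
```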
Comparing (17) with (8) or (10), we conclude that the generalized Katz-Bonacich centrality vector is the exact solution of the stable state of model B, with α ≡ 1 and β ≡ K. Furthermore, when rescaling (9) is considered, the same holds for model A, or for any other model whose dynamics can be included in the weighted adjacency matrix in a similar way.
Therefore, variables which score higher according to the generalized Katz-Bonacich centrality achieve greater values in the long run. For this reason we refer to the stable state (8) as the "metric" stable state.
For further discussion, we recall that a subset of centrality measures can also be interpreted as the stable state of a random walk on a network. Namely, the generalized Katz-Bonacich centrality is the stable state of a biased random walk on a network for non-conservative processes [50,51].

Stability conditions
For model A, stability analysis is rather complex since many different stable states may exist, depending on a sizeable number of parameters which characterize both the topology and the dynamics. Nevertheless, we focus our interest on the most extreme situations: the optimal stable state, x(C), given by (7) and the metric stable state, x(W d ), given by (8). All other configurations are described by a mixed pattern which falls between optimal and metric stable states.
The x(C) solution is stable when (see Appendix C):
\[ r_i \left(1 - \frac{C}{K_i}\right) + C \sum_j A_{ij} W_{ij} \ge 0 \quad \forall i, \tag{19} \]
or, using the rescaled weighted matrix W_d defined in (9), when
\[ K_i + C \sum_j A_{ij} (W_d)_{ij} \ge C \quad \forall i. \tag{20} \]
If a given node i does not meet condition (20), the stable state is no longer x(C) and hence, starting from node i, nodes will start taking values below the C threshold.
On the other hand, the x(W_d) solution is stable when (Appendix C):
\[ \lambda_{max}(S) < 1, \tag{21} \]
where λ_max(S) is the maximum eigenvalue of the matrix S, defined in Equation (22). The eigenvalues of S depend explicitly on x(W_d) and therefore can only be computed numerically (see Equation C.49). However, the Perron-Frobenius theorem [52] allows us to obtain an upper threshold for λ_max(S) analytically (Equation (23)). Equations (21) and (23) imply the existence of an upper bound for the stability condition of the metric stable state (Equation (24), Appendix C). For model B, the stability analysis concerns only the metric stable state (10), and the stability conditions are given by (21) with S as in Equation (25). From Figure 2, we conclude that the network topology is enough to obtain information about the stability of the system. Since we are in a situation of homogeneous configuration, λ(S)_max is tightly connected to λ(W)_max, although, as seen in (22) and (25), they are not the same. Concerning the Erdös-Rényi network (Figure 2, upper left), there is little variability in λ(S)_max and therefore the critical value of the parameter p at which stability changes, p_C, is confined within a narrow range. Using (24) and assuming a Poisson degree distribution, we can obtain an approximate value for p_C in the case of a homogeneous configuration (Equation (26)). The critical value of p obtained from (26) with the same parameters as for the Erdös-Rényi network of Figure 2 gives p_C ≈ 0.56, in agreement with the numerical solution.
Conversely, stability with regard to the heterogeneous network is rather diffuse (Figure 2, upper right). The broad spectrum of λ(W)_max [53] is somehow captured by the variability of λ(S)_max. Consequently, there exist outlier networks which eventually break the stability condition. Nevertheless, they become less frequent as the exponent α increases. The effect of k_in in a Newman modular network is completely different (Figure 2, below): in spite of being the parameter which rules the modularity of the network, the average total degree, k_total = k_in + k_out, is still fixed and hence both the S matrix and the solutions (8) and (10) remain unaffected on average.
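As a rough numerical companion (a proxy, not the full stability analysis of Appendix C), one can sweep p for ER networks and track when the linear metric solution stops being well defined, i.e. when w·λ_max(A) crosses 1; sizes and weights are illustrative:

```python
import numpy as np
import networkx as nx

N, w = 100, 0.02
ratios = []
for p in np.linspace(0.05, 0.95, 10):
    G = nx.erdos_renyi_graph(N, p, seed=5)
    lam = np.max(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    ratios.append(w * lam)           # metric solution defined while w*lam < 1
    print(f"p = {p:.2f}  w*lambda_max = {w * lam:.2f}")
```

For dense ER networks λ_max(A) ≈ p(N − 1), so the crossing happens near p ≈ 1/(w(N − 1)).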

Unveiling correlation structures from network topology
Provided that condition (21) is met, the stable state is given by (8) or (10), for model A and B, respectively. This result provides one individual with as many values as existing variables. However, studies based on large batteries of psychometric tests rely on a sample from a population, made up of tens to thousands of individuals, from which inter-variable correlations are inferred. Section 5.1 describes the generation of distinct individuals from the models and the construction of the correlation matrix out of them.

From dynamics to correlation matrix
These models have been studied by assuming that parameters corresponding to different variables are correlated. If, in addition, all variables are considered to be equally interconnected in a mutualistic scenario, that is, positive interactions on a complete network topology, it is possible to recover well-known correlation structures [46]. Alternatively, our hypothesis is that the network topology of the variables is sufficient to recover equivalent results concerning correlations, and it also enables us to avoid stronger constraints on the parameters.
Since we are interested in studying the effect of topology, each individual is considered to be one random instance of the same network class. In other words, a new individual is obtained as one of the possible random networks G(N, {µ}), and the corresponding correlation matrix is characterized by fixed values of N and {µ}, where N is the number of variables or nodes and {µ} is the set of characteristic attributes of a given network model. For instance, a network generated by the Erdös-Rényi model has only one parameter, i.e. {µ} = {p} (Section 2).
Despite each variable holding its own set of parameters, we make all variables equivalent in what we call the homogeneous configuration, such that individual differences result only from topological properties:
\[ K_i \equiv K, \qquad r_i \equiv r, \qquad W_{ij} \equiv w \quad \forall (i, j). \tag{27} \]
Independently of the others, each individual reaches a stable state given by (8) or (10), for model A or B, respectively, as long as condition (21) holds.

Figure 3: Flowchart to generate the correlation matrix of variables out of a population of individuals.

Once the stable states of the entire population are obtained, the computation of pairwise correlations between variables is straightforward. In line with most studies in psychometrics, the Pearson standard correlation coefficient is used to finally obtain the correlation matrix [54], although other methods may improve on this procedure [55,56].
In conclusion, the variability within one family of networks {G(N, {µ})} brings about individual differences in the variables, which entail particular patterns or structures of the correlation matrix (28), with no explicit constraint on the parameters, which are homogeneous both across variables and individuals (see Figure 3).
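A minimal sketch of this pipeline for model B on ER networks (population size, N, p and w are illustrative): each individual is one random graph, its stable state is the linear solution, and Pearson correlations are computed across the population.

```python
import numpy as np
import networkx as nx

N, p, w, K, n_individuals = 30, 0.2, 0.02, 1.0, 500
population = np.empty((n_individuals, N))
for m in range(n_individuals):
    A = nx.to_numpy_array(nx.erdos_renyi_graph(N, p, seed=m))
    # stable state x = (I - wA)^{-1} K of the mutualistic dynamics
    population[m] = np.linalg.solve(np.eye(N) - w * A, np.full(N, K))

R = np.corrcoef(population, rowvar=False)      # N-by-N correlation matrix
offdiag = R[~np.eye(N, dtype=bool)]
print(offdiag.mean())                          # positive manifold: mean > 0
```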
From Figure 4 we conclude that the positive manifold, i.e. positive correlations, R_ij > 0, can come out regardless of the topology, as long as {µ} of the network is properly set. Thus, connectivity by itself yields positive correlations, regardless of the structure and independently of further constraints on the dynamic parameters. However, the structure of the correlation matrix does rely on the topology: the Erdös-Rényi network's correlation histogram displays a narrow symmetric peak, which captures the homogeneity of the topology, i.e. all nodes being equivalent. On the contrary, the heterogeneous network's correlation histogram follows a much wider asymmetric distribution, with a longer tail for larger values. This behaviour captures the presence of a few hubs, which lead to non-trivial correlation structures, as we will see in Section 5.2. Finally, the Newman modular network presents two characteristic peaks corresponding to intracluster (within the same cluster) and intercluster (between different clusters) correlations. As k_in increases, the two peaks become indistinguishable and the pattern more closely resembles that of an Erdös-Rényi network. For large k_in, intercluster correlations tend to 0.

So far, in addition to considering a homogeneous configuration of parameters, we have constrained ourselves to solutions within the regime where the metric stable solution is stable. Nevertheless, we expect richer phenomena when neither of the two restrictions exists. If we allow the parameter K to be a random variable distributed across variables following an uncorrelated normal distribution, N(µ_K, σ_K), increasing values of σ_K lead to lower values of the average correlation between variables, R̄_ij. Figure 5 not only captures the positive manifold, but also the effect of the network topology on the correlations: R̄_ij is indeed modulated by the characteristic parameter of the network, {µ}.
For Erdös-Rényi and Newman modular networks, R̄_ij peaks around the transition between the metric and optimal stable states, with a profile shaped by the network topology. The average correlation increases with connectivity until nodes are recurrently absorbed by the optimal state and, consequently, R̄_ij is scaled down. Namely, the Erdös-Rényi network peaks around p_C, that is, where the stability landscape changes. As p increases, more and more variables reach the optimal value and become independent of the others. The impact of larger variability of K across variables is a decrease in the correlations and a shift of the peak towards lower values of p, since stability condition (21) is more likely to be broken. The peak generated by a heterogeneous network is more diffuse, owing to a wider distribution of the spectra. Moreover, increasing the variability of K has a stronger attenuation effect. On the contrary, the absence of a peak for the Newman modular network is explained in Section 4.3. Nevertheless, a clear pattern emerges if we split intracluster from intercluster mean correlations: while intracluster correlations increase with k_in, intercluster correlations decrease. However, the total mean correlation remains unchanged, as long as condition (21) holds.
For model B, R̄_ij increases continuously with connectivity until the divergence conditions are met and solutions no longer exist.

From correlations matrix to statistical models
When it comes to describing variability among observed, correlated variables, a number of statistical models are available, aiming to approximate and understand reality. Factor analysis is a statistical method developed and widely used in psychometrics [54,57][58][59], inter alia. Observed variables are described as linear combinations of unobserved latent variables, or factors, plus individual error terms, such that the covariance or correlation matrix may be explained by fewer latent variables.
Historically, the most widely held theories of cognition and intelligence are built upon factor analyses arising from large batteries of psychometric tests. There is still no agreement on the proper model or on the underlying process that brings about the observed outcome. We list the models behind the most prominent theories: one-factor models, multifactorial models, hierarchical models and other, more complex structural models [60]. Section 5.1 shows that the connectivity structure between variables, mapped onto a particular network topology, gives rise to different correlation matrices, even when no explicit constraints on the correlations between parameters are imposed. We show that certain factor models can be explained by a mutualistic dynamical model running on a particular network of variables.
In order to avoid subjective criteria when deciding the number of factors, several methods have been developed: Horn's parallel analysis, Velicer's MAP test, the Kaiser criterion, Cattell's scree plot or the variance-explained criterion [61][62][63][64]. We compute the scree plot so as to obtain the number of main principal components and hence potential latent factors. Thereupon, the statistical significance of the factor model is assessed by means of the p-value, which enables us to explore the effect of changing the parameter that best characterizes the network, {µ}.
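Two of these retention criteria can be sketched in a few lines. The snippet below is an illustrative implementation of the Kaiser criterion and of Horn's parallel analysis applied to a toy compound-symmetry correlation matrix; the toy matrix, the sample size and the 95th-percentile threshold are assumptions for the example, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def retained_factors(R, n_obs, n_sims=200):
    """Number of components to retain from a correlation matrix R,
    by the Kaiser criterion and by Horn's parallel analysis."""
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    kaiser = int(np.sum(eigvals > 1.0))
    # Parallel analysis: compare against eigenvalues of correlation
    # matrices of pure-noise data of the same shape.
    N = R.shape[0]
    noise_eigs = np.empty((n_sims, N))
    for s in range(n_sims):
        X = rng.normal(size=(n_obs, N))
        noise_eigs[s] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    threshold = np.percentile(noise_eigs, 95, axis=0)
    parallel = int(np.sum(eigvals > threshold))
    return kaiser, parallel

# Toy 1-factor correlation matrix: R_ij = 0.5 off the diagonal, 1 on it.
N = 20
R = np.full((N, N), 0.5) + 0.5 * np.eye(N)
kaiser, parallel = retained_factors(R, n_obs=500)
print(kaiser, parallel)  # both criteria retain a single component
```

For this compound-symmetry matrix only the first eigenvalue (0.5N + 0.5) exceeds both thresholds, so both criteria agree on a single factor.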
In Figure 6 we have selected one specific network for each topology, given by the value of its characteristic parameter. The choice is such that the solution is given by the metric stable state (8).
The number of retained components, or factors, can be obtained according to different criteria. Six different values of σ K account for the variability of the K parameter. As σ K increases, R ij is scaled down, the effect being much larger in heterogeneous networks. In the case of the ER network, R ij grows with increasing p until the metric state eventually becomes unstable and nodes are gradually absorbed by the optimal state; larger dispersion of K shifts the peak towards lower values of p. Conversely, for a heterogeneous network, an increase in the exponent α leads to greater stability of the metric state, although the stability conditions are more fragile; larger dispersion of K shifts the peak towards much lower values of α. Finally, in the case of the Newman modular network, the stability condition of the metric state, (21), always holds for these parameters, hence the absence of a peak. For a better understanding of the change in the stability landscape, see Figure 2. The parameters of the model are set as in Figure 4.
We look for eigenvalues that are much larger than 1 and clearly separated from the successive values, i.e. preceding a marked elbow in the scree plot.
With these criteria and from Figure 6, we hypothesize that the correlation matrix of variables connected following an Erdös-Rényi network is well described by a 1-factor model. If their connectivity structure is instead better captured by a Newman modular network with n clusters, then an n-factor model cannot be rejected. Conversely, for a heterogeneous network a factorial model is no longer proposed, as the eigenvalues are not clearly separated according to the former criteria, but rather follow a smooth decreasing curve. Alternatively, a statistical model which accounts for hierarchy between variables, or more complex structural modelling and path analysis, may be more realistic. However, this latter analysis is beyond the scope of this paper.
For the cases of Erdös-Rényi and Newman modular networks with 4 clusters we show that a 1-factor model and a 4-factor model, respectively, can accurately describe the obtained results. To do so, we compute the p-value of such models for different numbers of retained factors. The p-value in this case tests the hypothesis that the model fits the results perfectly and hence we seek values > 0.05. In the case of an Erdös-Rényi network, p ≈ 1 ≫ 0.05 for n factors ≥ 1 and hence a 1-factor model is highly likely. Similarly, in the case of a Newman modular network with n = 4, p > 0.05 only when n factors ≥ 4, as suggested.
Once we have figured out the number of factors for each network, we explore the effect of connectivity on the reinforcement of the statistical models by looking at the explained variance of the most important components. We make use of (29), which gives the proportion of the variance of the correlation matrix R explained by each principal component of the PCA. Although PCA and factor analysis are not equivalent statistical models [65,66], after having established the validity of a factor model, the former approach is acceptable in order to justify the suitability of the number of factors or components considered. Figure 7 confirms the assumptions underlying the models for both networks: in the case of an Erdös-Rényi network, the first eigenvalue increases with the connectivity, characterized by p, and hence a 1-factor model is reinforced. Similarly, in the case of a Newman modular network, besides the first eigenvalue, which is always large, the second to fourth eigenvalues increase with the modularity, characterized by k in , in contrast to the eigenvalues in further positions, which remain unchanged or become smaller.
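For a correlation matrix, (29) amounts to λ_i / Σ_j λ_j = λ_i / N. The sketch below applies it to a toy block-structured matrix mimicking a 4-cluster modular network; the block values (0.6 intracluster, 0.2 intercluster) are illustrative assumptions chosen so that the matrix is a valid, positive-definite correlation matrix.

```python
import numpy as np

def explained_variance(R):
    """Proportion of variance explained by each principal component of a
    correlation matrix R, i.e. lambda_i / trace(R) = lambda_i / N."""
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    return eigvals / eigvals.sum()

# Block-structured toy matrix mimicking a modular (4-cluster) network:
# stronger intracluster (0.6) than intercluster (0.2) correlations.
N, n_clusters = 40, 4
size = N // n_clusters
R = np.full((N, N), 0.2)
for c in range(n_clusters):
    R[c*size:(c+1)*size, c*size:(c+1)*size] = 0.6
np.fill_diagonal(R, 1.0)

ev = explained_variance(R)
print(ev[:5])  # one large component, three equal cluster components, then noise
```

The spectrum splits exactly as the n-factor picture suggests: one large eigenvalue (the general component), n − 1 = 3 degenerate cluster eigenvalues, and a bulk of small ones, so four components capture most of the variance.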
The covariance matrix can be analytically computed in the case of a complete network and a heterogeneous parameter landscape, K ∼ N (µ K , σ K ) (Appendix B).

Conclusions
Although mainstream approaches to cognition and intelligence research are built on static, statistics-based models, we explore the emerging dynamical-systems perspective, putting a greater emphasis on the role of the network topology underlying the relationships between cognitive processes. We go through two models of distinct cognitive phenomena and find the conditions for them to be mathematically equivalent. Both models meet the requirements set out by empirical observation and by established theories regarding the cognitive phenomena they aim to explain. Furthermore, the applied mathematical formulation may well inform models of many real mutualistic systems beyond cognition.
The topology of the network defined by the dynamical influence between processes indeed underlies further analysis of the results. We find the principal attractor of the system to be the exact definition of the Katz-Bonacich centrality, a measure of a node's importance which can also be understood as a non-conservative biased random walk along a network. We propose that heterogeneities in the dynamical parameters can be absorbed by a rescaling of the adjacency-matrix weights, leading to the same result.
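This equivalence is easy to verify numerically. The sketch below compares the linear fixed point x* = (I − wAᵀ)⁻¹K with the Katz-Bonacich vector c = β(I − αAᵀ)⁻¹1, identifying α = w and β with the (homogeneous) value of K; the network and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def metric_stable_state(A, w, K):
    """Fixed point x* = (I - w A^T)^{-1} K of the linearized mutualistic model."""
    N = A.shape[0]
    return np.linalg.solve(np.eye(N) - w * A.T, K)

def katz_bonacich(A, alpha, beta):
    """Katz-Bonacich centrality c = beta (I - alpha A^T)^{-1} 1."""
    N = A.shape[0]
    return beta * np.linalg.solve(np.eye(N) - alpha * A.T, np.ones(N))

# Random symmetric adjacency matrix, with w below 1/lambda_max so the
# Neumann series (the biased random-walk expansion) converges.
N = 50
A = (np.triu(rng.random((N, N)), 1) < 0.1).astype(float)
A = A + A.T
w = 0.5 / np.max(np.linalg.eigvalsh(A))

x_star = metric_stable_state(A, w, K=np.full(N, 2.0))
c = katz_bonacich(A, alpha=w, beta=2.0)
print(np.allclose(x_star, c))  # True: the attractor is Katz-Bonacich centrality
```

The rescaling remark then follows: node-dependent K_i (or weights) can be moved into β or into a rescaled adjacency matrix without changing the form of the solution.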
Individuals may differ in the genetic-environmental markers captured by the parameters of the model, but also in the connectivity structure between brain regions, either structural or functional. Two individuals might achieve the same performance through different neuronal routes and cognitive strategies when solving cognitive tasks. Although certain brain structures and functional pathways may be more likely to be involved in intelligence than others, there is also considerable heterogeneity which can lead to similar outcomes [14]. For instance, as a Newman modular network increases its modularity, the corresponding correlation matrix becomes more and more likely to be well described by a factorial model with as many factors as the network has communities. However, whereas the inner structure gradually changes, the average value of its stable state remains unchanged.
The connectivity structure between cognitive processes is not known, yet it is not arbitrary. We show that the network topology on its own leads to different plausible statistical models. Regardless of the network considered, it is always possible to find a parameter configuration such that the positive manifold results from the dynamical model. However, the correlation structure is determined by the network topology. Complete and Erdös-Rényi networks are constrained to bring about a one-factor model, more clearly defined as connectivity increases. The Newman modular network enables higher-order factor models, depending on the number of defined communities; latent factors turn out to be more distinguishable as modules grow more isolated. Conversely, heterogeneous networks lead to more complex statistical models, namely with richer and more correlated structures.
In the present article we exploit the interplay between the dynamics and the underlying network topology to model cognitive abilities, and we conclude that both are relevant. Although scholars are not yet sure of the relationship between cognitive processes or of the nature of intelligence, we can shed some light by proposing an alternative framework which captures the real meaning of 'process' and 'relationship': a dynamic complex-network framework to model cognition.
This work is an open door to further research: we show that different network topologies lead to different correlation structures. Still, richer topologies can be considered and may bring about other interesting and eventually more realistic structures. We have restricted ourselves to static networks, though the more general definition of a network is not time-constrained. What if the network which captures the connections between cognitive processes or brain modules were time dependent? What if cognitive processes could be modelled as a multilayer network, from generalist to specialist layers? Moreover, by looking only at the attractors of the system we are missing the temporal evolution of such processes and their real causality. Are these models, then, able to explain evolving properties of the considered variables? We finally highlight that there exist several limitations in models based on ecological systems, as exhaustively studied in population-dynamics research, namely inconsistent results coming from unbounded models and discrepancies with the behaviour of some real systems [67][68][69][70]. Hence, alternative mathematical models which overcome some of these problems should be investigated.
We define the homogeneous configuration as $K_i = K$ and $W_{ij} = w A_{ij}$ for all nodes. In this case, the metric stable solution (10) for a complete network is straightforward:
$$x^*_i = \frac{K}{1 - w(N-1)} \quad \forall i.$$
As the solution is exactly the same for all nodes, the variance of the stable state is null.
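This uniform solution can be checked numerically. The sketch below assumes the linear fixed point x* = (I − wAᵀ)⁻¹K from (10); the parameter values are arbitrary, chosen inside the stable regime w < 1/(N − 1).

```python
import numpy as np

# Complete network: A_ij = 1 for i != j. With homogeneous parameters
# K_i = K and W = w * A, the metric stable state is uniform:
# x*_i = K / (1 - w (N - 1)), provided w < 1/(N - 1).
N, w, K = 20, 0.02, 1.5
A = np.ones((N, N)) - np.eye(N)
x_star = np.linalg.solve(np.eye(N) - w * A.T, np.full(N, K))

analytic = K / (1 - w * (N - 1))
print(np.allclose(x_star, analytic), x_star.var())  # uniform solution, ~zero variance
```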
For an Erdös-Rényi network, we compute the average of the metric stable state. Using (18), and considering that $W$ is symmetric so that the expression is equivalent under transposition, we take the average of (A.33) over nodes and calculate each term:
$$\bar{x}^* = \frac{1}{N}\sum_{i}\sum_{j} (I - W^T)^{-1}_{ij} K_j = K\left(1 + w z_1 + w^2 z_2 + w^3 z_3 + \cdots\right),$$
where $z_1 = \bar{k} \equiv \frac{1}{N}\sum_i k_i$ is the mean degree of a node. The second and following terms account for the average number of $m$-th nearest neighbours, denoted $z_m$. The general expression for any network is given by [71]:
$$z_m = \left(\frac{z_2}{z_1}\right)^{m-1} z_1,$$
where $z_1 = \bar{k}$.
In the case of an Erdös-Rényi network, whose degrees follow a Poisson distribution, $z_m = \bar{k}^m$ and we get:
$$\bar{x}^*_{ER} = \frac{1}{N}\sum_{i}\sum_{j} (I - W^T)^{-1}_{ij} K_j = K\left(1 + w\bar{k} + w^2\bar{k}^2 + w^3\bar{k}^3 + \cdots\right) = \frac{K}{1 - w\bar{k}}. \quad (A.39)$$
We point out that a Poisson distribution is not always a good model for an Erdös-Rényi network and hence (A.39) is to be considered an approximation for $\bar{x}^*_{ER}$. The variance of $x^*_i$, $\mathrm{Var}[x^*_i]_{ER}$, is a rather difficult computation and we therefore explore the behaviour at second order in $w$, $\sim O(w^2)$:
$$\mathrm{Var}[x^*_i]_{ER} = \langle (x^*_i)^2 \rangle - \langle x^*_i \rangle^2, \quad (A.40)$$
where $\langle \cdot \rangle$ corresponds to the average over the ensemble.
The second term, $\langle x^*_i \rangle$, is already known from (A.39), as it is given by $\bar{x}^*$ in the thermodynamic limit. Using (10), the first term can be expanded, and using (A.37) we split the cases $i = j$ and $i \neq j$. Therefore, in order to obtain further information about the structure of the covariance and correlation matrices, we ought to compute higher orders in $w$.

B Covariance matrix for a complete network and a heterogeneous parameter configuration
Although the solution when the connectivity structure is captured by a complete network is trivial, we can go a step further by taking into account variability in the dynamic parameters. Let us consider the $K$ parameter taken from an uncorrelated normal distribution, $K_i \sim N(\mu_K, \sigma_K)$. By symmetry, the solution takes the form $x_i = a K_i + b \sum_{q \neq i} K_q$, with coefficients $a$ and $b$ determined by (10). If we separate $x_i x_j$ according to the contribution of the $K$ parameters:
$$x_i x_j = a^2 K_i K_j + ab\, K_i \sum_{q \neq j} K_q + ab\, K_j \sum_{p \neq i} K_p + b^2 \sum_{p \neq i}\sum_{q \neq j} K_p K_q.$$
From (B.45) it turns out that a 1-factor model is indeed valid for the description of the covariance matrix. In cases where $\mathrm{Cov}(x) \sim \mathrm{Var}(x)$ the adequacy decreases. The particular case $\mathrm{Cov}(x) = \mathrm{Var}(x)$ is achieved when $w = \frac{1}{N-3}$. However, such a condition is never reached: as seen from (12), the stable state diverges when $w > \frac{1}{N-1}$.
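The compound-symmetry structure of this covariance matrix (a single variance value and a single covariance value, i.e. a 1-factor structure) can be verified exactly in a few lines: since x* = MK with M = (I − wAᵀ)⁻¹, we have Cov(x*) = σ_K² M Mᵀ. The values of N and w below are illustrative assumptions within the stable regime.

```python
import numpy as np

# Complete network with heterogeneous K_i ~ N(mu_K, sigma_K): by symmetry
# the covariance matrix of x* has one diagonal and one off-diagonal value,
# the compound-symmetry structure of a 1-factor model.
N, w = 10, 0.02
sigma_K = 0.2
A = np.ones((N, N)) - np.eye(N)
M = np.linalg.inv(np.eye(N) - w * A.T)  # x* = M K

# Exact covariance of x* under independent K_i: Cov(x*) = sigma_K^2 M M^T.
C = sigma_K**2 * M @ M.T
offdiag = C[~np.eye(N, dtype=bool)]
print(offdiag.std() / offdiag.mean())  # ~0: all off-diagonal entries coincide
```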

C Stability conditions
To obtain the stability conditions of the optimal fixed point (7), we expand (3) around this fixed point and linearize (C.47), keeping only terms $\sim O(\epsilon_i)$. On the other hand, to obtain the stability conditions of the metric fixed point (8), we expand (3) around this fixed point, $x_i \equiv x(W_d)_i + \epsilon_i$. The resulting equation (C.49) can be written in matrix form as (22). We can then derive a threshold for the stability condition by applying the Perron-Frobenius theorem, (23), to (22), which yields (C.50). Looking into the extreme conditions of (C.50), we obtain the stability threshold.