Journal of Probability and Statistics, Volume 2011, Article ID 152942, doi:10.1155/2011/152942

Research Article

Gaussian Covariance Faithful Markov Trees

Dhafer Malouche (1,2) and Bala Rajaratnam (2)

(1) Unité de Recherche Signaux et Systèmes (U2S), Ecole Supérieure de la Statistique et de l'Analyse de l'Information (ESSAI), Ecole Nationale d'Ingénieurs de Tunis (ENIT), 6 Rue des Métiers, Charguia II 2035, Tunis Carthage, Ariana, Tunisia
(2) Department of Statistics, Department of Environmental Earth System Science, Woods Institute for the Environment, Stanford University, Stanford, CA 94305, USA

Academic Editor: Junbin B. Gao

Received 30 May 2011; Accepted 9 August 2011; Published 14 August 2011

Copyright © 2011 Dhafer Malouche and Bala Rajaratnam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Graphical models are useful for characterizing conditional and marginal independence structures in high-dimensional distributions. An important class of graphical models is covariance graph models, where the nodes of a graph represent different components of a random vector, and the absence of an edge between any pair of variables implies marginal independence. Covariance graph models also represent more complex conditional independence relationships between subsets of variables. When the covariance graph captures or reflects all the conditional independence statements present in the probability distribution, the latter is said to be faithful to its covariance graph—though in general this is not guaranteed. Faithfulness however is crucial, for instance, in model selection procedures that proceed by testing conditional independences. Hence, an analysis of the faithfulness assumption is important in understanding the ability of the graph, a discrete object, to fully capture the salient features of the probability distribution it aims to describe. In this paper, we demonstrate that multivariate Gaussian distributions that have trees as covariance graphs are necessarily faithful.

1. Introduction

Markov random fields and graphical models are widely used to represent conditional independences in a given multivariate probability distribution (see [1–4], to name just a few). Many different types of graphical models have been studied in the literature. Concentration graphs encode conditional independence between pairs of variables given the remaining ones. Formally, consider a random vector X = (Xv, v ∈ V) with a probability distribution P, where V is a finite set indexing the random variables in X. An undirected graph G0 = (V, E0) is called the covariance graph (see [1, 6–11]) associated with the probability distribution P if the set of edges E0 is constructed as follows:

(u, v) ∉ E0 ⇔ Xu ⫫ Xv. (1.1)

Note that (u, v) ∉ E means that the vertices u and v are not adjacent in G.

The concentration graph associated with P is an undirected graph G = (V, E), where V is the set of vertices, each vertex representing one variable in X. The set E of edges (between the vertices in V) is constructed using the pairwise rule: for every pair (u, v) ∈ V × V with u ≠ v,

(u, v) ∉ E ⇔ Xu ⫫ Xv | X_{V∖{u,v}}, (1.2)

where X_{V∖{u,v}} = (Xw, w ≠ u and w ≠ v).

Note that the subscript zero is invoked for covariance graphs (i.e., G0 versus G) as the definition of covariance graphs does not involve conditional independences.

Concentration and covariance graphs are not only used to encode pairwise relationships between variables in the random vector X; as we will see below, these graphs can also be used to encode conditional independences that hold between subsets of variables of X. First, we introduce some definitions.

The multivariate distribution P is said to satisfy the “intersection property” if, for any pairwise disjoint subsets A, B, C, and D of V,

if XA ⫫ XB | X_{C∪D} and XA ⫫ XC | X_{B∪D}, then XA ⫫ X_{B∪C} | XD. (1.3)

We will call the intersection property in (1.3) above the concentration intersection property in this paper, in order to differentiate it from another property that is satisfied by P when studying covariance graph models. Though this property can be further relaxed, we will retain the standard terminology.

We first define the concept of separation in graphs. Let A, B, and S denote pairwise disjoint subsets of vertices. We say that S separates A and B if all paths connecting A and B in G intersect S, written A ⊥G B | S. (Graph separation, denoted ⊥G, is not to be confused with stochastic independence, which is denoted ⫫.) Now, let P satisfy the concentration intersection property. Then, for any triplet (A, B, S) of pairwise disjoint subsets of V, if S separates A and B in the concentration graph G associated with P, the random vector XA = (Xv, v ∈ A) is independent of XB = (Xv, v ∈ B) given XS = (Xv, v ∈ S). This property is called the concentration global Markov property and is formally defined as

A ⊥G B | S ⇒ XA ⫫ XB | XS. (1.4)

Kauermann [6] shows that if P satisfies the following property: for any pairwise disjoint subsets A, B, and C of V,

if XA ⫫ XB and XA ⫫ XC, then XA ⫫ X_{B∪C}, (1.5)

then, for any triplet (A, B, S) of pairwise disjoint subsets of V, if S separates A and B in the covariance graph G0 associated with P, then XA ⫫ XB | X_{V∖(A∪B∪S)}. This property is called the covariance global Markov property and can be written formally as follows:

A ⊥G0 B | S ⇒ XA ⫫ XB | X_{V∖(A∪B∪S)}. (1.6)

In parallel to the concentration graph case, property (1.5) will be called the covariance intersection property; it is sometimes also referred to as the composition property. Even if P satisfies both intersection properties, the covariance and concentration graphs may not be able to capture or reflect all the conditional independences present in the distribution; that is, there may exist one or more conditional independences in the probability distribution that do not correspond to any separation statement in either G or G0. Equivalently, the lack of a separation statement in either G or G0 does not necessarily imply a conditional dependence. In the contrary case, when no conditional independences hold in P other than those encoded by the graph, we say that P is faithful to its graphical model.
More precisely, we say that P is concentration faithful to its concentration graph if, for any triplet (A, B, S) of pairwise disjoint subsets of V, the following statement holds:

A ⊥G B | S ⇔ XA ⫫ XB | XS. (1.7)

Similarly, P is said to be covariance faithful to its covariance graph G0 if, for any triplet (A, B, S) of pairwise disjoint subsets of V, the following statement holds:

A ⊥G0 B | S ⇔ XA ⫫ XB | X_{V∖(A∪B∪S)}. (1.8)

A natural question of both theoretical and applied interest in probability theory is to understand the implications of the faithfulness assumption. This assumption is fundamental since it yields a one-to-one correspondence between the conditional independences present in the distribution P and the separation statements of the graph. In this paper, we show that when P is a multivariate Gaussian distribution whose covariance graph is a tree, it is necessarily covariance faithful; that is, such probability distributions satisfy property (1.8). Equivalently, the associated covariance graph G0 is fully able to capture all the conditional independences present in the multivariate distribution P. This result can be considered a dual of a previous probabilistic result proved by Becker et al. [13] for concentration graphs, which demonstrates that Gaussian distributions having concentration trees (i.e., whose concentration graph is a tree) are necessarily concentration faithful to their concentration graphs (implying that property (1.7) is satisfied). That result was proved by showing that Gaussian distributions satisfy two types of conditional independence properties: the intersection property and the decomposable transitivity property. The approach in the proof of the main result of this paper is vastly different from the one used for concentration graphs [13]. Indeed, a naïve or unsuspecting reader could mistakenly think that the result for covariance trees follows simply by replacing the covariance matrix with its inverse in the result of Becker et al. [13].
This is of course incorrect and, in some sense, equivalent to saying that a matrix and its inverse are the same. The covariance matrix encodes marginal independences whereas the inverse covariance matrix encodes conditional independences. These are very different models. Moreover, the former is a curved exponential family model whereas the latter is a natural exponential family model.

The outline of this paper is as follows. Section 2 presents graph theory preliminaries. Section 2.2 gives a brief overview of covariance and concentration graphs associated with multivariate Gaussian distributions. The proof of the main result of this paper is given in Section 3. Section 4 concludes by summarizing the results in the paper and the implications thereof.

2. Preliminaries

2.1. Graph Theoretic Concepts

This section introduces notation and terminology required in subsequent sections. An undirected graph G = (V, E) consists of two sets V and E, with V representing the set of vertices and E ⊆ (V × V) ∖ {(u, u), u ∈ V} the set of edges, satisfying (u, v) ∈ E ⇔ (v, u) ∈ E. For u, v ∈ V, we write u ~G v when (u, v) ∈ E, and we say that u and v are adjacent in G. A path connecting two distinct vertices u and v in G is a sequence of distinct vertices (u0, u1, …, un), where u0 = u, un = v, and ui ~G u(i+1) for every i = 0, …, n−1. Such a path is denoted p = p(u, v, G), and we say that p(u, v, G) connects u and v, or alternatively that u and v are connected by p(u, v, G). We denote by 𝒫(u, v, G) the set of paths between u and v. We now proceed to define the subclass of graphs known as trees. Let G = (V, E) be an undirected graph. The graph G is called a tree if every pair of distinct vertices (u, v) in G is connected by exactly one path, that is, |𝒫(u, v, G)| = 1 for all u, v ∈ V, u ≠ v. The subgraph of G induced by a subset U ⊆ V is denoted GU = (U, EU), where EU = E ∩ (U × U). A connected component of a graph G is a maximal subgraph GU = (U, EU) of G such that each pair of vertices in U can be connected by at least one path in GU. We now state a lemma, without proof, that is needed in the proof of the main result of this paper.

Lemma 2.1.

Let G = (V, E) be an undirected graph. If G is a tree, then any subgraph of G induced by a subset of V is a union of connected components, each of which is a tree (what we will refer to as a “union of tree connected components”).
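Lemma 2.1 can be illustrated computationally. The following sketch uses a hypothetical chain tree (the vertex labels, edge choices, and helper function are ours, not from the paper): removing a vertex from the chain leaves an induced subgraph whose connected components are themselves trees.

```python
# Lemma 2.1, illustrated on the chain tree 0-1-2-3-4: removing vertex 2
# leaves an induced subgraph whose components are the trees {0,1} and {3,4}.
edges = {(0, 1), (1, 2), (2, 3), (3, 4)}
U = {0, 1, 3, 4}                                   # induced vertex set
sub_edges = {(u, v) for (u, v) in edges if u in U and v in U}

def components(vertices, edge_set):
    """Connected components via depth-first search."""
    adj = {v: set() for v in vertices}
    for u, v in edge_set:
        adj[u].add(v)
        adj[v].add(u)
    comps, seen = [], set()
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# each component is a tree: (number of edges) = (number of vertices) - 1
for comp in components(U, sub_edges):
    comp_edges = {(u, v) for (u, v) in sub_edges if u in comp and v in comp}
    assert len(comp_edges) == len(comp) - 1
```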

For a connected graph, a separator is a subset S of V for which there exists a pair of nonadjacent vertices u and v, with u, v ∉ S, such that every path p ∈ 𝒫(u, v, G) satisfies p ∩ S ≠ ∅. If S is a separator, then it is easily verified that every S′ ⊇ S such that S′ ⊆ V ∖ {u, v} is also a separator.

2.2. Gaussian Concentration and Covariance Graphs

In this section, we present a brief overview of concentration and covariance graphs in the case when the probability distribution P is multivariate Gaussian. Consider a random vector X ~ 𝒩(μ, Σ), where μ ∈ ℝ^{|V|} and Σ = (σuv) is positive definite. Without loss of generality, we assume that μ = 0. Gaussian distributions can also be parameterized by the inverse of the covariance matrix Σ, denoted K = Σ^{−1} = (kuv). The matrix K is called the precision or concentration matrix. It is well known that, for any pair of variables (Xu, Xv) with u ≠ v,

Xu ⫫ Xv | X_{V∖{u,v}} ⇔ kuv = 0.

Hence, the concentration graph G = (V, E) can be constructed simply from the precision matrix K using the rule: (u, v) ∉ E ⇔ kuv = 0. Furthermore, it can be easily deduced from a classical result that for any Gaussian concentration graph model the pairwise Markov property in (1.2) is equivalent to the concentration global Markov property in (1.4).

As seen earlier in (1.1), covariance graphs, on the other hand, are constructed using pairwise marginal independence relationships. It is also well known that, for multivariate Gaussian distributions,

Xu ⫫ Xv ⇔ σuv = 0.

Hence, in the Gaussian case, the covariance graph G0 = (V, E0) can be constructed using the rule: (u, v) ∉ E0 ⇔ σuv = 0. It is also easily seen that Gaussian distributions satisfy the covariance intersection property defined in (1.5). Hence, Gaussian covariance graphs can also encode conditional independences according to the following rule: for any triplet (A, B, S) of pairwise disjoint subsets of V, if S separates A and B in the covariance graph G0, then XA ⫫ XB | X_{V∖(A∪B∪S)}.
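The two construction rules above can be sketched numerically. The following is a minimal illustration with a hypothetical 3 × 3 covariance matrix (not taken from the paper) whose covariance graph is a chain:

```python
import numpy as np

# Hypothetical covariance matrix whose covariance graph is the chain 0-1-2
Sigma = np.array([
    [1.0, 0.5, 0.0],
    [0.5, 1.0, 0.5],
    [0.0, 0.5, 1.0],
])
K = np.linalg.inv(Sigma)
tol = 1e-10

# covariance graph: edge (u,v) present iff sigma_uv != 0 (rule (1.1))
cov_edges = {(u, v) for u in range(3) for v in range(u + 1, 3)
             if abs(Sigma[u, v]) > tol}
# concentration graph: edge (u,v) present iff k_uv != 0 (rule (1.2))
con_edges = {(u, v) for u in range(3) for v in range(u + 1, 3)
             if abs(K[u, v]) > tol}

assert cov_edges == {(0, 1), (1, 2)}            # the chain itself
assert con_edges == {(0, 1), (0, 2), (1, 2)}    # here K is dense
```

Note that for this particular matrix the concentration graph turns out to be complete even though the covariance graph is sparse, which already hints at how differently Σ and K encode structure.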

3. Gaussian Covariance Faithful Trees

We now proceed to study the faithfulness assumption in the context of multivariate Gaussian distributions whose associated covariance graphs are trees. The main result of this paper, presented in Theorem 3.1, proves that multivariate Gaussian probability distributions having tree covariance graphs are necessarily faithful to their covariance graphs; that is, all of the independences and dependences in P can be read off using graph separation. We now formally state Theorem 3.1. The proof follows shortly after a series of supporting lemmas and an illustrative example.

Theorem 3.1.

Let XV = (Xv, v ∈ V) be a random vector with Gaussian distribution P = 𝒩_{|V|}(μ, Σ). Let G0 = (V, E0) be the covariance graph associated with P. If G0 is a disjoint union of trees, then P is covariance faithful to G0.

The proof of Theorem 3.1 requires, among others, a result that gives a method to compute the covariance matrix Σ from the precision matrix K using the paths in the concentration graph G. The result can also be easily extended to show that the precision matrix K can be computed from the covariance matrix Σ using the paths in the covariance graph G0. We now formally state this result.

Lemma 3.2.

Let XV = (Xv, v ∈ V) be a random vector with Gaussian distribution P = 𝒩_{|V|}(μ, Σ), where Σ and K = Σ^{−1} are positive definite matrices. Let G = (V, E) and G0 = (V, E0) denote, respectively, the concentration and covariance graphs associated with the probability distribution of XV. For all (u, v) in V × V,

kuv = Σ_{p ∈ 𝒫(u,v,G0)} (−1)^{|p|+1} |σ|p |Σp| / |Σ|,   σuv = Σ_{p ∈ 𝒫(u,v,G)} (−1)^{|p|+1} |k|p |Kp| / |K|, (3.1), (3.2)

where, if p = (u0, …, un), then |p| = n denotes the length of the path p,

|σ|p = σ_{u0u1} σ_{u1u2} ⋯ σ_{u(n−1)un},   |k|p = k_{u0u1} k_{u1u2} ⋯ k_{u(n−1)un},

and Kp = (kuv, (u, v) ∈ (V∖p) × (V∖p)) and Σp = (σuv, (u, v) ∈ (V∖p) × (V∖p)) denote, respectively, K and Σ with the rows and columns corresponding to the variables on the path p omitted. The determinant of a zero-dimensional matrix is defined to be 1.

The lemma above follows immediately from a basic result in linear algebra that gives the cofactor expression for the inverse of a square matrix. In particular, for an invertible matrix A, its inverse A^{−1} can be expressed as

A^{−1} = (1/det(A)) adj(A),

where adj(A) denotes the adjugate (classical adjoint) of A.
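As a quick numerical sanity check of the cofactor expression (with a hypothetical 2 × 2 matrix of our choosing):

```python
import numpy as np

# 2x2 adjugate written out by hand: adj([[a, b], [c, d]]) = [[d, -b], [-c, a]]
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
adjA = np.array([[3.0, -1.0],
                 [-1.0, 2.0]])

# inverse via the cofactor formula A^{-1} = adj(A) / det(A)
assert np.allclose(np.linalg.inv(A), adjA / np.linalg.det(A))
```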

A simple proof can be found in Brualdi and Cvetkovic [14]. The result has been rediscovered in other contexts (see, e.g., [15]), but, as noted above, it follows immediately from the expression for the inverse of a matrix.
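The path expansion in Lemma 3.2 can also be checked numerically. The sketch below uses a hypothetical chain-shaped covariance tree (not a matrix from the paper) and compares only magnitudes, since sign conventions for the path length |p| vary across references:

```python
import numpy as np

# Hypothetical covariance matrix whose covariance graph is the chain 0-1-2-3
Sigma = np.array([
    [1.0, 0.4, 0.0, 0.0],
    [0.4, 1.0, 0.4, 0.0],
    [0.0, 0.4, 1.0, 0.4],
    [0.0, 0.0, 0.4, 1.0],
])
K = np.linalg.inv(Sigma)

# Unique path from vertex 0 to vertex 3 in the covariance tree: p = (0, 1, 2, 3)
path = [0, 1, 2, 3]
sigma_p = Sigma[0, 1] * Sigma[1, 2] * Sigma[2, 3]          # |sigma|_p
rest = [w for w in range(4) if w not in path]              # V \ p (empty here)
det_rest = np.linalg.det(Sigma[np.ix_(rest, rest)]) if rest else 1.0
formula = sigma_p * det_rest / np.linalg.det(Sigma)

# Lemma 3.2 recovers k_{03} up to sign: compare magnitudes
assert np.isclose(abs(K[0, 3]), abs(formula))
```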

The proof of our main theorem (Theorem 3.1) also requires the results proved in the lemma below.

Lemma 3.3.

Let XV = (Xv, v ∈ V) be a random vector with Gaussian distribution P = 𝒩_{|V|}(μ, Σ), and let K = Σ^{−1}. Let G0 = (V, E0) and G = (V, E) denote, respectively, the covariance and concentration graphs associated with P. Then

(i) G and G0 have the same connected components;

(ii) if a given connected component of G0 is a tree, then the corresponding connected component of G is complete, and vice versa.

Proof.

Proof of (i): the fact that G0 and G have the same connected components can be deduced from the structure of the covariance and precision matrices. The connected components of G0 correspond to diagonal blocks of Σ (after a suitable permutation of the variables). Since K = Σ^{−1}, the properties of inverses of partitioned matrices imply that K is block diagonal with the same blocks, in terms of the variables constituting them. These blocks correspond to the connected components of G and G0. Hence, both graphs have the same connected components.

Proof of (ii): let us assume now that the covariance graph G0 is a tree; in particular, it is a connected graph with only one connected component. We prove that the concentration graph G is complete by using Lemma 3.2 to compute an arbitrary coefficient kuv (u ≠ v). Since G0 is a tree, there exists exactly one path between any two vertices u and v; denote this path p = (u0 = u, …, un = v). Then, by Lemma 3.2,

kuv = (−1)^{n+1} σ_{u0u1} ⋯ σ_{u(n−1)un} |Σp| / |Σ|. (3.3)

First, note that the determinants in (3.3) are positive, since principal minors of positive definite matrices are positive. Second, since p is a path in G0, σ_{u(i−1)ui} ≠ 0 for all i = 1, …, n. Using these two facts, we deduce from (3.3) that kuv ≠ 0 for every pair of distinct vertices u and v. Hence, every pair of distinct vertices is adjacent in G, and the concentration graph G is therefore complete. The proof that G0 is complete when G is assumed to be a tree follows similarly.
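Both parts of Lemma 3.3 can be verified on a small numerical example (a hypothetical block diagonal Σ of our choosing, not from the paper):

```python
import numpy as np

# Sigma block diagonal: tree component {0,1} (a single edge) and the
# isolated vertex {2}
Sigma = np.array([
    [1.0, 0.3, 0.0],
    [0.3, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
K = np.linalg.inv(Sigma)

# part (i): K inherits the same block structure, so G has the same
# connected components as G0 (no edges between {0,1} and {2})
assert abs(K[0, 2]) < 1e-12 and abs(K[1, 2]) < 1e-12

# part (ii): within the tree component {0,1}, the corresponding
# concentration component is complete: k_01 is nonzero
assert abs(K[0, 1]) > 1e-12
```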

We now give an example illustrating the main result in this paper (Theorem 3.1).

Example 3.4.

Consider a Gaussian random vector X = (X1, …, X8) with covariance matrix Σ, whose associated covariance graph (which is a tree) is given in Figure 1(a).

Consider the sets A = {1, 2}, B = {5}, and S = {4, 6}. Note that S does not separate A and B in G0, since the path connecting A and B does not intersect S. Hence, we cannot use the covariance global Markov property to claim that XA is not independent of XB given X_{V∖(A∪B∪S)}. This is because the covariance global Markov property only allows us to read off a conditional independence when the corresponding separation is present in the graph. It is not an “if and only if” property, in the sense that the lack of a separation in the graph does not necessarily imply the lack of the corresponding conditional independence. We will show, however, that in this example XA is indeed not independent of XB given X_{V∖(A∪B∪S)}. In other words, we will show that the graph has the ability to capture this conditional dependence present in the probability distribution P.

Let us now examine the relationship between X2 and X5 given X_{3,7,8}. Note that in this example V∖(A∪B∪S) = {3, 7, 8}, 2 ∈ A, and 5 ∈ B. The covariance graph associated with the probability distribution of the random vector (X2, X5, X_{3,7,8}) is the subgraph represented in Figure 1(b) and can be obtained directly as the subgraph of G0 induced by the subset {2, 3, 5, 7, 8}.

Since 2 and 5 are connected by exactly one path in (G0)_{2,3,5,7,8}, namely p = (2, 3, 5), the coefficient k_{25·{3,7,8}}, that is, the coefficient between 2 and 5 in the inverse of the covariance matrix of (X2, X5, X_{3,7,8}), can be computed using Lemma 3.2 as follows:

k_{25·{3,7,8}} = (−1)^{2+1} σ23 σ35 |Σ({7,8})| / |Σ({2,3,5,7,8})|, (3.4)

where Σ({7,8}) and Σ({2,3,5,7,8}) are, respectively, the covariance matrices of the Gaussian random vectors (X7, X8) and (X2, X5, X_{3,7,8}). Hence, k_{25·{3,7,8}} ≠ 0, since the right-hand side of (3.4) is nonzero. Therefore, X2 is not independent of X5 given X_{3,7,8}.

Now, recall that, for any Gaussian random vector XV = (Xu, u ∈ V),

XA ⫫ XB | XS ⇔ for all (u, v) ∈ A × B, Xu ⫫ Xv | XS, (3.5)

where A, B, and S are pairwise disjoint subsets of V. The contrapositive of (3.5) yields

¬(X2 ⫫ X5 | X_{3,7,8}) ⇒ ¬(X_{1,2} ⫫ X5 | X_{3,7,8}). (3.6)

Hence, we conclude that since {4, 6} does not separate {1, 2} and {5}, X_{1,2} is not independent of X5 given X_{3,7,8}. Thus, we obtain the desired result:

¬({1,2} ⊥G0 {5} | {4,6}) and ¬(X_{1,2} ⫫ X5 | X_{3,7,8}). (3.7)
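Since the covariance matrix of Figure 1(a) is not reproduced in the text, the following sketch uses a hypothetical covariance matrix consistent with the tree structure described in the example (assumed tree edges 1-2, 2-3, 3-4, 3-5, 5-6, 5-7, 7-8, with all tree-edge covariances set to 0.3) and verifies numerically that X2 is not independent of X5 given X_{3,7,8}:

```python
import numpy as np

# Hypothetical covariance matrix consistent with the 8-vertex tree described
# in the text; the exact matrix of Figure 1(a) is not reproduced here.
n = 8
edges = [(1, 2), (2, 3), (3, 4), (3, 5), (5, 6), (5, 7), (7, 8)]
Sigma = np.eye(n)
for u, v in edges:                     # 1-based labels -> 0-based indices
    Sigma[u - 1, v - 1] = Sigma[v - 1, u - 1] = 0.3

# Sigma is diagonally dominant, hence positive definite
assert np.all(np.linalg.eigvalsh(Sigma) > 0)

# Conditional (in)dependence of X2 and X5 given X_{3,7,8}: invert the
# covariance matrix of the subvector (X2, X3, X5, X7, X8)
W = [2, 3, 5, 7, 8]
idx = [w - 1 for w in W]
K_W = np.linalg.inv(Sigma[np.ix_(idx, idx)])

# entry between 2 and 5 (positions 0 and 2 in W) is nonzero:
# X2 is NOT independent of X5 given X_{3,7,8}, as the example claims
assert abs(K_W[0, 2]) > 1e-10
```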

Figure 1: Covariance graph in Example 3.4. (a) An 8-vertex covariance tree G0. (b) The subgraph (G0)_{2,3,5,7,8}.

We now proceed to the proof of Theorem 3.1.

Proof of Theorem 3.1.

Without loss of generality, we assume that G0 is a connected tree. Suppose, to the contrary, that P is not covariance faithful to G0. Then there exists a triplet (A, B, S) of pairwise disjoint subsets of V such that XA ⫫ XB | X_{V∖(A∪B∪S)} but S does not separate A and B in G0, that is,

XA ⫫ XB | X_{V∖(A∪B∪S)} and ¬(A ⊥G0 B | S). (3.8)

As S does not separate A and B, and since G0 is a connected tree, there exists a pair of vertices (u, v) ∈ A × B such that the unique path p connecting u and v in G0 does not intersect S, that is, S ∩ p = ∅. Hence, p ⊆ V∖S = (A∪B) ∪ (V∖(A∪B∪S)). Thus, two cases are possible with regard to where the path p can lie: either p ⊆ A∪B, or p ∩ (V∖(A∪B∪S)) ≠ ∅. Let us examine each case separately.

Case 1 (p ⊆ A ∪ B).

In this case, the entire path between u and v lies in A∪B, and hence we can find a pair of vertices (u, v) belonging to p, with (u, v) ∈ A × B, such that u ~G0 v. (As an illustration of this point, consider the graph presented in Figure 1(a). Let A = {1, 2}, B = {3, 5}, and S = {4, 6}. The path p = (1, 2, 3, 5) lies entirely in A∪B, and we can find two vertices on p, namely 2 ∈ A and 3 ∈ B, that are adjacent in G0.)

Recall that since G0 is a tree, any subgraph of G0 induced by a subset of V is a union of tree connected components (see Lemma 2.1). Hence, the subgraph (G0)_W of G0 induced by W = {u, v} ∪ (V∖(A∪B∪S)) is a union of tree connected components. As u and v are adjacent in G0, they are also adjacent in (G0)_W and belong to the same connected component of (G0)_W. (In our example from Figure 1(a), with W = {2, 3, 7, 8}, (G0)_W consists of two connected components, with vertex sets {2, 3} and {7, 8}.) Hence, the only path in (G0)_W between u and v is precisely the edge (u, v). Using Lemma 3.2 to compute the coefficient k_{uv·V∖(A∪B∪S)}, that is, the (u, v)th coefficient of the inverse of the covariance matrix of the random vector XW = (Xw, w ∈ W) = (Xu, Xv, X_{V∖(A∪B∪S)}), we obtain

k_{uv·V∖(A∪B∪S)} = (−1)^{1+1} σuv |Σ(W∖{u,v})| / |Σ(W)|, (3.9)

where Σ(W) denotes the covariance matrix of XW and Σ(W∖{u,v}) denotes Σ(W) with the rows and columns corresponding to Xu and Xv omitted. We can therefore deduce from (3.9) that k_{uv·V∖(A∪B∪S)} ≠ 0. Hence, Xu is not independent of Xv given X_{V∖(A∪B∪S)}. Now, since P is Gaussian, u ∈ A, and v ∈ B, we can apply (3.5) to arrive at a contradiction to our initial assumption in (3.8).

Note that in the case where V∖(A∪B∪S) is empty, the path p must lie entirely in A∪B, because by assumption p does not intersect S. Since the case where p lies in A∪B is covered by Case 1, we may assume in Case 2 that V∖(A∪B∪S) ≠ ∅. (As an illustration of this point, consider once more the graph presented in Figure 1(a). Consider A = {1, 2}, B = {7, 8}, and S = {4, 6}. Here, V∖(A∪B∪S) = {3, 5}, and the path p = (1, 2, 3, 5, 7, 8) connecting A and B intersects V∖(A∪B∪S).)
Case 2 (p ∩ (V∖(A∪B∪S)) ≠ ∅ and V∖(A∪B∪S) ≠ ∅).

In this case, there exists a pair of vertices (u, v) ∈ A × B, with u, v ∈ p, such that u and v are connected by exactly one path p′ ⊆ p in the subgraph (G0)_W of G0 induced by W = {u, v} ∪ (V∖(A∪B∪S)) (see Lemma 2.1). (In our example from Figure 1, with A = {1, 2}, B = {7, 8}, and S = {4, 6}, the vertices u and v correspond to vertices 2 and 7, respectively, and p′ = (2, 3, 5, 7), which is a path entirely contained in {u, v} ∪ (V∖(A∪B∪S)).)

Let us now use Lemma 3.2 to compute the coefficient k_{uv·V∖(A∪B∪S)}, that is, the (u, v)-coefficient of the inverse of the covariance matrix of the random vector XW = (Xw, w ∈ W) = (Xu, Xv, X_{V∖(A∪B∪S)}). We obtain

k_{uv·V∖(A∪B∪S)} = (−1)^{|p′|+1} |σ|_{p′} |Σ(W∖p′)| / |Σ(W)|, (3.10)

where Σ(W) denotes the covariance matrix of XW and Σ(W∖p′) denotes Σ(W) with the rows and columns corresponding to the variables on the path p′ omitted. One can therefore easily deduce from (3.10) that k_{uv·V∖(A∪B∪S)} ≠ 0. Thus, Xu is not independent of Xv given X_{V∖(A∪B∪S)}. Hence, applying (3.5) with u ∈ A and v ∈ B, we once more obtain a contradiction to (3.8).

Remark 3.5.

The dual result of the theorem above, for the case of concentration trees, was proved by Becker et al. [13]. We note, however, that the argument used in the proof of Theorem 3.1 cannot also be used to prove faithfulness of Gaussian distributions that have trees as concentration graphs. The reason is as follows. In our proof, we employed the fact that the subgraph (G0)_{{u,v}∪S} of G0 induced by a subset {u, v} ∪ S ⊆ V is itself the covariance graph associated with the Gaussian subvector X_{{u,v}∪S} = (Xw, w ∈ {u, v} ∪ S). Hence, it was possible to compute the coefficient k_{uv·S}, which quantifies the conditional (in)dependence between u and v given S, in terms of the paths in (G0)_{{u,v}∪S} and the entries of the covariance matrix of X_{{u,v}∪S}. On the contrary, in the case of concentration graphs, the subgraph G_{{u,v}∪S} of the concentration graph G induced by {u, v} ∪ S is not, in general, the concentration graph of the random vector X_{{u,v}∪S}. Hence, our approach is not directly applicable in the concentration graph setting.

4. Conclusion

In this note, we considered the class of multivariate Gaussian distributions that are Markov with respect to covariance graphs and proved that Gaussian distributions which have trees as their covariance graphs are necessarily faithful. The method of proof used in the paper is vastly different in nature from the proof of the analogous result for concentration graph models. Hence, the approach used here could potentially have further implications. Future research in this area will explore whether the analysis presented in this paper can be extended to other classes of graphs or distributions.

Acknowledgments

D. Malouche was supported in part by Fulbright Fellowship Grant 68434144. B. Rajaratnam was supported in part by NSF Grants DMS-0906392, DMS-CMG-1025465, and AGS-1003823, NSA Grant H98230-11-1-0194, and Grant SUFSC10-SUSHSTF09SMSCVISG0906.

References

[1] D. R. Cox and N. Wermuth, Multivariate Dependencies, vol. 67 of Monographs on Statistics and Applied Probability, Chapman & Hall, London, UK, 1996.
[2] S. L. Lauritzen, Graphical Models, vol. 17 of Oxford Statistical Science Series, The Clarendon Press, Oxford University Press, New York, NY, USA, 1996.
[3] J. Whittaker, Graphical Models in Applied Multivariate Statistics, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, Chichester, UK, 1990.
[4] D. Edwards, Introduction to Graphical Modelling, Springer Texts in Statistics, Springer, New York, NY, USA, 2nd edition, 2000.
[5] B. Rajaratnam, H. Massam, and C. M. Carvalho, “Flexible covariance estimation in graphical Gaussian models,” The Annals of Statistics, vol. 36, no. 6, pp. 2818–2849, 2008.
[6] G. Kauermann, “On a dualization of graphical Gaussian models,” Scandinavian Journal of Statistics, vol. 23, no. 1, pp. 105–116, 1996.
[7] M. Banerjee and T. Richardson, “On a dualization of graphical Gaussian models: a correction note,” Scandinavian Journal of Statistics, vol. 30, no. 4, pp. 817–820, 2003.
[8] N. Wermuth, D. R. Cox, and G. M. Marchetti, “Covariance chains,” Bernoulli, vol. 12, no. 5, pp. 841–862, 2006.
[9] D. Malouche, “Determining full conditional independence by low-order conditioning,” Bernoulli, vol. 15, no. 4, pp. 1179–1189, 2009.
[10] K. Khare and B. Rajaratnam, “Covariance trees and Wishart distributions on cones,” in Algebraic Methods in Statistics and Probability II, vol. 516 of Contemporary Mathematics, pp. 215–223, American Mathematical Society, Providence, RI, USA, 2010.
[11] K. Khare and B. Rajaratnam, “Wishart distributions for decomposable covariance graph models,” The Annals of Statistics, vol. 39, no. 1, pp. 514–555, 2011.
[12] M. Studený, Probabilistic Conditional Independence Structures, Springer, New York, NY, USA, 2004.
[13] A. Becker, D. Geiger, and C. Meek, “Perfect tree-like Markovian distributions,” Probability and Mathematical Statistics, vol. 25, no. 2, pp. 231–239, 2005.
[14] R. A. Brualdi and D. Cvetkovic, A Combinatorial Approach to Matrix Theory and Its Applications, Chapman & Hall/CRC, New York, NY, USA, 2008.
[15] B. Jones and M. West, “Covariance decomposition in undirected Gaussian graphical models,” Biometrika, vol. 92, no. 4, pp. 779–786, 2005.