Journal of Applied Mathematics, Hindawi Publishing Corporation, Article ID 464815, doi:10.1155/2010/464815

Research Article

Some Remarks on Diffusion Distances

Maxim J. Goldberg, Theoretical and Applied Science, Ramapo College of NJ, 505 Ramapo Valley Road, Mahwah, NJ 07430, USA

Seonja Kim, Mathematics Department, SUNY Rockland Community College, 145 College Road, Suffern, NY 10901, USA

Academic Editor: Andrew Pickering

Copyright © 2010 Maxim J. Goldberg and Seonja Kim. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

As a diffusion distance, we propose to use a metric (closely related to cosine similarity) which is defined as the $L^2$ distance between two $L^2$-normalized vectors. We provide a mathematical explanation as to why the normalization makes diffusion distances more meaningful. Our proposal is in contrast to the one made some years ago by R. Coifman, which finds the $L^2$ distance between certain $L^1$ unit vectors. In the second part of the paper, we give two proofs that an extension of the mean first passage time to the mean first passage cost satisfies the triangle inequality; we do not assume that the underlying Markov matrix is diagonalizable. We conclude by exhibiting an interesting connection between the (normalized) mean first passage time and the discretized solution of a certain Dirichlet-Poisson problem, and verify our result numerically for the simple case of the unit circle.

1. Introduction

Several years ago, motivated by considering heat flow on a manifold, R. Coifman proposed a diffusion distance, both for the case of a manifold and for a discrete analog on a set of data points in $\mathbb{R}^n$. In the continuous case, his distance can be written as the $L^2$ norm of the difference of two specified vectors, each of which has unit $L^1$ norm. (An analogous situation holds in the discrete case.) Coifman's distance can be successfully used in various applications, including data organization, approximately isometric embedding of data in low-dimensional Euclidean space, and so forth. See, for example, . For a unified discussion of diffusion maps and their usefulness in spectral clustering and dimensionality reduction, see .

We see a drawback in Coifman's diffusion distance in that it takes the $L^2$ norm of the difference of two $L^1$ unit vectors, rather than $L^2$ unit vectors. As shown by a simple example later in this paper, two vectors (representing two diffusions) which we may want to consider far apart are actually close to each other in $L^2$, even though the angle between them is large, because they have small $L^2$ norm while still having unit $L^1$ norm. Additionally, applying Coifman's distance to heat flow in $\mathbb{R}^n$, a factor of a power of the time $t$ remains, with the exponent depending on the dimension $n$. It would be desirable not to have such a factor.

Our main motivation for this paper is to propose an alternate diffusion metric which finds the $L^2$ distance between two $L^2$ unit vectors (with analogous statements for the discrete case). Our distance is thus the length of the chord joining the tips, on the unit hypersphere, of two $L^2$-normalized diffusion vectors, and is therefore based on cosine similarity (see (4.4) below). Cosine similarity (affinity) is popular in kernel methods in machine learning; see, for example, [5, 6] (in particular, Section 3.5.1, Document Clustering Basics) and, for a review of kernel methods in machine learning, .

In the case of heat flow on $\mathbb{R}^n$, our proposed distance has the property that no dimensionally dependent factor is left. Furthermore, for a general manifold, our diffusion distance gives, approximately, a scaled geodesic distance between two points $x$ and $y$ when $x$ and $y$ are closer than $\sqrt{t}$, and maximum separation when the geodesic distance between $x$ and $y$, scaled by $\sqrt{t}$, goes to infinity.

We next give two proofs that the mean first passage cost (defined later in this paper as the cost to visit a particular point for the first time after leaving a specified point) satisfies the triangle inequality. (See Theorem 4.2 in , in which the author states that the triangle inequality holds for the mean first passage time.) Our two proofs do not assume that the underlying Markov matrix is diagonalizable; they do not rely on spectral theory.

We calculate explicitly the normalized limit of the mean first passage time for the unit circle $S^1$ by identifying the limit as the solution of a specific Dirichlet-Poisson problem on $S^1$. We also provide numerical verification of our calculation.

The paper is organized as follows. After a section on notation, we discuss R. Coifman's diffusion distance for both the continuous and discrete cases in Section 3. In Section 4, we define and discuss our alternate diffusion distance. In Section 5, we give two proofs of the triangle inequality for the mean first passage cost. We conclude the section by exhibiting an interesting connection between the (normalized) mean first passage time and the discretized solution of a certain Dirichlet-Poisson problem, and verify our result numerically for the simple case of $S^1$.

2. Notation and Setup

In this paper, we will present derivations for both the continuous and discrete cases.

In the continuous situation, we assume there is an underlying Riemannian manifold $M$ with measure $dx$; $x, y, u, z$ will denote points in $M$. For $t \ge 0$, $\rho_t(x,y)$ will denote a kernel on $M \times M$, with $\rho_t(x,y) \ge 0$ for all $x, y \in M$, satisfying the following semigroup property:
$$\int_M \rho_t(x,u)\,\rho_s(u,y)\,du = \rho_{t+s}(x,y), \quad (2.1)$$
for all $x, y \in M$ and $s, t \ge 0$. In addition, we assume the following property:
$$\int_M \rho_t(x,y)\,dx = 1, \quad (2.2)$$
for all $y \in M$ and all $t \ge 0$. The latter convention gives the mass preservation property
$$\int_M T_t f(x)\,dx = \int_M f(y)\,dy, \quad (2.3)$$
where
$$T_t f(x) \equiv \int_M \rho_t(x,y)\,f(y)\,dy. \quad (2.4)$$

We will often specialize to the case when $\rho_t(x,y) = \rho_t(y,x)$ for all $x, y \in M$ and $t \ge 0$, as in the case of heat flow. Note that when $\rho_t(x,y)$ is the fundamental solution for heat flow, we have $\rho_0(x,u) = \delta_x(u)$, where $\delta_x(u)$ denotes the Dirac delta function centered at $x$. We will sometimes assume (as in the case of heat flow on a compact manifold) that there exist $0 \le \lambda_1 \le \lambda_2 \le \cdots$, with each $\lambda_j$ corresponding to a finite dimensional eigenspace, and a complete orthonormal family of $L^2$ functions $\phi_1, \phi_2, \ldots$, such that
$$\rho_t(x,y) = \sum_{j=1}^{\infty} e^{-\lambda_j t}\,\phi_j(x)\,\phi_j(y), \quad (2.5)$$
for $t > 0$. We will also frequently use the following fact: if $\rho_t$ is symmetric in the space variables, then for any $x, y \in M$,
$$\int_M \rho_t(u,x)\,\rho_t(u,y)\,du = \int_M \rho_t(x,u)\,\rho_t(u,y)\,du = \rho_{2t}(x,y), \quad (2.6)$$
where we have used the symmetry of $\rho_t$ and its semigroup property.

For the discrete situation, the analog of $\rho_t(x,y)$ is an $N \times N$ matrix $A = (a_{ij})_{i,j=1}^N$, with every $a_{ij} \ge 0$. In keeping with the usual convention that $A$ is Markov if each row sum equals 1, that is, $\sum_j a_{ij} = 1$ for all $i$, the analog of $T_t f(x) = \int_M \rho_t(x,y)\,f(y)\,dy$ is $A^T v$, where $A^T$ is the transpose of $A$ and $v$ is an $N \times 1$ column vector. So the index $i$ corresponds to the second space variable in $\rho_t$, the index $j$ corresponds to the first space variable in $\rho_t$, and $t = n$, $n = 1, 2, \ldots$, corresponds to the $n$th power of $A$. The obvious analog of $\rho_t$ symmetric in its space variables is a symmetric Markov matrix $A$, that is, $A = A^T$.

For $A$ as above, not necessarily symmetric, we think of $a_{ij}$ as the probability of transitioning from state $s_i$ to state $s_j$ in $t = 1$ tick of the clock; $S = \{s_1, s_2, \ldots, s_N\}$ is the underlying set of states. For $X$ a subset of the set of states $S$, the $N \times N$ matrix $P_X$ will denote the following projection: all entries of $P_X$ are 0 except for the diagonal entries $(k,k)$ with $s_k \in X$; the latter entries are equal to 1.

Finally, $\mathbf{1}$ will denote the $N \times 1$ column vector where each entry is 1; $e_i$ will denote the $N \times 1$ column vector with $i$th component 1 and all others 0; and, for a set of states $X$, $\overline{X}$ will denote the complement of $X$ with respect to $S$.

3. A Diffusion Distance Proposed by R. Coifman

Several years ago, R. Coifman proposed a novel diffusion distance based on the ideas of heat flow on a manifold, or a discrete analog of heat flow on a set of data points (see, e.g., [1, 2] for a thorough discussion). In this section, we describe Coifman's distance using our notation and consider some of its strengths, as well as what we see as some of its drawbacks.

Referring to Section 2, for the continuous case, the unweighted version of Coifman's distance between $x, y \in M$, which we will denote by $d_{C,t}(x,y)$, can be defined as follows:
$$[d_{C,t}(x,y)]^2 \equiv \langle T_t(\delta_x - \delta_y),\, T_t(\delta_x - \delta_y)\rangle = \langle T_t\delta_x, T_t\delta_x\rangle + \langle T_t\delta_y, T_t\delta_y\rangle - 2\,\langle T_t\delta_x, T_t\delta_y\rangle. \quad (3.1)$$
Here,
$$T_t\delta_z(v) = \int_M \rho_t(v,u)\,\delta_z(u)\,du = \rho_t(v,z), \quad (3.2)$$
for $z \in M$. The $\langle\,\cdot\,,\,\cdot\,\rangle$ is the usual inner product on $L^2(M)$. (In , the authors consider a weighted version of (3.1), which naturally arises when the underlying kernel does not integrate to 1 in each variable. In terms of data analysis, this corresponds to cases where the data are sampled nonuniformly over the region of interest. For simplicity, we are just using Coifman's unweighted distance.)

Note that we thus have
$$[d_{C,t}(x,y)]^2 = \int_M \rho_t(v,x)^2\,dv + \int_M \rho_t(v,y)^2\,dv - 2\int_M \rho_t(v,x)\,\rho_t(v,y)\,dv = \|\rho_t(\cdot,x)\|_2^2 + \|\rho_t(\cdot,y)\|_2^2 - 2\,\langle \rho_t(\cdot,x),\, \rho_t(\cdot,y)\rangle. \quad (3.3)$$

Although Coifman's original definition used a kernel symmetric in the space variables, $d_{C,t}(x,y)$ as given above need not be based on a symmetric $\rho_t$. Note that, by the defining equation (3.1), $d_{C,t}(x,y)$ is symmetric in $x$ and $y$ (even if $\rho_t$ is not) and satisfies the triangle inequality. If $\rho_t$ is symmetric in the space variables, from (2.6) we see that
$$[d_{C,t}(x,y)]^2 = \rho_{2t}(x,x) + \rho_{2t}(y,y) - 2\,\rho_{2t}(x,y), \quad (3.4)$$
a form matching one of Coifman's formulations for the continuous case.

If, in addition to $\rho_t$ being symmetric in the space variables, (2.5) holds, as in the case of heat flow, we easily see that
$$[d_{C,t}(x,y)]^2 = \sum_{j=1}^{\infty} e^{-2\lambda_j t}\,(\phi_j(x) - \phi_j(y))^2, \quad (3.5)$$
the original form proposed by Coifman. Note that the latter expression again explicitly shows that $d_{C,t}(x,y)$ is symmetric in $x$ and $y$ and satisfies the triangle inequality (by considering, for example, the right-hand side as the square of a weighted distance in $\ell^2$).

Referring again to Section 2, for the discrete situation, where we start with a set of data points $S = \{s_1, s_2, \ldots, s_N\}$ and $A$ is a Markov matrix specifying the transition probabilities between the "states" of $S$, the distance between two data points $s_i$ and $s_j$ is given by
$$[d_{C,1}(s_i,s_j)]^2 \equiv \langle A^T(e_i - e_j),\, A^T(e_i - e_j)\rangle = \langle AA^T e_i, e_i\rangle + \langle AA^T e_j, e_j\rangle - 2\,\langle AA^T e_i, e_j\rangle = (AA^T)_{ii} + (AA^T)_{jj} - 2\,(AA^T)_{ij}, \quad (3.6)$$
where $\langle\,\cdot\,,\,\cdot\,\rangle$ is the usual inner product in $\mathbb{R}^N$, and, for a matrix $B$, $(B)_{ij}$ denotes the $i,j$ entry of $B$. Again, symmetry and the triangle inequality are easily verified. If $A$ is symmetric,
$$[d_{C,1}(s_i,s_j)]^2 = (A^2)_{ii} + (A^2)_{jj} - 2\,(A^2)_{ij}. \quad (3.7)$$
The "1" appearing in the subscript of $d_{C,1}(s_i,s_j)$ refers to the fact that $A^1 = A$ is used, corresponding to $t = 1$ in the continuous case. As the diffusion along data points flows, after $n$ ticks of the clock, we can successively consider
$$[d_{C,n}(s_i,s_j)]^2 = (A^n (A^T)^n)_{ii} + (A^n (A^T)^n)_{jj} - 2\,(A^n (A^T)^n)_{ij}, \quad (3.8)$$
which, for a symmetric $A$, equals
$$(A^{2n})_{ii} + (A^{2n})_{jj} - 2\,(A^{2n})_{ij}. \quad (3.9)$$
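To make the discrete formulas concrete, here is a minimal NumPy sketch (our illustration, not code from the paper; the 3-state Markov matrix is an arbitrary toy example):

```python
import numpy as np

def coifman_sq(A, i, j, n=1):
    """Squared Coifman diffusion distance after n ticks of the clock:
    (B)_ii + (B)_jj - 2(B)_ij, with B = A^n (A^T)^n."""
    An = np.linalg.matrix_power(A, n)
    B = An @ An.T
    return B[i, i] + B[j, j] - 2.0 * B[i, j]

# Toy 3-state Markov matrix (rows sum to 1), chosen only for illustration.
A = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

d01 = np.sqrt(coifman_sq(A, 0, 1, n=1))
```

On such examples one can check numerically that the distance is symmetric, vanishes when $i = j$, and satisfies the triangle inequality.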

An important benefit of introducing a diffusion distance as above can be illustrated by considering (3.5). If $\rho_t$ is such that (3.5) holds for a complete orthonormal family $\{\phi_j\}$, we see that as $t$ increases, we achieve an (approximately) isometric embedding of $M$ into successively lower-dimensional vector spaces (with a weighted norm). More specifically, for $\lambda_j > 0$, if $t$ is large, the terms $e^{-2\lambda_j t}(\phi_j(x) - \phi_j(y))^2$ are nearly 0. So, as $t$ increases, the "heat smeared" manifold $M$ is parametrized by only a few leading $\phi_j$'s. Thus, "stepping" through higher and higher times, we obtain a natural near-parametrization of more and more smeared versions of $M$, giving rise to a natural ladder of approximations to $M$.

Analogous considerations hold in the discrete situation for $A$ symmetric, when we easily see that the eigenvalues of $A^2$ are between 0 and 1, and that the eigenvalues of $A^{2n}$ decrease exponentially as $n$ increases (the "heat smeared" data points are now parametrized by a few leading eigenvectors of $A$, those associated to the largest eigenvalues).

See  for more discussion and examples of the natural embedding discussed above, along with illustrations of its power to organize unordered data, as well as its insensitivity to noise.

We would now like to point out what we see as some drawbacks of Coifman's distance, which led us to propose an alternative distance in Section 4.

Let us consider (3.4) for the case where
$$\rho_t(x,y) = (4\pi t)^{-n/2}\,e^{-|x-y|^2/4t}, \quad (3.10)$$
the fundamental solution to the heat equation in $\mathbb{R}^n$. Then,
$$[d_{C,t}(x,y)]^2 = \frac{2 - 2\,e^{-|x-y|^2/8t}}{(8\pi t)^{n/2}}. \quad (3.11)$$
If $|x-y|^2/8t$ is small, then to leading order in $|x-y|^2/4t$,
$$[d_{C,t}(x,y)]^2 = \frac{1}{(8\pi t)^{n/2}} \left( \frac{|x-y|^2}{4t} + \mathcal{O}\!\left( \left( \frac{|x-y|^2}{4t} \right)^{2} \right) \right). \quad (3.12)$$
Thus, if $|x-y| \ll \sqrt{t}$, we do recover the geodesic distance between $x$ and $y$, but, due to the $1/t^{n/2}$ factor in front, normalized by a power of $t$ which depends on the dimension $n$. As pointed out by the reviewer, for $\mathbb{R}^n$ itself, the normalization does depend on $n$, but is simply a global change of scale for each $t$, and thus basically immaterial. Suppose, however, that the data we are considering come in two "clumps," one of dimension $n$ and the other of dimension $m$, with $n \ne m$. Let us also suppose these clumps are somehow joined together and, far away from the joining region, each clump is basically a flat Euclidean space of the corresponding dimension. Then, far away from the joint, heat diffusion in a particular clump would behave as if it were in $\mathbb{R}^n$, respectively $\mathbb{R}^m$ (until the time that the flowing heat "hits" the joint region). Thus, in the part of each clump that is far from the joint, the diffusion distance would be normalized differently, one normalization depending on $n$ and the other on $m$. An overall change of scale would not remove this difference; thus we would not recover the usual Euclidean distance in the two clumps simultaneously, as we would like.

The second point of concern is more general in nature. In the continuous case, Coifman's distance involves the $L^2$ distance between $T_t\delta_z$, when $z = x$, and $T_t\delta_z$, when $z = y$; see (3.1). The $L^1$ norm of $T_t\delta_z$ is 1, since
$$\int_M T_t\delta_z(v)\,dv = \int_M \left( \int_M \rho_t(v,u)\,\delta_z(u)\,du \right) dv = 1, \quad (3.13)$$
using the mass preservation assumption (2.2). For the discrete case, $\mathbf{1}^T A^T e_i = 1$, where $\mathbf{1}^T$ is the $1 \times N$ vector of 1's.

So the diffusion distance proposed by Coifman finds the $L^2$ (resp., $\ell^2$) distance between $L^1$ (resp., $\ell^1$) normalized vectors. Let us illustrate by an example for the discrete situation, with $N = 10{,}000$, in which this may lead to undesired results. Without specifying the matrix $A$, suppose that after some time has passed, we have the following two $1 \times 10{,}000$ vectors giving two different results of diffusion:
$$v_1 = \left( \tfrac{1}{100}, \ldots, \tfrac{1}{100}, 0, \ldots, 0 \right), \quad (3.14)$$
where the first one hundred entries are each $1/100$ and the remaining 9,900 entries are 0, and
$$v_2 = \left( \tfrac{1}{10{,}000}, \tfrac{1}{10{,}000}, \ldots, \tfrac{1}{10{,}000} \right), \quad (3.15)$$
where each entry is $1/10{,}000$.

Note that $v_1$ and $v_2$ both have $\ell^1$ norm 1. Now, considering two canonical basis vectors $e_i^T$ and $e_j^T$, $i \ne j$, each of which has $\ell^1$ norm 1, we see that $\langle e_i^T - e_j^T,\, e_i^T - e_j^T \rangle = 2$. So, a distance of $\sqrt{2}$ gives the (in fact, maximum) separation between two completely different ($\ell^1$ unit) diffusion vectors. Returning to $v_1$ and $v_2$, note that $v_2$ corresponds to total diffusion, while $v_1$ has only diffused over 1% of the entries. We would thus hope that $v_1$ and $v_2$ would be nearly as separated as $e_i^T$ and $e_j^T$, that is, have diffusion distance not much smaller than $\sqrt{2}$. But a trivial calculation shows that
$$\langle v_1 - v_2,\, v_1 - v_2 \rangle < 0.1, \quad (3.16)$$
which is much smaller than what we would like. The problem is that $\langle v_1 - v_2, v_1 - v_2 \rangle$ is small since the $\ell^2$ norm of each of $v_1$ and $v_2$ is small, even though the $\ell^1$ norm of each is 1.
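This example is easy to reproduce. The sketch below (ours) verifies the small $\ell^2$ distance and, for contrast, also computes the cosine-similarity-based quantity that motivates the next section:

```python
import numpy as np

N = 10_000
v1 = np.zeros(N)
v1[:100] = 1 / 100          # diffused over only 1% of the entries
v2 = np.full(N, 1 / N)      # total diffusion

# Both are l1-unit vectors, yet their l2 distance is tiny:
sq_dist = np.dot(v1 - v2, v1 - v2)          # = 0.0099 < 0.1

# The chord distance between the l2-normalized versions, however,
# is nearly the maximal value: 2 - 2*cos = 1.8, close to 2.
cos_sim = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
norm_sq_dist = 2 - 2 * cos_sim              # = 1.8
```

So the raw $\ell^2$ distance is under 0.01, while the squared chord distance between the $\ell^2$-normalized vectors is 1.8, close to its maximum of 2.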

In the next section, we propose a variant of the diffusion distance discussed in this section. Our version finds the $L^2$ (resp., $\ell^2$) distance between vectors which are normalized to have $L^2$ (resp., $\ell^2$) norm 1, rather than $L^1$ (resp., $\ell^1$) norm 1.

4. An Alternate Diffusion Distance

In this section, we propose a new diffusion distance. Let us first define our alternate diffusion distance for the continuous case. Refer to Section 2 for the definitions of functions and operators used below.

For any $z \in M$, let
$$\psi_z(u) \equiv \frac{\delta_z(u)}{\|T_t\delta_z\|_2} = \frac{\delta_z(u)}{\sqrt{\int_M \rho_t(v,z)\,\rho_t(v,z)\,dv}}. \quad (4.1)$$
Then,
$$T_t\psi_z(u) = \int_M \rho_t(u,w)\,\frac{\delta_z(w)}{\|T_t\delta_z\|_2}\,dw = \frac{\rho_t(u,z)}{\|T_t\delta_z\|_2}. \quad (4.2)$$
Note that $T_t\psi_z(\cdot)$ has $L^2$ norm 1:
$$\int_M [T_t\psi_z(u)]^2\,du = \frac{\int_M \rho_t^2(u,z)\,du}{\int_M \rho_t^2(v,z)\,dv} = 1. \quad (4.3)$$

For $x, y \in M$, we define our diffusion distance, $d_{2,t}(x,y)$, as follows:
$$[d_{2,t}(x,y)]^2 \equiv \langle T_t(\psi_x - \psi_y),\, T_t(\psi_x - \psi_y)\rangle = 2 - 2\,\frac{\int_M \rho_t(u,x)\,\rho_t(u,y)\,du}{\|\rho_t(\cdot,x)\|_2\,\|\rho_t(\cdot,y)\|_2} = 2 - 2\,\frac{\langle \rho_t(\cdot,x),\, \rho_t(\cdot,y)\rangle}{\|\rho_t(\cdot,x)\|_2\,\|\rho_t(\cdot,y)\|_2}, \quad (4.4)$$
where we have used (4.3). Here again, $\langle\,\cdot\,,\,\cdot\,\rangle$ is the usual inner product on $L^2(M)$. Note the analogy to (3.3).

As is clear from the defining equality in (4.4), $d_{2,t}(x,y)$ is symmetric in $x$ and $y$ and satisfies the triangle inequality:
$$d_{2,t}(x,z) \le d_{2,t}(x,y) + d_{2,t}(y,z), \quad (4.5)$$
for all $x, y, z \in M$. Geometrically, $d_{2,t}(x,y)$ is the length of the chord joining the tips of the unit vectors $T_t\psi_x$ and $T_t\psi_y$. Since these unit vectors are nonnegative, their inner product is nonnegative, and we have
$$0 \le d_{2,t}(x,y) \le \sqrt{2}, \quad (4.6)$$
for all $x, y \in M$ and $t \ge 0$.

If $\rho_t$ is symmetric in the space variables, by (2.6) we have that
$$[d_{2,t}(x,y)]^2 = 2 - 2\,\frac{\rho_{2t}(x,y)}{\sqrt{\rho_{2t}(x,x)\,\rho_{2t}(y,y)}}. \quad (4.7)$$

As an example, again consider the case where $\rho_t(x,y) = (4\pi t)^{-n/2}\,e^{-|x-y|^2/4t}$, the fundamental solution to the heat equation in $\mathbb{R}^n$. Then,
$$[d_{2,t}(x,y)]^2 = 2 - 2\,e^{-|x-y|^2/8t}. \quad (4.8)$$
Note that if $|x-y| \ll \sqrt{t}$, then
$$[d_{2,t}(x,y)]^2 \approx 2 - 2\left( 1 - \frac{|x-y|^2}{8t} \right) = \frac{|x-y|^2}{4t}, \quad (4.9)$$
so $d_{2,t}(x,y)$ gives (approximately) the geodesic distance in $\mathbb{R}^n$, in the "near regime" where $|x-y| \ll \sqrt{t}$, and with scale $\sqrt{t}$. Note that, unlike (3.12), no $t^{n/2}$ factor appears. (Also see the discussion following (3.12).) Also note that if $|x-y|^2/8t$ is large, $d_{2,t}(x,y) \approx \sqrt{2}$ (the greatest possible distance; see (4.6)), so for such $t$ the points $x$ and $y$ are (nearly) maximally separated. Hence $d_{2,t}(x,y)$, for the case of heat flow in $\mathbb{R}^n$, gives a scaled geodesic distance when $x$ is close to $y$, with $2\sqrt{t}$ as the unit of length, and near maximum separation when $x$ is far from $y$ at the scale $\sqrt{t}$.
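The two regimes for the Gaussian kernel can be checked numerically; the following is our own sketch, with the points and the value of $t$ chosen arbitrarily:

```python
import numpy as np

def d2_sq_heat(x, y, t):
    """Squared alternate diffusion distance 2 - 2 exp(-|x-y|^2 / 8t)
    for the heat kernel on R^n (any dimension n; no t^{n/2} factor)."""
    r2 = np.sum((x - y) ** 2)
    return 2 - 2 * np.exp(-r2 / (8 * t))

t = 100.0
x = np.zeros(3)

y_near = np.array([0.1, 0.0, 0.0])       # |x-y| = 0.1, much less than sqrt(t)
near = d2_sq_heat(x, y_near, t)          # close to |x-y|^2 / (4t), the near regime

y_far = np.array([1000.0, 0.0, 0.0])     # |x-y| much greater than sqrt(t)
far = np.sqrt(d2_sq_heat(x, y_far, t))   # close to sqrt(2): near-maximal separation
```

Note that the same function works in any dimension, illustrating that no dimension-dependent normalization is present.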

For any, say, compact Riemannian manifold $M$, if $\rho_t(x,y)$ is the fundamental solution to the heat equation on $M$, we have that
$$\rho_t(x,y) = (4\pi t)^{-n/2}\,e^{-d^2(x,y)/4t}\,(1 + \mathcal{O}(t)), \quad \text{as } t \to 0^+, \quad (4.10)$$
where $d(x,y)$ is the geodesic distance on $M$ (see ). Hence, repeating the expansion in (4.9) for a compact manifold $M$, with $t$ small and $d(x,y) \ll \sqrt{t}$, we have that $d_{2,t}(x,y) \approx d(x,y)/2\sqrt{t}$, again recovering (scaled) geodesic distance. (The discussion following (3.12) gives an example for which it would be preferable not to have a normalization factor which depends on the dimension.) Exponentially decaying bounds on the fundamental solution of the heat equation for a manifold $M$ (see [9, Chapter XII, Section 12]) suggest that $x$ and $y$ become nearly maximally separated, as measured by $d_{2,t}(x,y)$, when $d(x,y)$ (scaled by $\sqrt{t}$) is large, just as in the Euclidean case.

In the discrete situation, where we start with a set of data points $S = \{s_1, s_2, \ldots, s_N\}$ and $A$ is a Markov matrix specifying the transition probabilities between the "states" of $S$, for $n = 1, 2, \ldots$, we let
$$v_{i,n} = \frac{e_i}{\|(A^T)^n e_i\|}, \quad (4.11)$$
where $e_i$ is the $i$th canonical basis vector (see Section 2) and $\|\cdot\|$ is the $\ell^2$ vector norm. For $s_i, s_j \in S$ and $n = 1, 2, \ldots$, we define $d_{2,n}(s_i,s_j)$ by
$$[d_{2,n}(s_i,s_j)]^2 \equiv \langle (A^T)^n(v_{i,n} - v_{j,n}),\, (A^T)^n(v_{i,n} - v_{j,n})\rangle = 2 - 2\,\frac{\langle (A^T)^n e_i,\, (A^T)^n e_j\rangle}{\|(A^T)^n e_i\|\,\|(A^T)^n e_j\|} = 2 - 2\,\frac{(A^n (A^T)^n)_{ij}}{\sqrt{(A^n (A^T)^n)_{ii}\,(A^n (A^T)^n)_{jj}}}, \quad (4.12)$$
where $\langle\,\cdot\,,\,\cdot\,\rangle$ and $\|\cdot\|$ are, respectively, the usual inner product and norm in $\mathbb{R}^N$, and, for a matrix $B$, $(B)_{ij}$ denotes the $i,j$ entry of $B$.

If $A$ is symmetric,
$$[d_{2,n}(s_i,s_j)]^2 = 2 - 2\,\frac{(A^{2n})_{ij}}{\sqrt{(A^{2n})_{ii}\,(A^{2n})_{jj}}}. \quad (4.13)$$
As before, $n$ represents the $n$th tick of the clock.
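A minimal NumPy sketch of this discrete distance (ours, with an arbitrary toy Markov matrix, not code from the paper):

```python
import numpy as np

def d2n(A, i, j, n=1):
    """Alternate diffusion distance after n ticks: the chord length between
    the l2-normalized vectors (A^T)^n e_i and (A^T)^n e_j, computed via
    B = A^n (A^T)^n as sqrt(2 - 2 B_ij / sqrt(B_ii B_jj))."""
    An = np.linalg.matrix_power(A, n)
    B = An @ An.T
    cos_sim = B[i, j] / np.sqrt(B[i, i] * B[j, j])
    return np.sqrt(max(0.0, 2.0 - 2.0 * cos_sim))   # clamp tiny negatives

A = np.array([[0.8, 0.1, 0.1],     # toy Markov matrix, rows sum to 1
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])
```

The values always lie in $[0, \sqrt{2}]$, with $d_{2,n}(s_i, s_i) = 0$, and the usual chord-distance triangle inequality can be verified numerically.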

5. The Mean First Passage Cost Satisfies the Triangle Inequality: An Example of Its Normalized Limit

In this section, we consider a slightly different topic: the mean first passage cost (defined below) between two states as a measure of separation in the discrete situation. We give two explicit proofs showing that the mean first passage cost satisfies the triangle inequality (in , the author states this result, for the case when all costs are equal to 1, as Theorem 4.2, but the proof is not very explicit in our opinion).

In , as well as in some of the references listed therein, it is shown that the symmetrized mean first passage time and cost are metrics (for mean first passage cost see, in particular, ; also, in the above sources the symmetrized mean first passage time is called the commute time). "Symmetrized" refers to the sum of the first cost (time) to reach a specified state from a starting state and that to return back to the starting state. This symmetrization is necessary to ensure a quantity symmetric in the starting and destination states. In the sources cited above, the fundamental underlying operator is the graph Laplacian $L$, which, using the notation of , is defined as $L = D - W$. Here, $W = (w_{ij})$ is the adjacency matrix of a graph, and $D$ is the diagonal degree matrix, whose $i$th diagonal entry equals $\sum_j w_{ij}$. In addition to assuming the nonnegativity of the $w_{ij}$'s, the authors of the above works assume that $W$ is symmetric. The resulting symmetry (and positive semi-definiteness) of $L$ implies that $L$ has a full set of nonnegative eigenvalues, and the diagonalizability of $L$ is used heavily in the proofs that the commute time/cost is a distance. In the random walk interpretation (see, e.g., ), the following normalized Laplacian is relevant: $L_{rw} = I - D^{-1}W$. To make a connection with the notation of the present paper, $D^{-1}W = A$, a Markov matrix giving the transition probabilities of the random walk. Although $D^{-1}W$ is not necessarily symmetric, it is easy to see that $D^{-1}W = D^{-1/2}\{D^{-1/2} W D^{-1/2}\} D^{1/2}$ (see the discussion in ). Hence $D^{-1}W$, while not itself symmetric in general, is conjugate to the symmetric matrix $D^{-1/2} W D^{-1/2}$, and thus also has a full complement of eigenvalues.

In this section, as in the rest of the paper unless stated otherwise, we do not assume that the Markov matrix $A$ is symmetric or conjugate to a symmetric matrix; hence $A$ may not be diagonalizable (i.e., $A$ may have Jordan blocks of dimension greater than 1). We thus do not have spectral theory available to us. Furthermore, we do not wish to necessarily symmetrize the mean first passage time/cost to obtain a symmetric quantity; we will not actually get a distance, but will try to obtain the "most important" property of a distance, namely, the triangle inequality.

A model example we are thinking about is the following. Suppose we have a map grid and are tracking some localized storm which is currently at some particular location on the grid. We suppose that the storm behaves like a random walk and has a certain (constant in time) probability to move from one grid location to another at each “tick of the clock" (time step). We can thus model the movements of the storm by a Markov matrix A, with the nth power of A giving the transition probabilities after n ticks of the clock. If there is no overall wind, the matrix A could reasonably be assumed to be symmetric, and we could use spectral theory. But suppose there is an overall wind in some fixed direction, which is making it more probable for the storm to move north, say, rather than south. Then the matrix A is not symmetric; there is a preferred direction of the storm to move in, from one tick of the clock to the next; spectral theory cannot, in general, be used. Furthermore, it may not be reasonable in this situation to consider the commute time—the symmetrized mean first passage time—since we may rather want to know the expected time to reach a certain population center from the current location of the storm, and may not care about the storm's return to the original location. Thus the mean first passage time would be the quantity of interest.

In the first part of this section, we give two proofs that the mean first passage cost/time, associated with a not-necessarily-symmetric Markov matrix $A$, does indeed satisfy the triangle inequality; our proofs do not rely on spectral theory. We think that satisfying the triangle inequality, while in general failing to be symmetric, is still a very useful property for a pairwise measure of separation to have.

We conclude the section by exhibiting a connection between the (normalized) mean first passage time and the discretized solution of a certain Dirichlet-Poisson problem and verify our result numerically for the simple case of the unit circle.

In this section, $S = \{s_1, s_2, \ldots, s_N\}$ is a finite set of states and $A$ is a Markov matrix giving the transition probabilities between states in one tick of the clock (see Section 2). $C$ will denote an $N \times N$ matrix with nonnegative entries, $C = (c_{ij})_{i,j=1}^N$. We will think of each $c_{ij}$ as the "cost" associated with the transition from state $s_i$ to state $s_j$. By a slight abuse of notation, for $1 \le m, n \le N$, $P_n$ will be the $N \times N$ matrix in which all entries are 0, except the $(n,n)$ entry, which is 1 (this corresponds to $X = \{s_n\}$ in Section 2). Also, $P_{mn}$ will be the $N \times N$ matrix in which all entries are 0, except the $(m,m)$ and $(n,n)$ entries, each of which is 1 (this corresponds to $X = \{s_m, s_n\}$ in Section 2).

Let $Y_{mn}$ be the random variable which gives the cost accumulated by a particle starting at state $s_m$ until its first visit to state $s_n$ after leaving $s_m$. In other words, if a particular path of the particle is given by the states $s_m, s_{j_1}, s_{j_2}, \ldots, s_{j_p}, s_n$, the value of $Y_{mn}$ is $c_{m j_1} + c_{j_1 j_2} + \cdots + c_{j_p n}$. We suppose $A$ has the property that for every $i, j$, there exists an $n$ such that $(A^n)_{ij} > 0$; that is, every state is eventually reachable from every other state. Then, as is shown in  (using slightly different notation), we have the following formula for $\mathbb{E}(Y_{mn})$, which is the expected cost of going from state $s_m$ to state $s_n$:
$$\mathbb{E}(Y_{mn}) = e_m^T \left[ I - A(I - P_n) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1}, \quad (5.1)$$
where $(c_{pq}\,a_{pq})$ is the $N \times N$ matrix with $p,q$ entry equal to $c_{pq}\,a_{pq}$. (In particular, it is shown in  that $I - A(I - P_n)$ is invertible and $(A(I - P_n))^k \to 0$ as $k \to \infty$.) See [14, 15] for discussion of related expected values, and [8, 10-12, 16-18] for discussion of mean first passage times and related concepts.
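Formula (5.1) translates directly into a linear solve. The sketch below (ours, not code from the paper) implements it and sanity-checks it on a two-state chain with unit costs, where the first passage time from one state to the other is geometric with success probability 1/2, hence has mean 2:

```python
import numpy as np

def mean_first_passage_cost(A, C, m, n):
    """E(Y_mn) = e_m^T [I - A(I - P_n)]^{-1} (c_pq a_pq) 1, as in (5.1).
    C * A is the entrywise product (c_pq a_pq)."""
    N = A.shape[0]
    IP = np.eye(N)
    IP[n, n] = 0.0                         # the matrix I - P_n
    M = np.eye(N) - A @ IP                 # I - A(I - P_n)
    return float(np.linalg.solve(M, (C * A) @ np.ones(N))[m])

# Two-state sanity check: symmetric transitions with probability 1/2.
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])
C = np.ones((2, 2))                        # unit costs: Y_mn is first passage time
ey01 = mean_first_passage_cost(A, C, 0, 1) # expected value 2.0
```

On any small irreducible chain, one can also verify the triangle inequality of Proposition 5.1 numerically by looping over triples of states.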

We will give two proofs that the expected cost of going from one state to another satisfies the triangle inequality.

Proposition 5.1.

$\mathbb{E}(Y_{ij}) \le \mathbb{E}(Y_{ik}) + \mathbb{E}(Y_{kj})$.

We again note that this proposition, for the case when all costs are 1, is stated as Theorem 4.2 in , but we feel the proof there is not very explicit. (In our proofs below, we assume $j \ne k$; if $j = k$, the inequality in Proposition 5.1 is immediate.)

Proof.

(1) Our first proof is probabilistic. Let a random walker start at state $s_i$ and accumulate costs given by the matrix $C$ as he moves from state to state. As soon as the walker reaches state $s_j$, we obtain a sample value of $Y_{ij}$. Now, at this point of the walk, there are two possibilities: either the walker has passed through state $s_k$ before his first visit to $s_j$ after leaving $s_i$, or he has not. In the first instance, we have obtained sample values of $Y_{ik}$ and $Y_{kj}$ along the way, and $Y_{ij} = Y_{ik} + Y_{kj}$ for this simulation. In the second case, we let the walker continue until he first reaches $s_k$, to obtain a sample value of $Y_{ik}$, and walk still more until he reaches $s_j$ for the first time since leaving $s_k$, thus giving a sample value of $Y_{kj}$ (note that, by the memoryless property, this sample value of $Y_{kj}$ is independent of the walker's prior history). In the second case, we thus clearly have $Y_{ij} < Y_{ik} + Y_{kj}$. Combining the two cases, we have $Y_{ij} \le Y_{ik} + Y_{kj}$. Repeating the simulation, averaging, and taking the limit as the number of simulations goes to infinity, we obtain that $\mathbb{E}(Y_{ij}) \le \mathbb{E}(Y_{ik}) + \mathbb{E}(Y_{kj})$.

(2) Our second proof is via explicit matrix computations. Let us define the following two quantities:
$$Q_1 \equiv e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1}, \qquad Q_2 \equiv \left\{ e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} A P_k\,\mathbf{1} \right\}\, e_k^T \left[ I - A(I - P_j) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1}. \quad (5.2)$$
(See Section 2 and the paragraphs before the statement of Proposition 5.1.) Now, we have
$$Q_1 = e_i^T \sum_{m=0}^{\infty} [A(I - P_{jk})]^m (c_{pq}\,a_{pq})\,\mathbf{1} \le e_i^T \sum_{m=0}^{\infty} [A(I - P_k)]^m (c_{pq}\,a_{pq})\,\mathbf{1} = e_i^T \left[ I - A(I - P_k) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1} = \mathbb{E}(Y_{ik}); \quad (5.3)$$
see (5.1). Also,
$$Q_2 = \left\{ e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} A P_k\,\mathbf{1} \right\} \mathbb{E}(Y_{kj}). \quad (5.4)$$
But
$$e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} A P_k\,\mathbf{1} \le e_i^T \left[ I - A(I - P_k) \right]^{-1} A P_k\,\mathbf{1} = e_i^T \left[ I - A(I - P_k) \right]^{-1} \left( I - A(I - P_k) \right) \mathbf{1} = 1, \quad (5.5)$$
where we have used a series expansion to show the first inequality (all entries are nonnegative), and the fact that $A P_k \mathbf{1} = (I - A(I - P_k))\,\mathbf{1}$, since $A\mathbf{1} = \mathbf{1}$. Thus, $Q_2 \le \mathbb{E}(Y_{kj})$.

We will finish our second proof by showing that $Q_1 + Q_2 = \mathbb{E}(Y_{ij})$. First note that
$$Q_2 = \left\{ e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} A P_k\,\mathbf{1} \right\}\, e_k^T \left[ I - A(I - P_j) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1} = e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} A P_k \left[ I - A(I - P_j) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1}, \quad (5.6)$$
using that $P_k = e_k e_k^T = P_k \mathbf{1}\, e_k^T$. Thus,
$$Q_1 + Q_2 = e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} \left\{ I - A(I - P_j) + A P_k \right\} \left[ I - A(I - P_j) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1} = e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} \left\{ I - A(I - P_j - P_k) \right\} \left[ I - A(I - P_j) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1} = e_i^T \left[ I - A(I - P_{jk}) \right]^{-1} \left\{ I - A(I - P_{jk}) \right\} \left[ I - A(I - P_j) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1} = e_i^T \left[ I - A(I - P_j) \right]^{-1} (c_{pq}\,a_{pq})\,\mathbf{1} = \mathbb{E}(Y_{ij}). \quad (5.7)$$
Here we have used the fact that $P_j + P_k = P_{jk}$ (as mentioned earlier, we are assuming $j \ne k$; the triangle inequality we are proving holds trivially for the case $j = k$).

We would like to point out that the decomposition $\mathbb{E}(Y_{ij}) = Q_1 + Q_2$ in the second proof above is not a "miraculous" guess. We arrived at it by writing $\mathbb{E}(Y_{ij})$ as the derivative (evaluated at 0) of the characteristic function (Fourier transform) of $Y_{ij}$ (see ), and breaking up the expression to be differentiated into a sum of two terms: one corresponding to the random walk going from $s_i$ to $s_j$ without visiting $s_k$ first, and one corresponding to visiting $s_k$ before reaching $s_j$. After differentiation, the resulting six pieces, when suitably combined into two terms, yielded $Q_1$ and $Q_2$.
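The probabilistic argument in proof (1) can also be illustrated by direct simulation. The following Monte Carlo sketch (ours; the 3-state chain is arbitrary) samples $Y_{mn}$ and estimates the three expected values appearing in Proposition 5.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Y(A, C, m, n):
    """One sample of Y_mn: walk from s_m, adding the cost of each
    transition, until the first arrival at s_n after leaving s_m."""
    state, cost = m, 0.0
    while True:
        nxt = rng.choice(A.shape[0], p=A[state])
        cost += C[state, nxt]
        state = nxt
        if state == n:
            return cost

A = np.array([[0.6, 0.3, 0.1],     # toy Markov chain
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
C = np.ones((3, 3))                # unit costs: Y_mn is the first passage time

est = lambda m, n: np.mean([sample_Y(A, C, m, n) for _ in range(5000)])
e01, e02, e21 = est(0, 1), est(0, 2), est(2, 1)
```

Up to Monte Carlo error, the estimates satisfy $\mathbb{E}(Y_{01}) \le \mathbb{E}(Y_{02}) + \mathbb{E}(Y_{21})$, here with a large margin.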

We conclude this section by considering certain (suitably normalized) limiting values of the expected cost of going from state $s_i$ to state $s_j$, for the first time after leaving $s_i$, given by (5.1). For this discussion, we will take all the costs to be identically 1, that is, $c_{pq} = 1$ for all $p, q$. Then, we see from (5.1) that
$$\mathbb{E}(Y_{ij}) = e_i^T \left[ I - A(I - P_j) \right]^{-1} \mathbf{1} = e_i^T \left[ I - A(I - P_j) \right]^{-1} A\,\mathbf{1} = e_i^T \left[ I - A(I - P_j) \right]^{-1} A(I - P_j)\,\mathbf{1} + e_i^T \left[ I - A(I - P_j) \right]^{-1} A P_j\,\mathbf{1} = e_i^T \left[ I - A(I - P_j) \right]^{-1} A(I - P_j)\,\mathbf{1} + e_i^T \left[ I - A(I - P_j) \right]^{-1} \left( I - A(I - P_j) \right) \mathbf{1} = e_i^T \left[ I - A(I - P_j) \right]^{-1} A(I - P_j)\,\mathbf{1} + 1, \quad (5.8)$$
where we have used $A P_j \mathbf{1} = (I - A(I - P_j))\,\mathbf{1}$, since $A\mathbf{1} = \mathbf{1}$.

Now, let us digress a little to describe a stochastic approach to solving certain boundary value problems. The description below follows very closely parts of Chapter 9 in . Some statements are excerpted verbatim from that work, with minor changes in some labels. The background results below are well known and are often referred to as Dynkin's formula (see, e.g., ). We present them for the reader's convenience and will use them to exhibit an interesting connection between the mean first passage time and the discretized solution of a certain Dirichlet-Poisson problem; see (5.16).

Let $D$ be a domain in $\mathbb{R}^n$, and let $L$ denote a partial differential operator on $C^2(\mathbb{R}^n)$ of the form
$$L = \sum_{i=1}^n b_i(x)\,\frac{\partial}{\partial x_i} + \sum_{i,j=1}^n a_{ij}(x)\,\frac{\partial^2}{\partial x_i\,\partial x_j}, \quad (5.9)$$
where $a_{ij}(x) = a_{ji}(x)$. We assume each $a_{ij}(x) \in C^2(D)$ is bounded and has bounded first and second partial derivatives; also, each $b_i(x)$ is Lipschitz. Suppose $L$ is uniformly elliptic in $D$ (i.e., all the eigenvalues of the symmetric matrix $(a_{ij}(x))$ are positive and stay uniformly away from 0 in $D$). Then, for $g \in C^\alpha(D)$, some $\alpha > 0$, with $g$ bounded, and for $\phi \in C(\partial D)$, the function $w$ defined below solves the following Dirichlet-Poisson problem:
$$Lw = -g \quad \text{in } D, \qquad \lim_{x \to y,\; x \in D} w(x) = \phi(y), \quad \text{for all regular points } y \in \partial D. \quad (5.10)$$
(Regular points in this context are defined in  and turn out to be the same as the regular points in the classical sense, i.e., the points $y$ on $\partial D$ where the limit of the generalized Perron-Wiener-Brelot solution coincides with $\phi(y)$, for all $\phi \in C(\partial D)$.)

Now we define $w$. We choose a square root $\sigma(x) \in \mathbb{R}^{n \times n}$ of the matrix $2(a_{ij}(x))$; that is,
$$\tfrac{1}{2}\,\sigma(x)\,\sigma^T(x) = (a_{ij}(x)). \quad (5.11)$$
Next, for $b = (b_i(x))$, let $X_t$ be an Itô diffusion solving
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t, \quad (5.12)$$
where $B_t$ is $n$-dimensional Brownian motion. Then,
$$w(x) = \mathbb{E}^x[\phi(X_\tau)] + \mathbb{E}^x\!\left[ \int_0^\tau g(X_u)\,du \right], \quad \text{for } x \in D, \quad (5.13)$$
is a solution of (5.10). Here, the expected values are over paths starting from $x \in D$, and $\tau$ is the first exit time from $D$.

Let us transfer the above discussion to, say, a compact manifold $M$, rather than $\mathbb{R}^n$. We sample $M$ and let the "states" $s_k$ be the sample points. We construct a transition matrix $A$ to give a discretized version of (5.12). Let $\epsilon > 0$ be the approximate separation between the sample points. Fix a sample point $s_j$, and let $D = M \setminus \overline{B_\epsilon(s_j)}$ be the domain in $M$ consisting of the complement of the closure of the ball of radius $\epsilon$ in $M$, centered at $s_j$. Let $s_i$ be a sample point in $D$. For this situation, in (5.10), let $\phi$ be the function 0 and $g$ the constant function 1. Then (5.13) becomes
$$w(s_i) = \mathbb{E}^{s_i}\!\left[ \int_0^\tau du \right], \quad (5.14)$$
where $\tau$ is the first exit time from $D$, that is, the first visit time to the $\epsilon$-neighborhood of $s_j$. (Compare with Proposition 8B in , which discusses the case of the Dirichlet-Poisson problem (5.10) with $\phi = 0$ and $g = 1$ for a manifold.) As shown in  (with slightly different notation), a discrete version of (5.14) is
$$\left( e_i^T \left[ I - A(I - P_j) \right]^{-1} (\tilde{c}_{pq}\,a_{pq})\,\mathbf{1} \right) \Delta t, \quad (5.15)$$
where $\tilde{c}_{pj} = 0$ for all $p$, and $\tilde{c}_{pq} = 1$ for all $q \ne j$ and all $p$. Thus, $(\tilde{c}_{pq}\,a_{pq}) = A(I - P_j)$. Combining (5.15) and (5.8), we see that
$$\mathbb{E}(Y_{ij})\,\Delta t - \Delta t = \left( e_i^T \left[ I - A(I - P_j) \right]^{-1} A(I - P_j)\,\mathbf{1} \right) \Delta t \approx w(s_i), \quad (5.16)$$
for $\Delta t$ small.

We thus see a connection between the (normalized) mean first passage time and the solution to the Dirichlet-Poisson problem discussed above.

Let us illustrate the preceding discussion by a simple example: $M = S^1$, the unit circle. We will consider $d^2/d\theta^2$, the Laplacian on $S^1$, and sample $S^1$ uniformly. We will let the transition matrix $A$ take the walker from the current state to each of the two immediate neighbor states, with probability $1/2$ for each. The variance is then $(\Delta\theta)^2$. Since $dX_t = \sqrt{2}\,dB_t$, see (5.12), we must have $(\Delta\theta)^2 = 2\,\Delta t$, and we should use $(\Delta\theta)^2/2$ as our value of $\Delta t$ in (5.16). Using symmetry, we can take $s_j = 0$, the $0$ angle on $S^1$, without loss of generality. Let
$$w_{\epsilon}(\theta) = \frac{1}{2}(\theta - \epsilon)\bigl((2\pi - \theta) - \epsilon\bigr), \quad \epsilon \le \theta \le 2\pi - \epsilon. \tag{5.17}$$
Note that
$$\frac{d^2}{d\theta^2}\,w_{\epsilon}(\theta) = -1, \quad \epsilon < \theta < 2\pi - \epsilon, \qquad w_{\epsilon}(\epsilon) = w_{\epsilon}(2\pi - \epsilon) = 0.$$
So $w_{\epsilon}$ is the unique solution satisfying (5.10) for our example, on the domain $S^1 \setminus [-\epsilon, \epsilon]$.
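The two properties of $w_{\epsilon}$ just stated can be checked numerically: a centered second difference of a quadratic reproduces its second derivative exactly (up to roundoff), and both boundary values vanish. A small sketch of our own:

```python
import numpy as np

eps = 0.01
theta = np.linspace(eps, 2 * np.pi - eps, 2001)
# w_eps from (5.17)
w = 0.5 * (theta - eps) * ((2 * np.pi - theta) - eps)

h = theta[1] - theta[0]
# centered second difference at the interior grid points
second = (w[:-2] - 2 * w[1:-1] + w[2:]) / h**2

print(np.max(np.abs(second + 1.0)))  # ~ roundoff: w_eps'' = -1
print(w[0], w[-1])                   # boundary values: both ~ 0
```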

To numerically confirm (5.16), we ran numerical experiments in which we discretized $S^1$ into $N$ equispaced points, with the transition matrix $A$ taking a state to each of its two immediate neighbors with probability $1/2$, and used $(\Delta\theta)^2/2$ as the value of $\Delta t$ in (5.16) to calculate $\mathbb{E}(Y_{ij})\,\Delta t - \Delta t$. We took $s_j$ to be the angle $0$, and $s_i$ to be the closest sample point to the angle with radian measure $1$, for example. Letting $\epsilon \to 0$ in (5.17), we compared the value of $\mathbb{E}(Y_{ij})\,\Delta t - \Delta t$ with $w(\theta) = \frac{1}{2}\theta(2\pi - \theta)$. For instance, with $N = 1000$, $s_j = 0$, and $s_i$ the nearest sample point to the angle with radian measure $1$, the relative error is less than $0.08\%$. Note that $w(\theta)$ is, for $\theta$ close to $0$, essentially a scaled geodesic distance on $S^1$ (from our base angle $0$).
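This experiment can be reproduced with a short script. The sketch below is our own (it evaluates the matrix expression in (5.16) by a linear solve rather than an explicit inverse), using $N = 1000$, target angle $0$, and the source point nearest to $1$ radian, as in the text.

```python
import numpy as np

N = 1000
dtheta = 2 * np.pi / N
dt = dtheta**2 / 2            # (Delta theta)^2 = 2 Delta t

# Nearest-neighbour walk on N equispaced points of S^1: prob. 1/2 each side.
A = np.zeros((N, N))
for k in range(N):
    A[k, (k - 1) % N] = 0.5
    A[k, (k + 1) % N] = 0.5

j = 0                             # target state s_j: angle 0
i = int(round(1.0 / dtheta))      # sample point nearest to 1 radian
P_j = np.zeros((N, N)); P_j[j, j] = 1.0
B = A @ (np.eye(N) - P_j)
v = np.linalg.solve(np.eye(N) - B, B @ np.ones(N))
lhs = v[i] * dt                   # E(Y_ij) Delta t - Delta t, as in (5.16)

theta = i * dtheta
w = 0.5 * theta * (2 * np.pi - theta)   # limit of w_eps as eps -> 0
rel_err = abs(lhs - w) / w
print(rel_err)   # well under the 0.08% reported in the text
```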

6. Conclusions

We have presented a diffusion distance which uses L2 unit vectors, and which is based on the well-known cosine similarity, and have discussed why the normalization may make diffusion distances more meaningful. We also gave two explicit proofs of the triangle inequality for mean first passage cost, and exhibited a connection between the (normalized) mean first passage time and the discretized solution of a certain Dirichlet-Poisson problem.

Acknowledgments

We thank Raphy Coifman for his continuous generosity in sharing his enthusiasm for mathematics and his ideas about diffusion and other topics. We would also like to thank the anonymous reviewer for his/her thorough critique of this paper and many helpful suggestions.

References

1. R. R. Coifman and S. Lafon, "Diffusion maps," Applied and Computational Harmonic Analysis, vol. 21, no. 1, pp. 5–30, 2006. doi:10.1016/j.acha.2006.04.006
2. S. Lafon, Diffusion Maps and Geometric Harmonics, Ph.D. thesis, Yale University, 2004.
3. R. R. Coifman and M. Maggioni, "Diffusion wavelets," Applied and Computational Harmonic Analysis, vol. 21, no. 1, pp. 53–94, 2006. doi:10.1016/j.acha.2006.04.004
4. B. Nadler, S. Lafon, R. R. Coifman, and I. G. Kevrekidis, "Diffusion maps, spectral clustering and reaction coordinates of dynamical systems," Applied and Computational Harmonic Analysis, vol. 21, no. 1, pp. 113–127, 2006. doi:10.1016/j.acha.2005.07.004
5. M. Brand and K. Huang, "A unifying theorem for spectral embedding and clustering," in Proceedings of the 9th International Workshop on Artificial Intelligence and Statistics (C. M. Bishop and B. J. Frey, eds.), Key West, Fla, USA, 2003.
6. Y. Gong and W. Xu, Machine Learning for Multimedia Content Analysis, Springer, New York, NY, USA, 2007.
7. T. Hofmann, B. Schölkopf, and A. J. Smola, "Kernel methods in machine learning," The Annals of Statistics, vol. 36, no. 3, pp. 1171–1220, 2008. doi:10.1214/009053607000000677
8. J. J. Hunter, "Stationary distributions and mean first passage times of perturbed Markov chains," Research Letters in the Information and Mathematical Sciences, vol. 3, pp. 85–98, 2002.
9. I. Chavel, Eigenvalues in Riemannian Geometry, vol. 115 of Pure and Applied Mathematics, Academic Press, Orlando, Fla, USA, 1984.
10. F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens, "Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 3, pp. 355–369, 2007. doi:10.1109/TKDE.2007.46
11. M. Saerens, F. Fouss, L. Yen, and P. Dupont, "The principal components analysis of a graph, and its relationships to spectral clustering," in Machine Learning: ECML 2004, vol. 3201 of Lecture Notes in Artificial Intelligence, pp. 371–383, Springer, Berlin, Germany, 2004.
12. U. von Luxburg, "A tutorial on spectral clustering," Statistics and Computing, vol. 17, no. 4, pp. 395–416, 2007. doi:10.1007/s11222-007-9033-z
13. M. Goldberg and S. Kim, "Applications of some formulas for finite state Markov chains," Applied and Computational Harmonic Analysis, in press. doi:10.1016/j.acha.2010.02.004
14. J. J. Hunter, "On the moments of Markov renewal processes," Advances in Applied Probability, vol. 1, pp. 188–210, 1969. doi:10.2307/1426217
15. J. J. Hunter, "Generalized inverses and their application to applied probability problems," Linear Algebra and its Applications, vol. 45, pp. 157–198, 1982. doi:10.1016/0024-3795(82)90218-X
16. J. J. Hunter, "A survey of generalized inverses and their use in stochastic modelling," Research Letters in the Information and Mathematical Sciences, vol. 1, pp. 25–36, 2000.
17. J. J. Hunter, "Generalized inverses, stationary distributions and mean first passage times with applications to perturbed Markov chains," Research Letters in the Information and Mathematical Sciences, vol. 3, pp. 99–116, 2002.
18. J. G. Kemeny and J. L. Snell, Finite Markov Chains, The University Series in Undergraduate Mathematics, Van Nostrand, Princeton, NJ, USA, 1960.
19. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, Springer, Berlin, Germany, 6th edition, 2007.
20. D. Kulasiri and W. Verwoerd, Stochastic Dynamics: Modeling Solute Transport in Porous Media, vol. 44 of North-Holland Series in Applied Mathematics and Mechanics, Elsevier, Amsterdam, The Netherlands, 2002.
21. K. D. Elworthy, Stochastic Differential Equations on Manifolds, vol. 70 of London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge, UK, 1982.