A Note on the Distribution of Multivariate Brownian Extrema
International Journal of Stochastic Analysis

This paper presents a closed-form solution for the joint probability of the endpoints and minima of a multidimensional Wiener process for a family of correlation matrices. These are the only explicit expressions for this joint probability found in the literature. The analysis can be carried out only for special correlation structures, as it relies on the fundamental regions of irreducible spherical simplexes generated by reflections and on the link to the method of images. The joint distribution can be used in financial mathematics to obtain prices of credit- or market-related products in high dimension. The solution could be generalized to account for stochastic volatility and other stylized features of financial markets.


Introduction
The paper finds closed-form expressions for the joint density/distribution function of the endpoints and extrema of an n-dimensional Wiener process. This target function is a density function in terms of the endpoints of the Wiener process, while it is a distribution function in terms of the minima of the underlying processes. The results found in this paper can be applied to processes that can be derived from a Wiener process through suitable transformations, like log-normal processes and Ornstein-Uhlenbeck processes with suitable parameters. The problem of finding such a joint density/distribution has attracted research since the late nineteenth century; see, for example, [1]. Closed-form solutions are known for the cases of one and two dimensions, with either a minimum or a maximum or both; see [2,3]. The problem was recently solved in three dimensions, see [4], via the method of images; therefore the solution only applies to a restricted set of correlation values.
In this paper we generalize the results in [4] to any dimension n. The correlation structures for which a solution via the method of images can be found depend on the dimension n. In principle there could be as many as five different correlation structures (up to permutations) in dimension 4, or as few as three in dimensions 3, 5, and higher than 8. This is derived using the fundamental regions for irreducible groups generated by reflections, as presented in Table IV, page 297, of [5], together with the findings in [6,7].
The paper is organized as follows. Section 2 introduces the function of interest as the solution of a partial differential equation (PDE) with specific initial and boundary conditions, and then simplifies this PDE to a heat equation. Section 3 presents the main results, organized by dimension. Section 4 comments on applications as well as possible extensions. Section 5 concludes.

Preliminaries and Simplification to Heat Equation
Let (Ω, F, 𝔽, Q) be a filtered probability space with sample space Ω, sigma-algebra F, filtration 𝔽 = {F_t}_{t≥0}, and a probability measure Q on (Ω, F). Consider i = 1, ..., n, where n represents the dimension. The underlying process X(t) = (X_1(t), ..., X_n(t)) can be expressed through a stochastic differential equation (SDE) driven by a correlated n-dimensional Wiener process; in order to simplify the notation, we will assume X(0) = 0. Our main objective is to find the joint density/distribution function for the minima, denoted m_i(t) ≡ min_{0<s<t} X_i(s), and the endpoints of X(t); this function is defined below:

p(x, t, b) dx ≡ Q(X_1(t) ∈ dx_1, ..., X_n(t) ∈ dx_n, m_1(t) > b_1, ..., m_n(t) > b_n),  (2)

where we also require X_i(0) > b_i and therefore b_i < 0. In general X(t) could be seen as a function of a stochastic process S such that a transformation X = f(S) leads to a Wiener process. For example, S could be a log-normal process (f(⋅) = ln(⋅)); see [8] for the general type of transformations and processes allowed.
It is important to realize that, if (2) is known, then we could also derive the joint density/distribution of endpoints and maxima, and all cases in between. To see this, first define the maximum of a process by M_i(t) ≡ max_{0<s<t} X_i(s), and let c_i > 0, i = 1, ..., n; then note that, in the case with no drift, setting X_i* = -X_i gives M_i = -m_i* and b_i = -c_i, i = 1, ..., n, and therefore the correlation between X_i* and X_j satisfies Corr(X_i*, X_j) = -Corr(X_i, X_j).
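As a quick sanity check of this duality, the following sketch (illustrative only; the variable names are ours, not the paper's) simulates one Brownian path and verifies that the running maximum of X equals minus the running minimum of the reflected process X* = -X:

```python
# Illustrative sketch of the min/max duality for Brownian paths: if
# X*(t) = -X(t), then the maximum of X is minus the minimum of X*, so
# results for joint minima transfer to maxima with correlations negated.
import numpy as np

rng = np.random.default_rng(0)

def simulate_path(n_steps=1000, dt=1e-3):
    """One driftless Brownian path built from cumulative Gaussian increments."""
    increments = rng.normal(0.0, np.sqrt(dt), n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])

path = simulate_path()
starred = -path  # the reflected process X* = -X

# Duality: the running maximum of X is minus the running minimum of X*.
assert np.isclose(path.max(), -starred.min())
assert np.isclose(path.min(), -starred.max())
```

The identity holds pathwise, which is why it transfers joint distributions of minima into joint distributions of maxima exactly.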
The following boundary conditions are added to match the extrema:

p(x_1, ..., x_i = b_i, ..., x_n, t, b) = 0,  i = 1, ..., n.  (6)

The meaning of the above constraints, in terms of the Wiener process, is clear: as soon as any component X_i equals its barrier b_i, we constrain the function p(⋅) to equal 0. Now, noticing the initial condition X(0) > b and the continuity of the Wiener paths, we conclude that the constraints (6) are equivalent to enforcing that each path of X, and therefore m(t), remains above the barrier levels before and up to time t. From the analytical point of view, that is, looking at the density p(x, t, b) as a function of x, the constraint (6) enforces that the support of the density lies inside the set Ω_b = {x ∈ R^n : x_i ≥ b_i}. Equations (4), (5), and (6) will be referred to as the PDE problem associated with the density (2). The function p(⋅) will be obtained for a set of special correlations by solving (4), (5), and (6) by means of the method of images; the reader is referred to [6,7] for accounts of the method of images.
To obtain the solution of the system (4), (5), and (6), we need to simplify (4) to a heat equation. The simplification is performed in two steps: first, we use conventional methods to transform the probability function into one associated with processes without drift, with unit volatility, and with barriers at zero; then, in a second step, we use a Cholesky transformation to generate a new set of independent, hence uncorrelated, processes, leading to a heat equation with new boundary conditions. Let us define the transformed function, where Σ_i is the covariance matrix Σ with column i replaced by the column vector b. It follows that the transformed density satisfies the boundary conditions

p(y_1, ..., y_i = b_i, ..., y_n, t) = 0,  i = 1, ..., n,  (11)

and a Dirac delta initial condition. We continue to simplify the above PDE by eliminating the parameters μ_i and σ_i, i = 1, ..., n. Consider the following change of variables and transformation, leading to a driftless, unit-volatility equation with the corresponding boundary and initial conditions; here y_{i0} = -b_i/σ_i, i = 1, ..., n. The last step eliminates the correlations in the matrix ρ; for this we perform a Cholesky decomposition.
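The decorrelation step can be illustrated numerically. The sketch below is an assumption-laden illustration, not the paper's code; R plays the role of the correlation matrix ρ. It builds correlated Brownian increments from a Cholesky factor L with R = L Lᵀ and then undoes the correlation by solving L Y = W:

```python
# Minimal sketch of the Cholesky decorrelation step: given increments of a
# correlated Brownian motion with correlation matrix R, the factor L
# (R = L L^T) turns them back into independent components Z = L^{-1} W,
# which is what reduces the PDE to a heat equation.
import numpy as np

rng = np.random.default_rng(1)
R = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
L = np.linalg.cholesky(R)  # lower triangular, R = L @ L.T

# Correlated standardized increments: W = L @ Z with Z standard normal.
Z = rng.standard_normal((3, 100_000))
W = L @ Z

# Undo the correlation: solving L Y = W recovers (numerically) independent rows.
Y = np.linalg.solve(L, W)
sample_corr = np.corrcoef(Y)
assert np.allclose(sample_corr, np.eye(3), atol=0.02)
```

The same linear map sends the barrier hyperplanes {x_i = 0} to new hyperplanes through the origin, whose normal vectors carry the correlation information used below.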
Proof. This follows directly from noticing that the processes associated with Z are uncorrelated and have zero drift.
Since the variables Z_i are independent, (16), (18), and (17) represent the Fokker-Planck equation of an uncorrelated Brownian motion with zero drift, constrained to a region bounded by the hyperplanes P_i. Moreover, the vector a_i represents the unit normal vector to the hyperplane P_i, i = 1, ..., n.

Solving the Heat Equation
The method of images (MofI) is now utilized to solve the system (16), (18), and (17). The essence of the method of images is to replace the boundary conditions by a set of fictitious source points. The solution of the original equation satisfying the given boundary conditions then reduces to finding the solution, without boundary conditions, generated by the source points. In the case of linear differential equations, the process of obtaining the final solution divides into three distinct steps (see [7]): (1) checking that the differential equation is suitable for MofI and then solving it for a point source in an infinite medium, with no boundary conditions except good behavior at infinity; (2) checking that the region of interest is suitable for MofI and then finding the set of image sources induced by the reflecting boundaries; (3) summing the solution from step (1) over the set of images obtained in step (2).
In general, step (1) is the simplest one. The Laplace and heat equations are well known to fit into MofI (see, e.g., [6,7]), and the solution in step (1) is usually known in closed form. In our case, consider the heat equation, without boundary conditions, for an arbitrary initial point x_0 = (x_{10}, ..., x_{n0}); by means of an application of the Fourier transform, its solution can be expressed as an n-dimensional Gaussian kernel centered at x_0 with covariance matrix tI. We will show next that the regions where MofI applies are connected to correlation matrices; therefore feasible regions imply feasible correlations. For the issue of whether the bounded region allows for the method of images, we rely on [6]. Note that the hyperplanes pass through zero, so they cut out, on the surface of an n-dimensional sphere centered at the vertex 0, a convex polygonal domain also known as a spherical simplex. These spherical simplexes divide the sphere into symmetric parts, and the method of images applies if and only if the simplexes are irreducible fundamental regions generated by reflections (see [5,6]).
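For concreteness, the free-space solution of step (1) can be written down directly. The following sketch (helper names are ours) evaluates the n-dimensional heat kernel for the equation p_t = (1/2)Δp and checks numerically, in one dimension, that it integrates to 1:

```python
import numpy as np

def heat_kernel(x, x0, t):
    """Free-space fundamental solution of p_t = (1/2) * Laplacian(p):
    an n-dimensional Gaussian density with mean x0 and covariance t * I."""
    x, x0 = np.atleast_1d(x), np.atleast_1d(x0)
    n = x.shape[-1]
    sq = np.sum((x - x0) ** 2, axis=-1)
    return (2 * np.pi * t) ** (-n / 2) * np.exp(-sq / (2 * t))

# Sanity check in one dimension: the kernel integrates to 1 over the line.
grid = np.linspace(-10, 10, 4001).reshape(-1, 1)
dx = grid[1, 0] - grid[0, 0]
mass = heat_kernel(grid, np.array([0.3]), t=0.7).sum() * dx
assert abs(mass - 1.0) < 1e-6
```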
We next relate these regions, in particular the dihedral angles between the hyperplanes, with the correlation matrix of the multivariate process X(t). Let us denote by θ_ij the dihedral angle between hyperplanes P_i and P_j, with Θ representing the matrix of angles. The next proposition relates these angles to the correlation matrix ρ.
Proof. The dihedral angle between hyperplanes P_i and P_j can be obtained from the normal vectors as

cos(π - θ_ij) = a_i ⋅ a_j.  (21)

Here we use the fact that ‖a_i‖ = 1 for all i. In matrix form, with A denoting the matrix whose rows are the normal vectors a_i, we have -cos(Θ) = A Aᵀ = ρ.
Next we use the results in the seminal work [5], which provides a complete list of fundamental regions for irreducible groups generated by reflections in the case of spherical simplexes (Table IV, page 297). We use Coxeter's notation for the irreducible groups of spherical simplexes in dimension n: A_n, B_n, C_n, D_n, E_n, F_4, and G_n. In particular we extract the feasible dihedral angles for each of these spherical simplexes and the dimensions where they apply. These results are given next. Proof. This follows from the dihedral angles provided in Table IV, page 297, in [5], together with Proposition 2. In this table, the regions are represented by graphs. Every node represents a bounding hyperplane, and the branches indicate pairs of hyperplanes inclined at angles π/k, k > 2; if a value of k is not given, then k = 3 is understood. Perpendicular hyperplanes are represented by nodes not joined by a branch. Note that all hyperplanes must intersect; therefore most dihedral angles are π/2. The cases obtained in this proposition follow easily from reading out these graphs.
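As a small worked example of reading such a graph (the symbols here are illustrative), the three-node graph with two branches has consecutive hyperplanes at dihedral angle π/3 and non-adjacent ones at π/2; converting angles to correlations via ρ_ij = -cos(θ_ij) yields a valid (positive semidefinite) correlation matrix:

```python
# Hedged example: converting a matrix of dihedral angles, as read off a
# Coxeter graph, into the implied correlation matrix via rho_ij = -cos(theta_ij).
# Here: consecutive hyperplanes at pi/3, non-adjacent ones at pi/2.
import numpy as np

theta = np.full((3, 3), np.pi / 2)
theta[0, 1] = theta[1, 0] = np.pi / 3
theta[1, 2] = theta[2, 1] = np.pi / 3

rho = -np.cos(theta)
np.fill_diagonal(rho, 1.0)

# Adjacent pairs imply correlation -1/2; a valid correlation matrix must be
# positive semidefinite (all eigenvalues nonnegative).
assert np.isclose(rho[0, 1], -0.5)
assert np.all(np.linalg.eigvalsh(rho) > -1e-12)
```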
In particular, for dimensions 2 and 3, the number of feasible cases and source points reproduces those found in [2,4], respectively. The solution of (16), (18), and (17) can then be found as a sum of the free-space kernels over the image sources; note that (28) is basically a linear combination of n-dimensional Gaussian density functions with mean zero and covariance matrix tI. Substituting (28) into (16) and further into (9) leads to the targeted joint density/distribution in (2), and this is again a linear combination of n-dimensional Gaussian densities, but now with nonzero means (depending on the image sources) and covariance matrix tΣ, as in [4].
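The structure of the image sum is easiest to see in one dimension, where a single absorbing barrier at 0 requires exactly one mirror source. The sketch below (names are ours, not the paper's notation) checks the resulting density against the classical survival probability erf(x_0/√(2t)):

```python
import math
import numpy as np

def kernel(x, t):
    """One-dimensional heat kernel (Gaussian density with variance t)."""
    return np.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)

def density_above_barrier(x, x0, t):
    """Density of Brownian motion started at x0 > 0 and killed at 0:
    the free kernel minus a single mirror-image source at -x0."""
    return kernel(x - x0, t) - kernel(x + x0, t)

# Integrating over x > 0 recovers the classical survival probability
# P(min over [0, t] > 0) = erf(x0 / sqrt(2 t)).
x0, t = 1.0, 0.5
xs = np.linspace(1e-6, 12.0, 200_001)
mass = density_above_barrier(xs, x0, t).sum() * (xs[1] - xs[0])
assert abs(mass - math.erf(x0 / math.sqrt(2 * t))) < 1e-5
```

In higher dimensions each image source contributes one Gaussian term with a sign determined by the parity of the number of reflections, exactly as in the alternating sum above.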
3.1. Finding Source Points. In this section we describe a quasi-analytical procedure to find all source points associated with a spherical simplex, and therefore with a correlation matrix.
Let us denote:
(i) a_i: unit normal vector to hyperplane P_i, with i = 1, ..., n;
(ii) x_0: initial point inside the region;
(iii) S: number of source points;
(iv) a(k_1, ..., k_r): normal vector to a hyperplane created after r consecutive reflections, each across the hyperplane associated with the next normal vector in the sequence.
Two operations generate the images:
(1) New source point: reflecting a point x_0 across a hyperplane passing through zero with unit normal vector a leads to a new point x_0' = x_0 - 2(x_0 ⋅ a)a.
(2) New hyperplane: reflecting a hyperplane with unit normal vector b across a hyperplane with unit normal vector a leads to a new hyperplane with normal vector c = b - 2(a ⋅ b)a. Note that c lies in the plane spanned by a and b, since the reflection v ↦ v - 2(a ⋅ v)a preserves that plane; the same map preserves norms, so ‖c‖ = 1.
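These two reflection primitives can be sketched as follows (unit normals through the origin are assumed; the function names are ours, not the paper's):

```python
import numpy as np

def reflect_point(x, a):
    """Mirror point x across the hyperplane through 0 with unit normal a."""
    return x - 2.0 * np.dot(x, a) * a

def reflect_normal(b, a):
    """Normal of the hyperplane obtained by reflecting the one with normal b
    across the hyperplane with unit normal a (same formula, applied to b)."""
    return b - 2.0 * np.dot(a, b) * a

a = np.array([1.0, 0.0])
b = np.array([np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)])  # 120 degrees apart

# Reflections are isometries: lengths and the unit norm of normals survive.
x = np.array([0.4, 1.3])
assert np.isclose(np.linalg.norm(reflect_point(x, a)), np.linalg.norm(x))
assert np.isclose(np.linalg.norm(reflect_normal(b, a)), 1.0)
# Reflecting twice across the same hyperplane is the identity.
assert np.allclose(reflect_point(reflect_point(x, a), a), x)
```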
The algorithm is based on performing reflections across all original hyperplanes and then repeating the procedure for all newly created hyperplanes. This method can produce repeated values; therefore we must also check for duplicates (hyperplanes or source points). The method stops after the known number of distinct sources has been detected, although in principle a few iterations should already yield a good approximation.
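A minimal version of this procedure, under our own naming and with rounding-based de-duplication, is sketched below for two hyperplanes in the plane meeting at angle π/m; the closed orbit should then contain 2m points, the original plus 2m - 1 image sources, since the generated dihedral group has order 2m:

```python
# Sketch of the enumeration: starting from x0, repeatedly reflect across all
# current hyperplanes, de-duplicating points and normals, until the orbit
# stops growing.
import numpy as np
from itertools import product

def reflect(v, a):
    """Reflect vector v across the hyperplane through 0 with unit normal a."""
    return v - 2.0 * np.dot(v, a) * a

def source_points(x0, normals, max_iter=50):
    """Enumerate the orbit of x0 under repeated reflections, de-duplicating
    both points and hyperplane normals by rounded coordinates."""
    def key(v):
        return tuple(np.round(v, 8))
    points = {key(x0): np.asarray(x0, dtype=float)}
    planes = {key(a): np.asarray(a, dtype=float) for a in normals}
    for _ in range(max_iter):
        new_pts = {key(reflect(p, a)): reflect(p, a)
                   for p, a in product(list(points.values()), list(planes.values()))}
        new_pls = {key(reflect(b, a)): reflect(b, a)
                   for b, a in product(list(planes.values()), list(planes.values()))}
        before = (len(points), len(planes))
        points.update(new_pts)
        planes.update(new_pls)
        if (len(points), len(planes)) == before:  # orbit closed
            break
    return list(points.values())

m = 3
normals = [np.array([0.0, 1.0]),
           np.array([np.sin(np.pi / m), -np.cos(np.pi / m)])]  # lines at angle pi/m
orbit = source_points(np.array([0.3, 0.9]), normals)
assert len(orbit) == 2 * m  # original point plus 2m - 1 image sources
```

Duplicates are detected by rounding coordinates, which suffices because distinct orbit points of a generic starting point are well separated.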

A Comment on Applications
One of the key application fields for our findings is multidimensional financial derivatives (see [10] or [11] for an introduction to financial markets and their problems). Market derivatives are contracts payable at a future time, called maturity, that derive their value from the performance of potentially several underlying tradeable stocks. There exists an extensive family of financial derivatives in the market which depend on the first passage time (through barriers) of the underlying stock price processes. In general, as pointed out by [12], adding barriers is a convenient method for reducing the cost of a derivative.
Two main families of such products are double lookback options (see [2]), in particular double barrier options (see [13]), and mountain range derivatives (see [14]). The latter are high dimensional (n ≥ 3) and were created by Société Générale in 1998. Examples of these products are Altiplano, Annapurna, and Atlas. The payoff, at maturity T, of an Annapurna is an indicator of the form 1{m_i(T) > log K_i, i = 1, ..., n}. Here X_i stands for the log stock price, while the K_i are prespecified strike prices. Therefore the product pays a dollar at time T if and only if all stocks remain above the given strike prices (K_i, i = 1, ..., n) during the relevant period of the product, that is, (0, T]. The price of such a product, V(0), is the expected value of this payoff; therefore the price is related to our function p in the following manner:

V(0) = ∫_{Ω_K} p(x, T, b) dx,  with b_i = log K_i,

where Ω_K = {x | x_i > log K_i, i = 1, ..., n}.
In the past this expression could be evaluated either via Monte Carlo simulation or by directly solving the PDE, both highly time-consuming and inaccurate approaches in dimensions higher than 3. Under the feasible correlations described in Proposition 3, this price can now be found as a linear combination of n-dimensional Gaussian cumulative distribution functions.
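For comparison purposes, such a price can always be estimated by brute force. The following Monte Carlo sketch (all names and parameter values are hypothetical; driftless log prices and zero rates are assumed for simplicity) estimates the Annapurna-style digital probability that every component stays above its barrier:

```python
# Hedged Monte Carlo sketch of the Annapurna-style digital payoff: it pays 1
# at maturity T iff every log price stays above its log-strike over (0, T].
# This is the benchmark the closed-form Gaussian-combination price can be
# validated against.
import numpy as np

rng = np.random.default_rng(42)

def annapurna_digital_mc(x0, barriers, corr, T=1.0, n_steps=500, n_paths=20_000):
    """Estimate P(min over (0,T] of X_i > barrier_i for all i) for a driftless
    correlated Brownian log-price vector started at x0."""
    n = len(x0)
    L = np.linalg.cholesky(corr)
    dt = T / n_steps
    paths = np.full((n_paths, n), x0, dtype=float)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        dW = rng.standard_normal((n_paths, n)) @ L.T * np.sqrt(dt)
        paths += dW
        alive &= np.all(paths > barriers, axis=1)  # discretized running minimum
    return alive.mean()

corr = np.array([[1.0, -0.5], [-0.5, 1.0]])  # a feasible correlation value
price = annapurna_digital_mc(x0=[0.5, 0.5], barriers=[0.0, 0.0], corr=corr)
assert 0.0 < price < 1.0
```

Note that checking the barrier only on a time grid slightly overestimates survival; the closed-form expression has no such discretization bias.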
Another family of products benefiting from this work is credit derivatives, in particular collateralized debt obligations (CDOs) and n-th to default products (see [15]). These products were at the very heart of the financial crisis in 2008. A CDO has a payoff similar to that of a market product under proper default assumptions. For instance, if a default is assumed to be triggered by a company's assets crossing its constant debt level at any time prior to the maturity of the company's debt, as proposed in the seminal work of [16], then the key element in the price of a CDO is the joint default behavior of the prespecified companies, which can be expressed through the joint survival probability obtained by integrating p over Ω_D = {x | x_i > log D_i, i = 1, ..., n}, where A_i stands for the asset value of company i, D_i represents its constant debt, and X_i = log A_i. This type of product was highly mispriced due, in part, to the mathematical complexity of handling multiple dimensions and first passage times. Simpler approaches, like those assuming default only at maturity (avoiding first passage time), or noneconomical approaches, like modelling default as an exogenously given process, were more prone to oversight.

Conclusions and Possible Generalizations
This paper describes the correlation matrices for which a closed-form solution for the joint density/distribution of the endpoints and the minimum of a Wiener process can be found. The results are also applicable to other processes, like log-normal processes. The general solution requires a detailed geometrical analysis of certain partitions of the n-dimensional sphere; therefore the technique can only provide closed-form solutions for a specific set of correlations. The resulting densities can be used in several applications, in particular to obtain analytical expressions for prices of financial products, thereby also serving to validate the accuracy of numerical simulations.
The method developed in the present paper could also be extended to allow for stochastic volatility and random correlation. It can be applied to maxima and minima combined, as long as one extreme per dimension is considered. Finally, the solution could serve as the basis for further approximations, like those based on perturbation theory (see [17]), which currently work under the assumption of independence among the underlying processes.