On a Model for the Storage of Files on a Hardware: Statistics at a Fixed Time and Asymptotic Regimes

We consider a continuous-time version of the parking problem of Knuth. Files arrive according to a Poisson point process and are stored on a hardware identified with the real line, in the closest free portions to the right of their arrival locations. We specify the distribution of the set of occupied locations at a fixed time and give asymptotic regimes as the hardware becomes full.


Introduction
We consider a generalized version in continuous time of the original parking problem of Knuth. Knuth was interested in the storage of data on a hardware represented by a circle with n spots. Files arrive successively at locations chosen uniformly at random and independently among these n spots. They are stored in the first free spot to the right of their arrival point (at their arrival point if it is free). Initially Knuth worked on the hashing of data (see e.g. [10,12,13]): he studied the distance between the spots where the files arrive and the spots where they are stored. Later, Chassaing and Louchard [9] described the evolution of the largest block of data in such coverings as n tends to infinity. They observed a phase transition at the stage where the hardware is almost full, which is related to the additive coalescent. Bertoin and Miermont [6] extended these results to files of random sizes which arrive uniformly on the circle. We consider here a continuous-time version of this model where the hardware is large and now identified with the real line. A file labelled i, of length (or size) l_i, arrives at time t_i ≥ 0 at location x_i ∈ R. The storage of this file uses the free portion of the real line of size l_i to the right of x_i, as close to x_i as possible (see Figure 1). That is, it covers [x_i, x_i + l_i[ if this interval is free at time t_i. Otherwise the file may be split into several parts, which are then stored in the closest free portions to the right of the arrival location. We require that the arrival locations be uniform and the sizes identically distributed, and we model the arrival of files by a Poisson point process (PPP): {(t_i, x_i, l_i) : i ∈ N} is a PPP with intensity dt ⊗ dx ⊗ ν(dl) on R_+ × R × R_+. We denote m := ∫_0^∞ l ν(dl) and assume m < ∞. Thus m is the mean of the sum of the sizes of the files which arrive during a unit time interval on an interval of unit length.
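For concreteness, the storage rule can be sketched in code. This is an illustration only: the helper `store` and its interval bookkeeping are our own choices, not notation from the model. Each file is placed in the closest free portions to the right of its arrival point, split over successive free gaps when necessary.

```python
def store(blocks, x, l, eps=1e-9):
    """Store a file of size l arriving at location x into the free space
    at the right of x. `blocks` is a sorted list of disjoint occupied
    intervals [a, b); the file is split over successive free gaps."""
    if l <= eps:
        return blocks
    while l > eps:
        # first free point at or to the right of x
        p = x
        for a, b in blocks:
            if a <= p < b:
                p = b                      # p is occupied: jump past the block
        # store as much as fits in the free gap starting at p
        nxt = min((a for a, b in blocks if a > p), default=float("inf"))
        used = min(l, nxt - p)
        blocks.append([p, p + used])
        blocks.sort()
        l -= used
        x = p + used
    # merge touching occupied intervals into maximal data blocks
    merged = [blocks[0][:]]
    for a, b in blocks[1:]:
        if a <= merged[-1][1] + eps:
            merged[-1][1] = max(merged[-1][1], b)
        else:
            merged.append([a, b])
    blocks[:] = merged
    return blocks

blocks = []
store(blocks, 0.0, 1.0)   # covers [0, 1)
store(blocks, 0.5, 1.0)   # arrival point occupied: stored in [1, 2)
store(blocks, 3.0, 0.5)   # covers [3, 3.5)
store(blocks, 1.5, 1.0)   # stored in the free gap [2, 3), merging everything
```

With unit-size files this is the discrete rule of Knuth's parking problem transplanted to the line.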
We begin by constructing this random covering (Section 2.1). The first questions which arise, and which are treated here, concern statistics at a fixed time for the set of occupied locations. What is the distribution of the covering at a fixed time? At what time does the hardware become full? What are the asymptotics of the covering at this saturation time? What is the length of the largest block on a part of the hardware?
It is quite easy to see that the hardware becomes full at a deterministic time equal to 1/m. In Section 3.1, we give some geometric properties of the covering and characterize its distribution at a fixed time by giving the joint distribution of the block of data straddling 0 and the free spaces on both sides of this block. The results of this section will be useful for the dynamics of the covering considered in [2], where we investigate the evolution in time of a typical data block. Moreover, using this characterization, we determine the asymptotic regimes at the saturation time, which depend on the tail of ν, as in [6,8,9]. More precisely, we give the asymptotics of C(t) as t tends to 1/m (Theorem 2) and of C(t) restricted to [0, x] as x tends to infinity and t tends to 1/m (Theorem 3). We then derive the asymptotics of the largest block of the hardware restricted to [0, x] as x tends to infinity and t tends to 1/m. As expected, we recover the phase transition observed by Chassaing and Louchard in [9]. Results are stated in Section 3 and proved in Section 4.
It is easy to check that for each fixed time t, C(t) does not depend on the order of arrival of the files before time t. Thus, if ν is finite, we can view the files which arrive before time t as customers: the size l of a file becomes the service time of the customer and the location x where the file arrives becomes the arrival time of the customer. We are then in the framework of the M/G/1 queue in the stationary regime, and the covering C(t) becomes the union of the busy periods (see e.g. Chap. 3 in [11] or [24]). Thus, the results of Sections 3.1 and 3.3 for finite ν follow easily from known results on M/G/1. When ν is infinite, the results are similar even though the busy cycle is not defined. The approach is then different, and proving asymptotics on random sets requires results about Lévy processes and regenerative sets. Moreover, as far as we know, the longest busy period, and more generally the asymptotic regimes on [0, x] as x tends to infinity and t tends to the saturation time (Section 3.4), have not been considered in queueing models.

Preliminaries
Throughout this paper, we use the classical notation δ_x for the Dirac mass at x and N = {1, 2, ...}. If R is a measurable subset of R, we denote by |R| its Lebesgue measure and by R^cl its closure. For every x ∈ R, we denote by R − x the set {y − x : y ∈ R}, and we write g_x(R) := sup{y ≤ x : y ∈ R} and d_x(R) := inf{y ≥ x : y ∈ R}. By convention, sup ∅ = −∞ and inf ∅ = ∞.
If I is a closed interval of R, we denote by H(I) the space of closed subsets of I. For every x ∈ R and A ⊂ R, we define d(x, A) := inf{|x − y| : y ∈ A}, and we endow H(I) with the Hausdorff distance d_H, defined for all A, B ∈ H(I) by

d_H(A, B) := max( sup_{a∈A} d(a, B), sup_{b∈B} d(b, A) ).

The topology induced by this distance is the topology of Matheron [20]: it is also the topology induced by the Hausdorff metric on a compact set using arctan(R ∪ {−∞, ∞}), or the Skorokhod metric using the class of 'descending saw-tooth functions' (see [20] and [14] for details).
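As a quick illustration of the Hausdorff distance, here is a minimal sketch for finite subsets of R (the helper `hausdorff` is our own, not notation from the text):

```python
def hausdorff(A, B):
    # distance from a point x to a finite nonempty set S
    d = lambda x, S: min(abs(x - y) for y in S)
    # Hausdorff distance: the largest distance from a point of either set
    # to the other set
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))
```

For instance, {0, 1} and {0, 2} are at Hausdorff distance 1, although they share the point 0.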

Construction of the covering C(t)
First, we present a deterministic construction of the covering C associated with a given sequence of files labelled by i ∈ N. The file labelled by i has size l_i and arrives after the files labelled by j ≤ i − 1, at location x_i on the real line. Files are stored following the process described in the Introduction, and C is the portion of the line which is used for the storage. We begin by constructing the covering C^(n) obtained by considering only the first n files, so that C is obtained as the increasing union of these coverings. A short thought (see Remark 1 below) shows that the covering C does not depend on the order of arrival of the files. This construction of C will then be applied to the construction of our random covering at a fixed time C(t), by considering the files arrived before time t. This construction and the results for finite ν are classical in queueing theory (see e.g. [11]) and storage systems (see e.g. [24]). Thus we do not give details here and refer to [1] for the complete proof.
We define C^(n) by induction. We set C^(0) := ∅ and introduce the complementary set R^(n) of C^(n) (i.e. the free space of the real line). Let

y_{n+1} := inf{ y ≥ x_{n+1} : |[x_{n+1}, y] ∩ R^(n)| = l_{n+1} },

so y_{n+1} is the right-most point which is used for storing the (n+1)-th file. Define then

C^(n+1) := C^(n) ∪ ( [x_{n+1}, y_{n+1}] ∩ R^(n) ).

Now we consider the quantity of data over x, R^(n)_x, that is, the quantity of data which we have tried to store at the location x (successfully or not) when n files are stored: these are the data fallen in [g_x(R^(n)), x] which could not be stored at the left of x. Note that in queueing systems, R^(n)_x is the workload. This quantity can be expressed using the function Y^(n), which sums the sizes of the files arrived at the left of a point x minus the drift term x. It is defined by Y^(n)_0 = 0 and

Y^(n)_b − Y^(n)_a = −(b − a) + ∑_{i ≤ n, a < x_i ≤ b} l_i   for a < b.

Figure 1. Arrival and storage of the 5th file and representation of Y^(5). The first four files have been stored without splitting and are represented by the black rectangles.
Introducing also its infimum function, defined for x ∈ R by I^(n)_x := inf{Y^(n)_y : y ≤ x}, we get the following expression: R^(n)_x = Y^(n)_x − I^(n)_x.
As a consequence, the covered set when the first n files are stored is given by

C^(n) = {x ∈ R : Y^(n)_x > I^(n)_x}.

We are now able to investigate the situation when n tends to infinity, under the following mild condition:

∀ L ≥ 0,  ∑_{x_i ∈ [−L, L]} l_i < ∞,

which means that the quantity of data arriving on a compact set is finite. Introduce the function Y defined on R by Y_0 = 0 and

Y_b − Y_a = −(b − a) + ∑_{a < x_i ≤ b} l_i   for a < b,

and its infimum I defined for x ∈ R by I_x := inf{Y_y : y ≤ x}. As expected, letting n → ∞ in (2) and (3), the covering C := ∪_{n∈N} C^(n) is given by (see Section 2.1 in [1] for the proof):

C = {x ∈ R : Y_x > I_x}.

Remark 1. This result ensures that the covering depends only on (Y_x − I_x)_{x∈R}. Thus it does not depend on the order of arrival of the files.
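The characterization of the covering through Y and its infimum can be checked numerically on a small example. The helper names below are ours, and we restrict to x ≥ 0 so that the infimum is taken over [0, x]; between arrivals Y decreases with slope −1, so its infimum is attained at 0, at x, or just before an arrival, and can be evaluated exactly.

```python
def Y(x, files):
    # Y_x = -x + sum of the sizes l_i of the files arrived in ]0, x]
    return -x + sum(l for xi, l in files if 0 < xi <= x)

def inf_Y(x, files):
    # infimum of Y over [0, x]: attained at 0, at x, or just before an arrival
    cands = [0.0, Y(x, files)]
    for xi, _ in files:
        if 0 < xi <= x:
            cands.append(-xi + sum(l for xj, l in files if 0 < xj < xi))
    return min(cands)

# two files (x_i, l_i): size 1 arriving at 1.0, size 0.5 arriving at 1.5
files = [(1.0, 1.0), (1.5, 0.5)]
grid = [i * 0.25 for i in range(13)]          # grid points of [0, 3]
covered = [x for x in grid if Y(x, files) > inf_Y(x, files)]
```

Direct storage gives the covering [1, 2.5[: the second file finds [1.5, 2[ occupied and is stored in [2, 2.5[, in agreement with the grid points where Y exceeds its infimum.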
Finally, we can construct the random covering associated with a PPP. As the order of arrival of the files has no importance, the random covering C(t) at time t described in the Introduction is obtained by the deterministic construction above, applied to the subfamily of files i which verify t_i ≤ t.
When files arrive according to a PPP, (Y_x)_{x≥0} is a Lévy process, and we now recall some results about Lévy processes and their fluctuations which will be useful in the rest of this work.

Background on Lévy processes
The results given in this section can be found in Chapters VI and VII of [5] (there, statements are made in terms of the dual process −Y). We recall that a Lévy process is a càdlàg process starting from 0 which has iid increments. A subordinator is an increasing Lévy process.
We consider in this section a Lévy process (X_x)_{x≥0} which has no negative jumps (a spectrally positive Lévy process). We denote by Ψ its Laplace exponent, which verifies for every ρ ≥ 0:

E(exp(−ρ X_x)) = exp(−x Ψ(ρ)),  x ≥ 0.

We stress that this is not the classical choice for the sign of the Laplace exponent of Lévy processes with no negative jumps and a negative drift, such as the process (Y_x)_{x≥0} introduced in the previous section. However, it is the classical choice for subordinators, which we will need. It is then convenient to use this same definition for all Lévy processes which appear in this text.
First, we consider the case when (X_x)_{x≥0} has bounded variation. That is,

∫_0^∞ (1 ∧ l) ν(dl) < ∞  and  X_x = dx + ∑_{0 < y ≤ x} ΔX_y,

where ΔX_y denotes the jump of X at y. We call ν the Lévy measure and d ∈ R the drift. Note that (X_x)_{x≥0} is a subordinator iff d ≥ 0.
Writing ν̄ for the tail of the measure ν, the Lévy–Khintchine formula gives

Ψ(ρ) = dρ + ∫_0^∞ (1 − e^{−ρl}) ν(dl) = dρ + ρ ∫_0^∞ e^{−ρl} ν̄(l) dl.

Second, we consider the case when Ψ has a right derivative at 0 with Ψ'(0) < 0, meaning that E(X_1) < 0. We then consider the infimum process I_x := inf{X_y : 0 ≤ y ≤ x}, which has continuous paths, and the first passage time defined for x ≥ 0 by

τ_x := inf{y ≥ 0 : X_y = −x}.

As −Ψ is strictly convex and −Ψ'(0) > 0, −Ψ is strictly positive on ]0, ∞]. We write κ : [0, ∞[ → R for the inverse function of −Ψ, and we have (see [5], Theorem 1 on page 189 and Corollary 3 on page 190):

Theorem 1. (τ_x)_{x≥0} is a subordinator with Laplace exponent κ.
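Theorem 1 can be sanity-checked numerically in a concrete case of our own choosing (not taken from the text): drift d = −1, intensity t = 0.5 and ν(dl) = e^{−l} dl, so that −Ψ(ρ) = ρ − tρ/(1+ρ). Below, κ is obtained by bisection and E(exp(−qτ_x)) is estimated by Monte Carlo; Theorem 1 predicts the value exp(−xκ(q)).

```python
import math, random

t = 0.5                                  # arrival intensity, so tm = 0.5 < 1

def minus_psi(rho):
    # -Psi(rho) = rho - t*rho/(1+rho) for drift -1 and Lévy measure t·e^{-l}dl
    return rho - t * rho / (1.0 + rho)

def kappa(q, lo=0.0, hi=1e3):
    # kappa = inverse of -Psi, by bisection (-Psi is increasing since tm < 1)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if minus_psi(mid) < q else (lo, mid)
    return 0.5 * (lo + hi)

def sample_tau(x, rng):
    # first passage of X_y = -y + compound Poisson jumps at level -x:
    # between jumps X drifts down at unit speed, so it hits -x continuously
    pos, time = 0.0, 0.0
    while True:
        w = rng.expovariate(t)           # distance to the next jump
        if pos - w <= -x:
            return time + (pos + x)      # level -x reached before the jump
        time += w
        pos += -w + rng.expovariate(1.0) # drift over w, then an Exp(1) jump

rng = random.Random(1)
mc = sum(math.exp(-sample_tau(1.0, rng)) for _ in range(20000)) / 20000
# compare mc with exp(-kappa(1.0)), as predicted by Theorem 1 for x = q = 1
```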

Moreover, the following identity between measures on [0, ∞[ × [0, ∞[ holds (a ballot-type identity; see [5], Corollary 3 on page 190):

x P(−X_y ∈ dx) dy = y P(τ_x ∈ dy) dx.
Note that if (X_x)_{x≥0} has bounded variation, using (9), we can write

κ(q) = q/(−d) + ∫_0^∞ (1 − e^{−qy}) Π(dy),

where Π is a measure on R_+ (the Lévy measure of the subordinator τ) verifying (use (9) and Wald's identity, or (8)):

∫_0^∞ y Π(dy) = 1/(−Ψ'(0)) − 1/(−d).

Now we introduce the supremum process, defined for x ≥ 0 by S_x := sup{X_y : 0 ≤ y ≤ x}, and the a.s. unique instant T_x at which X reaches this supremum on [0, x]. By duality, (S_x, S_x − X_x, T_x) is distributed as (X_x − I_x, −I_x, x − g_x), where g_x denotes the a.s. unique instant at which (X_{y−})_{y≥0} reaches its overall infimum on [0, x] (see Proposition 3 in [5] or [4] on page 25). If T is an exponentially distributed random time with parameter q > 0 which is independent of X, and λ, μ > 0, then the joint Laplace transform of (S_T, S_T − X_T) is explicit (use [5] Theorem 5 on page 160 and Theorem 4 on page 191), which gives the corresponding transform for (X_T − I_T, −I_T).

Tail of ν and Lévy processes indexed by R
We give here several definitions which will be useful for the study of the asymptotic regimes. Following the notation in [6], we say that ν ∈ D_α according to the tail behavior of ν: roughly, ν ∈ D_2 when ν has a finite second moment, and ν ∈ D_α, α ∈]1, 2[, when the tail ν̄ has index −α. Then, for α ∈]1, 2[, we introduce the associated normalizing constants. We denote by (B_z)_{z∈R} a two-sided Brownian motion, i.e. (B_x)_{x≥0} and (B_{−x})_{x≥0} are independent standard Brownian motions. For α ∈]1, 2[, we denote by (σ_z)_{z∈R} a càdlàg process with independent and stationary increments such that (σ_x)_{x≥0} is a standard spectrally positive stable Lévy process with index α. Finally, we introduce the processes (Y^{α,λ}_z)_{z∈R} for λ ≥ 0, built from B (for α = 2) or σ (for α ∈]1, 2[) with an additional drift −λz, and the infimum process defined for x ∈ R by I^{α,λ}_x := inf{Y^{α,λ}_y : y ≤ x}.

Properties at a fixed time and asymptotic regimes

Statistics at a fixed time
Our purpose in this section is to specify the distribution of the covering C(t) using the characterization of Section 2.1 and the results of Section 2.2. This characterization will be useful to prove asymptotic results (Theorem 2, Theorem 3 and Corollary 3) and for the dynamic results given in [2]. In that view, following the previous section, we consider the process (Y^(t)_x)_{x∈R} defined by Y^(t)_0 = 0 and

Y^(t)_b − Y^(t)_a = −(b − a) + ∑_{t_i ≤ t, a < x_i ≤ b} l_i   for a < b,

which has independent and stationary increments, no negative jumps and bounded variation. Introducing also its infimum process, defined for x ∈ R by I^(t)_x := inf{Y^(t)_y : y ≤ x}, we can now give a handy expression for the covering at a fixed time and obtain that the hardware becomes full at a deterministic time equal to 1/m (see Section 4 for the proof).
Proposition 2. For every t < 1/m, we have C(t) = {x ∈ R : Y^(t)_x > I^(t)_x} ≠ R a.s. For every t ≥ 1/m, we have C(t) = R a.s. Indeed, in queueing systems, tm is the load, and C(t) ≠ R ⇔ tm < 1 is the standard stability criterion. The complete argument is deferred to Section 4.
To specify the distribution of C(t), it is equivalent and more convenient to describe its complementary set, denoted by R(t), which corresponds to the free space of the hardware. By the previous proposition, there is the identity:

R(t) = {x ∈ R : Y^(t)_x = I^(t)_x}.

We begin by giving some geometric properties of this set. These properties are classical (see [17] for storage systems and [18] for queueing theory).
Proposition 3. For every t ≥ 0, R(t) is stationary, its closure is symmetric in distribution, and it enjoys the regeneration property. Stationarity is plain from the construction of the covering, and the regeneration property is a direct consequence of Lemma 2. Symmetry is then a consequence of Lemma 6.5 in [25] or Corollary (7.19) in [26]. The computation of P(x ∈ C(t)) is then derived from Theorem 1 in [25]. See Section 3.1 in [1] for the complete proof.
Even though for each fixed t the distribution of R(t) cl is symmetric, the processes (R(t) cl : t ∈ [0, 1/m]) and (−R(t) cl : t ∈ [0, 1/m]) are quite different.For example, we shall observe in [2] that the left extremity of the data block straddling 0 is a Markov process but the right extremity is not.
We now want to characterize the distribution of the free space R(t). For this purpose, we need some notation. The drift of the Lévy process (Y^(t)_x)_{x≥0} is equal to −1, its Lévy measure is equal to tν, and its Laplace exponent Ψ^(t) is then given by (see (6))

Ψ^(t)(ρ) = −ρ + t ∫_0^∞ (1 − e^{−ρl}) ν(dl).

For the sake of simplicity we write, recalling (1), g(t), d(t) and l(t) = d(t) − g(t), which are respectively the left extremity, the right extremity and the length of the data block straddling 0, B_0(t). Introducing also the processes (→τ^(t)_x)_{x≥0} and (←τ^(t)_x)_{x≥0}, defined by

→τ^(t)_x := inf{y ≥ 0 : |→R(t) ∩ [0, y]| > x},  ←τ^(t)_x := inf{y ≥ 0 : |←R(t) ∩ [0, y]| > x},

enables us to describe R(t) in the following way (see Section 4 for the proof). (iii) The distribution of (g(t), d(t)) is specified by an explicit formula, in which U is a uniform random variable on [0, 1] independent of l(t), and Π^(t) is the Lévy measure of κ^(t).

Remark 2. Such results are classical for regenerative sets (see e.g. [17,19,23]). But we need this particular characterization, and the expressions given in the proof below, for the forthcoming results.
We can then estimate the number of data blocks on the hardware. If ν has finite mass, we write N_x(t) for the number of data blocks of the hardware restricted to [−x, x] at time t. This quantity has a deterministic asymptotic as x tends to infinity, which is maximal at time 1/(2m). Thus the number of blocks of the hardware reaches its maximum a.s. at time 1/(2m). Moreover, we can describe here the hashing of data. We recall that a file labelled by i is stored at location x_i. In the hashing problem, one is interested in the location where the file i is stored knowing x_i. By stationarity, we can take x_i = 0 and consider a file of size l which we store at time t at location 0 on the hardware, whose free space is equal to R(t). The first point (resp. the last point) of the hardware occupied for the storage of this file is equal to d(t) (resp. to d(t) + →τ^(t)_l). This gives the distribution of the extremities of the portion of the hardware used for the storage of a file.
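For ν = δ_1 (so m = 1), the maximality at 1/(2m) can be illustrated by simulation, under the queueing reading of the model: a new block (busy period) begins exactly when an arrival finds the workload at 0, which by PASTA (Poisson arrivals see time averages) happens at rate t(1 − tm), maximal at t = 1/(2m). The helper name below is ours, and this is a sketch for unit-size files only.

```python
import random

def block_density(t, L, seed=0):
    """Empirical number of data blocks per unit length on [0, L] for
    unit-size files (nu = delta_1) arriving at rate t < 1. An arrival
    finding zero workload starts a new block (busy period)."""
    rng = random.Random(seed)
    pos, w, blocks = 0.0, 0.0, 0
    while pos < L:
        gap = rng.expovariate(t)   # distance to the next arrival location
        pos += gap
        w = max(w - gap, 0.0)      # workload decreases at unit rate
        if w == 0.0:
            blocks += 1            # arrival lands on free space: new block
        w += 1.0                   # unit file size
    return blocks / L

densities = {t: block_density(t, 100000, seed=int(10 * t)) for t in (0.3, 0.5, 0.7)}
# expected ≈ t(1 - t), maximal near t = 1/(2m) = 0.5
```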

Observations and examples
First, for every ρ ≥ 0, (12) and (13) give explicit expressions for the Laplace exponents. Using (11), we also have an identity of measures on [0, ∞[. Finally, we give the distribution of the extremities of B_0(t).

Let us consider three explicit examples
Example 2.
(3) For the exponential distribution ν(dl) = 1_{l≥0} e^{−l} dl, we can get explicit expressions. Finally, we specify two distributions involved in the storage of the data.
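Returning to the exponential example (3), the Laplace exponent can be made explicit. A sketch of the computation, assuming the sign convention of Section 2.2 (so that κ^(t) is the inverse function of −Ψ^(t)):

```latex
% Laplace exponent for \nu(dl) = e^{-l}\,dl (drift -1, Lévy measure t\nu):
-\Psi^{(t)}(\rho) = \rho - t\int_0^\infty \bigl(1-e^{-\rho l}\bigr)e^{-l}\,dl
                  = \rho - \frac{t\rho}{1+\rho}, \qquad \rho \ge 0.
% Solving -\Psi^{(t)}(\rho) = q, i.e. \rho^2 + (1-t-q)\rho - q = 0,
% for the positive root gives the inverse function:
\kappa^{(t)}(q) = \frac{q+t-1+\sqrt{(1-t-q)^2+4q}}{2}, \qquad q \ge 0.
```

Note that here m = 1, so the constraint t < 1/m reads t < 1, and indeed κ^(t)(0) = (t − 1 + |1 − t|)/2 = 0 for t < 1.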
Writing −g(t) = γ(t) (see (28) and (29)) and using the fluctuation identity (16) gives another expression for the Laplace transform of g(t): for all t ∈ [0, 1/m[ and λ ≥ 0, we have the stated formula. As a consequence, we see that the law of g(t) is infinitely divisible. Moreover, this expression will give the generating triplet of the additive process (g(t))_{t∈[0,1/m[} (see Theorem 2 in Section 4 of [2]).

Asymptotics at saturation of the hardware
We now focus on the asymptotic behavior of R(t) as t tends to 1/m, that is, as the hardware becomes full. First, note that if ν has a finite second moment, then E(l(t)) is of order (1 − mt)^{−2}. Thus we may expect that, if ν has a finite second moment, (1 − mt)² l(t) should converge in distribution as t tends to 1/m. Indeed, in the particular case ν = δ_1, or under the conditions of Corollary 2.4 in [4], we have an expression for Π^(t)(dx), and we can prove that (1 − mt)² l(t) does converge in distribution to a gamma variable.
More generally, we shall prove that the rescaled free space (1 − mt)² R(t) converges in distribution as t tends to 1/m. In that view, we need to prove that the process (Y^(t)_{(1−mt)^{−2} x})_{x∈R} converges, after suitable rescaling, to a random process. Thanks to (17), (1 − mt)² R(t) should then converge to the set of points where this limiting process coincides with its infimum process. We shall also handle the case where ν has an infinite second moment and find the correct normalization, which depends on the tail of ν. The proofs are close to the proofs of the next section, and they are carried out simultaneously in the last section of this paper.
In queueing systems, asymptotics at saturation are known as the heavy traffic approximation (ρ = tm → 1), which depends similarly on the tail of ν. For finite ν, the results given here could be directly derived from results in queueing theory (see III.7.2 in [11], or [18] if ν has a finite second moment, and [8] for heavy tails of ν). The main difference is that ν can be infinite in this paper. Then the busy cycle is not defined and we consider the whole random set of occupied locations. Moreover, as explained below, the asymptotics of R(t) cannot be directly derived from the asymptotics of Y or of the workload R.
We now introduce the following functions, defined for every t ∈ [0, 1/m[ and α ∈]1, 2[. Recalling the notation of Section 2.3, we then have the following weak convergence result for the Hausdorff metric defined in Section 2 (see Section 4 for the proof).
First, we prove the convergence of the Laplace exponent Ψ^(t), suitably rescaled, as t tends to 1/m, which ensures the convergence of the rescaled Lévy process Y^(t) (see Lemma 3). These convergences do not a priori entail the convergence of the rescaled random set R(t)^cl, since they do not entail the convergence of excursions. Nevertheless, they entail the convergence of κ^(t), since κ^(t) ∘ (−Ψ^(t)) = Id (Lemma 4). We then get the convergence of τ^(t) as t tends to 1/m, and thus of its range, the rescaled set R(t)^cl.
Remark 3. More generally, as in queueing theory and [6], we can generalize these results to regularly varying tails. If ν̄ is regularly varying at infinity with index −α ∈]−2, −1[, then we have the corresponding weak convergence in H(R); for instance, this covers the case ν̄(x) ∼ C x^{−α}. If ν̄ is regularly varying at infinity with index −2, there are many cases to consider.
Remark 4. The density of data blocks of size dx in the rescaled set R(t)^cl is equal to (mt/(1 − mt)) Π^(t)(dx). By the previous theorem or corollary, this density converges weakly as t tends to 1/m to the density of data blocks of size dx of the limit set {x ∈ R : Y^{α,1}_x = I^{α,1}_x}^cl. This limit density, denoted by Π^{α,1}(dx), can be computed explicitly in the cases ν ∈ D_α (α ∈ {2, 2+}), thanks to the last corollary.
Note that this is also the Lévy measure of the limit covering {x ∈ R : Y^{α,1}_x = I^{α,1}_x}^cl.

Asymptotic regime on a large part of the hardware
Here we look at the set of occupied locations C(t) in a window of size x: we consider the asymptotics of C(t) ∩ [0, x] as x tends to infinity at the saturation time. As far as we know, the results given here are new even when ν is finite. We introduce the functions f_α, defined for all x ∈ R*_+ and α ∈]1, 2[. We then have the following asymptotic regime (see Section 4 for the proof).
Let x tend to infinity and t to 1/m. As in [9], we then observe a phase transition for the size of the largest block of data in [0, x] as x → ∞, according to the rate of filling of the hardware. More precisely, denoting B_1(x, t) = |I_1(x, t)|, where (I_j(x, t))_{j≥1} is the sequence of component intervals of C(t) ∩ [0, x] ranked in decreasing order of size, we have, as x tends to infinity and t to 1/m:

- If 1 − mt ∼ λ f_α(x) with λ > 0, then B_1(x, t)/x converges in distribution to the largest length of excursion of (Y^{α,λ}_x − I^{α,λ}_x)_{x∈[0,1]}.

The phase transition occurs at times t such that 1 − mt ∼ λ f_α(x) with λ > 0. The more data arrive in small files (i.e. the faster ν̄(x) tends to zero as x tends to infinity), the later the phase transition occurs. In [9,6], the hardware is a circle and the processes required for the asymptotics are the bridges of the processes used here. A consequence is that in our model, B_1(x, t)/x tends to zero or one with positive probability at the phase transition, which is not the case for the parking problem in [9,6]. More precisely, denoting by B^{α,λ} the law of the largest length of excursion of (Y^{α,λ}_x − I^{α,λ}_x)_{x∈[0,1]}, we have:

Proofs
In this section, we provide rigorous arguments for the results stated in Section 3.
Proof of Proposition 2. (Y^(t)_x)_{x≥0} is a Lévy process, so we have (see [5], Corollary 2 on page 190) the a.s. behavior of Y^(t) − I^(t). Then Proposition 1 ensures that for every t < 1/m, C(t) = {x ∈ R : Y^(t)_x > I^(t)_x} ≠ R a.s.
Similarly, we get that for every t ≥ 1/m, C(t) = R a.s.
For the forthcoming proofs, we fix t ∈ [0, 1/m[, which is omitted from the notation of processes for the sake of simplicity.
To prove the regeneration property and characterize the Laplace exponent of →τ, we first need to establish a regeneration property at the right extremities of the data blocks. In that view, we consider, for every x ≥ 0, the point processes P_x and P^x of files arrived at the left and at the right of x before time t.

Lemma 2. For all x ≥ 0, P^{d_x(R(t))} is independent of P_{d_x(R(t))} and distributed as P^0.
Proof. The simple Markov property for PPP states that for every x ∈ R, P^x is independent of P_x and distributed as P^0. Clearly this extends to simple stopping times in the filtration (σ(P_y : y ≤ x))_{x∈R}, and further to any stopping time in this filtration, using the classical argument of approximation of stopping times by a decreasing sequence of simple stopping times (see also [21]). As d_x(R(t)) is a stopping time in this filtration, P^{d_x(R(t))} is independent of P_{d_x(R(t))} and distributed as P^0.
Proof of Proposition 4. (i) By symmetry of R(t)^cl, →R(t) and ←R(t) are identically distributed. The regeneration property ensures that they are independent of each other and of (g(t), d(t)). The fact that →τ is a subordinator will be proved below, but could also be derived directly from the regeneration property of →R(t) (see [19]); similarly for ←τ and ←R(t). Then, using again the definition of →τ given in Section 3.1, Lemma 2 entails that P^{d(t)} is distributed as a PPP on [0, t] × R_+ × R_+ with intensity ds ⊗ dx ⊗ ν(dl). So (Y_{y+d(t)} − Y_{d(t)})_{y≥0} is a Lévy process with bounded variation and drift −1 which verifies condition (10) (use (8) and −1 + mt < 0). Then Theorem 1 entails that →τ is a subordinator whose Laplace exponent is the inverse function of −Ψ^(t).
(ii) ←τ is distributed as →τ by definition.
(iii) We now determine the distribution of (g(t), d(t)) using fluctuation theory, which enables us to get identities useful for the rest of this work. We write (Ỹ_x)_{x≥0} for the càdlàg version of (−Y_{−x})_{x≥0}. Using (17) and the fact that Y has no negative jumps, and using again (17) and the fact that (Y_x)_{x≥0} is regular for ]−∞, 0[ (see [5], Proposition 8 on page 84), we obtain a.s. identities for g(t) and d(t).
Proof of Corollary 1. As ν̄(0) < ∞, then Π̄(0) = tν̄(0) < ∞ (see (21)). So →τ is the sum of a drift and a compound Poisson process. That is, there exist a Poisson process (N_x)_{x≥0} of intensity tν̄(0) and a sequence (X_i)_{i∈N} of iid variables with law ν/ν̄(0), independent of (N_x)_{x≥0}, such that →τ_x = x + ∑_{i=1}^{N_x} X_i. The conclusion follows by the law of large numbers (see [5] on page 92). This completes the proof.
The proofs of Theorem 2 and Theorem 3 are close and are carried out simultaneously. For that purpose, we now introduce Ψ^{α,λ}, the Laplace exponent (see (5)) of Y^{α,λ}, given for ρ ≥ 0, λ ≥ 0 and α ∈]1, 2[. We denote by D the space of càdlàg functions from R_+ to R, which we endow with the Skorokhod topology (see [15] on page 292). First, we prove the weak convergence of Y^(t) after suitable rescaling.
Proof of Lemma 3. Using (7), we have the expression of the rescaled exponent. We now handle the different cases, which proves the first part of the lemma using (32).
So we can apply the dominated convergence theorem. As y^{α/2} = o(y^{−1} ν̄(1/y)) (y → 0), we can complete the proof. • Case ν ∈ D_2. We split the integral. First, we have the bound which proves the first part of the lemma using (32).
These convergences ensure the convergence of the finite-dimensional distributions of the processes.The weak convergence in D, which is the second part of the lemma, follows from Theorem 13.17 in [16].
In the spirit of Section 3.1, we introduce the expected limit set, that is the free space of the covering associated with Y α,λ , and the extremities of the block containing 0.
We have the following analog of Proposition 4, where the Laplace exponent is the inverse function of −Ψ^{α,λ}. Finally, using (Ψ^{α,λ})'(0) = −λ, the counterpart of (30) gives the corresponding identity for ρ, μ ≥ 0 and ρ ≠ μ. The proof of these results follows the proof of Proposition 4, except for two points:

1) We cannot use the point process of files to prove the stationarity and the regeneration property of R(α, λ), and we must use the process Y^{α,λ} instead. The stationarity is a direct consequence of the stationarity of (Y^{α,λ}_x − I^{α,λ}_x)_{x∈R}. The regeneration property is a consequence of the counterpart of Lemma 2, which can be stated as follows: for all x ∈ R, (Y^{α,λ}_{y+d_x(R(α,λ))} − Y^{α,λ}_{d_x(R(α,λ))})_{y≥0} is independent of the past up to d_x(R(α, λ)) and distributed as (Y^{α,λ}_y)_{y≥0}. As for Lemma 2, this property is an extension to the stopping time d_x(R(α, λ)) of the obvious result for fixed x.

2) It is convenient to define (g(α, λ), d(α, λ)) directly.

Proof of Theorem 3. The argument is similar to that of the proof of Theorem 2, using the other limits of Lemma 4. We get that if x → ∞ and 1 − mt ∼ λ f_α(x) with λ > 0, then x^{−1} R(t) converges weakly in H(R) to {x ∈ R : Y^{α,λ}_x = I^{α,λ}_x}^cl. The theorem follows by restriction to [0, 1].
To prove the corollary of Theorem 3, we need the following result. The right-hand side converges weakly to B^{α,λ} as x tends to infinity. Letting λ tend to infinity, the lemma above entails that B_1(x, t)/x → 0 in probability as x → ∞.
Similarly, if 1 − mt = o(f_α(x)) (x → ∞), then for every λ > 0 and x large enough, an analogous bound holds.

Writing R = ∪_{n∈N} [a_n, b_n[, we denote by Ř := ∪_{n∈N} [−b_n, −a_n[ the symmetrical of R with respect to 0, closed at the left and open at the right. We consider the positive part (resp. negative part) of R, denoted by →R (resp. ←R, taken turned over).

Example 1.
For a given R represented by the dotted lines, we give below →R and ←R, which are also represented by dotted lines. Moreover, the endpoints of the data block containing 0 are denoted by g_0 and d_0.
→R(t) (resp. ←R(t)) is the free space at the right of B_0(t) (resp. at the left of B_0(t), turned over, closed at the left and open at the right). We then have the identity expressing R(t) in terms of g(t), d(t), →R(t) and ←R(t).

Proposition 4.
(i) The random sets →R(t) and ←R(t) are independent, identically distributed and independent of (g(t), d(t)). They are the ranges of the subordinators →τ^(t) and ←τ^(t) respectively, whose Laplace exponent κ^(t) is the inverse function of −Ψ^(t).