Operator Self-Similar Processes On Banach Spaces

Operator self-similar (OSS) stochastic processes on arbitrary 
Banach spaces are considered. If the family of expectations of 
such a process is a spanning subset of the space, it is proved 
that the scaling family of operators of the process under 
consideration is a uniquely determined multiplicative group of 
operators. If the expectation-function of the process is 
continuous, it is proved that the expectations of the process have 
power-growth with exponent greater than or equal to 0, that is, their norms are bounded above by a nonnegative constant times such a
power-function, provided that the linear space spanned by the 
expectations has category 2 (in the sense of Baire) in its 
closure. It is shown that OSS processes whose expectation-function 
is differentiable on an interval (s_0, ∞), for some s_0 ≥ 1, have a unique scaling family of operators of the form 
{s^H : s > 0}, if the expectations of the process span a dense 
linear subspace of category 2. The existence of a scaling family 
of the form {s^H : s > 0} is proved for proper Hilbert-space 
OSS processes with an Abelian scaling family of positive 
operators.


Introduction
Let Ᏹ denote a Banach space and let ᏸ(Ᏹ) be the algebra of all bounded linear operators on Ᏹ. Let (Ω, Ᏺ, P) be a probability space. Throughout this paper, given a random variable X : Ω → Ᏹ, the measure PX^{-1} denotes the distribution of X, that is, the following Borel probability measure:

PX^{-1}(B) = P({ω ∈ Ω : X(ω) ∈ B}), for all Borel subsets B of Ᏹ. (1.1)

Definition 1.1. An operator self-similar process is a stochastic process {X(t) : t > 0} on Ᏹ with a scaling family of operators {A(s) : s > 0}, that is, a process such that there is a family {A(s) : s > 0} in ᏸ(Ᏹ) with the property that for each s > 0,

P(X(st_1),..., X(st_n))^{-1} = P(A(s)X(t_1),..., A(s)X(t_n))^{-1}, for all t_1,..., t_n > 0, n ≥ 1. (1.2)

The property above will be referred to as the self-similarity of the process {X(t) : t > 0} under the scaling family of operators {A(s) : s > 0}. The term operator self-similar will be designated by the acronym OSS throughout this paper. If the process has a scaling family of the particular form

A(s) = s^H I, s > 0, (1.3)

where H is some fixed scalar and I denotes the identity operator, then it is called self-similar instead of OSS. Self-similar processes were introduced by Lamperti in 1962 [10]. OSS processes appeared later [9]. In this paper we consider and study OSS processes valued in (possibly infinite-dimensional) Banach spaces. Our main idea is to obtain information about such processes by using the theory of one-parameter semigroups and groups of operators (see Definitions 1.2 and 1.3 below for these notions).
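Scaling families of the form {s^H : s > 0}, with H an operator rather than a scalar, can be experimented with numerically. The sketch below is our own illustration (the helper names `mat_exp`, `mat_mul`, and the sample matrix H are assumptions, not taken from the paper): it builds s^H = e^{(log s)H} for a 2×2 matrix H via the exponential power series and checks the multiplicative group law A(st) = A(s)A(t); the additive version is recovered as T(t) = A(e^t).

```python
# Illustrative sketch (not from the paper): the multiplicative group
# A(s) = s^H = exp((log s) H) for a fixed 2x2 matrix H, checked against
# the group law A(st) = A(s)A(t).  T(t) := A(e^t) is the additive version.

import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(X, terms=60):
    """Matrix exponential of a 2x2 matrix via its power series."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        term = mat_mul(term, X)        # term is now X^n / n!
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

def A(s, H):
    """Multiplicative group element s^H = exp((log s) H), s > 0."""
    t = math.log(s)
    return mat_exp([[t * H[i][j] for j in range(2)] for i in range(2)])

H = [[0.3, 1.0], [0.0, 0.5]]  # an arbitrary sample exponent matrix

# Group law A(st) = A(s)A(t):
lhs = A(2.0 * 3.0, H)
rhs = mat_mul(A(2.0, H), A(3.0, H))
print(all(math.isclose(lhs[i][j], rhs[i][j], abs_tol=1e-9)
          for i in range(2) for j in range(2)))  # True
```

Since any single H commutes with itself, e^{(log s)H} e^{(log t)H} = e^{(log st)H}, so the family is automatically a multiplicative group, with inverse A(s)^{-1} = A(1/s).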
This section is dedicated to summarizing the main results, introducing the basic notions, setting up notation, and giving some examples. Examples 1.4 and 1.9 in this section emphasize why it is natural to think of groups of operators in connection with OSS processes. In Section 2 we study OSS processes with rich families of expectations. Theorem 2.3 in that section says that, if the linear space spanned by the expectations of such a process is a set having category 2 in the sense of Baire in its closure, then there exist constants a ≥ 0 and M ≥ 1 such that

‖E[X(t)]‖ ≤ M t^a, t ≥ 1. (1.4)

In order that the inequalities above hold we also require that the OSS processes have a norm-continuous expectation-function t → E[X(t)], t > 0. The main ingredient in the proof of the theorem is the fact that OSS processes whose expectations span a dense linear subspace of the whole space have a unique scaling family of operators, which is necessarily a multiplicative group of operators. This is proved in Theorem 2.1 of Section 2, and we say that such processes have a spanning family of expectations. For those processes having an expectation-function differentiable on an interval of the form (s_0, ∞), s_0 ≥ 1, we are able to show that the scaling family is necessarily of the form {s^H : s > 0} for some H ∈ ᏸ(Ᏹ) (Theorem 2.4, Section 2). In Section 3 we consider OSS processes with scaling families of invertible operators. The main result in that section is Theorem 3.5, which states that proper Hilbert-space valued OSS processes with a scaling family of commuting, invertible, positive operators have an exponent, that is, have a scaling family of operators of the form {s^H : s > 0} for some H ∈ ᏸ(Ᏹ). For the term proper OSS process we refer the reader to Definition 3.5 in Section 3.
In order to set up terminology we recall the following definition.
Definition 1.2. An additive semigroup of operators is a family {T(t) : t ≥ 0} in ᏸ(Ᏹ) with the following properties: T(0) = I and

T(t + s) = T(t)T(s), for all t, s ≥ 0. (1.5)

If one can define T(t) for t < 0 such that relation (1.5) holds for all t, s ∈ R, then we say that {T(t) : t ∈ R} is an additive group of operators. The theory of semigroups of operators is customarily exposed in "additive notation" [7, 14]. It can be easily translated into multiplicative notation as follows.
Definition 1.3. A multiplicative semigroup of operators is a family {A(s) : s ≥ 1} in ᏸ(Ᏹ) with the following properties: A(1) = I and

A(ts) = A(t)A(s), for all t, s ≥ 1. (1.6)

If one can define A(s) for 0 < s < 1 such that relation (1.6) holds for all t, s ∈ (0,∞), then we say that {A(s) : s > 0} is a multiplicative group of operators. It is easy to see that if {T(t)} is an additive semigroup (group) of operators, then {A(t)} given by A(t) := T(log t) is a multiplicative semigroup (group) of operators and, conversely, if {A(t)} is a multiplicative semigroup (group) of operators, then {T(t)} given by T(t) := A(e^t) is an additive semigroup (group) of operators. For the purposes of this paper multiplicative notation is preferred, and results taken from the theory of semigroups of operators and traditionally exposed in additive notation will be used in their multiplicative version by the mechanism exposed above.

Indeed, for arbitrary, fixed s > 0 and {t} = {t_1, t_2,..., t_n}, t_1, t_2,..., t_n > 0, one can write

The process

Proof. Indeed observe that for arbitrary, fixed s > 0 one can write

It is worth observing that a converse construction is also true.

For R^n-valued OSS processes, a connection between this class of processes and operator-stable probability measures has been observed by the authors of [4]. Their idea extends to Banach space valued processes as follows.

Proof. According to [5] such a measure is infinitely divisible and hence μ^t above makes sense for all t > 0. Furthermore, according to the same paper, there exist α > 0 and a subset {b_t : t > 0} of Ᏹ such that

Lévy processes are random processes {X(t) : t ≥ 0} which are stochastically continuous (i.e., for each ε > 0, P({‖X(s + t) − X(s)‖ > ε}) → 0 as t → 0), start almost surely at the origin (i.e., X(0) = 0 a.s.), are time homogeneous (i.e., the distribution of {X(s + t) − X(s) : t ≥ 0} does not depend on s), have independent increments (i.e., X(t_0), X(t_1) − X(t_0),..., X(t_n) − X(t_{n−1}) are independent for any choice 0 ≤ t_0 < t_1 < ··· < t_n), and have right-continuous sample-paths with left limits.
Example 1.10. An important class of examples of OSS processes is the class of proper Lévy processes {X(t) : t ≥ 0} having stationary, independent, operator-stable increments and the property that X(1) has null centering function.
We refer the reader to [4, Theorem 7] for a proof and include in the following some explanations of the notions in the previous example. Operator-stable measures are measures obtained as limits, like the measure μ in Example 1.8. Such measures, on arbitrary Banach spaces, are studied in [8, 18, 19, 20]. The fact that the class of operator-stable probability measures on a separable, infinite-dimensional Banach space coincides with the class of infinitely divisible laws is established in [19]. In [8, 18] the existence of exponents for full (i.e., not supported on a hyperplane), operator-stable probability measures is proved. This means that a full probability measure μ on a real, separable Banach space is proved to be operator-stable if and only if there is some operator B so that μ^t is a translation of the measure t^B μ = e^{(log t)B} μ, for each t > 0. More formally, there is a function t → b_t, called the centering function of μ, such that μ^t = (t^B μ) ∗ δ_{b_t}, for all t > 0.
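A standard worked instance of the relation μ^t = (t^B μ) ∗ δ_{b_t} (our own illustration, not taken from the paper) is the Gaussian case: a centered Gaussian measure on R^d is operator-stable with exponent B = (1/2)I and null centering function.

```latex
% \mu^t denotes the t-th convolution power of \mu = N(0,\Sigma) on \mathbb{R}^d.
\[
  \mu^{t} = N(0,\, t\Sigma),
  \qquad
  t^{B}\mu = t^{\frac{1}{2}I}\mu
           = \bigl(t^{1/2} I\bigr)\mu
           = N\bigl(0,\, t^{1/2}\,\Sigma\, t^{1/2}\bigr)
           = N(0,\, t\Sigma),
\]
\[
  \text{so}\quad \mu^{t} = \bigl(t^{B}\mu\bigr) * \delta_{b_{t}}
  \quad\text{with } b_{t} = 0 \text{ for all } t > 0.
\]
```

Here t^B μ denotes the pushforward of μ under the map x ↦ t^B x, and the two displayed measures agree because the pushforward of N(0, Σ) under x ↦ t^{1/2}x is N(0, tΣ).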
The simple, yet important case of the joint limiting distribution of the sample moments is considered in [11]. If X_1, X_2,... are i.i.d. as X, then X is said to belong to the domain of attraction of Y if, for some sequences {a_n} and {b_n} of scalars, a_n^{-1}(X_1 + X_2 + ··· + X_n − b_n) → Y in distribution. The authors of [11] show, among other things, that the random vector Z = (X, X^2, X^3,...) belongs to the generalized domain of attraction of some operator-stable law on R^∞ if and only if each X^k belongs to the domain of attraction of some operator-stable law. The generalized domain of attraction of a random vector has a definition similar to that (given above) of the domain of attraction of a random variable, only that one substitutes the sequence {a_n^{-1}} by a sequence of invertible operators and {b_n} by a sequence of vectors. The results in [11] are summarized in the book [12, Chapter 10].
We considered it important to make the comments above relative to the existing literature on operator-stable laws on Banach spaces, as those papers provide important context for this one.
We conclude the introductory part of this paper by noting that the definition used by some authors for both OSS processes and for self-similar processes is slightly different from ours. More exactly, a stochastic process {X(t) : t > 0} is called OSS by those authors if there exist a family {A(s) : s > 0} in ᏸ(Ᏹ) and a function b : (0,∞) → Ᏹ such that, for each s > 0,

P(X(st_1),..., X(st_n))^{-1} = P(A(s)X(t_1) + b(s),..., A(s)X(t_n) + b(s))^{-1}, for all t_1,..., t_n > 0. (1.16)

The function s → b(s) is called a drift-function. So in this paper we study OSS stochastic processes with null drift-function, and simply call them OSS processes. The same goes for self-similar processes. Also, we do not include in the definition any continuity requirement, and we consider the time interval to be (0,∞) rather than [0,∞).
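For the scalar, drift-free definition, standard Brownian motion with H = 1/2 is the basic example: {B(st) : t > 0} and {s^{1/2}B(t) : t > 0} are both centered Gaussian processes, so equality of their finite-dimensional distributions reduces to the covariance identity min(su, sv) = s·min(u, v). The sketch below (the helper names are ours, not the paper's) checks this identity on a time grid.

```python
# Numerical sanity check (not from the paper): standard Brownian motion
# {B(t) : t > 0} is self-similar with exponent H = 1/2.  For centered
# Gaussian processes it suffices to compare covariance functions:
# Cov(B(u), B(v)) = min(u, v).

import math

def cov_bm(u, v):
    """Covariance function of standard Brownian motion."""
    return min(u, v)

def check_self_similarity(s, times):
    """Compare covariances of {B(st)} and {sqrt(s) B(t)} on a time grid."""
    for u in times:
        for v in times:
            left = cov_bm(s * u, s * v)   # Cov(B(su), B(sv))
            right = s * cov_bm(u, v)      # Cov(sqrt(s)B(u), sqrt(s)B(v))
            if not math.isclose(left, right):
                return False
    return True

print(check_self_similarity(2.7, [0.5, 1.0, 1.3, 4.0]))  # True
```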

OSS processes with a spanning set of expectations
In this section the main idea is that OSS processes with a spanning set of expectations have uniquely determined scaling families of operators which are groups of operators. We prove this result and investigate its consequences. We begin by setting up some notations.
For each subset S of Ᏹ, Span(S) denotes the linear subspace of Ᏹ spanned by the vectors in S, and cl Span(S) denotes the closure of that subspace.
In this case, for each t > 0, E[X(t)] denotes the expectation of the random variable X(t).
Some popular examples of R^n-valued OSS processes have zero expectations (standard fractional Brownian motions, for instance). Since this is often a good assumption, the following remark is in order here.

Proof. Observe that for each s, t > 0 one can write

E[X(st)] = A(s)E[X(t)].

According to this computation one obtains, in particular, E[X(s)] = A(s)E[X(1)], for all s > 0. Let us denote Ᏹ_0 = cl Span{E[X(t)] : t > 0}. This subspace has the following interesting properties.
For an arbitrary, fixed y_0 ∈ Ᏹ_0 and each fixed

Theorem 2.1. If {X(t) : t > 0} is an OSS process with a spanning family of expectations, that is, if Ᏹ_0 = Ᏹ, then its scaling family of operators is a uniquely determined multiplicative group of operators.
Proof. The invariance property is an immediate consequence of the equality E[X(st)] = A(s)E[X(t)], which was used and proved in detail in the proof of the previous remark. Observe that, because for each fixed s > 0 one can write

which implies

To show that {A(s) : s > 0} is a multiplicative group of operators, observe that

for all s, t > 0. Also

Keeping in mind that operators T on the field of scalars are functions of the form T(z) = cz, the operator norm ‖T‖ being |c|, one can write

(2.14)

The continuity of the expectation-function and the fact that E[X(u)] ≠ 0 imply that the semigroup is uniformly continuous, hence it must be of the form

Proof. Next we consider the multiplicative semigroup of operators obtained by taking indices in the interval [1,∞) and show that it is a multiplicative C_0-semigroup of operators, that is,

for each x in Ᏹ_0. Clearly, if one considers x of the form x = E[X(s)] for some s > 0, the condition above holds because

By [16], this implies that the family {B(u) : u ∈ (1, 1 + δ)} is norm-bounded. Based on that, the fact that property (2.17) holds for each x ∈ Span{E[X(t)] : t > 0} extends to the fact that it holds for each x ∈ Ᏹ_0 by a straightforward argument. Indeed, let ε > 0 be arbitrary and fixed. Consider any fixed x ∈ Ᏹ_0. Since Ᏹ_0 is the closure of Span{E[X(t)] : t > 0}, we can choose y ∈ Span{E[X(t)] : t > 0} such that ‖x − y‖ < ε/3 and M‖x − y‖ < ε/3. Also, since B(t)y → y as t → 1^+, we can choose 0 < δ_1 < δ such that ‖B(t)y − y‖ < ε/3 for all t ∈ (1, 1 + δ_1) ⊂ (1, 1 + δ). Now

B(t)E[X(s)] = A(t)E[X(s)]
Proof. Indeed, in this case Ᏹ_0 = Ᏹ and hence B(t) = A(t), for all t > 0. So

which in turn is less than or equal to

respectively, where M_0 denotes the same constant as in the proof of the theorem above, and M = M_0 ∫_Ω ‖X(1)‖ dP. The existence of an H as above can be obtained on infinite-dimensional spaces if one considers processes with the property that the expectation-function is differentiable on an interval of the form (s_0, ∞) for some fixed s_0 ≥ 1.
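The growth estimate can be traced through the constants above as follows (a sketch, under the assumption that the scaling semigroup satisfies the growth bound ‖A(t)‖ ≤ M_0 t^a for t ≥ 1):

```latex
% For t \ge 1, using E[X(t)] = A(t)E[X(1)] and \|A(t)\| \le M_0 t^{a}:
\[
  \bigl\| E[X(t)] \bigr\|
  = \bigl\| A(t)\,E[X(1)] \bigr\|
  \le \|A(t)\| \, \bigl\| E[X(1)] \bigr\|
  \le M_0\, t^{a} \int_{\Omega} \|X(1)\| \, dP
  = M\, t^{a},
\]
% with M = M_0 \int_{\Omega} \|X(1)\|\,dP, matching the constant in the text.
```

The middle step uses the norm inequality for Bochner integrals, ‖E[X(1)]‖ ≤ ∫_Ω ‖X(1)‖ dP.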
{T(t) : t ∈ R} is an additive C_0-group of operators. Setting t_0 = log s_0, we wish to show that for each x ∈ Ᏹ, the function T(·)x is differentiable at t for any t > t_0. We begin by showing this fact for x of the form x = E[X(u)], u ≥ 1. Indeed, in such a case, the following limit exists for any s > s_0:

Taking s = e^t and making the substitution h = log v, we obtain that the following limit exists, for any u ≥ 1 and hence for any u > s_0:

(2.34)

By linearity, this implies that for any t > t_0 and any x ∈ Span({E[X(u)] : u > s_0}), the function T(·)x is differentiable at t. To prove this for an arbitrary x ∈ Ᏹ, observe the inequality

where δ > 0 is chosen such that (e^{−δ}, e^{δ}) ⊆ (1 − δ, 1 + δ). For an arbitrary sequence {h_n}_n in R such that h_n → 0, we wish to show that the sequence {(T(t + h_n) − T(t))x/h_n}_n is Cauchy. To that aim, choose ε > 0 and y ∈ Span({E[X(u)] : u > s_0}) such that ‖x − y‖ ≤ ε/(4M‖T(t)‖). Since the sequence {(T(t + h_n) − T(t))y/h_n}_n is Cauchy (because it is convergent), we can choose n_0 ∈ N such that

for all m, n ≥ n_0. In that case we can write

This shows that the derivative of the function T(·)x exists at any t > t_0 for any choice of x ∈ Ᏹ. According to [14, Chapter 2, Lemma 4.2], this implies that, for t > t_0 sufficiently large,

Then HT(t)T(−t) = H is also bounded. Hence we have that T(t) = e^{tH}, for all t ∈ R, that is, A(s) = s^H, for all s > 0. To show that H satisfies (2.30), observe that for each fixed s > s_0, one can write

(2.39)

OSS processes with scaling families of invertible operators
In this section we assume that for the OSS process {X(t) : t > 0} there exists a scaling family of operators {A(s) : s > 0} consisting of invertible operators on Ᏹ. For such processes we consider the classes of operators in the following definition.
Remark 3.2. By our assumptions, G_t ≠ ∅ for any t > 0.
The following theorem (parts of which appear in [4] or [17] for the case of operators on R^n) has a straightforward proof. We include it in order to make this paper self-contained.

Theorem 3.1. G is a group and G_1 is a normal subgroup of G, closed relative to G. For each t > 0, the class G_t is an equivalence class modulo G_1, that is, G_t ∈ G/G_1, for any t > 0, and the map ϕ(t) = G_t, t > 0, is a group homomorphism of (0,∞) onto G/G_1. Let p denote the canonical projection of G onto G/G_1. The process {X(t) : t > 0} has a scaling family of operators which is a multiplicative group of operators if and only if the homomorphism ϕ lifts to a homomorphism ψ of (0,∞) into G, that is, if and only if there exists a group homomorphism ψ of the multiplicative group (0,∞) into G such that p ∘ ψ = ϕ.
Proof. Indeed, for u, s > 0 consider A ∈ G_s and B ∈ G_u. We can write

that is, AB ∈ G_{su}, which proves that G is a subgroup of the group of all invertible operators on Ᏹ and that ϕ is an onto group homomorphism, provided that we show that the sets G_s, s > 0, are equivalence classes modulo G_1. We will prove the latter below. In a similar way, one shows that G_1 is a subgroup of G. Let us check that G_1 is normal. Indeed, choose an arbitrary A in G. Then A ∈ G_s for some s > 0. For any B ∈ G_1 we can write

Conversely now, if T ∈ G_s, then set B = A^{-1}T. According to what we proved above, B ∈ G_1 and AB = T ∈ AG_1, proving that AG_1 = G_s. A similar argument leads to G_1A = G_s, ending the proof of the normality of G_1 and of the fact that the sets G_s are equivalence classes modulo G_1. Finally, the fact that G_1 is closed relative to G (not only norm-closed but closed even with respect to the strong operator topology) is a direct consequence of [6, Proposition 1.7.2]. Obviously, the process {X(t) : t > 0} has a scaling family of operators which is a multiplicative group of operators if and only if the homomorphism ϕ lifts to a homomorphism ψ of (0,∞) into G.
In several of the previous papers on OSS processes [4, 9, 17], it is proved that such processes satisfying good continuity assumptions have an exponent, that is, have a scaling family of operators of the form {s^H : s > 0}, where H ∈ ᏸ(Ᏹ) is called an exponent of the process. Relative to that, we prove the following.
Proof. To prove the sufficiency, observe first that G_1 is a subgroup of Ꮽ closed relative to Ꮽ, that for each s > 0, G_s is an equivalence class of Ꮽ modulo G_1, and that ϕ(s) = G_s is a group homomorphism of (0,∞) onto Ꮽ/G_1. This homomorphism is continuous if the quotient topology is considered on Ꮽ/G_1. Since this is a homomorphism of topological groups, only continuity at 1 needs to be checked. To that aim, let ᏺ denote a neighborhood of the identity in Ꮽ/G_1. In that case there exists S ⊆ (0,∞) such that 1 ∈ S, ᏺ = {G_s : s ∈ S}, and N = ∪_{s∈S} G_s is a neighborhood of I. Therefore there is an ε > 0 such that, if A ∈ Ꮽ and ‖A − I‖ < ε, then A ∈ N. Associate to this ε a δ satisfying condition (3.5). For arbitrary, fixed t ∈ (1 − δ, 1 + δ), consider an operator A as in condition (3.5). There must exist s ∈ S such that A ∈ G_s and hence G_t = G_s, so G_t ∈ ᏺ; in other words, ϕ((1 − δ, 1 + δ)) ⊆ ᏺ, that is, ϕ is continuous. By a theorem of Moskowitz (see [1] or [13]), ϕ lifts to a continuous group homomorphism ψ : (0,∞) → Ꮽ. The family {A(s) := ψ(s) : s > 0} is both a scaling family for the process under consideration and a norm-continuous multiplicative group of operators. Therefore it must be of the form {A(s) = s^H : s > 0} for some H ∈ ᏸ(Ᏹ), which ends the proof of the sufficiency. The necessity is immediate: indeed, if an exponent H exists, then set Ꮽ = {s^H : s > 0}.
Exponents and, more generally, scaling families of operators for OSS processes need not be uniquely determined. See [4, 9, 17] or the following remark. For A ∈ G, let ρ_A denote the conjugation map

ρ_A(T) = ATA^{-1}, T ∈ ᏸ(Ᏹ). (3.6)

The image under ρ_A of any scaling family of operators of the process {X(t) : t > 0} is also a scaling family of operators for the same process.
Proof. Assume A ∈ G_s for some s > 0, and let {A(t) : t > 0} be a scaling family of operators for the given process. By the proof of Theorem 3.1, A^{-1} ∈ G_{s^{-1}} and

Next we will extend to arbitrary Hilbert spaces the early result by Laha and Rohatgi saying that proper R^n-valued OSS processes with a scaling family of positive operators have exponents. Let Ᏹ be a Hilbert space. We need to introduce some terminology first.
Definition 3.4. Recall that a probability measure μ on the σ-algebra of the Borel subsets of Ᏹ, a separable Banach space, is called a full measure if the support of μ is not contained in a hyperplane.
Following [9], we introduce the notion of proper stochastic process.

Lemma 3.3. Let A ≠ I be a positive operator on a separable Hilbert space. Then there is a nonzero vector x_0 such that ‖A^n x_0‖ → 0 as n → ∞, either for A itself or, when A is invertible, for A^{-1} in place of A.

Proof. Assume first that A is invertible and A ≠ I. By the spectral theorem for positive operators [15, Theorem 1.6], it will be enough to prove the existence of x_0 for a multiplication operator M_φ, acting on L²_K(dμ), where K is a compact subset of (0,∞), μ a finite Borel measure on K, and φ an essentially bounded nonnegative function on K with essentially bounded inverse φ^{-1}. Indeed, each positive invertible operator A is unitarily equivalent to such a multiplication operator M_φ by [15, Theorem 1.6]. If the set E = {x ∈ K : φ(x) < 1} has positive measure μ, then its characteristic function χ_E is a nonzero element of L²_K(dμ). By Lebesgue's dominated convergence theorem, one immediately proves ‖M_φ^n χ_E‖_2 → 0. If μ(E) = 0, then suppose μ(F) > 0, where F = {x ∈ K : φ^{-1}(x) < 1}. Again by Lebesgue's theorem, one gets ‖M_φ^{-n} χ_F‖_2 → 0. One of the sets E and F must have positive measure μ, because if we suppose that both have measure 0, then φ = 1 μ-a.e., that is, M_φ = I and hence A = I, contrary to our assumptions. Thus we established the existence of a nonzero x_0 with the required properties. To show the existence of x_0 when A is noninvertible, recall that K above is the spectrum of A and in this last case K contains 0. It is known that K coincides with the essential range of φ [3]. Let us choose ε, 0 < ε < 1. Since 0 is in the essential range of φ, one obtains that the set E = φ^{-1}([0, ε)) has positive measure μ. Therefore χ_E is a nonzero element of L²_K(dμ) and, since φ^n(x) → 0 for all x ∈ E, Lebesgue's theorem can be applied again in order to get ‖M_φ^n χ_E‖_2 → 0.

Lemma 3.4. Let Ᏹ be a separable Hilbert space and X(1) : Ω → Ᏹ a random variable such that PX(1)^{-1} is a full probability measure. The only positive operator A on Ᏹ with the property P(AX(1))^{-1} = P(X(1))^{-1} is the identity.
Proof. If A is invertible, then obviously P(A^{-1}X(1))^{-1} = P(X(1))^{-1}. Therefore we will not reduce generality by assuming that ‖A^n x_0‖ → 0 for some nonzero x_0. Denote by ⟨·, ·⟩ the inner product of Ᏹ. The mapping ⟨·, x_0⟩ is obviously a continuous mapping on Ᏹ. Therefore one can write

P⟨X(1), x_0⟩^{-1} = P⟨A^n X(1), x_0⟩^{-1} = P⟨X(1), A^n x_0⟩^{-1}, for all n ≥ 0.

Since ‖A^n x_0‖ → 0, the random variables ⟨X(1), A^n x_0⟩ converge to 0 in probability, so P⟨X(1), x_0⟩^{-1} = δ_0; that is, X(1) is supported almost surely on the hyperplane {x ∈ Ᏹ : ⟨x, x_0⟩ = 0}, which is a contradiction because this means that PX(1)^{-1} is not full.
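The multiplication-operator argument above (iterates of M_φ annihilating χ_E on the set where φ < 1) admits a finite, discrete toy analogue. The sketch below is our own illustration, not the paper's setting: M_φ acts by pointwise multiplication on L² of a finite measure space, φ < 1 on a set E of positive measure, and ‖M_φ^n χ_E‖_2 → 0.

```python
# Finite discrete analogue (our own toy illustration) of the lemma's
# mechanism: a multiplication operator M_phi on L^2 of a 4-point measure
# space, with phi < 1 on E = {first two points}.

import math

phi = [0.4, 0.9, 1.0, 1.7]   # nonnegative multiplier; phi < 1 exactly on E
mu = [1.0, 1.0, 1.0, 1.0]    # point masses of the finite measure

def l2_norm(f):
    """L^2 norm with respect to the point masses mu."""
    return math.sqrt(sum(abs(v) ** 2 * m for v, m in zip(f, mu)))

def apply_M(f, n=1):
    """Apply the multiplication operator M_phi n times: f -> phi^n * f."""
    return [(p ** n) * v for p, v in zip(phi, f)]

chi_E = [1.0, 1.0, 0.0, 0.0]  # indicator of E = {x : phi(x) < 1}

# The iterates kill chi_E in L^2-norm, since phi^n -> 0 pointwise on E:
print(l2_norm(apply_M(chi_E, 200)) < 1e-8)  # True
```

The decay is exactly the dominated-convergence step in the proof: on E the integrand φ^{2n}χ_E is dominated by χ_E and tends pointwise to 0.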