Probabilistic Basin of Attraction and Its Estimation Using Two Lyapunov Functions

We study stability for dynamical systems specified by autonomous stochastic differential equations of the form dX(t) = f(X(t))dt + g(X(t))dW(t), with (X(t))_{t≥0} an R^n-valued Itô process and (W(t))_{t≥0} an R^m-valued Wiener process, where the functions f : R^n → R^n and g : R^n → R^{n×m} are Lipschitz and vanish at the origin, making it an equilibrium for the system. The concept of asymptotic stability in probability of the null solution is well known and implies that solutions started arbitrarily close to the origin remain close to it and converge to it. The concept therefore pertains exclusively to system properties local to the origin. We wish to address the matter in a more practical manner: allowing for a (small) probability that solutions escape from the origin, how far away can they be started? To this end we define a probabilistic version of the basin of attraction, the γ-BOA, with the property that any solution started within it stays close to the origin and converges to it with probability at least γ. We then develop a method that uses a local Lyapunov function and a nonlocal one to obtain rigorous lower bounds on the γ-BOA.


Introduction
Lyapunov theory of stability [1] for deterministic systems has been very successful as a tool to study qualitative system behavior. For mathematically oriented studies of the Lyapunov theory, compare, for example, [2][3][4] or [5][6][7] for a more modern treatment including complete Lyapunov functions. For more application-oriented surveys, compare, for example, [8][9][10]. The centerpiece of the theory is the Lyapunov function, a real-valued function on the state-space which is nonincreasing along solution trajectories. A deterministic Lyapunov function gives rigorous estimates on the basin of attraction (BOA) of an equilibrium through its sublevel sets. The equilibrium under consideration can be assumed to be the origin without loss of generality, and one then often speaks of the stability of the null solution. The usefulness of Lyapunov functions has sparked the development of methods to find them. Breaking the system domain up and addressing each part separately can sometimes work in the deterministic case.
With 0 ∈ B ⊂ N ⊂ A ⊂ R^n, one starts by generating a local Lyapunov function v on the small neighbourhood N of the origin and then finds a nonlocal Lyapunov function V : A∖B → R₊, using entirely different methods suited to the dynamics away from the equilibrium.
In this paper (see Section 2), we develop the theory necessary to enable such a splitting for a stochastic system. Advanced numerical methods taking advantage of the splitting are in the final stages of development by the authors and their collaborators; see [11] for first results on the local part at the equilibrium. An example using an analytical solution at the origin and the shooting method further away is presented in Section 3 of this paper.
To set the stage, we start by discussing the local and nonlocal Lyapunov functions in the deterministic setting. For the definitions of the notations used, see Section 1.1. Consider a deterministic system given by an autonomous ordinary differential equation (ODE) ẋ = f(x), where f : R^n → R^n is sufficiently smooth and such that f(0) = 0. To characterize asymptotic stability of the null solution it is convenient to resort to the function classes K_∞ and KL (defined in Section 1.1). The null solution is called (locally) asymptotically stable (AS) if there exists a KL function β and a neighbourhood A of the origin such that for every x ∈ A and t ≥ 0 we have ‖φ(t, x)‖ ≤ β(‖x‖, t), where t ↦ φ(t, x) is the solution to the ODE started at x at t = 0. In particular, A is a subset of the equilibrium's BOA, defined as {x ∈ R^n : lim sup_{t→∞} ‖φ(t, x)‖ = 0}. If one can even take A = R^n, the origin is called globally asymptotically stable (GAS). Obviously (local) AS is too weak a condition, and GAS is an unnecessarily strict condition, for most applications. One is interested in AS where the set A is of reasonable size for the problem at hand. A rigorous lower bound on the BOA can be given if a Lyapunov function V for the system is known, that is, a continuously differentiable function V : U → R₊ defined on an open neighbourhood U ⊂ R^n of the origin and such that α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖) and ∇V(x)⋅f(x) ≤ −α₃(‖x‖) for some α₁, α₂, α₃ ∈ K_∞ and all x ∈ U. For a Lyapunov function V, all sublevel sets V⁻¹([0, c]), c > 0, that are compact subsets of U are subsets of the BOA.
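The sublevel-set criterion above is easy to check numerically in simple cases. The following sketch does so for a hypothetical 1-dimensional system, ẋ = −x + x³ (illustration only, not taken from this paper), whose BOA for the origin is (−1, 1), with the candidate Lyapunov function V(x) = x²:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 1-D system (illustration only): x' = -x + x**3.
# The origin is AS with BOA (-1, 1); candidate Lyapunov function V(x) = x**2.
f = lambda x: -x + x**3
V = lambda x: x**2
Vdot = lambda x: 2 * x * f(x)        # orbital derivative: V'(x) * f(x)

# Check the conditions on the compact sublevel set V^{-1}([0, 0.81]) = [-0.9, 0.9].
xs = np.linspace(-0.9, 0.9, 1801)
xs = xs[np.abs(xs) > 1e-6]           # exclude the equilibrium itself
assert np.all(V(xs) <= 0.9**2)       # the grid stays in the sublevel set
assert np.all(Vdot(xs) < 0)          # V strictly decreases along trajectories

# Hence trajectories started in the sublevel set converge to the origin.
sol = solve_ivp(lambda t, x: f(x), (0.0, 20.0), [0.9])
print(abs(sol.y[0, -1]))             # effectively 0
```

Since the sublevel set [−0.9, 0.9] is a compact subset of U = (−1, 1) and the orbital derivative is negative on it away from the origin, it is contained in the BOA, consistent with the simulated trajectory.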
For the general deterministic setting, constructing a Lyapunov function V on a reasonably sized set U is usually a very difficult problem. Often, however, the construction of a Lyapunov function on a small neighbourhood N of the origin is simple. Indeed, if the spectrum of the Jacobian A := Df(0) of f at the origin does not contain any purely imaginary points, then the origin is AS iff the equation A^⊤P + PA = −Q has a positive definite solution P ∈ R^{n×n} for a positive definite Q ∈ R^{n×n}. Then v(x) = x^⊤Px is a Lyapunov function for the ODE ẋ = Ax on R^n, and on some neighbourhood N of the origin it is a Lyapunov function for the nonlinear system ẋ = f(x). The size of N, however, depends strongly on the nonlinearities of f; compare, for example, [12, p. 659] for an explicit estimate. By using this or a different method to compute a local Lyapunov function for the system ẋ = f(x), one can compute a set B ⊂ N that is a subset of the equilibrium's BOA. To assert that a larger set A ⊃ B is in the BOA, it now suffices to show that ‖φ(t, x)‖ ≤ β(‖x‖, t) whenever x, φ(t, x) ∈ A ∖ B, because the local Lyapunov function v and the semigroup property φ(t, φ(s, x)) = φ(t + s, x) of the flow deliver the missing part. For many methods to numerically compute Lyapunov functions for the system ẋ = f(x) on a reasonably sized U ⊂ R^n it is either advantageous or even necessary to use the fact that a (small) set B is already known to be in the BOA; compare, for example, [12][13][14][15][16][17][18] or [19, Section 2.11] and the references therein for an overview. Now, consider a stochastic differential equation (SDE) of the form dX(t) = f(X(t))dt + g(X(t))dW(t), f : R^n → R^n and g : R^n → R^{n×m}, with an equilibrium at the origin; that is, f(0) = 0 and g(0) = 0. As in the deterministic case, a Lyapunov function is a function V defined on some neighbourhood U ⊂ R^n of the origin, such that α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖) for some α₁, α₂ ∈ K_∞. The condition for deterministic systems that V is decreasing along solution
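The linearization recipe above can be carried out with standard linear-algebra tooling. A minimal sketch, for a hypothetical Jacobian A (illustration only, not from this paper): solve A^⊤P + PA = −Q with SciPy and verify that P is symmetric positive definite, so that v(x) = x^⊤Px is a local Lyapunov function.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Jacobian of f at the origin (illustration only) and a choice of Q.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a.conj().T = q,
# so passing (A.T, -Q) solves A.T @ P + P @ A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# P symmetric positive definite <=> the origin of x' = A x is AS.
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
print(P)   # diag(0.5, 0.25) for this A and Q
```

The neighbourhood N on which v remains a Lyapunov function for the full nonlinear system must then be estimated separately, as noted above.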
trajectories, that is, ∇V(x)⋅f(x) ≤ −α(‖x‖) for an α ∈ K_∞, is replaced by the condition that the expectation of the process t ↦ V(X^y(t)) is a decreasing function of time. Here t ↦ X^y(t) is the stochastic process started at y at time t = 0. By Itô's formula,

V(X^y(t)) = V(y) + ∫₀ᵗ LV(X^y(s)) ds + ∫₀ᵗ ∇V(X^y(s))⋅g(X^y(s)) dW(s),

with

LV(x) = ∇V(x)⋅f(x) + (1/2) Σ_{i,j=1}^n [g(x)g(x)^⊤]_{ij} ∂²V(x)/∂x_i∂x_j,

where [g(x)g(x)^⊤]_{ij} denotes the (i, j)-th element of the matrix g(x)g(x)^⊤. The expectation of the second integral in the Itô formula is zero because the process s ↦ ∇V(X^y(s))⋅g(X^y(s)) is adapted, and therefore the decrease of the expectation, that is, E{V(X^y(t))} < V(y), is implied by LV(x) ≤ −α(‖x‖). In order to find such a Lyapunov function for a stochastic system, one could proceed as in the deterministic case and solve the partial differential equation (PDE) LV(x) = −α(‖x‖) for an α ∈ K_∞. However, in the deterministic case, the solution V to the PDE ∇V(x)⋅f(x) = −α(‖x‖) will automatically have a local minimum at the AS equilibrium, but this is not the case for a solution to LV(x) = −α(‖x‖). The PDE is markedly different, and the condition LV(x) ≤ −α(‖x‖) alone is no longer sufficient to ensure that the Lyapunov function V has a local minimum at the equilibrium at the origin. It is therefore a natural first approach to try to solve the PDE LV(x) = −α(‖x‖) imposing the boundary conditions V(0) = 0 and V(x) = c₀ > 0 on the boundary ∂U. The problem with this approach is that the PDE cannot be strictly elliptic at the equilibrium at the origin, and the standard theory for existence and uniqueness of solutions to PDEs does not apply. Cutting out some small neighbourhood B containing the origin from the domain A and solving the PDE on A∖B instead can ensure strict ellipticity and hence existence and uniqueness of the solution; compare, for example, [20, Theorem 6.14]. Better yet, solving LV(x) = 0, so that solutions satisfy both a minimum and a maximum principle (of elliptic PDEs), guarantees that the minima and the maxima of V are attained on the boundary, and with V = 0 on ∂B and V = 1 on ∂A we automatically have 0 ≤ V(x) ≤ 1 on A∖B. Even if the PDE is not
strictly elliptic, as will happen if m < n, the condition LV(x) < 0 instead of LV(x) = 0 means that one has more room to actually compute numerically a true Lyapunov function, rather than a numerical approximation of a function that we know is a Lyapunov function, as done in [21], where solutions to the PDE are interpreted in the viscosity sense. This will be discussed briefly in Section 4.
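The generator L used throughout is mechanical to compute symbolically. A sketch with hypothetical data (drift f, diffusion g with n = 2 state dimensions and m = 1 noise dimension, and candidate V, all chosen for illustration only):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

# Hypothetical drift, diffusion, and candidate function (illustration only).
f = sp.Matrix([-x1 + x2**2, -2*x2])
g = sp.Matrix([[x1 / 2], [x2]])      # 2x1 matrix: one noise dimension
V = x1**2 + x2**2

# L V(x) = grad V(x) . f(x) + (1/2) sum_ij [g g^T]_ij d^2 V / dx_i dx_j
grad = sp.Matrix([V.diff(v) for v in X])
gg = g * g.T
LV = (grad.T * f)[0, 0] + sp.Rational(1, 2) * sum(
    gg[i, j] * V.diff(X[i], X[j]) for i in range(2) for j in range(2))

print(sp.expand(LV))   # = -(7/4)*x1**2 + 2*x1*x2**2 - 3*x2**2
```

Near the origin the quadratic terms dominate, so here LV < 0 on a punctured neighbourhood, which is exactly the local condition discussed above.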
Despite these advantages, the cut-out of course removes the equilibrium itself from the domain of the Lyapunov function, and one must take care that the null solution is actually stable. In Section 2, we describe the details of how this can be achieved by using a different (local) Lyapunov function on a superset of the cut-out set B. In Section 3, we show an example of our approach, and in Section 4, we draw a roadmap of how the results in Section 2 can be used to develop a general method to give rigorous estimates of γ-BOAs of reasonable size for practical applications.
1.1. Notation. We denote a (column) vector x = (x₁, x₂, . . ., x_n)^⊤ in R^n in boldface and its transpose by x^⊤. We also write the matrix-valued function g : R^n → R^{n×m} in boldface. We denote the Euclidean norm of a vector x by ‖x‖, and for a matrix ‖⋅‖ denotes the induced matrix norm. We denote the scalar product of two vectors x, y ∈ R^n by x⋅y. We write sets A ⊂ R^n in calligraphic and denote by A° the interior of A, by Ā its closure, by ∂A its boundary, and by A^c its complement. We define R₊ := [0, ∞) and denote by N the set of the natural numbers larger than zero.
A function α : R₊ → R₊ is said to be of class K_∞ if it is continuous, monotonically increasing, α(0) = 0, and lim_{r→∞} α(r) = ∞. A function α : R₊ → R₊ is said to be of class L if it is continuous, monotonically decreasing, and lim_{r→∞} α(r) = 0. A function β : R₊ × R₊ → R₊ is said to be of class KL if r ↦ β(r, t) is of class K_∞ for every fixed t and t ↦ β(r, t) is of class L for every fixed r. We write P and E for probability and expectation, respectively. The underlying probability spaces should always be clear from the context. Conditional probabilities and expectations are denoted by P(⋅ | ⋅) and E(⋅ | ⋅), respectively. We denote the characteristic function of a set A by 1_A. The abbreviation a.s. stands for almost surely, that is, with probability one.

Stochastic Differential Equations and Stability
Stochastic differential equations (SDEs) often come up when modeling a physical system described by an ODE and perturbing this system with noise, corresponding to uncertainty, measurement error, or unknown complications in the system. The stochastic integral as presented by Itô is in some sense the natural way to approach the situation when the underlying deterministic system responds causally to the noise. This is the case, for example, when describing financial markets, where it is instrumental that the noise has no autocorrelation. In cases when the noise is an effective model of some unknown and/or complicated dynamic subsystem, it can be more natural to represent this via the so-called Stratonovich stochastic integral. In this paper, we restrict our attention to the Itô stochastic integral, since a Stratonovich approach is easily represented by a slightly modified Itô system. We study the stability of the origin of the SDE of Itô type

dX(t) = f(X(t))dt + g(X(t))dW(t),   (3)

where f : R^n → R^n and g : R^n → R^{n×m} are (globally) Lipschitz continuous functions; that is, there exists a C > 0 such that

‖f(x) − f(y)‖ + ‖g(x) − g(y)‖ ≤ C‖x − y‖ for all x, y ∈ R^n,   (4)

and such that f(0) = 0 and g(0) = 0. We consider strong solutions to (3); that is, given a filtered probability space (Ω, F, (F_t)_{t≥0}, P) satisfying the usual conditions (F complete, (F_t)_{t≥0} right continuous, F₀ containing all F-null sets), an R^m-valued Brownian motion W defined on [t₀, ∞), and an F_{t₀}-measurable initial value Z, a solution is a continuous (F_t)_{t≥0}-adapted process X such that:

(i) X(t₀) = Z a.s.

(ii) The processes t ↦ f(X(t)) and t ↦ g(X(t)) belong a.s. to L¹([t₀, T]; R^n) and L²([t₀, T]; R^{n×m}), respectively, for every T > t₀.

(iii) For every t ≥ t₀, the equation

X(t) = X(t₀) + ∫_{t₀}^t f(X(s)) ds + ∫_{t₀}^t g(X(s)) dW(s)

holds a.s. The second integral is interpreted in the Itô sense.
Uniqueness. Any solution Y to (3) is indistinguishable from X; that is, for every t ≥ 0, we have X(t) = Y(t) a.s. For deterministic initial values, that is, X(t₀) = Z = x ∈ R^n a.s., we write X^x for the solution.
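Strong solutions of (3) are straightforward to approximate pathwise with the Euler-Maruyama scheme. A minimal sketch for a hypothetical scalar SDE dX = −X dt + 0.5X dW (illustration only; note that f(0) = g(0) = 0 as required):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical coefficients (illustration only), vanishing at the origin.
f = lambda x: -x
g = lambda x: 0.5 * x

def euler_maruyama(x0, T=5.0, n=5000):
    """One approximate sample path of dX = f(X)dt + g(X)dW started at x0."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()   # Wiener increment
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dW
    return x

path = euler_maruyama(1.0)
print(path[-1])   # near 0: this drift is strongly stabilizing
```

For globally Lipschitz coefficients as in (4), the scheme converges strongly to the solution of (3) as the step-size goes to zero.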
The following theorem provides the standard solution theory, the backdrop to our study of stability later.
Theorem 1. The SDE (3) with initial distribution Z has a unique solution X^Z. Further, t ↦ X^Z(t) is a strong Markov process; that is, for every bounded Borel-measurable function h : R^n → R, any F_t-stopping-time τ < ∞ a.s., and any s ≥ 0, we have

E{h(X^Z(τ + s)) | F_τ} = E{h(X^Z(τ + s)) | X^Z(τ)} a.s.

For a proof of the last theorem, compare, for example, [22, Section 2.3 and Theorem 2.9.3]. Note that since we have f(0) = 0 and g(0) = 0, the Lipschitz condition implies the so-called linear growth condition.
A plethora of concepts are in use concerning the stability of SDEs. Here we will be concerned with the so-called asymptotic stability in probability of the zero solution [24, (5.15)], also referred to as stochastic asymptotic stability [22, Definition 4.2.1]. For a more detailed discussion of the stability of SDEs, see the books by Khasminskii [24] or Mao [22]. For completeness, we recall the definitions of stability in probability and asymptotic stability in probability.
Definition 2 (stability in probability). The null solution X ≡ 0 to the SDE (3) is said to be stable in probability (SiP) if for every ε > 0 and every 0 < p < 1 there exists a δ > 0 such that ‖x‖ ≤ δ implies P{sup_{t≥0} ‖X^x(t)‖ > ε} ≤ p.

Definition 3 (asymptotic stability in probability). The null solution X ≡ 0 to the SDE (3) is said to be asymptotically stable in probability (ASiP) if it is SiP and in addition for every 0 < p < 1 there exists a δ > 0 such that ‖x‖ ≤ δ implies P{lim_{t→∞} X^x(t) = 0} ≥ 1 − p.

Our definitions of SiP and ASiP are equivalent to the more common formulations in the literature [22, 24]. The reason for our formulation is that we want to look at a more practical concept related to such stability, namely, a stochastic analog of the BOA in the stability theory for deterministic systems. We therefore back away from the limit x → 0 and consider the following instead: given some confidence 0 < γ ≤ 1, how far from the origin can the sample paths start and still approach the equilibrium as t → ∞ with probability greater than or equal to γ? This is the motivation for the next definition.
Definition 4 (γ-basin of attraction (γ-BOA)). Consider system (3) and let 0 < γ ≤ 1. One refers to the set

{x ∈ R^n : P{lim_{t→∞} X^x(t) = 0} ≥ γ}

as the γ-basin of attraction or, for short, γ-BOA.
Note that a 1-BOA corresponds to the usual BOA for deterministic systems.
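Definition 4 also suggests a direct, nonrigorous check that complements the Lyapunov estimates developed below: fix a start point and estimate the convergence probability by Monte Carlo. A sketch for a hypothetical scalar SDE dX = −X dt + 0.5X dW (illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coefficients (illustration only), vanishing at the origin.
f = lambda x: -x
g = lambda x: 0.5 * x

def converges(x0, T=10.0, n=10_000, tol=1e-3):
    """Heuristic check of lim X(t) = 0 along one Euler-Maruyama path."""
    dt = T / n
    x = x0
    for _ in range(n):
        x += f(x) * dt + g(x) * np.sqrt(dt) * rng.standard_normal()
    return abs(x) < tol

# Empirical estimate of P{lim X^x(t) = 0} for x = 2: if it is at least
# gamma, this start point plausibly lies in the gamma-BOA.
trials = 200
p_hat = sum(converges(2.0) for _ in range(trials)) / trials
print(p_hat)
```

Unlike the two-Lyapunov-function method of Section 2, such an estimate carries no guarantee; it is useful mainly as a cross-check.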
For the SDE (3), the associated generator L is given by

LV(x) = ∇V(x)⋅f(x) + (1/2) Σ_{i,j=1}^n [g(x)g(x)^⊤]_{ij} ∂²V(x)/∂x_i∂x_j

for an appropriately differentiable V : U → R with U ⊂ R^n. Notice that this is just the drift term in the expression for the stochastic differential of the process t ↦ V(X(t)); compare (1). As discussed in the Introduction, the generator for a stochastic system corresponds to the orbital derivative for a deterministic system.
Remark 6. It is of vital importance that V is not required to be differentiable at the equilibrium, because otherwise a large number of systems with an ASiP null solution would not possess a Lyapunov function; compare [24, Remark 5.5].
Remark 8. If the condition LV(x) < 0 is violated, even at only one point x ∈ N∖{0}, one cannot conclude that t ↦ V(X^x(t ∧ τ)) is a supermartingale; compare [24, p. 149]. The reason why LV(x) ≤ 0 does not have to hold true at the equilibrium at the origin is that the origin is a so-called inaccessible point of the SDE; compare again [24, p. 149], meaning that

P{∃t > 0 : X^x(t) = 0} = 0 for every x ≠ 0.   (15)

Instead of trying to compute a local Lyapunov function for system (3) on a large domain N, we suggest computing it on a small N, for example by using linearization, and then computing a nonlocal Lyapunov function, defined below, to obtain a reasonably sized rigorous estimate of the equilibrium's γ-BOA. The advantages of this procedure were explained in the Introduction and are addressed again in Section 4, where we give a roadmap of our approach.
Proof. We first assume that V fulfills (2a) in Definition 9 and set h := −max_{x∈U} LV(x) > 0. Since the solution process remains in U for all t ∈ [0, t ∧ τ], we get by [24, Lemma 3.2] and the nonnegativity of V that

0 ≤ E{V(X^x(t ∧ τ))} = V(x) + E{∫₀^{t∧τ} LV(X^x(s)) ds} ≤ V(x) − h E{t ∧ τ}.

Now E{t ∧ τ} ≤ V(x)/h, and taking the limit t → ∞ delivers E{τ} ≤ V(x)/h < ∞ by monotone convergence; hence P{τ = ∞} = 0 and then P{τ < ∞} = 1.
If the condition (2b) in Definition 9 is fulfilled (instead of condition (2a)), then, by [20, Theorem 6.14], it follows that the boundary value problem

Lw(x) = −1 for x ∈ U°,  w(x) = 0 for x ∈ ∂U,

has a solution w ∈ C²(U). Just as above, with w substituted for V, it follows that P{τ < ∞} = 1.
For the second proposition of the lemma, define the stopping-times τ_A := inf{t > 0 : X^x(t) ∈ ∂A} and τ_B := inf{t > 0 : X^x(t) ∈ ∂B} and observe that τ = τ_A ∧ τ_B a.s. because t ↦ X^x(t) is continuous a.s. Now, by [24, Lemma 3.2] and LV(x) ≤ 0 on U,

E{V(X^x(t ∧ τ))} ≤ V(x) for all t ≥ 0,

and then, since V = 1 on ∂A and V = 0 on ∂B,

P{τ_A < τ_B} ≤ E{V(X^x(τ))} ≤ V(x);

that is, we have P{X^x(τ) ∈ ∂A} ≤ V(x), because P{τ < ∞} = 1, which completes the proof.
We now show how to combine a local Lyapunov function and a nonlocal one to get a rigorous lower bound on the γ-BOA. For a schematic representation of the interrelation between the (sub)level sets in Theorem 11, see Figure 1.
Theorem 11. Consider system (3) and assume there exist a local Lyapunov function v : N → R₊ as in Definition 5 and a nonlocal Lyapunov function V : U → R₊ as in Definition 9 for the system. Assume further that the constants δ > 0 and 0 < p < 1 from Theorem 7 and the constants 0 < q₁ < q₂ < 1 and the set B from Lemma 10 are such that the corresponding sets are nested as depicted in Figure 1. Then the set V⁻¹([0, q₁]) ∪ B is a subset of the γ-BOA of the origin, with γ determined by p, q₁, and q₂.

Figure 1: Outward from the equilibrium (thick dot): the sets B, V⁻¹([0, q₁]), N, and A. The local Lyapunov function v asserts that a solution starting in the innermost area leaves the red area with probability no higher than p, and the nonlocal Lyapunov function V asserts that a solution leaving the red area returns to the innermost area with probability no less than 1 − q₂.

Proof. Let x ∈ B ∪ V⁻¹([0, q₁]) be arbitrary but fixed. We must show that the solution started at x stays close to the origin and converges to it with probability at least γ. The only event not accounted for by the estimates above is that the solution passes between the inner region and the outer region indefinitely without ever converging, and by what we have shown above the probability of this event is no larger than p ⋅ (1 − q₂) ⋅ p ⋅ (1 − q₂) ⋅ ⋅ ⋅ = 0, which completes the proof.
Remark 12. Note that in the proof of Theorem 11 the conditions on f and g in (3) need only hold within A, because we stop our solutions at the boundary ∂A, and therefore f and g can be altered outside of A without changing the conclusions of the theorem. It thus suffices that f and g fulfill the Lipschitz condition (4) on A, which is, for example, always the case if f and g are locally Lipschitz.

Worked Out Example
We consider an example to show how our approach works. Because we do not want to consider appropriate methods to solve the relevant PDEs in this paper, we consider a 1-dimensional, but highly nonlinear, example. The system is given by the SDE (43), in which we fix the three parameters to the values 1, 3, and 1/2. This corresponds to the deterministic system ẋ = sin x with an unstable null solution.
The noise, however, stabilizes the null solution, which can be seen from Lv(x) < 0 for all x ≠ 0 in a neighbourhood of the origin. To enlarge our estimate we solve the PDE LV(x) = 0 (note that g(x)² > 0 for x ≠ 0) with the boundary conditions V(10⁻⁴) = 0 and V(kπ/2) = 1 with k = 5; compare Lemma 10 and Theorem 11. Because the system is 1-dimensional, the PDE is also an ODE and can be conveniently solved with the shooting method. With the initial conditions V(10⁻⁴) = 0 and V′(10⁻⁴) = 0.00157296, we get V(2.5π) = 0.99998 using a very conservative step-size. For comparison, we get the rigorous bound δ_{1−0.1} = 0.1² ⋅ 2^{−1/2} = 0.00707, that is, [−0.00707, 0.00707], on the 0.9-BOA if we only use the local Lyapunov function v. Our combined method thus delivers a considerably larger set that is affirmed to be contained in the equilibrium's 0.9-BOA, namely [−5.77, 5.77] instead of [−0.00707, 0.00707] from the local Lyapunov function. For different γ the results are comparable. In Figure 2 we show nonlocal Lyapunov functions computed for the system, not only with k = 5 but also for k = 1, 2, 3, 4, to give a better idea of what is happening.
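In one dimension, the boundary value problem LV = fV′ + (1/2)g²V″ = 0 solved above by shooting also admits a closed-form solution by quadrature (the scale-function construction), which makes the result easy to cross-check. The sketch below uses drift f(x) = sin x and diffusion g(x) = 3x on [10⁻⁴, 2.5π] as hypothetical stand-ins for the coefficients of (43):

```python
import numpy as np

# Hypothetical stand-ins for the coefficients of (43): f(x) = sin(x), g(x) = 3x.
f = lambda x: np.sin(x)
g = lambda x: 3.0 * x

b, a = 1e-4, 2.5 * np.pi            # inner and outer boundary of U = [b, a]
xs = np.linspace(b, a, 200_001)
h = xs[1] - xs[0]

# Scale density s(u) = exp(-int_b^u 2 f / g^2), via the trapezoidal rule.
integrand = 2.0 * f(xs) / g(xs) ** 2
I = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * h)))
s = np.exp(-I)

# V(x) = int_b^x s / int_b^a s satisfies f V' + g^2 V''/2 = 0, V(b) = 0, V(a) = 1.
Vraw = np.concatenate(([0.0], np.cumsum((s[1:] + s[:-1]) / 2 * h)))
V = Vraw / Vraw[-1]

print(V[0], V[-1])                  # 0.0 and 1.0 by construction
```

Here V is monotone on [b, a], so a sublevel set V⁻¹([0, c]) is an interval [b, x_c], which, combined with the local analysis, yields the kind of γ-BOA estimate reported above.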

Definition 9 (nonlocal Lyapunov function). Let A, B ⊂ R^n, B ⊂ A°, be simply connected compact neighbourhoods of the origin with C² boundaries and set U := A∖B°. A function V ∈ C²(U) for system (3) such that

(1) 0 ≤ V(x) ≤ 1 for all x ∈ U, V⁻¹(0) = ∂B, and V⁻¹(1) = ∂A, and either

(2a) LV(x) < 0 for all x ∈ U, or

(2b) LV(x) ≤ 0 and g(x)g(x)^⊤ is positive definite for all x ∈ U,

is called a nonlocal Lyapunov function for system (3). We refer to ∂A as the outer boundary of U and to ∂B as the inner boundary of U.

Lemma 10. Assume system (3) has a nonlocal Lyapunov function V ∈ C²(U) as in Definition 9 and let x ∈ U = A∖B°. Let τ := inf{t > 0 : X^x(t) ∉ U} be the exit-time from U. Then P{τ < ∞} = 1 and P{X^x(τ) ∈ ∂A} ≤ V(x).
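The exit-probability bound in Lemma 10 lends itself to a quick Monte Carlo sanity check. The sketch below estimates P{X^x(τ) ∈ ∂A} for a hypothetical 1-dimensional instance (drift sin x, diffusion 3x, inner boundary 0.5, outer boundary 4.0; none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

b, a = 0.5, 4.0   # hypothetical inner/outer boundaries of U = [b, a]

def exits_through_outer(x0, dt=1e-3, n_max=100_000):
    """Euler-Maruyama until the path leaves (b, a); True iff it exits via a."""
    x = x0
    for _ in range(n_max):
        x += np.sin(x) * dt + 3.0 * x * np.sqrt(dt) * rng.standard_normal()
        if x <= b:
            return False
        if x >= a:
            return True
    return True   # conservative: count a non-exiting path as an outer exit

trials = 1000
p_outer = sum(exits_through_outer(1.0) for _ in range(trials)) / trials
print(p_outer)   # Lemma 10 bounds this by V(1.0) for any nonlocal V
```

For a V with LV = 0 the bound is in fact an equality by optional stopping (V(X^x(t ∧ τ)) is then a bounded martingale), so the estimate can be compared directly against a computed V.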