2. Preliminaries
Throughout this paper, unless otherwise specified, let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},\mathbb{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., it is increasing and right-continuous, and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets). Let $W(t)$ be a scalar Brownian motion (Wiener process) defined on this probability space. Let $A^T$ denote the transpose of a matrix $A$; the operator norm of $A$ is denoted by $\|A\|=\sup\{|Ax|:|x|=1\}$, where $|\cdot|$ is the Euclidean norm. Let $r(t)$, $t\ge 0$, be a right-continuous Markov chain on the probability space taking values in a finite state space $\mathbb{S}=\{1,2,\dots,N\}$ with generator $\Gamma=(\gamma_{pq})_{N\times N}$ given by
(1)
$$\mathbb{P}\{r(t+\Delta)=q\mid r(t)=p\}=\begin{cases}\gamma_{pq}\Delta+o(\Delta), & p\neq q,\\ 1+\gamma_{pp}\Delta+o(\Delta), & p=q,\end{cases}$$
where $\Delta>0$. Here, $\gamma_{pq}>0$ is the transition rate from $p$ to $q$ if $p\neq q$, while
(2)
$$\gamma_{pp}=-\sum_{q\neq p}\gamma_{pq}.$$
We assume that the Markov chain $r(\cdot)$ is independent of the Brownian motion $W(\cdot)$. It is well known that almost every sample path of $r(\cdot)$ is a right-continuous step function with a finite number of simple jumps in any finite subinterval of $\mathbb{R}_+:=[0,+\infty)$.
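For intuition, the following minimal Python sketch simulates such a chain directly from its generator via the jump-chain construction: the holding time in state $p$ is exponential with rate $-\gamma_{pp}$, after which the chain jumps to $q\neq p$ with probability $\gamma_{pq}/(-\gamma_{pp})$. The two-mode generator at the end is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def simulate_markov_chain(Gamma, t_end, p0=0, rng=None):
    """Simulate a right-continuous Markov chain r(t) on S = {0, ..., N-1}
    with generator Gamma = (gamma_pq): exponential holding times, then a
    jump distributed proportionally to the off-diagonal rates."""
    rng = np.random.default_rng() if rng is None else rng
    times, states = [0.0], [p0]
    t, p = 0.0, p0
    while t < t_end:
        rate = -Gamma[p, p]                 # total jump rate out of state p
        if rate <= 0:                       # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)    # exponential holding time
        probs = Gamma[p].copy()
        probs[p] = 0.0
        p = int(rng.choice(len(probs), p=probs / rate))
        times.append(t)
        states.append(p)
    return np.array(times), np.array(states)

# Example: a two-mode chain; each row of Gamma sums to zero as in (2).
Gamma = np.array([[-1.0, 1.0],
                  [ 2.0, -2.0]])
times, states = simulate_markov_chain(Gamma, t_end=10.0)
```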
In this paper, we consider the following hybrid BAM neural networks with reaction–diffusion terms:
(3)
$$\begin{aligned}
\frac{\partial\tilde u_i(t,x)}{\partial t}&=\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(\bar D_{ik}(r(t))\frac{\partial\tilde u_i(t,x)}{\partial x_k}\Big)-a_i(r(t))\tilde u_i(t,x)+\sum_{j=1}^{n}c_{ji}(r(t))\tilde f_j(\tilde v_j(t,x))+I_i,\\
\frac{\partial\tilde v_j(t,x)}{\partial t}&=\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(\bar D^*_{jk}(r(t))\frac{\partial\tilde v_j(t,x)}{\partial x_k}\Big)-b_j(r(t))\tilde v_j(t,x)+\sum_{i=1}^{m}e_{ij}(r(t))\tilde g_i(\tilde u_i(t,x))+J_j,
\end{aligned}$$
where $i=1,2,\dots,m$, $j=1,2,\dots,n$, $t\ge t_0\ge 0$, $t_0\in\mathbb{R}_+$, and the initial value $r(t_0)=i_0\in\mathbb{S}$. Here $x=(x_1,x_2,\dots,x_l)\in\Omega_0\subset\mathbb{R}^l$, where $\Omega_0$ is a compact set with smooth boundary $\partial\Omega_0$ in $\mathbb{R}^l$ and $0<\operatorname{mes}\Omega_0<+\infty$. $\tilde u(t,x)=(\tilde u_1(t,x),\dots,\tilde u_m(t,x))\in\mathbb{R}^m$ and $\tilde v(t,x)=(\tilde v_1(t,x),\dots,\tilde v_n(t,x))\in\mathbb{R}^n$, where $\tilde u_i(t,x)$ and $\tilde v_j(t,x)$ are the states of the $i$th and $j$th neurons at time $t$ and point $x$, respectively. $\tilde f_j$ and $\tilde g_i$ denote the signal functions of the $j$th and $i$th neurons, respectively. $I_i$ and $J_j$ denote the external inputs on the $i$th and $j$th neurons, respectively. $a_i(r(t))>0$ and $b_j(r(t))>0$ denote the rates at which the $i$th and $j$th neurons reset their potential to the resting state in isolation when disconnected from the networks and external inputs, respectively. $c_{ji}(r(t))$ and $e_{ij}(r(t))$ denote the strength of the $j$th neuron on the $i$th neuron and of the $i$th neuron on the $j$th neuron, respectively. The smooth functions $\bar D_{ik}(r(t)):=\bar D_{ik}(r(t),x,u)\ge 0$ and $\bar D^*_{jk}(r(t)):=\bar D^*_{jk}(r(t),x,v)\ge 0$ correspond to the transmission diffusion operators along the $i$th and $j$th neurons, respectively.
The initial conditions and boundary conditions are given by
(4)
$$\begin{aligned}
&\tilde u_i(t_0,x)=\bar\phi_i(x), && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ i=1,2,\dots,m,\\
&\tilde v_j(t_0,x)=\bar\psi_j(x), && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ j=1,2,\dots,n,\\
&\frac{\partial\tilde u_i(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial\tilde u_i(t,x)}{\partial x_1},\dots,\frac{\partial\tilde u_i(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ i=1,2,\dots,m,\\
&\frac{\partial\tilde v_j(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial\tilde v_j(t,x)}{\partial x_1},\dots,\frac{\partial\tilde v_j(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ j=1,2,\dots,n.
\end{aligned}$$
The neuron activation functions $\tilde f$ and $\tilde g$ are globally Lipschitz continuous; that is, there exist constants $K>0$ and $L>0$ such that
(5)
$$|\tilde f(\tilde v)-\tilde f(\tilde v^*)|\le K|\tilde v-\tilde v^*|,\quad\forall\tilde v,\tilde v^*\in\mathbb{R}^n,\qquad
|\tilde g(\tilde u)-\tilde g(\tilde u^*)|\le L|\tilde u-\tilde u^*|,\quad\forall\tilde u,\tilde u^*\in\mathbb{R}^m.$$
Then the neural networks (3) have a unique state $(\tilde u(t,x;t_0,\bar\phi(x)),\tilde v(t,x;t_0,\bar\psi(x)))$ for any initial values $(\bar\phi(x),\bar\psi(x))$ (see [26, 27]).
In addition, we assume that the neural networks (3) have an equilibrium point u*=(u1*,…,um*)∈ℝm, v*=(v1*,…,vn*)∈ℝn.
Let $u(t,x)=\tilde u(t,x)-u^*$, $v(t,x)=\tilde v(t,x)-v^*$, $f(v(t,x))=\tilde f(v(t,x)+v^*)-\tilde f(v^*)$, $g(u(t,x))=\tilde g(u(t,x)+u^*)-\tilde g(u^*)$, $D_{ik}(r(t))=\bar D_{ik}(r(t),x,u(t,x)+u^*)$, and $D^*_{jk}(r(t))=\bar D^*_{jk}(r(t),x,v(t,x)+v^*)$; then (3) can be rewritten as
(6)
$$\begin{aligned}
\frac{\partial u_i(t,x)}{\partial t}&=\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}(r(t))\frac{\partial u_i(t,x)}{\partial x_k}\Big)-a_i(r(t))u_i(t,x)+\sum_{j=1}^{n}c_{ji}(r(t))f_j(v_j(t,x)),\\
\frac{\partial v_j(t,x)}{\partial t}&=\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D^*_{jk}(r(t))\frac{\partial v_j(t,x)}{\partial x_k}\Big)-b_j(r(t))v_j(t,x)+\sum_{i=1}^{m}e_{ij}(r(t))g_i(u_i(t,x)).
\end{aligned}$$
The initial conditions and boundary conditions are given by
(7)
$$\begin{aligned}
&u_i(t_0,x)=\phi_i(x)=\bar\phi_i(x)-u_i^*, && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ i=1,2,\dots,m,\\
&v_j(t_0,x)=\psi_j(x)=\bar\psi_j(x)-v_j^*, && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ j=1,2,\dots,n,\\
&\frac{\partial u_i(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial u_i(t,x)}{\partial x_1},\dots,\frac{\partial u_i(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ i=1,2,\dots,m,\\
&\frac{\partial v_j(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial v_j(t,x)}{\partial x_1},\dots,\frac{\partial v_j(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ j=1,2,\dots,n.
\end{aligned}$$
Hence, the origin is an equilibrium point of (6). The stability of the equilibrium point of (3) is equivalent to the stability of the origin of the state space of (6).
From (5), we obtain the following assumption on the activation functions $f$ and $g$.
Assumption (H1). The neuron activation functions $f$ and $g$ are globally Lipschitz continuous; that is, there exist constants $K>0$ and $L>0$ such that
(8)
$$|f(v)-f(v^*)|\le K|v-v^*|,\quad\forall v,v^*\in\mathbb{R}^n,\quad f(0)=0,\qquad
|g(u)-g(u^*)|\le L|u-u^*|,\quad\forall u,u^*\in\mathbb{R}^m,\quad g(0)=0.$$
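As a concrete instance, $f=g=\tanh$ (applied componentwise) satisfies (H1) with $K=L=1$, since $|\tanh'(s)|=1-\tanh^2(s)\le 1$ and $\tanh(0)=0$. The short numerical check below is only an illustrative sanity test, not part of the original development.

```python
import numpy as np

# Sanity check (illustrative): f = tanh satisfies (8) with K = 1 and f(0) = 0.
rng = np.random.default_rng(1)
a, b = rng.normal(size=10**6), rng.normal(size=10**6)
ratios = np.abs(np.tanh(a) - np.tanh(b)) / np.abs(a - b)
print(ratios.max() <= 1.0)   # True: empirical Lipschitz constant is at most 1
```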
We consider the following vector space of functions:
(9)
$$U=\big\{v(t,x):[t_0,+\infty)\times\Omega_0\to\mathbb{R}^n \;\big|\; v(t,x)\ \text{is continuous in}\ t\ \text{and twice continuously differentiable in}\ x\big\}.$$
For every pair $(v,z)$ in $U$ and every fixed $t\in\mathbb{R}_+$, define the inner product of $v$ and $z$ by
(10)
$$\langle v,z\rangle=\int_{\Omega_0}(v(\cdot,x))^Tz(\cdot,x)\,dx\in\mathbb{R}.$$
This clearly satisfies the inner product axioms, and the induced norm is given by
(11)
$$\|v(\cdot,x)\|_2^2=\langle v(\cdot,x),v(\cdot,x)\rangle=\int_{\Omega_0}|v(\cdot,x)|^2dx=\sum_{i=1}^{n}\int_{\Omega_0}|v_i(\cdot,x)|^2dx.$$
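Numerically, (10) and (11) reduce to weighted sums over a spatial grid. The sketch below is a minimal discretization, assuming a uniform grid on $\Omega_0$; the grid and test function are illustrative.

```python
import numpy as np

def l2_inner(v, z, dx):
    """Discrete version of (10): <v, z> = int_{Omega_0} v(x)^T z(x) dx,
    with v, z sampled on a uniform grid of cell volume dx.
    v and z have shape (num_grid_points, n)."""
    return float(np.sum(v * z) * dx)

def l2_norm_sq(v, dx):
    """Discrete version of (11): ||v||_2^2 = <v, v>."""
    return l2_inner(v, v, dx)

# Example on Omega_0 = [0, 1] with 100 cells and n = 2 components.
x = np.linspace(0.0, 1.0, 100)
dx = x[1] - x[0]
v = np.stack([np.sin(np.pi * x), np.cos(np.pi * x)], axis=1)
print(l2_norm_sq(v, dx))   # approximately 1.0 = 1/2 + 1/2
```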
Definition 1.
The neural networks (6) are said to be globally exponentially stable if for any $\phi,\psi$ there exist $\alpha>0$ and $\beta>0$ such that
(12)
$$\|u(t,x;t_0,\phi)\|_2^2+\|v(t,x;t_0,\psi)\|_2^2\le\alpha\big(\|\phi\|_2^2+\|\psi\|_2^2\big)\exp(-\beta(t-t_0)),\quad\forall t\ge t_0.$$
For simplicity, we rewrite (6) as follows:
(13)
$$\begin{aligned}
\frac{\partial u}{\partial t}&=\nabla\cdot(D(r(t))\circ\nabla u)-A(r(t))u(t,x)+C(r(t))f(v(t,x)),\\
\frac{\partial v}{\partial t}&=\nabla\cdot(D^*(r(t))\circ\nabla v)-B(r(t))v(t,x)+E(r(t))g(u(t,x)).
\end{aligned}$$
The initial conditions and boundary conditions are given by
(14)
$$\begin{aligned}
&u(t_0,x)=\phi(x)=\bar\phi(x)-u^*, && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\\
&v(t_0,x)=\psi(x)=\bar\psi(x)-v^*, && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\\
&\frac{\partial u(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial u(t,x)}{\partial x_1},\dots,\frac{\partial u(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\\
&\frac{\partial v(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial v(t,x)}{\partial x_1},\dots,\frac{\partial v(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,
\end{aligned}$$
where
(15)
$$\begin{aligned}
&D(r(t))=(D_{ik}(r(t),x,u))_{m\times l}, && D^*(r(t))=(D^*_{jk}(r(t),x,v))_{n\times l},\\
&u(t,x)=(u_1(t,x),\dots,u_m(t,x))^T, && v(t,x)=(v_1(t,x),\dots,v_n(t,x))^T,\\
&\nabla u=(\nabla u_1,\dots,\nabla u_m)^T, && \nabla v=(\nabla v_1,\dots,\nabla v_n)^T,\\
&\nabla u_i=\Big(\frac{\partial u_i}{\partial x_1},\dots,\frac{\partial u_i}{\partial x_l}\Big)^T, && \nabla v_j=\Big(\frac{\partial v_j}{\partial x_1},\dots,\frac{\partial v_j}{\partial x_l}\Big)^T,\\
&A(r(t))=\operatorname{diag}(a_1(r(t)),\dots,a_m(r(t))), && B(r(t))=\operatorname{diag}(b_1(r(t)),\dots,b_n(r(t))),\\
&C(r(t))=(c_{ji}(r(t)))_{n\times m}, && E(r(t))=(e_{ij}(r(t)))_{m\times n},\\
&f(v)=(f_1(v_1),\dots,f_n(v_n))^T, && g(u)=(g_1(u_1),\dots,g_m(u_m))^T,\\
&(D(r(t))\circ\nabla u)=\Big(D_{ik}(r(t))\frac{\partial u_i}{\partial x_k}\Big), && (D^*(r(t))\circ\nabla v)=\Big(D^*_{jk}(r(t))\frac{\partial v_j}{\partial x_k}\Big).
\end{aligned}$$
Here, $\circ$ denotes the Hadamard (entrywise) product of $D$ with $\nabla u$ and of $D^*$ with $\nabla v$.
3. Noise Impact on Stability
In this section, we consider the noise-perturbed version of the neural networks (6), described by the stochastic partial differential equations
(16)
$$\begin{aligned}
d\bar u_i(t,x)&=\Big\{\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}(r(t))\frac{\partial\bar u_i(t,x)}{\partial x_k}\Big)-a_i(r(t))\bar u_i(t,x)+\sum_{j=1}^{n}c_{ji}(r(t))f_j(\bar v_j(t,x))\Big\}dt+\sigma\bar u_i(t,x)\,dW(t),\\
d\bar v_j(t,x)&=\Big\{\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D^*_{jk}(r(t))\frac{\partial\bar v_j(t,x)}{\partial x_k}\Big)-b_j(r(t))\bar v_j(t,x)+\sum_{i=1}^{m}e_{ij}(r(t))g_i(\bar u_i(t,x))\Big\}dt+\sigma\bar v_j(t,x)\,dW(t).
\end{aligned}$$
The initial conditions and boundary conditions are given by
(17)
$$\begin{aligned}
&\bar u_i(t_0,x)=\phi_i(x), && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ i=1,2,\dots,m,\\
&\bar v_j(t_0,x)=\psi_j(x), && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ j=1,2,\dots,n,\\
&\frac{\partial\bar u_i(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial\bar u_i(t,x)}{\partial x_1},\dots,\frac{\partial\bar u_i(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ i=1,2,\dots,m,\\
&\frac{\partial\bar v_j(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial\bar v_j(t,x)}{\partial x_1},\dots,\frac{\partial\bar v_j(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ j=1,2,\dots,n,
\end{aligned}$$
where σ is the noise intensity.
We rewrite (16) as follows:
(18)
$$\begin{aligned}
d\bar u(t,x)&=\{\nabla\cdot(D(r(t))\circ\nabla\bar u)-A(r(t))\bar u(t,x)+C(r(t))f(\bar v(t,x))\}dt+\sigma\bar u(t,x)\,dW(t),\\
d\bar v(t,x)&=\{\nabla\cdot(D^*(r(t))\circ\nabla\bar v)-B(r(t))\bar v(t,x)+E(r(t))g(\bar u(t,x))\}dt+\sigma\bar v(t,x)\,dW(t).
\end{aligned}$$
For the globally exponentially stable neural networks (6), we will characterize how much stochastic noise the neural networks (16) can tolerate while maintaining global exponential stability.
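Before stating the definitions, it may help to see (16) simulated. The following minimal Euler–Maruyama sketch treats the one-dimensional case $l=1$, $m=n=1$ with two Markov modes, zero-flux (Neumann) boundaries, and a single scalar $W(t)$ driving both equations; all parameter values are illustrative assumptions, not values from the paper. The recorded arrays ts and norms are reused in a later sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): l = 1, m = n = 1, two Markov
# modes, Omega_0 = [0, 1], zero-flux (Neumann) boundary conditions.
Nx, T, dt = 64, 2.0, 1e-4
dx = 1.0 / (Nx - 1)
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])   # generator; rows sum to zero
D, Ds = [0.10, 0.05], [0.10, 0.05]             # diffusion coefficients per mode
a, b = [2.0, 3.0], [2.0, 3.0]                  # self-feedback rates per mode
c, e = [0.5, 0.3], [0.5, 0.3]                  # connection weights per mode
sigma = 0.2                                    # noise intensity
f = g = np.tanh                                # activations satisfying (H1)

def lap_neumann(w):
    """1D Laplacian with zero-flux boundaries via ghost-point reflection."""
    wp = np.pad(w, 1, mode="edge")
    return (wp[2:] - 2.0 * w + wp[:-2]) / dx**2

x = np.linspace(0.0, 1.0, Nx)
u, v, p = np.cos(np.pi * x), np.sin(np.pi * x), 0   # initial data phi, psi
ts, norms = [], []
for k in range(int(T / dt)):
    if rng.random() < -Gamma[p, p] * dt:       # mode switch with rate -gamma_pp
        others = [q for q in range(Gamma.shape[0]) if q != p]
        p = int(rng.choice(others, p=Gamma[p, others] / -Gamma[p, p]))
    dW = rng.normal(0.0, np.sqrt(dt))          # one scalar W(t) drives both lines
    du = dt * (D[p] * lap_neumann(u) - a[p] * u + c[p] * f(v)) + sigma * u * dW
    dv = dt * (Ds[p] * lap_neumann(v) - b[p] * v + e[p] * g(u)) + sigma * v * dW
    u, v = u + du, v + dv
    ts.append(k * dt)
    norms.append(np.sqrt(np.sum(u**2) * dx) + np.sqrt(np.sum(v**2) * dx))
ts, norms = np.array(ts), np.array(norms)      # ||u||_2 + ||v||_2 along the path
```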
Definition 2.
The neural networks (16) are said to be almost surely globally exponentially stable, if for any ϕ and ψ the Lyapunov exponent
(19)
$$\limsup_{t\to\infty}\frac{\log\big(\|\bar u(t,x;t_0,\phi)\|_2+\|\bar v(t,x;t_0,\psi)\|_2\big)}{t}<0,\quad\text{a.s.}$$
Definition 3.
The neural networks (16) are said to be mean square globally exponentially stable, if, for any ϕ and ψ, the Lyapunov exponent
(20)
$$\limsup_{t\to\infty}\frac{\log\mathbb{E}\big\{\|\bar u(t,x;t_0,\phi)\|_2^2+\|\bar v(t,x;t_0,\psi)\|_2^2\big\}}{t}<0,$$
where $(\bar u(t,x;t_0,\phi),\bar v(t,x;t_0,\psi))$ is the state of the neural networks (16).
From the above definitions, it is clear that the almost sure global exponential stability of the neural networks (16) implies the mean square global exponential stability of the neural networks (16) (see [26, 27]), but not vice versa in general.
Theorem 4.
Under Assumption (H1), the mean square global exponential stability of neural networks (16) implies the almost sure global exponential stability of the neural networks (16).
Proof.
For any $(\phi(x),\psi(x))\not\equiv(0,0)$, we denote the state $(\bar u(t,x;t_0,\phi),\bar v(t,x;t_0,\psi))$ of (16) by $(\bar u(t,x),\bar v(t,x))$. By Definition 3, there exist $\lambda>0$ and $C>0$ such that
(21)
$$\mathbb{E}\big\{\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big\}\le C\big(\|\phi\|_2^2+\|\psi\|_2^2\big)e^{-\lambda(t-t_0)},\quad t\ge t_0.$$
Let $r(t)=p\in\mathbb{S}$. Construct the average Lyapunov functional
(22)
$$V(\bar u(t,x),\bar v(t,x),p)=\int_{\Omega_0}|\bar u(t,x)|^2dx+\int_{\Omega_0}|\bar v(t,x)|^2dx=\int_{\Omega_0}\sum_{i=1}^{m}\bar u_i^2(t,x)dx+\int_{\Omega_0}\sum_{j=1}^{n}\bar v_j^2(t,x)dx.$$
For $n=1,2,\dots$, by the Itô formula, for $t_0+n-1\le t\le t_0+n$,
(23)
$$\begin{aligned}
V(\bar u(t,x),\bar v(t,x),p)={}&V(\bar u(t_0+n-1,x),\bar v(t_0+n-1,x),p)\\
&+\int_{t_0+n-1}^{t}\int_{\Omega_0}2\bar u^T(s,x)\big[\nabla\cdot(D(r(s))\circ\nabla\bar u)-A(r(s))\bar u+C(r(s))f(\bar v)\big]dx\,ds\\
&+\sigma^2\int_{t_0+n-1}^{t}\int_{\Omega_0}|\bar u(s,x)|^2dx\,ds\\
&+\int_{t_0+n-1}^{t}\int_{\Omega_0}2\bar v^T(s,x)\big[\nabla\cdot(D^*(r(s))\circ\nabla\bar v)-B(r(s))\bar v+E(r(s))g(\bar u)\big]dx\,ds\\
&+\sigma^2\int_{t_0+n-1}^{t}\int_{\Omega_0}|\bar v(s,x)|^2dx\,ds\\
&+2\sigma\int_{t_0+n-1}^{t}\int_{\Omega_0}|\bar u(s,x)|^2dx\,dW(s)+2\sigma\int_{t_0+n-1}^{t}\int_{\Omega_0}|\bar v(s,x)|^2dx\,dW(s)\\
&+\sum_{q=1}^{N}\gamma_{pq}\int_{t_0+n-1}^{t}\int_{\Omega_0}\big(|\bar u(s,x)|^2+|\bar v(s,x)|^2\big)dx\,ds.
\end{aligned}$$
By the boundary conditions and the Gauss (divergence) formula, we get
(24)
$$\begin{aligned}
2\int_{\Omega_0}\bar u^T(s,x)\big[\nabla\cdot(D(r(s))\circ\nabla\bar u)\big]dx
&=2\sum_{i=1}^{m}\sum_{k=1}^{l}\int_{\Omega_0}\bar u_i\,\frac{\partial}{\partial x_k}\Big(D_{ik}(r(s))\frac{\partial\bar u_i}{\partial x_k}\Big)dx\\
&=2\sum_{i=1}^{m}\int_{\partial\Omega_0}\bar u_i\Big(D_{ik}(r(s))\frac{\partial\bar u_i}{\partial x_k}\Big)_{k=1}^{l}\cdot d\vec S-2\sum_{i=1}^{m}\sum_{k=1}^{l}\int_{\Omega_0}D_{ik}(r(s))\Big(\frac{\partial\bar u_i}{\partial x_k}\Big)^2dx\\
&=-2\sum_{i=1}^{m}\sum_{k=1}^{l}\int_{\Omega_0}D_{ik}(r(s))\Big(\frac{\partial\bar u_i}{\partial x_k}\Big)^2dx,
\end{aligned}$$
(25)
$$\begin{aligned}
2\int_{\Omega_0}\bar v^T(s,x)\big[\nabla\cdot(D^*(r(s))\circ\nabla\bar v)\big]dx
&=2\sum_{j=1}^{n}\sum_{k=1}^{l}\int_{\Omega_0}\bar v_j\,\frac{\partial}{\partial x_k}\Big(D^*_{jk}(r(s))\frac{\partial\bar v_j}{\partial x_k}\Big)dx\\
&=2\sum_{j=1}^{n}\int_{\partial\Omega_0}\bar v_j\Big(D^*_{jk}(r(s))\frac{\partial\bar v_j}{\partial x_k}\Big)_{k=1}^{l}\cdot d\vec S-2\sum_{j=1}^{n}\sum_{k=1}^{l}\int_{\Omega_0}D^*_{jk}(r(s))\Big(\frac{\partial\bar v_j}{\partial x_k}\Big)^2dx\\
&=-2\sum_{j=1}^{n}\sum_{k=1}^{l}\int_{\Omega_0}D^*_{jk}(r(s))\Big(\frac{\partial\bar v_j}{\partial x_k}\Big)^2dx,
\end{aligned}$$
where the boundary integrals vanish by the Neumann boundary conditions.
By Hölder’s inequality, we have
(26)
$$\int_{\Omega_0}2\bar u(t,x)^TC(r(s))f(\bar v(t,x))dx\le\max_{p\in\mathbb{S}}\|C(p)\|\Big[\int_{\Omega_0}|\bar u(t,x)|^2dx+K^2\int_{\Omega_0}|\bar v(t,x)|^2dx\Big],$$
(27)
$$\int_{\Omega_0}2\bar v(t,x)^TE(r(s))g(\bar u(t,x))dx\le\max_{p\in\mathbb{S}}\|E(p)\|\Big[\int_{\Omega_0}|\bar v(t,x)|^2dx+L^2\int_{\Omega_0}|\bar u(t,x)|^2dx\Big].$$
Substituting (24)–(27) into (23), we get
(28)
$$\begin{aligned}
V(\bar u(t,x),\bar v(t,x),p)\le{}&V(\bar u(t_0+n-1,x),\bar v(t_0+n-1,x),p)\\
&+\max_{p\in\mathbb{S}}\big[2\|A(p)\|+\|C(p)\|+\|E(p)\|L^2+\sigma^2\big]\int_{t_0+n-1}^{t}\int_{\Omega_0}|\bar u(s,x)|^2dx\,ds\\
&+\max_{p\in\mathbb{S}}\big[2\|B(p)\|+\|E(p)\|+\|C(p)\|K^2+\sigma^2\big]\int_{t_0+n-1}^{t}\int_{\Omega_0}|\bar v(s,x)|^2dx\,ds\\
&+2\sigma\int_{t_0+n-1}^{t}\int_{\Omega_0}\big(|\bar u(s,x)|^2+|\bar v(s,x)|^2\big)dx\,dW(s),
\end{aligned}$$
where we have used $\sum_{q=1}^{N}\gamma_{pq}=0$.
From (28), we have
(29)
$$\begin{aligned}
\mathbb{E}\Big(\sup_{t_0+n-1\le t\le t_0+n}V(\bar u(t,x),\bar v(t,x),p)\Big)\le{}&\mathbb{E}V(\bar u(t_0+n-1,x),\bar v(t_0+n-1,x),p)\\
&+C_1\int_{t_0+n-1}^{t_0+n}\mathbb{E}V(\bar u(s,x),\bar v(s,x),r(s))ds\\
&+2|\sigma|\,\mathbb{E}\Big(\sup_{t_0+n-1\le t\le t_0+n}\int_{t_0+n-1}^{t}V(\bar u(s,x),\bar v(s,x),r(s))\,dW(s)\Big),
\end{aligned}$$
where $C_1=\big[2\|\hat A\|+2\|\hat B\|+\|\hat C\|+\|\hat C\|K^2+\|\hat E\|+\|\hat E\|L^2+2\sigma^2\big]$ and $\|\hat A\|=\max_{p\in\mathbb{S}}\|A(p)\|$, and similarly for $\|\hat B\|$, $\|\hat C\|$, $\|\hat E\|$.
On the other hand, by the Burkholder–Davis–Gundy inequality [27] and the elementary inequality $2\sqrt{ab}\le a/\varepsilon+\varepsilon b$ ($a>0$, $b>0$, $\varepsilon>0$), we have
(30)
$$\begin{aligned}
2|\sigma|\,\mathbb{E}\Big(\sup_{t_0+n-1\le t\le t_0+n}\int_{t_0+n-1}^{t}V(\bar u(s,x),\bar v(s,x),r(s))\,dW(s)\Big)
&\le 4\sqrt2\,\mathbb{E}\Big(\int_{t_0+n-1}^{t_0+n}4\sigma^2V^2(\bar u(s,x),\bar v(s,x),r(s))ds\Big)^{1/2}\\
&\le 4\sqrt2\,\mathbb{E}\Big(\sup_{t_0+n-1\le t\le t_0+n}V(\bar u(s,x),\bar v(s,x),r(s))\int_{t_0+n-1}^{t_0+n}4\sigma^2V(\bar u(s,x),\bar v(s,x),r(s))ds\Big)^{1/2}\\
&\le\frac12\,\mathbb{E}\Big(\sup_{t_0+n-1\le t\le t_0+n}V(\bar u(t,x),\bar v(t,x),p)\Big)+64\sigma^2\int_{t_0+n-1}^{t_0+n}\mathbb{E}V(\bar u(s,x),\bar v(s,x),r(s))ds.
\end{aligned}$$
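To make the constants $1/2$ and $64\sigma^2$ in the last step explicit, write $S=\sup_{t_0+n-1\le t\le t_0+n}V$ and $I=\int_{t_0+n-1}^{t_0+n}4\sigma^2V\,ds$ and apply the elementary inequality with $\varepsilon=4\sqrt2$:
$$4\sqrt2\,(S\cdot I)^{1/2}=2\sqrt2\cdot 2\sqrt{S\cdot I}\le 2\sqrt2\Big(\frac{S}{\varepsilon}+\varepsilon I\Big)\bigg|_{\varepsilon=4\sqrt2}=\frac{S}{2}+16\,I=\frac{S}{2}+64\sigma^2\int_{t_0+n-1}^{t_0+n}V\,ds.$$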
Substituting (30) into (29), we get
(31)
$$\begin{aligned}
\mathbb{E}\Big(\sup_{t_0+n-1\le t\le t_0+n}V(\bar u(t,x),\bar v(t,x),p)\Big)\le{}&2\,\mathbb{E}V(\bar u(t_0+n-1,x),\bar v(t_0+n-1,x),p)\\
&+2\big[C_1+64\sigma^2\big]\int_{t_0+n-1}^{t_0+n}\mathbb{E}V(\bar u(s,x),\bar v(s,x),r(s))ds.
\end{aligned}$$
By induction and the mean square global exponential stability of neural networks (16),
(32)
$$\mathbb{E}\Big(\sup_{t_0+n-1\le t\le t_0+n}V(\bar u(t,x),\bar v(t,x),p)\Big)\le C\big(\|\phi\|_2^2+\|\psi\|_2^2\big)\big(2+2[C_1+64\sigma^2]\big)e^{-\lambda(n-1)}.$$
Let $\varepsilon\in(0,\lambda)$. By Chebyshev's inequality [27], it follows from (32) that
(33)
$$\begin{aligned}
&\mathbb{P}\Big\{\sup_{t_0+n-1\le t\le t_0+n}V(\bar u(t,x),\bar v(t,x),p)>e^{-(\lambda-\varepsilon)(n-1)}\Big\}\\
&\quad\le e^{(\lambda-\varepsilon)(n-1)}\,\mathbb{E}\Big(\sup_{t_0+n-1\le t\le t_0+n}V(\bar u(t,x),\bar v(t,x),p)\Big)
\le C\big(\|\phi\|_2^2+\|\psi\|_2^2\big)\big(2+2[C_1+64\sigma^2]\big)e^{-\varepsilon(n-1)}.
\end{aligned}$$
By the Borel–Cantelli lemma [27], for almost all $\omega\in\Omega$,
(34)
$$\sup_{t_0+n-1\le t\le t_0+n}2V(\bar u(t,x),\bar v(t,x),p)\le 2e^{-(\lambda-\varepsilon)(n-1)}$$
holds for all but finitely many $n$. Hence, for all $\omega\in\Omega$ excluding a $\mathbb{P}$-null set, there exists an $n_0=n_0(\omega)$ such that the above inequality holds whenever $n\ge n_0$. Consequently, for almost all $\omega\in\Omega$,
(35)
$$\frac{\log 2V(\bar u(t,x),\bar v(t,x),p)}{t}\le\frac{-(\lambda-\varepsilon)(n-1)+\log 2}{t_0+n-1},$$
if $t_0+n-1\le t\le t_0+n$. Since $\log\big(\|\bar u(t,x)\|_2+\|\bar v(t,x)\|_2\big)\le\tfrac12\log 2V(\bar u(t,x),\bar v(t,x),p)$, letting $n\to\infty$ gives
(36)
$$\limsup_{t\to\infty}\frac{\log\big(\|\bar u(t,x)\|_2+\|\bar v(t,x)\|_2\big)}{t}\le-\frac{\lambda-\varepsilon}{2}\quad\text{a.s.}$$
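In practice, the sample Lyapunov exponent appearing in (19) and (36) can be estimated from a simulated path. The sketch below is an illustrative post-processing step for trajectories produced, for example, by the simulation sketch after (18); the variable names ts and norms are assumptions of that sketch.

```python
import numpy as np

def sample_lyapunov_exponent(ts, norms):
    """Least-squares estimate of limsup log(||u||_2 + ||v||_2)/t from one
    sample path, discarding the first half of the path as transient."""
    tail = slice(len(ts) // 2, None)
    slope, _intercept = np.polyfit(ts[tail], np.log(norms[tail]), 1)
    return slope   # a negative value indicates exponential decay, cf. (19)

# Usage with the arrays ts, norms recorded by the earlier simulation sketch:
# print(sample_lyapunov_exponent(ts, norms))
```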
Theorem 5.
Let Assumption (H1) hold and let the neural networks (6) be globally exponentially stable. Then the neural networks (16) are mean square globally exponentially stable and also almost surely globally exponentially stable, if there exist $\mu_q>0$ $(q\in\mathbb{S})$ and $|\sigma|<\bar\sigma$, where $\bar\sigma$ is the unique positive solution of the transcendental equation
(37)
$$\frac{4\bar\sigma^2\alpha\hat\mu}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}=1,$$
(38)
$$\Delta>\frac{\ln(2\alpha)}{\beta}>0,$$
where $C_2=\big[2\|\hat A\|+2\|\hat B\|+(1+K^2)\|\hat C\|+(1+L^2)\|\hat E\|+2\bar\sigma^2\big]$, $\|\hat A\|=\max_{p\in\mathbb{S}}\|A(p)\|$ and so forth, $\hat\mu=\max_{p\in\mathbb{S}}\mu_p$, and $\breve\mu=\min_{p\in\mathbb{S}}\mu_p$.
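Since the left-hand side of (37) is strictly increasing in $\bar\sigma$ and, by (38), is less than 1 at $\bar\sigma=0$, the noise tolerance $\bar\sigma$ can be computed by bracketing and bisection. The sketch below does this with SciPy; all constants ($\alpha$, $\beta$, the matrix norms, $\mu_q$, $\Delta$) are illustrative assumptions, not values derived in the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative constants (assumed): alpha, beta as in (12); mu_q = 1 for all q,
# so mu_hat = mu_breve = 1 and the max-sum gamma term vanishes.
alpha, beta = 1.2, 2.0
mu_hat, mu_breve, gamma_term = 1.0, 1.0, 0.0
nA = nB = 1.0; nC = nE = 0.5; K = L = 1.0   # ||A hat||, ||B hat||, ||C hat||, ||E hat||
Delta = 0.5                                  # satisfies (38): ln(2*alpha)/beta ~ 0.44

def lhs37(sig):
    """Left-hand side of (37) minus 1; note that C2 itself depends on sigma_bar."""
    C2 = 2*nA + 2*nB + (1 + K**2)*nC + (1 + L**2)*nE + 2*sig**2
    expo = np.exp(2 * Delta * (mu_hat * C2 + gamma_term) / mu_breve)
    return 4*sig**2*alpha*mu_hat/beta * expo + 2*alpha*np.exp(-beta*Delta) - 1

# lhs37(0) < 0 by (38), and lhs37 is increasing in sig, so the positive
# root sigma_bar is unique; bracket it, then bisect.
hi = 1.0
while lhs37(hi) < 0:
    hi *= 2.0
sigma_bar = brentq(lhs37, 0.0, hi)
print("noise tolerance sigma_bar =", sigma_bar)
```

As is typical for Gronwall-type estimates, the tolerance produced this way is conservative: with these assumed constants it is small but strictly positive.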
Proof.
For any $(\phi(x),\psi(x))$, we denote the state $(\bar u(t,x;t_0,\phi),\bar v(t,x;t_0,\psi))$ of (16) by $(\bar u(t,x),\bar v(t,x))$ and the state $(u(t,x;t_0,\phi),v(t,x;t_0,\psi))$ of (6) by $(u(t,x),v(t,x))$.
From (6) and (18) and the stochastic Fubini theorem, we have
(39)
$$\begin{aligned}
&\int_{\Omega_0}\big(u(t,x)-\bar u(t,x)\big)dx+\int_{\Omega_0}\big(v(t,x)-\bar v(t,x)\big)dx\\
&\quad=\int_{t_0}^{t}\int_{\Omega_0}\nabla\cdot\big(D(r(s))\circ\nabla(u-\bar u)\big)dx\,ds\\
&\qquad+\int_{t_0}^{t}\int_{\Omega_0}\big[-A(r(s))(u(s,x)-\bar u(s,x))+C(r(s))\big(f(v(s,x))-f(\bar v(s,x))\big)\big]dx\,ds
-\int_{t_0}^{t}\int_{\Omega_0}\sigma\bar u(s,x)dx\,dW(s)\\
&\qquad+\int_{t_0}^{t}\int_{\Omega_0}\nabla\cdot\big(D^*(r(s))\circ\nabla(v-\bar v)\big)dx\,ds\\
&\qquad+\int_{t_0}^{t}\int_{\Omega_0}\big[-B(r(s))(v(s,x)-\bar v(s,x))+E(r(s))\big(g(u(s,x))-g(\bar u(s,x))\big)\big]dx\,ds
-\int_{t_0}^{t}\int_{\Omega_0}\sigma\bar v(s,x)dx\,dW(s).
\end{aligned}$$
Construct the average Lyapunov functional
(40)
$$V(u(t,x),v(t,x),\bar u(t,x),\bar v(t,x),r(t))=\int_{\Omega_0}\mu_{r(t)}\big[|u(t,x)-\bar u(t,x)|^2+|v(t,x)-\bar v(t,x)|^2\big]dx,$$
where $\mu_{r(t)}>0$.
By applying the generalized Itô formula [27], we have
(41)
$$\begin{aligned}
dV(u,v,\bar u,\bar v,p)={}&\int_{\Omega_0}2\mu_p(u(t,x)-\bar u(t,x))^T\big(\nabla\cdot(D(p)\circ\nabla(u-\bar u))\big)dx\,dt\\
&+\int_{\Omega_0}2\mu_p(u(t,x)-\bar u(t,x))^T\big[-A(p)(u(t,x)-\bar u(t,x))+C(p)\big(f(v(t,x))-f(\bar v(t,x))\big)\big]dx\,dt\\
&+\int_{\Omega_0}\sigma^2\mu_p|\bar u(t,x)|^2dx\,dt-2\int_{\Omega_0}\sigma\mu_p(u(t,x)-\bar u(t,x))^T\bar u(t,x)dx\,dW(t)\\
&+\int_{\Omega_0}2\mu_p(v(t,x)-\bar v(t,x))^T\big(\nabla\cdot(D^*(p)\circ\nabla(v-\bar v))\big)dx\,dt\\
&+\int_{\Omega_0}2\mu_p(v(t,x)-\bar v(t,x))^T\big[-B(p)(v(t,x)-\bar v(t,x))+E(p)\big(g(u(t,x))-g(\bar u(t,x))\big)\big]dx\,dt\\
&+\int_{\Omega_0}\sigma^2\mu_p|\bar v(t,x)|^2dx\,dt-2\int_{\Omega_0}\sigma\mu_p(v(t,x)-\bar v(t,x))^T\bar v(t,x)dx\,dW(t)\\
&+\sum_{q=1}^{N}\gamma_{pq}\mu_q\int_{\Omega_0}\big[|u(t,x)-\bar u(t,x)|^2+|v(t,x)-\bar v(t,x)|^2\big]dx\,dt.
\end{aligned}$$
By the boundary conditions and (24), we have
(42)
$$2\mu_p\int_{\Omega_0}(u(t,x)-\bar u(t,x))^T\big(\nabla\cdot(D(p)\circ\nabla(u-\bar u))\big)dx=-2\mu_p\sum_{i=1}^{m}\sum_{k=1}^{l}\int_{\Omega_0}D_{ik}(p)\Big(\frac{\partial(u_i-\bar u_i)}{\partial x_k}\Big)^2dx.$$
By the boundary conditions and (25), we have
(43)
$$2\mu_p\int_{\Omega_0}(v(t,x)-\bar v(t,x))^T\big(\nabla\cdot(D^*(p)\circ\nabla(v-\bar v))\big)dx=-2\mu_p\sum_{j=1}^{n}\sum_{k=1}^{l}\int_{\Omega_0}D^*_{jk}(p)\Big(\frac{\partial(v_j-\bar v_j)}{\partial x_k}\Big)^2dx.$$
By Hölder’s inequality, we get
(44)
$$\begin{aligned}
&2\mu_p\int_{\Omega_0}(u(t,x)-\bar u(t,x))^TC(p)\big(f(v(t,x))-f(\bar v(t,x))\big)dx\\
&\quad\le\max_{p\in\mathbb{S}}\big(\mu_p\|C(p)\|\big)\Big[\int_{\Omega_0}|u(t,x)-\bar u(t,x)|^2dx+K^2\int_{\Omega_0}|v(t,x)-\bar v(t,x)|^2dx\Big],
\end{aligned}$$
(45)
$$\begin{aligned}
&2\mu_p\int_{\Omega_0}(v(t,x)-\bar v(t,x))^TE(p)\big(g(u(t,x))-g(\bar u(t,x))\big)dx\\
&\quad\le\max_{p\in\mathbb{S}}\big(\mu_p\|E(p)\|\big)\Big[\int_{\Omega_0}|v(t,x)-\bar v(t,x)|^2dx+L^2\int_{\Omega_0}|u(t,x)-\bar u(t,x)|^2dx\Big].
\end{aligned}$$
From (42)–(45) and Assumption (H1), we obtain that
(46)
$$\begin{aligned}
dV(u,v,\bar u,\bar v,p)\le{}&\Big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\Big)\int_{\Omega_0}\big(|u(t,x)-\bar u(t,x)|^2+|v(t,x)-\bar v(t,x)|^2\big)dx\,dt\\
&+2\sigma^2\hat\mu\int_{\Omega_0}\big(|u(t,x)|^2+|v(t,x)|^2\big)dx\,dt\\
&-2\int_{\Omega_0}\sigma\mu_p(u(t,x)-\bar u(t,x))^T\bar u(t,x)dx\,dW(t)-2\int_{\Omega_0}\sigma\mu_p(v(t,x)-\bar v(t,x))^T\bar v(t,x)dx\,dW(t),
\end{aligned}$$
where we have used $|\bar u(t,x)|^2\le 2|u(t,x)-\bar u(t,x)|^2+2|u(t,x)|^2$ (similarly for $\bar v$) and $\sigma^2\le\bar\sigma^2$, so that the drift coefficient is absorbed into $C_2$.
Integrating, taking expectations, and using the global exponential stability (12) of (6), for $t_0\le t\le t_0+2\Delta$ we have
(47)
$$\begin{aligned}
\mathbb{E}V(u,v,\bar u,\bar v,r(t))\le{}&\Big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\Big)\int_{t_0}^{t}\mathbb{E}\int_{\Omega_0}\big(|u(s,x)-\bar u(s,x)|^2+|v(s,x)-\bar v(s,x)|^2\big)dx\,ds\\
&+2\sigma^2\hat\mu\int_{t_0}^{t}\alpha\big(\|\phi\|_2^2+\|\psi\|_2^2\big)\exp(-\beta(s-t_0))ds\\
&-2\sigma\,\mathbb{E}\int_{t_0}^{t}\int_{\Omega_0}\mu_{r(s)}(u(s,x)-\bar u(s,x))^T\bar u(s,x)dx\,dW(s)\\
&-2\sigma\,\mathbb{E}\int_{t_0}^{t}\int_{\Omega_0}\mu_{r(s)}(v(s,x)-\bar v(s,x))^T\bar v(s,x)dx\,dW(s).
\end{aligned}$$
By the stochastic Fubini theorem, we have
(48)
$$\mathbb{E}\int_{t_0}^{t}\int_{\Omega_0}\mu_{r(s)}(u(s,x)-\bar u(s,x))^T\bar u(s,x)dx\,dW(s)=0,\qquad
\mathbb{E}\int_{t_0}^{t}\int_{\Omega_0}\mu_{r(s)}(v(s,x)-\bar v(s,x))^T\bar v(s,x)dx\,dW(s)=0.$$
By (47) and (48), we get
(49)
$$\begin{aligned}
\mathbb{E}V(u,v,\bar u,\bar v,r(t))\le{}&\frac{\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q}{\breve\mu}\int_{t_0}^{t}\mathbb{E}V(u(s,x),v(s,x),\bar u(s,x),\bar v(s,x),r(s))ds\\
&+\frac{2\sigma^2\alpha\hat\mu\big(\|\phi\|_2^2+\|\psi\|_2^2\big)}{\beta}.
\end{aligned}$$
When t0+Δ≤t≤t0+2Δ, by applying Gronwall’s inequality, we have
(50)
$$\begin{aligned}
\mathbb{E}\big(\|u(t,x)-\bar u(t,x)\|_2^2+\|v(t,x)-\bar v(t,x)\|_2^2\big)&=\mathbb{E}V(u(t,x),v(t,x),\bar u(t,x),\bar v(t,x),r(t))\\
&\le\frac{2\sigma^2\alpha\hat\mu\big(\|\phi\|_2^2+\|\psi\|_2^2\big)}{\beta}\exp\Big\{\frac{\big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)(t-t_0)}{\breve\mu}\Big\}\\
&\le\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\cdot\frac{2\sigma^2\alpha\hat\mu}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}.
\end{aligned}$$
By the global exponential stability of (6), we have
(51)
$$\begin{aligned}
\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\le{}&2\,\mathbb{E}\big(\|u(t,x)-\bar u(t,x)\|_2^2+\|v(t,x)-\bar v(t,x)\|_2^2\big)+2\,\mathbb{E}\big(\|u(t,x)\|_2^2+\|v(t,x)\|_2^2\big)\\
\le{}&\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\cdot\frac{4\sigma^2\alpha\hat\mu}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}\\
&+2\alpha\big(\|\phi\|_2^2+\|\psi\|_2^2\big)\exp\{-\beta(t-t_0)\}.
\end{aligned}$$
Moreover,
(52)
$$\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\le\Big[\frac{4\sigma^2\alpha\hat\mu}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}\Big]\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big).$$
From (37), when $|\sigma|<\bar\sigma$, we have
(53)
$$\frac{4\sigma^2\alpha\hat\mu}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}<1.$$
Let
(54)
$$\gamma=-\frac{1}{\Delta}\log\Big[\frac{4\sigma^2\alpha\hat\mu}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_2+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}\Big]>0.$$
By (52), we have
(55)
$$\sup_{t_0+\Delta\le t\le t_0+2\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\le\exp(-\gamma\Delta)\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big).$$
For any positive integer m=1,2,…, from the existence and uniqueness of the flow of (16) (see [28]), when t≥t0+(m-1)Δ, we have
(56)
$$\big(\bar u(t,x;t_0,\phi),\bar v(t,x;t_0,\psi)\big)=\big(\bar u(t,x;t_0+(m-1)\Delta,\bar u(t_0+(m-1)\Delta,x;t_0,\phi)),\ \bar v(t,x;t_0+(m-1)\Delta,\bar v(t_0+(m-1)\Delta,x;t_0,\psi))\big).$$
From (55) and (56),
(57)
$$\begin{aligned}
&\sup_{t_0+m\Delta\le t\le t_0+(m+1)\Delta}\mathbb{E}\big(\|\bar u(t,x;t_0,\phi)\|_2^2+\|\bar v(t,x;t_0,\psi)\|_2^2\big)\\
&\quad=\sup_{t_0+(m-1)\Delta+\Delta\le t\le t_0+(m-1)\Delta+2\Delta}\mathbb{E}\big(\|\bar u(t,x;t_0+(m-1)\Delta,\bar u(t_0+(m-1)\Delta,x;t_0,\phi))\|_2^2\\
&\qquad\qquad+\|\bar v(t,x;t_0+(m-1)\Delta,\bar v(t_0+(m-1)\Delta,x;t_0,\psi))\|_2^2\big)\\
&\quad\le\exp(-\gamma\Delta)\sup_{t_0+(m-1)\Delta\le t\le t_0+m\Delta}\mathbb{E}\big(\|\bar u(t,x;t_0,\phi)\|_2^2+\|\bar v(t,x;t_0,\psi)\|_2^2\big)\\
&\quad\;\;\vdots\\
&\quad\le\exp(-\gamma m\Delta)\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x;t_0,\phi)\|_2^2+\|\bar v(t,x;t_0,\psi)\|_2^2\big).
\end{aligned}$$
Hence, for any t≥t0+Δ, there exists a positive integer m, such that t0+mΔ≤t≤t0+(m+1)Δ, and we have
(58)
$$\begin{aligned}
\mathbb{E}\big(\|\bar u(t,x;t_0,\phi)\|_2^2+\|\bar v(t,x;t_0,\psi)\|_2^2\big)&\le\exp(-\gamma m\Delta)\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x;t_0,\phi)\|_2^2+\|\bar v(t,x;t_0,\psi)\|_2^2\big)\\
&\le\exp\{-\gamma t+\gamma t_0+\gamma\Delta\}\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x;t_0,\phi)\|_2^2+\|\bar v(t,x;t_0,\psi)\|_2^2\big)\\
&\le C_3\exp\{\gamma\Delta\}\exp\{-\gamma(t-t_0)\},
\end{aligned}$$
where $C_3=\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x;t_0,\phi)\|_2^2+\|\bar v(t,x;t_0,\psi)\|_2^2\big)$. The above inequality also holds for $t_0\le t\le t_0+\Delta$.
Therefore, the neural networks (16) are mean square globally exponentially stable, and by Theorem 4, the neural networks (16) are also almost surely globally exponentially stable.
4. Connection Weight Matrices Uncertainty and Noise Impact on Stability
In this section, we first consider parameter uncertainty added to the self-feedback matrices $(A,B)^T$ of the neural networks (16). The neural networks (16) then become
(59)
$$\begin{aligned}
d\bar u_i(t,x)&=\Big\{\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}(r(t))\frac{\partial\bar u_i(t,x)}{\partial x_k}\Big)-(1+\lambda)a_i(r(t))\bar u_i(t,x)+\sum_{j=1}^{n}c_{ji}(r(t))f_j(\bar v_j(t,x))\Big\}dt+\sigma\bar u_i(t,x)\,dW(t),\\
d\bar v_j(t,x)&=\Big\{\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D^*_{jk}(r(t))\frac{\partial\bar v_j(t,x)}{\partial x_k}\Big)-(1+\lambda)b_j(r(t))\bar v_j(t,x)+\sum_{i=1}^{m}e_{ij}(r(t))g_i(\bar u_i(t,x))\Big\}dt+\sigma\bar v_j(t,x)\,dW(t).
\end{aligned}$$
The initial conditions and boundary conditions are given by
(60)
$$\begin{aligned}
&\bar u_i(t_0,x)=\phi_i(x), && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ i=1,2,\dots,m,\\
&\bar v_j(t_0,x)=\psi_j(x), && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ j=1,2,\dots,n,\\
&\frac{\partial\bar u_i(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial\bar u_i(t,x)}{\partial x_1},\dots,\frac{\partial\bar u_i(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ i=1,2,\dots,m,\\
&\frac{\partial\bar v_j(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial\bar v_j(t,x)}{\partial x_1},\dots,\frac{\partial\bar v_j(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ j=1,2,\dots,n,
\end{aligned}$$
where $\lambda$ is the uncertainty intensity of the self-feedback matrices $(A,B)^T$ and $\sigma$ is the noise intensity.
We rewrite (59) as follows:
(61)
$$\begin{aligned}
d\bar u(t,x)&=\{\nabla\cdot(D(r(t))\circ\nabla\bar u)-(1+\lambda)A(r(t))\bar u(t,x)+C(r(t))f(\bar v(t,x))\}dt+\sigma\bar u(t,x)\,dW(t),\\
d\bar v(t,x)&=\{\nabla\cdot(D^*(r(t))\circ\nabla\bar v)-(1+\lambda)B(r(t))\bar v(t,x)+E(r(t))g(\bar u(t,x))\}dt+\sigma\bar v(t,x)\,dW(t).
\end{aligned}$$
Given the globally exponentially stable neural networks (6), we will characterize how much self-feedback uncertainty and stochastic noise the stochastic neural networks (59) can tolerate while maintaining global exponential stability.
Theorem 6.
Let Assumption (H1) hold and let the neural networks (6) be globally exponentially stable. Then the neural networks (59) are mean square globally exponentially stable and also almost surely globally exponentially stable, if there exist $\mu_q>0$ $(q\in\mathbb{S})$ and $(\lambda,\sigma)$ lies in the interior of the closed curve described by the following transcendental equation:
(62)
$$\frac{4\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\alpha}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}=1,$$
(63)
$$\Delta>\frac{\ln(2\alpha)}{\beta}>0,$$
where $C_4=\big[(3+2\lambda^2)(\|\hat A\|+\|\hat B\|)+(1+K^2)\|\hat C\|+(1+L^2)\|\hat E\|+2\sigma^2\big]$, $\|\hat A\|=\max_{p\in\mathbb{S}}\|A(p)\|$ and so forth, $\hat\mu=\max_{p\in\mathbb{S}}\mu_p$, and $\breve\mu=\min_{p\in\mathbb{S}}\mu_p$.
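For fixed $\lambda$, the left-hand side of (62) is increasing in $\sigma$, so the boundary of the admissible region can be traced by solving (62) for $\sigma$ at each $\lambda$. The sketch below does this numerically, reusing the illustrative constants assumed after Theorem 5; it is a sketch under those assumptions, not a computation from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Same illustrative constants as in the sigma_bar sketch after Theorem 5.
alpha, beta = 1.2, 2.0
mu_hat, mu_breve, gamma_term = 1.0, 1.0, 0.0
nA = nB = 1.0; nC = nE = 0.5; K = L = 1.0
Delta = 0.5

def lhs62(lam, sig):
    """Left-hand side of (62) minus 1, with C4 as in Theorem 6."""
    C4 = (3 + 2*lam**2)*(nA + nB) + (1 + K**2)*nC + (1 + L**2)*nE + 2*sig**2
    expo = np.exp(2 * Delta * (mu_hat * C4 + gamma_term) / mu_breve)
    pref = 4 * mu_hat * (sig**2 + lam**2 * (nA + nB)) * alpha / beta
    return pref * expo + 2 * alpha * np.exp(-beta * Delta) - 1

# For each uncertainty intensity lam, the boundary of the admissible region
# is the unique sigma with lhs62(lam, sigma) = 0.
for lam in np.linspace(0.0, 0.003, 7):
    if lhs62(lam, 0.0) >= 0:       # lam alone already exceeds the tolerance
        print(f"lambda = {lam:.4f}: no admissible sigma")
        continue
    s = brentq(lambda sg: lhs62(lam, sg), 0.0, 1.0)
    print(f"lambda = {lam:.4f}  ->  boundary sigma = {s:.5f}")
```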
Proof.
For any $(\phi(x),\psi(x))$, we denote the state $(\bar u(t,x;t_0,\phi),\bar v(t,x;t_0,\psi))$ of (59) by $(\bar u(t,x),\bar v(t,x))$ and the state $(u(t,x;t_0,\phi),v(t,x;t_0,\psi))$ of (6) by $(u(t,x),v(t,x))$.
From (6) and (61) and the stochastic Fubini theorem, we have
(64)
$$\begin{aligned}
&\int_{\Omega_0}\big(u(t,x)-\bar u(t,x)\big)dx+\int_{\Omega_0}\big(v(t,x)-\bar v(t,x)\big)dx\\
&\quad=\int_{t_0}^{t}\int_{\Omega_0}\nabla\cdot\big(D(r(s))\circ\nabla(u-\bar u)\big)dx\,ds\\
&\qquad+\int_{t_0}^{t}\int_{\Omega_0}\big[-A(r(s))(u(s,x)-\bar u(s,x))+C(r(s))\big(f(v(s,x))-f(\bar v(s,x))\big)\big]dx\,ds\\
&\qquad-\int_{t_0}^{t}\int_{\Omega_0}\sigma\bar u(s,x)dx\,dW(s)+\int_{t_0}^{t}\int_{\Omega_0}\lambda A(r(s))\bar u(s,x)dx\,ds\\
&\qquad+\int_{t_0}^{t}\int_{\Omega_0}\nabla\cdot\big(D^*(r(s))\circ\nabla(v-\bar v)\big)dx\,ds\\
&\qquad+\int_{t_0}^{t}\int_{\Omega_0}\big[-B(r(s))(v(s,x)-\bar v(s,x))+E(r(s))\big(g(u(s,x))-g(\bar u(s,x))\big)\big]dx\,ds\\
&\qquad-\int_{t_0}^{t}\int_{\Omega_0}\sigma\bar v(s,x)dx\,dW(s)+\int_{t_0}^{t}\int_{\Omega_0}\lambda B(r(s))\bar v(s,x)dx\,ds.
\end{aligned}$$
Construct the average Lyapunov functional
(65)
$$V(u(t,x),v(t,x),\bar u(t,x),\bar v(t,x),r(t))=\int_{\Omega_0}\mu_{r(t)}\big[|u(t,x)-\bar u(t,x)|^2+|v(t,x)-\bar v(t,x)|^2\big]dx,$$
where $\mu_{r(t)}>0$.
By applying the generalized Itô formula [27], we have
(66)
$$\begin{aligned}
dV(u,v,\bar u,\bar v,p)={}&\int_{\Omega_0}2\mu_p(u(t,x)-\bar u(t,x))^T\big(\nabla\cdot(D(p)\circ\nabla(u-\bar u))\big)dx\,dt\\
&+\int_{\Omega_0}2\mu_p(u(t,x)-\bar u(t,x))^T\big[-A(p)(u(t,x)-\bar u(t,x))+C(p)\big(f(v(t,x))-f(\bar v(t,x))\big)+\lambda A(p)\bar u(t,x)\big]dx\,dt\\
&+\int_{\Omega_0}\sigma^2\mu_p|\bar u(t,x)|^2dx\,dt-2\int_{\Omega_0}\sigma\mu_p(u(t,x)-\bar u(t,x))^T\bar u(t,x)dx\,dW(t)\\
&+\int_{\Omega_0}2\mu_p(v(t,x)-\bar v(t,x))^T\big(\nabla\cdot(D^*(p)\circ\nabla(v-\bar v))\big)dx\,dt\\
&+\int_{\Omega_0}2\mu_p(v(t,x)-\bar v(t,x))^T\big[-B(p)(v(t,x)-\bar v(t,x))+E(p)\big(g(u(t,x))-g(\bar u(t,x))\big)+\lambda B(p)\bar v(t,x)\big]dx\,dt\\
&+\int_{\Omega_0}\sigma^2\mu_p|\bar v(t,x)|^2dx\,dt-2\int_{\Omega_0}\sigma\mu_p(v(t,x)-\bar v(t,x))^T\bar v(t,x)dx\,dW(t)\\
&+\sum_{q=1}^{N}\gamma_{pq}\mu_q\int_{\Omega_0}\big[|u(t,x)-\bar u(t,x)|^2+|v(t,x)-\bar v(t,x)|^2\big]dx\,dt.
\end{aligned}$$
By Hölder’s inequality, we get
(67)
$$\begin{aligned}
&\int_{\Omega_0}2\mu_p(u(t,x)-\bar u(t,x))^T\lambda A(p)\bar u(t,x)dx\\
&\quad\le\max_{p\in\mathbb{S}}\big(\mu_p\|A(p)\|\big)\Big[\int_{\Omega_0}|u(t,x)-\bar u(t,x)|^2dx+\lambda^2\int_{\Omega_0}|\bar u(t,x)|^2dx\Big]\\
&\quad\le\max_{p\in\mathbb{S}}\big(\mu_p\|A(p)\|\big)\Big[(1+2\lambda^2)\int_{\Omega_0}|u(t,x)-\bar u(t,x)|^2dx+2\lambda^2\int_{\Omega_0}|u(t,x)|^2dx\Big],\\
&\int_{\Omega_0}2\mu_p(v(t,x)-\bar v(t,x))^T\lambda B(p)\bar v(t,x)dx\\
&\quad\le\max_{p\in\mathbb{S}}\big(\mu_p\|B(p)\|\big)\Big[\int_{\Omega_0}|v(t,x)-\bar v(t,x)|^2dx+\lambda^2\int_{\Omega_0}|\bar v(t,x)|^2dx\Big]\\
&\quad\le\max_{p\in\mathbb{S}}\big(\mu_p\|B(p)\|\big)\Big[(1+2\lambda^2)\int_{\Omega_0}|v(t,x)-\bar v(t,x)|^2dx+2\lambda^2\int_{\Omega_0}|v(t,x)|^2dx\Big],
\end{aligned}$$
where the last step of each chain uses $|\bar u|^2=|(u-\bar u)-u|^2\le 2|u-\bar u|^2+2|u|^2$ (similarly for $\bar v$).
From (42), (43), and (67) and Assumption (H1), we obtain that
(68)
$$\begin{aligned}
dV(u,v,\bar u,\bar v,p)\le{}&\Big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\Big)\int_{\Omega_0}\big(|u(t,x)-\bar u(t,x)|^2+|v(t,x)-\bar v(t,x)|^2\big)dx\,dt\\
&+2\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\int_{\Omega_0}\big(|u(t,x)|^2+|v(t,x)|^2\big)dx\,dt\\
&-2\int_{\Omega_0}\sigma\mu_p(u(t,x)-\bar u(t,x))^T\bar u(t,x)dx\,dW(t)-2\int_{\Omega_0}\sigma\mu_p(v(t,x)-\bar v(t,x))^T\bar v(t,x)dx\,dW(t).
\end{aligned}$$
Integrating, taking expectations, and using the global exponential stability (12) of (6), for $t_0\le t\le t_0+2\Delta$ we have
(69)
$$\begin{aligned}
\mathbb{E}V(u,v,\bar u,\bar v,r(t))\le{}&\Big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\Big)\int_{t_0}^{t}\mathbb{E}\int_{\Omega_0}\big(|u(s,x)-\bar u(s,x)|^2+|v(s,x)-\bar v(s,x)|^2\big)dx\,ds\\
&+2\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\int_{t_0}^{t}\alpha\big(\|\phi\|_2^2+\|\psi\|_2^2\big)\exp(-\beta(s-t_0))ds\\
&-2\sigma\,\mathbb{E}\int_{t_0}^{t}\int_{\Omega_0}\mu_{r(s)}(u(s,x)-\bar u(s,x))^T\bar u(s,x)dx\,dW(s)\\
&-2\sigma\,\mathbb{E}\int_{t_0}^{t}\int_{\Omega_0}\mu_{r(s)}(v(s,x)-\bar v(s,x))^T\bar v(s,x)dx\,dW(s).
\end{aligned}$$
By the stochastic Fubini theorem, we have
(70)
$$\mathbb{E}\int_{t_0}^{t}\int_{\Omega_0}\mu_{r(s)}(u(s,x)-\bar u(s,x))^T\bar u(s,x)dx\,dW(s)=0,\qquad
\mathbb{E}\int_{t_0}^{t}\int_{\Omega_0}\mu_{r(s)}(v(s,x)-\bar v(s,x))^T\bar v(s,x)dx\,dW(s)=0.$$
By (69) and (70), we get
(71)
$$\begin{aligned}
\mathbb{E}V(u,v,\bar u,\bar v,r(t))\le{}&\frac{\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q}{\breve\mu}\int_{t_0}^{t}\mathbb{E}V(u(s,x),v(s,x),\bar u(s,x),\bar v(s,x),r(s))ds\\
&+\frac{2\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\alpha\big(\|\phi\|_2^2+\|\psi\|_2^2\big)}{\beta}.
\end{aligned}$$
When t0+Δ≤t≤t0+2Δ, by applying Gronwall’s inequality, we have
(72)
$$\begin{aligned}
\mathbb{E}\big(\|u(t,x)-\bar u(t,x)\|_2^2+\|v(t,x)-\bar v(t,x)\|_2^2\big)&=\mathbb{E}V(u(t,x),v(t,x),\bar u(t,x),\bar v(t,x),r(t))\\
&\le\frac{2\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\alpha\big(\|\phi\|_2^2+\|\psi\|_2^2\big)}{\beta}\exp\Big\{\frac{\big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)(t-t_0)}{\breve\mu}\Big\}\\
&\le\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\cdot\frac{2\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\alpha}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}.
\end{aligned}$$
By the global exponential stability of (6), we have
(73)
$$\begin{aligned}
\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\le{}&2\,\mathbb{E}\big(\|u(t,x)-\bar u(t,x)\|_2^2+\|v(t,x)-\bar v(t,x)\|_2^2\big)+2\,\mathbb{E}\big(\|u(t,x)\|_2^2+\|v(t,x)\|_2^2\big)\\
\le{}&\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\cdot\frac{4\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\alpha}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}\\
&+2\alpha\big(\|\phi\|_2^2+\|\psi\|_2^2\big)\exp\{-\beta(t-t_0)\}.
\end{aligned}$$
Moreover,
(74)
$$\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\le\Big[\frac{4\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\alpha}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}\Big]\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big).$$
From (62), when $(\lambda,\sigma)$ lies in the interior of the closed curve described by the transcendental equation, we have
(75)
$$\frac{4\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\alpha}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}<1.$$
Let
(76)
$$\gamma=-\frac{1}{\Delta}\log\Big[\frac{4\hat\mu\big(\sigma^2+\lambda^2(\|\hat A\|+\|\hat B\|)\big)\alpha}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_4+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}\Big]>0.$$
By (74), we have
(77)
$$\sup_{t_0+\Delta\le t\le t_0+2\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big)\le\exp(-\gamma\Delta)\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\big(\|\bar u(t,x)\|_2^2+\|\bar v(t,x)\|_2^2\big).$$
Similar to the proof of Theorem 5, we can prove that the neural networks (59) are mean square globally exponentially stable and also almost surely globally exponentially stable.
To continue, we consider parameter uncertainty added to the connection weight matrices $(C,E)^T$ of the neural networks (16). The neural networks (16) then become
(78)
$$\begin{aligned}
d\bar u_i(t,x)&=\Big\{\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}(r(t))\frac{\partial\bar u_i(t,x)}{\partial x_k}\Big)-a_i(r(t))\bar u_i(t,x)+\sum_{j=1}^{n}(1+\delta)c_{ji}(r(t))f_j(\bar v_j(t,x))\Big\}dt+\sigma\bar u_i(t,x)\,dW(t),\\
d\bar v_j(t,x)&=\Big\{\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D^*_{jk}(r(t))\frac{\partial\bar v_j(t,x)}{\partial x_k}\Big)-b_j(r(t))\bar v_j(t,x)+\sum_{i=1}^{m}(1+\delta)e_{ij}(r(t))g_i(\bar u_i(t,x))\Big\}dt+\sigma\bar v_j(t,x)\,dW(t).
\end{aligned}$$
The initial conditions and boundary conditions are given by
(79)
$$\begin{aligned}
&\bar u_i(t_0,x)=\phi_i(x), && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ i=1,2,\dots,m,\\
&\bar v_j(t_0,x)=\psi_j(x), && x\in\Omega_0,\ t_0\in\mathbb{R}_+,\ j=1,2,\dots,n,\\
&\frac{\partial\bar u_i(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial\bar u_i(t,x)}{\partial x_1},\dots,\frac{\partial\bar u_i(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ i=1,2,\dots,m,\\
&\frac{\partial\bar v_j(t,x)}{\partial\vec n}\Big|_{\partial\Omega_0}=\Big(\frac{\partial\bar v_j(t,x)}{\partial x_1},\dots,\frac{\partial\bar v_j(t,x)}{\partial x_l}\Big)^T=0, && (t,x)\in[t_0,+\infty)\times\partial\Omega_0,\ j=1,2,\dots,n,
\end{aligned}$$
where $\delta$ is the uncertainty intensity of the connection weight matrices $(C,E)^T$ and $\sigma$ is the noise intensity.
We rewrite (78) as follows:
(80)
$$\begin{aligned}
d\bar u(t,x)&=\{\nabla\cdot(D(r(t))\circ\nabla\bar u)-A(r(t))\bar u(t,x)+(1+\delta)C(r(t))f(\bar v(t,x))\}dt+\sigma\bar u(t,x)\,dW(t),\\
d\bar v(t,x)&=\{\nabla\cdot(D^*(r(t))\circ\nabla\bar v)-B(r(t))\bar v(t,x)+(1+\delta)E(r(t))g(\bar u(t,x))\}dt+\sigma\bar v(t,x)\,dW(t).
\end{aligned}$$
Given the globally exponentially stable neural networks (6), we will characterize how much connection weight uncertainty and stochastic noise the stochastic neural networks (78) can tolerate while maintaining global exponential stability.
Theorem 7.
Let Assumption (H1) hold and let the neural networks (6) be globally exponentially stable. Then the neural networks (78) are mean square globally exponentially stable and also almost surely globally exponentially stable, if there exist $\mu_q>0$ $(q\in\mathbb{S})$ and $(\delta,\sigma)$ lies in the interior of the closed curve described by the following transcendental equation:
(81)
$$\frac{4\hat\mu\big(\sigma^2+\delta^2(K^2\|\hat C\|+L^2\|\hat E\|)\big)\alpha}{\beta}\exp\Big\{\frac{2\Delta\big(\hat\mu C_5+\max_{p\in\mathbb{S}}\sum_{q=1}^{N}\gamma_{pq}\mu_q\big)}{\breve\mu}\Big\}+2\alpha\exp\{-\beta\Delta\}=1,\qquad
\Delta>\frac{\ln(2\alpha)}{\beta}>0,$$
where $C_5=\big[2(\|\hat A\|+\|\hat B\|)+\big(2+(1+2\delta^2)K^2\big)\|\hat C\|+\big(2+(1+2\delta^2)L^2\big)\|\hat E\|+2\sigma^2\big]$, $\|\hat A\|=\max_{p\in\mathbb{S}}\|A(p)\|$ and so forth, $\hat\mu=\max_{p\in\mathbb{S}}\mu_p$, and $\breve\mu=\min_{p\in\mathbb{S}}\mu_p$.
The proof is similar to the proof of Theorem 6.