We try to pave a smooth road to a proper understanding of control problems in terms of mathematical disciplines, and we partially show how to number-theorize some practical problems. Our primary concern is linear systems from the point of view of our principle of visualization of the state, an interface between the past and the present. We view all the systems as embedded in the state equation, thus visualizing the state. Then we go on to treat the chain-scattering representation of the plant of Kimura 1997, which includes the feedback connection in a natural way, and we consider the H∞-control problem in this framework. We may view in particular the unity feedback system as accommodated in the chain-scattering representation, giving a better insight into the structure of the system. Its homographic transformation works as the action of the symplectic group on the Siegel upper half-space in the case of constant matrices. Both H∞- and PID-controllers have been applied successfully to EV control by J.-Y. Cao and B.-G. Cao 2006 and Cao et al. 2007, and we may unify them in our framework. Finally, we mention some similarities between control theory and zeta-functions.
1. Introduction and Preliminaries
It turns out that there is a great similarity between control theory and number theory in their treatment of signals in the time domain (t) and in the frequency domain (ω, s = σ + jω); the passage between the two domains is conducted by the Laplace transform in the case of control theory, while in the theory of zeta-functions this role is played by the Mellin transform, both of which convert signals in the time domain to those in the right half-plane. For integral transforms, compare Section 11.
Section 5 introduces the Hardy space Hp, which consists of functions analytic in the right half-plane ℛℋ𝒫 (σ > 0).
2. State Space Representation and the Visualization Principle
Let x = x(t) ∈ ℝn, u = u(t) ∈ ℝr, and y = y(t) ∈ ℝm be the state function, input function, and output function, respectively. We write ẋ for (d/dt)x. The system of differential equations (DEs)

ẋ = Ax + Bu,  y = Cx + Du  (2.1)

is called a state equation for a linear system, where A ∈ Mn,n(ℝ), B ∈ Mn,r(ℝ), C ∈ Mm,n(ℝ), and D ∈ Mm,r(ℝ) are given constant matrices.
The state x is not visible while the input and output are, and the state may be thought of as an interface between the past and the present information since it contains all the information accumulated in the system from the past. The x being invisible, (2.1) would read

y = Du,  (2.2)
which appears in many places in the literature in disguised form. All the subsequent systems, for example, (3.1), are variations of (2.2). Whenever we would like to obtain the state equation, we are to restore the state x and make recourse to (2.1), which we call the visualization principle. In the case of a feedback system, it is often the case that (2.2) is given in the form of (3.8). It is quite remarkable that this controller S works as the matrix variable in symplectic geometry (compare Section 4).
Using the matrix exponential function eAt, the first equation in (2.1) can be solved in the same way as in the scalar case:

x = x(t) = eAt x(0) + ∫₀ᵗ eA(t-τ) B u(τ) dτ.  (2.3)
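As a quick sanity check, the variation-of-constants formula can be verified numerically in the scalar case n = r = 1; the values of a, b, the initial state, and the input below are arbitrary test data, not taken from the text:

```python
import math

# scalar state equation x' = a x + b u(t), with a hypothetical input u(t) = cos t
a, b, x0, t = -1.5, 2.0, 0.7, 3.0
u = math.cos

# left-hand side: direct numerical solution of the DE by small Euler steps
n = 400000
h = t / n
x = x0
for k in range(n):
    x += h * (a * x + b * u(k * h))

# right-hand side: the closed formula e^{at} x(0) + ∫_0^t e^{a(t-τ)} b u(τ) dτ,
# with the integral evaluated by the midpoint rule
m = 200000
hm = t / m
integral = sum(math.exp(a * (t - (j + 0.5) * hm)) * b * u((j + 0.5) * hm)
               for j in range(m)) * hm
x_formula = math.exp(a * t) * x0 + integral

assert abs(x - x_formula) < 1e-3
```

The agreement of the two computations illustrates that the integrand must carry u(τ), not u(t), and that B sits inside the integral.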
Definition 2.1.
A linear system with the input u = o,

ẋ = (d/dt)x = Ax,  (2.4)

called an autonomous system, is said to be asymptotically stable if, for all initial values, x(t) → o as t → ∞.
Since the solution of (2.4) is given by

x = eAt x(0),  (2.5)

the system is asymptotically stable if and only if

‖eAt‖ ⟶ 0 as t ⟶ ∞.  (2.6)
A linear system is said to be stable if (2.6) holds, which is the case if and only if all the eigenvalues of A have negative real parts. Compare Section 5 in this regard. It also amounts to saying that the step response of the system approaches a limit as time elapses, where the step response means the response

y(t) = ∫₀ᵗ C eA(t-τ) B u(τ) dτ + D u(t),  (2.7)

with the unit step function u = u(t) as the input function, which is 0 for t < 0 and 1 for t ≥ 0.
Up to here, things have been happening in the time domain. We now move to the frequency domain. For this purpose, we refer to the Laplace transform to be discussed in Section 11. It has the effect of shifting from the time domain to the frequency domain and vice versa. For more details, see, for example, [1]. Taking the Laplace transform of (2.1) with x(0) = o, we obtain

sX(s) = AX(s) + BU(s),  Y(s) = CX(s) + DU(s),  (2.8)
which we solve as

Y(s) = G(s)U(s),  (2.9)

where

G(s) = C(sI - A)⁻¹B + D,  (2.10)

where I indicates the identity matrix, which is sometimes denoted by In to show its size.
In general, supposing that the initial values of all the signals in a system are 0, we call the ratio output/input of the signal the transfer function and denote it by G(s), Φ(s), and so forth. We may suppose so because, if the system is in equilibrium, then we may take the values of the parameters at that moment as standard and may suppose the initial values to be 0.
Equation (2.10) is called the state space representation (form, realization, description, characterization) of the transfer function G(s) of the system (2.1) and is written as

G(s) = [A, B; C, D].  (2.11)
According to the visualization principle above, we have the embedding principle. Given a state space representation of a transfer function G(s), it is to be embedded in the state equation (2.1).
Example 2.2.
If
G(s) = [A, B; C, D] = [0, 1, 0; -2, -3, 1; -10, -2, 2],  (2.12)
then it follows from (2.10) that

G(s) = (-10, -2)[s, -1; 2, s+3]⁻¹[0; 1] + 2
     = (1/((s+1)(s+2))) (-10, -2)[s+3, 1; -2, s][0; 1] + 2
     = (-2s - 10)/((s+1)(s+2)) + 2
     = 2(s+3)(s-1)/((s+1)(s+2)).  (2.13)
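The computation can be checked numerically; the sketch below (plain Python, no libraries) evaluates C(sI - A)⁻¹B + D for the data of Example 2.2 at a test point and compares it with the closed form:

```python
# Data of Example 2.2
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [-10.0, -2.0]
D = 2.0

def G(s):
    # entries of sI - A, inverted by the 2x2 adjugate formula
    m11, m12 = s - A[0][0], -A[0][1]
    m21, m22 = -A[1][0], s - A[1][1]
    det = m11 * m22 - m12 * m21
    # v = (sI - A)^{-1} B
    v0 = (m22 * B[0] - m12 * B[1]) / det
    v1 = (-m21 * B[0] + m11 * B[1]) / det
    return C[0] * v0 + C[1] * v1 + D

s = 1.7  # arbitrary test point away from the poles -1, -2
expected = 2 * (s + 3) * (s - 1) / ((s + 1) * (s + 2))
assert abs(G(s) - expected) < 1e-12
```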
The principle above will establish the most important cascade connection (concatenation rule) [1, (2.13), page 15]. Given two state space representations

Gk(s) = [Ak, Bk; Ck, Dk],  k = 1, 2,  (2.14)

their cascade connection G(s) = G1(s)G2(s) is given by

G(s) = G1(s)G2(s) = [A1, B1C2, B1D2; O, A2, B2; C1, D1C2, D1D2].  (2.15)
Proof of (2.15).
We have the input/output relation (2.10)
Y(s) = G1(s)U(s),  U(s) = G2(s)V(s),  (2.16)
which means that
ẋ = A1x + B1u,  y = C1x + D1u,  ξ̇ = A2ξ + B2v,  u = C2ξ + D2v.  (2.17)
Eliminating u, we conclude that

ẋ = A1x + B1C2ξ + B1D2v,  y = C1x + D1C2ξ + D1D2v.  (2.18)
Hence
[ẋ; ξ̇] = [A1, B1C2; O, A2][x; ξ] + [B1D2; B2]v,  y = (C1, D1C2)[x; ξ] + D1D2v,
whence we conclude (2.15).
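A small numerical illustration of the concatenation rule, using two one-state (scalar) subsystems with arbitrary test data:

```python
# One-state subsystems G_k = [a_k, b_k; c_k, d_k], so g_k(s) = c_k b_k/(s - a_k) + d_k.
a1, b1, c1, d1 = -1.0, 2.0, 3.0, 0.5
a2, b2, c2, d2 = -4.0, 1.5, -2.0, 1.0

def g(a, b, c, d, s):
    return c * b / (s - a) + d

# cascade realization from the block formula (2.15)
A = [[a1, b1 * c2], [0.0, a2]]
B = [b1 * d2, b2]
C = [c1, d1 * c2]
D = d1 * d2

def transfer(s):
    # (sI - A) is upper triangular here, so its inverse is explicit
    i00 = 1.0 / (s - A[0][0])
    i11 = 1.0 / (s - A[1][1])
    i01 = A[0][1] * i00 * i11
    v0 = i00 * B[0] + i01 * B[1]
    v1 = i11 * B[1]
    return C[0] * v0 + C[1] * v1 + D

s = 0.9  # arbitrary test point
assert abs(transfer(s) - g(a1, b1, c1, d1, s) * g(a2, b2, c2, d2, s)) < 1e-12
```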
Example 2.3.
Given two state space representations (2.14), their parallel connection G(s) = G1(s) + G2(s) is given by

G(s) = G1(s) + G2(s) = [A1, O, B1; O, A2, B2; C1, C2, D1 + D2].  (2.21)
Indeed, we have the first two equations of (2.17) for G1 with input u, and for G2 we have

ξ̇ = A2ξ + B2u,  z = C2ξ + D2u,

the total output being y + z. Hence, for the stacked state [x; ξ], we have

[ẋ; ξ̇] = [A1, O; O, A2][x; ξ] + [B1; B2]u,  y + z = (C1, C2)[x; ξ] + (D1 + D2)u,
whence (2.21) follows.
As an example, combining (2.15) and (2.21), we deduce

I - G1(s)G2(s) = [A1, B1C2, B1D2; O, A2, B2; -C1, -D1C2, I - D1D2].
Example 2.4.
For (2.1), we consider the inversion U(s) = G⁻¹(s)Y(s), assuming D is invertible. Solving the second equality in (2.1) for u, we obtain

u = -D⁻¹Cx + D⁻¹y.

Substituting this in the first equality in (2.1), we obtain

ẋ = (A - BD⁻¹C)x + BD⁻¹y.
Whence

G⁻¹(s) = [A - BD⁻¹C, BD⁻¹; -D⁻¹C, D⁻¹].
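Again in the scalar case, one can check that the realization above indeed produces G⁻¹; the data below are arbitrary, with D ≠ 0:

```python
a, b, c, d = -2.0, 1.0, 3.0, 2.0  # arbitrary scalar data, d invertible

def g(s):
    return c * b / (s - a) + d

# inverse realization [a - b d^{-1} c, b d^{-1}; -d^{-1} c, d^{-1}]
ai, bi, ci, di = a - b * c / d, b / d, -c / d, 1.0 / d

def g_inv(s):
    return ci * bi / (s - ai) + di

s = 1.3  # arbitrary test point
assert abs(g(s) * g_inv(s) - 1.0) < 1e-12
```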
Example 2.5.
If the transfer function
Θ(s)=(Θ11Θ12Θ21Θ22)
has a state space representation
Θ(s) = [A, B; C, D] = [A, B1, B2; C1, D11, D12; C2, D21, D22],
then we are to embed it in the linear system
ẋ = Ax + (B1, B2)[b1; b2],  [a1; a2] = y = [C1; C2]x + [D11, D12; D21, D22][b1; b2].
3. Chain-Scattering Representation
Following [1, pages 7 and 67], we first give the definition of a chain-scattering representation of a system.
Suppose a1 ∈ ℝm, a2 ∈ ℝq, b1 ∈ ℝr, and b2 ∈ ℝp are related by

[a1; a2] = P[b1; b2],  (3.1)

where

P = [P11, P12; P21, P22].  (3.2)
According to the embedding principle, this is to be thought of as y=Su corresponding to the second equality in (2.1).
Equation (3.1) means that

a1 = P11 b1 + P12 b2,  a2 = P21 b1 + P22 b2.  (3.3)
Assume that P21 is a (square) regular matrix (whence q = r). Then from the second equality of (3.3), we obtain

b1 = P21⁻¹(a2 - P22 b2) = -P21⁻¹P22 b2 + P21⁻¹a2.  (3.4)
Substituting (3.4) in the first equality of (3.3), we deduce that

a1 = (P12 - P11 P21⁻¹ P22) b2 + P11 P21⁻¹ a2.  (3.5)
Hence, putting

Θ = CHAIN(P) = [P12 - P11 P21⁻¹ P22, P11 P21⁻¹; -P21⁻¹ P22, P21⁻¹] = [Θ11, Θ12; Θ21, Θ22],  (3.6)

which is usually referred to as a chain-scattering representation of P, we obtain an equivalent form of (3.1):

[a1; b1] = CHAIN(P)[b2; a2] = [Θ11, Θ12; Θ21, Θ22][b2; a2].  (3.7)
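For scalar blocks Pij (arbitrary test values, P21 ≠ 0), the passage from (3.1) to the chain-scattering form can be verified directly:

```python
# scalar plant blocks -- hypothetical test numbers, P21 invertible
P11, P12, P21, P22 = 0.8, -1.2, 2.0, 0.5

b1, b2 = 1.4, -0.3                  # arbitrary inputs
a1 = P11 * b1 + P12 * b2            # relation (3.3)
a2 = P21 * b1 + P22 * b2

# CHAIN(P) from (3.6)
T11 = P12 - P11 * P22 / P21
T12 = P11 / P21
T21 = -P22 / P21
T22 = 1.0 / P21

# (a1, b1) = CHAIN(P)(b2, a2), i.e., relation (3.7)
assert abs(T11 * b2 + T12 * a2 - a1) < 1e-12
assert abs(T21 * b2 + T22 * a2 - b1) < 1e-12
```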
Suppose that a2 is fed back to b2 by

b2 = Sa2,  (3.8)

where S is a controller. Multiplying the second equality in (3.3) by S and incorporating (3.8), we find that

b2 = Sa2 = SP21 b1 + SP22 b2,  (3.9)

whence b2 = (I - SP22)⁻¹SP21 b1 = S(I - P22 S)⁻¹P21 b1.
Let the closed-loop transfer function Φ be defined by

a1 = Φ b1.

Then Φ is given by

Φ = P11 + P12 S(I - P22 S)⁻¹P21.  (3.11)

Equation (3.11) is sometimes referred to as a linear fractional transformation and denoted by LF(P; S).
Substituting (3.8), (3.7) becomes

[a1; b1] = [Θ11 S + Θ12; Θ21 S + Θ22] a2,  (3.13)

whence we deduce that

Φ = (Θ11 S + Θ12)(Θ21 S + Θ22)⁻¹ = ΘS,  (3.14)

the linear fractional transformation (which is referred to as a homographic transformation and denoted by HM(Θ; S)), where in the last equality we mean the action of Θ on the variable S. We must impose the nondegeneracy condition |Θ| ≠ 0; then Θ ∈ GLm+r(ℝ).
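In the scalar case, the homographic form of Φ agrees with the linear fractional transformation built directly from P; the numbers below are arbitrary test data:

```python
# hypothetical scalar plant blocks and controller
P11, P12, P21, P22 = 0.8, -1.2, 2.0, 0.5
S = 0.6

# closed loop via the linear fractional transformation (3.11)
phi_lf = P11 + P12 * S * P21 / (1.0 - P22 * S)

# closed loop via the homographic transformation (3.14) of Theta = CHAIN(P)
T11 = P12 - P11 * P22 / P21
T12 = P11 / P21
T21 = -P22 / P21
T22 = 1.0 / P21
phi_hm = (T11 * S + T12) / (T21 * S + T22)

assert abs(phi_lf - phi_hm) < 1e-12
```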
If S is obtained from S′ under the action of Θ′, S = Θ′S′, then composing with (3.14) yields Φ = ΘS = Θ(Θ′S′) = JS′ with J = ΘΘ′, that is,

J = ΘΘ′,  HM(Θ; HM(Θ′; S′)) = HM(ΘΘ′; S′),  (3.15)
which is referred to as the cascade connection or the cascade structure of Θ and Θ′.
Thus the chain-scattering representation of a system allows us to treat the feedback connection as a cascade connection.
Suppose a closed-loop system is given with z = a1 ∈ ℝm, y = a2 ∈ ℝq, w = b1 ∈ ℝr, and u = b2 ∈ ℝp, and Φ given by (3.11).
H∞-Control Problem. Find a controller S such that the closed-loop system is internally stable and the transfer function Φ satisfies

‖Φ‖∞ < γ  (3.16)
for a positive constant γ. For the meaning of the norm, compare Section 5.
4. Siegel Upper Space
Let * denote the conjugate transpose of a square matrix, S* = S̄ᵗ, and let the imaginary part of S be defined by Im S = (1/2j)(S - S*). Let ℋn be the Siegel upper half-space consisting of all symmetric matrices S (recall (3.8)) whose imaginary parts are positive definite (Im S > 0, i.e., all the eigenvalues of Im S are positive):

ℋn = {S ∈ Mn(ℂ) | Im S > 0, S = Sᵗ},  (4.1)

and let Sp(n, ℝ) denote the symplectic group of order n:

Sp(n, ℝ) = { Θ = [Θ11, Θ12; Θ21, Θ22] | [Θ11, Θ12; Θ21, Θ22]⁻¹ = [Θ22ᵗ, -Θ12ᵗ; -Θ21ᵗ, Θ11ᵗ] }.  (4.2)
The action of Sp(n, ℝ) on ℋn is defined by (3.14), which we restate as

ΘS = (Θ11 S + Θ12)(Θ21 S + Θ22)⁻¹ (= Φ).  (4.3)
Theorem 4.1.
For a controller S living in the Siegel upper half-space, its rotation Z = -jS lies in the right half-space ℛℋ𝒮, that is, it is stable, having positive real parts. For the controller Z, the feedback connection

-jb2 = Z(-ja2)  (4.4)
is accommodated in the cascade connection of the chain-scattering representation Θ, which is then viewed as the action (3.15) of Θ ∈ Sp(n, ℝ) on S ∈ ℋn:
(ΘΘ′)S = Θ(Θ′S), or HM(Θ; HM(Θ′; S)) = HM(ΘΘ′; S),  (4.5)
where Θ is subject to the condition
ΘᵗUΘ = U,  (4.6)

with U = [O, In; -In, O]. An FOPID controller (in Section 8), being a unity feedback connection, is also accommodated in this framework.
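For n = 1, where Sp(1, ℝ) = SL2(ℝ) acts on the ordinary upper half-plane, the composition law (4.5) and the preservation of Im S > 0 can be checked numerically with arbitrary test matrices:

```python
def hm(T, S):
    # homographic (Moebius) action of a 2x2 matrix on a complex variable
    (t11, t12), (t21, t22) = T
    return (t11 * S + t12) / (t21 * S + t22)

def matmul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

# two elements of SL2(R) (determinant 1), chosen as test data
T1 = [[2.0, 1.0], [1.0, 1.0]]
T2 = [[1.0, -3.0], [0.0, 1.0]]

S = 0.4 + 1.3j  # a point of the upper half-plane H1
composed = hm(matmul(T1, T2), S)
stepwise = hm(T1, hm(T2, S))

assert abs(composed - stepwise) < 1e-12   # HM(T1; HM(T2; S)) = HM(T1 T2; S)
assert hm(T1, S).imag > 0                 # the action preserves Im S > 0
```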
Remark 4.2.
With this action, we may introduce the orbit decomposition of ℋn and whence the fundamental domain. We note that, in the special case of n = 1, we have ℋ1 = ℋ and Sp(1, ℝ) = SL2(ℝ), and the theory of modular forms of one variable is well known. Siegel modular forms are a generalization of the one-variable case to several variables. As in the case of the sushmna principle in [2], there is a need to rotate the upper half-space into the right half-space ℛℋ𝒮, which is a counterpart of the right half-plane ℛℋ𝒫. In the case of Siegel modular forms, the matrices are constant, while in control theory, they are analytic functions (mostly rational functions analytic in ℛℋ𝒫). A general theory would be useful for control theory. See Section 7 for physically realizable cases. There are many research problems lying in this direction.
5. Norm of the Function Spaces
The norm of x = (x1, …, xn)ᵗ ∈ ℂⁿ is defined to be the Euclidean norm

‖x‖ = ‖x‖₂ = (∑j=1..n |xj|²)^(1/2),  (5.1)

or the sup norm

‖x‖ = ‖x‖∞ = max{|x1|, …, |xn|},  (5.2)
or anything that satisfies the axioms of the norm. They introduce the same topology on ℂn.
The definition of the norm of a matrix may be given in a similar way by viewing its elements as an n²-dimensional vector, that is, embedding it in ℂ^(n²). If A = (aij), 1 ≤ i, j ≤ n, then

‖A‖ = ‖A‖₂ = (∑i,j=1..n |aij|²)^(1/2),  (5.3)
or otherwise.
The sup norm is the limit of the p-norm as p → ∞: for a = (a1, …, an),

lim p→∞ ‖a‖p = lim p→∞ (∑k=1..n |ak|^p)^(1/p) = ‖a‖∞ = max 1≤k≤n |ak|.  (5.4)

Indeed, suppose |a1| = max 1≤k≤n |ak|. Then, for any p > 0,

|a1| = (|a1|^p)^(1/p) ≤ (∑k=1..n |ak|^p)^(1/p).
On the other hand, since |a1| ≥ |ak|, 1 ≤ k ≤ n, we obtain

(∑k=1..n |ak|^p)^(1/p) = |a1| (1 + ∑k=2..n |ak/a1|^p)^(1/p) ≤ |a1| (1 + (n-1))^(1/p).  (5.5)

For p > 1, the Bernoulli inequality gives (1 + (n-1))^(1/p) ≤ 1 + (n-1)/p → 1 as p → ∞. Hence the right-hand side of (5.5) tends to |a1|.
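A short numerical illustration of (5.4) for a sample vector:

```python
# arbitrary sample vector
a = [3.0, -4.0, 1.5, 2.5]
sup = max(abs(x) for x in a)

def p_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

# the p-norms decrease toward the sup norm as p grows
assert p_norm(a, 2) >= p_norm(a, 8) >= p_norm(a, 64) >= sup
assert abs(p_norm(a, 64) - sup) < 0.1
```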
The proof of (5.4) can be readily generalized to give

lim p→∞ ‖f‖p = ‖f‖∞ = sup t≥0 |f(t)|.  (5.6)

The p-norm in (5.6) is defined by

‖f‖p = (∫₀^∞ ‖f(t)‖^p dt)^(1/p),  (5.7)
where ‖f(t)‖ is any Euclidean norm. Note that the elements are not ordinary functions but classes of functions which are regarded as the same if they differ only on a set of measure 0. Lp is a Banach space (i.e., a complete normed space), and in particular L² is a Hilbert space. The 2-norm ‖·‖₂ is induced from the inner product

⟨f, g⟩ = ∫₀^∞ f*(t)g(t) dt,  ‖f‖₂ = ⟨f, f⟩^(1/2),  (5.8)

where * refers to the transposed complex conjugate. The Parseval identity holds true if and only if the orthonormal system is complete.
However, the restriction that ‖f(t)‖ → 0 as t → ∞ excludes signals of infinite duration, such as unit step signals or periodic ones, from Lp. To circumvent this inconvenience, the averaged norm M₂(f) = M₂(f, T) = (1/T)∫₀^T ‖f(t)‖² dt, or similar, is important, and the power norm has been introduced:

power(f) = lim T→∞ M₂(f, T)^(1/2) = lim T→∞ ((1/T)∫₀^T ‖f(t)‖² dt)^(1/2).  (5.9)
Remark 5.1.
In mathematics and in particular in analytic number theory, studying the mean square in the form of a sum or an integral is quite common. Especially, this idea is applied to finding out the true order of magnitude of the error term on average. Such an average result will give a hint on the order of the error term itself.
Example 5.2.
Let ζ(s) denote the Riemann zeta-function defined for σ > 1 (s = σ + it) in the first instance, where it is analytic, and then continued meromorphically over the whole complex plane with a simple pole at s = 1. It is essential that it does not vanish on the line σ = 1 for the prime number theorem (PNT) to hold. The plausible best bound for the error term for the PNT is equivalent to the celebrated Riemann hypothesis (RH) to the effect that the Riemann zeta-function does not vanish off the critical line σ = 1/2, that is, all the nontrivial zeros lie on it. Since the values on the critical line are expected to be small, the averaged norm M₂(ζ) or M₄(ζ), that is, the mean value (1/T)∫₀^T |ζ((1/2) + it)|^(2k) dt for k = 1, 2, is of great interest, and there has appeared a great deal of research on the subject. The first result for M₄(ζ) is due to Ingham, who used the approximate functional equation for the Riemann zeta-function to obtain
M₄(ζ) = (1/T)∫₀^T |ζ(1/2 + it)|⁴ dt = (1/(2π²)) log⁴T (1 + o(1)),  (5.10)

as T → ∞. See, for example, [3]. The main interest in such estimates as (5.10) lies in the fact that an estimate, for all k ∈ ℕ,

M₂ₖ(ζ) = (1/T)∫₀^T |ζ(1/2 + it)|^(2k) dt = O(T^a)  (5.11)
would imply the weak Lindelöf hypothesis (LH) in the form
ζ(12+it)=O(T(a/2k)+ε),
for every ε>0. It is apparent that the RH implies the LH.
The Hardy space Hp (cf., e.g., [1, page 39]) is well known. It consists of all f(s) which are analytic in the right half-plane ℛℋ𝒫 (σ > 0) and such that f(jω) ∈ Lp; in particular, H∞ carries the sup norm. Thus the H∞-control problem is about those (rational) functions which are analytic in ℛℋ𝒫, a fortiori stable, with regard to the sup norm. Thus the above-mentioned mean-value problem for the Riemann zeta-function is related to the H^(2k)-control problem with finite Dirichlet series (the main ingredients in the approximate functional equation). Since the H∞-control problem asks for all individual values, it flows afar from the H^(2k)-control problem and goes up to the LH or the RH.
6. (Unity) Feedback System
The synthesis problem of a controller of the unity feedback system, depicted in Figure 1, refers to the sensitivity reduction problem, which asks for the estimation of the sensitivity function S = S(s) multiplied by an appropriate frequency weighting function W = W(s), where

S = (I + PC)⁻¹  (6.1)

is the transfer function from r to e, C = K being a compensator and P a plant. The problem consists in reducing the magnitude of S over a specified frequency range Ω, which amounts to finding a compensator C stabilizing the closed-loop system such that

‖WS‖∞ < γ  (6.2)

for a positive constant γ.
Figure 1: Unity feedback system.
To accommodate this in the H∞-control problem of Section 3, we choose the matrix elements Pij of P in such a way that the closed-loop transfer function Φ in (3.11) coincides with WS. First we are to choose P22 = -P. Then we would choose P12P21 = -WP. Then Φ becomes P11 - WPC(I + PC)⁻¹ = P11 - W + W(I + PC)⁻¹. Hence, choosing P11 = W, we have Φ = WS. Hence we may choose, for example,

P = [P11, P12; P21, P22] = [W, -WP; E, -P].  (6.3)
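A scalar sanity check of this choice of plant; here W, P, C are arbitrary test numbers, and the sign convention P12 = -WP is used so that the linear fractional transformation collapses to the weighted sensitivity (in the scalar case all the orderings of the factors coincide):

```python
# hypothetical scalar weight, plant, and compensator
W, Pp, C = 0.7, 1.9, 2.3

# plant blocks (W, -W*P; 1, -P)
P11, P12, P21, P22 = W, -W * Pp, 1.0, -Pp

# closed loop Phi = P11 + P12 C (1 - P22 C)^{-1} P21
phi = P11 + P12 * C * P21 / (1.0 - P22 * C)

sensitivity = 1.0 / (1.0 + Pp * C)   # S = (1 + PC)^{-1}
assert abs(phi - W * sensitivity) < 1e-12
```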
Example 6.1.
First we treat the case of a general feedback scheme. Denoting the Laplace transforms by the corresponding capital letters, we have

Y = PR + PU,  U = KE,

whence Y = PR + PKE. Now if it so happens that e = r - y and P is replaced by PK, that is, in the case of unity feedback, we derive (6.1) directly from Figure 2. We have E = R - Y, so that Y = PR + PK(R - Y). Solving in Y, we deduce that Y = (I + PK)⁻¹PKR.
Taking the disturbance d into account, since U = CE = C(R - Y), we obtain

Y = PU + PD = PC(R - Y) + PD,

whence it follows that Y = (I + PC)⁻¹PCR + (I + PC)⁻¹PD. In the case where d = 0, PC being the open-loop transfer function, E = R - Y = SR is the tracking error for the input R. Hence (6.1) holds true.
7. J-Lossless Factorization and Dualization
In this section we mostly follow Helton ([4-6]), who uses the unit ball in place of ℛℋ𝒫; the two settings shift to each other under a conformal map. For conventional control theory, the unit circle is to be replaced by the critical line (σ = 0). In practice, what appears is the algebra of functions ([5, page 2], Table 1)

ℛ = {functions defined on the unit ball having a rational continuation to the whole space},  (7.1)

or the still larger algebra Ψ consisting of those functions which have (pseudo)meromorphic continuations ([5, footnote 6, page 27]). The occurrence of the gamma function [5, Figure 2.5, page 17] justifies our incorporation of more advanced special functions and ultimately zeta-functions in control theory (see Section 13).
Table 1: Correspondence between control systems and zeta-functions.

System | Functions   | Action     | Region of convergence | Critical line
S      | Rational    | Symplectic | σ > 0                 | σ = 0
ζ      | Meromorphic | Modular    | σ > 1                 | σ = 1/2
Along with the algebra ℛ, one considers

BH∞ = {F | F analytic on the unit ball, with supremum norm < 1}.  (7.2)

Then a mapping Θ ∈ ℛU(m, n) acting on BH∞ must satisfy the J-lossless property: with Θ an (m+n)×(m+n) matrix,

Θ* Jmn Θ ≤ Jmn,  (7.3)

which is interpreted as the power preservation of the system in the chain-scattering representation (3.6) ([1, page 82]).
We now briefly refer to the dual chain-scattering representation of the plant P in (3.2). We assume that P12 is a square invertible matrix (whence m = p). Then the argument goes in parallel to that leading to (3.7). Defining the dual chain-scattering matrix by

DCHAIN(P) = [P12⁻¹, -P12⁻¹P11; P22P12⁻¹, P21 - P22P12⁻¹P11],  (7.4)

we obtain

CHAIN(P) · DCHAIN(P) = E.  (7.5)
8. FOPID
“FO” means “fractional order” and “PID” refers to “proportional, integral, differential”: “proportional” means just a constant times the input function e(t), “integral” means the fractional-order integration Dt^(-λ) of e(t) (λ > 0), and “differential” the fractional-order differentiation Dt^δ of e(t) (δ > 0).
The FO PI^λD^δ controller (control signal in the time domain) is one of the most refined feed-forward compensators, defined by

u(t) = (Kp + Ki Dt^(-λ) + Kd Dt^δ) e(t),  (8.1)

where u is the control input, e is the deviation, and Kp, Ki, Kd are constant parameters which are to be specified (Kp: the position feedback gain, Kd: the velocity feedback gain). DE (8.1) translates into the frequency-domain relation

U(s) = C(s)E(s),  (8.2)

where U, E indicate the Laplace transforms of u, e, respectively, and C(s) is the compensator's continuous transfer function

C(s) = Kp + Ki s^(-λ) + Kd s^δ.  (8.3)
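The compensator (8.3) is easy to evaluate numerically; the gains below are arbitrary test values, and for λ = δ = 1 the controller reduces to the classical PID compensator:

```python
def fopid(s, Kp, Ki, Kd, lam, delta):
    # C(s) = Kp + Ki s^{-lam} + Kd s^{delta}, principal branch for complex powers
    return Kp + Ki * s ** (-lam) + Kd * s ** delta

# hypothetical gains
Kp, Ki, Kd = 1.0, 0.5, 0.2
s = 0.3 + 2.0j  # a test point in the right half-plane

# lam = delta = 1 gives the classical PID controller Kp + Ki/s + Kd s
classical = Kp + Ki / s + Kd * s
assert abs(fopid(s, Kp, Ki, Kd, 1.0, 1.0) - classical) < 1e-12

# a genuinely fractional example (lam = 0.5, delta = 0.7) is just as easy to evaluate
val = fopid(s, Kp, Ki, Kd, 0.5, 0.7)
```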
The derivation of (8.3) from (8.1) depends on the following. The general fractional calculus operator ₐDt^α is symbolically stated as

ₐDt^α = { d^α/dt^α, Re α > 0;  1, Re α = 0;  ∫ₐᵗ (dτ)^(-α), Re α < 0, }  (8.4)

where a and t are the lower and upper limits of integration and α is the order of calculus.
More precisely, the definition of the fractional differintegral is given by the Riemann-Liouville expression

ₐDt^α f(t) = (1/Γ(1 - {α})) (d/dt)^([α]+1) ∫ₐᵗ (t - τ)^(-{α}) f(τ) dτ,  (8.5)

where {α} = α - [α] indicates the fractional part of α, with [α] the integral part of α. Thus we are also led to the Riemann-Liouville fractional integral transform

RL[f] = (1/Γ(μ)) ∫₀^y (y - x)^(μ-1) f(x) dx.  (8.6)
For applications, compare Section 13.
When α ∈ ℕ, (8.5) reads

ₐDt^α f(t) = (d/dt)^(α+1) ∫ₐᵗ f(τ) dτ = f^(α)(t),  (8.7)

the αth derivative of f.
We will see that the definition (8.5) is a natural outcome of the general formula for the difference operator of order α ∈ ℕ with difference y ≥ 0:

Δy^α f(x) = ∑ν=0..α (-1)^(α-ν) C(α, ν) f(x + νy).  (8.8)

If f has the αth derivative f^(α), then

Δy^α f(x) = ∫ₓ^(x+y) dt₁ ∫t₁^(t₁+y) dt₂ ⋯ ∫t(α-1)^(t(α-1)+y) f^(α)(tα) dtα.  (8.9)
The special case of (8.9) with tν = a, a + y → x (φ(t) = f^(α)(t)) reads

Δ(x-a)^α φ(x) = ∫ₐˣ dt₁ ∫ₐ^(t₁) dt₂ ⋯ ∫ₐ^(t(α-1)) φ(tα) dtα = (1/Γ(α)) ∫ₐˣ (x - t)^(α-1) φ(t) dt,  (8.10)

whose far-right-hand side is RL[φ].
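The identity between the repeated integral and the Riemann-Liouville integral can be tested numerically for α = 2 and, say, φ(t) = t²:

```python
import math

phi = lambda t: t * t
a_, x_ = 0.0, 1.0

# two-fold repeated integral of phi: for phi = t^2 the exact value is x^4/12
repeated = x_ ** 4 / 12.0

# Riemann-Liouville side: (1/Γ(2)) ∫_a^x (x - t)^{2-1} phi(t) dt, midpoint rule
n = 100000
h = (x_ - a_) / n
rl = sum((x_ - (a_ + (k + 0.5) * h)) * phi(a_ + (k + 0.5) * h) for k in range(n)) * h
rl /= math.gamma(2.0)

assert abs(rl - repeated) < 1e-6
```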
Let F(s) be the Laplace transform of the input function f(t). Then

L[₀Dt^α f](s) = s^α F(s) - ₀Dt^(α-1) f(t)|t=0,  L[₀Dt^(-α) f](s) = s^(-α) F(s).  (8.11)
9. Fourier, Mellin, and (Two-Sided) Laplace Transforms
We state the Mellin, (two-sided) Laplace, and Fourier transforms. If f(x) = O(x^(-α)), α ∈ ℝ, as x → 0+ (and f decays rapidly as x → ∞), then its Mellin transform M[f] is defined by

M[f](s) = ∫₀^∞ x^s f(x) dx/x,  σ > α.  (9.1)
Under the change of variable x = e^(-t), the Mellin transform and the two-sided Laplace transform shift to each other:

L±[φ](s) = ∫-∞^∞ e^(-st) φ(t) dt,  σ > α,  (9.2)

where we write φ(t) = f(e^(-t)).
The ordinary (one-sided) Laplace transform is obtained by multiplying the integrand by the unit step function u = u(t) (cf. the passage immediately after (2.7)):

L[φ](s) = L±[φu](s) = ∫₀^∞ e^(-st) φ(t) dt,  σ > α;  (9.3)

compare Definition 11.1.
If we fix ϰ > α and write s = ϰ + jω, G(ω) = L±[φ](ϰ + jω), g(t) = e^(-ϰt)φ(t) in (9.2), then it changes into

G(ω) = ∫-∞^∞ e^(-jωt) g(t) dt = F[g](ω),  (9.4)

the Fourier transform of g.
We explain Plancherel's theorem for functions in L²(ℝ). Let

f̂T(x) = (1/√(2π)) ∫-T^T e^(-ixt) f(t) dt.  (9.5)

Then f̂T is convergent to a function f̂ in L²:

‖f̂T - f̂‖ ⟶ 0, T ⟶ ∞,  l.i.m. T→∞ f̂T(x) = f̂(x),  (9.6)

where l.i.m. is shorthand for “limit in the mean.” The Parseval identity reads

‖f̂‖₂ = ‖f‖₂,  ∫-∞^∞ |f̂(t)|² dt = ∫-∞^∞ |f(t)|² dt.  (9.7)
If we apply (9.7) to a causal function f, then it leads to [1, (3.19)]:

(1/2π) ∫-∞^∞ |L±[f](jω)|² dω = ∫₀^∞ |f(t)|² dt.  (9.8)

Hence we see that [1, (3.19)] is indeed the Parseval identity for the Fourier (or Plancherel) transform for f ∈ L²(ℝ).
10. Examples of Second-Order Systems

10.1. Electrical Circuits
The electric current i = i(t) flowing in an electrical circuit which consists of four ingredients, electromotive force e = e(t), resistance R, inductance L, and capacitance C, satisfies

L d²i/dt² + R di/dt + (1/C) i = e′(t).  (10.1)
10.2. Newton’s Equation of Motion (cf. [7])
One has

M d²y/dt² + R dy/dt + Ky = e(t) = F,  (10.2)

where M is the inertance of the mass, R is the viscous resistance of the dashpot, and K is the spring stiffness.
Introducing the new parameters

ωn = √(K/M): natural angular frequency,  ζ = R/(2√(KM)): damping ratio,  (10.3)

(10.2) becomes

(1/ωn²) d²y/dt² + (2ζ/ωn) dy/dt + y = (1/K) F.  (10.4)
11. Laplace Transforms
To solve (10.1), we use the Laplace transform which has been defined by (9.3) and we state its definition independently.
Definition 11.1.
Suppose y(t)=O(eat),t→∞ for an a∈ℝ. The Laplace transform Y(s)=L[y](s) of y=y(t) is defined by
L[y](s) = ∫₀^∞ e^(-st) y(t) dt,  Re s > a.  (11.1)
The integral converges absolutely in Res>a and represents an analytic function there.
Example 11.2.
Let α∈ℂ. Then
L[e^(αt)](s) = 1/(s - α),  (11.2)
valid for Re s > Re α in the first instance. The right-hand side of (11.2) gives a meromorphic continuation of the left-hand side to the punctured domain ℂ∖{α}. Furthermore, (11.2) with α replaced by ±iα leads to
L[sin αt](s) = α/(s² + α²),  L[cos αt](s) = s/(s² + α²).  (11.3)
For α = ω ∈ ℝ, they reduce to the familiar formulas:

L[sin ωt](s) = ω/(s² + ω²),  L[cos ωt](s) = s/(s² + ω²).  (11.4)
Proof.
By definition, (11.2) clearly holds true for Re s > Re α. Since the right-hand side is analytic in ℂ∖{α}, the consistency theorem establishes the last assertion. Once (11.2) is established, we have

L[e^(iαt)](s) = 1/(s - iα),  L[e^(-iαt)](s) = 1/(s + iα),  (11.5)

whence, for example,

L[cos αt](s) = (1/2)(L[e^(iαt)](s) + L[e^(-iαt)](s)) = s/(s² + α²)

by Euler's identity, that is, (11.3).
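Formula (11.4) can also be confirmed by truncating and discretizing the Laplace integral; s and ω below are arbitrary test values:

```python
import math

s, w = 1.2, 3.0
exact = w / (s * s + w * w)  # L[sin wt](s) = w/(s^2 + w^2)

# truncated midpoint-rule evaluation of ∫_0^T e^{-st} sin(wt) dt
T, n = 40.0, 400000
h = T / n
approx = sum(math.exp(-s * (k + 0.5) * h) * math.sin(w * (k + 0.5) * h)
             for k in range(n)) * h

# the tail beyond T = 40 is of size e^{-sT}, utterly negligible here
assert abs(approx - exact) < 1e-6
```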
12. Partial Fraction Expansion and Examples
As long as the input function is a sinusoidal function, Example 11.2 will suffice to compute its Laplace transform. To go back to the time domain from the frequency domain, we need to solve the DE and, for most purposes, the following partial fraction expansion will give the answer almost automatically.
The following theorem, which is well known, provides us with the partial fraction expansion.
Theorem 12.1.
If the denominator C(z) of the rational function S(z) = P(z)/C(z), with deg P < deg C, is given by

C(z) = c₀ ∏i=1..q (z - βi)^σi,  ∑i=1..q σi = deg C = L,  (12.1)

where the βi are the distinct zeros of C(z), then

S(z) = ∑i=1..q ∑j=0..σi-1 a(i, σi - j) · 1/(z - βi)^(σi - j),  (12.2)

where the coefficients are given by

a(i, σi - j) = (1/j!) lim z→βi (d^j/dz^j)((z - βi)^σi S(z)).  (12.3)
Proof.
By (12.1), for each i, 1 ≤ i ≤ q, we may write

S(z) = Pi(z)/(z - βi)^σi,  (12.4)

where Pi(z) ∈ ℂ(z) has no pole at z = βi. We write

((z - βi)^σi S(z) =) Pi(z) = ∑j=0..σi-1 a(i, σi - j)(z - βi)^j + (z - βi)^σi Hi(z),  (12.5)

where Hi(z) ∈ ℂ(z) has no pole at z = βi. By successively differentiating and setting z = βi, we obtain (12.3).
Now, the rational function

F(z) := S(z) - ∑i=1..q ∑j=0..σi-1 a(i, σi - j) · 1/(z - βi)^(σi - j)  (12.6)

has no pole, so that it must be a polynomial. But since lim z→∞ F(z) = 0 (where we use the assumption deg P < deg C), it follows that F(z) must be identically zero.
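A minimal numerical illustration of Theorem 12.1 for a function with two simple poles, so that (12.3) is used with j = 0 only; the function S below is a made-up example, not one from the text:

```python
# S(z) = (z + 3)/((z - 1)(z + 2)): simple poles at 1 and -2.
# (12.3) with j = 0 gives a_1 = lim_{z->1} (z-1)S(z), a_2 = lim_{z->-2} (z+2)S(z).
a1 = (1 + 3) / (1 + 2)      # = 4/3
a2 = (-2 + 3) / (-2 - 1)    # = -1/3

z = 0.5 + 2.0j              # any point away from the poles
S = (z + 3) / ((z - 1) * (z + 2))
expansion = a1 / (z - 1) + a2 / (z + 2)
assert abs(S - expansion) < 1e-12
```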
Now we will give examples of (2.2) for the second-order systems which do not appear anywhere else save for [2].
Example 12.2.
We find the output signal (current) y = y(t) described by the DE

y″ + y′ + y = u(t) = e^(-(1/2)t) sin((√3/2)t),  (12.7)

where the initial values are assumed to be 0: y(0) = 0, y′(0) = 0.
Proof.
Let Y(s)=L[y](s) be the Laplace transform of y(t). Then we have
Y(s) = Φ(s)U(s),  U(s) = L[u](s) = (√3/2)/(s² + s + 1),  (2/√3) L[y](s) = 1/(s² + s + 1)²,  (12.8)

and we may obtain the partial fraction expansion

1/(s² + s + 1)² = (-1/3)/(s - ρ)² + (-(2/(3√3))i)/(s - ρ) + (-1/3)/(s - ρ̄)² + ((2/(3√3))i)/(s - ρ̄),  (12.9)

where ρ = e^(2πi/3) = (-1 + √3 i)/2 is the first primitive cube root of 1. Hence

(2/√3) y(t) = (2/√3) L⁻¹[L[y]](t) = -(2/3) t e^(-(1/2)t) cos((√3/2)t) + (4/(3√3)) e^(-(1/2)t) sin((√3/2)t).  (12.10)
As a transfer function, the function in (12.8)
Φ(s)=1s2+s+1
is stable.
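The solution obtained in Example 12.2 can be verified by substituting it back into the DE, with derivatives approximated by central differences; the sine-term coefficient used is 4/(3√3), as the residue computation yields:

```python
import math

r3 = math.sqrt(3.0)

def y(t):
    # y(t) = (√3/2)[-(2/3) t e^{-t/2} cos(√3 t/2) + (4/(3√3)) e^{-t/2} sin(√3 t/2)]
    e = math.exp(-0.5 * t)
    return (r3 / 2.0) * (-(2.0 / 3.0) * t * e * math.cos(r3 * t / 2.0)
                         + (4.0 / (3.0 * r3)) * e * math.sin(r3 * t / 2.0))

def residual(t, h=1e-4):
    # y'' + y' + y - u(t), derivatives by central differences
    ypp = (y(t + h) - 2.0 * y(t) + y(t - h)) / (h * h)
    yp = (y(t + h) - y(t - h)) / (2.0 * h)
    rhs = math.exp(-0.5 * t) * math.sin(r3 * t / 2.0)
    return ypp + yp + y(t) - rhs

for t in (0.5, 1.0, 2.0, 5.0):
    assert abs(residual(t)) < 1e-5

# the zero initial condition holds as well
assert abs(y(0.0)) < 1e-12
```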
Example 12.3.
The following integral may be evaluated by the partial fraction expansion above or by the residue calculus:

∫-∞^∞ dx/(x² + x + 1)² = 2πi(-(2/(3√3))i) = 4π/(3√3).  (12.11)
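A numerical confirmation of Example 12.3 by a plain midpoint rule (the integrand decays like x⁻⁴, so truncation at ±200 is harmless):

```python
import math

f = lambda x: 1.0 / (x * x + x + 1.0) ** 2

L, n = 200.0, 400000
h = 2.0 * L / n
integral = sum(f(-L + (k + 0.5) * h) for k in range(n)) * h

exact = 4.0 * math.pi / (3.0 * math.sqrt(3.0))  # = 4π/(3√3) ≈ 2.4184
assert abs(integral - exact) < 1e-4
```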
Example 12.4.
In the same vein as Example 12.2, we may find the solution of the DE

y″ - y′ + y = u(t) = e^((1/2)t) sin((√3/2)t),  (12.12)

where the initial values are assumed to be 0: y(0) = 0, y′(0) = 0. We have

Y(s) = Φ₁(s)U(s),  U(s) = L[u](s) = (√3/2)/(s² - s + 1),  (12.13)

or

(2/√3) L[y](s) = 1/(s² - s + 1)².  (12.14)
13. Fractional Integrals and the ΓΓ-Type Functional Equation

In this section, we illustrate the use of fractional integrals by proving a slight generalization of the result of Chandrasekharan and Narasimhan ([8]) involving the ΓΓ-type functional equation, which is the first instance beyond the Hecke theory of functional equations with a single gamma factor. First we state the basic setting.
13.1. Statement of the Situation
Let {λk}, {μk} be increasing sequences of positive numbers tending to ∞, and let {αk}, {βk} be complex sequences. We form the Dirichlet series

φ(s) = ∑k=1..∞ αk/λk^s,  ψ(s) = ∑k=1..∞ βk/μk^s  (13.1)
and suppose that they have finite abscissas of absolute convergence σφ, σψ, respectively.
We suppose the existence of a meromorphic function χ satisfying the functional equation (of ΓΓ-type), with r a real number, and having a finite number of poles sk (1 ≤ k ≤ L):

χ(s) = { Γ(s + ν/2)Γ(s - ν/2) φ(s), Re s > σφ;  Γ(r - s + ν/2)Γ(r - s - ν/2) ψ(r - s), Re s < r - σψ. }  (13.2)
We introduce the processing gamma factor

Δ(w) = [Γ({bj + Bjw}j=1..m) Γ({aj - Ajw}j=1..n)] / [Γ({aj + Ajw}j=n+1..p) Γ({bj - Bjw}j=m+1..q)]  (Aj, Bj > 0)  (13.3)

and suppose that, for any real numbers u₁, u₂ (u₁ < u₂),

lim |v|→∞ Δ(u + iv - s) χ(u + iv) = 0,  (13.4)

uniformly in u₁ ≤ u ≤ u₂.
In the w-plane we take two deformed Bromwich paths

L₁(s): γ₁ - i∞ ⟶ γ₁ + i∞,  L₂(s): γ₂ - i∞ ⟶ γ₂ + i∞  (γ₂ < γ₁)  (13.5)

such that they squeeze a compact set 𝒮 with boundary 𝒞 for which sk ∈ 𝒮 (1 ≤ k ≤ L) and all the poles of

Γ({bj - Bjs + Bjw}j=1..m) / [Γ({aj - Ajs + Ajw}j=n+1..p) Γ({bj + Bjs - Bjw}j=m+1..q)]  (13.6)

lie to the left of L₂(s) and those of

Γ({aj + Ajs - Ajw}j=1..n) / [Γ({aj - Ajs + Ajw}j=n+1..p) Γ({bj + Bjs - Bjw}j=m+1..q)]  (13.7)

lie to the right of L₁(s).
Then we define the H-function by (0 ≤ n ≤ p, 0 ≤ m ≤ q, Aj, Bj > 0)

H(p,q; m,n)(z | (1-a₁,A₁),…,(1-an,An),(an+1,An+1),…,(ap,Ap); (b₁,B₁),…,(bm,Bm),(1-bm+1,Bm+1),…,(1-bq,Bq))
= (1/2πi) ∫L [Γ(b₁+B₁s,…,bm+Bms) Γ(a₁-A₁s,…,an-Ans)] / [Γ(an+1+An+1s,…,ap+Aps) Γ(bm+1-Bm+1s,…,bq-Bqs)] z^(-s) ds.  (13.8)
In the special case where Aj = Bj = 1, the H-function reduces to the G-function and is denoted by G, with the other parameters remaining the same. We also define the χ-function X(z, s) by

X(z, s) = (1/2πi) ∫L₁(s) Δ(w - s) χ(w) z^(-w) dw,  (13.9)

which is, for χ = 1, one of the H-functions. Hereafter we always assume that z > 0, which may be extended to Re z > 0. Then we have

X(z, s) = (1/2πi) ∫L₂(s) Δ(w - s) χ(w) z^(-w) dw + (1/2πi) ∫𝒞 Δ(w - s) χ(w) z^(-w) dw,

which amounts to the following.
Theorem 13.1 ([9]).
One has the modular relation equivalent to (13.2):

X(z, s) = ∑k=1..∞ (αk/λk^s) H(p,q+2; m+2,n)(zλk | {(1-aj,Aj)}₁..n, {(aj,Aj)}n+1..p; (s+ν/2,1), (s-ν/2,1), {(bj,Bj)}₁..m, {(1-bj,Bj)}m+1..q)
= ∑k=1..∞ (βk/μk^(r-s)) H(q,p+2; n+2,m)(μkz | {(1-bj,Bj)}₁..m, {(bj,Bj)}m+1..q; (r-s+ν/2,1), (r-s-ν/2,1), {(aj,Aj)}₁..n, {(1-aj,Aj)}n+1..p)
+ ∑k=1..L Res(Δ(w-s)χ(w)z^(s-w), w = sk)  (13.10)

(valid for ∑j=1..n Aj + ∑j=1..m Bj + 2 ≥ ∑j=n+1..p Aj + ∑j=m+1..q Bj).
In the special case, where Aj=Bj=1, we have the following.
Theorem 13.2.
One has
z^s X(z, s) = ∑k=1..∞ (αk/λk^s) G(p,q+2; m+2,n)(zλk | 1-a₁,…,1-an,an+1,…,ap; s+ν/2, s-ν/2, b₁,…,bm,1-bm+1,…,1-bq)
= ∑k=1..∞ (βk/μk^(r-s)) G(q,p+2; n+2,m)(μkz | 1-b₁,…,1-bm,bm+1,…,bq; r-s+ν/2, r-s-ν/2, a₁,…,an,1-an+1,…,1-ap)
+ ∑k=1..L Res(Δ(w-s)χ(w)z^(s-w), w = sk)  (13.11)

(2n + 2m + 2 ≥ p + q).
For many important applications, compare [9].
13.2. The Riesz Sum (G(4,4; 2,2) ↔ G(2,6; 4,0))
Formula (13.11) in the special case of the title reads

∑k=1..∞ (αk/λk^s) G(4,4; 2,2)(zλk | a, b, c, d; s+ν/2, s-ν/2, e, f)
= ∑k=1..∞ (βk/μk^(r-s)) G(2,6; 4,0)(μkz | 1-e, 1-f; r-s+ν/2, r-s-ν/2, 1-a, 1-b, 1-c, 1-d)
+ ∑k=1..L Res(Δ(w-s)χ(w)z^(s-w), w = sk),  (13.12)

where

Δ(w) = [Γ(1-a-w)Γ(1-b-w)Γ(c+w)Γ(d+w)] / [Γ(1-e-w)Γ(1-f-w)].  (13.13)
We treat the case r = 1/2. Assuming λ is a nonnegative integer, we put a = s+ν/2+λ/2+1/2, b = s+ν/2+λ/2+1, c = s-ν/2, d = s+ν/2+λ+1, e = s+ν/2+1/2, f = s+ν/2+λ+1. Then (13.12) becomes

∑k=1..∞ (αk/λk^s) G(4,4; 2,2)(zλk | s+ν/2+λ/2+1/2, s+ν/2+λ/2+1, s-ν/2, s+ν/2+λ+1; s+ν/2, s-ν/2, s+ν/2+1/2, s+ν/2+λ+1)
= ∑k=1..∞ (βk/μk^(1/2-s)) G(2,6; 4,0)(μkz | -s-ν/2+1/2, -s-ν/2-λ; *)
+ ∑k=1..L Res(Δ(w-s)χ(w)z^(s-w), w = sk),  (13.14)

where * indicates -s+ν/2+1/2, -s-ν/2+1/2, -s-ν/2-λ/2+1/2, -s-ν/2-λ/2, -s+ν/2+1, -s-ν/2-λ.
We note that the first G-function in (13.14) reduces, by the formula in [10], to

G(4,4; 2,2)(z | s+ν/2+λ/2+1/2, s+ν/2+λ/2+1, s-ν/2, s+ν/2+λ+1; s+ν/2, s-ν/2, s+ν/2+1/2, s+ν/2+λ+1) = 2^λ G(1,1; 1,0)(z | s+ν/2+λ+1; s+ν/2) = { (2^λ/Γ(λ+1)) z^(s+ν/2)(1-z)^λ, |z| < 1;  0, |z| > 1 },  (13.15)

while the second one satisfies

G(2,6; 4,0)(z | -s-ν/2+1/2, -s-ν/2-λ; **) = G(2,6; 4,0)(z | -s-ν/2+1, -s-ν/2-λ; †),  (13.16)

where ** indicates -s+ν/2+1/2, -s-ν/2+1/2, -s-ν/2-λ/2+1/2, -s-ν/2-λ/2, -s+ν/2+1, -s-ν/2-λ and † indicates -s+ν/2+1/2, -s+ν/2+1, -s-ν/2-λ/2, -s-ν/2-λ/2+1/2, -s+ν/2+1, -s-ν/2-λ. Hence it reduces further to

z^(-s-(λ/2)+(1/4)) G(2ν+λ+1; λ)(4z^(1/4)),  (13.17)

say, where, slightly more generally than Wilton's (1.22) [11], we put

G(ν; λ)(z) = -(-1)^λ (2/π) sin(((ν-λ)/2)π) Kν(z) - sin(((ν-λ)/2)π) Yν(z) + cos(((ν-λ)/2)π) Jν(z).  (13.18)
Hence (13.14) reads

(2^λ/Γ(λ+1)) z^(s+ν/2) ∑λk<1/z αk λk^(ν/2)(1 - zλk)^λ = z^(-s-(λ/2)+(1/4)) ∑k=1..∞ βk μk^(-(λ/2)-(1/4)) G(2ν+λ+1; λ)(4(μkz)^(1/4)) + ∑k=1..L Res(Δ(w-s)χ(w)z^(s-w), w = sk),  (13.19)

which gives a more general form of Wilton's Theorem 1 [11].
Rewriting (13.19) slightly, we deduce an analogue of Chandrasekharan and Narasimhan result [8, Theorem 7.1(a)],
Theorem 13.3.
For x>0, the functional equation (13.2) implies the identity
(1/Γ(λ+1)) ∑λk<x αk λk^ν (x - λk)^λ = (x^(λ/2+ν+1/2)/2^λ) ∑k=1..∞ (βk/μk^(λ/2+1/2)) G(2ν+λ+1; λ)(4√(μkx)) + Pλ(x),
where
Pλ(x) = x^(2s+λ+ν) ∑k=1..L Res(Δ(w-s)χ(w)x^(2(w-s)), w = sk)
and where
Δ(w) = [Γ(1/2-s-λ/2-ν/2-w) Γ(-s-λ/2-ν/2-w) Γ(s-ν/2+w) Γ(s+λ+ν/2+1+w)] / [Γ(1/2-s-ν/2-w) Γ(-s-λ-ν/2-w)],

with G(2ν+λ+1; λ) being given by (13.18).
Corollary 13.4.
For x>0, the functional equation (13.2) implies the identity
Aλ(x) := (1/Γ(λ+1)) ∑λk<x αk (x - λk)^λ = -2^(-λ) ∑k=1..∞ βk (x/μk)^(λ/2+1/2) Fλ+1(4√(μkx)) + Pλ(x),
where
Fλ+1(z) = -G(λ+1; λ)(z) = Yλ+1(z) + (-1)^λ (2/π) Kλ+1(z).
We are now in a position to prove an analogue of [8, Theorem 7.1(b)] (although Theorem 13.3 contains [8, Theorem 7.2], too) by the Riemann-Liouville fractional integral transform.
Lemma 13.5 (Riemann-Liouville integral of Bessel functions).
For the well-known Bessel functions J and Y, one has
1Γ(μ)∫0y(y-x)μ-1x(1/2)νJν(ax1/2)dx=2μa-μy(1/2)μ+(1/2)νJμ+ν(ay1/2)(Reμ>0,Reν>0)1Γ(μ)∫0y(y-x)μ-1x(1/2)νYν(ax1/2)dx=2μa-μy(1/2)μ+(1/2)νJμ+ν(ay1/2)+Γ(ν+1)Γ(μ)π2ν+2a-μy(1/2)μ+(1/2)νSμ-ν-1,μ+ν(ay1/2)(Reμ>0,Reν>-1),
where S stands for the Lommel function.
Equation (13.26) is [12, (63), page 194] and (13.27) is from [12, page 196]. We only need (13.27); (13.26) is recorded for treating the J-Bessel function.
Arguing in the same way as in [8], we may prove the following.
Theorem 13.6.
With a C∞-function Rρ and a certain constant c, one has

Aρ(x) - Rρ(x) = c ∑k=1..∞ βk (x/μk)^(ρ/2+1/2) Yρ+1(4√(μkx)),

for integral λ, ρ = λ + α, 0 < α < 1, λ ≥ 2σψ - 3/2.
Acknowledgment
This work is supported by the SMX SUDA CO. (No. SDJN1001).
References

[1] H. Kimura, Chain-Scattering Approach to H∞-Control, Birkhäuser, Berlin, Germany, 1997.
[2] S. Kanemitsu and H. Tsukada, Vistas of Special Functions, World Scientific, Singapore, 2007.
[3] R. Bellman, “Wigert's approximate functional equation and the Riemann ζ-function,” Duke Mathematical Journal, vol. 16, no. 4, pp. 547–552, 1949.
[4] J. W. Helton, “The distance of a function to H∞ in the Poincaré metric; electrical power transfer,” Journal of Functional Analysis, vol. 38, no. 2, pp. 273–314, 1980.
[5] J. W. Helton, “Non-Euclidean functional analysis and electronics,” Bulletin of the American Mathematical Society, vol. 7, no. 1, pp. 1–64, 1982.
[6] J. W. Helton and O. Merino, Classical Control Using H∞ Methods, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1998.
[7] F. S. Grodins, Control Theory and Biological Systems, Columbia University Press, New York, NY, USA, 1963.
[8] K. Chandrasekharan and R. Narasimhan, “Functional equations with multiple gamma factors and the average order of arithmetical functions,” Annals of Mathematics, vol. 76, pp. 93–136, 1962.
[9] S. Kanemitsu and H. Tsukada, World Scientific, Singapore, 2012.
[10] A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi, Higher Transcendental Functions, Vols. 1–3, McGraw-Hill, New York, NY, USA, 1953.
[11] J. R. Wilton, “An extended form of Dirichlet's divisor problem,” Proceedings of the London Mathematical Society, vol. s2-36, pp. 391–426, 1934.
[12] A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi, Tables of Integral Transforms, McGraw-Hill, New York, NY, USA, 1954.
in the Poincaré metric; electrical power transfer198038227331458791010.1016/0022-1236(80)90066-XZBL0496.47026HeltonJ. W.Non-euclidean functional analysis and electronics19827116465619710.1090/S0273-0979-1982-15001-7ZBL0493.46047HeltonJ. W.MerinoO.1998Philadelphia, Pa, USASociety for Industrial and Applied MathematicsGrodinsF. S.1963New York, NY, USAColumbia University PressChandrasekharanK.NarasimhanR.Functional equations with multiple gamma factors and the average order of arithmetical functions196276293136KanemitsuS.TsukadaH.2012SingaporeWorld ScientificErdélyiA.MagnusW.OberhettingerF.TricomiF. G.19531–3New York, NY, USAMcGraw-HillWiltonJ. R.An extended form of Dirichlet's divisor problem193436239142610.1112/plms/s2-36.1.391ErdélyiA.MagnusW.OberhettingerF.TricomiF. G.19531–3New York, NY, USAMcGraw-Hill