This paper presents an inverse optimal neural controller with speed gradient (SG) for discrete-time unknown nonlinear systems in the presence of external disturbances and parameter uncertainties, applied to a power electric system subject to different types of faults in the transmission lines, including load variations. The controller is based on a discrete-time recurrent high order neural network (RHONN) trained with an extended Kalman filter (EKF) based algorithm. It is well known that electric power grids are considered complex systems due to their interconnections and number of state variables; accordingly, a reduced neural model of the synchronous machine is proposed for the stabilization of the nine-bus system in the presence of a fault in three different transmission-line cases.
1. Introduction
Many physical systems, such as electric power grids, computer and communication networks, networked dynamical systems, and transportation systems, are complex large-scale interconnected systems [1]. To control such large-scale systems, centralized control schemes have been proposed in the literature, assuming that global information is available for the overall system. Another problem in complex large-scale interconnected systems is the effect of delays, which are typically unknown and time-varying [2, 3]. While control centralization has theoretical advantages, it is very difficult to apply to a complex large-scale system with interconnections, due to technical and economic reasons [4]. Furthermore, centralized control designs depend on the system structure and cannot handle structural changes: if subsystems are added or removed, the controller for the overall system must be redesigned. Therefore, decentralized control for interconnected power systems has attracted considerable attention from researchers in the field of complex and large-scale systems, such as multiarea interconnected power systems. Besides, due to the physical configuration and high dimensionality of interconnected systems, centralized control is neither economically feasible nor even necessary. These facts motivate the design of decentralized controllers, which use only local information while guaranteeing stability for the whole system [1].
The main issue addressed in this paper is the analysis of a fault in the electric power system in different transmission lines. Recurrent high order neural networks (RHONN) allow the identification of nonlinear systems, and the RHONN model can then be used for controller design. Recently, some works have been published on synchronous generators in which reduced models are proposed; such models are able to reproduce the full order dynamics of synchronous generators [1, 5]. The system under study consists of three interconnected synchronous generators (the nine-bus system). In previous case studies of this power electric system, a three-phase fault is introduced at the end of line 7 [6]; in this paper, the analysis also considers faults at the end of buses 8 and 9, which are proposed and tested via simulation, the purpose being the production and distribution of reliable and robust electric energy.
On the other hand, a discrete-time model has been proposed [7], in which a recurrent high order neural network is incorporated to implement a control law, as this reduced model allows stabilization through the inverse optimal SG control law. In this work, a neural model of the multimachine system is proposed, which is useful because it focuses on the state variables that are most relevant for this paper: rotor position, velocity, and voltage [7]; furthermore, the control law is implemented for the power electric system consisting of three interconnected synchronous generators. A solution is proposed for the destabilization problem of the multimachine power electric system in the presence of a fault in one of its transmission lines, occurring at 10 seconds of simulation. A system identification of the complete multimachine power electric system model (nine-bus system) is presented through a reduced neural model, which allows the design of a neural inverse optimal SG control law. Finally, the obtained results are shown, in which it can be seen that the control law stabilizes the system in the presence of the fault in the three cases presented.
In the literature, there are works that report parameter identification for synchronous machines for full order models [5] as well as for reduced order ones [8]; however, these models are for nominal conditions; that is, they do not consider fault scenarios. In [1], a reduced order neural model is considered; however, it is developed for continuous time; nevertheless, the need for real-time implementations makes the use of digital models necessary. Besides, in [9], a discrete-time neural controller has been developed, which is proposed for a single machine system. Then, the main contributions of this paper can be stated as follows: first, a RHONN is used to establish a discrete-time reduced order mathematical model for a multimachine power electric system. Then, this neural model is used to synthesize an inverse optimal SG control law to stabilize the system, and, finally, three fault scenarios are considered in order to illustrate the applicability of the proposed scheme.
2. Mathematical Preliminaries
2.1. Discrete-Time High Order Neural Networks
The use of multilayer neural networks is well known for pattern recognition and static systems modelling. The NN is trained to learn an input-output map. Theoretical works have proven that, even with just one hidden layer, a NN can uniformly approximate any continuous function over a compact domain, provided that the NN has a sufficient number of synaptic connections [10]. To implement the neural network (NN) design, a RHONN is used [7]; this model turns out to be very flexible because it allows incorporating a priori information about the plant into the model:
$$\hat{x}_i(k+1)=\omega_i^{T}z_i(x_i(k),u(k)),\quad i=1,\ldots,n,\tag{1}$$
where $\hat{x}_i$ $(i=1,2,\ldots,n)$ is the state of the $i$th neuron and $\omega_i$ $(i=1,2,\ldots,n)$ is the respective online adapted weight vector. Now we define the vector
$$z_i(x_i(k),u(k))=\begin{bmatrix}z_{i_1}\\ z_{i_2}\\ \vdots\\ z_{i_{L_i}}\end{bmatrix}=\begin{bmatrix}\prod_{j\in I_1}\xi_{i_j}^{d_{i_j}(1)}\\ \prod_{j\in I_2}\xi_{i_j}^{d_{i_j}(2)}\\ \vdots\\ \prod_{j\in I_{L_i}}\xi_{i_j}^{d_{i_j}(L_i)}\end{bmatrix},\tag{2}$$
where $L_i$ is the respective number of high-order connections, $\{I_1,I_2,\ldots,I_{L_i}\}$ is a collection of nonordered subsets of $\{1,2,\ldots,n+m\}$, $n$ is the state dimension, $m$ is the number of external inputs, $d_{i_j}(k)$ are nonnegative integers, and $\xi_i$ is defined as follows:
$$\xi_i=\begin{bmatrix}\xi_{i_1}\\ \vdots\\ \xi_{i_n}\\ \xi_{i_{n+1}}\\ \vdots\\ \xi_{i_{n+m}}\end{bmatrix}=\begin{bmatrix}S(x_1)\\ \vdots\\ S(x_n)\\ u_1\\ \vdots\\ u_m\end{bmatrix}.\tag{3}$$
Here $u=[u_1,u_2,\ldots,u_m]^{T}$ is the input vector to the neural network and $S(\cdot)$ is defined by
$$S(\varsigma)=\frac{1}{1+\exp(-\beta\varsigma)},\quad\beta>0,\tag{4}$$
where $\varsigma$ is any real-valued variable.
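As a concrete illustration, the RHONN update (1)-(4) can be sketched for a hypothetical second-order case; the particular high-order terms collected in z and all weight values below are illustrative choices, not the connections identified in this paper:

```python
import numpy as np

def sigmoid(s, beta=1.0):
    """Logistic activation S(ς) = 1 / (1 + exp(-β ς)), β > 0, as in (4)."""
    return 1.0 / (1.0 + np.exp(-beta * s))

def rhonn_step(w, x, u):
    """One RHONN state update x̂_i(k+1) = ω_i^T z_i(x(k), u(k)), as in (1).

    Illustrative choice of high-order terms (L_i = 4 connections):
    z = [S(x1), S(x2), S(x1)S(x2), u]; the multi-index products of (2)
    are a design choice and would differ for the actual plant.
    """
    z = np.array([sigmoid(x[0]), sigmoid(x[1]),
                  sigmoid(x[0]) * sigmoid(x[1]), u])
    return w @ z

w = np.array([0.5, -0.3, 0.2, 1.0])   # hypothetical online-adapted weights ω_i
x_next = rhonn_step(w, np.array([0.1, -0.2]), u=0.5)
```

In practice the weights are not fixed as above but adapted online by the EKF-based algorithm of the next subsection.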
2.2. The EKF Training Algorithm
The best-known training approach for recurrent neural networks (RNN) is backpropagation through time [11]. However, it is a first order gradient descent method and hence its learning speed can be very slow [12]. Recently, Extended Kalman Filter (EKF) based algorithms have been introduced to train neural networks [7, 9, 13, 14]. With an EKF based algorithm, the learning convergence is improved [14]. The EKF training of neural networks, both feedforward and recurrent ones, has proven to be reliable and practical for many applications over the past years [14]. It is known that Kalman filtering (KF) estimates the state of a linear system with additive state and output white noise [15, 16]. For KF-based neural network training, the network weights become the states to be estimated. In this case, the error between the neural network output and the measured plant output can be considered as additive white noise. Due to the fact that the neural network mapping is nonlinear, an EKF-type algorithm is required (see [17] and references therein). The training goal is to find the optimal weight values which minimize the prediction error. The EKF-based training algorithm is described by [15]:
$$\begin{aligned}K_i(k)&=\rho_i(k)H_i(k)M_i(k)\\ \omega_i(k+1)&=\omega_i(k)+\eta_iK_i(k)e_i(k)\\ \rho_i(k+1)&=\rho_i(k)-K_i(k)H_i^{T}(k)\rho_i(k)+\phi_i(k)\end{aligned}\tag{5}$$
with
$$M_i(k)=\left[\tau_i(k)+H_i^{T}(k)\rho_i(k)H_i(k)\right]^{-1},\qquad e_i(k)=x_i(k)-\hat{x}_i(k),\tag{6}$$
where $\rho_i\in\Re^{L_i\times L_i}$ is the prediction error associated covariance matrix, $\omega_i\in\Re^{L_i}$ is the weight (state) vector, $x_i\in\Re$ is the $i$th plant state component, $\hat{x}_i\in\Re$ is the $i$th neural state component, $\eta_i$ is a design parameter, $K_i\in\Re^{L_i\times m}$ is the Kalman gain matrix, $\phi_i\in\Re^{L_i\times L_i}$ is the state noise associated covariance matrix, $\tau_i\in\Re^{m\times m}$ is the measurement noise associated covariance matrix, and $H_i\in\Re^{L_i\times m}$ is a matrix in which each entry $H_{ij}$ is the derivative of the neural network output $\hat{x}_i$ with respect to one neural network weight $\omega_{ij}$, as follows:
$$H_{ij}(k)=\left[\frac{\partial\hat{x}_i(k)}{\partial\omega_{ij}(k)}\right]_{\omega_i(k)=\hat{\omega}_i(k+1)},\quad i=1,\ldots,n,\; j=1,\ldots,L_i.\tag{7}$$
Usually $\rho_i$, $\phi_i$, and $\tau_i$ are initialized as diagonal matrices, with entries $\rho_i(0)$, $\phi_i(0)$, and $\tau_i(0)$, respectively.
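The EKF update (5)-(6) can be sketched as follows for a single neuron with scalar output (m = 1), so that K, H, and M become vectors and scalars; the design constants η, τ, ϕ and all test values below are hypothetical:

```python
import numpy as np

def ekf_update(w, P, H, e, eta=0.8, tau=1.0, phi=1e-4):
    """One EKF weight update, eqs. (5)-(6), for one neuron, scalar output.

    w : (L,) weight vector ω_i       P : (L, L) covariance ρ_i
    H : (L,) derivatives ∂x̂_i/∂ω_ij  e : scalar error x_i - x̂_i
    """
    M = 1.0 / (tau + H @ P @ H)            # M = [τ + Hᵀ ρ H]^{-1} (scalar, m = 1)
    K = P @ H * M                           # Kalman gain K = ρ H M
    w_new = w + eta * K * e                 # weight (state) update
    P_new = P - np.outer(K, H @ P) + phi * np.eye(len(w))
    return w_new, P_new

w = np.zeros(3)
P = 10.0 * np.eye(3)                        # ρ(0) initialized diagonal, as in the text
w, P = ekf_update(w, P, H=np.array([0.2, -0.1, 0.4]), e=0.5)
```

Note that the covariance update preserves symmetry of ρ, which keeps the filter numerically well behaved over many training steps.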
3. Controller Design
Optimal control is related to finding a control law for a given system such that a performance criterion is minimized. This criterion is usually formulated as a cost functional, which is a function of the state and control variables. The optimal control problem can be solved using Pontryagin’s maximum principle (a necessary condition) [18] and the method of dynamic programming developed by Bellman [19, 20], which can lead to a nonlinear partial differential equation called the Hamilton-Jacobi-Bellman (HJB) equation (a sufficient condition); nevertheless, solving the HJB equation is not a feasible task [21, 22].
3.1. Inverse Optimal Control via CLF
In this paper, the inverse optimal control approach and its solution by proposing a quadratic control Lyapunov function (CLF) are used [23]; the CLF depends on a fixed parameter selected to satisfy the stability and optimality conditions. A posteriori, the speed gradient algorithm is established to compute this CLF parameter, and it is used to solve the inverse optimal control problem. Motivated by the favorable stability margins of optimal control systems, a stabilizing feedback control law is proposed, which is optimal with respect to a meaningful cost functional. At the same time, it is desirable to avoid the difficult task of solving the HJB partial differential equation. In the inverse optimal control problem, a candidate CLF is used to construct an optimal control law directly, without solving the associated HJB equation [24]. Inverse optimality is selected because it avoids solving the HJB partial differential equation and still allows obtaining Kalman-type stability margins [21].
In contrast to the inverse optimal control via passivity approach, in which a storage function is used as a candidate CLF and the inverse optimal control law is selected as the output feedback, for the inverse optimal control via CLF, the control law is obtained as a result of solving the Bellman equation. Then, a candidate CLF for the obtained control law is proposed such that it stabilizes the system and a posteriori a meaningful cost functional is minimized.
In this paper, a quadratic candidate CLF is used to synthesize the inverse optimal control law. The following assumptions and definitions allow the inverse optimal control solution via the CLF approach.
The full state of the system
$$x(k+1)=f(x(k))+g(x(k))u(k)\tag{8}$$
is measurable.
Definition 1 (inverse optimal control law).
Let us define the control law [23]
$$u(k)=-\frac{1}{2}R^{-1}(x(k))g^{T}(x(k))\frac{\partial V(x(k+1))}{\partial x(k+1)}\tag{9}$$
to be inverse optimal (globally) stabilizing if
(1) it achieves (global) asymptotic stability of $x=0$ for system (8);
(2) $V(x(k))$ is a (radially unbounded) positive definite function such that the inequality
$$\bar{V}:=V(x(k+1))-V(x(k))+u^{T}(k)R(x(k))u(k)\le 0\tag{10}$$
is satisfied. When $l(x(k)):=-\bar{V}\ge 0$ is selected, $V(x(k))$ is a solution of the HJB equation
$$l(x(k))+V(x(k+1))-V(x(k))+\frac{1}{4}V^{*T}g(x(k))R^{-1}(x(k))g^{T}(x(k))V^{*}=0,\tag{11}$$
where
$$V^{*T}=\frac{\partial V^{T}(x(k+1))}{\partial x(k+1)},\qquad V^{*}=\frac{\partial V(x(k+1))}{\partial x(k+1)}.\tag{12}$$
It is possible to establish the main conceptual differences between optimal control and inverse optimal control as follows.
For optimal control, the meaningful cost indexes l(x(k))≥0 and R(x(k))>0 are given a priori; then, they are used to calculate u(x(k)) and V(x(k)) by means of the HJB equation solution.
For inverse optimal control, a candidate CLF V(x(k)) and the meaningful cost index R(x(k)) are given a priori, and then these functions are used to calculate the inverse control law u(k) and the meaningful cost index l(x(k)), defined as l(x(k)):=−V̄(x(k)).
As established in Definition 1, the inverse optimal control problem is based on the knowledge of V(x(k)). Thus, a CLF V(x(k)) is proposed such that conditions (1) and (2) are guaranteed. That is, instead of solving (11) for V(x(k)), a control Lyapunov function V(x(k)) is proposed as
$$V(x(k))=\frac{1}{2}x^{T}(k)Px(k)\tag{13}$$
for control law (9), in order to ensure stability of the equilibrium point x(k)=0 of system (8), which will be achieved by defining an appropriate matrix P. Moreover, it will be established that control law (9) with (13), which is referred to as the inverse optimal control law, optimizes a meaningful cost functional of the form
$$J(x(k))=\sum_{k=0}^{\infty}\left(l(x(k))+u^{T}(k)R(x(k))u(k)\right).\tag{14}$$
Consequently, by considering V(x(k)) as in (13), the control law takes the following form:
$$\alpha(x(k)):=u(k)=-\frac{1}{2}\left(R(x(k))+P_{2}(x(k))\right)^{-1}P_{1}(x(k)),\tag{15}$$
where $P_{1}(x(k))=g^{T}(x(k))Pf(x(k))$ and $P_{2}(x(k))=\frac{1}{2}g^{T}(x(k))Pg(x(k))$. It is worth pointing out that P and R(x(k)) are positive definite and symmetric matrices; thus, the existence of the inverse in (15) is ensured.
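A minimal sketch of the control law (15) follows; the two-state dynamics f, g and the matrices P and R below are illustrative assumptions, not the paper's plant:

```python
import numpy as np

def inverse_optimal_u(x, f, g, P, R):
    """Inverse optimal control law (15):
    u = -1/2 (R + P2)^{-1} P1, with P1 = gᵀ P f(x) and P2 = 1/2 gᵀ P g.
    """
    fx, gx = f(x), g(x)
    P1 = gx.T @ P @ fx
    P2 = 0.5 * (gx.T @ P @ gx)
    return -0.5 * np.linalg.solve(R + P2, P1)

# toy second-order system (hypothetical dynamics, affine in the control)
f = lambda x: np.array([x[1], 0.5 * x[0]])
g = lambda x: np.array([[0.0], [1.0]])
P = np.diag([2.0, 1.0])          # candidate CLF matrix, positive definite
R = np.array([[1.0]])            # control weighting, positive definite
u = inverse_optimal_u(np.array([1.0, 0.0]), f, g, P, R)
```

Since P and R are positive definite, R + P2 is invertible, which is exactly the existence argument given above for (15).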
3.2. Speed-Gradient SG Algorithm
Given that in (15) P is redefined as P(k), with P1(x(k))=gT(x(k))P(k)f(x(k)) and P2(x(k))=(1/2)gT(x(k))P(k)g(x(k)), it is possible to compute a time-varying value for P(k) that ensures stability of system (8) by means of the SG algorithm.
In [25] a discrete-time application of the SG algorithm is formulated to find a control law u(k) which ensures the control goal:
$$Q(x(k+1))\le\Delta,\quad\text{for }k\ge k^{*},\tag{16}$$
where Q is a control goal function, Δ>0 is a constant, and k*∈ℤ+ is the time at which the control goal is achieved. Q ensures stability if it is a positive definite function.
Based on the SG application proposed in [25], the control law given by (15) is considered, with Δ in (16) a state dependent function Δ(x(k)).
Consider the control law redefined for the speed-gradient algorithm, which at every time step depends on the matrix P(k). Let us define the matrix P(k) at every time k as
$$P(k)=p(k)P',\tag{17}$$
where P′=P′^T>0 is a given constant matrix and p(k) is a scalar parameter to be adjusted by the SG algorithm. Then the control law is transformed as follows:
$$u(k)=-\frac{p(k)}{2}\left(R(x(k))+\frac{p(k)}{2}P_{1}^{*}\right)^{-1}P_{2}^{*},\tag{18}$$
where
$$P_{1}^{*}=g^{T}(x(k))P'g(x(k)),\qquad P_{2}^{*}=g^{T}(x(k))P'f(x(k)).\tag{19}$$
The SG algorithm is now reformulated for the inverse optimal control problem.
Definition 2 (SG goal function).
Consider a time-varying parameter p(k)∈𝒫⊂ℜ+ with p(k)>0 for all k, where 𝒫 is the set of admissible values for p(k) [23]. A nonnegative function Q:ℜ^n×ℜ→ℜ of the form
$$Q(x(k),p(k))=V_{SG}(x(k+1)),\tag{20}$$
where V_SG(x(k+1))=(1/2)x^T(k+1)P′x(k+1) with x(k+1) as defined in (8), is referred to as the SG goal function for system (8), with Q(k(p)):=Q(x(k),p(k)).
Definition 3 (SG control goal).
Consider a constant p*∈𝒫. The SG control goal for system (8) with (18) is defined as finding p(k) so that the SG goal function Q(k(p)) [23], as in (20), fulfills
$$Q(k(p))\le\Delta(x(k)),\quad\text{for }k\ge k^{*},\tag{21}$$
where
$$\Delta(x(k))=V_{SG}(x(k))-\frac{1}{p(k)}u^{T}(k)R(x(k))u(k)\tag{22}$$
with V_SG(x(k))=(1/2)x^T(k)P′x(k) and u(k) as defined in (18); k*∈ℤ+ is the time at which the SG control goal is achieved.
Solution p(k) must guarantee that VSG(x(k)) > (1/p(k))uT(k)R(x(k))u(k) in order to obtain a positive definite function Δ(x(k)).
To conclude, the SG algorithm is used to calculate p(k) in order to achieve the SG control goal defined above.
Proposition 4.
Consider a discrete-time nonlinear system of the form (8) with (18) as input [23]. Let Q be an SG goal function as defined in Definition 2 and denoted by Q(k(p)). Let p̄, p*∈𝒫 be positive constant values, let Δ(x(k)) be a positive definite function with Δ(0)=0, and let ϵ* be a sufficiently small positive constant. Assume the following.
(1) There exist p* and ϵ* such that
$$Q(k(p^{*}))\le\epsilon^{*}\ll\Delta(x(k)),\qquad 1-\frac{\epsilon^{*}}{\Delta(x(k))}\approx 1.\tag{23}$$
(2) For all p(k)∈𝒫,
$$\left(p^{*}-p(k)\right)^{T}\nabla_{p}Q(k(p))\le\epsilon^{*}-\Delta(x(k))<0,\tag{24}$$
where ∇_p Q(k(p)) denotes the gradient of Q(k(p)) with respect to p(k). Then, for any initial condition p(0)>0, there exists a k*∈ℤ+ such that the SG control goal (16) is achieved by means of the following dynamic variation of the parameter p(k):
$$p(k+1)=p(k)-\gamma_{d}(k)\nabla_{p}Q(k(p))\tag{25}$$
with
$$\gamma_{d}(k)=\gamma_{c}\,\delta(k)\left|\nabla_{p}Q(k(p))\right|^{-2},\quad 0<\gamma_{c}\le 2\Delta(x(k)),\quad \delta(k)=\begin{cases}1 & \text{for }Q(p(k))>\Delta(x(k)),\\ 0 & \text{otherwise.}\end{cases}\tag{26}$$
Finally, for k≥k*, p(k) becomes a constant value denoted by p¯ and the SG algorithm is completed.
With Q(p(k)) as defined in (20), the dynamic variation of the parameter p(k) in (25) results in
$$p(k+1)=p(k)+\Theta^{*},\tag{27}$$
where
$$\Theta^{*}=8\gamma_{d}(k)\,\frac{f^{T}(x(k))P'g(x(k))\,R^{2}(x(k))\,g^{T}(x(k))P'f(x(k))}{\left(2R(x(k))+p(k)\,g^{T}(x(k))P'g(x(k))\right)^{3}},\tag{28}$$
which is positive for all time k if p(0)>0. Therefore, positiveness of p(k) is ensured and the requirement P(k)=P^T(k)>0 for V(x(k))=(1/2)x^T(k)P(k)x(k) is guaranteed. When the SG control goal (21) is achieved, then p(k)=p̄ for k≥k*. Thus, the matrix P(k) in (18) is considered constant, P(k)=P, where P is computed as P=p̄P′, with P′ a positive definite design matrix. Under these constraints, we obtain
$$\alpha(x(k)):=u(k)=-\frac{1}{2}\left(R(x(k))+P_{2}(x(k))\right)^{-1}P_{1}(x(k)),\tag{29}$$
where P_1(x(k))=g^T(x(k))Pf(x(k)) and P_2(x(k))=(1/2)g^T(x(k))Pg(x(k)).
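The parameter adaptation (25)-(26) can be sketched as follows; the gradient and goal-function values fed in below are illustrative only, standing in for ∇_p Q and Q evaluated along a trajectory of (8):

```python
def sg_step(p, grad_Q, Q_val, Delta, gamma_c):
    """One speed-gradient parameter update, eqs. (25)-(26):
    p(k+1) = p(k) - γ_d(k) ∇_p Q, with γ_d = γ_c δ(k) |∇_p Q|^{-2};
    δ(k) = 1 only while the goal Q ≤ Δ(x(k)) (eq. (21)) is not yet met,
    so p(k) freezes at a constant p̄ once the SG control goal is achieved.
    """
    delta = 1.0 if Q_val > Delta else 0.0
    if delta == 0.0 or grad_Q == 0.0:
        return p                              # goal reached: p stays constant
    gamma_d = gamma_c * delta / grad_Q**2     # γ_c chosen with 0 < γ_c ≤ 2Δ
    return p - gamma_d * grad_Q

# hypothetical run over three time steps
p = 0.5                                       # p(0) > 0
for Q_val, grad in [(3.0, -2.0), (1.5, -1.0), (0.2, -0.5)]:
    p = sg_step(p, grad, Q_val, Delta=1.0, gamma_c=0.1)
```

With a negative gradient, each active step increases p, in agreement with the positiveness of Θ* in (28), and the last step leaves p unchanged because the goal (21) is already met.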
3.3. Tracking Reference
In the case of trajectory tracking, the control law is defined as follows [23]:
$$u(k)=-\frac{1}{2}\left(R(x(k))+P_{2}(x(k))\right)^{-1}P_{1}(x(k)),\tag{30}$$
where P_1(x(k))=g^T(x(k))P(f(x(k))−x_ref(k+1)) and P_2(x(k))=(1/2)g^T(x(k))Pg(x(k)).
4. Multimachine Power System Control
4.1. Multimachine Power System Complete Model
In this work, the proposed decentralized identification and control scheme is tested with the Western System Coordinating Council (WSCC) 3-machine, 9-bus system [6, 26]. The differential and algebraic equations which represent the ith generator dynamics and the power flow constraints, respectively [1, 6], are given by
$$\begin{aligned}
\dot{x}_{1i}&=x_{2i}-\omega_{s},\\
\dot{x}_{2i}&=\frac{\omega_{s}}{2H_{i}}\left(T_{mi}-\left(\psi_{di}I_{qi}-\psi_{qi}I_{di}\right)\right),\\
\dot{x}_{3i}&=\frac{1}{T'_{d0i}}\left(-x_{3i}-X_{dd}\left[I_{di}-X^{*}_{di}\left(x_{5i}+X_{dls}-x_{3i}\right)\right]+E_{fdi}\right),\\
\dot{x}_{4i}&=\frac{1}{T'_{q0i}}\left(-x_{4i}+X_{qq}\left[I_{qi}-X^{*}_{qi}\left(x_{6i}+X_{qls}+x_{4i}\right)\right]\right),\\
\dot{x}_{5i}&=\frac{1}{T''_{d0i}}\left(-x_{5i}+x_{3i}-\left(X'_{di}-X_{lsi}\right)I_{di}\right),\\
\dot{x}_{6i}&=\frac{1}{T''_{q0i}}\left(-x_{6i}-x_{4i}-\left(X'_{qi}-X_{lsi}\right)I_{qi}\right),
\end{aligned}\tag{31}$$
where x1 is the power angle of the ith generator in rad, x2 is the rotating speed of the ith generator in rad/s, x3 is the q-axis internal voltage of the ith generator in p.u., x4 is the d-axis internal voltage of the ith generator in p.u., x5 is the 1d-axis flux linkage of the ith generator in p.u., x6 is the 2q-axis flux linkage of the ith generator in p.u., Efdi is the excitation control input, and ψdi and ψqi are the d-axis and q-axis flux linkages of the ith generator in p.u., respectively; ωs is the synchronous rotor speed in rad/s, Idi and Iqi are the d-axis and q-axis currents of the ith generator in p.u., and E′di is the transient voltage in the d-axis of the ith generator. Besides, (31) is complemented with
$$\begin{gathered}
X^{*}_{di}=\frac{X'_{di}-X''_{di}}{\left(X'_{di}-X_{lsi}\right)^{2}},\qquad
X^{*}_{qi}=\frac{X'_{qi}-X''_{qi}}{\left(X'_{qi}-X_{lsi}\right)^{2}},\\
X_{dd}=X_{di}-X'_{di},\qquad X_{qq}=X_{qi}-X'_{qi},\\
X_{dls}=\left(X'_{di}-X_{lsi}\right)I_{di},\qquad X_{qls}=\left(X'_{qi}-X_{lsi}\right)I_{qi}
\end{gathered}\tag{32}$$
being parameters for each synchronous generator. It is important to consider that each machine is represented by a flux decay model given in [1, 6]; exciters and governors are not included in this model [1, 8].
4.2. Reduced Neural Model of Multimachine Power System
The model mentioned above [1] is in continuous time; due to this fact, the states are discretized using the Euler method. With the state variables discretized, the reduced neural model is proposed [7] as follows:
$$\begin{aligned}\hat{x}_{1}(k+1)&=f_{1}(k)\\ \hat{x}_{2}(k+1)&=f_{2}(k)\\ \hat{x}_{3}(k+1)&=f_{3}(k)+w_{34}u(k)\end{aligned}\tag{33}$$
$$\begin{aligned}f_{1}(k)&=w_{11}(k)S(\hat{x}_{1}(k))+w_{12}(k)S(\hat{x}_{2}(k))\\ f_{2}(k)&=w_{21}(k)S(\hat{x}_{1}(k))+w_{22}(k)S(\hat{x}_{2}(k))+w_{23}(k)S(\hat{x}_{3}(k))\\ f_{3}(k)&=w_{31}(k)S(\hat{x}_{1}(k))+w_{32}(k)S(\hat{x}_{2}(k))+w_{33}(k)S(\hat{x}_{3}(k)),\end{aligned}\tag{34}$$
where x̂_i estimates x_i (i=1,2,3). Given the neural reduced model, the inverse optimal SG control law is applied to each synchronous generator, that is, in a decentralized way. Thus, the control law is established from (30), where the matrix P takes different values for each generator and fault case: for the fault at the end of bus 7, 100×I, 5×I, and 20×I; for the fault at the end of bus 8, 80×I, 5×I, and 700×I; and for the fault at the end of bus 9, 100×I, 5×I, and 10×I, for generators 1, 2, and 3, respectively, where I is the 3×3 identity matrix.
From (33), f(x(k)) and g(x(k)) for the neural network control law are defined as
$$g(x(k))=\begin{bmatrix}0\\ 0\\ \omega_{34}\end{bmatrix},\qquad f(x(k))=\begin{bmatrix}f_{1}(k)\\ f_{2}(k)\\ f_{3}(k)\end{bmatrix}.\tag{35}$$
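Under the affine form (33)-(35), the reduced neural model per generator can be sketched as below; the weight matrix entries and ω34 are hypothetical placeholders for the EKF-trained values, with zeros where (34) has no connection:

```python
import numpy as np

def sigmoid(s, beta=1.0):
    """Activation S(·) of (4)."""
    return 1.0 / (1.0 + np.exp(-beta * s))

def neural_f_g(x_hat, W, w34):
    """Reduced neural model (33)-(34) in the affine form (35):
    x̂(k+1) = f(x̂(k)) + g u(k), with g = [0, 0, ω34]ᵀ.
    W is the 3x3 weight matrix of (34)."""
    S = sigmoid(x_hat)
    f = W @ S
    g = np.array([0.0, 0.0, w34])
    return f, g

W = np.array([[0.4, 0.1, 0.0],     # f1 uses S(x̂1), S(x̂2) only, as in (34)
              [0.2, 0.5, 0.1],     # f2 uses all three states
              [0.1, 0.1, 0.6]])    # f3 uses all three states
f, g = neural_f_g(np.zeros(3), W, w34=0.9)
```

The pair (f, g) returned here is exactly what the decentralized control law (30) consumes for each generator.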
It is important to note that [5] proves that low-order models are well suited for stability analysis and feedback control design for industrial power generators. Moreover, the use of neural networks allows modelling the system interconnections using only local information, as well as the unmodelled dynamics of the reduced model [1].
5. Preliminary Calculations for Faults
For the fault design, system data preparation is required; the following preliminary calculations are taken from [6], considering the parameters of the generators given in Tables 7 and 8.
All system data are converted to a common base; a system base of 100 MVA is frequently used.
The loads are converted to equivalent impedances or admittances. The data needed for this step are obtained from the load-flow study. Thus, if a certain load bus has voltage V̄_L, power P_L, reactive power Q_L, and current Ī_L flowing into a load admittance Ȳ_L = G_L + jB_L, then
$$P_{L}+jQ_{L}=\bar{V}_{L}\bar{I}_{L}^{*}=\bar{V}_{L}\left[\bar{V}_{L}^{*}\left(G_{L}-jB_{L}\right)\right]=V_{L}^{2}\left(G_{L}-jB_{L}\right).\tag{36}$$
The equivalent shunt admittance at that bus is given by
$$\bar{Y}_{L}=\frac{P_{L}}{V_{L}^{2}}-j\,\frac{Q_{L}}{V_{L}^{2}}.\tag{37}$$
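Step (37) translates directly into code. The load values used below (1.25 + j0.50 p.u. at V ≈ 0.9956, the values commonly reported for load A of the WSCC nine-bus system) are an assumption here; with them the result reproduces ȳL5 of Table 4:

```python
def load_admittance(P_L, Q_L, V_L):
    """Equivalent shunt admittance of a load bus, eq. (37):
    Y = P_L / V² - j Q_L / V², derived from S = V I* = V²(G - jB)."""
    return complex(P_L / V_L**2, -Q_L / V_L**2)

# assumed load-flow data for load A: P = 1.25 p.u., Q = 0.50 p.u., V = 0.9956 p.u.
y = load_admittance(1.25, 0.50, 0.9956)   # ≈ 1.2611 - j0.5044
```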
The internal voltages of the generators E_i∠δ_{i0} are calculated from the load-flow data. These internal angles may be computed from the pretransient terminal voltages V∠α as follows. Let the terminal voltage be used temporarily as a reference, as shown in Figure 1. If Ī = I_1 + jI_2, then, from the relation P + jQ = V̄Ī*, it is possible to obtain I_1 + jI_2 = (P − jQ)/V. Since E∠δ′ = V̄ + jx′_d Ī, then
$$E\angle\delta'=\left(V+\frac{Qx'_{d}}{V}\right)+j\,\frac{Px'_{d}}{V}.\tag{38}$$
The initial generator angle δ_0 is then obtained by adding the pretransient voltage angle α to δ′, or
$$\delta_{0}=\delta'+\alpha.\tag{39}$$
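Steps (38)-(39) can be sketched as follows; the dispatch values fed in (P = 0.716, Q ≈ 0.27 p.u., V = 1.040∠0°, x′d = 0.0608, commonly reported for generator 1 of the WSCC system) are an assumption here:

```python
import cmath

def internal_voltage(P, Q, V, alpha, xd_prime):
    """Internal generator voltage, eqs. (38)-(39): with the terminal
    voltage as temporary reference, E∠δ' = (V + Q x'd / V) + j (P x'd / V),
    and then δ0 = δ' + α (angles in radians)."""
    E = complex(V + Q * xd_prime / V, P * xd_prime / V)
    return abs(E), cmath.phase(E) + alpha

# assumed pretransient operating point for generator 1
E, delta0 = internal_voltage(P=0.716, Q=0.27, V=1.040, alpha=0.0, xd_prime=0.0608)
```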
The Y¯ matrix for each network condition is calculated. The following steps are usually needed.
The equivalent load impedances (or admittances) are connected between the load buses and the reference node; additional nodes are provided for the internal generator voltages (nodes 1,2,…,n in Figure 2) and the appropriate values of xd′ are connected between these nodes and the generator terminal nodes. Also, simulation of the fault impedance is added as required, and the admittance matrix is determined for each switching condition.
All impedance elements are converted to admittances.
Elements of the Ȳ matrix are identified as follows: Ȳ_ii is the sum of all the admittances connected to node i, and Ȳ_ij is the negative of the admittance between node i and node j.
Finally, all the nodes except the internal generator nodes are eliminated to obtain the Ȳ matrix for the reduced network. The reduction can be achieved by matrix operations, recalling that all the nodes have zero injection currents except the internal generator nodes. This property is used to obtain the network reduction as shown below. Let
$$I=YV,\tag{40}$$
where
$$I=\begin{bmatrix}I_{n}\\ 0\end{bmatrix}.\tag{41}$$
Generator representation for computing δ0.
Representation of a multimachine system (classical model).
Now the matrices Y and V are partitioned accordingly to get
$$\begin{bmatrix}I_{n}\\ 0\end{bmatrix}=\begin{bmatrix}Y_{nn}&Y_{nr}\\ Y_{rn}&Y_{rr}\end{bmatrix}\begin{bmatrix}V_{n}\\ V_{r}\end{bmatrix},\tag{42}$$
where the subscript n is used to denote generator nodes and the subscript r is used for the remaining nodes. Thus, for the network in Figure 2, V_n∈ℜ^{n×1} and V_r∈ℜ^{r×1}. Expanding (42),
$$I_{n}=Y_{nn}V_{n}+Y_{nr}V_{r},\qquad 0=Y_{rn}V_{n}+Y_{rr}V_{r},\tag{43}$$
from which we eliminate V_r to find
$$I_{n}=\left(Y_{nn}-Y_{nr}Y_{rr}^{-1}Y_{rn}\right)V_{n}.\tag{44}$$
The matrix (Y_nn − Y_nr Y_rr^{-1} Y_rn) is the desired reduced matrix Y∈ℜ^{n×n}, where n is the number of generators. The network reduction illustrated by (43)-(44) is a convenient analytical technique that can be used only when the loads are treated as constant impedances. If the loads are not considered to be constant impedances, the identity of the load buses must be retained. Network reduction can be applied only to those nodes that have zero injection current.
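The reduction (42)-(44) is a standard Kron reduction and can be sketched with NumPy; the 3-node matrix below is a toy example with one zero-injection node, not one of the paper's networks:

```python
import numpy as np

def kron_reduce(Y, n):
    """Network reduction of eqs. (42)-(44): keep the first n (internal
    generator) nodes and eliminate the remaining zero-injection nodes,
    Y_red = Y_nn - Y_nr Y_rr^{-1} Y_rn."""
    Ynn, Ynr = Y[:n, :n], Y[:n, n:]
    Yrn, Yrr = Y[n:, :n], Y[n:, n:]
    # solve(Yrr, Yrn) avoids forming Y_rr^{-1} explicitly
    return Ynn - Ynr @ np.linalg.solve(Yrr, Yrn)

# toy symmetric admittance matrix; node 3 carries zero injection current
Y = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]], dtype=complex)
Yred = kron_reduce(Y, 2)
```

The reduced matrix stays symmetric, as the reduced Y matrices in Tables 1-3 also are.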
Once the preliminary calculations are made to obtain the Y matrix for each fault at the corresponding bus, the network reduction for each fault is applied. For the first case of the analysis, the fault occurs at bus 7; the corresponding Y matrices are shown in Tables 9, 10, and 11 included in the Appendix. The network reduction of the Y matrix is then applied, and the result is given in Table 1.
Table 1: Reduced Y matrices at bus 7 (rows/columns: generator nodes 1, 2, 3).

Pre-fault:
  Node 1: 0.846-j2.988, 0.287+j1.513, 0.210+j1.226
  Node 2: 0.287+j1.513, 0.420-j2.724, 0.213+j1.088
  Node 3: 0.210+j1.226, 0.213+j1.088, 0.277-j2.368
Faulted:
  Node 1: 0.657-j3.816, 0.000+j0.000, 0.070+j0.631
  Node 2: 0.000+j0.000, 0.000-j5.486, 0.000+j0.000
  Node 3: 0.070+j0.631, 0.000+j0.000, 0.174-j2.796
Fault cleared:
  Node 1: 1.181-j2.229, 0.138+j0.726, 0.191+j1.079
  Node 2: 0.138+j0.726, 0.389-j1.953, 0.199+j1.229
  Node 3: 0.191+j1.079, 0.199+j1.229, 0.273-j2.342
For the second case of the analysis, the fault occurs at bus 8; the corresponding Y matrices are shown in Tables 12, 13, and 14 included in the Appendix. After the network reduction of the Y matrix is performed, the reduced networks are obtained as defined in Table 2.
Table 2: Reduced Y matrices at bus 8 (rows/columns: generator nodes 1, 2, 3).

Pre-fault:
  Node 1: 0.938-j2.798, 0.325+j1.588, 0.251+j1.315
  Node 2: 0.325+j1.588, 0.436-j2.694, 0.230+j1.123
  Node 3: 0.251+j1.315, 0.230+j1.123, 0.296-j2.325
Faulted:
  Node 1: 0.736-j3.569, 0.082+j0.535, 0.063+j0.534
  Node 2: 0.082+j0.535, 0.146-j4.128, 0.006+j0.058
  Node 3: 0.063+j0.534, 0.006+j0.058, 0.122-j3.115
Fault cleared:
  Node 1: 0.850-j3.252, 0.334+j1.346, 0.075+j0.569
  Node 2: 0.334+j1.346, 0.687-j2.061, 0.032+j0.148
  Node 3: 0.075+j0.569, 0.032+j0.148, 0.124-j3.111
For the third case of the analysis, the fault occurs at bus 9; the corresponding Y matrices are shown in Tables 15, 16, and 17 included in the Appendix. After the network reduction of the Y matrix is performed, the reduced networks are obtained as defined in Table 3.
Table 3: Reduced Y matrices at bus 9 (rows/columns: generator nodes 1, 2, 3).

Pre-fault:
  Node 1: 0.938-j2.798, 0.325+j1.588, 0.251+j1.315
  Node 2: 0.325+j1.588, 0.436-j2.694, 0.230+j1.123
  Node 3: 0.251+j1.315, 0.230+j1.123, 0.296-j2.325
Faulted:
  Node 1: 0.727-j3.735, 0.135+j0.787, -0.004+j0.001
  Node 2: 0.135+j0.787, 0.263-j3.377, -0.002+j0.000
  Node 3: -0.004+j0.001, -0.002+j0.000, -0.010-j4.168
Fault cleared:
  Node 1: 1.271-j1.980, 0.290+j1.247, 0.102+j0.344
  Node 2: 0.290+j1.247, 0.380-j2.957, 0.149+j0.702
  Node 3: 0.102+j0.344, 0.149+j0.702, 0.209-j2.853
6. Fault Simulation
The power electric system used in this paper is presented in Figure 3. It corresponds to the nine-bus system. Figure 3 also includes the bus interconnections and the related parameters of the transmission lines. Data for simulation are given in Tables 7 and 8 [6], where the modeling of the system is explained and the related parameters for each synchronous generator are described.
Nine bus system.
In this paper, the 18 state variables related to the 3 synchronous generators are stabilized using the neural reduced model [7], reaching stabilization for the system with the fault in three different transmission lines; for simulation, the sampling time is set to 0.005 ms.
There are three cases contemplated in the system simulation.
The fault occurs near bus 7 at the end of the lines 5–7. Results are depicted in Figure 4 for generator 1, Figure 5 for generator 2, and Figure 6 for generator 3.
The fault occurs near bus 8 at the end of the lines 8-9. Results are depicted in Figure 7 for generator 1, Figure 8 for generator 2, and Figure 9 for generator 3.
The fault occurs near bus 9 at the end of the lines 6–9. Results are depicted in Figure 10 for generator 1, Figure 11 for generator 2, and Figure 12 for generator 3.
Generator 1 response with a fault at bus 7.
Generator 2 response with a fault at bus 7.
Generator 3 response with a fault at bus 7.
Generator 1 response with a fault at bus 8.
Generator 2 response with a fault at bus 8.
Generator 3 response with a fault at bus 8.
Generator 1 response with a fault at bus 9.
Generator 2 response with a fault at bus 9.
Generator 3 response with a fault at bus 9.
For the cases mentioned above, the fault is incepted at 10 seconds of simulation; thus, it is possible to see that the system has a prefault state (before 10 s), a fault state (at 10 s), and a postfault state (after 10 s). The admittances for the loads are given in p.u. in Table 4.
Table 4: Admittance loads (p.u.).

  Load A: ȳL5 = 1.2610 - j0.5044
  Load B: ȳL6 = 0.8777 - j0.2926
  Load C: ȳL8 = 0.9690 - j0.3391
The initial conditions for the system are given in Table 5.
Table 5: Initial conditions of the generators.

  Initial condition    Generator 1    Generator 2    Generator 3
  x01                  0.0396         0.3444         0.23
  x02                  377            377            377
  x03                  1.056          1.0502         1.0170
  x04                  0              0.622          0.624
  x05                  1.0478         0.7007         0.7078
  x06                  -0.0425        -0.7568        -0.7328
It is important to note that the initial conditions of the generators are defined by their respective parameters [1]; however, in order to test the NN approximation capabilities, it is common to use signals that represent a wide range of frequencies; hence, the plant signals may exhibit high frequency behavior [10].
The control goal is to stabilize the power electric system; this is why the references for each state variable of the neural reduced model of the multimachine system are proposed as in Table 6.
Table 6: References for the system.

  Reference    Generator 1    Generator 2    Generator 3
  x1ref        0.0396         0.3444         0.23
  x2ref        377            377            377
  x3ref        0.5            1.0502         1.0170
Table 7: Parameters of the generators.

  Parameter      Generator 1    Generator 2    Generator 3
  H (s)          23.6400        6.4000         3.0100
  Tm (p.u.)      0.7160         1.6300         0.8500
  T'd0 (s)       8.9600         6.0000         5.8900
  T''d0 (s)      0.2000         0.3000         0.4000
  T'q0 (s)       0.3100         0.5350         0.6000
  T''q0 (s)      0.2000         0.3000         0.4000
  Xd (p.u.)      0.1460         0.8958         1.3125
  X'd (p.u.)     0.0608         0.1198         0.1813
  X''d (p.u.)    0.0200         0.0500         0.0800
  Xq (p.u.)      0.0969         0.8645         1.2578
  X'q (p.u.)     0.0969         0.1969         0.2500
  X''q (p.u.)    0.0200         0.0500         0.0800
  Xls (p.u.)     0.0336         0.0521         0.0742
Table 8: Parameters of the transmission lines.

  Bus i    Bus j    Rij       Xij       Gij       Bij
  1        4        0.000     0.1184    0.000     -8.4459
  2        7        0.000     0.1823    0.000     -5.4855
  3        9        0.000     0.2399    0.000     -4.1684
  4        5        0.0100    0.0850    1.3652    -11.6041
  4        6        0.0170    0.0920    1.9522    -10.5107
  5        7        0.0320    0.1610    1.1876    -5.9751
  6        9        0.0390    0.1700    1.2820    -5.5882
  7        8        0.0085    0.0720    1.6171    -9.7843
  8        9        0.0119    0.1008    1.1551    -9.7843
  5        0        0.000     0.000     1.2610    -0.2634
  6        0        0.000     0.000     0.8777    -0.0346
  8        0        0.000     0.000     0.9690    -1.1601
  4        0        0.000     0.000     0.000     0.1670
  7        0        0.000     0.000     0.000     0.2275
  9        0        0.000     0.000     0.000     0.2835
Table 9: Y matrix of the prefault network near bus 7 (columns: nodes 1-9).

  Node 1: -j8.4459, 0, 0, j8.4459, 0, 0, 0, 0, 0
  Node 2: 0, -j5.4855, 0, 0, 0, 0, j5.4855, 0, 0
  Node 3: 0, 0, -j4.1684, 0, 0, 0, 0, 0, j4.1684
  Node 4: j8.4459, 0, 0, 3.3074-j30.3937, -1.3652+j11.6041, -1.9422+j10.5107, 0, 0, 0
  Node 5: 0, 0, 0, -1.3652+j11.6041, 3.8138-j17.8426, 0, -1.1876+j5.9751, 0, 0
  Node 6: 0, 0, 0, -1.9422+j10.5107, 0, 4.1019-j15.4225, 0, 0, -1.2820+j5.5882
  Node 7: 0, j5.4855, 0, 0, -1.1876+j5.9751, 0, 2.8047-j24.9311, -1.6171+j13.6980, 0
  Node 8: 0, 0, 0, 0, 0, 0, -1.6171+j13.6980, 3.7412-j23.6423, -1.1551+j9.7843
  Node 9: 0, 0, j4.1684, 0, 0, -1.2820+j5.5882, 0, -1.1551+j9.7843, 2.4371-j19.2574
Table 10: Y matrix of the faulted network near bus 7 (columns: nodes 1-9).

  Node 1: -j8.4459, 0, 0, j8.4459, 0, 0, 0, 0, 0
  Node 2: 0, -j5.4855, 0, 0, 0, 0, 0.0100, 0, 0
  Node 3: 0, 0, -j4.1684, 0, 0, 0, 0, 0, j4.1684
  Node 4: j8.4459, 0, 0, 3.3074-j30.3937, -1.3652+j11.6041, -1.9422+j10.5107, 0, 0, 0
  Node 5: 0, 0, 0, -1.3652+j11.6041, 3.8138-j17.8426, 0, 0.0100, 0, 0
  Node 6: 0, 0, 0, -1.9422+j10.5107, 0, 4.1019-j15.4225, 0, 0, -1.2820+j5.5882
  Node 7: 0, 0.0100, 0, 0, 0.0100, 0, 0.0100, 0.0100, 0
  Node 8: 0, 0, 0, 0, 0, 0, 0.0100, 3.7412-j23.6423, -1.1551+j9.7843
  Node 9: 0, 0, j4.1684, 0, 0, -1.2820+j5.5882, 0, -1.1551+j9.7843, 2.4371-j19.2574
Table 11: Y matrix of the fault-cleared network near bus 7 (columns: nodes 1-9).

  Node 1: -j8.4459, 0, 0, j8.4459, 0, 0, 0, 0, 0
  Node 2: 0, -j5.4855, 0, 0, 0, 0, j5.4855, 0, 0
  Node 3: 0, 0, -j4.1684, 0, 0, 0, 0, 0, j4.1684
  Node 4: j8.4459, 0, 0, 3.3074-j30.3937, -1.3652+j11.6041, -1.9422+j10.5107, 0, 0, 0
  Node 5: 0, 0, 0, -1.3652+j11.6041, 2.6262-j11.8675, 0, 0.0100, 0, 0
  Node 6: 0, 0, 0, -1.9422+j10.5107, 0, 4.1019-j15.4225, 0, 0, -1.2820+j5.5882
  Node 7: 0, j5.4855, 0, 0, 0.0100, 0, 1.6171-j18.9559, -1.6171+j13.6980, 0
  Node 8: 0, 0, 0, 0, 0, 0, -1.6171+j13.6980, 3.7412-j23.6423, -1.1551+j9.7843
  Node 9: 0, 0, j4.1684, 0, 0, -1.2820+j5.5882, 0, -1.1551+j9.7843, 2.4371-j19.2574
Y matrix of the prefault network (fault near bus 8).

Node | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 | -j8.4459 | 0 | 0 | j8.4459 | 0 | 0 | 0 | 0 | 0
2 | 0 | -j5.4855 | 0 | 0 | 0 | 0 | j5.4855 | 0 | 0
3 | 0 | 0 | -j4.1684 | 0 | 0 | 0 | 0 | 0 | j4.1684
4 | j8.4459 | 0 | 0 | 3.3074-j30.3937 | -1.3652+j11.6041 | -1.9422+j10.5107 | 0 | 0 | 0
5 | 0 | 0 | 0 | -1.3652+j11.6041 | 3.8138-j17.8426 | 0 | -1.1876+j5.9751 | 0 | 0
6 | 0 | 0 | 0 | -1.9422+j10.5107 | 0 | 4.1019-j15.4225 | 0 | 0 | -1.2820+j5.5882
7 | 0 | j5.4855 | 0 | 0 | -1.1876+j5.9751 | 0 | 2.8047-j24.9311 | -1.6171+j13.6980 | 0
8 | 0 | 0 | 0 | 0 | 0 | 0 | -1.6171+j13.6980 | 3.7412-j23.6423 | -1.1551+j9.7843
9 | 0 | 0 | j4.1684 | 0 | 0 | -1.2820+j5.5882 | 0 | -1.1551+j9.7843 | 2.4371-j19.2574
Y matrix of the faulted network (fault near bus 8).

Node | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 | -j8.4459 | 0 | 0 | j8.4459 | 0 | 0 | 0 | 0 | 0
2 | 0 | -j5.4855 | 0 | 0 | 0 | 0 | j5.4855 | 0 | 0
3 | 0 | 0 | -j4.1684 | 0 | 0 | 0 | 0 | 0 | j4.1684
4 | j8.4459 | 0 | 0 | 3.3074-j30.3937 | -1.3652+j11.6041 | -1.9422+j10.5107 | 0 | 0 | 0
5 | 0 | 0 | 0 | -1.3652+j11.6041 | 3.8138-j17.8426 | 0 | -1.1876+j5.9751 | 0 | 0
6 | 0 | 0 | 0 | -1.9422+j10.5107 | 0 | 4.1019-j15.4225 | 0 | 0 | -1.2820+j5.5882
7 | 0 | j5.4855 | 0 | 0 | -1.1876+j5.9751 | 0 | 2.8047-j24.9311 | 0.0010 | 0
8 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0100 | 0.0010 | 0.0010
9 | 0 | 0 | j4.1684 | 0 | 0 | -1.2820+j5.5882 | 0 | 0.0010 | 2.4371-j19.2574
Y matrix of the fault-cleared network (fault near bus 8; line 8-9 tripped).

Node | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 | -j8.4459 | 0 | 0 | j8.4459 | 0 | 0 | 0 | 0 | 0
2 | 0 | -j5.4855 | 0 | 0 | 0 | 0 | j5.4855 | 0 | 0
3 | 0 | 0 | -j4.1684 | 0 | 0 | 0 | 0 | 0 | j4.1684
4 | j8.4459 | 0 | 0 | 3.3074-j30.3937 | -1.3652+j11.6041 | -1.9422+j10.5107 | 0 | 0 | 0
5 | 0 | 0 | 0 | -1.3652+j11.6041 | 3.8138-j17.8426 | 0 | -1.1876+j5.9751 | 0 | 0
6 | 0 | 0 | 0 | -1.9422+j10.5107 | 0 | 4.1019-j15.4225 | 0 | 0 | -1.2820+j5.5882
7 | 0 | j5.4855 | 0 | 0 | -1.1876+j5.9751 | 0 | 2.8047-j24.9311 | -1.6171+j13.6980 | 0
8 | 0 | 0 | 0 | 0 | 0 | 0 | -1.6171+j13.6980 | 2.5861-j13.8580 | 0
9 | 0 | 0 | j4.1684 | 0 | 0 | -1.2820+j5.5882 | 0 | 0 | 2.4371-j19.2574
Y matrix of the prefault network (fault near bus 9).

Node | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 | -j8.4459 | 0 | 0 | j8.4459 | 0 | 0 | 0 | 0 | 0
2 | 0 | -j5.4855 | 0 | 0 | 0 | 0 | j5.4855 | 0 | 0
3 | 0 | 0 | -j4.1684 | 0 | 0 | 0 | 0 | 0 | j4.1684
4 | j8.4459 | 0 | 0 | 3.3074-j30.3937 | -1.3652+j11.6041 | -1.9422+j10.5107 | 0 | 0 | 0
5 | 0 | 0 | 0 | -1.3652+j11.6041 | 3.8138-j17.8426 | 0 | -1.1876+j5.9751 | 0 | 0
6 | 0 | 0 | 0 | -1.9422+j10.5107 | 0 | 4.1019-j15.4225 | 0 | 0 | -1.2820+j5.5882
7 | 0 | j5.4855 | 0 | 0 | -1.1876+j5.9751 | 0 | 2.8047-j24.9311 | -1.6171+j13.6980 | 0
8 | 0 | 0 | 0 | 0 | 0 | 0 | -1.6171+j13.6980 | 3.7412-j23.6423 | -1.1551+j9.7843
9 | 0 | 0 | j4.1684 | 0 | 0 | -1.2820+j5.5882 | 0 | -1.1551+j9.7843 | 2.4371-j19.2574
Y matrix of the faulted network (fault near bus 9).

Node | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 | -j8.4459 | 0 | 0 | j8.4459 | 0 | 0 | 0 | 0 | 0
2 | 0 | -j5.4855 | 0 | 0 | 0 | 0 | j5.4855 | 0 | 0
3 | 0 | 0 | -j4.1684 | 0 | 0 | 0 | 0 | 0 | 0.0100
4 | j8.4459 | 0 | 0 | 3.3074-j30.3937 | -1.3652+j11.6041 | -1.9422+j10.5107 | 0 | 0 | 0.0100
5 | 0 | 0 | 0 | -1.3652+j11.6041 | 3.8138-j17.8426 | 0 | -1.1876+j5.9751 | 0 | 0.0100
6 | 0 | 0 | 0 | -1.9422+j10.5107 | 0 | 4.1019-j15.4225 | 0 | 0 | 0.0100
7 | 0 | j5.4855 | 0 | 0 | -1.1876+j5.9751 | 0 | 2.8047-j24.9311 | -1.6171+j13.6980 | 0.0100
8 | 0 | 0 | 0 | 0 | 0 | 0 | -1.6171+j13.6980 | 3.7412-j23.6423 | 0.0100
9 | 0 | 0 | 0.0100 | 0 | 0 | 0.0100 | 0 | 0.0100 | 0.0100
Y matrix of the fault-cleared network (fault near bus 9; line 6-9 tripped).

Node | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 | -j8.4459 | 0 | 0 | j8.4459 | 0 | 0 | 0 | 0 | 0
2 | 0 | -j5.4855 | 0 | 0 | 0 | 0 | j5.4855 | 0 | 0
3 | 0 | 0 | -j4.1684 | 0 | 0 | 0 | 0 | 0 | j4.1684
4 | j8.4459 | 0 | 0 | 3.3074-j30.3937 | -1.3652+j11.6041 | -1.9422+j10.5107 | 0 | 0 | 0
5 | 0 | 0 | 0 | -1.3652+j11.6041 | 3.8138-j17.8426 | 0 | -1.1876+j5.9751 | 0 | 0
6 | 0 | 0 | 0 | -1.9422+j10.5107 | 0 | 2.8199-j9.8343 | 0 | 0 | 0
7 | 0 | j5.4855 | 0 | 0 | -1.1876+j5.9751 | 0 | 2.8047-j24.9311 | -1.6171+j13.6980 | 0
8 | 0 | 0 | 0 | 0 | 0 | 0 | -1.6171+j13.6980 | 3.7412-j23.6423 | -1.1551+j9.7843
9 | 0 | 0 | j4.1684 | 0 | 0 | 0 | 0 | -1.1551+j9.7843 | 2.4371-j19.2574
7. Conclusions
In this paper, an SG discrete-time inverse optimal controller is synthesized for a reduced-order neural model to stabilize a multimachine power electric system in the presence of a fault near bus 7, bus 8, or bus 9. Simulation results show that the proposed controller stabilizes the state efficiently in all three cases, recovering system stability after the fault occurs. As future work, the authors are considering the stability analysis including the neural decentralized controller, as well as the analysis of control delay for the closed-loop system.
Appendix
In this appendix, the parameters used for the simulations are presented. Tables 7 and 8 show the parameters for the generators and transmission lines, respectively. Tables 9, 10, and 11 display the Y matrix of the network with a fault near bus 7 for the prefault, fault, and fault-cleared conditions. Tables 12, 13, and 14 show the Y matrix of the network with a fault near bus 8 for the prefault, fault, and fault-cleared conditions. Tables 15, 16, and 17 present the Y matrix of the network with a fault near bus 9 for the prefault, fault, and fault-cleared conditions.
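In a transient-stability simulation, the three Y matrices for each case are used sequentially: the prefault matrix before the fault, the faulted matrix while the fault is on, and the fault-cleared matrix after the line is tripped. A minimal sketch of that stage selection (the function name and times are illustrative, not from the paper):

```python
def select_ybus(t, t_fault, t_clear, Y_pre, Y_fault, Y_post):
    """Return the admittance matrix in effect at simulation time t:
    prefault before the fault occurs, faulted until it is cleared,
    and the fault-cleared network afterwards."""
    if t < t_fault:
        return Y_pre
    if t < t_clear:
        return Y_fault
    return Y_post
```

The network algebraic equations are then solved with the returned matrix at each integration step.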
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors thank the support of CONACYT Mexico through Projects 103191Y, 106838Y, and 156567Y. They also thank the anonymous reviewers for their very useful comments, which helped to improve the paper. The first author also thanks the L'Oreal-AMC (Mexican Academy of Sciences) scholarship for Women in Science.