Terminal guidance against a maneuvering target remains a focal research problem. Most of the literature focuses on estimating the target acceleration and the time to go within the guidance law, both of which are difficult to acquire. This paper presents a terminal guidance law based on a receding horizon control strategy. The proposed guidance law adopts the basic framework of receding horizon control, dividing the guidance process into several finite time horizons. Optimal control theory and a target motion prediction model are then used to derive the guidance law for a minimum-time index function, with the initial conditions renewed at the start of each horizon. Finally, the guidance law iterates repeatedly until the target is intercepted. The resulting guidance law is suboptimal, requires less guidance information, and does not need estimates of the target acceleration or the time to go. Numerical simulation verifies that the proposed guidance law is more effective than traditional methods against targets with constant and sinusoidal bounded acceleration.
1. Introduction
Generally, there are two design methods for an unmanned aerial vehicle (UAV) guidance control system. The first, following the principle of timescale separation, divides the flight system into a high-frequency attitude-control inner loop and a low-frequency guidance outer loop that are designed separately and independently [1]. The other introduces overload, aspect angle, and other information for an integrated guidance-and-control design at the inner loop [2]. This paper studies the design of the outer-loop guidance law for the first method. Target maneuver refers to continual changes in the speed, angle, and acceleration of the target's motion. Guidance technology against maneuvering targets has always been an emphasis of guidance law research. As technology has developed, target maneuvering capability has strengthened steadily and target maneuvers have become even harder to predict, which is a major issue restricting improvements in guidance precision. Many scholars have therefore carried out extensive studies on new guidance laws for maneuvering targets. Recent studies on guidance laws for maneuvering targets are mainly classified into two types [3–6]: (1) optimal guidance laws and (2) nonlinear guidance laws.
The linear optimal guidance law uses optimal control theory to design a controller with terminal constraints. The optimal guidance law (OGL) is derived from linearized guidance motion equations so that a quadratic performance index is minimized under the terminal constraint condition. Under the assumption of known target information, [7] takes the guidance command to be perpendicular to the missile velocity vector; the optimal guidance law for intercepting a maneuvering target is then obtained by integration, yielding a nonlinear algebraic equation. Du et al. [8] studied a 3D guidance law with constraints; in the presence of external disturbance of the target acceleration, the guidance law design was transformed into a dynamic programming problem. Hexner and Shima [9] proposed a stochastic optimal control guidance law with terminal constraints; their results indicate that, when the target maneuver is bounded, its interception performance is superior to the classical optimal guidance law. A time-to-go tracker for variable-speed motion was proposed as a recursive algorithm, improving the time-to-go estimate and thereby effectively improving the guidance performance [10]. In [11], a new impact-angle-control optimal guidance law was developed for missiles with arbitrary velocity profiles against maneuvering targets.
Guidance laws based on nonlinear control methods derive the guidance law from nonlinear control theory. Widely used nonlinear methods include variable structure control, Lyapunov optimizing feedback control, and H∞ control [12–17]. The guidance law presented in [12] is based on a nonsingular terminal sliding mode, a smooth second-order sliding mode, and a finite-time-convergent disturbance observer, which estimates and compensates the lumped uncertainty in the missile guidance system; no prior knowledge of the target maneuver is required. Shang et al. [13] treated the target maneuver as an external disturbance and, for small miss distances, proposed an impact-time-controllable guidance law based on finite-time-convergence control theory, under which the system state converges to the specified sliding mode within finite time. Zhou et al. [14] put forward a guidance law based on integral sliding mode control, relaxing the constant-speed assumption of conventional impact-time-control guidance laws. Vincent and Morgan [15] used the Lyapunov optimal feedback control method to derive a nonlinear guidance law whose advantage is that neither the LOS angle rate nor the target acceleration needs to be measured. Yang and Chen [16] treated the target maneuver as a disturbance input, converting the missile guidance problem into a nonlinear disturbance attenuation control problem; three kinds of H∞ guidance laws were obtained by deriving the related Hamilton-Jacobi partial differential inequality.
Although guidance laws based on different theories have been proposed for maneuvering targets, there is no uniform method for estimating the uncertain motion information of an arbitrarily maneuvering target. Optimal guidance laws need estimates of the target acceleration and the time to go; if the estimation precision is low, the interception performance is significantly weakened [17, 18]. The time-to-go estimation accuracy of a linear guidance law greatly affects its performance, and a linear optimal guidance law is only effective for small aspect angles. In most cases, unknown disturbances and unmodeled dynamics exist in the actual system and degrade the performance of the guidance law. Guidance laws based on nonlinear methods require much guidance information and take relatively complicated forms, which causes difficulties in practical engineering applications; moreover, nonlinear methods cannot ensure optimality [19, 20]. Consequently, more effective guidance laws should be designed for maneuvering targets.
To treat the issues arising when attacking such maneuvering targets, this paper proposes a UAV terminal guidance law based on a receding horizon control strategy. Receding horizon control (RHC) is a control technique that repeatedly solves an optimal control problem online using the currently measured system state; it has been applied in aircraft control and guidance [17–23]. This paper uses the receding horizon control method to solve the guidance law design problem. First, the guidance control strategy within the receding horizon is formulated; then the minimum-time optimal control law within one receding horizon is derived, and the guidance law and its iterative algorithm are designed on this basis. The algorithm uses the control commands generated by the optimal control law in each horizon, forming a suboptimal guidance law over the whole guidance process. Compared with conventional terminal guidance laws, the proposed law requires less guidance information: it does not need to estimate the time to go and can intercept a maneuvering target with bounded acceleration without knowledge of its future acceleration.
The remainder of the paper is organized as follows: Section 2 describes the relative motion model between the UAV and the target; Section 3 presents the terminal guidance law based on the receding horizon control strategy; Section 4 gives comparative simulation results and analysis of the proposed law against several other guidance laws.
2. Problem Formulation
This section specifies the mathematical guidance model for intercepting the target. To highlight the major issues of the research, the following is assumed:
The UAV and target speeds are constant.
The body axis of the UAV coincides with its velocity direction; the error angle is negligible.
At the initial moment of guidance, the target is within the field of view of the UAV.
The response time delay of the aircraft is negligible.
Under the above assumptions, the UAV and target can be abstracted as controllable mass points. Since a three-dimensional movement can be decomposed into two mutually perpendicular planar motions, the guidance law is studied for the interception motion of the UAV and target in a common plane, as shown in Figure 1. Subscripts M and T denote quantities of the UAV and target, respectively: r is the relative distance between the UAV and target, q is the LOS (line-of-sight) angle between them, θM is the course angle of the UAV, θT is the course angle of the target, VM is the speed of the UAV, and VT is the speed of the target. q, θM, VM, and VT are known quantities; θT is unknown. The guidance process consists of the UAV engaging the target according to the designed guidance law.
Relative motion relationship of UAV and target.
The guidance equations are as follows:

(1) \dot{r} = V_T\cos(\theta_T - q) - V_M\cos(\theta_M - q),
(2) \dot{q} = \dfrac{V_T\sin(\theta_T - q) - V_M\sin(\theta_M - q)}{r},
(3) a_M = V_M\dot{\theta}_M,
(4) a_T = V_T\dot{\theta}_T.
Equations (1)–(4) constitute the planar guidance kinematic model of the UAV and target. The basic conditions required for a successful guidance attack on the target are

(5) V_M > V_T, \quad \dfrac{V_M^2}{r_M} \ge \dfrac{V_T^2}{r_T},
(6) r(t_f) \le R.
In formula (5), r_M and r_T denote the minimum turning radii of the UAV and target, respectively. Formula (5) ensures that the speed of the UAV is greater than the target speed and that the maneuvering capability of the UAV is stronger than that of the target. In formula (6), r(t_f) is the relative distance at the final moment of the guidance attack process (namely, the miss distance) and R denotes the detection range of the UAV.
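Under the stated assumptions, the planar engagement model (1)–(4) and the feasibility check (5) can be sketched as follows; the function names are ours, not the paper's, and the turn-radius condition is rewritten through a = V²/r for a coordinated turn.

```python
import math

def engagement_derivatives(r, q, theta_M, theta_T, V_M, V_T, a_M, a_T):
    """Right-hand side of the planar engagement model, Eqs. (1)-(4).

    r        : relative range UAV -> target [m]
    q        : line-of-sight (LOS) angle [rad]
    theta_M  : UAV course angle [rad]; theta_T : target course angle [rad]
    a_M, a_T : lateral accelerations; course rates follow from (3)-(4).
    """
    r_dot = V_T * math.cos(theta_T - q) - V_M * math.cos(theta_M - q)        # Eq. (1)
    q_dot = (V_T * math.sin(theta_T - q) - V_M * math.sin(theta_M - q)) / r  # Eq. (2)
    theta_M_dot = a_M / V_M   # Eq. (3) rearranged
    theta_T_dot = a_T / V_T   # Eq. (4) rearranged
    return r_dot, q_dot, theta_M_dot, theta_T_dot

def feasible(V_M, V_T, a_M_max, a_T_max):
    """Necessary interception conditions of Eq. (5): speed and turn advantage.

    V_M^2/r_M >= V_T^2/r_T is equivalent to a_M_max >= a_T_max because the
    lateral acceleration of a coordinated turn is a = V^2 / r.
    """
    return V_M > V_T and a_M_max >= a_T_max
```

For a head-on geometry (theta_M = q = 0, theta_T = π) the closing rate reduces to −(V_M + V_T), which is a quick sanity check of the signs in (1).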
The guidance law designed in this paper must not only ensure that the relative distance between the UAV and the maneuvering target satisfies (6) at the terminal time t_f, but also ensure that the UAV attacks the target within the shortest time once the target is detected. In other words, it must satisfy the constraint of inequality (6) as well as minimize the time index function

(7) J = t_f = \int_0^{t_f} 1\,dt.
If the variation of θT(t) were known, the target trajectory and the time to go could be estimated accurately in advance, and (7) could be minimized using optimal control theory; denote the resulting minimum guidance time by T_min. In practical applications, the target's motion information (trajectory and acceleration) can hardly be predicted accurately. Consequently, the paper adopts a receding horizon control strategy: minimum-time optimal guidance is realized within each receding horizon, and the succession of receding horizons constitutes overall state feedback control during the whole guidance process, yielding a suboptimal solution for the minimum intercept time. Suboptimality is relative to the optimal control with fully known target motion. From the four assumptions above, the motion trajectory of the UAV is determined by θM(t) (VM being constant in the plane). Therefore, the minimum-time problem of (7) can be transformed into designing an optimal θM(t) so that index (7) is minimized while the constraint conditions (5) and (6) are satisfied simultaneously. The paper adopts θM(t) as the control variable for the guidance law design.
3. Guidance Method
In this section we derive the proposed terminal guidance law based on the receding horizon control strategy. The guidance process is divided into several finite time horizons; minimum-time optimal control is carried out in each horizon with the initial conditions continually renewed, and the iteration is repeated until the target is intercepted. The resulting guidance law is suboptimal and requires little guidance information: it does not need to estimate the time to go and can intercept a maneuvering target with bounded acceleration.
3.1. The Receding Horizon Control Strategy
The receding horizon control strategy is shown in Figure 2. In the figure, Δt is the unit time of online computation; tc is the online computation time of the guidance algorithm; Teh is the update cycle of the guidance command; ti (i = 1, 2, …, n) are the guidance command update times; tp is the length of the receding horizon; and u is the guidance command.
Receding horizon control strategy.
The receding horizon control strategy solves an optimization problem over the receding horizon tp, taking the current state measurements as initial conditions, and computes the optimal control solution u* online. The control is executed over one guidance command execution cycle Teh, after which the system obtains new measurements and takes them as the new initial conditions; the optimal control for the next finite horizon is then computed in the same way. This process repeats until the requirements are satisfied, yielding a feedback control law. Receding horizon control only requires the optimal control along the system's current trajectory, avoiding the global and difficult-to-compute Hamilton-Jacobi approach. In addition, the closed-loop stability of receding horizon control has been verified.
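The loop above can be sketched generically as follows; the four callbacks are hypothetical placeholders, and the key point is that only the first command cycle of each open-loop solution is executed before re-measuring, which makes the overall scheme closed loop.

```python
def receding_horizon_control(measure_state, solve_horizon, apply_command,
                             intercepted, t_p, T_eh):
    """Generic receding-horizon loop (hypothetical helper callbacks).

    measure_state()        -> current state, used as the initial condition
    solve_horizon(x0, t_p) -> optimal command u* over one horizon of length t_p
    apply_command(u, T_eh) -> execute u for one command-update cycle T_eh
    intercepted(x)         -> True when the terminal condition (6) is met
    """
    x = measure_state()
    while not intercepted(x):
        u_star = solve_horizon(x, t_p)   # open-loop optimum on [0, t_p]...
        apply_command(u_star, T_eh)      # ...but only T_eh of it is executed
        x = measure_state()              # re-measure: closed loop overall
    return x
```

With stub callbacks that decrement a scalar state, the loop terminates as soon as the terminal test is met.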
3.2. Derivation of the Guidance Law in One Receding Horizon
Following the receding horizon control strategy of Section 3.1, we derive the optimal guidance control law in one receding horizon for the nonlinear optimal control problem composed of (1), (2), and (7). Receding horizon control performs optimal tracking of the target over successive horizons. Because the future trajectory or maneuver of the target is unknown, in the nth receding horizon tpn the target is assumed to escape without maneuvering, moving from the state measured at the initial time of the horizon. The tracking objective is to ensure that the UAV intercepts the target in minimum time. Consequently, the problem to be solved is to obtain the control θM(t) over the given horizon tpn that achieves

(8) \min J_n = t_{fn} = \int_0^{t_{fn}} 1\,dt.
The minimum-time interception in a single horizon admits an analytical solution through optimal control theory and the minimum principle. First, form the Hamiltonian

(9) H = 1 + \lambda_r\left[V_T\cos(\theta_T - q) - V_M\cos(\theta_M - q)\right] + \lambda_q\,\dfrac{V_T\sin(\theta_T - q) - V_M\sin(\theta_M - q)}{r},

where \lambda = [\lambda_r\ \ \lambda_q]^{T} is the costate. The costate equations are

(10) \dot{\lambda}_r = -\dfrac{\partial H}{\partial r} = \lambda_q\,\dfrac{V_T\sin(\theta_T - q) - V_M\sin(\theta_M - q)}{r^2} = \lambda_q\,\dfrac{\dot{q}}{r},
(11) \dot{\lambda}_q = -\dfrac{\partial H}{\partial q} = -\lambda_r\left[V_T\sin(\theta_T - q) - V_M\sin(\theta_M - q)\right] + \lambda_q\,\dfrac{V_T\cos(\theta_T - q) - V_M\cos(\theta_M - q)}{r} = -\lambda_r r\dot{q} + \lambda_q\,\dfrac{\dot{r}}{r}.
Solving the costate differential equations (10) and (11) yields λr and λq:

(12) \lambda_r(t) = k_1\sin(q + k_2),
(13) \lambda_q(t) = k_1 r\cos(q + k_2),

where k1 and k2 are integration constants.
Since the terminal time t_{fn} of the horizon is free, this is a free-terminal-time problem. From the transversality conditions \lambda_q(t_{fn}) = 0 and H(t_{fn}) = 0, it follows that

(14) k_1 = \pm\dfrac{1}{\dot{r}(t_{fn})},
(15) k_2 = \dfrac{\pi}{2} - q(t_{fn}) + n\pi.
Substituting (14) and (15) into (12) and (13) gives

(16) \lambda_r(t) = \dfrac{\cos\left(q(t_{fn}) - q(t)\right)}{\dot{r}(t_{fn})},
(17) \lambda_q(t) = \dfrac{r(t)\sin\left(q(t_{fn}) - q(t)\right)}{\dot{r}(t_{fn})}.
The value \dot{r}(t_{fn}) can be obtained from (1):

(18) \dot{r}(t_{fn}) = V_T\cos\left(\theta_T(t_{fn}) - q(t_{fn})\right) - V_M\cos\left(\theta_M(t_{fn}) - q(t_{fn})\right).
From the stationarity condition \partial H(t)/\partial\theta_M(t) = 0, we obtain

(19) \theta_M(t) = q(t) + \tan^{-1}\dfrac{\lambda_q(t)}{r(t)\lambda_r(t)}.
Substituting (16) and (17) into (19) yields

(20) \theta_M(t) = q(t_{fn}).
The control law (20) means that, to satisfy the index function (8), the course angle of the UAV in one receding horizon must be steered to the LOS angle at the end of that horizon: the attack direction of the UAV points along the terminal LOS at time t_{fn} = t_{0n} + t_{pn}. The guidance law based on the receding horizon control strategy is the synthesis of the optimal tracking control solutions over the series of finite horizons. At the beginning of receding horizon t_{pn}, the UAV takes the currently measured target state values as the initial values, solves the optimal tracking control command of the horizon with (8) as the objective function, adjusts its flight direction to the optimal tracking direction, and flies toward the target along the collision line until the new target state value is obtained and the next receding horizon t_{p(n+1)} begins. The optimal tracking solution and command execution are then repeated in the new receding horizon with the updated initial state. Formula (6) is checked in each receding horizon: if it is satisfied, the target has been intercepted; if not, optimal control is performed in the next receding horizon under the new initial conditions. Obviously, the control within each horizon is open loop, but the guidance law over the whole tracking process is closed loop.
When the UAV using the receding horizon control strategy intercepts the target in the kth receding horizon, the total interception time is approximately

(21) t_{total} \approx \sum_{n=1}^{k} t_{pn}.
Obviously, t_{total} > T_{min}: the guidance time obtained with the receding horizon control strategy is greater than the guidance time of the optimal guidance law computed with fully predicted target motion information. Therefore, the guidance law presented in this paper is suboptimal. The time loss relative to the optimal guidance law is

(22) \eta = \dfrac{t_{total} - T_{min}}{T_{min}}.
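As a toy illustration of (21)–(22) with hypothetical numbers: suppose four horizons are actually flown and the (unachievable) minimum-time optimum were T_min = 7.2 s.

```python
# Hypothetical per-horizon lengths actually flown, Eq. (21)
t_p = [2.0, 2.0, 2.0, 1.5]
T_min = 7.2                          # hypothetical minimum-time optimum

t_total = sum(t_p)                   # total interception time, 7.5 s
eta = (t_total - T_min) / T_min      # relative time loss, Eq. (22)
```

This gives a time loss of about 4.2%, the same figure of merit reported for the simulations in Section 4.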
3.3. The Iterative Computation Algorithm of the Guidance Law
It can be seen from formula (3) that the lateral overload controlled by the UAV in the two-dimensional plane is V_M\dot{\theta}_M; the paper adopts the track angle θM as the control variable, and the optimal tracking condition on the track angle is given by (20). Although the UAV is unaware of the target's future maneuver, the target track can be predicted within a single receding horizon. According to (20), the track angle of the aircraft in each receding horizon must be steered to coincide with the LOS angle to the virtual (non-maneuvering) target. The guidance command is generated according to

(23) u = k_1\dot{r}_{tp}\,e^{-1/(0.1 + t_{tp})}\,\dfrac{q(t_{fn}) - \theta_M}{1 + t_{tp}} + \dot{q},

where k1 is a proportionality coefficient and \dot{r}_{tp} is the relative velocity between the UAV and the non-maneuvering virtual target. Since the target is assumed free of maneuver within one horizon, its virtual motion trajectory in the horizon can be predicted accurately; therefore \dot{r}_{tp} and the estimated remaining time t_{tp} against the virtual target can be calculated easily, requiring no dedicated measurement or estimation. q(t_{fn}) is the LOS angle that the track angle of the UAV must reach in the receding horizon. \dot{q} is the LOS rate between the UAV and the real target, measurable by the sensors on the UAV. The term e^{-1/(0.1 + t_{tp})}(q(t_{fn}) - \theta_M)/(1 + t_{tp}) is the track-angle control part ensuring that requirement (20) on the track angle is satisfied within the receding horizon; the exponential factor is a smoothing term. \dot{q} is a compensation term accounting for the gap between the virtual target position and the actual maneuvering target position along the normal direction of the UAV.
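A direct transcription of (23), as reconstructed above, might look as follows; the argument names are ours, and the exact grouping of the smoothing and normalization factors is our reading of the formula.

```python
import math

def guidance_command(k1, r_dot_tp, t_tp, q_tfn, theta_M, q_dot):
    """Guidance command of Eq. (23), as reconstructed here.

    k1       : proportionality coefficient
    r_dot_tp : relative velocity against the non-maneuvering virtual target
    t_tp     : estimated remaining time against the virtual target
    q_tfn    : desired terminal LOS angle of this horizon, from Eq. (24)
    theta_M  : current track angle of the UAV
    q_dot    : measured LOS rate, compensating the real target's maneuver
    """
    smoothing = math.exp(-1.0 / (0.1 + t_tp))        # smooth start-up factor
    angle_error = (q_tfn - theta_M) / (1.0 + t_tp)   # drives theta_M -> q(t_fn)
    return k1 * r_dot_tp * smoothing * angle_error + q_dot
```

When the track angle already equals the terminal LOS angle, the first term vanishes and the command reduces to the LOS-rate compensation alone.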
Since the attack direction q(t_{fn}) of the UAV in each receding horizon is difficult to compute accurately from the nonlinear equations (1) and (2), it is evaluated approximately from the current position of the aircraft and the predicted terminal position of the target in the receding horizon:

(24) q(t_{fn}) \approx \tan^{-1}\dfrac{r(t_{0n})\sin q(t_{0n}) + V_T t_{pn}\sin\theta_T(t_{0n})}{r(t_{0n})\cos q(t_{0n}) + V_T t_{pn}\cos\theta_T(t_{0n})}, \quad n = 1, 2, 3, \ldots
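Equation (24) simply propagates the target in a straight line for one horizon and takes the LOS to that predicted point, with the UAV's current position as the origin. A minimal sketch (function name ours):

```python
import math

def terminal_los(r0, q0, theta_T0, V_T, t_p):
    """Approximate terminal LOS angle q(t_fn) of Eq. (24).

    The target is propagated in a straight line at speed V_T along its
    current course theta_T0 for one horizon t_p; the UAV's current position
    is the origin of the horizon.
    """
    x = r0 * math.cos(q0) + V_T * t_p * math.cos(theta_T0)
    y = r0 * math.sin(q0) + V_T * t_p * math.sin(theta_T0)
    return math.atan2(y, x)   # atan2 avoids the quadrant ambiguity of tan^-1
```

Two limiting cases confirm the geometry: a stationary target leaves the LOS unchanged, and a target receding exactly along the LOS also leaves it unchanged.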
In the actual iterative calculation, the value of the receding horizon length t_{pn} is quite important: it balances the computational load against system stability and performance. The paper adopts a fixed receding horizon chosen according to formula (25), where T_{min} is calculated as in [10]:

(25) t_p = \mu T_{min}, \quad 0 < \mu < 1.
The guidance law iterative algorithm based on the receding horizon control strategy is as follows.
Step 3.
Select the length of the receding horizon according to formula (25).
Step 4.
Calculate the LOS angle at the terminal time according to formula (24).
Step 5.
Perform the minimum-time optimal control within the receding horizon according to formula (23).
Step 6.
Determine whether formula (6) is satisfied. If yes, the iteration and the guidance process are complete; if not, update [q(t_{fn}), θM(t_{fn}), θT(t_{fn}), r(t_{fn})] and return to Step 3 for the next iteration.
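The iterative algorithm can be assembled into an end-to-end sketch. Several details below are our implementation choices, not taken from the paper: the exact gain structure of (23) is replaced by a simple proportional course-alignment rule u = K(q_tfn − θM) + \dot{q}, T_min is replaced by a crude closing-speed estimate, the prediction lead time is shortened near intercept, and the command is re-solved every integration step (the limiting case of a short command cycle Teh). Numerical values follow Table 1 with a non-maneuvering target (Case 1 geometry).

```python
import math

def simulate(mu=0.1, K=2.0, dt=0.01, R=1.0):
    """Hedged sketch of the iterative algorithm (Steps 3-6), Euler-integrated
    through the kinematics (1)-(4). Gains and horizon rules are assumptions."""
    V_M, V_T, a_M_max = 30.0, 20.0, 30.0           # Table 1 values
    r, q = 400.0, math.radians(30.0)
    theta_M, theta_T = math.radians(20.0), math.radians(90.0)
    t_p = mu * r / (V_M - V_T)     # Step 3: horizon from a crude T_min guess
    u_max = a_M_max / V_M          # turn-rate limit from the overload limit
    t = 0.0
    while r > R and t < 60.0:
        r_dot = V_T * math.cos(theta_T - q) - V_M * math.cos(theta_M - q)
        q_dot = (V_T * math.sin(theta_T - q) - V_M * math.sin(theta_M - q)) / r
        # Step 4: predicted terminal LOS, Eq. (24), lead time shortened
        # near intercept (an implementation choice)
        tau = min(t_p, r / max(-r_dot, 1e-3))
        x = r * math.cos(q) + V_T * tau * math.cos(theta_T)
        y = r * math.sin(q) + V_T * tau * math.sin(theta_T)
        q_tfn = math.atan2(y, x)
        # Step 5: align the course with q_tfn, compensated by the LOS rate
        u = max(-u_max, min(u_max, K * (q_tfn - theta_M) + q_dot))
        r += r_dot * dt
        q += q_dot * dt
        theta_M += u * dt
        t += dt
        # Step 6: the loop guard re-checks condition (6) on every pass
    return r, t

miss, t_total = simulate()
```

Under these assumptions the UAV closes on the non-maneuvering target and the loop exits on condition (6) well before the safety timeout.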
4. Simulations and Discussion
In this section, simulations of the proposed guidance law based on the receding horizon control strategy are presented for a variety of scenarios. To validate the performance of our method, the new guidance law is compared with other guidance laws, namely augmented proportional navigation (APN) and the optimal guidance law (OGL) [12, 24]. The compared guidance laws can be written as

(26) a_{APN} = -N\dot{r}\dot{q} + \dfrac{a_T}{2},
(27) a_{OGL} = \dfrac{2 + k + t_{go}^2\dot{q}^2}{V_c/V_m + 1}V_c\dot{q} + \dfrac{k/t_{go} + 2}{V_c/V_m + 1}V_c(q - \theta_m) + \dfrac{1}{V_c/V_m + 1}a_T.

The initial parameters are set in Table 1. The initial LOS angle q0 and the initial relative distance r0 relate to the performance of the detectors on the UAV. The values of VM and VT are decided by the real-world velocities of the UAV and target. θM0 is the initial flight course angle, and its value must keep the target in the field of view of the detector. The acceleration limits aMmax and aTmax are decided by rules of thumb.
Initial parameters for guidance simulation.
q0 = 30 deg
r0 = 400 m
VM = 30 m/s
VT = 20 m/s
θM0 = 20 deg
aMmax = 30 m/s²
aTmax = 10 m/s²
The parameter settings for formulas (23), (26), and (27) are as follows: N = k1 = 5. To ensure consistency of the simulation comparison, the parameter k in formula (27) is chosen so that, near the intercept point as the remaining time t_go → 0, the equivalent gain of the proportional navigation term is 5; this gives k = 8. According to formula (6), a run with final relative distance below R = 1 m in the simulation counts as a successful interception. To compare the applicability of each guidance law on self-guided weapons with field-of-view (FOV) limits, the look angle is defined, under the assumption that the body axis and velocity direction of the UAV coincide, as

(28) \beta = q - \theta_M.
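The comparison quantities can be computed directly. The APN form below follows our reconstruction of (26), whose a_T/2 bias differs from the more common N·a_T/2 augmentation; the closing velocity is V_c = −\dot{r}, and the look angle implements (28).

```python
def apn_command(N, r_dot, q_dot, a_T):
    """Augmented proportional navigation, Eq. (26) as reconstructed above:
    a PN term on the closing rate (-r_dot = V_c) plus a bias on the
    (assumed known) target acceleration."""
    return -N * r_dot * q_dot + a_T / 2.0

def look_angle(q, theta_M):
    """Look angle beta of Eq. (28), used to check the FOV limit (body axis
    assumed aligned with the velocity direction)."""
    return q - theta_M
```

For instance, with N = 5, a closing rate of 50 m/s, and a LOS rate of 0.01 rad/s, the pure PN part commands 2.5 m/s² of lateral acceleration.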
4.1. Case 1 (Impact a Target Moving Straight Line with Constant Velocity)
In this scenario, the target moves along a straight line without lateral acceleration, so the a_T terms in a_OGL and a_APN are zero. The initial flight path angle of the target, θT0, is set to 90 deg. Since the target has no lateral acceleration and its speed is uniform, the target trajectory can be predicted exactly by APN, OGL, and the proposed guidance law. This case therefore mainly compares the performance of the three guidance laws under different target motion directions.
Simulation results are shown in Figure 3. It can be seen from Figure 3 that APN, OGL, and the proposed guidance law all intercept the target. The trajectories of OGL and the proposed law are relatively close, and those of all three laws are relatively straight. Figure 3(b) shows that the energy consumption of the APN acceleration is low while that of the other two guidance laws is comparatively high. After about 12 s, the acceleration of APN is around 0; the acceleration of OGL at the guidance terminal grows to −2.6 m/s²; the per-horizon switching of the proposed law is clearly visible, and its final terminal acceleration is 0. Figure 3(c) shows that the LOS rates of APN and the proposed law decrease to around 0°/s, while the LOS rate of OGL grows to −3.4°/s at the terminal. Figure 3(d) shows that the look angle of APN increases and stabilizes at 13.5°, the look angle of OGL decreases to 5.9°, and the look angle of the proposed law decreases and stabilizes at 7.7°. By simulation, the final miss distances of APN, OGL, and the proposed law are 0.9714 m, 0.9137 m, and 0.8216 m, respectively; the impact time of OGL is 18.72 s and that of the proposed law is 18.73 s, giving a time loss of the proposed law of 0.05% according to formula (22).
Simulation results are shown in Figure 4. Figure 4(a) shows that APN, OGL, and the proposed guidance law all intercept the target. The trajectories of the three guidance laws are relatively close in the first half of the guidance process and relatively straight in the remaining half. Figure 4(b) shows that the energy consumption of the APN acceleration is low while that of the other two laws is comparatively high. After about 12 s, the acceleration of APN is around 0; the acceleration of OGL at the guidance terminal grows to −5.5 m/s²; the per-horizon switching of the proposed law is clearly visible, and its final terminal acceleration is 0. Figure 4(c) shows that the LOS rates of APN and the proposed law decrease to around 0°/s, while the LOS rate of OGL grows to 6.9°/s at the terminal. Figure 4(d) shows that the look angle of APN increases and stabilizes at 21.5°, the look angle of OGL reverses and decreases to −12°, and the look angle of the proposed law decreases and stabilizes at 14°. By simulation, the final miss distances of APN, OGL, and the proposed law are 0.9901 m, 0.9584 m, and 0.9742 m, respectively; the impact time of OGL is 15.89 s and that of the proposed law is 15.96 s, giving a time loss of 0.4% according to formula (22).
Simulation results are shown in Figure 5. Figure 5(a) shows that APN, OGL, and the proposed guidance law all intercept the target. The trajectories of the three laws are relatively close in the first half of the guidance process, and those of OGL and the proposed law are relatively straight in the last half. Figure 5(b) shows that the energy consumption of the APN acceleration is low while that of the other two laws is comparatively high. After about 12 s, the acceleration of APN is around 0; the acceleration of OGL at the guidance terminal grows to 30 m/s²; the proposed law's per-horizon accelerations visibly converge to 0 m/s² by the last horizon. Figure 5(c) shows that the LOS rates of APN and the proposed law decrease to around 0°/s, while the LOS rate of OGL grows to 22.5°/s at the terminal. Figure 5(d) shows that the look angle of APN reverses and grows to −16°, the look angle of OGL reverses and decreases to −22.5°, and that of the proposed law reverses and decreases to −20°. By simulation, the final miss distances of APN, OGL, and the proposed law are 0.5577 m, 0.9890 m, and 0.7703 m, respectively; the impact time of OGL is 7.22 s and that of the proposed law is 7.24 s, giving a time loss of 0.3% according to formula (22).
Figure: Responses in Case 1 (a_T = 0 m/s², θ_{M0} = 180°). Panels: trajectories of UAV and target; acceleration command; LOS angular rate; look angle.
From the guidance simulation results under different initial target directions, the trajectory of the proposed law is close in character to that of OGL. When the target does not maneuver, the energy consumption of APN is the lowest, followed by the proposed law; the energy consumption of OGL is the largest, especially at the terminal of the guidance process, where its acceleration command increases sharply. The acceleration of the proposed law is effectively reduced by the last horizon. The LOS rate of APN converges smoothly, while the LOS rate of OGL diverges at the terminal; in particular, when the initial track angle of the target is 180°, the terminal LOS rate reaches 22.5°/s, which is prone to causing a miss. Owing to the receding horizon control, the proposed law updates its initial conditions at the start of each horizon, and its LOS rate is controlled to around 0°/s at the terminal. In most cases the look angle of the proposed law is small and that of APN is large. Figure 6 shows that, for initial target track angles of 0°–140°, the maximum look angle of the proposed law is the smallest among the three guidance laws. Figure 7 shows, for initial target track angles of 0°–180°, that the final miss distances of the three guidance laws all lie within 0.5–1 m. Since the target motion direction remains unchanged, the time loss of the proposed law is not large.
Max look angle of APN, OGL, and the proposed law under different initial flight angles of the target.
Final miss distance of APN, OGL, and the proposed law under different initial flight angles of the target.
4.2. Case 2 (Impact a Maneuvering Target with Constant Lateral Acceleration)
In this scenario, the target moves with a constant lateral acceleration. The initial flight path angle of the target, θT0, is set to 90 deg. For APN and OGL the variation law of aT is unknown, and for the proposed law the target maneuver is likewise unknown. According to the simulation results, APN works for target maneuvers within −2.3 to 4.9 m/s²; outside this range APN diverges and cannot satisfy the guidance condition (6). Therefore, the performance of the three guidance laws is investigated for target maneuvers within −2.3 to 2.3 m/s².
Simulation results are shown in Figure 8. Figure 8(a) shows that the trajectories of OGL and the proposed law are relatively close, and both are relatively straight compared with that of APN. Figure 8(b) shows that the energy consumption of the APN acceleration is high, while that of the proposed law is equivalent to the energy consumption of OGL. The acceleration of APN is around 10 m/s²; the acceleration of OGL at the guidance terminal grows to −7 m/s²; and the acceleration of the proposed law in the last horizon visibly converges to −4 m/s². Figure 8(c) shows that the LOS rate of APN varies around 3–7°/s, that of OGL reverses and grows to −6°/s, and that of the proposed law finally converges to −2°/s. Figure 8(d) shows that the look angle of APN increases to 60°, while the look angles of OGL and the proposed law stay within 25°, with that of the proposed law the smaller. By simulation, the final miss distances of APN, OGL, and the proposed law are 0.7446 m, 0.9508 m, and 0.9336 m, respectively; the impact time of OGL is 17.87 s and that of the proposed law is 17.99 s, giving a time loss of 0.7% according to formula (22).
Simulation results are shown in Figure 9. Figure 9(a) shows that the trajectories of the three guidance laws are relatively straight. Figure 9(b) shows that the energy consumption of the OGL acceleration is high, while that of the proposed law is equivalent to the energy consumption of APN. The acceleration of APN is around 0 m/s²; the acceleration of OGL at the guidance terminal grows to −1 m/s²; and the acceleration of the proposed law at the terminal is around −2 m/s². Figure 9(c) shows that the LOS rate of APN reduces and stabilizes at 1.5°/s; the LOS rate of OGL decreases to 0 and then reverses to −0.2°/s; and that of the proposed law finally converges to −1°/s. Figure 9(d) shows that the look angle of OGL reaches a maximum of 16° and that of the proposed law a maximum of 10°; moreover, the look angles of the proposed law and OGL both converge to around 0° at the end. By simulation, the final miss distances of APN, OGL, and the proposed law are 0.9259 m, 0.8748 m, and 0.8021 m, respectively; the impact time of OGL is 17.85 s and that of the proposed law is 17.93 s, giving a time loss of 0.5% according to formula (22).
Simulation results are shown in Figure 10. Figure 10(a) shows that the trajectory of APN is comparatively curved, that of OGL is straight, and that of the proposed law lies between the two. Figure 10(b) shows that the acceleration energy consumption of APN is high, while that of the proposed law is comparable to OGL. The acceleration of APN is about −10 m/s^{2} at the terminal; the acceleration of OGL grows in the opposite direction to 13 m/s^{2}; the acceleration of the proposed law grows to around −1 m/s^{2} at the terminal. Figure 10(c) shows that the LOS rate of APN grows in the opposite direction to −3°/s and that of OGL to 7°/s, while the proposed law finally converges to around 0. Figure 10(d) shows that the look angle of APN reaches a maximum of 45°, while that of the proposed law grows in the opposite direction to around 28°. The final miss distances of APN, OGL, and the proposed law are 0.7807 m, 0.9531 m, and 0.9716 m, respectively; the impact time of OGL is 10.91 s and that of the proposed law is 11.21 s, a time loss of 3% according to formula (22).
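The time-loss figures quoted above can be cross-checked with a few lines of code. This is a minimal sketch assuming formula (22) defines the time loss as the relative increase of the proposed law's impact time over that of OGL, (t_proposed − t_OGL)/t_OGL; the function name and this exact form are assumptions, since formula (22) itself lies outside this section.

```python
def time_loss(t_ogl: float, t_proposed: float) -> float:
    """Relative time loss of the proposed law versus OGL
    (assumed form of formula (22))."""
    return (t_proposed - t_ogl) / t_ogl

# Impact times reported for the three constant-acceleration runs above.
cases = [(17.87, 17.99), (17.85, 17.93), (10.91, 11.21)]
for t_ogl, t_prop in cases:
    print(f"time loss = {100 * time_loss(t_ogl, t_prop):.2f}%")
```

With the rounded impact times printed above, the three losses come out near 0.7%, 0.4%, and 2.7%, consistent (up to rounding of the reported times) with the 0.7%, 0.5%, and 3% quoted in the text.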
Responses in Case 2 (a_{T} = 2.3 m/s^{2}).
Trajectories of UAV and target
Acceleration command
LOS angular rate
Look angle
The guidance simulation results under different constant target accelerations show that the trajectories of the three guidance laws separate noticeably when the target maneuvers, and the higher the acceleration, the more the APN trajectory bends. When the target maneuvers with constant acceleration, the energy consumption of APN is high, while those of the proposed law and OGL are comparable. At the terminal of the guidance process the acceleration command of OGL increases sharply, whereas the acceleration of the proposed law is effectively reduced over the last receding horizons. The LOS rate of APN grows with increasing target acceleration, which makes it prone to missing the target; under the proposed law, by contrast, the receding horizon constraint renews the initial values at the start of each horizon, so the LOS rate can be kept at a low value at the terminal. In most cases the look angle of the proposed law is smaller than that of APN. Figure 11 shows that the maximum look angles of the proposed law and OGL depend on the target acceleration: within the −10~−8 m/s^{2} acceleration range the look angle of the proposed law is smaller than those of the other guidance laws, while within the −7~−5 m/s^{2} range it remains small. Figure 12 shows that, for target accelerations within −10~10 m/s^{2}, the final miss distances of the three guidance laws are distributed within 0.4~1 m. With increasing target acceleration, the time loss of the proposed law shows an increasing tendency.
Max look angle of OGL and proposed law under different acceleration of target.
Final miss distance of OGL and proposed law under different acceleration of target.
4.3. Case 3 (Intercepting a Maneuvering Target with Sinusoidal Lateral Acceleration)
In this scenario, the target moves with a sinusoidal lateral acceleration, a_{T} = −10 sin(ωt), where ω is the change frequency. The initial flight path angle of the target θ_{T0} is set to 90 deg. For APN and OGL, the changing law of a_{T} is known; for the proposed law, the changing law of the target maneuver is unknown. Simulation calculation shows that APN diverges when ω exceeds 0.02 rad/s and no longer satisfies guidance condition (6). Therefore, ω = 0.01 and 0.02 are selected to compare the performance of the three guidance laws, and ω = 0.4 is selected to compare the guidance performance of OGL and the proposed law.
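The target maneuver in this case reduces to a one-line model. The following is a minimal sketch, assuming the lateral acceleration is exactly a_{T}(t) = −10 sin(ωt) in m/s^{2} with ω in rad/s as stated above; the function name is illustrative.

```python
import math

def target_accel(t: float, omega: float) -> float:
    """Lateral acceleration of the target at time t (m/s^2),
    a_T(t) = -10 sin(omega * t)."""
    return -10.0 * math.sin(omega * t)

# At omega = 0.4 rad/s the maneuver completes a full cycle every
# 2*pi/0.4 ~ 15.7 s, i.e. roughly once over the ~15.6 s engagement.
print(target_accel(0.0, 0.4))   # zero lateral acceleration at t = 0
```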
Simulation results are shown in Figure 13. Figure 13(a) shows that the trajectories of APN and the proposed law are close to each other, and the trajectories of all three guidance laws are relatively straight. Figure 13(b) shows that the acceleration energy consumption of OGL is high, while that of the proposed law is comparable to APN. The acceleration of APN is around 0 m/s^{2} at the terminal; the acceleration of OGL grows to −1.5 m/s^{2} at the guidance terminal; the acceleration of the proposed law is around −3 m/s^{2} at the terminal. Figure 13(c) shows that the LOS rate of APN varies within 1~3.5°/s, that of OGL grows in the opposite direction to 0.5°/s, and that of the proposed law finally converges to −2°/s. Figure 13(d) shows that the look angle of OGL reaches a maximum of 18°, while the maximum look angle of the proposed law is only 11.5°. The final miss distances of APN, OGL, and the proposed law are 0.8905 m, 0.9586 m, and 0.8033 m, respectively; the impact time of OGL is 16.81 s and that of the proposed law is 17.00 s, a time loss of 1.1% according to formula (22).
Simulation results are shown in Figure 14. The trajectories of the three guidance laws separate markedly, as shown in Figure 14(a). The trajectory of APN bends noticeably at the guidance terminal, which is related to the target acceleration approaching −2.3 m/s^{2} (see the interpretation of the simulation in Section 4.2). Figure 14(b) shows that the acceleration energy consumption of APN is high, while that of the proposed law is comparable to OGL. The acceleration of APN is around 16 m/s^{2}; the acceleration of OGL grows to −8 m/s^{2} at the guidance terminal; the acceleration of the proposed law is around −6.5 m/s^{2} at the terminal. Figure 14(c) shows that the LOS rate of APN varies within 3.5~8°/s, that of OGL grows in the opposite direction to −6°/s, and that of the proposed law finally converges to −4.5°/s. Figure 14(d) shows that the look angle of OGL reaches a maximum of 42°, while the maximum look angle of the proposed law is only 21.5°. The final miss distances of APN, OGL, and the proposed law are 0.8194 m, 0.9032 m, and 0.9752 m, respectively; the impact time of OGL is 17.23 s and that of the proposed law is 17.29 s, a time loss of 0.3% according to formula (22).
Simulation results are shown in Figure 15. Figure 15(a) shows that the trajectories of OGL and the proposed law are both relatively straight. Figure 15(b) shows that the acceleration energy consumption of OGL is low, while that of the proposed law is higher. The acceleration of OGL is around 6.5 m/s^{2} at the terminal; the acceleration of the proposed law reduces to around 1 m/s^{2} at the terminal. In general, the acceleration of the proposed law can follow the changes of the target acceleration. Figure 15(c) shows that the LOS rate of OGL varies within −2~6°/s and is 6°/s at the terminal, while that of the proposed law eventually decreases to 0.5°/s. Figure 15(d) shows that the maximum look angle of OGL, 24°, is greater than that of the proposed law, 22.5°. The final miss distances of OGL and the proposed law are 0.7781 m and 0.776 m, respectively; the impact time of OGL is 15.51 s and that of the proposed law is 15.59 s, a time loss of 0.5% according to formula (22).
Responses in Case 3 (a_{T} = −10 sin(0.4t)).
Trajectories of UAV and target
Acceleration command
LOS angular rate
Look angle
The guidance simulation results under different frequencies of target acceleration show that APN is applicable only to slowly varying target acceleration. When the target acceleration varies, the energy consumption of APN is the highest, while those of the proposed law and OGL are comparable. As the frequency of the target acceleration increases, the energy consumption of the proposed law increases, mainly because the proposed law, unlike OGL, uses no target maneuvering information and therefore needs more energy to track the changes of the target acceleration. The LOS rate of APN grows continuously under sinusoidal maneuvers, which makes it prone to missing the target; under the proposed law, by contrast, the receding horizon constraint renews the initial values at the start of each horizon, so the LOS rate can be kept at a low value at the terminal. In most cases the look angle of the proposed law is smaller than those of APN and OGL. Figure 16 shows that over most of the frequency range the look angle of the proposed law is comparable to that of OGL. Figure 17 shows that, for sinusoidal maneuver frequencies within 0.01~1 rad/s, the final miss distances of OGL and the proposed law are distributed within 0.4~1 m. With increasing frequency, the time loss of the proposed law shows no obvious increasing tendency and remains at a low level.
Max look angle of OGL and proposed law under different sinusoidal acceleration of target.
Final miss distance of OGL and proposed law under different sinusoidal acceleration of target.
5. Conclusion
The paper proposed a suboptimal terminal guidance law based on a receding horizon control strategy, which can be used by self-optimizing guided weapons attacking maneuvering targets. By adopting the receding horizon control strategy, the guidance process is divided into several finite time horizons; within each horizon, minimum-time optimal control is conducted with the initial conditions continuously renewed, and the iteration is repeated until the target is intercepted. The simulation results verify that the proposed law is an effective suboptimal guidance law. In terms of energy consumption, when the target moves with uniform rectilinear motion the energy consumption of APN is the lowest, followed by the proposed law, and that of OGL is the highest; when the target maneuvers, the energy consumption of APN becomes the highest, while those of OGL and the proposed law remain low. In terms of guidance duration, whether or not the target maneuvers, the time loss of the proposed law relative to OGL is small. In addition, by relying on the receding horizon control strategy, the proposed law can reduce the terminal acceleration and LOS rate, thereby reducing the possibility of missing the target. In most cases the look angle of the proposed law varies within a smaller range, which favors self-optimizing guided weapons with field-of-view limitations. Although its guidance time and energy consumption are not optimal, the proposed law requires little guidance information and adapts particularly well to maneuvering targets (constant and sinusoidal maneuvers): the target can be intercepted without estimating the target acceleration or the time to go.
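The receding horizon guidance loop summarized above can be sketched schematically. This is a toy illustration, not the paper's minimum-time optimal control derivation: the aim-at-predicted-intercept steering rule, the constant-velocity target prediction, the horizon length, and all numerical values are assumptions introduced only to show how the initial conditions are renewed at the start of each horizon and the plan is iterated until interception.

```python
import math

DT = 0.01                      # integration step (s), assumed
HORIZON = 0.5                  # receding horizon length (s), assumed
V_UAV, V_TGT = 300.0, 50.0     # UAV and target speeds (m/s), assumed

def plan_heading(p_uav, p_tgt, v_tgt):
    """Renew initial conditions and plan the next horizon: aim at the target
    position extrapolated by a constant-velocity prediction model (a simple
    stand-in for the paper's target motion prediction model)."""
    rng = math.hypot(p_tgt[0] - p_uav[0], p_tgt[1] - p_uav[1])
    tgo = rng / V_UAV          # crude range-based time-to-go, prediction only
    aim = (p_tgt[0] + v_tgt[0] * tgo, p_tgt[1] + v_tgt[1] * tgo)
    return math.atan2(aim[1] - p_uav[1], aim[0] - p_uav[0])

def simulate():
    """Receding horizon loop: re-measure the state at each horizon start,
    replan, fly out the horizon, repeat until interception."""
    p_uav, p_tgt = [0.0, 0.0], [4000.0, 1000.0]
    v_tgt = (0.0, V_TGT)       # target flies straight here for simplicity
    t = 0.0
    while t < 60.0:
        heading = plan_heading(p_uav, p_tgt, v_tgt)   # start of a horizon
        t_end = t + HORIZON
        while t < t_end:                              # fly out this horizon
            p_uav[0] += V_UAV * math.cos(heading) * DT
            p_uav[1] += V_UAV * math.sin(heading) * DT
            p_tgt[0] += v_tgt[0] * DT
            p_tgt[1] += v_tgt[1] * DT
            t += DT
            if math.hypot(p_tgt[0] - p_uav[0], p_tgt[1] - p_uav[1]) < 5.0:
                return t                              # intercepted
    return None                                       # no intercept in 60 s

print("intercept time (s):", simulate())
```

Even with this crude per-horizon replanning, the pursuer closes on the straight-flying target in a few tens of seconds, illustrating the iterate-until-intercept structure of the proposed strategy.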
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.