Staring Imaging Attitude Tracking Control Laws for Video Satellites Based on Image Information via Hyperbolic Tangent Fuzzy Sliding Mode Control

This paper studies staring imaging attitude tracking and control for video satellites based on image information. An improved spatio-temporal context learning algorithm is employed to extract the image information. Based on this, a hyperbolic tangent fuzzy sliding mode control law is proposed to achieve attitude tracking and control, in which the hyperbolic tangent function and a fuzzy logic system are introduced into the sliding mode controller. In the experiments, the improved spatio-temporal context learning algorithm is applied to a space target video sequence captured by Jilin-1 in orbit, and the extracted image information is used as the input of the control loop. The proposed method is verified through simulation: the image change caused by attitude adjustment is achieved successfully, and the target image is kept at the center of the image plane, realizing effective gaze tracking control of the space target.


Introduction
With the rapid development of remote sensing technology, video satellites have attracted much attention due to their continuous observation ability [1][2][3][4][5][6][7][8]. As a new type of Earth observation satellite, a video satellite can employ the payload on the satellite platform for push-broom scan imaging with the help of the orbital motion of the satellite. Note that during staring imaging, video satellites adjust their attitude in real time to obtain dynamic information of the target area continuously, so that the optical axis of the optical payload points at the target at all times. In Ref. [9], Liu et al. introduce the gaze imaging technique, where this process is generally referred to as staring attitude control. In addition, video satellites can use agility and attitude control technology to realize continuous imaging of ground targets. For this reason, compared with traditional Earth observation satellites, video satellites have been widely applied in many fields, such as real-time vehicle monitoring [10, 11], rapid response to natural disaster emergencies [12], and major engineering monitoring [13].
Up to now, there are mainly two types of staring imaging satellites in orbit: satellites in geostationary orbit and video satellites in low Earth orbit [14][15][16]. Figure 1 shows the schematic diagram of ground gazing attitude control for video satellites. The attitude control system of a video satellite adjusts the attitude in real time so that the optical axis of the optical sensor always points to the ground target area for continuous photography. Staring imaging is the main working mode of video satellites [17, 18]. In essence, the staring imaging control problem is a dynamic attitude tracking problem, and it is difficult to keep the optical axis of the satellite's optical sensor pointed at the observed object with high stability.
So far, building on attitude tracking and control for spacecraft and satellites, several satellite staring attitude control methods have been introduced to achieve real-time tracking [24][25][26][27][28][29][30][31][32]. For instance, Lian et al. [27] investigated small satellite attitude problems for staring operation. Liang et al. [28] designed a fuzzy logic control law for the staring imaging attitude problem in LEO, which has a quick response and excellent robustness. In Ref. [29], Chen et al. present a quaternion-based PID feedback tracking controller with gyroscopic term cancellation, with which a desired target on the Earth can be tracked. Chen et al. [30] investigated a staring imaging attitude controller based on a double-gimbaled control moment gyroscope (DGCMG), which is simple and effective for agile small satellites. In Ref. [31], Li and Liang proposed a robust finite-time controller for satellite attitude maneuvers and demonstrated its robustness to typical perturbations such as disturbance torque, model uncertainty, and actuator error. Li et al. [32] implemented a neural network controller for staring imaging with real-time performance. However, the above staring attitude controllers do not consider the image information directly; the image information is separated from the attitude tracking controller. We note that in the satellite staring mode, the optical axis of the camera should point fixedly at the target for a long time, while both the video satellite and the object may be moving in the inertial coordinate system, so the relative velocity and position may change over time.
Introducing visual information into the closed-loop control is commonly known as visual servoing, which was first applied in the field of robotics [35][36][37][38]. Recently, robot visual servoing has achieved numerous results in both theory and practical application. Robot visual servoing is generally divided into two structures: position-based visual servoing and image-based visual servoing.
Position-based visual servoing must calibrate the internal parameters of the camera to determine the relative attitude between the target and the camera coordinate system, which increases the computational load of the system. On the contrary, image-based visual servoing directly uses the visual feature error of the target in the image plane and treats the controlled object and the visual system as a whole.
Owing to these advantages, visual information has also been introduced into the control closed loop of satellites and spacecraft. Therefore, an improved spatio-temporal context learning (ISTC) algorithm is employed to extract the image information in this paper. Based on this, a hyperbolic tangent fuzzy sliding mode control law (HTFSMC) for small video satellites is designed to achieve attitude tracking and control. In particular, the related coordinate systems are defined for attitude transformation. Subsequently, a sliding mode tracking controller is presented based on the image information from satellite videos, and the hyperbolic tangent function and a fuzzy logic system are employed in the sliding mode controller.
In summary, the contributions of this paper are threefold.
(1) An ISTC algorithm is employed to obtain the image information. Hence, visual information can be used effectively in visual tracking control based on spatial moving images, without cumbersome and complex calibration of the camera's internal parameters or accurate information of target and camera motion. (2) Based on the image information, this paper proposes the HTFSMC, in which the hyperbolic tangent function and a fuzzy logic system are introduced into the sliding mode controller. (3) In the experiments, the image information of a space target video sequence captured by Jilin-1 in orbit is used as the input of the controller. The control part is realized through simulation. The image change caused by attitude adjustment is achieved successfully, and the target image can be kept at the center of the image plane, realizing effective gaze tracking control of the space target.

The rest of this paper is arranged as follows. In Section 2, the staring imaging attitude dynamics model is described in detail; in the same section, the sliding mode controller and the fuzzy sliding mode controller are presented for staring imaging attitude tracking of small video satellites. In Section 3, the experimental results and some discussion are introduced. Finally, Section 4 concludes this article.

Materials and Methods
In this paper, in order to extract the image information, the ISTC algorithm is employed for moving object tracking. Based on this, an attitude controller based on image information feedback is designed to realize gaze tracking control of moving targets. The structure diagram is shown in Figure 2. In a nutshell, this method obtains the centroid coordinates of the target and calculates their position deviation from the image center.
Therefore, the cumbersome process of camera calibration and relative pose estimation can be avoided, so the computational cost is reduced.

An Improved Spatio-Temporal Context Learning Algorithm

For moving target video tracking, the local context consists of the target and a certain background area nearby. In fact, there is a strong spatio-temporal relationship between the local scenes around the target in consecutive frames. According to the spatio-temporal relationship between the target and its surrounding area, the spatio-temporal context (STC) learning algorithm constructs a spatio-temporal context model of the target and the nearby area based on the gray-level features of the image. Moreover, a confidence map of the target is calculated, and the maximum of the confidence map is taken as the estimated target position. The ISTC algorithm is described in detail below.

Let x* denote the target position. The context feature set of x* is defined as

X^c = {c(z) = (I(z), z) | z ∈ Ω_c(x*)}, (1)

where c(z) and I(z) are the luminance feature and the image intensity at position z, respectively, and Ω_c(x*) denotes the context area of position x*. In the following, c(x) can be used to calculate the confidence map as follows:

c(x) = P(x|o) = Σ_{c(z)∈X^c} P(x|c(z), o) P(c(z)|o), (2)

where P(x|c(z), o) is the conditional probability, which represents the spatial relationship between the target position and its context information. In addition, P(c(z)|o) is the prior probability, which models the appearance of the local context. Furthermore, P(x|c(z), o) can be defined as

P(x|c(z), o) = h^sc(x − z), (3)

where h^sc(x − z) is the relative distance and direction function between the target position x and its local context position z. Subsequently, P(c(z)|o) can be defined as

P(c(z)|o) = I(z) ω_σ(z − x*), with ω_σ(z) = a e^(−|z|²/σ²), (4)

where ω_σ(z − x*) is a weight function, a is a normalization constant, which makes P(c(z)|o) range from 0 to 1 in (4), and σ is a scale parameter. Hence, the confidence map c(x) in (2) can be rewritten as

c(x) = b e^(−|(x − x*)/α|^β) = Σ_{z∈Ω_c(x*)} h^sc(x − z) I(z) ω_σ(z − x*) = h^sc(x) ⊗ (I(x) ω_σ(x − x*)), (5)

where ⊗ is the convolution operator, b is a normalization constant, α is a scale parameter, and β is an important shape parameter. The fast Fourier transform is applied simultaneously to both sides of (5). Therefore, (5) can be updated as

F(c(x)) = F(h^sc(x)) ⊙ F(I(x) ω_σ(x − x*)), (6)

where F is the fast Fourier transform (FFT) and ⊙ denotes the element-wise product. Subsequently, h^sc(x) can be obtained as

h^sc(x) = F^(−1)( F(b e^(−|(x − x*)/α|^β)) / F(I(x) ω_σ(x − x*)) ), (7)

where F^(−1) is the inverse FFT. In this way, based on (7), the spatio-temporal context model can be derived as

H^stc_(t+1) = (1 − ρ) H^stc_t + ρ h^sc_t, (8)

where h^sc_t is the spatial context model learned at the t-th frame by (7) and ρ is a learning rate. Thus, the confidence map at the (t + 1)-th frame is expressed as

c_(t+1)(x) = F^(−1)( F(H^stc_(t+1)) ⊙ F(I_(t+1)(x) ω_(σ_t)(x − x*_t)) ). (9)

The confidence map is maximized, so the location of the target can be obtained as

x*_(t+1) = argmax_(x∈Ω_c(x*_t)) c_(t+1)(x). (10)
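The FFT-based learning, detection, and update steps of the STC core can be sketched in a few lines of NumPy. This is a minimal, hedged illustration under stated assumptions (the parameter values, the small regularizer added to the denominator, and the function names are ours, not the paper's implementation):

```python
import numpy as np

def learn_spatial_context(frame, center, sigma, alpha=2.25, beta=1.0, reg=1e-8):
    """Learn the spatial context model in the frequency domain:
    F(h^sc) = F(target confidence map) / F(I(z) * omega_sigma(z - x*))."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - center[0], ys - center[1])
    conf = np.exp(-(dist / alpha) ** beta)        # desired confidence b*e^{-|(x-x*)/alpha|^beta}
    prior = frame * np.exp(-(dist / sigma) ** 2)  # I(z) * omega_sigma(z - x*)
    # reg (assumption) guards against near-zero Fourier coefficients
    return np.fft.fft2(conf) / (np.fft.fft2(prior) + reg)

def detect(frame, Hstc_f, center, sigma):
    """Confidence map c(x) = F^{-1}(F(H^stc) . F(I * omega_sigma)); return its argmax (x, y)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - center[0], ys - center[1])
    prior = frame * np.exp(-(dist / sigma) ** 2)
    conf = np.real(np.fft.ifft2(Hstc_f * np.fft.fft2(prior)))
    iy, ix = np.unravel_index(np.argmax(conf), conf.shape)
    return (ix, iy)

def update_model(Hstc_f, hsc_f, rho=0.075):
    """Temporal update H^stc_{t+1} = (1 - rho) H^stc_t + rho h^sc_t (frequency domain)."""
    return (1.0 - rho) * Hstc_f + rho * hsc_f
```

Detecting on the same frame the model was learned from recovers the training position, since the division and multiplication in the Fourier domain approximately cancel; in tracking, the model learned at frame t is instead applied to frame t + 1.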

Since the target attitude may change in the process of target movement, the size of the target may also change. Besides, the background information may be different in each frame. Therefore, a scale update strategy is employed for the target. In the standard STC algorithm, the scale estimate at frame t is given as

s'_t = sqrt( c_t(x*_t) / c_(t−1)(x*_(t−1)) ), (11a)

where c_t(x*_t) is the confidence value at the estimated target position. However, in Equation (11a), the denominator may be close to zero, so the results of moving object tracking may occasionally lead to overfitting. For this reason, an improved scale update strategy with a penalty term p(s), where ς is a constant, is introduced to suppress abrupt scale changes, and the updated scale is computed with this penalty applied.

The coordinate systems are shown in Figure 3. The Earth inertial coordinate system is defined as O_i − X_iY_iZ_i, where the coordinate origin O_i is located at the center of mass of the Earth. The X_i-axis lies in the equatorial plane and points at the vernal equinox of the epoch. The direction of the Z_i-axis is consistent with the Earth's rotation axis, and the Y_i-axis lies in the equatorial plane and completes the right-handed orthogonal frame. The satellite body coordinate system is defined as O_b − X_bY_bZ_b, with its origin at the satellite's center of mass.

The Attitude Solution Based on Satellite Images

In this paper, we assume that the camera coordinate system O_c − X_cY_cZ_c coincides with the satellite body coordinate system O_b − X_bY_bZ_b. The unit vector in the O_bZ_b direction is r = (0, 0, 1)^T. As shown in Figure 4, the coordinates of the target P in the pixel coordinate system I − xy are (u, v). In the satellite body coordinate system O_b − X_bY_bZ_b, the target line-of-sight direction r_P can be described as

r_P = (x_P, y_P, z_P)^T / ‖(x_P, y_P, z_P)^T‖, with x_P = ul, y_P = vl, z_P = f, (12)

where (x_P, y_P, z_P) is the coordinate of the target in the satellite body coordinate system, f is the focal length of the spaceborne camera, and l is the pixel size.
The purpose is to ensure that the target is imaged at the center of the image; hence, it is necessary for r to coincide with r_P. In the process of staring tracking, the image is required to remain stable and free of rotation, which is convenient for image observation and analysis. Staring tracking imaging is the process of controlling r to track r_P.
In the satellite body coordinate system O_b − X_bY_bZ_b, the staring geometry is constructed as follows (Figure 3 shows the schematic diagram of ground gazing attitude control of a video small satellite, and Figure 4 the diagram of target deviation on the image plane). First, we rotate around the O_bY_b axis so that r coincides with r_1, the intermediate direction. The rotation angle θ is expressed as

θ = arctan( x_P / sqrt(y_P² + z_P²) ). (17)

Then, we rotate around the O_bX_b axis so that r coincides with r_P. The rotation angle φ is demonstrated as

φ = −arctan( y_P / z_P ). (18)

The attitude quaternion q = [q_0, q^T]^T is defined with vector part q and scalar part q_0. Through rotating θ around (0, 1, 0)^T, the quaternion is obtained as

q_θ = [cos(θ/2), 0, sin(θ/2), 0]^T. (19)

Through rotating φ around (1, 0, 0)^T, the quaternion is obtained as

q_φ = [cos(φ/2), sin(φ/2), 0, 0]^T. (20)

Thereby, the expected attitude error quaternion q_e is expressed as

q_e = q_φ ⊗ q_θ, (21)

where ⊗ is the rotation multiplication operator of quaternions. Therefore, the expected attitude quaternion q_t relative to the Earth inertial system can be expressed as

q_t = q_b ⊗ q_e, (22)

where q_b is the attitude quaternion of the satellite body coordinate system relative to the Earth inertial system. The attitude kinematics equation is shown as

q̇_t = (1/2) [ −q^T ; q_0 I_(3×3) + q^× ] ω_t, (23a)

where q^× is the antisymmetric matrix of q and I_(3×3) is an identity matrix. Using Equation (23a), the expected angular velocity ω_t is inversely solved. The expected attitude error angular velocity is given as

ω_e = ω_b − A(q_e) ω_t, (24)

where, for any unit quaternion q = [q_0, q^T]^T, the attitude matrix is

A(q) = (q_0² − q^T q) I_(3×3) + 2qq^T − 2q_0 q^×, (25)

so that A(q_e) is the attitude matrix determined by q_e.
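The two-rotation attitude solution described above can be checked numerically. The sketch below uses scalar-first Hamilton quaternions; the helper names and sign conventions are our assumptions for illustration, not the paper's code. It builds the elementary rotations about the body Y and X axes and composes them into the error quaternion that carries the boresight r = (0, 0, 1)^T onto the line of sight r_P:

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton product of scalar-first quaternions q = (q0, q1, q2, q3)."""
    s = q[0] * p[0] - q[1:] @ p[1:]
    v = q[0] * p[1:] + p[0] * q[1:] + np.cross(q[1:], p[1:])
    return np.concatenate(([s], v))

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q: v' = q (x) (0, v) (x) q*."""
    qc = np.concatenate(([q[0]], -q[1:]))
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), qc)[1:]

def staring_error_quaternion(u, v, f, l):
    """Error quaternion aligning the boresight with the target seen at pixel (u, v)."""
    x_p, y_p, z_p = u * l, v * l, f
    theta = np.arctan2(x_p, np.hypot(y_p, z_p))   # rotation about the body Y axis
    phi = -np.arctan2(y_p, z_p)                   # rotation about the body X axis
    q_theta = np.array([np.cos(theta / 2), 0.0, np.sin(theta / 2), 0.0])
    q_phi = np.array([np.cos(phi / 2), np.sin(phi / 2), 0.0, 0.0])
    return quat_mul(q_phi, q_theta)               # composition R_x(phi) R_y(theta)
```

For example, a target at x_P = 1, y_P = 2, z_P = 2 (i.e., u = 1, v = 2 with l = 1 and f = 2) has r_P = (1, 2, 2)/3, and rotating the boresight (0, 0, 1) by the returned quaternion reproduces this direction.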

Sliding Mode Controller

In this paper, with three orthogonally mounted reaction flywheels as the actuator, the satellite attitude dynamics equation is given as

J ω̇_b = −ω_b × (J ω_b + h) + u + d, (27)

where J is the inertia matrix of the satellite, ω_b is the angular velocity of the satellite body coordinate system, h is the angular momentum of the flywheels, u is the control torque, and d is the external disturbance torque. In this paper, the sliding mode function is designed as

ζ = ω_e + K q_ev, (28)

where ζ = [ζ_1, ζ_2, ζ_3]^T, q_ev is the vector part of the attitude error quaternion q_e, and K = diag(k_i), i = 1, 2, 3. If ζ → 0, both the attitude error and the angular velocity error of the system are driven to zero, so the desired state is tracked. The reaching law method is used to obtain the sliding mode control law. The exponential reaching law is used as

ζ̇ = −lζ − ε sgn(ζ), (29)

where sgn(ζ) = [sgn(ζ_1), sgn(ζ_2), sgn(ζ_3)]^T, l = diag(l_i), and ε = diag(ε_i), i = 1, 2, 3. The control torque u is obtained as

u = ω_b × (J ω_b + h) + J ( d(A(q_e)ω_t)/dt − K q̇_ev − lζ − K_1 sgn(ζ) ), (30)

where K_1 is a controller parameter matrix. We note that the chattering of the sliding mode controller is mainly caused by sgn(ζ) and d. Let P(ζ) = sgn(ζ). In order to reduce chattering, P(ζ) is rewritten as

P(ζ) = tanh(ζ/ε), (31)

where the inflection point of the hyperbolic tangent function is determined by the value of ε (ε > 0).
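The resulting smoothed control law can be illustrated with a short sketch. The gain values, the state layout, and the omission of the desired-rate feedforward term are simplifying assumptions made for illustration, not the paper's exact control law:

```python
import numpy as np

def smc_torque(J, omega_b, omega_e, dq_ev, q_ev, h, K, L, K1, eps):
    """Sliding mode torque with tanh replacing sgn to reduce chattering (sketch).
    The feedforward of the time-varying desired rate is omitted here."""
    zeta = omega_e + K @ q_ev                 # sliding variable, eq. of the form zeta = w_e + K q_ev
    switching = K1 @ np.tanh(zeta / eps)      # smooth switching term, eps > 0 sets the slope
    # feedback linearization of J*domega_b = -omega_b x (J omega_b + h) + u + d
    u = np.cross(omega_b, J @ omega_b + h) - J @ (K @ dq_ev + L @ zeta + switching)
    return u, zeta
```

With ζ = 0 the reaching terms vanish and only the gyroscopic compensation remains; far from the surface, tanh saturates at ±1 like sgn, but it is smooth near the origin, which is the source of the chattering reduction.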

Stability Analysis

In order to ensure that the state of the system moves from any initial point to ζ = 0 in finite time under the designed sliding mode controller, it is assumed that the external disturbance is bounded and that the switching gain dominates it. The stability of the system is proved as follows. A Lyapunov function is constructed as

V = (1/2) ζ^T ζ. (33)

The derivative of (33) along the reaching law is represented as

V̇ = ζ^T ζ̇ = −ζ^T lζ − ζ^T ε sgn(ζ) ≤ 0, (34)

where V̇ = 0 if and only if ζ = 0. V̇ is negative semidefinite; therefore, the system is convergent under the sliding mode control.

Fuzzy Logic System

A fuzzy logic system (FLS) consists of a fuzzifier, a fuzzy rule base, a fuzzy inference engine, and a defuzzifier, as shown in Figure 5. In this paper, we assume that x_1 ∈ X_1, x_2 ∈ X_2, ..., x_p ∈ X_p are the p inputs and y ∈ Y is the output. Furthermore, the fuzzy rule base is composed of k rules, expressed as

R^(l): IF x_1 is F_1^l and ... and x_p is F_p^l, THEN y is G^l, (35)

where l = 1, 2, ..., k. Thereby, the FLS treats each fuzzy rule as a mapping from the fuzzy input set F_1^l × ⋯ × F_p^l, denoted by A^l, to the fuzzy output set in Y. In this way, (35) can be rewritten as

R^(l): A^l → G^l. (36)

The membership function μ_R^(l)(x, y), with x = (x_1, x_2, ..., x_p)^T, can be utilized to describe R^(l) as

μ_R^(l)(x, y) = μ_(A^l → G^l)(x, y). (37)

Hence, (37) can be rewritten as

μ_R^(l)(x, y) = μ_(F_1^l)(x_1) ⋆ ⋯ ⋆ μ_(F_p^l)(x_p) ⋆ μ_(G^l)(y), (38)

where ⋆ represents that the juxtaposed antecedents are connected with a t-norm. Let A_x be the fuzzy set describing the p inputs of R^(l), with membership function μ_(A_x)(x). According to each fuzzy rule, a fuzzy set B^l on the set Y is given by the sup-⋆ composition as

μ_(B^l)(y) = sup_(x∈X) [ μ_(A_x)(x) ⋆ μ_R^(l)(x, y) ]. (40)

Based on the commutativity of the t-norm, the membership function μ_(B^l)(y) in (40) can be rearranged. When the singleton fuzzifier is employed in (40), it reduces to

μ_(B^l)(y) = μ_(F_1^l)(x_1) ⋆ ⋯ ⋆ μ_(F_p^l)(x_p) ⋆ μ_(G^l)(y). (41)

Due to the centroid defuzzifier, the FLS output can be expressed as

y_c(x) = ∫_Y y μ_B(y) dy / ∫_Y μ_B(y) dy, (43)

where B is the output fuzzy set and y_c(x) is the crisp output.
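A single-input instance of such an FLS, with singleton fuzzifier and a centroid (weighted-average) defuzzifier over singleton consequents, can be sketched as follows. The triangular membership shapes and the centers on [−3, 3] are illustrative assumptions, not the paper's membership functions:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a to a peak at b, falling to c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fls_output(x, rules):
    """Centroid defuzzification over singleton rule consequents.
    rules: list of (membership_function, consequent_center y^l) pairs."""
    w = np.array([mu(x) for mu, _ in rules])   # rule firing strengths
    y = np.array([yl for _, yl in rules])      # consequent centers y^l
    return float(w @ y / (w.sum() + 1e-12))    # weighted average = discrete centroid

# Seven overlapping triangles on [-3, 3] with symmetric singleton consequents
centers = [-3, -2, -1, 0, 1, 2, 3]
rules = [((lambda x, b=b: tri(x, b - 1, b, b + 1)), b) for b in centers]
```

An input that sits exactly on a triangle's peak fires only that rule, so the output equals that rule's consequent center; between peaks the output interpolates linearly, which is the usual smooth behavior of a centroid-defuzzified FLS.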

Fuzzy Sliding Mode Controller

The input and output fuzzy sets of the system are defined as

ζζ̇ ∈ {NB, NM, NS, Z, PS, PM, PB}, (44a)
ΔD_1 ∈ {NB, NM, NS, Z, PS, PM, PB}, (44b)

where ζζ̇ is the input and the variation ΔD_1 of D_1 is the output. In Equations (44a) and (44b), NB, NM, NS, Z, PS, PM, and PB stand for negative big, negative middle, negative small, zero, positive small, positive middle, and positive big, respectively. Therefore, the following seven rules are designed:

IF ζζ̇ is PB, THEN ΔD_1 is PB;
IF ζζ̇ is PM, THEN ΔD_1 is PM;
IF ζζ̇ is PS, THEN ΔD_1 is PS;
IF ζζ̇ is Z, THEN ΔD_1 is Z;
IF ζζ̇ is NS, THEN ΔD_1 is NS;
IF ζζ̇ is NM, THEN ΔD_1 is NM;
IF ζζ̇ is NB, THEN ΔD_1 is NB.

Besides, Figure 6 shows the input/output membership functions of the fuzzy control system. According to the value of ζζ̇, the centroid defuzzifier is employed to obtain ΔD_1, and D_1 is updated with this increment. Hence, the fuzzy sliding mode controller is designed from the sliding mode control law by replacing the fixed switching gain with the adapted gain D_1 and the sign function with the hyperbolic tangent function.
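The adaptation of the switching gain can be sketched end to end. The normalized universe [−3, 3], the integration step η, and the identity rule table below follow the common "increase the gain while ζζ̇ > 0 (the state is moving away from the sliding surface)" heuristic and are illustrative assumptions:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Seven sets NB..PB on the normalized universe [-3, 3]; identity rule table:
# IF zeta*zeta_dot is X, THEN Delta D1 is X, for X in {NB, NM, NS, Z, PS, PM, PB}
CENTERS = [-3, -2, -1, 0, 1, 2, 3]

def delta_d1(zz):
    """Centroid-defuzzified gain increment for the normalized input zz = zeta * zeta_dot."""
    zz = float(np.clip(zz, -3.0, 3.0))
    w = np.array([tri(zz, b - 1, b, b + 1) for b in CENTERS])  # firing strengths
    return float(w @ np.array(CENTERS, dtype=float) / (w.sum() + 1e-12))

def update_gain(D1, zeta, zeta_dot, scale=1.0, eta=0.05):
    """D1 <- D1 + eta * DeltaD1: raise the gain while the state leaves the surface."""
    return D1 + eta * delta_d1(scale * zeta * zeta_dot)
```

When ζζ̇ > 0 the state is diverging from the sliding surface, so the switching gain grows to pull it back; when ζζ̇ < 0 the state is already converging and the gain is reduced, which limits chattering.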

Results and Discussion
In this section, in order to verify the performance of the proposed method, numerical simulations are conducted for the video satellite, where Jilin-1 is selected. The experiments are implemented in Matlab R2018b with an NVIDIA GeForce RTX 2080 Ti GPU. Firstly, the ISTC algorithm is employed to extract the image information. The results of moving target tracking by ISTC are shown in Figure 7, and the simulation parameters are listed in Table 1. Note that the image size is 4000 pixels × 4000 pixels, which is very large.
According to the above simulation parameters, we assume that the target is located at (0, 510) in the pixel coordinate system at the initial time. Meanwhile, the traditional sliding mode controller and the proposed controller are both applied to obtain the variation curves of output torque, attitude angle, and angular velocity.
In Figure 8, we can see that T_x and T_y, the output torques in the x and y directions, converge after about 40 s under the sliding mode controller. However, in Figure 9, T_x and T_y converge after about 25 s under the HTFSMC. Compared with the sliding mode control, the proposed controller thus improves the convergence speed. Besides, T_z, the output torque in the z direction, converges faster, after about 5 s, for both controllers.
In Figures 10 and 11, for the traditional sliding mode controller and the HTFSMC, θ converges after about 30 s and 20 s, respectively, and φ converges after about 40 s and 20 s, respectively. Ψ converges after about 20 s for both.
In Figures 12 and 13, for the traditional sliding mode controller and the HTFSMC, ω_1 converges after about 30 s and 23 s, ω_2 after about 40 s and 27 s, and ω_3 after about 30 s and 23 s, respectively.
In Figure 14, the image information of the space target video is used as the input of the control loop, and the image change caused by attitude adjustment is simulated. The image representing the satellite visual field is scaled to 2000 pixels × 2000 pixels and shown in black, while the actual image is 4000 pixels × 4000 pixels, with the satellite visual field embedded in it. The red asterisk denotes the target, and the green box denotes the visual field center. It can be seen that attitude control based on image information feedback is achieved by the proposed controller. Moreover, the target image is brought to the center of the image plane, realizing effective gaze tracking control of the space target. Accordingly, Figure 15 shows the trajectory of the target in the image plane under the fuzzy sliding mode control; the position of the visual field center is (2000, 2000). At first, the target is not at the visual field center, but it is brought there by the feedback control based on image information. Figure 16 shows the optical axis pointing error converging to 0 after about 60 s under the proposed controller. Figure 17 shows the simulation results of moving target gaze tracking in a Jilin-1 video, in which the image changes caused by attitude adjustment are simulated and the moving target is an airplane. The moving airplane is not at the visual field center at the initial moment and is imaged at the visual field center by the proposed image-information-based controller. Accordingly, Figure 18 shows the trajectory of the target in the image plane under the fuzzy sliding mode control. Moreover, Figure 19 shows the optical axis pointing error converging to 0 quickly under the proposed controller. This means that gaze tracking of a moving space target is simulated effectively.

Conclusions
The staring imaging attitude tracking and control of video satellites based on image information is studied in this paper. An ISTC algorithm is designed to obtain the image information. Based on this, we introduced an HTFSMC law to achieve attitude tracking and control, in which the hyperbolic tangent function and a fuzzy logic system are introduced into the sliding mode controller.
In the experiments, the image information of the space target video sequence captured by Jilin-1 in orbit is used as the input of the control loop, and the control part is realized through simulation. Compared with the traditional sliding mode controller, the image change caused by attitude adjustment is achieved successfully and more quickly by the proposed controller, and the target image can be kept at the center of the image plane, realizing effective gaze tracking control of the space target. In future work, space target video sequences will be used as the input of the control loop directly.

Data Availability
The data used to support the findings of the study are available from the corresponding author upon request.

Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper.