Fault-Tolerant Optical Flow Sensor/SINS Integrated Navigation Scheme for MAV in a GPS-Denied Environment

An integrated navigation scheme based on multiple optical flow sensors and a strapdown inertial navigation system (SINS) is presented as an alternative to global positioning system (GPS) aiding. Multiple optical flow sensors are mounted on a micro air vehicle (MAV) at different positions with different viewing directions to detect the optical flow around the MAV. A fault-tolerant decentralized extended Kalman filter (EKF) estimates the navigation errors by fusing the inertial and optical flow measurements, which prevents the estimation divergence caused by the failure of an optical flow sensor. The estimated navigation errors are then fed back into the SINS solution process to correct the SINS outputs. The results verify that the navigation errors of the SINS can be reduced effectively (by more than 90%). Moreover, even when a sensor remains in a failed state for 400 seconds, the fault-tolerant integrated navigation system still works properly without divergence.


Introduction
In recent years, unmanned air vehicles have been widely used in military and civil fields. State-of-the-art navigation schemes for unmanned aerial vehicles (UAVs) typically rely on the high-quality global positioning system (GPS). However, the GPS signal is not always available (indoors, for example), so overreliance on GPS has become a prominent weakness of UAVs [1]. As a result, the increasing interest in navigation in GPS-denied environments has heightened the need for novel GPS-free integrated navigation schemes. Inspired by flying insects (e.g., honeybees) that can fly with great agility, scientists hope that micro air vehicles (MAVs) could fly like insects without the aid of GPS signals [2]. Experiments show that one of the navigation information sources honeybees use when flying to their destination is the moving velocity of the image of the world appearing in the eye [3]. Researchers believe that honeybees mainly rely on the apparent motion of objects in their field of view, namely, optical flow, which contains the egomotion information of the flying insect relative to its environment [4].
On the other hand, the requirements for more self-reliant (autonomous) navigation systems and for MAVs with a greater understanding of their environment are becoming more and more urgent. It is obvious that insects like honeybees, even with their small brains and limited intelligence, possess the accurate autonomous navigation ability described above. Accordingly, bioinspired optical flow sensors have been developed, with the advantages of smaller size, lighter weight, lower power consumption, higher update frequency, and lower cost compared with other equipment such as lidar, radar, and magnetometers [5].
Chahl [6] has also pointed out that the largest development opportunities may exist in the small and micro UAV domains as a result of the novelty of aerospace engineering on a small scale. Therefore, autonomous navigation or pose estimation using optical flow sensors is valuable for MAVs. This has motivated much research into optical-flow-based MAV navigation systems and algorithms.
Ruffier and Franceschini [7] developed an MAV based on an optical flow regulation loop that is able to take off, cruise, and land automatically and can also react appropriately to wind disturbances. This MAV keeps the downward optical flow at a constant value. Furthermore, a two-degree-of-freedom (DOF) tethered MAV was shown to land safely on a platform set into motion along two directions, vertical and horizontal, without ground height or groundspeed information [8]. The results show that their MAV works very well. However, the MAV has only two DOF, which is far from a practical 6 DOF MAV.
As stated earlier, the optical flow contains the egomotion information of honeybees or MAVs. Consequently, it is reasonable to design a navigation scheme that combines the optical flow with the inertial navigation system (INS) measurement information in order to improve the navigation precision.
Sabo et al. [9] developed an approach for bioinspired navigation using optical flow. Hallway navigation results show that the roll, pitch, and thrust can be tuned and controlled as expected. The deficiency is that the optical flow was calculated off-board using a computer vision algorithm, rather than taking advantage of optical flow sensors with small computational load and short response time.
Zhang et al. [10] fused strapdown inertial navigation system (SINS) information with optical flow through a Kalman filter to correct the SINS attitude when it diverges. Simulation results show that the estimated vehicle attitude has good accuracy with small error. Nevertheless, the placement scheme of the optical flow sensors is not given in that paper. In addition, the pitch angle is assumed to be sinusoidal while the roll and yaw angles are both zero, which is far from a practical situation.
In order to overcome the drift of the INS, Shen et al. [11] designed an INS/optical flow/magnetometer integrated navigation scheme for a GPS-denied environment. The gyro, accelerometer, and magnetometer information is properly fused to estimate the attitude of the MAV. The results show that the proposed scheme can effectively reduce the errors of the navigation parameters and improve the navigation precision. However, many devices are involved in the navigation, whereas the same functions can be realized by fusing only optical flow and inertial information.
Lyu et al. [12] designed a fault-tolerant filter based on optical flow and inertial measurements, which greatly improves the velocity estimation accuracy. However, the utilization of the optical flow information could be extended further, beyond estimating velocity only.
In this paper, a multiple optical flow sensor/SINS onboard integrated navigation scheme for a GPS-denied environment is proposed. Multiple optical flow sensors are fixed on the given MAV following an optimal placement scheme, in order to make the optical flow sensors as sensitive as possible to changes in the motion state of the MAV. Considering that the MAV may perform a large maneuver so that some optical flow sensors may face the sky, causing them to fail, an adaptive decentralized filter with fault detection and isolation ability is developed to overcome this problem. 6 DOF flight simulations are carried out to validate the effectiveness of the navigation algorithm.
The rest of this paper is organized as follows. Section 2 introduces the mathematical model of optical flow sensors. The optical flow sensor/SINS integrated navigation algorithm based on the extended Kalman filter (EKF) is described in Section 3. Section 4 gives the optimal placement scheme of multiple optical flow sensors fixed on a MAV. Section 5 presents an application of the centralized EKF. A fault-tolerant integrated navigation scheme is proposed in Section 6. The last section concludes that the fault-tolerant optical flow sensor/SINS integrated navigation algorithm is effective.

Optical Flow Sensor Model
The magnitude of optical flow is defined as the angular rate at which a point on an object moves relative to an optical flow sensor. The perspective projection model of the optical flow sensor [10, 11] is illustrated in Figure 1.

Optical Flow Definition. Let P_G = [X Y Z]^T be a point on the ground in the three-dimensional optical flow sensor reference coordinate frame S_OF (o_OF x_OF y_OF z_OF). The optical axis of the optical flow sensor is the z-axis of the sensor coordinate system, f is the focal length, and the origin o_OF is the centre of projection. W and H are the pixel counts of the optical flow sensor image plane along the x_OF and y_OF directions, respectively. The projection p = [x y]^T of point P_G on the image plane is given by

x = f X / Z,   y = f Y / Z.   (1)

Point P_G has the following relative motion to the optical flow sensor in the sensor coordinate system S_OF:

V_{P_G} = dP_G/dt = −T_G − ω × P_G,   (2)

Figure 1: Perspective projection model of an optical flow sensor.

Journal of Sensors
where ω is the angular velocity and T_G the translational component of the motion relative to the optical flow sensor. Expanding (2) in three dimensions, we obtain

Ẋ = −T_x − (ω_y Z − ω_z Y),
Ẏ = −T_y − (ω_z X − ω_x Z),
Ż = −T_z − (ω_x Y − ω_y X).   (3)

Taking the derivative of both sides of (1) with respect to time, the relation between the velocity of P_G in the optical flow sensor coordinate system and the velocity, or optical flow, of p in the image plane [13] is obtained:

ṗ = (f/Z²) [Ẋ Z − X Ż,  Ẏ Z − Y Ż]^T.   (4)
Expressing (4) in x and y components, we get

ẋ = f (Ẋ Z − X Ż) / Z²,   (5)
ẏ = f (Ẏ Z − Y Ż) / Z².   (6)

Substituting (3) into (5) and (6), the following equations are obtained:

v_x = ẋ = (x T_z − f T_x)/Z + x y ω_x / f − (f + x²/f) ω_y + y ω_z,
v_y = ẏ = (y T_z − f T_y)/Z + (f + y²/f) ω_x − x y ω_y / f − x ω_z,   (7)

where v_x and v_y are the optical flow components in the x and y directions, which can be calculated by a matching algorithm such as the sum of absolute differences (SAD). When an optical flow sensor is mounted on a MAV flying in three-dimensional space, the optical flow is a function of the MAV motion state (translation states v_x, v_y, v_z and rotation states ω_x, ω_y, ω_z) and the distance d from the optical flow sensor to the object point, as shown in Figure 2.
In order for the optical flow sensor to work properly, the object plane should have adequate texture and be sufficiently illuminated for the sensor to distinguish and track features. The optical flow sensor measures only two quantities, the x and y components of the mean optical flow within its field of view. The local optical flow vector OF in the viewing direction n (i.e., the direction of the optical axis), experienced by an optical flow sensor moving at velocity v and rotational rate ω, is [14]

OF = −(v − (v · n) n)/d − ω × n,   (8)

where d is the distance between the object point and the optical flow sensor along the direction n. For a simple example, Figure 2 depicts an optical flow sensor moving relative to its object plane at a rightward velocity v, a height above the ground h, and a yaw rate ω. The angle between the viewing direction (the direction of the optical axis) of the optical flow sensor and the plumb line is θ. The optical flow OF detected by the optical flow sensor in the plane formed by the optical axis and the velocity vector is [5]

OF = (v/h) cos² θ + ω.   (9)

The physical unit of OF in (8) and (9) is rad/s.
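As an illustration, the generic model of Eq. (8) can be evaluated numerically. The sketch below assumes the form OF = −(v − (v·n)n)/d − ω × n, with all vectors expressed in the same frame; the function name is illustrative.

```python
import numpy as np

def local_optical_flow(v, omega, n, d):
    """Local optical flow vector (rad/s) for a sensor looking along the
    unit vector n at distance d from the surface, translating at v and
    rotating at omega (all in the same frame):
    OF = -(v - (v.n)n)/d - omega x n."""
    v, omega, n = (np.asarray(a, dtype=float) for a in (v, omega, n))
    n = n / np.linalg.norm(n)              # enforce a unit viewing direction
    v_perp = v - np.dot(v, n) * n          # translation component across the view axis
    return -v_perp / d - np.cross(omega, n)

# Downward-looking sensor at h = 10 m, flying at 1 m/s, no rotation:
of = local_optical_flow([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 10.0)
# |of| = v/h = 0.1 rad/s
```

For a purely translating, nadir-looking sensor the magnitude reduces to v/h, consistent with the special case of Eq. (9) at θ = 0.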

Reference Coordinate Systems.
In order to describe the 6 DOF motion of a blended wing body [15] MAV with multiple optical flow sensors in three-dimensional space, three reference coordinate systems, as shown in Figure 3 [16], are defined. (i) The transformation matrix from the MAV body coordinate system S_b to the optical flow sensor coordinate system S_OF is

C_b^OF = R_y(η) R_x(μ),   (10)

where μ and η are the mounting angles of the optical flow sensor fixed on the MAV: μ is the angle of rotation around the x_b axis taking S_b towards S_OF, and η is the subsequent angle of rotation around the y_b axis; R_x and R_y denote the elementary rotation matrices about those axes. Thus, the transformation matrix from the local coordinate system S_n to the optical flow sensor coordinate system S_OF is

C_n^OF = C_b^OF C_n^b,   (11)

Figure 3: Three coordinate systems used for modelling the blended wing body MAV [17]. r_nb, r_bOF, and r_nOF express the relative positions of the origins of the coordinate systems.
where C_n^b is the transformation matrix from the local coordinate system S_n to the MAV body coordinate system S_b.
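The two transformation matrices above can be composed from elementary rotations. A minimal sketch, assuming the rotation sequence described in the text (μ about x_b, then η about y_b); the function names are illustrative:

```python
import numpy as np

def rot_x(a):
    """Elementary rotation by angle a about the x axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def rot_y(a):
    """Elementary rotation by angle a about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def C_b_to_OF(mu, eta):
    """Body -> sensor frame, Eq. (10): mu about x_b, then eta about y_b."""
    return rot_y(eta) @ rot_x(mu)

def C_n_to_OF(mu, eta, C_n_b):
    """Local (navigation) -> sensor frame via the body frame, Eq. (11)."""
    return C_b_to_OF(mu, eta) @ C_n_b
```

Both products remain proper rotation matrices, so frame transformations can be inverted with a transpose.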

Expression of Optical Flow in Three-Dimensional Space.
As the measurements of optical flow sensors are based on the optical flow sensor coordinate system S OF , the motion states of the MAV in the inertial coordinate system S n need to be projected onto the optical flow sensor coordinate system S OF .
Therefore, based on the coordinate systems defined in the above section, the optical flow experienced by an optical flow sensor in Figure 3 is

OF_OF,x = −V_nOF,x^OF / d_OFg − ω_nOF,y^OF,
OF_OF,y = −V_nOF,y^OF / d_OFg + ω_nOF,x^OF,   (12)

where OF_x and OF_y are the x and y components of OF defined in (8), which are the measurements of the optical flow sensor. V_nOF and ω_nOF are the velocity and rotation rate of the optical flow sensor in the inertial frame S_n, respectively. Subscripts OF,x and OF,y denote the x and y components in the optical flow sensor coordinate system S_OF, respectively. d_OFg is the distance along z_OF from the lens centre of the optical flow sensor to the ground. Next, we need to find the relationship between the optical flow measurements in the optical flow sensor coordinate system S_OF and the motion state of the aircraft in the inertial coordinate system S_n.
First, the velocity vector of the optical flow sensor in the inertial frame S_n can be expressed as

V_nOF^n = V_nb^n + ω_nb^n × r_bOF^n,   (13)

where r_bOF^n is the position of the sensor relative to the MAV body origin expressed in S_n. The projection of V_nOF onto the S_OF coordinate system is

V_nOF^OF = C_n^OF V_nOF^n.   (14)

As mentioned earlier, the navigation coordinate system S_n can be considered an inertial coordinate system when the rotation of the Earth is neglected. It should then be noted that ω_nb^b is indeed the rotational angular velocity of the MAV in the inertial reference system, which can be measured directly by the SINS.

Second, the direction vector of z_OF is defined as k_OF, which means that k_OF^OF = [0 0 1]^T. Thus, the projection of k_OF onto S_n is

k_OF^n = (C_n^OF)^T k_OF^OF.   (15)

Therefore, the cosine of the angle between z_OF and the vertical axis of S_n is −k_OF,z^n, and

k_OF,z^n = C_13 sin η − C_23 sin μ cos η + C_33 cos μ cos η,   (16)

where C_ij is the corresponding element of C_n^b. The distance along the axis z_OF from the origin of the optical flow sensor coordinate system S_OF (i.e., the focus of the optical flow sensor) to the ground is

d_OFg = h / (−k_OF,z^n),   (17)

where h is the height of the sensor above the ground. Third, the rotational angular velocity of the optical flow sensor in the optical flow sensor coordinate system is

ω_nOF^OF = C_b^OF ω_nb^b.   (18)

Finally, substituting (14), (17), and (18) into (12), the measurements of an optical flow sensor are obtained as a function of the MAV motion state:

OF_OF = g(X_SINS).   (19)

The measurement noise of OF_OF in (19) is modelled in the following simulation by a white noise vector with zero mean, whose variance matrix is R = diag(r_OF², r_OF²), where r_OF = 0.001 rad/s.
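Putting (13)-(19) together, the prediction of a sensor's measurement from the SINS states can be sketched as follows. This is a sketch under stated assumptions: a flat ground plane below the vehicle, the mounting-angle convention above, and illustrative argument names (including the height h above ground).

```python
import numpy as np

def predict_sensor_flow(v_nb_n, omega_nb_b, C_n_b, r_bOF_b, mu, eta, h):
    """Predict the two optical flow components of Eq. (19) from SINS states.
    v_nb_n: MAV velocity in S_n; omega_nb_b: body rates; C_n_b: S_n -> S_b DCM;
    r_bOF_b: sensor lever arm in S_b; mu, eta: mounting angles; h: height AGL."""
    c, s = np.cos, np.sin
    R_x = np.array([[1, 0, 0], [0, c(mu), s(mu)], [0, -s(mu), c(mu)]])
    R_y = np.array([[c(eta), 0, -s(eta)], [0, 1, 0], [s(eta), 0, c(eta)]])
    C_b_OF = R_y @ R_x                       # Eq. (10)
    C_n_OF = C_b_OF @ C_n_b                  # Eq. (11)
    # Sensor velocity: vehicle velocity plus lever-arm effect, Eqs. (13)-(14)
    omega_nb_n = C_n_b.T @ omega_nb_b
    v_OF = C_n_OF @ (v_nb_n + np.cross(omega_nb_n, C_n_b.T @ r_bOF_b))
    # Optical axis in S_n; its vertical component gives the slant range,
    # Eqs. (15)-(17) (valid while the axis points below the horizon)
    k_n = C_n_OF.T @ np.array([0.0, 0.0, 1.0])
    d_OFg = h / (-k_n[2])
    # Sensor rotation rate in S_OF, Eq. (18)
    w_OF = C_b_OF @ omega_nb_b
    # Flow components, Eq. (12)
    return np.array([-v_OF[0] / d_OFg - w_OF[1],
                     -v_OF[1] / d_OFg + w_OF[0]])
```

For a level vehicle with a nadir-pointing sensor (μ = π, η = 0) at 10 m flying at 1 m/s, the predicted flow magnitude is v/h = 0.1 rad/s, as expected from Eq. (9).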

Information Fusion Using EKF
In order to improve the precision of the SINS, the navigation errors, rather than the navigation parameters themselves, are taken as the state vector in this study. This is because the navigation errors are much smaller than the navigation parameters and change slowly. Therefore, the state transition of the navigation errors can be expressed accurately with linear equations, and the estimation precision of the state can be easily guaranteed.
The state vector is taken as [17]

X = [δL δλ δh δV_E δV_N δV_U φ_E φ_N φ_U ε_cx ε_cy ε_cz ε_rx ε_ry ε_rz ∇_x ∇_y ∇_z]^T,   (20)

where the subscripts E, N, and U stand for the components along the corresponding axes of the east-north-up local coordinate system S_n; x, y, and z stand for the components along the corresponding axes of the MAV body coordinate system S_b; δL, δλ, and δh denote the position errors of the SINS; δV_E, δV_N, and δV_U denote the velocity errors of the SINS; φ_E, φ_N, and φ_U denote the attitude errors of the SINS; ε_cx, ε_cy, and ε_cz denote the random bias errors of the gyro in the SINS; ε_rx, ε_ry, and ε_rz denote the random walk process errors of the gyro in the SINS; and ∇_x, ∇_y, and ∇_z denote the random walk process errors of the accelerometer in the SINS.
Then, the state equation of the EKF can be represented as

Ẋ = F X + G ω,   (21)

where F is the 18 × 18 SINS error matrix [17] and G is the 18 × 9 system noise allocation matrix (22). ω is the system noise vector with zero mean, that is,

ω = [ω_gx ω_gy ω_gz ω_rx ω_ry ω_rz ω_ax ω_ay ω_az]^T,   (23)

where ω_gx, ω_gy, and ω_gz are the gyro random white noises; ω_rx, ω_ry, and ω_rz are the first-order Markov-driven white noises of the gyro; and ω_ax, ω_ay, and ω_az are the first-order Markov-driven white noises of the accelerometer.

The covariance matrix of the system noise vector ω is Q = E[ω ω^T]. (24)
The estimated navigation errors are used to correct the navigation parameters of the SINS. There are two methods to correct the navigation parameters [18]: one is to directly correct the outputs of the SINS by taking the estimated value of the navigation error as feedback, which is called the output correction method; the other is to feed the estimated value of the navigation error into the SINS solution process, which is called the feedback correction method. The output correction method only changes the accuracy of the output, rather than the error state inside the inertial navigation system. In this way, the errors accumulate gradually over time, making the difference between the ideal mathematical model and the actual navigation system larger and larger, and the accuracy of the integrated navigation system becomes poor. Therefore, the feedback correction method is adopted in this paper in order to increase the accuracy of the SINS.
Finally, the structure of the optical flow sensor/SINS integrated navigation scheme based on the EKF is shown in Figure 4. It is mainly composed of the following four modules.
(1) SINS. The microelectromechanical system- (MEMS-) based SINS outputs attitude, position, and velocity measurements with drifts and high noise. As time goes by, the navigation errors of the SINS gradually increase, so other navigation devices, such as the optical flow sensors introduced above, should take on the role of correcting the SINS errors.
(2) A Group of Multiple Optical Flow Sensors. Multiple optical flow sensors are mounted on the MAV at different positions with different viewing directions. As a result, the optical flow information in different directions around the MAV can be adequately detected.
(3) Prediction of the Optical Flow Using the SINS Measurements. The optical flow can be predicted by using (19) with the SINS measurements as the input. δOF is the difference between the optical flow sensor measurements and the predicted ideal optical flow. The optical flow difference δOF is taken as the measurement of the EKF, so the measurement equation of the EKF is

δOF = H X + υ,   (25)

where H is the measurement matrix [17] and υ is the measurement noise vector with zero mean, whose covariance matrix is R = diag(r_OF², r_OF²), where r_OF = 0.001 rad/s.
(4) Extended Kalman Filter (EKF). A centralized extended Kalman filter is first used in this study to fuse the inertial and optical flow information. The state equation of the EKF is expressed by (21), and the measurement equation by (25). The navigation errors of the SINS are corrected by feeding back the navigation error estimates of the EKF.
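The measurement update of module (4) can be sketched generically: a standard Kalman update applied to the error state with δOF as the measurement of (25). The 2-state example below is illustrative only, not the 18-dimensional filter of (20).

```python
import numpy as np

def ekf_update(x, P, dOF, H, R):
    """One measurement update of the error-state EKF, Eq. (25):
    dOF = H x + v, with x the SINS error state.
    Returns the corrected error state and covariance."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (dOF - H @ x)              # state correction
    P = (np.eye(len(x)) - K @ H) @ P       # covariance update
    return x, P
```

With a near-noiseless measurement the corrected state follows the measurement almost exactly, which is the expected limiting behaviour of the gain.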

Sensor Placement Optimization
As mentioned in the previous section, multiple optical flow sensors are fixed on the MAV, so the placement scheme of the optical flow sensors should be taken into account. It consists of the following three issues: the mounting positions, the mounting orientations, and the number of optical flow sensors. For the mounting positions, when the sensors are mounted far apart from each other, the navigation errors are slightly smaller than the ones in the other cases, which means that the navigation performance is slightly better.

Mounting Orientation of Optical Flow Sensors. We expect the optical flow sensors to be as sensitive as possible to changes in the motion state of the MAV. From (15), (16), (17), (18), and (19), it can be found that the magnitude of the optical flow is a function of the MAV motion state x and the optical flow sensor mounting angles (μ, η). Thus,

OF = g(x, μ, η),   (27)

where g(•) is a function of x, μ, and η.
Let ΔOF represent the optical flow sensitivity, defined as

ΔOF = g(x_0 + Δx_0, μ, η) − g(x_0, μ, η),   (28)

where x_0 is the motion state of the MAV at a certain moment and Δx_0 is the change of the motion state of the MAV. The motion state of the MAV and its change used for investigating the influence of the mounting angles on the sensor sensitivity are shown in Table 5.
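The sensitivity ΔOF defined above is simply a finite difference of the predicted flow with respect to a motion-state perturbation. A minimal sketch, with an illustrative stand-in for g (the real g comes from Eq. (19)):

```python
import numpy as np

def flow_sensitivity(g, x0, dx0, mu, eta):
    """Optical flow sensitivity: magnitude of the change in predicted
    flow g(x, mu, eta) caused by a motion-state perturbation dx0."""
    return np.linalg.norm(g(x0 + dx0, mu, eta) - g(x0, mu, eta))

# Toy stand-in for g: flow magnitude v/h of a downward-looking sensor,
# with motion state x = [v, h].
g = lambda x, mu, eta: np.array([x[0] / x[1]])
s = flow_sensitivity(g, np.array([10.0, 100.0]), np.array([1.0, 0.0]), 0.0, 0.0)
# s = |11/100 - 10/100| = 0.01 rad/s
```

Sweeping μ and η over a grid and evaluating this quantity reproduces the kind of sensitivity map shown in Figure 6.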
Based on the current state shown in Table 5, the mounting angles μ and η are varied from 120 to 240 degrees and from −60 to 60 degrees, respectively. The relationship between the measuring sensitivity and the mounting angles of the optical flow sensors is illustrated in Figure 6. The results indicate the ranges of mounting angles for which the optical flow change is the most obvious (in other words, for which the sensitivity of the optical flow sensor is the highest); these ranges are given by Formula (29).

Number of Optical Flow Sensors. As we know, more optical flow sensors provide more measurement information, and the redundancy of the navigation system is also enhanced, making the system more reliable. However, too much information input from the optical flow sensors will increase the computational burden of the onboard computer of the MAV. As a result, the real-time performance of the integrated navigation algorithm may fall. Moreover, too many optical flow sensors will also increase the payload mass of the MAV. In contrast, too few optical flow sensors will lead to poor estimation accuracy of the EKF.
Different quantities of optical flow sensors are fixed on the MAV to investigate the influence on the navigation performance. The first one, the first two, the first three, and all of the optical flow sensors in Table 4 are installed on the MAV in turn.
Taking the longitudinal velocity error as an example, the results shown in Figure 7 indicate that, with the aid of the optical flow information, the navigation error is obviously smaller. If the number of optical flow sensors is more than one, the navigation error is further reduced, to about 1/10 of the navigation error of the one-sensor-aided inertial navigation system.
Moreover, if the quantity of optical flow sensors is greater than two, the navigation performance of the integrated navigation system is the same as that of the inertial navigation system aided by two optical flow sensors. Considering the navigation accuracy, computational cost, payload mass, and redundancy, three optical flow sensors are recommended to be mounted on the MAV.
In summary, the optimal placement scheme, shown in Table 6, is that three optical flow sensors are mounted on the MAV, as far apart as possible from each other and from the gravity centre of the MAV. The optimal mounting angles are given by Formula (29).

Numerical Simulation Using a Centralized EKF
To test the performance of the optical flow sensor/SINS integrated navigation scheme, three optical flow sensors are mounted on the MAV using the optimal placement scheme shown in Table 6. The other initial parameters for this simulation can be seen in Table 7. The navigation errors of the optical flow sensor/SINS integrated navigation scheme using a centralized EKF, compared with those of the navigation system using the SINS only, are displayed in Figure 8. The results indicate that the navigation errors (including position, velocity, and attitude errors) become significantly smaller. However, the premise for the optical flow sensor to work well is that the detection plane has sufficient texture and the illumination is sufficient. Hence, if the MAV makes a big maneuver, some optical flow sensor(s) may face the sky, which leads to the failure of those sensors as a result of zero measurements. In this case, the navigation system will diverge, suffering from the wrong information input.
For example, let the output of the third optical flow sensor in Table 6 be zero from 300 s to 700 s; the longitude error of the integrated navigation system using centralized EKF is illustrated in Figure 9.
As can be seen in Figure 9, the navigation error immediately diverges when one of the optical flow sensors fails. Accordingly, an integrated navigation scheme with fault detection and isolation ability is proposed in the subsequent sections.

Fault-Tolerant Integrated Navigation Scheme
6.1. Decentralized Filter. There are two ways to achieve an optimal integrated navigation system based on extended Kalman filter technology: one is the centralized Kalman filter, and the other is the decentralized Kalman filter.
The centralized Kalman filter is the method used in the previous sections; it utilizes only one Kalman filter to process all the information of all the navigation subsystems (the SINS and the optical flow sensors). The centralized EKF integrates the SINS subsystem with the optical flow sensor subsystems optimally to estimate the navigation error state and then corrects the SINS subsystem by using the optimal estimate of the navigation error state. Thus, the navigation precision can be improved relative to a navigation system using the SINS only.
The decentralized filter used in this study is a federated filter. By using the principle of "information distribution," the best compromise can be obtained by adjusting the distribution weights of the subsystem information in the navigation system. Therefore, the federated filter used in this study achieves the following advantages over the centralized filter: (1) The fault-tolerant performance of the decentralized filter is better. When one or more navigation subsystems fail, the fault can be detected and isolated. Afterwards, the remaining normal navigation subsystems can be reassembled or reconfigured quickly to obtain the required accurate filtering solution.
(2) For the centralized filter, the increase in state dimension brings the so-called curse of dimensionality, which makes the computational cost increase dramatically, while reducing the filter dimension loses filtering accuracy and can even cause filtering divergence. As for the decentralized filter, the synthesis (fusion) algorithms of both the local filters and the master filter are simple and require less computation, which facilitates the real-time implementation of the navigation algorithm.
Referring to Figure 10, the workflow of the federated filter can be summarized as follows: (1) The initial estimated covariance matrices of the local filters and the master filter are set to γ_i (i = 1, 2, …, N, m) times that of the integrated navigation system; γ_i satisfies the principle of information conservation, that is,

1/γ_1 + 1/γ_2 + ⋯ + 1/γ_N + 1/γ_m = 1.   (30)

(2) The process noise covariance matrices of the local filters and the master filter are set to γ_i times that of the combined system process.
(3) Each local filter obtains the local estimation by processing its own measurement information.
(4) The local estimates of the local filters and the estimate of the master filter are fused according to

P_g = (Σ_i P_ii^(−1))^(−1),   X̂_g = P_g Σ_i P_ii^(−1) X̂_i,   (31)

where X̂_i is the ith local state estimate and P_ii is the corresponding estimated covariance matrix. The local estimates are assumed to be uncorrelated, namely, P_ij = 0 (i ≠ j). X̂_g is the global optimal estimate, whose corresponding covariance matrix is P_g.
(5) The global optimal estimate is used to reset the state estimation and covariance matrix of local filters and the master filter.
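Step (4) above is the standard information-fusion rule of the federated filter. A minimal sketch, assuming uncorrelated local estimates; the function name is illustrative:

```python
import numpy as np

def federated_fuse(states, covs):
    """Information fusion of local estimates, Eq. (31):
    P_g = (sum_i P_ii^-1)^-1,  X_g = P_g * sum_i P_ii^-1 * X_i,
    assuming the local estimates are uncorrelated (P_ij = 0 for i != j)."""
    infos = [np.linalg.inv(P) for P in covs]           # information matrices
    P_g = np.linalg.inv(sum(infos))                    # fused covariance
    X_g = P_g @ sum(I @ x for I, x in zip(infos, states))
    return X_g, P_g

# Two local filters with equal confidence average their estimates,
# and the fused covariance is halved:
X_g, P_g = federated_fuse([np.array([1.0]), np.array([3.0])],
                          [np.eye(1), np.eye(1)])
# X_g = [2.0], P_g = [[0.5]]
```

The fault-isolation logic of the next subsection amounts to simply dropping a faulty sensor's (X̂_i, P_ii) pair from the two sums before fusing.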
The determination of the information distribution coefficients β_i = 1/γ_i is crucial to the decentralized filter [19]. In this study, considering the fault tolerance and the computational complexity of the decentralized filter, the information distribution coefficient of the master filter is β_m = 0, and the information distribution coefficient of each local filter is β_i = 1/N (i = 1, 2, …, N). Therefore, the estimate of the master filter is actually the global estimate.

6.2. Fault Detection and Isolation. In the fault-tolerant integrated navigation system, the validity of the measurement information of each local filter must be determined in real time so as to decide which local state estimates are used to calculate the global state estimate. This requires a real-time fault detection and isolation algorithm in the local filtering process. As a result, once a fault is detected, it can be isolated. Consequently, by reconstructing the system information, the whole navigation system will not fail because of the failure. A discrete system model with failure is expressed by

X_k = Φ_{k,k−1} X_{k−1} + Γ_{k−1} W_{k−1},
Z_k = H_k X_k + V_k + γ f_{k,φ},   (32)

where Z_k ∈ R^m is the measurement of the navigation system, X_k ∈ R^n is the system state, Φ_{k,k−1} ∈ R^{n×n} is the one-step transition matrix of the system state, and Γ_{k−1} ∈ R^{n×r} is the system noise matrix. W_k ∈ R^r and V_k ∈ R^m are mutually independent Gaussian white noise sequences with the following constraints:

E[W_k] = 0,  E[W_k W_j^T] = Q_k δ_kj,  E[V_k] = 0,  E[V_k V_j^T] = R_k δ_kj,   (33)

where δ_kj is the Kronecker delta function. γ is a random vector which indicates the magnitude of a fault, and f_{k,φ} is a piecewise function, where φ is the occurrence moment of the fault.
Here, the residual χ 2 detection method is used to detect and isolate the faults of the navigation system.
The residual of each local Kalman filter is

r_k = Z_k − H_k X̂_{k/k−1},   (34)

where the one-step prediction X̂_{k/k−1} is

X̂_{k/k−1} = Φ_{k,k−1} X̂_{k−1}.   (35)

It can be proved that the residual r_k of the Kalman filter is a Gaussian white noise with zero mean when no fault occurs, and its variance is

A_k = H_k P_{k/k−1} H_k^T + R_k.   (36)

When a fault occurs, the mean of the residual r_k is no longer zero. Therefore, by monitoring the mean of the residuals, it is possible to determine whether the navigation system has failed.
The fault detection function is defined as

λ_k = r_k^T A_k^{−1} r_k,   (37)

where λ_k is subject to the χ² distribution with m degrees of freedom, namely, λ_k ∼ χ²(m), where m is the dimension of the measurement Z_k.
Here, m = 2. Finally, the fault criteria are as follows: if λ_k > T_D, a fault has occurred; if λ_k ≤ T_D, there is no fault. The preset threshold T_D is determined by the false alarm rate P_f: given the false alarm rate P_f = α, the χ² distribution gives the threshold value T_D.
In this study, P_f = α = 0.001 and m = 2; then, according to the χ² distribution, the threshold value is T_D = 13.82.
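The threshold can be reproduced directly: for m = 2 the χ² tail probability has the closed form P(λ > x) = exp(−x/2), so T_D = −2 ln P_f. A sketch of the residual test; the function name is illustrative:

```python
import numpy as np

def fault_detected(r, A, P_f=0.001):
    """Residual chi-square test of Eq. (37): lambda = r^T A^-1 r, compared
    with the threshold T_D.  For m = 2 the chi-square tail probability is
    exp(-x/2), so T_D = -2 ln(P_f); this closed form holds only for two
    degrees of freedom."""
    lam = float(r @ np.linalg.inv(A) @ r)
    T_D = -2.0 * np.log(P_f)           # = 13.8155 for P_f = 0.001
    return lam > T_D, lam, T_D

# A healthy residual passes; a large one trips the detector:
ok = fault_detected(np.array([1.0, 0.0]), np.eye(2))
bad = fault_detected(np.array([10.0, 0.0]), np.eye(2))
```

For a general m, the threshold would instead come from the inverse survival function of the χ²(m) distribution.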
As each local filter is designed for the measurement information from one optical flow sensor, the fault detection and isolation algorithm in the local filter can be used to detect invalid measurement information and discard the corresponding local error state estimate. Using the remaining correct local state estimates, a reliable state estimate of the whole navigation system can be obtained according to the decentralized filter proposed above. The structure of the proposed fault-tolerant optical flow sensor/SINS integrated navigation scheme is shown in Figure 11.

Numerical Simulation Using the Fault-Tolerant Integrated Navigation Algorithm
As before, three optical flow sensors are mounted on the MAV using the optimal placement scheme shown in Table 6. We assume that the third optical flow sensor in Table 6 fails, as can be seen in Figure 12. The other initial parameters are shown in Table 7.
As shown in Figure 12, when the optical flow sensor fails, the value of the fault detection function λ_3 increases sharply, far beyond the threshold value T_D = 13.82.
In the subsequent figures (Figures 13-15), the "fault zone" is the period during which the optical flow sensor fails; the normal working state is restored after 700 s.
The meanings of the legends in each figure are as follows: (1) "DF-no fault": the navigation error when using the decentralized filter for MAV navigation; during the flight of the MAV, the optical flow sensors have no fault.
(2) "DF-FDI": the navigation error when using the decentralized filter with fault detection and isolation (FDI) capability for MAV navigation; from 300 s to 700 s, the outputs of the third optical flow sensor in Table 6 are set to zero.
(3) "DF-no FDI": as in the previous case, the outputs of the third optical flow sensor in Table 6 are set to zero from 300 s to 700 s; the difference is that the decentralized filter does not have the FDI capability.
(4) "CF-no fault": the navigation error history when using the centralized filter for MAV navigation; during the flight of the MAV, the optical flow sensors have no fault.
(5) "CF-fault": the navigation error when using the centralized EKF for MAV navigation; the centralized filter, of course, does not have the FDI capability, and from 300 s to 700 s the outputs of the third optical flow sensor in Table 6 are set to zero.

For the centralized filter and the decentralized filter without FDI capability, the global estimate immediately diverges when the optical flow sensor fails, causing the failure of the whole system, labelled "failure of FDI-free filters" in the following figures. In contrast, the decentralized filter with the FDI capability can detect and isolate the sensor failure in time so that the navigation system keeps working properly, labelled "effectiveness of the FDI filter" in the following figures. The results verify the correctness and effectiveness of the designed fault-tolerant optical flow sensor/SINS integrated navigation scheme in the presence of system-level failures.
In addition, as can be seen in the figures, the curves of "DF-no fault" and "CF-no fault" are consistent, and the curves of "DF-no FDI" and "CF-fault" are consistent, which indicates that the centralized filter and decentralized filter are mathematically equivalent.
In order to examine the effect of the sampling period on the simulation performance, 1200 s MAV flight simulations using the flight navigation scenarios proposed in this manuscript are performed in a desktop environment (Intel® Core™ i7-6700 CPU at 3.40 GHz, Matlab R2015a) under different sampling periods (0.1 s, 0.02 s, 0.01 s, 0.002 s, and 0.001 s). The simulation results are obtained from 50 Monte Carlo runs.
Three navigation schemes are considered in the simulation: (1) navigation based only on the INS (inertial navigation system), called "INS only" in Table 8; (2) the optical flow sensor/INS integrated navigation scheme using the centralized EKF, called "centralized EKF" in Table 8; and (3) the optical flow sensor/INS integrated navigation scheme using the decentralized EKF with FDI capability, called "FDI-EKF" in Table 8. Two criteria are used for evaluating the simulation performance: the computing time and the navigation error. The criterion employed for evaluating the navigation error with respect to the sampling period is the RMSE (root mean square error) over M Monte Carlo runs at a given time k, defined by

$$\mathrm{RMSE}(k) = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(x_k^i - \hat{x}_k^i\right)^2},$$

where the superscript i denotes quantities on the ith run: $x_k^i$ is the true value of one of the MAV motion states at time k for the ith Monte Carlo run, and $\hat{x}_k^i$ is the corresponding estimate. Taking the RMSE of V_N as an example, Figure 16 indicates that the shorter the sampling period, the smaller the navigation error. On the other hand, the average computing time over the 50 Monte Carlo simulations of the different navigation algorithms under different sampling periods is shown in Table 8: the shorter the sampling period, the longer the computing time. When the decentralized EKF with FDI capability is deployed with the 0.001 s sampling period, the computing time (1911.1021 s) even exceeds the simulated flight time (1200 s), which means that the navigation algorithm cannot run in real time.
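The RMSE above can be computed directly from the stored Monte Carlo runs. The sketch below is illustrative; the (M runs × K time steps) array layout is an assumption, not taken from the paper.

```python
import numpy as np

# RMSE over M Monte Carlo runs at each time step k, matching the definition
# above: RMSE(k) = sqrt( (1/M) * sum_i (x_k^i - xhat_k^i)^2 ).
def monte_carlo_rmse(truth, estimates):
    # truth, estimates: arrays of shape (M, K) -> M runs, K time steps
    err = np.asarray(estimates) - np.asarray(truth)
    return np.sqrt(np.mean(err ** 2, axis=0))   # one RMSE value per time step

# Toy example with M = 3 runs and K = 2 time steps.
truth = np.array([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]])
est = np.array([[1.1, 2.0], [0.9, 2.0], [1.0, 2.3]])
rmse = monte_carlo_rmse(truth, est)   # shape (2,)
```

Averaging over runs (axis 0) rather than over time keeps the time-resolved error history that plots such as Figure 16 and Figure 17 require.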
Conversely, as can be seen in Table 8, the larger the sampling period, the shorter the computing time, since increasing the sampling period reduces the amount of computation. However, as shown in Figure 16, if the sampling period is too long (e.g., 0.1 s), the filter diverges. Accordingly, a series of sampling periods is examined in the simulation to determine an appropriate one for the optical flow sensor/INS integrated navigation scheme using the decentralized EKF with FDI capability (FDI-EKF). After many simulations, it is found that the FDI-EKF scheme works correctly when the sampling period is equal to or less than 0.02 s.
As portrayed in Figure 17, when the sampling period is halved from 0.02 s to 0.01 s, the navigation error is greatly reduced. Further reduction of the sampling period yields only a slight additional reduction in the navigation error, while Table 8 indicates that the computing time grows sharply, even beyond the 1200 s simulated flight time. Accordingly, 0.01 s is selected as the sampling period in Table 7 as a trade-off between navigation accuracy and computing time.
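The selection rule described above can be encoded compactly: among the sampling periods that keep the filter stable and run in real time, pick the shortest (most accurate) one. The timing values below are placeholders for illustration, not the paper's Table 8 entries, except the 1911.1 s figure quoted in the text.

```python
# Sketch of the sampling-period trade-off. Constraints from the text:
# the filter diverges for periods longer than 0.02 s, and a run is only
# real-time feasible if its computing time stays below the 1200 s flight.
SIM_TIME = 1200.0           # simulated flight duration in seconds
STABLE_MAX_PERIOD = 0.02    # longest period for which the FDI-EKF converges

def pick_sampling_period(timings):
    # timings: {period_s: computing_time_s} for each candidate period
    feasible = [T for T, cost in timings.items()
                if T <= STABLE_MAX_PERIOD and cost < SIM_TIME]
    # shortest feasible period gives the best navigation accuracy
    return min(feasible) if feasible else None

# Placeholder computing times (only 1911.1 s for 0.001 s is from the text).
timings = {0.1: 40.0, 0.02: 180.0, 0.01: 350.0, 0.002: 1400.0, 0.001: 1911.1}
best = pick_sampling_period(timings)   # 0.01 under these placeholder costs
```

Under these assumed costs the rule reproduces the paper's choice of 0.01 s: 0.1 s is excluded by divergence, and 0.002 s and 0.001 s by the real-time constraint.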

Conclusions
In this paper, an optical flow sensor/SINS integrated navigation scheme for MAVs is developed to reduce the navigation error of the inertial navigation system through a decentralized EKF with fault detection and isolation capability. The EKF fuses the inertial and optical flow measurements to estimate the navigation error, and the estimate is then used to correct the SINS outputs. The results show that the integrated navigation algorithm can effectively reduce inertial navigation errors and isolate sensor failures. In future work, experiments will be conducted to test the performance of the fault-tolerant optical flow sensor/SINS integrated navigation scheme.

Figure 2: An optical flow sensor moving relative to its object plane at velocity v, at height h above the ground, and with yaw rate ω. The angle between the optical axis and the plumb line is θ, and f is the focal length of the optical flow sensor.

Figure 6: The relationship between the sensitivity and the mounting angles of the optical flow sensors.

Figure 7: The influence of the number of optical flow sensors on the estimation performance. The blue, red, yellow, purple, and green curves are the navigation errors of the SINS-only system and of the inertial navigation system aided by one, two, three, and four optical flow sensors, respectively.

Figure 8: Navigation errors of the SINS-only and optical flow sensor-aided integrated navigation systems: (a) position errors, (b) velocity errors, and (c) attitude errors.

Figure 9: The centralized EKF rapidly diverges if one optical flow sensor fails (e.g., when the output of the optical flow sensor is zero).

Figure 10: Block diagram of the decentralized filter.

Figure 11: Structure of the fault-tolerant optical flow sensor/SINS integrated navigation scheme.

Figure 16: The RMSE of V_N from 50 Monte Carlo runs when the centralized EKF is deployed for optical flow sensor/INS integrated navigation. The shorter the sampling period, the smaller the navigation error; an overly long sampling period (0.1 s) leads to filter divergence.

Figure 17: The RMSE of V_N from 50 Monte Carlo runs when the decentralized EKF with FDI capability is deployed for optical flow sensor/INS integrated navigation.
Navigation coordinate system ($S_n - o_n x_n y_n z_n$).

Table 5: Motion state of the MAV at a certain moment and its change.
Table 6: The optical flow sensor fails (or has no apparent texture on its detected plane) during 300 s-700 s, with zero output.

Table 8: Computing time under different sampling periods.