Robust Centroid Estimation Method with the Rough and Refined Estimation for Interplanetary Optical Navigation



Introduction
In March 2019, the U.S. announced the Artemis program for returning to the moon. The program aims to achieve a permanent, sustainable human presence in cislunar space and to prepare for crewed missions to Mars. As part of this effort, the program plans to land the first woman and the next man on the moon in 2024. To ensure the safety of the astronauts, crewed missions must be able to operate independently in case of a loss of communication with the ground, which means the vehicle must perform navigation functions without ground support. Therefore, Artemis I, which is uncrewed, has been designed with an optical navigation (OpNav) system. The OpNav system can execute fully autonomous navigation, allowing the crew to return safely to Earth without ground station assistance.
The OpNav system is one of the most important navigation systems currently used in deep space. It autonomously extracts OpNav observations from images taken by the onboard camera to determine the spacecraft's position. Many OpNav measurement types are commonly used for autonomous navigation, including the line-of-sight (LOS) vector, star horizon, star occultation, and apparent diameter. The idea of extracting the LOS vector is conceptually simple: if the centroid of a celestial body can be computed correctly in the image, then the LOS direction from the onboard camera to that body is also known. Consequently, the navigation performance of the OpNav system is directly influenced by the accuracy of the navigation observations. Celestial geometry model fitting is one of the fundamental image processing problems in OpNav systems, and several significant works have aimed to improve the performance of ellipse fitting. The random sample consensus (RANSAC) method has attracted the attention of many scholars because of its simple principle. Sugaya [1], Xie and Ohya [2], and Mai et al. [3] proposed two-step RANSAC algorithms: (1) divide the edge pixels into multiple groups, (2) perform ellipse fitting for each group separately, and finally select the optimal ellipse model. However, these approaches are very sensitive to noise and computationally inefficient, wasting onboard computational resources. To improve computational efficiency, Kwon et al. [4] introduced a fast RANSAC ellipse detection method that uses the geometric properties of three points. The approach solves for the elliptic parameters with normal and differential equations, which require three points together with their locations and edge angles. The problem is that the edge-angle relationship of the three points is unknown and must be computed, so the algorithm's computational complexity is not actually reduced. Fraga and Dominguez [5] presented a robust RANSAC fitting method that does not eliminate outliers: the sum of orthogonal distances (instead of the sum of squared distances) reduces the influence of noise, and differential evolution solves the resulting nonlinear problem. Nevertheless, when outliers gather near the edges of the ellipse, the fitting accuracy degrades severely. To decrease the impact of noise, Yu et al. [6] and Shao et al. [7] proposed outlier elimination algorithms for RANSAC ellipse fitting that construct a proximity matrix from the algebraic distances between edge pixels to remove outliers. However, the authors only performed outlier elimination experiments and did not test the performance of the ellipse fitting itself. For hardware acceleration, Liu and Yang [8] suggested a real-time GPU-based ellipse-fitting approach, which offers a path to implementing the RANSAC algorithm in engineering practice. The common disadvantage remains that the performance of RANSAC fitting is very sensitive to noise.
The optimization problem is an old topic, and its solutions include heuristic algorithms and clustering algorithms. Heuristic algorithms search for the global optimum based on computational experience; they include genetic algorithms [9], ant colony optimization algorithms [10][11][12], and neural network algorithms [13]. The essence of a heuristic algorithm is a greedy strategy with a stochastic process added to avoid falling into a local optimum. The clustering algorithm is an unsupervised learning method and a common data analysis technique used in many fields [10][11][12]; its idea is to gather similar elements into a class. For our application, the model of a celestial body corresponds to an individual of the algorithm, and the points used to compute the model correspond to the individual's genes. Therefore, the genetic algorithm is well suited to improving the performance of centroid estimation.
To optimize the performance of the OpNav system, we propose a high-precision centroid estimation algorithm to compute the optical observations from the original images.
Our approach includes the following innovations: (1) a fast and efficient scheme for identifying valuable contours of the celestial body based on the edge gradient; (2) a rough estimation step using RANSAC to remove outliers; (3) a refined estimation step using a hybrid genetic algorithm to improve the accuracy of centroid estimation.

The remainder of this paper is organized as follows. Section 2 introduces the fundamentals of OpNav. Section 3 describes the methods for estimating navigation observations. Section 4 presents the simulation and analysis. Finally, Section 5 gives our conclusions.

Fundamentals of OpNav
2.1. Model of the Observed Object. The mathematical model of a celestial body is a triaxial ellipsoid, so the many works that use a sphere as the model incur significant errors [14][15][16]. In this paper, the navigation object is modeled as a triaxial ellipsoid, which projects to a two-dimensional ellipse in the image plane (see Figure 1).
The general representation of the ellipse is [17]

F(x, y) = Ax² + Bxy + Cy² + Dx + Ey + F = 0,

where F(x, y) describes an ellipse if 4AC > B², (x, y) is a horizon point of the celestial body, and A, B, C, D, E, F are the coefficients of the elliptical model. In vector form this can be written as

F(x, y) = v α = 0,

where v = [x² xy y² x y 1] and α = [A B C D E F]ᵀ. When all points (x_i, y_i) lie on the ellipse, F(x_i, y_i) = 0. Thus, the fitting problem is converted to minimizing the residuals F(x_i, y_i) over the edge points. Once the optimal ellipse model is found, we can solve for the other essential parameters of the ellipse, such as the center coordinates (x_0, y_0), the semimajor axis a, the semiminor axis b, and the orientation φ.

2.2. LOS Vector Model. The optical sensors of Artemis I include two star trackers and one optical camera. The star tracker is mainly applied to determine the spacecraft's attitude and will not be discussed here; this paper focuses on processing the optical observations captured by the optical camera. Four coordinate frames are used to identify the orbit information in the OpNav system, and data can be converted between them by means of rotation matrices. We need to compute the LOS observation from the raw image to perform autonomous navigation.
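Once the conic coefficients A through F are known, the center, semi-axes, and orientation follow in closed form. The sketch below uses the standard formulas for the general conic (not code taken from the paper), valid when 4AC > B² with A, C > 0:

```python
import math

def ellipse_params(A, B, C, D, E, F):
    """Recover center, semi-axes, and orientation from conic coefficients
    A x^2 + B x y + C y^2 + D x + E y + F = 0 (a real ellipse when 4AC > B^2)."""
    den = B * B - 4.0 * A * C              # negative for a real ellipse
    x0 = (2.0 * C * D - B * E) / den       # center coordinates
    y0 = (2.0 * A * E - B * D) / den
    # common sub-expression of the closed-form semi-axis formulas
    k = 2.0 * (A * E * E + C * D * D - B * D * E + den * F)
    root = math.sqrt((A - C) ** 2 + B * B)
    a = -math.sqrt(k * (A + C + root)) / den   # semi-major axis
    b = -math.sqrt(k * (A + C - root)) / den   # semi-minor axis
    # orientation of the major axis, mod pi (assumes A, C > 0 scaling)
    phi = 0.5 * math.atan2(-B, C - A)
    return x0, y0, a, b, phi
```

For example, the conic x² + 4y² − 4 = 0 yields the axis-aligned ellipse with a = 2, b = 1 centered at the origin.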
The imaging principle of a monocular optical camera is the pinhole model: with f denoting the focal length of the camera, a point on the celestial body is projected through the pinhole onto the image plane. The LOS observation extracted from the raw image provides critical orientation information for the OpNav system, as shown in Figure 2. If the optical axis of the camera passes through the origin of the image plane, the LOS direction to an image point (x, y) can be expressed as [18]

e_k^C = [x, y, f]ᵀ / ‖[x, y, f]ᵀ‖,

where e_k^C is the LOS vector in the camera frame O_c − X_c Y_c Z_c. Then, according to the rotation relation, we can obtain the LOS direction in the inertial frame as

e_k^I = T_B^I T_C^B e_k^C,

where T_C^B is the conversion matrix from the camera frame to the body frame, and T_B^I is the conversion matrix from the body frame to the inertial frame.
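As an illustration of the pinhole LOS model, the sketch below builds a unit LOS vector in the camera frame from pixel coordinates measured relative to the principal point. The pixel-per-millimeter scale factors dx and dy are assumptions borrowed from the camera parameters quoted later in Section 4:

```python
import math

def los_unit_vector(u, v, f_mm, dx, dy):
    """Unit LOS vector in the camera frame from pixel offsets (u, v)
    relative to the principal point; dx, dy are pixel/mm scale factors."""
    x = u / dx                                   # image-plane offset in mm
    y = v / dy
    n = math.sqrt(x * x + y * y + f_mm * f_mm)   # normalizing length
    return (x / n, y / n, f_mm / n)
```

A centroid exactly on the optical axis gives the boresight direction (0, 0, 1); rotating this vector by the camera-to-body and body-to-inertial matrices yields the inertial LOS.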

Methods for Estimating Navigation Observations
In estimating the models of celestial bodies, the RANSAC method tends to fall into local optima and is very sensitive to anomalous data. The genetic algorithm is well suited to such nonlinear problems, so we propose a hybrid genetic algorithm to improve the performance of centroid estimation. The following section first describes the improved estimation method based on the hybrid genetic algorithm.

Improved Estimation Method Based on Hybrid Genetic Algorithm

In the initialization process, n models M(S_i) are considered as the initial population, M(S_i) ∈ M, i = 1, 2, ⋯, n. Here S_i is a set of m edge points randomly selected from the sample space S and used to calculate the model of the celestial body. For simplicity, the precision of an ellipse model M_i is evaluated in terms of its number of inliers N_in(M_i). Hence, we define the fitness evaluation function F as

F(M_i) = w(N_in(M_i)) + h(D(M_i, S)),

where w is the reward function evaluating the number of inliers, and h is the distance scale function assessing the distribution of outliers around the model. In the distance scale function, D(M_i, S) is the total distance error of the model M_i with respect to the points in the sample space S; its minimum value is 0, and its inflection point is set to τ_4.
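The paper does not reproduce the exact forms of w and h, so the following is a toy fitness function in their spirit: an inlier-count reward plus a distance scale term that shrinks as the total deviation D(M_i, S) grows. The tolerance and weights are illustrative assumptions, not the authors' values:

```python
def fitness(model_residuals, tol=1.0, k1=1.0, k2=0.1):
    """Toy fitness combining an inlier reward w with a distance scale h.
    model_residuals: per-point distances from sample points to the candidate
    ellipse. tol, k1, k2 are illustrative assumptions."""
    n_in = sum(1 for r in model_residuals if abs(r) <= tol)  # inlier count
    total_dist = sum(abs(r) for r in model_residuals)        # D(M_i, S)
    w = n_in / len(model_residuals)     # reward: inlier fraction in [0, 1]
    h = 1.0 / (1.0 + total_dist)        # distance scale: smaller D -> larger h
    return k1 * w + k2 * h
```

A perfect model (all residuals zero) attains the maximum value k1 + k2, which plays the role of the termination condition F*.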
If D(M_i, S) = 0, the current model M_i is evidently the best solution. When the inliers of the model M_i account for ninety-six percent of the total sample, we consider the model M_i to be very close to the true celestial body model. Hence, the termination condition F* of the genetic algorithm can be obtained by calculating the corresponding w and h.
In the selection operation, the probability of an individual being selected is proportional to its fitness. This strategy automatically favors better individuals without discarding any individual in the population. In this application, the selection probability P of individual M_i is set as

P(M_i) = (k_1 F_w(M_i) + k_2 F_h(M_i)) / Σ_j (k_1 F_w(M_j) + k_2 F_h(M_j)),

where k_1 and k_2 denote the factors of the reward function and the distance scale function, respectively, and F_w and F_h denote the values of the reward function and the distance scale function, respectively. We can control the properties of the selection operation by adjusting the two factors. The crossover of two chromosomes is performed to evolve better individuals: parental chromosomes are mated to produce two offspring chromosomes at random. We introduce a crossover factor α ∈ (0, 1) to control the crossover node of the genes. In each pairing, the parental chromosomes mate only once.
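A minimal sketch of fitness-proportional (roulette) selection and a single-point crossover controlled by the factor α; the paper's exact operators may differ in detail:

```python
import random

def roulette_select(population, fitnesses, rng=random.random):
    """Fitness-proportional selection: no individual is discarded outright;
    the chance of being picked scales with fitness."""
    total = sum(fitnesses)
    r = rng() * total
    acc = 0.0
    for ind, fit in zip(population, fitnesses):
        acc += fit
        if acc >= r:
            return ind
    return population[-1]

def crossover(parent_a, parent_b, alpha):
    """Single-point crossover; alpha in (0, 1) places the cut point along
    the chromosome (here, the list of edge points defining a model)."""
    cut = max(1, int(alpha * len(parent_a)))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]
```

With α = 0.5, two four-gene parents swap their second halves, producing two offspring that mix the parents' point sets.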

To prevent the algorithm from converging to a local optimum, we introduce an annealing mutation operator, a two-way probabilistic operation. The factor β ∈ (0, 1) controls the probability of performing the mutation operation. If a mutation is executed, we evaluate the outcome according to an improved Metropolis criterion: when F_{i+1} ≥ F_i the mutant is accepted, and when F_{i+1} < F_i it is accepted with a probability computed from the reward function. With the annealing mutation operation, the algorithm effectively avoids converging to a local optimum.
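A sketch of a Metropolis-style acceptance rule, assuming higher fitness is better and an exponential acceptance probability; the paper's improved criterion based on the reward function may differ in detail:

```python
import math
import random

def accept_mutation(f_old, f_new, temperature, rng=random.random):
    """Metropolis-style acceptance: always keep an improvement; accept a
    worse mutant with probability exp((f_new - f_old) / T), which shrinks
    as the temperature T is annealed toward zero."""
    if f_new >= f_old:
        return True
    return rng() < math.exp((f_new - f_old) / temperature)
```

Occasionally accepting a worse mutant is exactly what lets the search escape local optima early on, while a cooling temperature makes the algorithm increasingly greedy near convergence.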
These are the critical steps of the hybrid genetic algorithm; the complete procedure is shown in Figure 3. The tests for the parameters α and β are presented in Section 4.1. The following section presents the complete image processing pipeline for extracting navigation observations, which incorporates the improved hybrid genetic algorithm.

Image Processing Pipeline for Extracting Navigation Observations

This section describes an image processing pipeline for estimating the centroids of celestial bodies from single-channel grayscale images. In this application, images of the moon captured by the onboard optical camera are the processing targets.
In the first step, we set an adaptive threshold τ_1 to binarize the images. Since the background (deep space) is much darker than the moon, the grayscale distribution of the foreground (the moon) can be easily separated. We determine the adaptive threshold τ_1 by maximizing the between-class variance of the background and foreground (see Figure 4).

In the second step, we apply open (○) and close (•) operations to exclude interference objects generated by high contrast, such as background stars, inconsistent lighting conditions, and irregular textures. This procedure helps the algorithm find the target quickly and reduces the computational effort. Different thresholds have a severe impact on the performance of the operation: the threshold τ_2 mainly affects the removal of star contours, while the threshold τ_3 chiefly controls the removal of irregular textures from the lunar surface. After many tests (presented in Section 4.2), the two thresholds are set to τ_2 = I_4×4 and τ_3 = I_8×8 to cancel the interference areas (see Figure 5). The operation effectively suppresses the false contours without distorting the lunar outline. The next step removes the remaining, larger false contours.
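Choosing τ_1 by maximizing the between-class variance of background and foreground is Otsu's method. A self-contained, histogram-based sketch:

```python
def otsu_threshold(hist):
    """Adaptive binarization threshold tau_1 that maximizes the
    between-class variance of a grayscale histogram (Otsu's method).
    hist[i] is the number of pixels with gray level i."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = 0.0       # background pixel count so far
    sum_b = 0.0     # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                    # background mean
        m_f = (sum_all - sum_b) / w_f        # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a strongly bimodal histogram (dark deep-space background, bright lunar disk), the returned threshold falls between the two modes, so thresholding cleanly separates the moon from the sky.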
In the third step, we locate all the edges in the grayscale image and find the outer contour of the observed object. Based on the grayscale distribution, the Canny algorithm is applied to calculate the gradient (magnitude and direction) and detect the lunar margins. Assuming the optical camera is aimed at the moon, we can identify the planetary contour by calculating the enclosed area. The performance of the edge detection operation is shown in Figure 6.
In the fourth step, we divide the moon's contour into k feature curves to identify the valuable profiles. Given the lighting conditions of the observed object, we locate the valuable arc by setting k = 2, as shown by the blue and red curves in Figure 7. To discover the valuable edge (the blue curve), we develop a quick and efficient procedure: (1) find the two points of steepest gradient variation on both sides, p_1 and p_2, such that the segment p_1 p_2 has maximum length; (2) split the moon's contour into two parts (the blue and red arcs); (3) compute multiple sets of distances (d_1 and d_2) to determine the valuable margin. Finally, we output the (x, y) image coordinates of the blue arc, which are used to estimate the ellipse model. Compared with the area-cutting method of [15], our method has lower computational complexity and higher accuracy.
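Step (1) above, finding the endpoint pair p_1, p_2 with maximum separation, can be sketched by brute force, which is adequate for a single lunar contour (the gradient test on the candidates is omitted here):

```python
def farthest_pair(points):
    """Return the pair of contour points with maximum separation, used as
    the split points p1, p2 of the lunar contour (O(n^2) brute force)."""
    best = (points[0], points[1])
    best_d2 = -1.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2   # squared distance
            if d2 > best_d2:
                best_d2, best = d2, (points[i], points[j])
    return best
```

Splitting the contour at p_1 and p_2 then yields the two arcs, from which the sunlit (valuable) one is selected by the distance tests d_1 and d_2.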
In the fifth step, rough estimation is performed using the RANSAC method to remove the outliers. In OpNav tasks, the raw image can easily be contaminated by various noises, for example from platform vibrations or signal interference, and noise near the celestial limb cannot be eliminated by edge detection alone. Therefore, we perform a rough estimation with RANSAC to eliminate the spurious edge points. We then set a threshold Θ to control the performance of outlier removal. For a smaller Θ, the sample tends to form strong connections and the outliers are easily ignored; for a larger Θ, the inliers are easily misjudged as outliers. Hence, we performed six comparison trials to find the optimal value Θ* (see Figure 8). In the tests, we vary the threshold Θ while keeping the other experimental conditions constant. The best outlier rejection performance is achieved at Θ* = 3.
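The rough-estimation step can be sketched as a generic RANSAC skeleton with pluggable fit and residual functions; the demonstration in the test uses a simple line model rather than the paper's ellipse model, and the hypothesis count is an illustrative assumption:

```python
import random

def ransac(points, fit, residual, min_pts, theta, iters=200,
           rng=random.Random(0)):
    """Generic RANSAC: repeatedly fit a model to a minimal random subset
    and keep the model with the most points whose residual is below the
    outlier threshold theta (the paper's tests select Theta* = 3)."""
    best_model, best_inliers = None, []
    for _ in range(iters):
        sample = rng.sample(points, min_pts)
        model = fit(sample)
        if model is None:          # degenerate sample, skip
            continue
        inliers = [p for p in points if residual(model, p) < theta]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

For the real pipeline, `fit` would solve the conic coefficients from a minimal set of edge points and `residual` would measure each point's distance to the candidate ellipse; the surviving inliers are then handed to the refined estimation.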
In the sixth step, refined estimation is performed using the improved hybrid genetic algorithm to extract the moon's centroid. The maximum number of iterations is set to

N = log(1 − p) / log(1 − ω^m),

where p is the probability that the model meets the requirements, ω is the probability that an inlier is selected from the sample, and m is the number of points in a minimal sample. After the iterations terminate, Equation (7) is used to calculate the LOS observation for optical navigation from the solved optimal model and the navigation camera's parameters.
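This iteration bound is the standard RANSAC sample-count formula; a one-line sketch:

```python
import math

def max_iterations(p, omega, m):
    """Number of iterations N needed so that, with probability p, at least
    one size-m sample consists entirely of inliers, when each point is an
    inlier with probability omega."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - omega ** m))
```

For instance, with p = 0.99, ω = 0.5, and m = 2, seventeen iterations suffice; the bound grows quickly as ω falls or m rises, which is why minimal sample sizes are preferred.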
To exhibit the performance of the image processing pipeline, the complete flow of our method is illustrated with a very dark moon scene, as shown in Figure 9.

Simulation and Analysis
4.1. α, β Test. Since the parameters α and β have significant effects on the crossover and annealing mutation operations, respectively, we perform this test to determine their optimal values. First, the inlier function G under parameters α, β is defined as

G(α, β) = N_in^t(M_{α,β}),

where M_{α,β} denotes the model computed under parameters α and β, N_in^t(M_{α,β}) represents the number of inliers of the model, and t is the iteration index. Then, the average performance curve L is defined as

L_t(α, β) = (1/n) Σ_{i=1}^{n} G_i(α, β),

where n is the size of the population, i.e., the n models in each iteration, so that L_t(α, β) denotes the average performance of the n models in iteration t. Both parameters must be considered jointly because they are coupled. Finally, the relative goodness curve L is defined to normalize the performance graph:

L = (L_t − min_{∀α′,β′} G(α′, β′)) / (max_{∀α′,β′} G(α′, β′) − min_{∀α′,β′} G(α′, β′)),

where min_{∀α′,β′} G(α′, β′) and max_{∀α′,β′} G(α′, β′) are the minimum and maximum numbers of inliers over the historical iterations. In the test, we set the parameter range to α, β ∈ (0, 0.95]. As shown in Figure 10, the value of L rises from 0.3824 at the 15th iteration to a maximum of 0.6384. Consequently, the optimal parameters are found to be α = 0.55 and β = 0.15; with these values, the hybrid genetic algorithm preferentially evolves outstanding individuals.
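The relative goodness curve is a min-max normalization of the inlier-count history; a short sketch:

```python
def relative_goodness(history):
    """Min-max normalize the inlier-count history G into [0, 1], producing
    the relative goodness curve L used to compare parameter settings."""
    lo, hi = min(history), max(history)
    if hi == lo:                         # flat history: no spread to scale
        return [0.0 for _ in history]
    return [(g - lo) / (hi - lo) for g in history]
```

Because the scale is shared across all (α, β) settings, curves for different parameter pairs become directly comparable on the same [0, 1] axis.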

4.2. τ_2, τ_3 Test. In the open and close operations, appropriate thresholds τ_2 and τ_3 can effectively remove interference contours without destroying the celestial limb. Therefore, we build a dataset of 30 lunar images to find the optimal thresholds. The dataset contains various scenarios, such as lunar images with irregular surface textures and with many stars in the field of view. As noted above, the threshold τ_2 mainly affects the removal of stars, and the threshold τ_3 chiefly controls the elimination of irregular textures from the lunar surface, so two separate experiments are needed to determine them. Figure 11 shows two lunar images randomly selected from the dataset for testing.
First, experiments are conducted to determine the optimal threshold τ_2. The stars are very far from the spacecraft, so they appear very small in the images. Thus, we test three relatively small thresholds, τ_2 = I_2×2, τ_2 = I_4×4, and τ_2 = I_6×6, for removing the stellar contours. The results are shown in Figure 12. When the threshold τ_2 is too small, the operation cannot eliminate all the stellar contours; when it is too large, the operation damages the moon's limb. The optimal threshold is therefore τ_2 = I_4×4, which allows the algorithm to efficiently cancel the stellar contours from the field of view without disrupting the moon's limb.
Next, we carry out tests to identify the optimal threshold τ_3. Because of the relative motion between the camera and the moon, the textures of the lunar surface in the images are irregular. We need to choose the threshold τ_3 according to the size of the craters on the lunar surface. In this application, the thresholds τ_3 = I_6×6, τ_3 = I_8×8, and τ_3 = I_16×16 are tested (see Figure 13). The trials show that it is undesirable to sharply increase τ_3 to eliminate irregular textures, as the moon's limb would be destroyed. To protect the limb of the navigation object, we assign the optimal threshold τ_3 = I_8×8.

4.3. Centroid Estimation Test. To test our method, we conduct a series of experiments on a dataset produced with the open-source space simulation software Celestia v1.5.3. The dataset consists of 100 synthetic 784 × 541 grayscale lunar images. The camera's distance from the observed object varies from 20,000 km to 330,000 km, and by varying the lighting conditions and observation locations the dataset covers a range of experimental scenarios, from a bright full moon to a dim partial moon (see Figure 14).
Our method processes the 100 lunar images in the dataset to evaluate the accuracy of the centroid extraction. For each image, the computation is repeated 100 times to gather the mean error and standard deviation of the centroid estimate. Qualitative results for some samples are shown in Figure 15. However, the pixel error of centroid extraction does not directly express the accuracy of optical navigation; we must compute the LOS direction used for OpNav according to Equation (7). In this application, we assume the camera parameters are focal length f = 200 mm, field of view ϑ = 20°, and scale factors dx = 78.4 pixel/mm and dy = 67.8 pixel/mm. The quantitative statistics for some samples are given in Table 1. The LOS pointing accuracy indicates that our method can meet the needs of optical navigation in deep space.
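Using the quoted camera parameters, the small-angle LOS error produced by a centroid error of a few pixels can be estimated with a back-of-the-envelope sketch (not the paper's exact error model):

```python
import math

def pixel_to_angle_error(du_px, dv_px, f_mm=200.0, dx=78.4, dy=67.8):
    """Small-angle LOS error (radians) caused by a centroid error of
    (du, dv) pixels, using the camera parameters quoted in the text:
    f = 200 mm, dx = 78.4 pixel/mm, dy = 67.8 pixel/mm."""
    ex = (du_px / dx) / f_mm   # image-plane offset in mm over focal length
    ey = (dv_px / dy) / f_mm
    return math.hypot(ex, ey)
```

A one-pixel centroid error along x corresponds to roughly 6.4e-5 rad (about 13 arcseconds), which gives a feel for how tightly centroid accuracy bounds LOS pointing accuracy.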
This section tests the image processing performance of the proposed method on real image data. In the test, the experimental objects are lunar images taken from the International Space Station (ISS), orbiting about 420 km above the Earth. It should be noted that the presented approach can be applied to other ellipsoidal celestial bodies, since the algorithm proposed in this paper considers a general ellipsoid as its mathematical model. To enrich the diversity of the samples, the experiment is executed on three types of lunar images: full moon, half moon, and crescent (Figure 16). Our method still estimates the lunar model accurately on real data.
To test the accuracy and robustness of our method, we add disturbances to the original images, including Gaussian noise, transmission interference, and motion blur (see Figure 17). These disturbances are designed to reproduce realistic scenarios that spacecraft may encounter in deep space, such as cosmic rays, channel noise, and geometric distortion. Our method is executed on these noisy images to estimate the lunar model, as shown in Figure 18.
In this section, we compare the performance of our method and the least squares (LS) method for estimating the model of the navigation object. The two methods each perform three experiments on the noisy dataset. For each noisy image, both methods are run 50 times to collect the mean error and standard deviation of the centroid estimate (see Figures 19-21). The Gaussian noise is introduced to mimic disturbances in the image collection process, such as channel noise and camera noise; we control its intensity by varying the standard deviation from 0 to 1, and Figure 19 displays the results of the methods on these noisy images. When the noise intensity of the transmission interference is low, the two methods perform similarly; however, as the noise intensity increases, our method exhibits better robustness than the LS method. Among the three disturbances, motion blur degrades the performance of both methods the most. Deep space also contains many cosmic rays, which are high-energy electromagnetic radiation from the depths of the universe. We simulate the effect of cosmic ray interference by artificially corrupting the images with different levels of interference textures (see Figure 20).
During the maneuvering process, geometric distortion can seriously degrade the imaging quality of the optical camera.

This section quantifies the time spent on the critical steps of our approach. The experiments are conducted on a Windows platform with a six-core processor and 8 GB of RAM. Over 50 runs, Table 2 shows that the most time-consuming step is the refined estimation, which accounts for almost 56.81% of the total time (753.48 ± 45.32 ms overall): the rough estimation takes about 216.55 ms, and the refined estimation about 428.05 ms. The algorithm would run faster on more capable hardware.

Conclusions
In this paper, we propose a robust image processing pipeline to estimate the models of celestial bodies. We improve the accuracy of centroid extraction through combined rough and refined estimation operations. Through repeated trials, we analyse the accuracy of pixel-level centroid extraction and of the LOS direction under different conditions. In addition, we assess the robustness of our method to various perturbations, presenting the outcomes of processing numerous images corrupted by Gaussian noise, data corruption, and motion blur. We conclude that the proposed method is a simple and efficient way to extract optical observations.

Figure 1: Optical imaging process and elliptical model in the image plane. (a) The imaging process from an inertial perspective. (b) The imaging process from the camera's viewpoint.

Figure 5: Output of the open and close operations under different scenarios, with thresholds τ_2 = I_4×4 and τ_3 = I_8×8. (a) The moon's edge is not affected. (b) The contours of stars and irregular textures are removed. (c) Relatively small false contours are canceled.

Figure 7: Schematic illustration for identifying the moon's outer contour. We set k = 2, implying the moon's contour is divided into orange and blue curves. The blue arc is the valuable edge that we are looking for.

Figure 3: The complete procedure of the refined estimation. Input and output are marked with orange and green arrows, respectively.

Figure 9: Complete image processing routine. (a) The full moon is close to the Earth and very dim. (b) Binarizing the image with the adaptive threshold τ_1. (c) Open (○) and close (•) operations for removing interference areas. (d) Detecting all edges and identifying the moon's contour. (e) Searching for the valuable part of the moon's contour (blue arc) based on gradient and distance information. (f) Rough estimation using RANSAC for removing outliers (red points). (g) Refined estimation with the hybrid genetic algorithm.

Figure 10: Output of the relative goodness L. (a) The value is L = 0.3824 after 15 iterations. (b) The maximum value is L_max = 0.6384 after numerous iterations.

Figure 11: (a, b) Two lunar images in the dataset.

Figure 16: Experiments on lunar images taken from the ISS. (a) Full-moon image captured on March 8, 2015. (b) Half-moon image captured on January 29, 2019. (c) Crescent image captured on January 2, 2020.

Figure 17: Planetary image affected by various noises.

Figure 19: (a) Output estimation for a lunar image disturbed by additive Gaussian noise; our estimation is in red and the LS estimation in cyan. (b) Estimation errors for increasing standard deviation of the Gaussian noise.

Figure 20: (a) Output estimation for a lunar image disturbed by data transmission noise; our estimation is in red and the LS estimation in cyan. (b) Estimation errors for an increasing percentage of transmission interference.

Figure 21: (a) Output estimation for a lunar image disturbed by motion blur; our estimation is in red and the LS estimation in cyan. (b) Estimation errors for increasing pixels of motion blur.

Table 1: The mean error and standard deviation of the centroid extraction for samples in the dataset, and the corresponding LOS direction errors.