Stereo-Vision-Based Relative Pose Estimation for the Rendezvous and Docking of Noncooperative Satellites

Autonomous on-orbit servicing is expected to play an important role in future space activities. Acquiring the relative pose information and inertial parameters of the target is one of the key technologies for autonomous capture. In this paper, an estimation method of the relative pose based on stereo vision is presented for the final phase of the rendezvous and docking of noncooperative satellites. The proposed estimation method utilizes a sparse stereo vision algorithm instead of a dense stereo algorithm. The method consists of three parts: (1) body frame reestablishment, which establishes the body-fixed frame for the target satellite using the natural features on its surface and measures the relative attitude based on TRIAD and QUEST; (2) translational parameter estimation, which designs a standard Kalman filter to estimate the translational states and the location of the mass center; (3) rotational parameter estimation, which designs an extended Kalman filter and an unscented Kalman filter, respectively, to estimate the rotational states and all the moment-of-inertia ratios. Compared to the dense stereo algorithm, the proposed method can avoid degeneracy when the target has a high degree of axial symmetry and reduces the number of sensors. The validity of the proposed method is verified by numerical simulations.


Introduction
Autonomous on-orbit servicing is expected to be one of the most challenging and exciting space activities in the future [1, 2]. In the past, many expensive satellites went out of service in orbit due to various failures, such as undeployed solar panels and gyro malfunctions. In most of these cases, however, the other parts of the satellites were still functional [3, 4]. On-orbit servicing, which covers repair, refueling, orbit maintenance, and reorbiting, can extend the life of malfunctioning satellites and save a large amount of expense.
Therefore, more and more attention has been paid to the technologies of autonomous on-orbit servicing. Several on-orbit servicing projects, including demonstration experiments and conceptual servicing systems, have been carried out [5, 6].
As malfunctioning satellites are generally noncooperative targets tumbling in space, with no equipment that can be used for relative pose measurement [7], it is necessary to develop methods of relative pose measurement that do not rely on the cooperation of the target satellite. For example, the purpose of the Spacecraft for the Universal Modification of Orbits (SUMO) program is to demonstrate the integration of machine vision, robotics, mechanisms, and autonomous control algorithms to accomplish autonomous rendezvous and grapple of a variety of interfaces traceable to future spacecraft servicing operations [8]. The main concept of the SUMO program is the capture of an unaided target satellite; that is, the target satellite carries no special facilities, such as grapple fixtures or reflectors, compatible with SUMO. SUMO was later renamed the Front-End Robotics Enabling Near-term Demonstration (FREND) to develop a series of key components, including a seven-degree-of-freedom (DOF) flight robotic arm which was demonstrated in a test bed with a stereo photogrammetric imaging system [9]. Similarly, the DEOS (Deutsche Orbitale Servicing Mission) project focuses on finding and evaluating the procedures and techniques for rendezvous, capture, and deorbiting of a noncooperative target satellite [10]. In the automatic mode, a number of images are taken and dumped to the ground segment, and the target motion is then estimated by the control software in the ground station.
The relative pose information of the two involved satellites is of vital importance for the end-effector to successfully grasp the tumbling satellite. If the dynamic model of the target is known, a filter can be designed to suppress the measurement noise and estimate the states which cannot be measured directly. Furthermore, the dynamic model can also be used to predict the motion of the target satellite, and optimal path planning can then be conducted to guide a robotic manipulator to capture the tumbling target at a rendezvous point with the same velocity [11]. Vision systems are more and more commonly utilized for relative pose measurements in rendezvous missions due to the low cost, low mass, and low energy requirements of CCD/CMOS cameras [12, 13]. Nagamatsu et al. proposed a 12th-order extended Kalman filter to estimate the position and attitude of a torque-free target satellite by using a simplified dynamical model [14]. NASA investigated the feasibility of robotic autonomous servicing of the Hubble Space Telescope (HST); a nonlinear estimator was developed to estimate the HST body rates by using vision-based sensors which can output the relative quaternion [15, 16]. NASA/GSFC went on to develop the Goddard Natural Feature Image Recognition (GNFIR) algorithm, and 6-DOF pose estimation was demonstrated in the Robotic Refueling Mission in order to augment the traditional overlays [17]. Tweddle described a new approach to solving a SLAM problem for unknown and uncooperative objects that are spinning about an arbitrary axis; an experimental test bed known as "Goggles," with computer-vision-based navigation capability provided by stereo cameras, was used to test the algorithm [18]. Lichter and Dubowsky proposed an important and effective architecture for estimating the dynamic states, geometric shape, and model parameters of objects in orbit [19]. In that work, a team of cooperating 3D vision sensors captures sequences of range images to determine rough target poses before filtering. The relative positions and orientations between sensors mounted on different satellites are required to be known with high accuracy, which is perhaps a cost and operational burden for on-orbit servicing. A model-based pose refinement algorithm was proposed to perform the relative pose estimation of a floating object from the visual information of a stereo-vision system [20]. Du et al. proposed a method based on two collaborative cameras sharing the recognition task to determine the pose of a large noncooperative target [21], and details of the image processing were also discussed.
In this paper, the problem of relative pose estimation for the final phase of the rendezvous and docking of noncooperative satellites is investigated. Instead of the dense stereo algorithm presented in the literature [17], the relative pose estimation method here is based on the sparse stereo algorithm. The body-fixed coordinate system of the target satellite is reestablished by utilizing the natural features on the target surface. Two relative attitude measurement methods, based on TRIAD and QUEST, are presented. Then, a standard Kalman filter is designed to estimate the translational states and the location of the mass center. An extended Kalman filter (EKF) and an unscented Kalman filter (UKF) are designed, respectively, to estimate the rotational states and the moment-of-inertia ratios, including the ratios of the products of inertia, and their performances are compared with each other.
This paper is structured as follows. Section 2 defines the coordinate systems and transformation matrices used in the relative pose estimation. Section 3 presents two algorithms for relative attitude measurement, including the scheme for updating the target feature coordinate system when the target is tumbling. An approach based on a Kalman filter to estimate the location of the mass center and the translation parameters is proposed in Section 4. In Section 5, the EKF and UKF are designed, respectively, to estimate the moment-of-inertia ratios and orientation parameters. Numerical simulation is conducted to verify the proposed algorithms, and the results are discussed, in Section 6. The last section contains conclusions.

Coordinate Systems and Transformation Matrices
As various physical quantities are usually defined in different coordinate systems, the coordinate systems and coordinate transformation matrices are defined here to illustrate the relative geometrical relationship between the target and the chaser. The relative geometrical relationship and the coordinate systems are illustrated in Figure 1.

Inertial Coordinate System (Σ_i). The origin of the inertial coordinate system O_i-x_i y_i z_i is centered on the Earth, the z_i axis is aligned with the Earth's rotation axis, and the x_i axis points toward the vernal equinox. The y_i axis completes the right-handed orthogonal coordinate system. The attitudes of the target satellite and the service satellite can be described as rotations from the inertial frame to the respective body frames.

Local Orbital Coordinate Systems (Σ_o and Σ_o′). The origin of the local orbital coordinate system O_o-x_o y_o z_o (Σ_o) lies at the mass center of the service satellite. The x_o axis points away from the Earth's center, the y_o axis is in the flight direction, and the z_o axis, which completes the right-handed orthogonal coordinate system, is in the direction of the orbital angular momentum.
Similarly, the local orbital coordinate system Σ_o′ is attached to the target satellite. The rotation matrix between the two orbital frames is denoted C_oo′. As the two satellites are almost in the same orbit and the service satellite is very close to the target, C_oo′ is nearly a 3 × 3 identity matrix. For instance, if the target is the leading satellite in a circular orbit of 700 km altitude and the chaser is 100 m behind it in the same orbit, the rotation angle between the two orbital frames is only about 3 arcseconds (roughly 1.4 × 10⁻⁵ rad).
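As a quick sanity check of this figure, the central angle subtended by a 100 m along-track separation at 700 km altitude can be computed directly (a minimal sketch; the Earth radius value used here is an assumption):

```python
import math

R_EARTH_KM = 6378.0        # mean equatorial radius (assumed value)
altitude_km = 700.0
separation_m = 100.0

# The local orbital frames of two satellites on the same circular orbit
# differ by the central angle subtended by their along-track separation.
orbit_radius_m = (R_EARTH_KM + altitude_km) * 1e3
angle_rad = separation_m / orbit_radius_m
angle_arcsec = math.degrees(angle_rad) * 3600.0

print(f"rotation between orbital frames: {angle_arcsec:.1f} arcsec")
```

This yields roughly 2.9 arcseconds, consistent with the approximation above.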

Chaser Body-Fixed Coordinate System (Σ_s). The origin of the chaser body-fixed coordinate system O_s-x_s y_s z_s lies at the mass center of the service satellite, and its three body axes of symmetry are taken as the coordinate axes x_s, y_s, and z_s. The orientation of Σ_s relative to Σ_i or Σ_o is determined by the attitude and orbit determination system, which is equipped with gyros, a star sensor, a GNSS receiver, and so on. The coordinate transformation matrix from the inertial coordinate system Σ_i to Σ_s is denoted C_si, and the coordinate transformation matrix from Σ_o to Σ_s is denoted C_so.

Stereo-Vision-Fixed Coordinate System (Σ_c). The origin of the stereo-vision-fixed coordinate system O_c-x_c y_c z_c lies in the stereo vision system. The z_c axis is parallel to the optical axes of the cameras, the x_c axis is perpendicular to the optical axis, and the direction of the y_c axis obeys the right-hand rule. The measurement information (the coordinates of feature points) of the stereo vision system is described in this coordinate system. The rotation matrix C_sc and the translation vector p_sc from Σ_c to Σ_s can be acquired by calibration on the ground. Therefore, the relationship between the coordinates of a feature point in Σ_s and Σ_c can be described as

r_s = C_sc r_c + p_sc,    (1)

where r_c and r_s are the coordinates of the point in Σ_c and Σ_s, respectively.

Target Feature Coordinate System (Σ_f). The natural features are usually on the frameworks of the antenna backboard, the solar panels, and so on. The frame Σ_f, built from such features, is applied to determine the relative pose between the target and chaser satellites. The methods to establish the frame Σ_f are described in Section 3.

Target Body-Fixed Coordinate System (Σ_t). The origin of the target body-fixed coordinate system O_t-x_t y_t z_t lies at the mass center of the target satellite, and the three coordinate axes x_t, y_t, and z_t are set parallel to the axes x_f, y_f, and z_f of the feature frame, respectively. Obviously, the axes x_t, y_t, and z_t are probably not aligned with the target's three body axes of symmetry.

Target Nominal Body-Fixed Coordinate System (Σ_r). The origin of the target nominal body-fixed coordinate system O_r-x_r y_r z_r lies at the approximate mass center of the target satellite, and the three coordinate axes x_r, y_r, and z_r are parallel to x_t, y_t, and z_t, respectively. The approximate location of the mass center is estimated by ground staff using a teleoperation loop or estimated automatically by the service spacecraft. For example, the approximate mass center of the target satellite can be assumed to lie at the target centroid. Then the vector from the feature-frame origin O_f to O_r is known, and the offset from O_r to the true mass center O_t, denoted b, is to be estimated.

Algorithms of Relative Attitude Measurement
Natural feature points are utilized to establish the target feature coordinate system Σ_f. As image processing, including image filtering, edge detection, feature point recognition, and matching, is discussed in [19], it is assumed in this paper that the coordinates of the natural feature points in the frame Σ_c are outputted by the stereo vision system.

Attitude Determination Based on TRIAD.
Several feature points are tracked by the stereo vision system, as illustrated in Figure 1. Three points P1, P2, and P3 are selected, according to the geometry and imaging quality of the natural feature points, to establish the frame Σ_f by the right-hand rule. P1 is the origin of Σ_f, the x_f axis is in the direction of the vector P1P2, and the z_f axis is perpendicular to the plane containing the three points. The direction of the y_f axis obeys the right-hand rule. The rotation matrix C_cf from the frame Σ_f to Σ_c can then be determined as a direction cosine matrix:

x_f = P1P2 / |P1P2|,
z_f = (P1P2 × P1P3) / |P1P2 × P1P3|,    (2)
y_f = z_f × x_f,
C_cf = [x_f  y_f  z_f],

where |·| denotes the vector magnitude and a × b the vector cross product. The error of TRIAD can be described as a rotation angle about the Euler axis; it depends on the angle θ between P1P2 and P1P3 and is minimized when θ is 90°. The pointing errors ε1 and ε2 of P1P2 and P1P3, respectively, are reduced as the line segments are lengthened. It can therefore be concluded that, in practice, the three points that span the largest area should be selected to obtain the best accuracy.
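The frame construction above can be sketched as follows (a minimal illustration, not the paper's implementation; the function name and the sample point coordinates are hypothetical):

```python
import numpy as np

def feature_frame(p1, p2, p3):
    """Build the target feature frame from three feature points:
    p1 is the origin, x is along p1->p2, z is normal to the plane of
    the three points, and y completes the right-handed triad.
    Returns C_cf, whose columns are the feature-frame axes expressed
    in the camera frame (so it maps feature-frame coordinates into
    camera coordinates)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x = (p2 - p1) / np.linalg.norm(p2 - p1)
    z = np.cross(p2 - p1, p3 - p1)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack((x, y, z))

# three hypothetical feature points expressed in the camera frame
C_cf = feature_frame([0, 0, 2], [1, 0, 2], [0, 1, 2])
```

For these sample points the feature axes coincide with the camera axes, so C_cf is the identity; in general it is an arbitrary proper rotation.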

Attitude Determination Based on QUEST.
As there are often more than three feature points available in the view of the stereo vision system, it is better to utilize all available points (their number is denoted as m + 1) to improve the accuracy of the relative attitude determination. The coordinates of all feature points except P1 can then be calculated in the frame Σ_f by projection onto the frame axes:

X_f,i = [P1Pi · x_f, P1Pi · y_f, P1Pi · z_f]^T,  i = 2, ..., m + 1,    (4)

where a · b is the vector inner product. As the measuring is repeated, an averaging process is utilized, and more accurate coordinates X̄_f,i are determined in the frame Σ_f:

X̄_f,i = (1/N) Σ_{j=1}^{N} X_f,i,j,    (5)

where N is the number of measurements. A set of observation unit vectors W_1, ..., W_m is defined from the measured directions in Σ_c, and a set of reference unit vectors V_1, ..., V_m is defined from the averaged coordinates in Σ_f. According to the QUEST algorithm, an attitude matrix C_cf can be found that minimizes the loss function

J(C_cf) = (1/2) Σ_{i=1}^{m} a_i |W_i − C_cf V_i|²,    (8)

where a_i is the weight of the ith vector pair. The algorithm for the optimal processing of (8) is presented in the literature [22].
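QUEST minimizes the Wahba loss (8) via a quaternion eigenvalue problem. As an illustrative sketch, the same loss can be minimized with the well-known SVD solution to Wahba's problem (a different solver than QUEST; the function name and test vectors are hypothetical):

```python
import numpy as np

def wahba_svd(W, V, weights=None):
    """Attitude matrix minimizing sum_i a_i |W_i - C V_i|^2.

    W, V: (m, 3) arrays of observation / reference unit vectors.
    Uses the SVD solution; QUEST (used in the paper) minimizes the
    same Wahba loss through a quaternion eigenvalue problem."""
    W, V = np.asarray(W, float), np.asarray(V, float)
    a = np.ones(len(W)) if weights is None else np.asarray(weights, float)
    B = (a[:, None] * W).T @ V                       # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt                                # proper rotation
```

With noiseless vector pairs the true rotation is recovered exactly, which makes the routine easy to verify against synthetic data.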

Strategy for Updating the Target Feature Coordinate System.
As the target satellite is tumbling in space, some feature points will be lost and new feature points will appear in the view of the stereo vision system. A new target feature coordinate system Σ_f,1 is established dynamically according to the measurement requirements. To keep the pose estimation continuous, it is necessary to determine the offset and orientation between the frames Σ_f,1 and Σ_f,0 (Σ_f,0 is the original target feature coordinate system, and the rotation angle between the frame Σ_f,0 and Σ_f is 0).
According to (4) and (5), accurate coordinates of the feature points in Σ_f,0 are obtained. Three new feature points P_a, P_b, and P_c are selected from the feature point set to establish Σ_f,1, with P_a as the new origin. The offset of the origin of the frame Σ_f,1 in Σ_f,0 is (x̂, ŷ, ẑ), denoted p_f,1,0. Similar to (2), the orientation of the frame Σ_f,1 relative to Σ_f,0 is described by a rotation matrix C_f,1,0. In general, the relationship between the coordinates of a feature point in Σ_f,k and Σ_f,k−1 can be described as

r_f,k−1 = C_f,k,k−1 r_f,k + p_f,k,k−1,    (11)

where k counts the successively established target feature coordinate systems, r_f,k and r_f,k−1 are the coordinates of the point in Σ_f,k and Σ_f,k−1, and p_f,k,k−1 is the translation vector between Σ_f,k−1 and Σ_f,k. As the rotation matrix C_f,k,0 and the translation vector p_f,k,0 between the frames Σ_f,k and Σ_f,0 can be derived by composing (11), the coordinates of any natural feature point in the frame Σ_f,0 can be obtained even if the feature points that were used to establish the frame Σ_f,0 are lost from view.
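The bookkeeping of successive feature frames amounts to composing rigid transforms back to Σ_f,0. A minimal sketch (the function name is hypothetical; the pose convention r_{k−1} = C r_k + p follows the description above):

```python
import numpy as np

def compose_chain(poses):
    """Compose a chain of frame updates.

    poses: list of (C, p) pairs, where frame k relates to frame k-1 by
    r_{k-1} = C r_k + p.  Returns (C_tot, p_tot) mapping the newest
    frame's coordinates back to frame 0: r_0 = C_tot r_k + p_tot."""
    C_tot, p_tot = np.eye(3), np.zeros(3)
    for C, p in poses:
        # accumulate: r_0 = C_tot (C r_k + p) + p_tot
        p_tot = C_tot @ p + p_tot
        C_tot = C_tot @ C
    return C_tot, p_tot
```

Composing two frame updates of a 90-degree rotation plus a unit translation, for example, yields the expected 180-degree rotation and the translation expressed in frame 0.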

Estimation of the Location of Mass Center and Translation Parameters
In order to estimate the location of the mass center of the target satellite and the relative position and velocity between the chaser and the target, a standard Kalman filter is designed to process the coordinates of the feature points, with the CW equations as the dynamic model.

CW Equations.
When the two satellites are nearly in the same circular orbit at close range, the CW (Clohessy-Wiltshire) equations describe the relative translational dynamics well. Expressed in the target orbital frame Σ_o′, with x radial (away from the Earth), y along-track, and z cross-track, they take the standard form

ẍ − 2nẏ − 3n²x = f_x,
ÿ + 2nẋ = f_y,    (12)
z̈ + n²z = f_z,

where n is the orbital mean motion, the coupling terms are due to the effect of orbital mechanics, and f = [f_x, f_y, f_z]^T is the disturbance force per unit mass. ρ = [x, y, z]^T is the displacement vector from the target mass center to the chaser mass center, expressed in the target orbital frame Σ_o′, and ρ̇ is the relative velocity between target and chaser.
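The CW dynamics can be propagated numerically as a sketch (the paper's filter would use the closed-form state transition matrix instead; the RK4 integrator and the mean-motion value here are illustrative assumptions):

```python
import numpy as np

def cw_derivative(state, n, f=np.zeros(3)):
    """CW relative dynamics: x radial, y along-track, z cross-track.
    state = [x, y, z, vx, vy, vz]; n = orbital mean motion (rad/s)."""
    x, y, z, vx, vy, vz = state
    ax = 3.0 * n**2 * x + 2.0 * n * vy + f[0]
    ay = -2.0 * n * vx + f[1]
    az = -n**2 * z + f[2]
    return np.array([vx, vy, vz, ax, ay, az])

def propagate(state, n, dt, steps):
    """Fixed-step RK4 propagation of the CW equations."""
    for _ in range(steps):
        k1 = cw_derivative(state, n)
        k2 = cw_derivative(state + 0.5 * dt * k1, n)
        k3 = cw_derivative(state + 0.5 * dt * k2, n)
        k4 = cw_derivative(state + dt * k3, n)
        state = state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return state
```

A quick check: a pure cross-track offset z(t) = z₀ cos(nt) oscillates harmonically, so after half a period it should return negated.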

Measurement Model.
The estimation of the relative position and velocity is also the process of locating the target mass center. When a series of feature points is continuously tracked, their coordinates are outputted in the frame Σ_c. The relative attitude between the target satellite and the stereo vision system is determined according to the methods in Section 3. According to the geometrical relationship, the displacement vector ρ is described as

ρ = C_o′t (a_i − b) − C_o′s r_s,i,    (13)

where r_s,i is the coordinate vector of the ith natural feature point in the frame Σ_s, b is the location error of the mass center, illustrated as the vector from O_r to O_t in Figure 1, a_i is the vector from the origin of the frame Σ_r to the ith feature point, expressed in the frame Σ_r, and C_o′t is the rotation matrix from the frame Σ_t to Σ_o′. As the target is a malfunctioning satellite which is noncooperative, no information, including the attitude parameters, can be provided to the service spacecraft. An available way to measure the target attitude is to combine the relative attitude and the service spacecraft attitude. It is written as

C_to′ = C_tf C_fc C_cs C_so C_oo′,    (14)

where C_so is the rotation matrix from the frame Σ_o to the frame Σ_s, provided by the attitude and orbit control system (AOCS), C_oo′ is ignored as it is nearly a 3 × 3 identity matrix, C_cs is the transpose of C_sc defined in Section 2.4, C_fc is the transpose of C_cf determined by the methods described in Sections 3.1 and 3.2, and C_tf reduces to the identity because the axes of Σ_t are defined parallel to those of Σ_f. The coordinates measured in Σ_c are converted into Σ_s by

r_s,i = C_sc r_c,i + p_sc,    (15)

where r_c,i is the coordinate vector of the ith natural feature point in the frame Σ_c, measured by the stereo vision system, and C_sc and p_sc are defined in Section 2.4. There are several natural feature points on the surface of the target spacecraft. If the stereo vision system tracks these feature points simultaneously during the relative pose estimation, better measurements are achieved by an averaging operation. Substituting (14) into (13) and collecting the terms computable from the measurements, the observation equations can be rewritten as

z_1 = C_o′t a_i − C_o′s r_s,i = ρ + C_o′t b + v_1,    (16)

where v_1 is the measurement noise. If the stereo vision system is carefully calibrated, the coordinate error of a feature point can be approximately regarded as white noise. Therefore, v_1 is approximately modeled as a zero-mean white Gaussian process with covariance matrix R_1.

Filter Design.
In order to suppress the measurement noise and estimate the states which cannot be measured directly, a standard Kalman filter is designed. Define the state vector as

X_1 = [ρ^T  ρ̇^T  b^T]^T.    (17)

With the CW dynamics (12) and the mass-center offset modeled as a constant (ḃ = 0), the model for estimating the mass-center location and the translational states is a linear system. Setting it into the standard state-space form Ẋ_1 = A_1 X_1 + B_1 w_1, the corresponding state transition matrix Φ_1,k/k−1 and the covariance matrix Q_1,k of the discrete-time process noise follow from the standard discretization, and (16) provides the linear measurement equation z_1,k = H_1 X_1,k + v_1,k. The filter is initialized with X̂_1,0 = E[X_1,0] and P_1,0 = E[(X_1,0 − X̂_1,0)(X_1,0 − X̂_1,0)^T], and the standard Kalman time update and measurement update are then applied.
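The standard Kalman recursion used here can be sketched generically (a minimal textbook implementation, not the paper's code; for the 9-state model the matrices would be filled in from the discretized CW dynamics with the mass-center offset b modeled as a constant):

```python
import numpy as np

class KalmanFilter:
    """Minimal linear Kalman filter: x_k = Phi x_{k-1} + w, z_k = H x_k + v."""
    def __init__(self, Phi, H, Q, R, x0, P0):
        self.Phi, self.H, self.Q, self.R = Phi, H, Q, R
        self.x, self.P = x0, P0

    def predict(self):
        # time update: propagate state and covariance
        self.x = self.Phi @ self.x
        self.P = self.Phi @ self.P @ self.Phi.T + self.Q

    def update(self, z):
        # measurement update: Kalman gain, state and covariance correction
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
```

As a smoke test, a one-state filter fed repeated measurements of a constant converges to that constant.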

Estimation of the Moment-of-Inertia Ratios and Orientation Parameters
As the noncooperative target is often a malfunctioning satellite tumbling freely at unknown rates in space, the moment-of-inertia ratios are estimated instead of the moments of inertia themselves. Because the axes of the frame Σ_t are not aligned with the three body axes of symmetry, the products of inertia cannot be neglected.

Measurement Model.
The AOCS of the service spacecraft can also output the attitude relative to the inertial frame. A measurement of the target attitude is obtained by combining the relative attitude and the service spacecraft attitude:

C_ti = C_ts C_si,    (26)

where C_si is the attitude of the service satellite relative to the frame Σ_i, described in Section 2, and C_ts is the relative attitude described in (14). The rotation matrix C_ti can be rewritten in quaternion form q_ti, and the measured quaternion can be expressed as

q̃ = δq ⊗ q,    (27)

where q̃ is the target quaternion measurement, q is the quaternion which defines the rotation from the inertial frame to the target body frame, and δq is the measurement noise. The symbol ⊗ designates quaternion multiplication.
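The quaternion multiplication ⊗ appearing in the measurement model can be sketched as follows (scalar-first Hamilton convention assumed here; the paper's convention may differ):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p ⊗ q, scalar-first convention [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])
```

The identity quaternion [1, 0, 0, 0] acts as a neutral element, and the product of unit quaternions remains a unit quaternion.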

System Dynamic Equations.
The time derivative of the quaternion can be expressed as

q̇ = (1/2) q ⊗ ω̄,    (28)

where ω̄ = [0  ω^T]^T and ω is the angular velocity of the target satellite with respect to inertial space, resolved in the target body-fixed coordinate system Σ_t.
According to Euler's equation, the rotational dynamics of a rigid target satellite can be expressed as

I_t ω̇ = w_d − ω × (I_t ω),    (29)

where w_d is the disturbing torque vector and I_t is the inertia matrix,

I_t = [I_xx  I_xy  I_xz; I_xy  I_yy  I_yz; I_xz  I_yz  I_zz].

We define the element I_xx as the reference and divide by it, obtaining the moment-of-inertia ratio matrix

I = I_t / I_xx = [1  k_xy  k_xz; k_xy  k_yy  k_yz; k_xz  k_yz  k_zz].    (31)

Then (29) can be rewritten as

ω̇ = I⁻¹ (w − ω × (I ω)),    (32)

where w = w_d / I_xx and the ratio elements k_yy, k_zz, k_xy, k_xz, and k_yz are constants. Since the dynamic equations form a nonlinear system, an EKF and a UKF are employed to estimate the moment-of-inertia ratios and the relative attitude.
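The ratio-normalized Euler dynamics (32) can be sketched as follows (a minimal illustration; the sample ratio matrix and angular velocity are hypothetical). Scaling the inertia matrix and the torque by the same factor leaves ω̇ unchanged, which is why only the ratios are observable from attitude data:

```python
import numpy as np

def omega_dot(omega, I_ratio, w=np.zeros(3)):
    """Euler's rotational dynamics using the moment-of-inertia ratio
    matrix (inertia scaled so that I[0, 0] = 1) and the similarly
    scaled torque w: omega_dot = I^{-1} (w - omega x (I omega))."""
    I = np.asarray(I_ratio, float)
    return np.linalg.solve(I, w - np.cross(omega, I @ omega))
```

Under torque-free motion the rotational kinetic energy and the angular momentum magnitude are conserved, which gives a simple consistency check.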

EKF Filter Design. Define state variables as follows:
X_2 = [q^T  ω^T  I_v^T]^T,    (34)

where I_v = [k_yy, k_zz, k_xy, k_xz, k_yz]^T. When the dynamic equations are linearized, q_0 is ignored, as q_0 is not an independent variable and its variations are only of second order. The error-state vector ΔX_2 is then described as

ΔX_2 = [δq_13^T  Δω^T  ΔI_v^T]^T,    (35)

where the quaternion error is defined through δq = [1  δq_13^T]^T with q = q̂ ⊗ δq. The dynamic equations are given by (28) and (32). For simplicity, the process equations and the measurement equation (27) are rewritten as Ẋ_2 = f(X_2) + w_2 and z_2 = h(X_2) + v_2, where the state noise w_2 and the measurement noise are assumed to be mutually independent white noise processes. The state transition matrix is approximated as Φ_2,k/k−1 ≈ I + A_2,k−1 T, where T is the sampling period and A_2,k−1 is the Jacobian matrix of partial derivatives of f(X_2) with respect to the state variables X_2, evaluated at the current estimate. If (27) is quaternion-multiplied by q̂⁻¹ on the left, the linearized measurement equation is obtained by ignoring the high-order terms:

q̃_13,k ≈ q̂_13,k + δq_13,k.    (41)
For simplicity, (41) is rewritten as z_2,k = H_2 ΔX_2,k + v_2,k. The filter is initialized with the expected initial state and its covariance, and the standard EKF time update and measurement update are then applied. The estimated states q̂, ω̂, and Î can be corrected immediately after each measurement update according to (35).

UKF Filter Design.
Although the EKF is an estimation algorithm widely used for nonlinear systems, the UKF is more accurate when the model is highly nonlinear [23, 24]. Therefore, a UKF is designed to achieve a better estimation.
The initialization of the UKF is the same as that of the EKF. For k = 1, 2, ..., the sigma points are calculated as

χ_k−1 = [X̂_2,k−1,  X̂_2,k−1 + γ(√P_2,k−1)_i,  X̂_2,k−1 − γ(√P_2,k−1)_i],  i = 1, ..., n,

where n is the number of states and γ is the scaling factor for the sigma points, calculated as γ = √(n + λ) with λ = α²(n + κ) − n, where α is the scaling parameter which determines the spread of the sigma points around the mean and is usually set to a small positive value (0.0001 < α < 1), and κ is a secondary scaling parameter which is usually set to 0. In the time update, the sigma points are propagated through the dynamic model and recombined using the weighted coefficients W_i^(m) and W_i^(c), defined as

W_0^(m) = λ/(n + λ),  W_0^(c) = λ/(n + λ) + (1 − α² + β),  W_i^(m) = W_i^(c) = 1/(2(n + λ)),  i = 1, ..., 2n,

where β is used to incorporate prior knowledge of the distribution of X_2 (for a Gaussian distribution, β = 2 is optimal).

Table 1: Initial orbit elements.
In the measurement update, the predicted sigma points are mapped through the measurement model and the standard unscented-transform update is applied, where ẑ_2,k, q̂, ω̂, and Î are calculated by using the same method described in Section 5.3.
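The sigma-point generation and weights described above can be sketched as follows (standard scaled unscented transform; the default parameter values are assumptions consistent with the ranges given in the text):

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented-transform sigma points and weights.

    Returns (2n+1, n) points plus mean weights wm and covariance
    weights wc, with lambda = alpha^2 (n + kappa) - n."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)            # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])           # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return pts, wm, wc
```

By construction, the weighted mean of the sigma points reproduces x and their weighted covariance reproduces P, which is the defining property of the unscented transform.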

Simulation and Evaluation
A computer simulation is developed to verify the validity of the proposed algorithms. The simulation software integrates the orbital and attitude dynamics, the camera photograph model, and the relative pose estimation algorithms. The simulation is realized in MATLAB, and Figure 2 shows the simulation blocks in the MATLAB environment.
It is assumed that the service satellite and the malfunctioning satellite are in the same orbit with different orbital phases. The initial orbit elements of the target and chaser are given in Table 1.
The orbit altitude is thus about 300 km, and the service satellite is located about 10 m behind the target satellite. The target satellite undergoes torque-free tumbling motion. It is initially aligned with the orbital frame, and its initial angular velocity is assumed to be ω₀ = [2.5, 5.0, 3.0]°/s.
The moment-of-inertia matrix in the target body-fixed coordinate system (Σ_t) is set to a fixed value in the simulation. As image processing, including image filtering, edge detection, and feature matching, is discussed in many papers, it is assumed to be a solved problem here, and the coordinates of the natural feature points in the image plane are regarded as available for the estimation processing. The specifications of the feature points are shown in Table 2.
The two cameras of the stereo vision system are mounted on the service satellite, parallel to each other, with a baseline of 0.5 m; the focal length of each camera is set to 2.5 cm. The first three points listed in Table 2 are selected to establish the frame Σ_f,0 by TRIAD, with point 1 as the origin. Assume that point 1 is lost at 10 s and points 2-4 are then selected to establish the new frame Σ_f,1, with point 4 as the origin. According to (10) and (15), the target attitude with respect to the chaser (the orientation of Σ_f,0 relative to Σ_s) can be continuously measured. The attitude determination errors are plotted in Figure 3. In Figure 3, the errors in the period 10 s-300 s vary similarly to the errors in the period 0 s-10 s, which shows that the method described in Section 3.3 is feasible.
Attitude determination errors using QUEST are shown in Figure 4. Obviously, the QUEST algorithm achieves a higher precision than TRIAD, because all tracked feature points are utilized. The standard deviations of TRIAD are 0.27°, 0.11°, and 0.26°, while those of QUEST are 0.19°, 0.04°, and 0.18°. Nevertheless, the TRIAD algorithm is applied in the following simulations.
The estimation errors of the relative position and relative velocity are plotted in Figures 5 and 6, respectively, and the estimation of the mass center location is displayed in Figure 7. The convergence time for recognizing the mass center location and estimating the relative position and velocity is less than 20 s. The accuracy of the relative navigation is fairly high, with triaxial relative position errors less than 2.5 mm, 5.5 mm, and 3.0 mm, respectively, and triaxial relative velocity errors less than 0.2 mm/s on each axis. The location errors of the mass center are less than 2.0 mm, 6.0 mm, and 1.1 mm on the three axes. It should also be noticed that the errors along one axis are larger than along the other two; the reason is that the positioning errors of the stereo vision system are larger along the optical axis direction.
The EKF and UKF are utilized to estimate the moment-of-inertia ratios and the orientation parameters. In order to compare the performances of the two filters, the filter parameters (the initial state, the initial covariance P_2,0, and the noise covariances) are set identically. The attitude estimation errors are plotted together in Figure 8. The simulation results show that both estimation algorithms converge. Comparing the UKF output errors with those of the EKF, the accuracy of the UKF is better. In the period 50 s-300 s, the attitude determination errors of the UKF are less than 0.08°, 0.12°, and 0.08°, while the errors of the EKF are less than 0.38°, 0.52°, and 0.34°.
Figure 9 shows the estimation errors of the angular velocity. The performance of the two algorithms for the angular velocity is similar to that for the attitude. In the same time period, the estimation errors of the UKF are less than 0.01°/s, 0.025°/s, and 0.01°/s, while the errors of the EKF are less than 0.038°/s, 0.11°/s, and 0.038°/s.
Estimation errors of the moment-of-inertia ratios are shown in Figures 10 and 11. Although the ratio dynamics İ_v = 0 is a linear model, the convergence time of the UKF is shorter than that of the EKF because the UKF estimates the angular velocity more accurately. However, the UKF and EKF achieve similar accuracy in estimating the moment-of-inertia ratios in the period 100 s-300 s. It is also noticed that the convergence time for estimating the moment-of-inertia ratios is longer than for the attitude and angular velocity. The reason is that the moment-of-inertia ratios cannot be observed directly, so more time is required to accumulate the attitude errors produced by errors in the moment-of-inertia ratios. The estimation errors of the UKF in the period 200 s-250 s are displayed in Table 3.

Conclusions
In order to capture a malfunctioning satellite, this paper proposes an algorithm for estimating the relative pose and inertial parameters of the target satellite by using a single stereo vision system. Compared to the dense stereo algorithm presented in [17], the proposed method can avoid degeneracy when the target has a high degree of axial symmetry, and it reduces the number of sensors and the complexity of the operation. As the UKF achieves better performance than the EKF, the UKF is recommended for the rotational estimation.
The simulation results demonstrate that the developed algorithm can estimate the relative pose, the location of the mass center, and the moment-of-inertia ratios with high precision. However, the proposed method has its own degenerate case: it fails if there are no obvious features on the target surface, so jointly using the two methods is a better choice in practice. Experimental verification of the proposed algorithm should also be conducted to ensure its practical applicability, which will be a focus of future study.


Figure 2: Realization of the simulation software.

Table 2: Feature points specifications in the simulation.