In SLAM applications, omnidirectional vision extracts wide-scale information and more features from the environment. Traditional algorithms impose enormous computational complexity on omnidirectional vision SLAM. An improved extended information filter SLAM algorithm based on omnidirectional vision is presented in this paper. Based on an analysis of the structural characteristics of the information matrix, the algorithm improves computational efficiency. Considering the characteristics of omnidirectional images, an improved sparsification rule is also proposed: the sparse observation information is utilized and the strongest global correlation is maintained, so the accuracy of the estimated result is ensured by proper sparsification of the information matrix. Then, through error analysis, the error caused by sparsification is eliminated by a relocation method. Experimental results show that this method makes full use of the repeated observations of landmarks in omnidirectional vision and maintains great efficiency and high reliability in mapping and localization.
A mobile robot must first cope with a complicated environment while fulfilling its assignments. Simultaneous localization and mapping (SLAM) is one of the key enabling technologies for a mobile robot's autonomy. SLAM addresses the problem of building a map of an unknown environment while simultaneously keeping track of the robot's location within it. Many popular SLAM implementations use a distance sensor such as a laser range finder or sonar to explore the environment [
Many vision sensor applications have been developed in recent years. Vision sensors can provide continuous image data. Until now, research on vSLAM has mainly used stereo or monocular vision [
However, there are limitations in the view angle of ordinary vision sensors. In contrast, omnidirectional vision has a 360 degree field of view and it has been widely applied in robot navigation, video conferencing, and surveillance [
Omnidirectional vision not only provides opportunities for studying vSLAM problems, but also poses several challenges. First, the large amount of information increases the computational complexity of the SLAM algorithm; second, the distortion of omnidirectional images is relatively large, which makes it difficult to extract and match features directly. As a result, vSLAM methods based on omnidirectional vision have seldom been applied to localization and mapping in unknown environments. Therefore, omnidirectional vision based SLAM can not only advance vSLAM technology and provide new ideas, but also broaden the application field of omnidirectional vision (e.g., environment exploration and rescue).
The extended information filter SLAM (EIFSLAM) algorithm [
The structural diagram of the proposed IEIFvSLAM algorithm is shown in Figure
IEIFvSLAM algorithm structure diagram.
In the feature detection module, a Harris-SIFT feature extraction method based on SIFT [
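As a rough illustration of the Harris half of such a detector (a numpy/scipy sketch under the standard Harris corner formulation, not the paper's implementation; the window size and sensitivity parameter k below are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, win=3):
    """Standard Harris corner response: R = det(M) - k * trace(M)^2,
    where M is the structure tensor averaged over a local window.
    Large positive R -> corner; negative R -> edge; near zero -> flat."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = uniform_filter(Ix * Ix, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxy = uniform_filter(Ix * Iy, win)
    det = Sxx * Syy - Sxy ** 2
    tr = Sxx + Syy
    return det - k * tr ** 2

# Synthetic test image: a bright square on a dark background.
img = np.zeros((60, 60))
img[20:40, 20:40] = 1.0
R = harris_response(img)
```

Corners of the square produce positive responses while edge midpoints produce negative ones, which is what makes the response usable for selecting stable keypoints before computing SIFT descriptors at those locations.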
In the IEIF module, the information variables are updated from the observation data in the observation update block. According to the movement of the robot, the state variables are estimated in the motion update block. In fact, the computational burden of IEIFvSLAM is concentrated in the observation update and motion update processes. Due to the wide scale of information and the huge number of omnidirectional image features, the dimension of the information variables increases rapidly. This situation worsens as the environment becomes larger [
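As a rough numpy sketch of why the information form is attractive here (these are the standard EIF measurement-update equations, not the paper's exact formulation; the 7-dimensional state layout, Jacobian H, and noise term below are illustrative assumptions):

```python
import numpy as np

def eif_observation_update(eta, Lam, H, Rinv, z, z_pred, x_lin):
    """Standard EIF measurement update in information form:
       Lam <- Lam + H^T R^-1 H
       eta <- eta + H^T R^-1 (z - z_pred + H x_lin)"""
    innov = z - z_pred + H @ x_lin
    return eta + H.T @ Rinv @ innov, Lam + H.T @ Rinv @ H

# State: robot pose (3) + two 2-D landmarks (2 + 2) = 7 dimensions.
n = 7
eta, Lam = np.zeros(n), np.eye(n)

# A hypothetical linearized observation of landmark 1 only: the Jacobian
# has nonzero columns only in the pose block and that landmark's block.
H = np.zeros((2, n))
H[:, 0:3] = [[1.0, 0.0, 0.2], [0.0, 1.0, -0.1]]
H[:, 3:5] = [[-1.0, 0.0], [0.0, -1.0]]
Rinv = np.eye(2) * 10.0

eta2, Lam2 = eif_observation_update(eta, Lam, H, Rinv,
                                    z=np.array([1.1, 0.4]),
                                    z_pred=np.array([1.0, 0.5]),
                                    x_lin=np.zeros(n))
# The update never touches landmark 2's rows/columns of the information
# matrix, so the cost of the observation update stays local.
```

This locality is exactly what a dense covariance-form EKF update lacks, and it is the property the sparsification analysis below tries to preserve.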
At time
As shown in (
The IEIFvSLAM algorithm estimates the robot pose and landmarks positions according to the update of information vector
As shown in Figure
In (
In an omnidirectional vision system,
Figure
Omnidirectional vision schematic plot.
Then,
As shown in Figure
The reflected light goes through the center point of the lens:
Then the intersection point
According to the geometrical relation [
So, the new observations
According to (
Therefore,
According to the process description of motion update, observation update, and feature addition, the structural characteristics [
the information matrix is a Hermitian, symmetric positive definite matrix [
according to (
according to (
Then, the structural characteristics of information matrix are obtained as follows:
two elements in the diagonal line denote the link strength of two nodes;
the longer the shortest link between two nodes, the farther the corresponding elements lie from the main diagonal of the information matrix;
the endpoints of the anti-diagonal denote the link strength between the current state
From the above analysis, most of the elements in the information matrix of omnidirectional vision SLAM are nearly zero. It is therefore reasonable to sparsify the structure of the information matrix.
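A toy numpy illustration of this point (synthetic data, not the paper's matrices; the exponential decay rate 0.8 is an arbitrary assumption): if link strength decays with the graph distance between nodes, then under any fixed threshold almost all entries are negligible, so the matrix is effectively sparse.

```python
import numpy as np

# Synthetic "information matrix": link strength decays exponentially
# with the graph distance |i - j| between nodes.
n = 50
idx = np.arange(n)
Lam = np.exp(-0.8 * np.abs(idx[:, None] - idx[None, :]))

# Fraction of entries that are non-negligible under a fixed threshold.
density = (np.abs(Lam) >= 1e-3).mean()
```

Here `density` comes out well below one half: only a narrow band around the main diagonal carries appreciable weight, mirroring the structural characteristics listed above.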
Because of the required linearization of the omnidirectional vision observation model and the data association, the information matrix must be calculated in these processes (motion update, observation update, and feature addition). Among them, the main computation is concentrated on solving (
As the number of landmarks in the map increases, the cost of solving the linear equations grows greatly. As mentioned above, the information matrix is almost sparse. If the sparse structure of the information matrix is exploited reasonably, the efficiency of the extended information filter calculations can be enhanced.
Compared with a dense matrix, solving linear equations with a sparse matrix is significantly more efficient. The sparse matrix operation method [
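As a quick illustration of this efficiency argument (using scipy's generic sparse solver, not the specific sparse operation method the paper cites; the banded test matrix is an arbitrary stand-in for a sparsified information matrix):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# A banded symmetric positive definite matrix standing in for a
# sparsified information matrix (strictly diagonally dominant).
n = 200
A = np.eye(n) * 5.0
for k in (1, 2):
    A -= np.eye(n, k=k) + np.eye(n, k=-k)
b = np.ones(n)

x_sparse = spsolve(csr_matrix(A), b)  # stores/factors only the nonzeros
x_dense = np.linalg.solve(A, b)       # dense reference solution
```

Both solvers return the same solution, but the CSR factorization touches only the O(n) nonzero entries rather than all n² elements, which is where the claimed efficiency gain comes from as the map grows.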
The improved sparsification rule proposed in this paper reduces the computation cost of EIFSLAM. According to the analysis of the omnidirectional vision observation model and the structural characteristics of the information matrix, the rule is described in Algorithm
while
end
where
Only motion update will make the information matrix dense [
In SLAM applications, the error caused by sparsification can be eliminated by a loop closure method. However, designing a closed-loop environment for the robot to navigate is unrealistic in practice. In the information matrix, correlations are expressed as matrix elements, and the differences between correlations are large due to the distortion and wide-range view of omnidirectional vision. Therefore, the sparsification rule has been improved in this paper: the sparse observation information is utilized and the strongest global correlation is maintained.
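As a loose interpretation of this idea (a numpy sketch, not the paper's actual rule from the algorithm above; the thresholding scheme is an illustrative assumption): weak off-diagonal correlations are zeroed, but each node's single strongest link is always preserved, so the dominant global correlation survives sparsification.

```python
import numpy as np

def sparsify(Lam, thresh):
    """Zero off-diagonal entries below `thresh`, but always keep each
    node's strongest off-diagonal link, and keep the result symmetric."""
    n = Lam.shape[0]
    off = np.abs(Lam).copy()
    np.fill_diagonal(off, 0.0)
    keep = off >= thresh
    keep[np.arange(n), off.argmax(axis=1)] = True  # strongest link per node
    keep = keep | keep.T                           # enforce symmetry
    np.fill_diagonal(keep, True)                   # never drop the diagonal
    return np.where(keep, Lam, 0.0)

# Example on a synthetic correlation matrix with decaying links.
idx = np.arange(20)
Lam = np.exp(-0.8 * np.abs(idx[:, None] - idx[None, :]))
S = sparsify(Lam, 0.1)
```

The result stays symmetric (so it remains a valid information matrix structurally) while long-range, near-zero correlations are dropped.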
Based on the improved sparsification rule, a relocation method is used to eliminate the error. Relocation means that the robot finds landmarks that have already been recorded in its database (i.e., an area that has been explored before).
In the process of sparsification, the environmental feature map is divided into three independent parts [
Among that,
In SLAM, the posterior probability of state vector is
In (
In the traditional sparsification rule,
Among that,
Among that,
As shown in Figure
Correlation description in omnidirectional vision SLAM.
According to the extended information filtering algorithm [
Repeated landmarks information in two continuous images.
The intelligent wheelchair JiaoLong is used to evaluate the proposed IEIFvSLAM algorithm. It is built on a commercial powered wheelchair and equipped with two encoders (one per driven wheel), a smart motion controller, an onboard PC, a laser range finder, and an omnidirectional vision system [
Prototype and hardware structure of the JiaoLong wheelchair.
The omnidirectional vision system which has a Point Grey Chameleon camera with an image resolution of 1296(H) × 964(V) pixels is used in this experiment. The camera’s frame rate is 10 Hz.
As a differential-drive mobile robot, JiaoLong's motion model is expressed as
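As a generic stand-in for the motion model (a standard differential-drive kinematic step with linear velocity v and angular velocity w over interval dt; this is the textbook form, not necessarily the paper's exact parameterization):

```python
import numpy as np

def motion_model(pose, v, w, dt):
    """Standard differential-drive (unicycle) odometry step.
    pose = (x, y, theta); v: linear velocity; w: angular velocity."""
    x, y, th = pose
    return np.array([x + v * dt * np.cos(th),
                     y + v * dt * np.sin(th),
                     th + w * dt])
```

Driving straight for one second at 1 m/s from the origin yields (1, 0, 0); turning in place changes only the heading.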
The observation model of omnidirectional vision is expressed as
In the proposed IEIFvSLAM algorithm, the symmetric form of observation model is also used:
Then,
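The paper's concrete observation equations and their symmetric form are given above; as a generic stand-in (a standard range-bearing model, which is the usual shape of an omnidirectional observation once the mirror geometry is calibrated — an illustrative assumption, not the paper's exact model):

```python
import numpy as np

def observe(pose, lm):
    """Range-bearing observation of a 2-D landmark from a robot pose.
    pose = (x, y, theta); lm = (lx, ly). Bearing is relative to heading."""
    x, y, th = pose
    dx, dy = lm[0] - x, lm[1] - y
    r = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - th
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return np.array([r, bearing])
```

A landmark straight ahead gives a zero bearing regardless of the robot's absolute heading, which is the property the filter linearizes around.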
The wheelchair is placed in an environment with a room, a door, and a long corridor. A map built by a laser range finder [
Environment and trajectory for experiment.
As shown in Figure
In Figure
Feature map and omnidirectional image.
Odometer trajectory and localization of IEIFvSLAM.
In order to analyze the localization accuracy of IEIFvSLAM, as shown in Figure
Section I: from the starting point to the door exit;
Section II: from the door exit to the entrance of the last corridor;
Section III: from the entrance of the last corridor to the destination.
In Figure
Localization error of IEIFvSLAM.
                   Section I        Section II       Section III
                   Max.    Avg.     Max.    Avg.     Max.    Avg.
1st circle
  x error (m)      0.432   0.296    0.633   0.437    0.546   0.349
  y error (m)      0.395   0.247    0.578   0.432    0.430   0.271
  Angle error (°)  3.7     1.9      4.3     2.5      3.6     2.2
2nd circle
  x error (m)      0.329   0.267    0.541   0.363    0.451   0.273
  y error (m)      0.318   0.231    0.481   0.365    0.318   0.231
  Angle error (°)  2.9     1.6      3.5     2.4      2.5     1.5
3rd circle
  x error (m)      0.401   0.235    0.502   0.253    0.471   0.245
  y error (m)      0.264   0.172    0.467   0.236    0.286   0.217
  Angle error (°)  2.4     1.1      3.2     1.8      2.3     1.2
Section I is a complex indoor environment, richly furnished with tables, chairs, and various objects. It contains more features with a more dispersed distribution; therefore, more accurate results are obtained there.
In Section II, much of the environment consists of white walls with windows, whose features are similar, so there are fewer usable features. This leads to inaccurate localization caused by the lack of relocation.
In Section III, there are more feature points and their distribution is relatively concentrated, which easily leads to mismatches. Therefore, the positioning errors are somewhat larger than those of Section II.
As shown in Table
In order to verify the validity of the proposed IEIFvSLAM algorithm, the EIF [
Through the results in Table
Quantitative results of different algorithms.
                   EIF algorithm     IEIFvSLAM
                   Max.     Avg.     Max.     Avg.
x error (m)        0.714    0.351    0.502    0.246
y error (m)        0.571    0.249    0.467    0.198
Angle error (°)    3.4      1.9      3.2      1.6
                   0.487             0.279
An IEIFvSLAM method based on omnidirectional vision has been proposed in this paper. Both the structural characteristics of the information matrix and omnidirectional vision's repeated observations of landmarks are analyzed. Based on these analyses, the sparsification rule has been improved: the sparse observation information is utilized and the strongest global correlation is maintained. Both the computational efficiency and the accuracy of the estimated results are improved by proper sparsification of the information matrix. Before the real-platform experiments, error analysis showed that the error caused by sparsification can be eliminated by the proposed relocation method. The experimental results show that this method, which exploits omnidirectional vision's repeated observations of landmarks, can be used for mobile robot map building and localization.
The authors declare that there is no conflict of interest regarding the publication of this paper.
This work is partly supported by the National High Technology Research and Development Program of China under Grant 2012AA041403 and the Natural Science Foundation of China under Grant 61175088.