
Large-Scale Robotic SLAM through Visual Mapping

Authors: Christof Hoppe, Katrin Pirker, Matthias Rüther, Horst Bischof
Appeared in: Austrian Robotics Workshop 2011, Hall, Tirol
Date: 2011
Abstract: Visual Simultaneous Localization and Mapping (VSLAM) is the task of building a map from a sequence of images. The map is built by a moving robot, which is simultaneously localized within it. Only a few VSLAM systems have shown that they can handle terrain of several thousand square meters and reconstruct a true-scale map with high accuracy. In this paper, we propose a keyframe-based VSLAM system that estimates the full 6 DoF pose of the robot together with a sparse reconstruction of the environment. We tackle four basic problems of large-scale visual SLAM: (a) accurate pose estimation of keyframes, (b) a reliable map extension trigger, (c) loop detection and correction, and (d) handling of weakly textured environments. We evaluated our approach on two datasets and compared our results against two state-of-the-art approaches. We outperform them by a factor of two with respect to accuracy while being real-time capable.
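The control flow of a keyframe-based VSLAM system of the kind described in the abstract can be sketched roughly as follows. This is an illustrative outline, not the authors' implementation: the `Keyframe`, `SparseMap`, and `should_extend_map` names, the simplified planar pose, and the distance-based map extension trigger are all assumptions for the sake of the example (a full system would track a 6 DoF pose and combine the trigger with feature-overlap criteria).

```python
# Illustrative sketch of a keyframe-based SLAM main loop (hypothetical
# names and thresholds; NOT the paper's implementation): track every
# frame, but extend the sparse map with a new keyframe only when the
# camera has moved far enough from the last keyframe.
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    frame_id: int
    pose: tuple  # simplified (x, y, heading); real systems use 6 DoF

@dataclass
class SparseMap:
    keyframes: list = field(default_factory=list)

    def add_keyframe(self, kf: Keyframe) -> None:
        self.keyframes.append(kf)

def distance(p: tuple, q: tuple) -> float:
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def should_extend_map(pose: tuple, last_kf: Keyframe,
                      min_baseline: float = 1.0) -> bool:
    # Hypothetical trigger: add a keyframe once the baseline to the last
    # keyframe exceeds a threshold. Practical triggers also consider how
    # many map features are still visible in the current frame.
    return distance(pose, last_kf.pose) >= min_baseline

def run_slam(trajectory: list) -> SparseMap:
    """Process a sequence of per-frame pose estimates into a keyframe map."""
    smap = SparseMap()
    smap.add_keyframe(Keyframe(0, trajectory[0]))
    for fid, pose in enumerate(trajectory[1:], start=1):
        if should_extend_map(pose, smap.keyframes[-1]):
            smap.add_keyframe(Keyframe(fid, pose))
            # A full system would also attempt loop detection here,
            # matching the new keyframe against older ones and correcting
            # the map when a loop closure is found.
    return smap

# Example: a straight path with frames 0.4 m apart keeps roughly every
# third frame as a keyframe.
poses = [(i * 0.4, 0.0, 0.0) for i in range(10)]
kept = [kf.frame_id for kf in run_slam(poses).keyframes]
print(kept)  # → [0, 3, 6, 9]
```

The keyframe trigger is what keeps such systems real-time capable at large scale: expensive map maintenance runs only on the sparse set of keyframes rather than on every incoming frame.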