SLAMM: Visual monocular SLAM with continuous mapping using multiple maps
Document Type
Article
Publication Date
1-1-2018
Abstract
This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM), a system that ensures continuous mapping and information preservation despite tracking failures caused by corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In the single-robot scenario, the algorithm generates a new map when tracking fails and later merges maps when a loop closure occurs. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes the algorithm flexible. The system works in real time at frame rate. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated monocular keyframe-based visual SLAM. The mean tracking time is around 22 milliseconds. Initialization is twice as fast as in ORB-SLAM, and the retrieved map can preserve up to 90 percent more information, depending on tracking-loss and loop-closure events. For the benefit of the community, the source code, along with a framework to run it with the Bebop drone, is made available at https://github.com/hdaoud/ORBSLAMM.
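The multi-map strategy described in the abstract can be illustrated with a minimal sketch: open a fresh map on tracking failure so no prior data is discarded, then merge maps once a loop closure relates them. This is illustrative pseudocode only, not the authors' implementation; the names `Map`, `MultiMapper`, and the callback methods are hypothetical, and the real system additionally estimates the relative pose between maps from the loop-closure match.

```python
# Illustrative sketch (not the authors' code) of SLAMM-style multi-map
# bookkeeping: new map on tracking failure, merge on loop closure.

class Map:
    def __init__(self, map_id):
        self.map_id = map_id
        self.keyframes = []          # keyframes accumulated while tracking

class MultiMapper:
    def __init__(self):
        self.maps = [Map(0)]         # start with one active map
        self.active = self.maps[0]

    def add_keyframe(self, kf):
        self.active.keyframes.append(kf)

    def on_tracking_failure(self):
        """Instead of discarding data, open a fresh map and keep mapping."""
        new_map = Map(len(self.maps))
        self.maps.append(new_map)
        self.active = new_map

    def on_loop_closure(self, other_map):
        """Merge a previously built map into the active one; in the real
        system the relative pose between maps is recovered from the
        loop-closure match, so no prior alignment is needed."""
        if other_map is self.active:
            return
        self.active.keyframes.extend(other_map.keyframes)
        self.maps.remove(other_map)

mm = MultiMapper()
mm.add_keyframe("kf0")
mm.on_tracking_failure()        # corrupted frame: new map, nothing lost
mm.add_keyframe("kf1")
mm.on_loop_closure(mm.maps[0])  # revisited area: merge the old map back
print(len(mm.maps), len(mm.active.keyframes))  # 1 2
```

The key design point mirrored here is information preservation: a tracking loss never erases the previous map, it only suspends it until a loop closure stitches the maps back together.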
Keywords
Algorithms, Artificial Intelligence, Databases, Factual, Robotics
Divisions
fsktm
Funders
University of Malaya’s Research Grant (UMRG), grant number RP030A-14AET; Fundamental Research Grant (FRGS), grant number FP061-2014A, provided by Malaysia’s Ministry of Higher Education
Publication Title
PLoS ONE
Volume
13
Issue
4
Publisher
Public Library of Science