Zengrui Zheng, Kainan Su, Shifeng Lin, Zhiquan Fu and Chenguang Yang
Abstract
Purpose
Visual simultaneous localization and mapping (SLAM) has limitations such as sensitivity to lighting changes and lower measurement accuracy. The effective fusion of information from multiple modalities to address these limitations has emerged as a key research focus. This study aims to provide a comprehensive review of the development of vision-based SLAM (including visual SLAM) for navigation and pose estimation, with a specific focus on techniques for integrating multiple modalities.
Design/methodology/approach
This paper first introduces the mathematical models and framework development of visual SLAM. It then presents various methods for improving accuracy in visual SLAM by fusing different spatial and semantic features, and examines research advancements in vision-based SLAM with respect to multi-sensor fusion in both loosely coupled and tightly coupled approaches. Finally, it analyzes the limitations of current vision-based SLAM and offers predictions for future advancements.
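The reprojection-error objective at the heart of most visual SLAM formulations can be illustrated with a minimal sketch. The function names and the pinhole-camera setup below are illustrative, not taken from the reviewed paper:

```python
import numpy as np

def reproject(K, R, t, X):
    """Project 3D world points X (N, 3) into the image with a pinhole
    camera: x ~ K (R X + t). Returns (N, 2) pixel coordinates.
    """
    Xc = X @ R.T + t            # world -> camera frame
    x = Xc @ K.T                # apply intrinsics
    return x[:, :2] / x[:, 2:3] # perspective division

def reprojection_error(K, R, t, X, observed):
    """Mean reprojection error, the quantity visual SLAM minimizes
    over camera poses and landmarks in bundle adjustment.
    """
    residuals = reproject(K, R, t, X) - observed
    return float(np.mean(np.linalg.norm(residuals, axis=1)))
```

With the identity pose, a point on the optical axis projects to the principal point, and the error against its own projection is zero.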
Findings
The combination of vision-based SLAM and deep learning has significant potential for development. There are advantages and disadvantages to both loosely coupled and tightly coupled approaches in multi-sensor fusion, and the most suitable algorithm should be chosen based on the specific application scenario. In the future, vision-based SLAM is evolving toward better addressing challenges such as resource-limited platforms and long-term mapping.
Originality/value
This review introduces the development of vision-based SLAM and focuses on the advancements in multimodal fusion. It allows readers to quickly understand the progress and current status of research in this field.
Sen Li, He Guan, Xiaofei Ma, Hezhao Liu, Dan Zhang, Zeqi Wu and Huaizhou Li
Abstract
Purpose
To address the issues of low localization and mapping accuracy, as well as map ghosting and drift, in indoor degraded environments using light detection and ranging-based simultaneous localization and mapping (LiDAR SLAM), a real-time localization and mapping system integrating filtering and graph optimization theory is proposed. By incorporating filtering algorithms, the system effectively reduces localization errors and environmental noise. In addition, leveraging graph optimization theory, it optimizes the poses and positions throughout the SLAM process, further enhancing map accuracy and consistency. This study aims to resolve common problems such as map ghosting and drift, thereby achieving more precise real-time localization and mapping results.
Design/methodology/approach
The system consists of three main components: point cloud data preprocessing, tightly coupled inertial odometry based on filtering and backend pose graph optimization. First, point cloud data preprocessing uses the random sample consensus algorithm to segment the ground and extract ground model parameters, which are then used to construct ground constraint factors in backend optimization. Second, the frontend tightly coupled inertial odometry uses iterative error-state Kalman filtering, where the LiDAR odometry serves as the observation and the inertial measurement unit preintegration results serve as the prediction. By constructing a joint function, filtering fusion yields a more accurate LiDAR-inertial odometry. Finally, the backend incorporates graph optimization theory, introducing loop closure factors, ground constraint factors and odometry factors from frame-to-frame matching as constraints. This forms a factor graph that optimizes the map’s poses. The loop closure factor uses an improved Scan Context-based loop closure detection algorithm for place recognition, reducing the rate of environmental misidentification.
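The ground-segmentation step described above can be illustrated with a minimal RANSAC plane fit. This is a generic sketch with hypothetical names, not the authors' implementation:

```python
import numpy as np

def ransac_ground_plane(points, iters=100, dist_thresh=0.05, seed=0):
    """Fit a ground plane n.x + d = 0 to an (N, 3) point cloud with
    RANSAC. Returns (normal, d, inlier_mask); the fitted parameters
    could then serve as a ground-constraint factor in the backend.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from two edge vectors of the sampled triangle.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, try again
        n = n / norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)  # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers
```

On a synthetic cloud of near-flat ground points plus elevated obstacle points, the fit recovers a near-vertical normal and marks the ground points as inliers.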
Findings
A SLAM system integrating filtering and graph optimization techniques has been proposed, demonstrating improvements of 35.3%, 37.6% and 40.8% in localization and mapping accuracy compared to A-LOAM, LeGO-LOAM (lightweight and ground-optimized LiDAR odometry and mapping) and LIO-SAM (LiDAR-inertial odometry via smoothing and mapping), respectively. The system exhibits enhanced robustness in challenging environments.
Originality/value
This study introduces a frontend laser-inertial odometry tightly coupled filtering method and a backend graph optimization method improved by loop closure detection. This approach demonstrates superior robustness in indoor localization and mapping accuracy.
Shuo Wang, Xin Li, Yu Zhang, Songhui Ma and Xianrui Ren
Abstract
Purpose
Visual simultaneous localization and mapping (SLAM) methods suffer from accumulated errors, especially in challenging environments without loop closure. By constructing lightweight offline maps and using deep learning (DL)-based technology in two stages, i.e. image retrieval and feature matching, the goal is to reconstruct the six-degree-of-freedom (6-DoF) relationship between SLAM sequences and map sequences. This study proposes a comprehensive coarse-to-fine 6-DoF long-term visual relocalization-assisted SLAM method specifically designed for challenging environments, with the aim of achieving more accurate pose estimation.
Design/methodology/approach
First, image global feature matching and patch-level global feature matching are conducted to achieve optimal frame-to-frame matching. Second, a DL network is introduced to extract and match features between the most similar frames, enabling point-to-point motion estimation. Finally, a fast pose graph optimization method is proposed to achieve real-time optimization of the pose in the SLAM sequence.
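The coarse image-retrieval stage can be sketched as a nearest-neighbor search over global descriptors. A minimal illustration, assuming descriptors have already been extracted by some DL network (all names hypothetical):

```python
import numpy as np

def retrieve_best_frame(query_desc, map_descs):
    """Coarse stage: find the map frame whose global descriptor is most
    similar to the query descriptor under cosine similarity.

    query_desc: (D,) descriptor of the current SLAM frame.
    map_descs:  (M, D) descriptors of the offline map sequence.
    Returns (best_index, similarity).
    """
    q = query_desc / np.linalg.norm(query_desc)
    m = map_descs / np.linalg.norm(map_descs, axis=1, keepdims=True)
    sims = m @ q  # cosine similarity against every map frame
    best = int(np.argmax(sims))
    return best, float(sims[best])
```

The retrieved frame would then be passed to the fine stage (patch-level matching and point-to-point motion estimation).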
Findings
The proposed method has been successfully validated on the real-world FinnForest Dataset and UZH-FPV Drone Racing Dataset. The accuracy of the proposed method is evaluated using absolute positional error and absolute rotational error. Experimental results show that in most cases, there are significant improvements in the root mean square error and the standard deviation of the error in pose estimation, and it performs better than loop closure in terms of accuracy. This indicates that the method has strong generalizability and robustness.
Originality/value
The main contribution of this study is the proposal of a complete DL-based coarse-to-fine 6-DoF long-term visual relocalization method to assist vSLAM, which demonstrates enhanced robustness and generalizability and can eliminate cumulative errors in pose estimation under challenging environments.
Michał R. Nowicki, Dominik Belter, Aleksander Kostusiak, Petr Cížek, Jan Faigl and Piotr Skrzypczyński
Abstract
Purpose
This paper aims to evaluate four different simultaneous localization and mapping (SLAM) systems in the context of localization of multi-legged walking robots equipped with compact RGB-D sensors. This paper identifies problems related to in-motion data acquisition in a legged robot and evaluates the particular building blocks and concepts applied in contemporary SLAM systems against these problems. The SLAM systems are evaluated on two independent experimental set-ups, applying a well-established methodology and performance metrics.
Design/methodology/approach
Four feature-based SLAM architectures are evaluated with respect to their suitability for localization of multi-legged walking robots. The evaluation methodology is based on the computation of the absolute trajectory error (ATE) and relative pose error (RPE), which are performance metrics well-established in the robotics community. Four sequences of RGB-D frames acquired in two independent experiments using two different six-legged walking robots are used in the evaluation process.
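The ATE and RPE metrics used in the evaluation can be sketched as follows; this minimal version assumes time-synchronized trajectories and omits the usual SE(3)/Sim(3) alignment step:

```python
import numpy as np

def ate_rmse(gt_positions, est_positions):
    """Absolute trajectory error: RMSE over the translational residuals
    between ground-truth and estimated positions, assuming the two
    trajectories are synchronized and already expressed in a common
    frame (the alignment step is omitted here for brevity).
    """
    err = np.linalg.norm(gt_positions - est_positions, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

def rpe_trans(gt_positions, est_positions, delta=1):
    """Relative pose error (translational part): compare the
    displacement over `delta` frames in each trajectory, which
    measures local drift rather than global offset.
    """
    gt_step = gt_positions[delta:] - gt_positions[:-delta]
    est_step = est_positions[delta:] - est_positions[:-delta]
    err = np.linalg.norm(gt_step - est_step, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```

A constant offset between trajectories shows up fully in ATE but cancels in RPE, which is why the two metrics are reported together.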
Findings
The experiments revealed that the predominant problems of legged robots as platforms for SLAM are abrupt and unpredictable sensor motions, as well as oscillations and vibrations, which corrupt the images captured in motion. The tested adaptive gait allowed the evaluated SLAM systems to reconstruct proper trajectories. The bundle adjustment-based SLAM systems produced the best results, thanks to the use of a map, which makes it possible to establish a large number of constraints for the estimated trajectory.
Research limitations/implications
The evaluation was performed using indoor mockups of terrain. Experiments in more natural and challenging environments are envisioned as part of future research.
Practical implications
The lack of accurate self-localization methods is considered as one of the most important limitations of walking robots. Thus, the evaluation of the state-of-the-art SLAM methods on legged platforms may be useful for all researchers working on walking robots’ autonomy and their use in various applications, such as search, security, agriculture and mining.
Originality/value
The main contribution lies in the integration of the state-of-the-art SLAM methods on walking robots and their thorough experimental evaluation using a well-established methodology. Moreover, a SLAM system designed especially for RGB-D sensors and real-world applications is presented in detail.
Janusz Marian Bedkowski and Timo Röhling
Abstract
Purpose
This paper aims to focus on real-world mobile systems and thus proposes a relevant contribution to the special issue on “Real-world mobile robot systems”. This work on 3D laser semantic mobile mapping and particle filter localization, dedicated to robots patrolling urban sites, is elaborated with a focus on the application of parallel computing to semantic mapping and particle filter localization. The real robotic application of patrolling urban sites is the goal; thus, it has been shown that the crucial robotic components have reached a high Technology Readiness Level (TRL).
Design/methodology/approach
Three different robotic platforms equipped with different 3D laser measurement systems were compared. Each system provides different data in terms of measured distance, density of points and noise; thus, the influence of the data on the final semantic maps was compared. The realistic problem is to use these semantic maps for robot localization; thus, the influence of the different maps on particle filter localization was examined. A new approach to particle filter localization based on 3D semantic information has been proposed, and the behavior of the particle filter under different realistic conditions has been studied. The process of using the proposed robotic components for patrolling an urban site, such as the robot checking for geometrical changes in the environment, is detailed.
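A particle filter update against a semantic map can be illustrated with a toy 2D grid example. This sketch, with hypothetical names and a simplified hit/miss observation model, is not the authors' GPGPU implementation:

```python
import numpy as np

def pf_update(particles, weights, observed, semantic_map,
              p_hit=0.8, p_miss=0.2):
    """One measurement update of a particle filter on a 2D semantic grid.

    particles:    (N, 2) integer cell coordinates of particle poses.
    observed:     semantic label the robot currently sees at its cell.
    semantic_map: 2D integer array of class labels.
    Each particle is reweighted by whether the map label at its cell
    matches the observation; weights are normalized and the particles
    resampled with systematic resampling.
    """
    labels = semantic_map[particles[:, 0], particles[:, 1]]
    w = weights * np.where(labels == observed, p_hit, p_miss)
    w /= w.sum()
    # Systematic resampling: evenly spaced positions with one random offset.
    n = len(w)
    positions = (np.arange(n) + np.random.default_rng(0).uniform()) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx], np.full(n, 1.0 / n)
```

After one update with a label observed in only half of the map, the surviving particles concentrate in the cells consistent with the observation.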
Findings
The focus on real-world mobile systems requires a different point of view for scientific work. This study focuses on robust and reliable solutions that can be integrated with real applications. Thus, a new parallel computing approach for semantic mapping and particle filter localization has been proposed. Based on the literature, semantic 3D particle filter localization has not yet been elaborated; thus, innovative solutions for this issue are proposed. The approach builds on a semantic mapping framework that the authors have recently published. For this reason, the authors claim that their applied studies during real-world trials with such a mapping system are added value relevant for this special issue.
Research limitations/implications
The main problem is the compromise between computing power and the energy consumed by heavy calculations; thus, the main focus is on the use of modern GPGPUs, specifically the NVIDIA Pascal parallel processor architecture. Recent advances in GPGPUs show great potential for mobile robotic applications, so this study focuses on increasing mapping and localization capabilities by improving the algorithms. The current limitation is the number of particles processed by a single processor: 500 particles in real time. The implication is that multi-GPU architectures could be used to increase the number of processed particles; thus, further studies are required.
Practical implications
The research focus is on real-world mobile systems; thus, practical aspects of the work are crucial. The main practical application is semantic mapping, which could be used for many robotic applications. The authors claim that their particle filter localization is ready to be integrated with real robotic platforms using a modern 3D laser measurement system and can therefore improve existing autonomous robotic platforms. The proposed components can be used for the detection of geometrical changes in the scene; thus, many practical functionalities can be applied, such as the detection of cars or of opened/closed gates. […] These functionalities are crucial elements of the safety and security domain.
Social implications
Improvement of the safety and security domain is a crucial aspect of modern society. Protecting critical infrastructure plays an important role; thus, introducing autonomous mobile platforms capable of supporting the human operators of safety and security systems could have a positive impact from many points of view.
Originality/value
This study elaborates a novel approach to particle filter localization based on 3D data and semantic mapping. This original work could have a great impact on the mobile robotics domain, and the authors claim that many algorithmic and implementation issues were solved under real-task experiments. The originality of this work is reinforced by the use of modern advanced robotic systems, a relevant set of technologies for proper evaluation of the proposed approach. Such a combination of experimental hardware, original algorithms and implementation is definitely an added value.
Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He
Abstract
Purpose
In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.
Design/methodology/approach
This paper reviews the research state of LiDAR-based SLAM for robotic mapping through a literature survey conducted from the perspective of various LiDAR types and configurations.
Findings
This paper conducted a comprehensive literature review of the LiDAR-based SLAM system based on three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.
Originality/value
To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.
Chen-Chien Hsu, Cheng-Kai Yang, Yi-Hsing Chien, Yin-Tien Wang, Wei-Yen Wang and Chiang-Heng Chien
Abstract
Purpose
FastSLAM is a popular method for solving the simultaneous localization and mapping (SLAM) problem. However, as the number of landmarks in real environments increases, each particle performs excessive comparisons of the measurement with all existing landmarks. As a result, the execution speed becomes too slow to achieve real-time navigation. Thus, this paper aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.
Design/methodology/approach
As an attempt to solve this problem, this paper presents a computationally efficient SLAM (CESLAM) algorithm, where odometer information is used to update the robot’s pose in each particle. When a measurement has the maximum likelihood with a known landmark in the particle, the particle state is updated before the landmark estimates are updated.
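The per-particle maximum-likelihood data association that FastSLAM performs, and whose cost CESLAM aims to reduce, can be sketched as follows (a generic textbook version, not the authors' code):

```python
import numpy as np

def ml_association(z, landmarks, covs, R):
    """Pick the landmark with maximum measurement likelihood.

    z:         observed 2D landmark position in the robot frame.
    landmarks: (K, 2) predicted landmark positions.
    covs:      (K, 2, 2) landmark covariance estimates.
    R:         2x2 measurement noise covariance.
    Returns (best_index, log_likelihood). FastSLAM runs this comparison
    against every landmark in every particle for every measurement,
    which is the per-update cost that grows with the map size.
    """
    best_idx, best_ll = -1, -np.inf
    for k, (mu, P) in enumerate(zip(landmarks, covs)):
        S = P + R                       # innovation covariance
        diff = z - mu
        # Gaussian log-likelihood of the innovation.
        ll = -0.5 * (diff @ np.linalg.solve(S, diff)
                     + np.log(np.linalg.det(2 * np.pi * S)))
        if ll > best_ll:
            best_idx, best_ll = k, ll
    return best_idx, best_ll
```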
Findings
Simulation results show that the proposed CESLAM overcomes the heavy computational burden while improving the accuracy of localization and map building. To practically evaluate the performance of the proposed method, a Pioneer 3-DX robot with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) system based on Speeded-Up Robust Features (SURF). Experimental results confirm that the proposed CEVSLAM system successfully estimates the robot pose and builds the map with satisfactory accuracy.
Originality/value
The proposed CESLAM algorithm eliminates the time-consuming, unnecessary comparisons of existing FastSLAM algorithms. Simulations show that the accuracy of robot pose and landmark estimation is greatly improved by CESLAM. Combining CESLAM and SURF, the authors establish a CEVSLAM system that significantly improves estimation accuracy and computational efficiency. Practical experiments using a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.
Qihua Ma, Qilin Li, Wenchao Wang and Meng Zhu
Abstract
Purpose
This study aims to achieve superior localization and mapping performance in point cloud degradation scenarios through the effective removal of dynamic obstacles. With the continuous development of technologies for autonomous vehicles, the LiDAR-based simultaneous localization and mapping (SLAM) system is becoming increasingly important. However, in SLAM systems, effectively addressing the challenges of point cloud degradation scenarios is essential for accurate localization and mapping, with dynamic obstacle removal being a key component.
Design/methodology/approach
This paper proposes a method that combines adaptive feature extraction and loop closure detection algorithms to address this challenge. In the SLAM system, the ground and non-ground point clouds are separated to reduce the impact of noise. Based on the cylindrical projection image of the point cloud, intensity features are adaptively extracted, the degradation direction is determined by a degradation factor and the intensity features are matched with the map to correct the degraded pose. Moreover, through the difference in the raster distribution of the point clouds between two frames in the loop process, dynamic point clouds are identified and removed, and the map is updated.
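The cylindrical projection of a point cloud onto an image can be sketched as follows. This toy version stores range rather than the intensity values used in the paper, and the sensor parameters (16 rows, a ±15° vertical field of view) are illustrative assumptions:

```python
import numpy as np

def cylindrical_projection(points, rows=16, cols=360,
                           fov_up=np.deg2rad(15), fov_down=np.deg2rad(-15)):
    """Project an (N, 3) LiDAR point cloud onto a cylindrical image.

    Each point maps to a pixel via its azimuth (column) and elevation
    (row); the pixel here stores range, though an intensity channel
    could be stored the same way. Returns the (rows, cols) image.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                  # horizontal angle, [-pi, pi]
    elevation = np.arcsin(z / r)                # vertical angle
    col = ((azimuth + np.pi) / (2 * np.pi) * cols).astype(int) % cols
    row = ((fov_up - elevation) / (fov_up - fov_down) * rows).astype(int)
    row = np.clip(row, 0, rows - 1)
    img = np.zeros((rows, cols))
    img[row, col] = r
    return img
```

A single point straight ahead on the sensor's horizontal plane lands in the middle row at the column corresponding to zero azimuth.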
Findings
Experimental results show that the method performs well. The absolute displacement accuracy of the laser odometry is improved by 27.1%, the relative displacement accuracy by 33.5% and the relative angle accuracy by 23.8% after using the adaptive intensity feature extraction method. The position error is reduced by 30% after removing dynamic targets.
Originality/value
Compared with the LiDAR odometry and mapping (LOAM) algorithm, the method offers greater robustness and accuracy in mapping and localization.
Hui Xiong, Youping Chen, Xiaoping Li, Bing Chen and Jun Zhang
Abstract
Purpose
The purpose of this paper is to present a scan matching simultaneous localization and mapping (SLAM) algorithm based on particle filter to generate the grid map online. It mainly focuses on reducing the memory consumption and alleviating the loop closure problem.
Design/methodology/approach
The proposed method alleviates the loop closure problem by improving the accuracy of the robot’s pose. First, two improvements were applied to enhance the accuracy of hill climbing scan matching. Second, a particle filter was used to maintain the diversity of the robot’s pose and to supply potential seeds to the hill climbing scan matching, ensuring that the best match point is the global optimum. The proposed method reduces memory consumption by maintaining only a single grid map.
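Seeding hill climbing from multiple particles can be illustrated on a toy multimodal score surface. This is a generic sketch with hypothetical names, not the paper's scan-matching implementation:

```python
import numpy as np

def hill_climb(score, seed, step=0.5, min_step=1e-3):
    """Greedy hill climbing over a 2D pose (x, y): try axis-aligned
    moves and shrink the step whenever no move improves the score.
    """
    pose = np.asarray(seed, dtype=float)
    while step > min_step:
        moved = False
        for d in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            cand = pose + d
            if score(cand) > score(pose):
                pose, moved = cand, True
                break
        if not moved:
            step *= 0.5  # refine the search around the current optimum
    return pose

def seeded_scan_match(score, seeds):
    """Run hill climbing from each particle-supplied seed and keep the
    best result, so a multimodal score surface does not trap the
    matcher in a local optimum.
    """
    results = [hill_climb(score, s) for s in seeds]
    return max(results, key=score)
```

With a score surface that has a local peak and a higher global peak, a single badly placed seed converges to the local peak, while the seeded variant recovers the global one.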
Findings
Simulation and experimental results show that this method can build a consistent map of a complex environment while reducing memory consumption and alleviating the loop closure problem.
Originality/value
In this paper, a new SLAM algorithm has been proposed. It can reduce the memory consumption and alleviate the loop closure problem without lowering the accuracy of the generated grid map.
Ling Chen, Sen Wang, Klaus McDonald‐Maier and Huosheng Hu
Abstract
Purpose
The main purpose of this paper is to investigate two key elements of localization and mapping of Autonomous Underwater Vehicle (AUV), i.e. to overview various sensors and algorithms used for underwater localization and mapping, and to make suggestions for future research.
Design/methodology/approach
The authors first review the various sensors and algorithms used for AUVs in terms of their basic working principles, characteristics, advantages and disadvantages. A statistical analysis is carried out over 35 AUV platforms according to the application circumstances of the sensors and algorithms.
Findings
As real‐world applications have different requirements and specifications, it is necessary to select the most appropriate one by balancing various factors such as accuracy, cost, size, etc. Although highly accurate localization and mapping in an underwater environment is very difficult, more and more accurate and robust navigation solutions will be achieved with the development of both sensors and algorithms.
Research limitations/implications
This paper provides an overview of state-of-the-art underwater localisation and mapping algorithms and systems. No experiments are conducted for verification.
Practical implications
The paper will give readers a clear guideline for finding suitable underwater localisation and mapping algorithms and systems for the practical applications at hand.
Social implications
There is a wide range of audiences who will benefit from reading this comprehensive survey of the autonomous localisation and mapping of AUVs.
Originality/value
The paper will provide useful information and suggestions to research students, engineers and scientists who work in the field of autonomous underwater vehicles.