K.M. Ibrahim Khalilullah, Shunsuke Ota, Toshiyuki Yasuda and Mitsuru Jindai
Abstract
Purpose
The purpose of this study is to develop a cost-effective autonomous wheelchair robot navigation method that assists the aging population.
Design/methodology/approach
Navigation in outdoor environments is still a challenging task for an autonomous mobile robot because outdoor environments are highly unstructured and varied. This study examines a complete vision-guided, real-time approach to robot navigation on urban roads, based on drivable road area detection using deep learning. During navigation, the camera takes a snapshot of the road, and the captured image is converted into an illuminant-invariant image. A deep belief neural network then takes this image as input and extracts discriminative abstract features, using a general-purpose learning procedure, for detection. During obstacle avoidance, the robot estimates the distance to the obstacle from the calibrated camera's estimated parameters and navigates around the obstacle.
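As a rough illustration of the illuminant-invariant conversion step, the widely used log-chromaticity projection can be sketched as follows. This is a minimal sketch, not the paper's exact pipeline; the projection angle is camera-dependent, and the 40-degree value below is a placeholder, not an actual calibration:

```python
import numpy as np

def illuminant_invariant(rgb, theta_deg=40.0):
    """Project log-chromaticity coordinates onto an invariant axis.

    rgb: HxWx3 float array with values in (0, 1].
    theta_deg: camera-dependent calibration angle (placeholder value).
    """
    eps = 1e-6
    r = rgb[..., 0] + eps
    g = rgb[..., 1] + eps
    b = rgb[..., 2] + eps
    # 2-D log-chromaticity coordinates (green as the reference channel)
    x1 = np.log(r / g)
    x2 = np.log(b / g)
    # Project onto the invariant direction; illumination changes
    # (e.g. shadows) move points mostly along the orthogonal direction,
    # so the road surface appears more uniform in the result.
    theta = np.deg2rad(theta_deg)
    inv = x1 * np.cos(theta) + x2 * np.sin(theta)
    # Normalise to [0, 1] for use as a network input image
    inv = (inv - inv.min()) / (inv.max() - inv.min() + eps)
    return inv

# Example on a small random image
img = np.random.rand(4, 4, 3)
gray = illuminant_invariant(img)
```

The invariant image suppresses shadow boundaries, which is why it makes a more stable input for the road classifier than raw RGB.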
Findings
The developed method is implemented on a wheelchair robot and verified by navigating the robot on different types of curved urban roads. Navigation in real environments indicates that the wheelchair robot can move safely from one place to another. The navigation performance of the developed method, and a comparison with laser range finder (LRF)-based methods, are demonstrated through experiments.
Originality/value
This study develops a cost-effective navigation method that uses a single camera. It also exploits the advantages of deep learning techniques for robust classification of the drivable road area, and it outperforms LRF-based methods in LRF-denied environments.
Abstract
Purpose
Wheelchair robot navigation in different weather conditions using a single camera is still a challenging task. The purpose of this study is to develop an autonomous wheelchair robot navigation method for different weather conditions, using single-camera vision, to assist physically disabled people.
Design/methodology/approach
A road detection method, called dimensionality reduction deep belief neural network (DRDBNN), is proposed for drivable road detection. Owing to its dimensionality reduction ability, the DRDBNN detects the drivable road area quickly enough to control the robot in real time. A feed-forward neural network, the evolved neural controller (ENC), controls the robot for boundary-following navigation. The robot detects road junction areas and navigates along the road, except at junctions, using the calibrated camera and the ENC. At road junctions, it makes turning decisions using Google Maps data, thus reaching the final destination.
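The camera-based distance estimation that underpins this navigation can be sketched under flat-ground, pinhole-camera assumptions. The height, tilt, and focal-length values below are illustrative, not the robot's actual calibration:

```python
import math

def ground_distance(v_pixel, f_pixels, v0, cam_height, tilt_deg):
    """Estimate ground-plane distance to an image row via a pinhole model.

    Assumes a camera mounted cam_height metres above flat ground,
    tilted down by tilt_deg, with focal length f_pixels and
    principal-point row v0 (all values here are illustrative).
    """
    # Angle below the optical axis subtended by the pixel row
    alpha = math.atan2(v_pixel - v0, f_pixels)
    # Total depression angle below the horizontal
    beta = math.radians(tilt_deg) + alpha
    if beta <= 0:
        raise ValueError("ray does not intersect the ground plane")
    # Flat-ground intersection: distance = height / tan(depression)
    return cam_height / math.tan(beta)

# Example: a point 300 px below the principal point, 700 px focal
# length, camera 0.9 m high and tilted 10 degrees downward
d = ground_distance(v_pixel=540, f_pixels=700.0, v0=240.0,
                    cam_height=0.9, tilt_deg=10.0)
```

In practice the intrinsics (f_pixels, v0) and extrinsics (height, tilt) would come from a standard camera calibration, after which each image row maps directly to a distance along the road.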
Findings
The developed method is tested on a wheelchair robot in real environments. Navigation in real environments indicates that the wheelchair robot moves safely from source to destination by following the road boundary. The navigation performance of the developed method in different weather conditions is demonstrated through experiments.
Originality/value
The wheelchair robot can navigate in different weather conditions, and the detection process is faster than that of the previous DBNN method. The proposed ENC uses only distance information from the detected road area to control the robot for boundary-following navigation. In addition, the robot uses Google Maps data for making turning decisions and navigating road junctions.
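The ENC as described amounts to a small feed-forward network mapping boundary-distance readings to wheel commands. A minimal sketch follows; the weights here are random placeholders (in the study they would be obtained by evolution), and the five-sample input layout is a hypothetical choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def enc_forward(distances, w_hidden, w_out):
    """Feed-forward controller sketch: maps distance readings taken
    along the detected road boundary to left/right wheel speeds.
    tanh keeps both hidden activations and outputs in [-1, 1]."""
    h = np.tanh(distances @ w_hidden)   # hidden layer
    return np.tanh(h @ w_out)           # two wheel-speed outputs

# Hypothetical layout: 5 boundary-distance samples, 4 hidden units,
# 2 outputs (left and right wheel speed); random, un-evolved weights
w1 = rng.normal(size=(5, 4))
w2 = rng.normal(size=(4, 2))
speeds = enc_forward(np.array([1.2, 1.0, 0.9, 1.1, 1.3]), w1, w2)
```

Because the controller consumes only distances, it is agnostic to lighting and weather once the road boundary itself has been detected reliably.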