K.M. Ibrahim Khalilullah, Shunsuke Ota, Toshiyuki Yasuda and Mitsuru Jindai
Abstract
Purpose
The purpose of this study is to develop a cost-effective autonomous wheelchair robot navigation method that assists the aging population.
Design/methodology/approach
Navigation in outdoor environments remains a challenging task for an autonomous mobile robot because outdoor environments are highly unstructured and vary widely in their characteristics. This study examines a complete vision-guided, real-time approach to robot navigation on urban roads based on drivable road area detection using deep learning. During navigation, the camera takes a snapshot of the road, and the captured image is converted into an illuminant-invariant image. A deep belief neural network then takes this image as input and extracts discriminative abstract features, using a general-purpose learning procedure, for detection. During obstacle avoidance, the robot measures the distance to the obstacle using the estimated parameters of the calibrated camera, and it navigates while avoiding obstacles.
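The abstract gives no code, but two of the steps it names lend themselves to a minimal sketch: the illuminant-invariant transform (here a Finlayson-style log-chromaticity projection, a common choice; the angle `theta` is a hypothetical calibration value, not from the paper) and distance estimation from calibrated camera parameters under a flat-ground assumption. The deep belief network itself is omitted.

```python
import numpy as np

def illuminant_invariant(rgb, theta=0.65):
    """Project log-chromaticity onto an invariant axis.

    theta is a camera-dependent calibration angle; 0.65 rad is a
    hypothetical placeholder, not a value from the paper.
    """
    rgb = rgb.astype(np.float64) + 1e-6          # avoid log(0)
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])   # log(R/G)
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])   # log(B/G)
    inv = log_rg * np.cos(theta) + log_bg * np.sin(theta)
    # Rescale to [0, 1]; this image would then be fed (patch-wise)
    # to the deep belief network for drivable-road classification.
    return (inv - inv.min()) / (inv.ptp() + 1e-6)

def ground_distance(v_pixel, f, h, v0):
    """Distance to a road point imaged at pixel row v_pixel, under a
    flat-ground assumption: d = f * h / (v - v0), with focal length f
    (pixels), camera height h (metres) and principal-point row v0.
    """
    return f * h / (v_pixel - v0)
```

In such a pipeline, `ground_distance` would be applied to the lowest image row of each detected obstacle to obtain its range from the calibrated camera.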
Findings
The developed method is implemented on a wheelchair robot and verified by navigating the robot on different types of curved urban roads. Navigation in real environments indicates that the wheelchair robot can move safely from one place to another. The navigation performance of the developed method, and a comparison with laser range finder (LRF)-based methods, was demonstrated through experiments.
Originality/value
This study develops a cost-effective navigation method that uses a single camera. Additionally, it exploits the advantages of deep learning techniques for robust classification of the drivable road area. It performs better than LRF-based methods in LRF-denied environments.
K.M. Ibrahim Khalilullah, Shunsuke Ota, Toshiyuki Yasuda and Mitsuru Jindai
Abstract
Purpose
Wheelchair robot navigation in different weather conditions using a single camera is still a challenging task. The purpose of this study is to develop an autonomous wheelchair robot navigation method for different weather conditions, using single-camera vision, to assist physically disabled people.
Design/methodology/approach
A road detection method, called dimensionality reduction deep belief neural network (DRDBNN), is proposed for drivable road detection. Owing to its dimensionality reduction ability, the DRDBNN detects the drivable road area quickly enough to control the robot in real time. A feed-forward neural network, the evolved neural controller (ENC), controls the robot for boundary-following navigation. The robot detects road junction areas and navigates along the road, except at junctions, using the calibrated camera and the ENC. At road junctions, it makes turning decisions using Google Maps data, thus reaching the final destination.
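As a rough illustration of the controller described above, the sketch below shows a feed-forward network that maps boundary-distance measurements to wheel velocities. The weights would be found by an evolutionary search (omitted here), and all shapes, names and values are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def enc_step(distances, w1, b1, w2, b2):
    """One control step of an evolved neural controller (ENC) sketch.

    distances : distances from the robot to the detected road boundary,
                sampled at several image columns (the ENC's only input).
    w1, b1, w2, b2 : parameters found by an evolutionary algorithm;
                the evolution loop itself is omitted here.
    Returns (v_left, v_right) wheel commands in [-1, 1].
    """
    hidden = np.tanh(distances @ w1 + b1)   # single hidden layer
    return np.tanh(hidden @ w2 + b2)        # two outputs: wheel speeds

# Hypothetical usage: 5 boundary samples, 6 hidden units, 2 outputs.
rng = np.random.default_rng(0)
params = (rng.normal(size=(5, 6)), np.zeros(6),
          rng.normal(size=(6, 2)), np.zeros(2))
v_left, v_right = enc_step(np.array([1.2, 1.0, 0.9, 0.8, 0.7]), *params)
```

At junctions this reactive loop would be suspended and the turn direction taken from the precomputed Google Maps route, as the abstract describes.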
Findings
The developed method is tested on a wheelchair robot in real environments. Navigation in real environments indicates that the wheelchair robot moves safely from source to destination by following the road boundary. The navigation performance of the developed method in different weather conditions was demonstrated through experiments.
Originality/value
The wheelchair robot can navigate in different weather conditions. The detection process is faster than that of the previous DBNN method. The proposed ENC uses only distance information from the detected road area to control the robot for boundary-following navigation. In addition, the robot uses Google Maps data to make turning decisions and navigate road junctions.
Delowar Hossain, Genci Capi, Mitsuru Jindai and Shin-ichiro Kaneko
Abstract
Purpose
Development of autonomous robot manipulators for human-robot assembly tasks is a key component in reaching high effectiveness. In such tasks, real-time object recognition by the robot is crucial. In addition, simple and safe teaching techniques need to be considered, because the growing presence of small-size robot manipulators in everyday environments requires non-expert operators to teach the robot, and in small-size applications the operator has to teach several different motions in a short time.
Design/methodology/approach
For object recognition, the authors propose a deep belief neural network (DBNN)-based approach. The captured camera image is used as the input of the DBNN, which extracts the object features in its intermediate layers. In addition, the authors developed three teaching systems that utilize iPhone, haptic, and Kinect devices.
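For readers unfamiliar with DBNNs, the sketch below shows the core building block: a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1). A DBNN stacks several such layers, each trained on the hidden activations of the one below, with a supervised output layer on top. The sizes and training details here are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One restricted Boltzmann machine layer of a DBNN (sketch)."""
    def __init__(self, n_visible, n_hidden, rng):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)
        self.b_v = np.zeros(n_visible)

    def hidden(self, v):
        """Hidden-unit activation probabilities for visible batch v."""
        return sigmoid(v @ self.W + self.b_h)

    def cd1(self, v, lr=0.1):
        """One CD-1 pre-training step on a batch v of visible vectors."""
        h = self.hidden(v)                              # positive phase
        v_recon = sigmoid(h @ self.W.T + self.b_v)      # reconstruction
        h_recon = self.hidden(v_recon)                  # negative phase
        self.W += lr * (v.T @ h - v_recon.T @ h_recon) / len(v)
        self.b_h += lr * (h - h_recon).mean(axis=0)
        self.b_v += lr * (v - v_recon).mean(axis=0)
```

After greedy pre-training, the features from the top RBM would feed a classifier layer fine-tuned on labeled object images.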
Findings
The object recognition by the DBNN is robust enough for real-time applications. The robot picks up the object requested by the user and places it in the target location. The three developed teaching systems are easy for non-experienced subjects to use, and they show different performance in terms of task-completion time and accuracy.
Practical implications
The proposed method can ease the use of robot manipulators, helping non-experienced users complete different assembly tasks.
Originality/value
This work applies a DBNN to object recognition and develops three intuitive systems for teaching robot manipulators.