Search results

1 – 5 of 5
Article
Publication date: 15 August 2018

Kensuke Harada, Weiwei Wan, Tokuo Tsuji, Kohei Kikuchi, Kazuyuki Nagata and Hiromu Onda

Abstract

Purpose

This paper aims to automate the picking task needed in robotic assembly. Parts supplied to an assembly process are usually randomly stacked in a box. If randomized bin-picking is introduced to a production process, no part-feeding machines or human workers are needed to arrange the objects before a robot picks them. The authors introduce a learning-based method for randomized bin-picking.

Design/methodology/approach

The authors combine the learning-based approach to randomized bin-picking (Harada et al., 2014b) with iterative visual recognition (Harada et al., 2016a) and show additional experimental results. For learning, the authors use a random forest that explicitly considers the contact between a finger and a neighboring object. The iterative visual recognition method repeatedly captures point clouds with a 3D depth sensor attached at the wrist to obtain a more complete point cloud of the piled objects.
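The random-forest idea above can be illustrated with a minimal sketch. This is not the authors' implementation: the feature names (occlusion ratio, finger-neighbor contact, pile height) and the toy data are hypothetical stand-ins for quantities a real system would extract from the captured point cloud, and the forest is reduced to bootstrapped one-feature decision stumps.

```python
import random

# Hypothetical feature vector per grasp candidate:
#   [occlusion_ratio, finger_contacts_neighbor, pile_height]
# Penalizing occluded areas is modeled by including the occlusion ratio as
# a feature, so the ensemble favors predicting success on less-occluded clouds.

def train_forest(samples, labels, n_trees=15, seed=0):
    """Train an ensemble of one-feature decision stumps on bootstrapped data."""
    rng = random.Random(seed)

    def majority(ys):
        # majority label; ties and empty sides resolve conservatively
        return int(sum(ys) * 2 >= len(ys)) if ys else 0

    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(samples)) for _ in samples]  # bootstrap
        feat = rng.randrange(len(samples[0]))                 # random feature
        thr = sum(samples[i][feat] for i in idx) / len(idx)   # mean split
        left = majority([labels[i] for i in idx if samples[i][feat] <= thr])
        right = majority([labels[i] for i in idx if samples[i][feat] > thr])
        forest.append((feat, thr, left, right))
    return forest

def predict(forest, x):
    """Majority vote of all stumps (1 = grasp predicted to succeed)."""
    votes = [lo if x[f] <= t else hi for f, t, lo, hi in forest]
    return int(sum(votes) * 2 >= len(votes))

# Toy data: grasps on heavily occluded, contact-prone piles tend to fail.
X = [[0.9, 1, 3], [0.8, 1, 3], [0.1, 0, 1], [0.2, 0, 1], [0.15, 0, 1], [0.85, 1, 3]]
y = [0, 0, 1, 1, 1, 0]
model = train_forest(X, y)
```

Note that a handful of samples suffices here, echoing the finding that the random-forest formulation keeps the required training data small.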

Findings

Compared with the authors’ previous research (Harada et al., 2014b; Harada et al., 2016a), the new findings are as follows: by using a random forest, the required number of training samples becomes extremely small. By adding a penalty for occluded areas, the learning-based method predicts success once a point cloud with fewer occluded areas is obtained. The authors analyze the calculation time of the iterative visual recognition and clarify the cases in which a finger contacts neighboring objects.

Originality/value

The originality lies in combining the learning-based approach with the iterative visual recognition and supplying additional experimental results. After a complete point cloud of the piled objects is obtained, the prediction becomes effective.

Details

Industrial Robot: An International Journal, vol. 45 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 8 June 2021

Mohamed Raessa, Weiwei Wan and Kensuke Harada

Abstract

Purpose

This paper aims to present a hierarchical motion planner for planning the manipulation motion to repose long and heavy objects considering external support surfaces.

Design/methodology/approach

The planner includes a task-level layer and a motion-level layer. At the task level, this paper formulates the manipulation planning problem as a graph with grasp poses as nodes and object poses as edges. It considers regrasping and constrained in-hand slip (drooping) while building the graph and finds mixed regrasping and drooping sequences by searching it. The generated sequences autonomously divide the object weight between the arm and the support surface and avoid configuration obstacles. At the motion level, Cartesian planning is used to generate motions between adjacent critical grasp poses of the sequence found by the task-level layer.
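The task-level search can be sketched as a shortest-path query over a small hand-built grasp graph. The node names, edge costs and action labels below are hypothetical, not the paper's data; the point is only that mixed regrasp/droop sequences fall out of an ordinary graph search.

```python
from heapq import heappush, heappop

# Edges encode either a regrasp (release on a stable placement, grab
# elsewhere) or a constrained in-hand slip ("droop") that lets the support
# surface carry part of the object's weight.  Costs are illustrative.
EDGES = {
    # (from_grasp, to_grasp): (action, cost)
    ("g_end", "g_mid"):     ("droop", 1.0),
    ("g_mid", "g_center"):  ("droop", 1.0),
    ("g_end", "g_center"):  ("regrasp", 3.0),  # needs a stable placement
    ("g_center", "g_goal"): ("regrasp", 3.0),
    ("g_mid", "g_goal"):    ("droop", 1.5),
}

def plan(start, goal):
    """Dijkstra over the grasp graph; returns the cheapest action sequence."""
    frontier = [(0.0, start, [])]
    seen = set()
    while frontier:
        cost, node, actions = heappop(frontier)
        if node == goal:
            return actions
        if node in seen:
            continue
        seen.add(node)
        for (u, v), (action, c) in EDGES.items():
            if u == node:
                heappush(frontier, (cost + c, v, actions + [action]))
    return None

print(plan("g_end", "g_goal"))  # → ['droop', 'droop']
```

Here the cheaper droop edges dominate, so the planner slips the grasp along the object rather than regrasping, mirroring how the cost structure decides between the two manipulation modes.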

Findings

Various experiments are carried out to examine the performance of the proposed planner. The results show improved capability of robot arms to manipulate long and heavy objects using the proposed planner.

Originality/value

The authors’ contribution is a graph-based planning system, the first to reason about both in-hand and regrasp manipulation motion considering external supports. On one hand, the planner integrates regrasping and drooping to realize in-hand manipulation with external support. On the other hand, it switches states by releasing and regrasping the object when it is stably placed. The search graph’s nodes could be retrieved from remote cloud servers that provide a large amount of pre-annotated data to implement cyber intelligence.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 20 December 2017

Weiwei Wan, Kensuke Harada and Kazuyuki Nagata

Abstract

Purpose

The purpose of this paper is to develop a planner for finding an optimal assembly sequence for robots to assemble objects. Each manipulated object in the optimal sequence is stable during assembly, easy to grasp and robust to motion uncertainty.

Design/methodology/approach

The input to the planner is the mesh models of the objects, the relative poses between the objects in the assembly and the final pose of the assembly. The output is an optimal assembly sequence, namely, in which order one should assemble the objects, from which directions the objects should be dropped and which candidate grasps to use for each object. The proposed planner finds the optimal solution by automatically permuting, evaluating and searching the possible assembly sequences considering stability, graspability and assemblability qualities.
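The permute-evaluate-search loop can be sketched in a few lines. The part names and quality numbers below are hypothetical stand-ins for the stability, graspability and assemblability scores that a real planner would compute from the mesh models and poses.

```python
from itertools import permutations

# Quality of placing a part given the set of parts already assembled
# (illustrative values in [0, 1]; an unlisted combination is infeasible).
QUALITY = {
    ("base",   frozenset()):                  1.0,
    ("pillar", frozenset({"base"})):          0.9,
    ("top",    frozenset({"base", "pillar"})): 0.8,
}

def sequence_score(seq):
    """Product of per-step qualities; 0 if any step is infeasible."""
    score, placed = 1.0, frozenset()
    for part in seq:
        score *= QUALITY.get((part, placed), 0.0)
        placed = placed | {part}
    return score

def best_sequence(parts):
    # exhaustive permutation search, as in the planner's outer loop
    return max(permutations(parts), key=sequence_score)

print(best_sequence(["top", "pillar", "base"]))  # → ('base', 'pillar', 'top')
```

Exhaustive permutation is only tractable for small assemblies; the sketch is meant to show where the three quality measures enter the evaluation, not to scale.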

Findings

The proposed planner can plan an optimal sequence that guides robots to perform assembly using translational motion. The sequence provides initial and goal configurations to motion planning algorithms and is ready to be used by robots. The usefulness of the proposed method is verified by both simulations and real-world executions.

Originality/value

The paper proposes an assembly planner that can find an optimal assembly sequence automatically, without assembly orders and directions being taught by skilled human technicians. The planner is expected to substantially improve teaching-less robotic manufacturing.

Details

Assembly Automation, vol. 38 no. 2
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 3 July 2023

Kento Nakatsuru, Weiwei Wan and Kensuke Harada

Abstract

Purpose

This paper aims to study using a mobile manipulator with a collaborative robotic arm component to manipulate objects beyond the robot’s maximum payload.

Design/methodology/approach

This paper proposes a single-shot probabilistic roadmap-based method to plan and optimize manipulation motion with environment support. The method uses an expanded object mesh model to examine contact and randomly explores object motion while keeping contact and securing affordable grasping force. After obtaining the object motion, it generates robotic motion trajectories using an optimization-based algorithm. With the proposed method, the authors plan contact-rich manipulation without explicitly analyzing an object’s contact modes and their transitions; the planner and optimizer determine them automatically.
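The random exploration under contact and force constraints can be illustrated with a deliberately toy model, not the paper's planner: the object is reduced to a point sliding on the support plane z = 0, and the gripper is assumed (hypothetically) to carry a fixed 30% of the object's weight while the support carries the rest.

```python
import random

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def explore(start, goal, weight=100.0, max_grasp_force=40.0, seed=1):
    """Greedy randomized exploration from start toward goal on the support."""
    rng = random.Random(seed)
    path, pos = [start], start
    for _ in range(2000):
        if dist(pos, goal) < 0.05:
            break  # close enough to the goal pose
        if 0.3 * weight > max_grasp_force:
            break  # grasp force no longer affordable in this toy model
        # random small step; staying on the plane is what keeps the contact
        cand = (pos[0] + rng.uniform(-0.1, 0.1),
                pos[1] + rng.uniform(-0.1, 0.1))
        # keep only samples that make progress toward the goal
        if dist(cand, goal) < dist(pos, goal):
            pos = cand
            path.append(pos)
    return path

path = explore((0.0, 0.0), (1.0, 1.0))
```

A real planner would check contact against the expanded mesh and hand the resulting object motion to a trajectory optimizer; here the two constraints appear only as the plane projection and the force bound.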

Findings

The authors conducted experiments and analyses using simulations and real-world executions to examine the method’s performance. The method successfully found manipulation motion that met contact, force and kinematic constraints. It allowed a mobile manipulator to move heavy objects while leveraging supporting forces from environmental obstacles.

Originality/value

This paper presents an automatic approach for solving contact-rich heavy object manipulation problems. Unlike previous methods, the new approach does not need to explicitly analyze contact states and build contact transition graphs, thus providing a new view for robotic grasp-less manipulation, nonprehensile manipulation, manipulation with contact, etc.

Details

Robotic Intelligence and Automation, vol. 43 no. 4
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 8 January 2018

Ixchel G. Ramirez-Alpizar, Kensuke Harada and Eiichi Yoshida

Abstract

Purpose

The aim of this work is to develop a simple planner that automatically plans the motion for a dual-arm manipulator assembling a ring-shaped elastic object into a cylinder. Moreover, it is desirable to keep the deformation as small as possible, because stretching the object can permanently change its size, causing it to fit imperfectly in the cylindrical part and generating undesired gaps between the object and the cylinder.

Design/methodology/approach

The assembly task is divided into two parts: assembly task planning and assembly step planning. The former computes key configurations of the robot’s end-effectors and is based on a simple heuristic method, whereas the latter computes the robot’s motion between key configurations using an optimization-based planner that includes a potential-energy-based cost function for minimizing the object’s deformation.
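The potential-energy-based cost can be sketched with the ring idealized as a linear spring. The rest length, stiffness and cylinder clearance below are made-up numbers, and the candidate set stands in for the configurations the step planner would actually optimize over.

```python
REST_CIRCUMFERENCE = 0.30   # unstretched ring circumference (m), illustrative
STIFFNESS = 500.0           # elastic stiffness k (N/m), illustrative
MIN_CIRCUMFERENCE = 0.32    # needed to slip over the cylinder (m), illustrative

def deformation_energy(circumference):
    """Elastic potential energy E = 0.5 * k * stretch**2 of the ring."""
    stretch = max(0.0, circumference - REST_CIRCUMFERENCE)
    return 0.5 * STIFFNESS * stretch ** 2

def best_step(candidates):
    """Lowest-energy candidate that still clears the cylinder."""
    feasible = [c for c in candidates if c >= MIN_CIRCUMFERENCE]
    return min(feasible, key=deformation_energy)

candidates = [0.30, 0.31, 0.32, 0.33, 0.34, 0.35]
print(best_step(candidates))  # → 0.32
```

Minimizing this energy is exactly what keeps the stretch, and hence the permanent size change the abstract warns about, as small as the insertion geometry allows.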

Findings

The optimization-based planner is shown to be effective in minimizing the deformation of the ring-shaped object, and the simple heuristic approach is demonstrated to be valid for inserting deformable objects into a cylinder. Experimental results show that the object can be kept undeformed during the first part of the assembly task, reducing the time during which it is stretched.

Originality/value

A simple assembly planner for inserting ring-shaped deformable objects was developed and validated through several experiments. The proposed planner is able to insert ring-shaped objects without using any sensor (visual and/or force) feedback. The only feedback used is the position of the robot’s end-effectors, which is usually available for any robot.

Details

Assembly Automation, vol. 38 no. 2
Type: Research Article
ISSN: 0144-5154

Keywords
