Mohamed Raessa, Weiwei Wan and Kensuke Harada
Abstract
Purpose
This paper aims to present a hierarchical motion planner for planning the manipulation motion to repose long and heavy objects considering external support surfaces.
Design/methodology/approach
The planner includes a task-level layer and a motion-level layer. At the task level, this paper formulates the manipulation planning problem as a graph whose nodes are grasp poses and whose edges are object poses. The planner considers both regrasping and constrained in-hand slip (drooping) when building the graph, and finds mixed regrasping and drooping sequences by searching it. The generated sequences autonomously divide the object's weight between the arm and the support surface and avoid configuration obstacles. At the robot motion level, Cartesian planning generates motions between adjacent critical grasp poses of the sequence found by the task-level layer.
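The task-level search described above can be sketched as a shortest-path query over grasp-pose nodes whose edges are labeled either regrasp or droop. Everything in this toy (node names, move labels, costs) is invented for illustration:

```python
import heapq

def plan_sequence(graph, start, goal):
    """Dijkstra search over a grasp-pose graph whose edges are either
    'regrasp' (release at a stable placement, then grasp again) or
    'droop' (constrained in-hand slip under external support)."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, move, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, (node, move)
                heapq.heappush(pq, (nd, nxt))
    # walk back to recover the mixed regrasp/droop sequence
    seq, node = [], goal
    while node != start:
        node, move = prev[node]
        seq.append(move)
    return list(reversed(seq))

# toy graph: node -> [(neighbor, move_type, cost)]
toy = {
    "g0": [("g1", "droop", 1.0), ("g2", "regrasp", 3.0)],
    "g1": [("g2", "regrasp", 1.0)],
    "g2": [("g3", "droop", 1.0)],
}
print(plan_sequence(toy, "g0", "g3"))  # ['droop', 'regrasp', 'droop']
```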
Findings
Various experiments are carried out to examine the performance of the proposed planner. The results show improved capability of robot arms to manipulate long and heavy objects using the proposed planner.
Originality/value
The authors' contribution is that they are the first to develop a graph-based planning system that reasons about both in-hand and regrasp manipulation motion considering external supports. On one hand, the planner integrates regrasping and drooping to realize in-hand manipulation with external support. On the other hand, it switches states by releasing and regrasping objects when the object is stably placed. The search graph's nodes can be retrieved from remote cloud servers that provide a large amount of pre-annotated data to implement cyber intelligence.
Kento Nakatsuru, Weiwei Wan and Kensuke Harada
Abstract
Purpose
This paper aims to study using a mobile manipulator with a collaborative robotic arm component to manipulate objects beyond the robot’s maximum payload.
Design/methodology/approach
This paper proposes a single-shot probabilistic roadmap-based method to plan and optimize manipulation motion with environment support. The method uses an expanded object mesh model to examine contact and randomly explores object motion while maintaining contact and securing an affordable grasping force. After obtaining the object motion, it generates robotic motion trajectories using an optimization-based algorithm. With the proposed method, the authors plan contact-rich manipulation without explicitly analyzing an object's contact modes and their transitions; the planner and optimizer determine them automatically.
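A minimal single-query PRM in this spirit can be sketched as follows. The 2D poses, the toy contact predicate (the object must stay in a band near a support surface) and the crude midpoint edge check are all invented stand-ins for the paper's mesh-based contact examination and force constraints:

```python
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def prm_plan(start, goal, valid, n_samples=200, k=10, seed=0):
    """Single-query PRM sketch: sample object poses, keep only those
    satisfying the contact/validity predicate, wire k-nearest neighbors
    whose edge passes a (crude) validity check, then BFS start -> goal."""
    rng = random.Random(seed)
    nodes = [start, goal]
    while len(nodes) < n_samples:
        p = (rng.uniform(0, 1), rng.uniform(0, 1))
        if valid(p):
            nodes.append(p)
    adj = {i: set() for i in range(len(nodes))}
    for i in range(len(nodes)):
        near = sorted(range(len(nodes)), key=lambda j: dist(nodes[i], nodes[j]))
        for j in near[1:k + 1]:
            mid = ((nodes[i][0] + nodes[j][0]) / 2, (nodes[i][1] + nodes[j][1]) / 2)
            if valid(mid):  # toy edge check at the midpoint only
                adj[i].add(j); adj[j].add(i)
    # breadth-first search over the roadmap (index 0 = start, 1 = goal)
    frontier, prev = [0], {0: None}
    while frontier:
        cur = frontier.pop(0)
        if cur == 1:
            path = []
            while cur is not None:
                path.append(nodes[cur]); cur = prev[cur]
            return list(reversed(path))
        for nxt in adj[cur]:
            if nxt not in prev:
                prev[nxt] = cur; frontier.append(nxt)
    return None

# toy "contact" predicate: poses must stay in the lower band y < 0.5,
# mimicking an object constrained against a support surface
path = prm_plan((0.05, 0.1), (0.95, 0.1), valid=lambda p: p[1] < 0.5)
```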
Findings
The authors conducted experiments and analyses using simulations and real-world executions to examine the method’s performance. The method successfully found manipulation motion that met contact, force and kinematic constraints. It allowed a mobile manipulator to move heavy objects while leveraging supporting forces from environmental obstacles.
Originality/value
This paper presents an automatic approach for solving contact-rich heavy object manipulation problems. Unlike previous methods, the new approach does not need to explicitly analyze contact states and build contact transition graphs, thus providing a new view for robotic grasp-less manipulation, nonprehensile manipulation, manipulation with contact, etc.
Jian-jun Yuan, Shuai Wang, Weiwei Wan, Yanxue Liang, Luo Yang and Yifan Liu
Abstract
Purpose
The aim of this paper is to implement direct teaching of industrial manipulators using current sensors. The traditional way to implement teaching is either to use a teaching pendant, which is time-consuming, or to use force sensors, which increases system cost. To overcome these disadvantages, a novel method is explored in the paper by using current sensors installed at the joints as torque observers.
Design/methodology/approach
The method uses current sensors installed at each joint of a manipulator as torque observers and estimates external forces from the differences between the joint-driven torque computed from the current sensor values and the commanded motor-driven torque. The joint-driven torque is computed by cancelling out both pre-calibrated gravity and friction resistance (compensation). To make the method robust, the paper also presents a strategy to detect unexpected slow drifts and zero external forces, and to stop the robot in those situations.
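Per joint, the observer described above reduces to subtracting the commanded torque and the pre-calibrated gravity and friction terms from the current-derived driven torque. A sketch with invented values (the torque constant `kt`, the drift threshold and the window length are assumptions for illustration, not the paper's calibration):

```python
def external_torque(current, kt, tau_cmd, tau_gravity, tau_friction):
    """External-torque observer sketch for one joint: motor current times
    an (assumed) torque constant gives the driven torque; subtracting the
    commanded torque and the pre-calibrated gravity and friction terms
    leaves an estimate of the externally applied torque."""
    tau_driven = kt * current
    return tau_driven - tau_cmd - tau_gravity - tau_friction

def should_stop(tau_ext_history, drift_thresh=0.05):
    """Safety-check sketch for slow drifts: a small but persistently
    nonzero estimate over a window suggests calibration drift rather
    than an operator's hand, so the robot should stop."""
    window = tau_ext_history[-10:]
    return len(window) == 10 and all(0 < abs(t) < drift_thresh for t in window)

# toy values (assumed, not from the paper): 2 A current, kt = 0.5 Nm/A
tau_ext = external_torque(current=2.0, kt=0.5, tau_cmd=0.3,
                          tau_gravity=0.4, tau_friction=0.1)  # -> 0.2 Nm
```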
Findings
Experimental results demonstrated that compensating the joint torques using both pre-calibrated gravity and friction resistance achieves performance comparable to a force sensor installed on the end effector of a manipulator. It is possible to implement satisfactory direct teaching without force sensors on 7-degree-of-freedom manipulators with large mass and friction resistance.
Originality/value
The main contribution of the paper is that the authors cancel out both pre-calibrated gravity and friction resistance to improve direct teaching using only current sensors, and develop methods to avoid unsafe situations such as slow drifts. The method will benefit industrial manipulators, especially those with large mass and friction resistance, in realizing flexible and reliable direct teaching.
Weiwei Wan, Kensuke Harada and Kazuyuki Nagata
Abstract
Purpose
The purpose of this paper is to develop a planner for finding an optimal assembly sequence for robots to assemble objects. Each manipulated object in the optimal sequence is stable during assembly, easy to grasp and robust to motion uncertainty.
Design/methodology/approach
The input to the planner is the mesh models of the objects, the relative poses between the objects in the assembly and the final pose of the assembly. The output is an optimal assembly sequence, namely, in which order the objects should be assembled, from which directions they should be dropped and candidate grasps of each object. The proposed planner finds the optimal solution by automatically permuting, evaluating and searching the possible assembly sequences considering stability, graspability and assemblability qualities.
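The permute-evaluate-search loop can be sketched as a brute-force version (the real planner prunes infeasible sequences rather than enumerating all of them, and the toy quality functions below are invented):

```python
from itertools import permutations

def plan_assembly(parts, stability, graspability, assemblability):
    """Brute-force sketch of the sequence search: permute the assembly
    order and keep the sequence maximizing the product of the per-step
    stability, graspability and assemblability scores."""
    best, best_score = None, -1.0
    for order in permutations(parts):
        score, placed = 1.0, []
        for p in order:
            score *= stability(p, placed) * graspability(p) * assemblability(p, placed)
            placed.append(p)
        if score > best_score:
            best, best_score = order, score
    return best, best_score

# toy quality functions (invented): the "base" must go down first to be stable
qual = {"base": 0.9, "peg": 0.8, "cap": 0.7}
stab = lambda p, placed: 1.0 if (p == "base" or "base" in placed) else 0.1
order, score = plan_assembly(["peg", "base", "cap"],
                             stab, qual.get, lambda p, placed: 1.0)
```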
Findings
The proposed planner can plan an optimal sequence to guide robots in assembling objects using translational motion. The sequence provides initial and goal configurations to motion planning algorithms and is ready to be used by robots. The usefulness of the proposed method is verified by both simulation and real-world executions.
Originality/value
The paper proposes an assembly planner that can find an optimal assembly sequence automatically, without skilled human technicians teaching the assembly orders and directions. The planner is expected to improve teaching-less robotic manufacturing.
Jian-jun Yuan, Weiwei Wan, Xiajun Fu, Shuai Wang and Ning Wang
Abstract
Purpose
This paper aims to propose a novel method to identify the parameters of robotic manipulators using the torque exerted by the robot joint motors (measured by current sensors).
Design/methodology/approach
Previous studies used additional sensors, such as force sensors and inertial measurement units, or an additional payload mounted on the end-effector, to perform parameter identification. The settings of these previous works were complicated, and they could identify only part of the parameters. This paper instead uses the torque exerted by each joint while performing Fourier periodic excitation trajectories. It divides the parameters into a linear part and a non-linear part, and uses linear least squares (LLS) parameter estimation and dual-swarm-based particle swarm optimization (DPso) to compute the linear and non-linear parts, respectively.
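The linear part of the identification reduces to a least-squares solve of tau = W(q, qd, qdd) theta stacked over the excitation trajectory. A one-joint toy with an assumed regressor (inertia, viscous friction and Coulomb friction columns); the DPso step for the non-linear part is omitted:

```python
import numpy as np

def identify_linear_params(W, tau):
    """LLS step sketch: tau = W @ theta, where theta stacks the linearly
    appearing inertial parameters plus viscous and Coulomb friction
    coefficients; solve in the least-squares sense."""
    theta, *_ = np.linalg.lstsq(W, tau, rcond=None)
    return theta

# toy 1-joint model tau = I*qdd + b*qd + c*sign(qd), with assumed
# ground-truth parameters [0.5, 0.2, 0.1] and a random excitation
rng = np.random.default_rng(0)
qd = rng.uniform(-1, 1, 100)
qdd = rng.uniform(-1, 1, 100)
W = np.column_stack([qdd, qd, np.sign(qd)])
tau = W @ np.array([0.5, 0.2, 0.1])   # noiseless measurements
theta = identify_linear_params(W, tau)
```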
Findings
The settings are simpler, and the method can identify the dynamic parameters, viscous friction coefficients and Coulomb friction coefficients of two joints at the same time. A SIASUN 7-Axis Flexible Robot is used to experimentally validate the proposal. Comparison between the predicted and ground-truth torque values of the joints confirms the effectiveness of the method.
Originality/value
The proposed method identifies two joints at the same time with satisfactory precision and high efficiency, and identification errors do not accumulate across joints.
Jianjun Yuan, Yingjie Qian, Liming Gao, Zhaohan Yuan and Weiwei Wan
Abstract
Purpose
This paper aims to propose an improved sensorless position-based force controller in the gravitational direction for applications including polishing, milling and deburring.
Design/methodology/approach
The first issue is external force/torque estimation at the end-effector. Using the motors' current information and the Moore-Penrose generalized inverse matrix, the end-effector force/torque can be derived from the external torques of all joints in nonsingular cases. The second issue is the force control strategy, which is based on a position-based impedance control model. Two novel improvements were made to achieve better performance: one is the combination of impedance control and explicit force control; the other is real-time prediction of the surface's shape, allowing the controller to adapt to arbitrary surfaces.
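In nonsingular cases, the estimation step reduces to applying the Moore-Penrose pseudoinverse of the transposed Jacobian to the external joint torques. A toy sketch with an invented 2-link planar Jacobian:

```python
import numpy as np

def end_effector_wrench(J, tau_ext):
    """Force-estimation sketch: with external joint torques
    tau_ext = J.T @ F, the end-effector wrench follows from the
    Moore-Penrose pseudoinverse in nonsingular configurations:
    F = pinv(J.T) @ tau_ext."""
    return np.linalg.pinv(J.T) @ tau_ext

# assumed example Jacobian at a nonsingular configuration
J = np.array([[-0.5, -0.2],
              [ 0.8,  0.3]])
F = np.array([1.0, -2.0])   # "true" tip force for the round-trip check
tau = J.T @ F               # what the joint-side observer would report
```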
Findings
The results of the validation experiments indicate that the estimation of the external force and the prediction of the surface's shape are credible, and that the position-based constant-contact-force controller in the gravitational direction is functional. The accuracy of force tracking is adequate for targeted applications such as polishing, deburring and milling.
Originality/value
The value of this paper lies in three aspects: sensorless external force estimation, the combination of impedance control and explicit force control, and independence from surface shape information achieved by real-time surface prediction.
Kensuke Harada, Weiwei Wan, Tokuo Tsuji, Kohei Kikuchi, Kazuyuki Nagata and Hiromu Onda
Abstract
Purpose
This paper aims to automate the picking task needed in robotic assembly. Parts supplied to an assembly process are usually randomly stacked in a box. If randomized bin-picking is introduced to a production process, no part-feeding machines or human workers are needed to arrange the objects in advance for a robot to pick. The authors introduce a learning-based method for randomized bin-picking.
Design/methodology/approach
The authors combine the learning-based approach to randomized bin-picking (Harada et al., 2014b) with iterative visual recognition (Harada et al., 2016a) and show additional experimental results. For learning, they use a random forest that explicitly considers the contact between a finger and a neighboring object. The iterative visual recognition method repeatedly captures point clouds with a 3D depth sensor attached at the wrist to obtain a more complete point cloud of the piled objects.
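The prediction step can be caricatured as a majority vote of decision rules over hand-crafted features. The two features and the hand-written "trees" below are invented stand-ins for the trained random forest, chosen to echo the quantities the paper emphasizes (occluded area and finger contact with neighbors):

```python
def predict_pick_success(features, forest):
    """Majority vote over simple decision rules, standing in for the
    trained random forest: each 'tree' maps a feature dict to a 0/1
    success vote."""
    votes = sum(tree(features) for tree in forest)
    return votes > len(forest) / 2

# toy "trees": hand-written stumps on two invented features
forest = [
    lambda f: f["occluded_ratio"] < 0.3,
    lambda f: not f["finger_contacts_neighbor"],
    lambda f: f["occluded_ratio"] < 0.2 and not f["finger_contacts_neighbor"],
]
good = {"occluded_ratio": 0.1, "finger_contacts_neighbor": False}
bad = {"occluded_ratio": 0.6, "finger_contacts_neighbor": True}
```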
Findings
Compared with the authors' previous research (Harada et al., 2014b; Harada et al., 2016a), the new findings are as follows: by using a random forest, the required number of training data becomes extremely small, and by adding a penalty to the occluded area, the learning-based method predicts success once a point cloud with a less occluded area is obtained. The authors analyze the calculation time of the iterative visual recognition and further clarify the cases where a finger contacts neighboring objects.
Originality/value
The originality lies in combining the learning-based approach with iterative visual recognition and supplying additional experimental results. After obtaining a more complete point cloud of the piled objects, the prediction becomes effective.
Xiaojun Wu, Peng Li, Jinghui Zhou and Yunhui Liu
Abstract
Purpose
Scattered parts are laid randomly during the manufacturing process and are difficult to recognize and manipulate. This study aims to grasp scattered parts with a manipulator using a camera and a learning method.
Design/methodology/approach
In this paper, a cascaded convolutional neural network (CNN) method for robotic grasping based on monocular vision and a small data set of scattered parts is proposed. The method is divided into three steps: object detection, monocular depth estimation and keypoint estimation. In the first stage, an object detection network is improved to effectively locate candidate parts. The second stage contains a neural network structure and a corresponding training method that learns from high-resolution input images to obtain depth estimates. The keypoint estimation in the third step is expressed as a cumulative form of multi-scale predictions from a network operating on the red-green-blue-depth (RGBD) map acquired from the object detection and depth estimation stages. Finally, a grasping strategy is studied to achieve successful and continuous grasping. In the experiments, different workpieces are used to validate the proposed method; the best grasping success rate is more than 80%.
Findings
By using the CNN-based method to extract the keypoints of the scattered parts and calculating the probability of a successful grasp, the success rate is increased.
Practical implications
This method and robotic system can be used for picking and placing in most industrial automatic manufacturing or assembly processes.
Originality/value
Unlike standard parts, scattered parts are randomly laid and are difficult for the robot to recognize and grasp. This study uses a cascaded CNN to extract the keypoints of the scattered parts, which are also labeled with the probability of successful grasping. Experiments demonstrate the grasping of those scattered parts.
Jingmei Zhai, Xianwen Zeng and Ziqing Su
Abstract
Purpose
To ensure accurate position and force control of a massage robot working on a human body with unknown skin characteristics, this study aims to propose a novel intelligent impedance control system.
Design/methodology/approach
First, a skin dynamic model (SDM) is introduced to describe force-deformation behavior on the human body as a feed-forward term for force control. Then a particle swarm optimization (PSO) method combined with graph-based knowledge transfer learning (GKT) is studied to effectively identify personalized skin parameters. Finally, a self-tuning impedance control strategy is designed to accommodate the uncertainty of skin dynamics, system delay and signal noise that exist in practical applications.
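The PSO identification step can be sketched in plain form below. The graph-based knowledge transfer (which the paper uses to seed particles from previously identified subjects) is omitted, and the toy skin model and its parameters are invented for illustration:

```python
import random

def pso_identify(loss, dim, bounds, n_particles=20, iters=60, seed=0):
    """Plain PSO sketch: particles start uniformly at random (the paper
    instead seeds them via knowledge transfer) and minimize a fitting
    loss over the skin-parameter space."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# toy skin model force = k*deformation + c*deformation**2, with assumed
# ground-truth parameters k = 2.0, c = 0.5 generating the data
data = [(x / 10, 2.0 * (x / 10) + 0.5 * (x / 10) ** 2) for x in range(1, 11)]
loss = lambda p: sum((p[0] * x + p[1] * x * x - f) ** 2 for x, f in data)
params, err = pso_identify(loss, dim=2, bounds=(0.0, 5.0))
```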
Findings
Compared with the traditional least squares method, genetic algorithms and other kinds of PSO methods, the combination of PSO and GKT is validated on experimental data to improve the accuracy and convergence of identification results. The force control is effective despite the contour errors, control delay and noise that arise when the robot performs massage on a human body.
Originality/value
The value lies in integrating GKT into the PSO identification algorithm and designing an adaptive impedance control algorithm. As a result, the robot can understand the textural and biological attributes of its surroundings and adapt its planning to carry out stable and accurate force-tracking control during dynamic contact between the robot and a human.