Zengxin Kang, Jing Cui, Yijie Wang, Zhikai Hu and Zhongyi Chu
Abstract
Purpose
Current flexible printed circuit (FPC) assembly relies heavily on manual labor, limiting capacity and increasing costs. The small size of FPCs makes automation challenging, as terminals can be visually occluded. The purpose of this study is to use 3D tactile sensing to mimic human manual mating skills, enabling the offset between FPC terminals (FPC-t) and FPC mating slots (FPC-s) to be sensed under visual occlusion.
Design/methodology/approach
The proposed model has three stages: spatial encoding, offset estimation and action strategy. The spatial encoder maps sparse 3D tactile data into a compact 1D feature capturing valid spatial assembly information to enable temporal processing. To compensate for low sensor resolution, consecutive spatial features are input to a multistage temporal convolutional network which estimates alignment offsets. The robot then performs alignment or mating actions based on the estimated offsets.
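The abstract's exact encoder architecture is not given, but the first two stages can be illustrated with a minimal sketch: a hypothetical pooling-based spatial encoder stands in for the paper's learned encoder, mapping each sparse three-axis tactile frame to a compact 1D feature, and consecutive features are then stacked into the temporal window that would be fed to the MS-TCN.

```python
import numpy as np

def spatial_encode(tactile_frame):
    """Map a sparse three-axis tactile frame (H, W, 3) to a compact 1D feature.

    Hypothetical stand-in for the paper's learned spatial encoder:
    per-axis mean and peak pooling over all taxels.
    """
    means = tactile_frame.mean(axis=(0, 1))   # (3,) mean force per axis
    peaks = tactile_frame.max(axis=(0, 1))    # (3,) peak force per axis
    return np.concatenate([means, peaks])     # (6,) compact spatial feature

def temporal_stack(frames, window=5):
    """Stack the last `window` spatial features; in the paper this stack is
    the input to a multistage temporal convolutional network (MS-TCN)."""
    feats = [spatial_encode(f) for f in frames[-window:]]
    return np.stack(feats)                    # (window, 6)

# Toy 4x4 three-axis tactile frames standing in for real sensor data
rng = np.random.default_rng(0)
frames = [rng.random((4, 4, 3)) for _ in range(8)]
x = temporal_stack(frames, window=5)
print(x.shape)  # (5, 6)
```

The window size and pooling choices here are illustrative only; the actual model compensates for low sensor resolution by learning the temporal mapping from such feature sequences to alignment offsets.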
Findings
Experiments are conducted on a Redmi Note 4 smartphone assembly platform. Compared to other models, the proposed approach achieves superior offset estimation. Within limited trials, it successfully assembles FPCs under visual occlusion using three-axis tactile sensing.
Originality/value
A spatial encoder is designed to encode three-axis tactile data into feature maps, overcoming the multistage temporal convolutional network's (MS-TCN) inability to process such input directly. The output is modified to estimate assembly offsets with associated motion semantics, replacing MS-TCN's native segmentation-point output, which cannot meet assembly monitoring needs. Training and testing the improved MS-TCN on an FPC data set demonstrated accurate monitoring of the full assembly process, and an assembly platform verified its performance on automated FPC assembly.
Zengxin Kang, Jing Cui and Zhongyi Chu
Abstract
Purpose
Accurate segmentation of manual assembly actions is the basis for autonomous industrial assembly robots. This paper aims to study a precise segmentation method for manual assembly actions.
Design/methodology/approach
In this paper, a temporal-spatial-contact features segmentation system (TSCFSS) for recognizing and segmenting manual assembly actions is proposed. The system consists of three stages: spatial feature extraction, contact force feature extraction and action segmentation in the temporal dimension. In the spatial feature extraction stage, a vectors assembly graph (VAG) is proposed to precisely describe the motion state of objects and the relative positions between objects in an RGB-D video frame; graph networks are then used to extract spatial features from the VAG. In the contact feature extraction stage, a sliding window extracts the contact force features between hands and tools/parts corresponding to each video frame. Finally, in the action segmentation stage, the spatial and contact features are concatenated as input to temporal convolutional networks for action recognition and segmentation. Experiments have been conducted on a new manual assembly data set containing RGB-D video and contact force data.
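The sliding-window alignment and feature fusion described above can be sketched as follows. The sampling rates (30 fps video, 300 Hz force sensing) and the 16-dimensional spatial feature are assumptions for illustration; the abstract does not state the actual values.

```python
import numpy as np

def contact_window(force_signal, frame_idx, fps=30, force_hz=300):
    """Cut the contact-force samples corresponding to one RGB-D video frame.

    Assumed rates: 30 fps video vs. 300 Hz force sensing, i.e. 10 force
    samples per frame; the paper's actual rates are not given in the abstract.
    """
    per_frame = force_hz // fps
    start = frame_idx * per_frame
    return force_signal[start:start + per_frame]

def fuse_features(spatial_feat, force_chunk):
    """Concatenate spatial (graph-network) features with summarized contact
    force features, forming the input to the temporal convolutional network."""
    force_feat = np.array([force_chunk.mean(),
                           force_chunk.max(),
                           force_chunk.std()])
    return np.concatenate([spatial_feat, force_feat])

# One second of toy force data at 300 Hz, and a dummy 16-D spatial feature
force = np.abs(np.sin(np.linspace(0.0, 10.0, 300)))
chunk = contact_window(force, frame_idx=3)
fused = fuse_features(np.ones(16), chunk)
print(chunk.shape, fused.shape)  # (10,) (19,)
```

The summary statistics used for the force chunk are a simple stand-in; the system's actual contact features may be learned or hand-crafted differently.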
Findings
In the experiments, the TSCFSS is used to recognize 11 kinds of assembly actions in demonstrations and outperforms the comparative action recognition methods.
Originality/value
A novel system for precise segmentation of manual assembly actions, fusing temporal, spatial and contact force features, has been proposed. The VAG, a symbolic knowledge representation for describing assembly scene state, is proposed, making action segmentation more convenient. A data set with RGB-D video and contact force is specifically tailored for researching manual assembly actions.