Guodong Sa, Zhengyang Jiang, Jiacheng Sun, Chan Qiu, Zhenyu Liu and Jianrong Tan
Abstract
Purpose
Real-time monitoring of the critical physical fields of core components in complex equipment is of great significance as it can predict potential failures, provide reasonable preventive maintenance strategies and thereby ensure the service performance of the equipment. This research aims to propose a hierarchical explicit–implicit combined sensing-based real-time monitoring method to achieve the sensing of critical physical field information of core components in complex equipment.
Design/methodology/approach
Sensor-deployable and non-deployable areas are delineated based on the dynamic and static constraints of actual service. An integrated method of measurement-point layout and performance evaluation is used to optimize sensor placement, and an association mapping between information in non-deployable and deployable areas is established, achieving hierarchical explicit–implicit combined sensing of key sensor information for core components. Finally, the critical physical fields of the core components are reconstructed and visualized.
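The abstract does not spell out the placement or mapping algorithms. As a hedged illustration only, the Python sketch below shows one plausible reading of the pipeline: greedy variance-based selection of measurement points in the deployable area, followed by a ridge-regression association mapping that infers the field at non-deployable points from the selected sensors. All names, the placement criterion and the regression model are assumptions, not the authors' method.

```python
# Hypothetical sketch of the explicit-implicit sensing idea: pick a few
# sensor locations in the deployable area, then learn a linear map that
# estimates the field at non-deployable points from those sensors.
# The greedy-variance placement criterion and ridge mapping are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Simulated training snapshots of a temperature field:
# rows = time steps, columns = candidate points on the component.
n_snapshots, n_deployable, n_hidden = 200, 30, 10
T_deployable = rng.normal(40.0, 5.0, (n_snapshots, n_deployable))
# Hidden (non-deployable) points are correlated with deployable ones.
W_true = rng.normal(0.0, 0.3, (n_deployable, n_hidden))
T_hidden = T_deployable @ W_true + rng.normal(0.0, 0.5, (n_snapshots, n_hidden))

def greedy_sensor_placement(snapshots: np.ndarray, n_sensors: int) -> list[int]:
    """Greedily pick the columns with the largest residual variance."""
    chosen: list[int] = []
    residual = snapshots - snapshots.mean(axis=0)
    for _ in range(n_sensors):
        variances = residual.var(axis=0)
        variances[chosen] = -np.inf          # never pick the same point twice
        best = int(np.argmax(variances))
        chosen.append(best)
        # Remove the component already explained by the chosen sensor.
        basis = residual[:, [best]]
        coef = np.linalg.lstsq(basis, residual, rcond=None)[0]
        residual = residual - basis @ coef
    return chosen

sensors = greedy_sensor_placement(T_deployable, n_sensors=8)

# Association mapping: explicit sensor readings -> implicit hidden points.
mapping = Ridge(alpha=1.0).fit(T_deployable[:, sensors], T_hidden)
T_hidden_estimate = mapping.predict(T_deployable[:, sensors])
print("mean abs. reconstruction error:",
      np.abs(T_hidden_estimate - T_hidden).mean())
```

A linear map is the simplest possible association model; the paper's actual mapping between deployable and non-deployable areas could equally be a nonlinear surrogate.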
Findings
The proposed method is applied to the spindle system of CNC machine tools, and the results show that it can effectively monitor the temperature field of the spindle system.
Originality/value
This research provides an effective method for monitoring the service performance of complex equipment, especially considering the dynamic and static constraints during the service process and detecting critical information in non-deployable areas.
Li Shaochen, Zhenyu Liu, Yu Huang, Daxin Liu, Guifang Duan and Jianrong Tan
Abstract
Purpose
Assembly action recognition plays an important role in assembly process monitoring and human-robot collaborative assembly. Previous works overlook the interaction relationship between hands and operated objects and lack the modeling of subtle hand motions, which leads to a decline in accuracy for fine-grained action recognition. This paper aims to model the hand-object interactions and hand movements to realize high-accuracy assembly action recognition.
Design/methodology/approach
In this paper, a novel multi-stream hand-object interaction network (MHOINet) is proposed for assembly action recognition. To learn the hand-object interaction relationships in assembly sequences, an interaction modeling network (IMN) comprising both geometric and visual modeling is exploited in the interaction stream. The former captures the spatial location relation of the hand and the interacted parts/tools according to their detected bounding boxes, and the latter mines the visual context of hand and object at the pixel level through a position attention model. To model the hand movements, a temporal enhancement module (TEM) with multiple convolution kernels is developed in the hand stream, capturing the temporal dependencies of hand sequences over short and long ranges. Finally, assembly action prediction is accomplished by merging the outputs of the different streams through weighted score-level fusion. A robotic arm component assembly dataset is created to evaluate the effectiveness of the proposed method.
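As a hedged illustration of two of the named ingredients, the sketch below implements a multi-kernel temporal module and a weighted score-level fusion in PyTorch. Kernel sizes, channel counts, the class count and the stream weights are placeholders, and the actual MHOINet layers are not reproduced here.

```python
# Hypothetical sketch (not the authors' code) of two MHOINet ingredients:
# a multi-kernel temporal enhancement module (TEM) and weighted
# score-level fusion of the stream outputs. Dimensions are placeholders.
import torch
import torch.nn as nn

class TemporalEnhancementModule(nn.Module):
    """Applies 1-D convolutions with several kernel sizes along time and
    concatenates them to capture short- and long-range dependencies."""
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2)
            for k in kernel_sizes
        )
        self.project = nn.Conv1d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.project(multi_scale))

def fuse_scores(stream_logits: list[torch.Tensor],
                weights: list[float]) -> torch.Tensor:
    """Weighted score-level fusion: combine per-stream class scores."""
    probs = [w * torch.softmax(s, dim=-1)
             for s, w in zip(stream_logits, weights)]
    return torch.stack(probs).sum(dim=0)

# Toy usage: hand-stream features for 2 clips, 64 channels, 30 frames.
tem = TemporalEnhancementModule(channels=64)
hand_features = torch.randn(2, 64, 30)
enhanced = tem(hand_features)            # (2, 64, 30)

# Pretend each stream already produced logits over 12 action classes.
hand_logits = torch.randn(2, 12)
interaction_logits = torch.randn(2, 12)
fused = fuse_scores([hand_logits, interaction_logits], weights=[0.4, 0.6])
predicted_action = fused.argmax(dim=-1)
```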
Findings
The method achieves recognition accuracies of 97.31% and 95.32% for coarse and fine assembly actions, respectively, outperforming the comparative methods. Experiments on human-robot collaboration show that the method can be applied to industrial production.
Originality/value
The authors propose a novel framework for assembly action recognition that simultaneously leverages the features of hands, objects and hand-object interactions. The TEM enhances the representation of hand dynamics and facilitates the recognition of assembly actions with various time spans. The IMN learns semantic information from hand-object interactions, which is significant for distinguishing fine assembly actions.