Search results

1 – 4 of 4
Article
Publication date: 6 July 2020

Longfei Song, Zhiyong Liu, Lin Lu, Xiaogang Li, BaoZhuang Sun and Huanlin Cheng

Abstract

Purpose

This paper aims to analyze a failure case of a P110 tube in a CO2 flooding well.

Design/methodology/approach

The chemical composition, microstructure and mechanical properties of the failed P110 tubing steel were tested and found to meet the API Spec 5CT standard. The fracture surfaces were investigated by scanning electron microscopy and energy dispersive spectroscopy.

Findings

The fracture was induced by stress corrosion cracking (SCC), and the stress concentration caused by the mechanical damage played an important role in the failure. The failure is therefore an SCC failure compounded by mechanical damage and galvanic corrosion.

Originality/value

The effect of groundwater infiltration was studied in this failure case, and the stress concentration caused by the mechanical damage was shown to play an important role in the failure.

Details

Anti-Corrosion Methods and Materials, vol. 67, no. 5
Type: Research Article
ISSN: 0003-5599

Book part
Publication date: 14 October 2019

Wan-Yu Liu and Joseph S. Chen

Abstract

This study attempts to demonstrate how a tourism attraction (i.e., a museum) can establish its brand equity. It presents a case study of one of the most famous museums in Taiwan, based on an in-depth interview. The results show that the museum under investigation has established clear brand identification and brand communications but interprets its brand assets only to a limited extent. Recommendations include strengthening its experiential propaganda, organizing large-scale intercity festivals, coordinating with other vendors to sell cultural products, increasing the number of professional exhibitions and establishing a self-evaluation mechanism.

Article
Publication date: 26 August 2022

Shanhua Qian, Longfei Gong, Wei Wang, Zifeng Ni and Haidong Ren

Abstract

Purpose

This study aims to reduce the harm of industrial lubricants to consumers. Composite aluminum-based grease (CAG) was prepared, and medical-grade montmorillonite (M-MMT) was used to improve the antiwear performance of the prepared grease.

Design/methodology/approach

The influence of the additive (M-MMT) on tribological performance was investigated mainly with a ball-disc wear tester, and the wear scar surface of the disc was characterized by white-light interferometry and an electrochemical workstation. Moreover, a cell viability test was used to evaluate the safety of the grease.

Findings

The results indicated that, for the grease containing 1.5% M-MMT, the average coefficient of friction was reduced by about 46% compared with the plain CAG, and the wear volume of the disc was reduced by about 74%. Moreover, both CAG and the CAG containing 1.5% M-MMT were shown to be safe by the cell viability test.

Originality/value

The overall properties of CAG can be improved by using medical-grade materials as additives, while ensuring safety.

Details

Industrial Lubrication and Tribology, vol. 74, no. 10
Type: Research Article
ISSN: 0036-8792

Article
Publication date: 21 March 2023

Longfei Zhang, Yanghe Feng, Rongxiao Wang, Yue Xu, Naifu Xu, Zeyi Liu and Hang Du

Abstract

Purpose

Offline reinforcement learning (RL) acquires effective policies from previously collected large-scale data. In some scenarios, such as health care and autonomous driving, collecting data is hard because it is time-consuming, expensive and dangerous, which calls for more sample-efficient offline RL methods. The purpose of the study is to introduce an algorithm that samples high-value transitions from a prioritized buffer and samples uniformly from a normal experience buffer, improving the sample efficiency of offline RL as well as alleviating the “extrapolation error” that commonly arises in offline RL.

Design/methodology/approach

The authors propose a new experience replay architecture that consists of two experience replays, a prioritized experience replay and a normal experience replay, which supply samples for policy updates in different training phases. In the first training stage, the authors sample from the prioritized experience replay according to the calculated priority of each transition. In the second training stage, the authors sample from the normal experience replay uniformly. Both experience replays are initialized with the same offline data set.
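
A minimal Python sketch may clarify the two-stage sampling scheme described above. The class name, the priority values and the switch step below are hypothetical illustrations under stated assumptions, not the authors' implementation; in particular, the paper's exact priority formula is not reproduced here.

import numpy as np

class DoubleReplay:
    """Two replays over one offline data set: prioritized first, then uniform."""

    def __init__(self, transitions, priorities, switch_step):
        # transitions: (s, a, r, s_next, done) tuples from the pre-collected
        # offline data set; both replays share this storage.
        self.transitions = transitions
        # One priority per transition (e.g. derived from a TD error); the
        # concrete priority definition is an assumption of this sketch.
        self.priorities = np.asarray(priorities, dtype=np.float64)
        self.switch_step = switch_step  # training step at which sampling switches

    def sample(self, step, batch_size, rng=np.random):
        n = len(self.transitions)
        if step < self.switch_step:
            # Stage 1: prioritized sampling of high-value transitions.
            probs = self.priorities / self.priorities.sum()
            idx = rng.choice(n, size=batch_size, p=probs)
        else:
            # Stage 2: uniform sampling from the normal experience replay.
            idx = rng.choice(n, size=batch_size)
        return [self.transitions[i] for i in idx]

# Toy usage: five dummy transitions, switching to uniform sampling at step 1,000.
data = [(i, 0, float(i), i + 1, False) for i in range(5)]
replay = DoubleReplay(data, priorities=[5.0, 4.0, 3.0, 2.0, 1.0], switch_step=1000)
stage1_batch = replay.sample(step=0, batch_size=2)     # prioritized stage
stage2_batch = replay.sample(step=2000, batch_size=2)  # uniform stage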

Findings

The proposed method eliminates the out-of-distribution problem in the offline RL regime and promotes training by leveraging a new, efficient experience replay. The authors evaluate their method on the D4RL benchmark, and the results reveal that the algorithm achieves superior performance over state-of-the-art offline RL algorithms. An ablation study shows that the authors' experience replay architecture plays an important role in improving final performance, data efficiency and training stability.
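
As a hedged illustration of how a D4RL benchmark data set could initialize both replays, the following sketch assumes the open-source gym and d4rl packages; it is not the authors' evaluation code, and the task name is only an example.

import gym
import d4rl  # registers the D4RL environments with gym

env = gym.make("halfcheetah-medium-v2")
dataset = d4rl.qlearning_dataset(env)  # dict of NumPy arrays

# Repack the arrays as (s, a, r, s_next, done) tuples, e.g. to fill the
# hypothetical DoubleReplay sketch above; both replays share this data.
transitions = list(zip(
    dataset["observations"],
    dataset["actions"],
    dataset["rewards"],
    dataset["next_observations"],
    dataset["terminals"],
))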

Research limitations/implications

Because of the added prioritized experience replay, the proposed method increases the computational burden and risks changing the data distribution owing to the combined sampling strategy. Researchers are therefore encouraged to further investigate how to use the experience replay block effectively and efficiently.

Practical implications

Offline RL is susceptible to the quality and coverage of the pre-collected data, which may not be easy to collect in a specific environment, requiring practitioners to handcraft a behavior policy that interacts with the environment to gather data.

Originality/value

The proposed approach focuses on the experience replay architecture for offline RL and empirically demonstrates the superiority of the algorithm over conservative Q-learning in data efficiency and final performance across diverse D4RL tasks. In particular, the authors compare different variants of their experience replay block, and the experiments show that the stage at which samples are drawn from the priority buffer plays an important role in the algorithm. The algorithm is easy to implement and can be combined with any Q-value-approximation-based offline RL method with minor adjustments.

Details

Robotic Intelligence and Automation, vol. 43, no. 1
Type: Research Article
ISSN: 2754-6969
