Search results

1 – 2 of 2
Article
Publication date: 9 December 2024

Qiaojuan Peng, Xiong Luo, Yuqi Yuan, Fengbo Gu, Hailun Shen and Ziyang Huang

Abstract

Purpose

With the development of Web information systems, steel e-commerce platforms have accumulated a large number of quality objection texts. These texts reflect consumer dissatisfaction with the dimensions, appearance and performance of steel products, providing valuable insights for product improvement and consumer decision-making. Mainstream solutions currently rely on pre-trained models, but their performance on domain-specific and few-shot data sets remains unsatisfactory. This paper aims to address these challenges by proposing more effective methods for improving model performance on such specialized data sets.

Design/methodology/approach

This paper presents a method based on in-domain pre-training, bidirectional encoder representations from Transformers (BERT) and prompt learning. Specifically, a domain-specific unsupervised data set is used to further pre-train the BERT model in-domain, enabling it to better capture the language patterns of the steel e-commerce industry and improving its generalization capability. In addition, prompt learning is incorporated into the BERT model to strengthen attention to sentence context, improving classification performance on few-shot data sets.
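
As a rough illustration of the prompt-learning step (not the authors' implementation), the sketch below casts classification as a cloze task: a quality objection text is wrapped in a template containing a [MASK] slot, and a BERT masked-language-model head scores a small set of verbalizer words, one per objection class. It assumes the Hugging Face transformers library; the checkpoint name, prompt template and label words are illustrative placeholders, and in-domain pre-training would amount to continuing masked-language-model training on the unsupervised steel e-commerce corpus before this step.

# Minimal sketch of prompt-based classification with a BERT MLM head (illustrative, not the paper's code).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

checkpoint = "bert-base-uncased"  # placeholder; ideally a checkpoint further pre-trained on steel e-commerce text
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)
model.eval()

# Verbalizer: one label word per objection class (illustrative English stand-ins).
label_words = {"dimension": "size", "appearance": "surface", "performance": "strength"}
label_ids = {cls: tokenizer.convert_tokens_to_ids(word) for cls, word in label_words.items()}

def classify(objection_text: str) -> str:
    # Wrap the complaint in a cloze-style prompt; the model fills the [MASK] slot.
    prompt = f"{objection_text} This complaint is about the {tokenizer.mask_token} of the product."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1][0]
    mask_logits = logits[0, mask_pos]
    # Choose the class whose label word scores highest at the masked position.
    return max(label_ids, key=lambda cls: mask_logits[label_ids[cls]].item())

print(classify("The coil width deviates from the ordered specification."))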

Findings

Through experimental evaluation, the proposed method demonstrates superior performance on the quality objection data set, achieving a Macro-F1 score of 93.32%. Ablation experiments further validate the significant contribution of in-domain pre-training and prompt learning to model performance.
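
For reference, Macro-F1 is the unweighted mean of the per-class F1 scores, so each objection class counts equally regardless of its frequency. A minimal scikit-learn sketch on toy labels (not the paper's data):

# Illustrative Macro-F1 computation on toy labels.
from sklearn.metrics import f1_score

y_true = ["dimension", "appearance", "performance", "appearance", "dimension"]
y_pred = ["dimension", "appearance", "dimension", "appearance", "dimension"]

# Macro averaging: compute F1 per class, then take the unweighted mean.
print(f1_score(y_true, y_pred, average="macro"))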

Originality/value

This study clearly demonstrates the value of the new method in improving the classification of quality objection texts for steel products. The findings of this study offer practical insights for product improvement in the steel industry and provide new directions for future research on few-shot learning and domain-specific models, with potential applications in other fields.

Details

International Journal of Web Information Systems, vol. 21 no. 1
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 26 December 2024

Hao Zhang, Weilong Ding, Qi Yu and Zijian Liu

Abstract

Purpose

The proposed model aims to tackle data quality issues in multivariate time series caused by missing values. It preserves data set integrity by accurately imputing missing data, ensuring reliable analysis outcomes.

Design/methodology/approach

The Conv-DMSA model employs a combination of self-attention mechanisms and convolutional networks to handle the complexities of multivariate time series data. The convolutional network is adept at learning features across uneven time intervals through an imputation feature map, while the Diagonal Mask Self-Attention (DMSA) block is specifically designed to capture time dependencies and feature correlations. This dual approach allows the model to effectively address the temporal imbalance, feature correlation and time dependency challenges that are often overlooked in traditional imputation models.
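
The abstract does not spell out the block's internals, but one plausible reading of a diagonal-mask self-attention step is that the mask blocks each time step from attending to itself, so a (possibly missing) step must be reconstructed from the other observations. The sketch below illustrates that idea in PyTorch with a single unprojected attention head; the shapes and details are assumptions, not the Conv-DMSA authors' code.

# Illustrative diagonal-masked self-attention over a multivariate time series (assumed reading, not the paper's code).
import math
import torch

def diagonal_masked_attention(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, time_steps, d_model) -> output of the same shape.
    b, t, d = x.shape
    q, k, v = x, x, x                                  # single head, no projections, for brevity
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)    # (batch, t, t) pairwise attention scores
    diag = torch.eye(t, dtype=torch.bool, device=x.device)
    scores = scores.masked_fill(diag, float("-inf"))   # forbid each step from attending to itself
    weights = torch.softmax(scores, dim=-1)
    return weights @ v                                 # each step is rebuilt from the other steps

out = diagonal_masked_attention(torch.randn(2, 6, 4))  # toy batch: 2 series, 6 steps, 4 channels
print(out.shape)                                       # torch.Size([2, 6, 4])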

Findings

Extensive experiments on two public data sets and a real project data set demonstrate the adaptability and effectiveness of the Conv-DMSA model for imputing missing data. The model outperforms baseline methods, reducing the root mean square error (RMSE) by 37.2% to 63.87% compared with other models, indicating greater accuracy and efficiency in handling missing data in multivariate time series.
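
For reference, RMSE is the square root of the mean squared difference between imputed and ground-truth values, and the reported percentages are relative reductions of this quantity against the baselines. A toy NumPy illustration (not the paper's data):

# Illustrative RMSE computation on toy values.
import numpy as np

true_vals = np.array([1.0, 2.0, 3.0, 4.0])
imputed = np.array([1.1, 1.8, 3.2, 3.9])

rmse = np.sqrt(np.mean((imputed - true_vals) ** 2))
print(rmse)  # ~0.158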

Originality/value

The Conv-DMSA model introduces a unique combination of convolutional networks and self-attention mechanisms to the field of missing data imputation. Its innovative use of a diagonal mask within the self-attention block allows for a more nuanced understanding of the data’s temporal and relational aspects. This novel approach not only addresses the existing shortcomings of conventional imputation methods but also sets a new standard for handling missing data in complex, multivariate time series data sets. The model’s superior performance and its capacity to adapt to varying levels of missing data make it a significant contribution to the field.

Details

International Journal of Web Information Systems, vol. 21 no. 1
Type: Research Article
ISSN: 1744-0084

