Search results

1 – 10 of 163
Access Restricted.
Article
Publication date: 24 September 2020

Toshiki Tomihira, Atsushi Otsuka, Akihiro Yamashita and Tetsuji Satoh

Abstract

Purpose

Recently, with the spread of social networking services and the standardization of emojis in Unicode, the use of emojis has become common. Emojis are particularly effective in expressing emotions in sentences. Sentiment analysis in natural language processing ordinarily requires manually labeling sentences with emotions; using emojis, the authors can instead predict the sentiment of text posted on social media without manual labeling. The purpose of this paper is to propose a new model that learns from sentences using emojis as labels, collecting English and Japanese tweets from Twitter as the corpus. The authors verify and compare multiple models based on attention long short-term memory (LSTM), convolutional neural networks (CNN) and Bidirectional Encoder Representations from Transformers (BERT).

Design/methodology/approach

The authors collected tweets containing the 2,661 emojis registered as Unicode characters using the Twitter application programming interface, for a total of 6,149,410 tweets in Japanese. First, the authors visualized the vector space of emojis produced by Word2Vec and found that emojis and words with similar meanings are adjacent, verifying that emojis can be used for sentiment analysis. Second, tweets containing emojis were used for training and testing, with the emoji serving as the label. The authors compared the BERT model with the conventional models [CNN, FastText and attention bidirectional long short-term memory (BiLSTM)] that achieved high scores in a previous study.
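
As a rough illustration of the Word2Vec step, the sketch below trains embeddings in which emojis and words share one vector space, so that an emoji's nearest neighbours can be inspected. The three toy tweets stand in for the 6,149,410-tweet corpus, and tweets are assumed to be pre-tokenized with emojis kept as standalone tokens.

```python
# Minimal sketch of the emoji vector-space step; toy corpus only.
from gensim.models import Word2Vec

tweets = [
    ["今日", "は", "楽しい", "😂"],
    ["最高", "の", "一日", "😊"],
    ["悲しい", "ニュース", "😢"],
]

# Train embeddings so emojis and words share one vector space.
model = Word2Vec(sentences=tweets, vector_size=100, window=5,
                 min_count=1, workers=4)

# Words adjacent to an emoji suggest it can serve as a sentiment label.
print(model.wv.most_similar("😂", topn=5))
```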

Findings

Visualizing the Word2Vec vector space, the authors found that emojis and words with similar meanings are adjacent, verifying that emojis can be used for sentiment analysis. The authors obtained higher scores with the BERT models than with the conventional models, and the experiments demonstrate an improvement over the conventional models in both languages. General emoji prediction is greatly influenced by context, and scores can be lowered by misreadings of meaning; by using BERT, which is based on a bidirectional transformer, the authors can take this context into account.

Practical implications

An input method editor (IME) can suggest emojis among its output candidates as a word is typed. Current IMEs consider only the most recently entered word, whereas the approach in this study can recommend emojis based on the context of the whole input sentence. The research can therefore be used to improve IME performance in the future.

Originality/value

In the paper, the authors focus on multilingual emoji prediction. This is the first attempt to compare emoji prediction between Japanese and English. It is also the first attempt to use the transformer-based BERT model for predicting a limited set of emojis, although the transformer is known to be effective for various NLP tasks. The authors found that a bidirectional transformer is suitable for emoji prediction.

Details

International Journal of Web Information Systems, vol. 16 no. 3
Type: Research Article
ISSN: 1744-0084

Access Restricted.
Article
Publication date: 25 November 2024

Sohui Kim and Min Ho Ryu

Abstract

Purpose

This study introduces a novel approach to conducting importance-performance analysis (IPA) and importance-performance competitor analysis (IPCA) by utilizing online reviews as an alternative to traditional survey-based marketing research.

Design/methodology/approach

In order to conduct IPA and IPCA utilizing online reviews, the following three steps were executed: (1) Extract key attributes of the product/service with latent Dirichlet allocation (LDA) topic modeling. (2) Measure the importance of each attribute with keyword analysis. (3) Measure the performance of each attribute with KoBERT-based sentiment analysis.
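
The three steps could be sketched roughly as follows; the English toy reviews and the generic sentiment pipeline are stand-ins for the paper's Korean corpus and KoBERT-based classifier.

```python
# A compressed sketch of the three steps, on stand-in data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from transformers import pipeline

reviews = ["the app is fast but crashes on login",
           "clean design and easy transfers",
           "worried about security after the update"]

# Step 1: extract key attributes (topics) with LDA.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for topic in lda.components_:               # top words per attribute
    print([terms[i] for i in topic.argsort()[-3:]])

# Step 2: importance of an attribute ~ keyword frequency.
importance = dict(zip(terms, X.sum(axis=0).A1))
print(sorted(importance.items(), key=lambda kv: -kv[1])[:5])

# Step 3: performance of an attribute ~ sentiment of its reviews.
sentiment = pipeline("sentiment-analysis")
print(sentiment(reviews[0]))
```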

Findings

By utilizing LDA, we were able to identify significant attributes from real users’ reviews. The approach of evaluating attribute importance using keyword metrics offers the advantage of straightforward computation, reducing computational expense and facilitating intuitive metric assessment. The KoBERT-based performance evaluation involved adapting pre-trained language models to the specific analysis domain, resulting in substantial time savings without compromising accuracy and ultimately bolstering the dependability of the metrics. Lastly, the case study’s findings indicate a growing emphasis on the aesthetic aspects of mobile banking apps in South Korea while highlighting the pressing need for enhancements in critical areas such as stability and security, which are particularly pertinent to the finance industry.

Originality/value

Previous studies have limitations in assessing significance solely based on sentiment scores and review ratings, resulting in an inability to independently measure satisfaction and importance metrics. This research addresses these limitations by introducing a keyword frequency-based importance metric, enhancing the accuracy and suitability of these measurements independently. In the context of performance measurement, this study utilizes pre-trained large language models (LLMs), which provide a cost-effective alternative to previous methods while preserving measurement accuracy. Additionally, this approach demonstrates the potential for industry-wide competitive analysis by enabling comparisons among multiple competitors. Furthermore, the study extends the application of review data-based IPA and IPCA, traditionally used in the tourism sector, to the evaluation of financial mobile applications. This innovation expands the scope of these methodologies, indicating their potential applicability across various industries.

Details

International Journal of Bank Marketing, vol. 43 no. 4
Type: Research Article
ISSN: 0265-2323

Access Restricted.
Article
Publication date: 13 August 2024

Samia Nawaz Yousafzai, Hooria Shahbaz, Armughan Ali, Amreen Qamar, Inzamam Mashood Nasir, Sara Tehsin and Robertas Damaševičius

Abstract

Purpose

The objective is to develop a more effective model that simplifies and accelerates the news classification process using advanced text mining and deep learning (DL) techniques. A distributed framework utilizing Bidirectional Encoder Representations from Transformers (BERT) was developed to classify news headlines. This approach leverages various text mining and DL techniques on a distributed infrastructure, aiming to offer an alternative to traditional news classification methods.
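
A minimal, non-distributed sketch of the BERT headline-classification core might look as follows; the label set and checkpoint are illustrative assumptions, not the paper's configuration.

```python
# Sketch of BERT-based headline classification (assumed labels).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["politics", "sports", "business", "entertainment"]  # assumed
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

headline = "Central bank signals rate cut amid slowing growth"
inputs = tok(headline, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# The classification head is untrained here; fine-tuning on labelled
# headlines is required before the prediction is meaningful.
print(LABELS[int(logits.argmax(dim=-1))])
```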

Design/methodology/approach

This study focuses on the classification of distinct types of news by analyzing tweets from various news channels. It addresses the limitations of using benchmark datasets for news classification, which often result in models that are impractical for real-world applications.

Findings

The framework’s effectiveness was evaluated on a newly proposed dataset and two additional benchmark datasets from the Kaggle repository, assessing the performance of each text mining and classification method across these datasets. The results of this study demonstrate that the proposed strategy significantly outperforms other approaches in terms of accuracy and execution time. This indicates that the distributed framework, coupled with the use of BERT for text analysis, provides a robust solution for analyzing large volumes of data efficiently. The findings also highlight the value of the newly released corpus for further research in news classification and emotion classification, suggesting its potential to facilitate advancements in these areas.

Originality/value

This research introduces an innovative distributed framework for news classification that addresses the shortcomings of models trained on benchmark datasets. By utilizing cutting-edge techniques and a novel dataset, the study offers significant improvements in accuracy and processing speed. The release of the corpus represents a valuable contribution to the field, enabling further exploration into news and emotion classification. This work sets a new standard for the analysis of news data, offering practical implications for the development of more effective and efficient news classification systems.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 4
Type: Research Article
ISSN: 1756-378X

Access Restricted.
Article
Publication date: 26 August 2022

William Harly and Abba Suganda Girsang

Abstract

Purpose

With the rise of online discussion and argument mining, methods that are able to analyze arguments become increasingly important. A recent study proposed using agreement between arguments to represent both stance polarity and intensity, two important aspects in analyzing arguments. However, that study primarily focused on finetuning a bidirectional encoder representations from transformers (BERT) model. The purpose of this paper is to propose a convolutional neural network (CNN)-BERT architecture to improve on the previous method.

Design/methodology/approach

The CNN-BERT architecture used in this paper directly consumes the hidden representations generated by BERT. This allows better use of the pretrained BERT model and makes finetuning it optional. The authors then compared the CNN-BERT architecture with the methods proposed in the previous study (BERT and Siamese-BERT).
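
A minimal sketch of this idea, assuming illustrative layer sizes and the standard bert-base checkpoint, freezes BERT and runs a 1D convolution over its token-level hidden states before classifying agreement.

```python
# Sketch of CNN-over-frozen-BERT; dimensions are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class CNNBert(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():   # finetuning BERT is optional
            p.requires_grad = False
        self.conv = nn.Conv1d(768, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        x = self.conv(hidden.transpose(1, 2))   # (batch, 128, seq_len)
        x = torch.relu(x).max(dim=2).values     # max-pool over tokens
        return self.fc(x)
```

Because BERT is frozen, only the small convolutional classifier needs to be stored and trained, which is the separation of concerns the paper credits for the reduced model size.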

Findings

Experiment results demonstrate that the proposed CNN-BERT achieves 71.87% accuracy in measuring agreement between arguments. Compared to the previous study, which achieved an accuracy of 68.58%, the CNN-BERT architecture increases accuracy by 3.29 percentage points. The CNN-BERT architecture also achieves a similar result even without further pretraining the BERT model.

Originality/value

The principal originality of this paper is the proposal to use CNN-BERT to better exploit the pretrained BERT model for measuring agreement between arguments. The proposed method improves performance and achieves a similar result without further training the BERT model. This allows the BERT model to be separated from the CNN classifier, which significantly reduces the model size and allows the same pretrained BERT model to be reused for other problems that likewise do not need finetuning.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Access Restricted.
Article
Publication date: 28 December 2020

Arpita Gupta, Saloni Priyani and Ramadoss Balakrishnan

Abstract

Purpose

In this study, the authors use customer reviews of books and movies, written in natural language, for sentiment analysis and reputation generation. Most existing work performs sentiment analysis and reputation generation using single classification models and considers other attributes for reputation generation.

Design/methodology/approach

The authors take the review text, helpfulness and rating into consideration. In this paper, the authors perform sentiment analysis to extract the probability of a review belonging to a class, which is then used to generate the sentiment score and reputation of the review. The authors use pre-trained BERT fine-tuned for sentiment analysis on movie and book reviews separately.

Findings

In this study, the authors also combined three models (BERT, Naïve Bayes and SVM) for more accurate sentiment classification and reputation generation, and the combination outperformed the best single BERT model in this study. They achieved the best accuracy of 91.2% on the movie review data set and 89.4% on the book review data set, which is better than existing state-of-the-art methods. They used transfer learning, the deep learning concept in which knowledge gained from one problem is applied to a similar problem.
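
The abstract does not spell out how the three classifiers are combined; one plausible reading is soft voting over the three models' class probabilities, sketched below with hypothetical probability values.

```python
# Assumed soft-voting combination of BERT, Naïve Bayes and SVM outputs.
import numpy as np

def soft_vote(prob_bert, prob_nb, prob_svm):
    """Average class probabilities and pick the majority class."""
    avg = (np.asarray(prob_bert) + np.asarray(prob_nb)
           + np.asarray(prob_svm)) / 3.0
    return avg.argmax(axis=-1), avg

# Hypothetical per-review probabilities for [negative, positive].
label, probs = soft_vote([0.1, 0.9], [0.3, 0.7], [0.2, 0.8])
print(label, probs)  # 1  [0.2 0.8] -> sentiment score from probs
```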

Originality/value

The authors have proposed a novel model based on a combination of three classification models, which outperforms existing state-of-the-art methods. To the best of the authors’ knowledge, no existing model combines three models for sentiment score calculation and reputation generation on the book review data set.

Details

World Journal of Engineering, vol. 18 no. 4
Type: Research Article
ISSN: 1708-5284

Access Restricted.
Article
Publication date: 19 September 2022

Srishti Sharma, Mala Saraswat and Anil Kumar Dubey

Abstract

Purpose

Owing to the increased accessibility of the internet and related technologies, more and more individuals across the globe now turn to social media for their daily dose of news rather than traditional news outlets. Given the global nature of social media and with hardly any checks on posted content, fake news can spread exponentially. Businesses propagate fake news to improve their economic standing and to influence consumers and demand, and individuals spread fake news for personal gains such as popularity and life goals. The content of fake news is diverse in terms of topics, styles and media platforms, and fake news attempts to distort the truth in diverse linguistic styles while simultaneously mocking true news. All these factors together make fake news detection an arduous task. This work tries to curb the spread of disinformation on Twitter.

Design/methodology/approach

This study carries out fake news detection using user characteristics and tweet textual content as features. For categorizing user characteristics, this study uses the XGBoost algorithm. To classify the tweet text, this study uses various natural language processing techniques to pre-process the tweets and then applies a hybrid convolutional neural network–recurrent neural network (CNN-RNN) and a state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) model.
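
As a sketch of the user-characteristics branch, the snippet below fits an XGBoost classifier on made-up user features; the feature set and labels are illustrative assumptions, not the paper's exact features.

```python
# Sketch of XGBoost over hypothetical user features.
import numpy as np
from xgboost import XGBClassifier

# Assumed user features: [followers, account_age_days, verified].
X = np.array([[120, 30, 0], [50000, 2900, 1], [15, 5, 0], [8000, 1500, 1]])
y = np.array([1, 0, 1, 0])  # 1 = spreads fake news, 0 = genuine

clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
clf.fit(X, y)
print(clf.predict_proba(np.array([[300, 60, 0]])))
```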

Findings

This study uses a combination of machine learning and deep learning approaches for fake news detection, namely, XGBoost, hybrid CNN-RNN and BERT. The models have also been evaluated and compared with various baseline models to show that this approach effectively tackles this problem.

Originality/value

This study proposes a novel framework that exploits news content and social contexts to learn useful representations for predicting fake news. This model is based on a transformer architecture, which facilitates representation learning from fake news data and helps detect fake news easily. This study also carries out an investigative study on the relative importance of content and social context features for the task of detecting false news and whether absence of one of these categories of features hampers the effectiveness of the resultant system. This investigation can go a long way in aiding further research on the subject and for fake news detection in the presence of extremely noisy or unusable data.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Access Restricted.
Article
Publication date: 13 September 2024

Ahmad Honarjoo, Ehsan Darvishan, Hassan Rezazadeh and Amir Homayoon Kosarieh

Abstract

Purpose

This article introduces SigBERT, a novel approach that fine-tunes bidirectional encoder representations from transformers (BERT) for the purpose of distinguishing between intact and impaired structures by analyzing vibration signals. Structural health monitoring (SHM) systems are crucial for identifying and locating damage in civil engineering structures. The proposed method aims to improve upon existing methods in terms of cost-effectiveness, accuracy and operational reliability.

Design/methodology/approach

SigBERT employs a fine-tuning process on the BERT model, leveraging its capabilities to effectively analyze time-series data from vibration signals to detect structural damage. This study compares SigBERT's performance with baseline models to demonstrate its superior accuracy and efficiency.
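
The abstract does not describe how vibration signals are converted into BERT inputs; one plausible scheme, sketched below purely as an assumption, quantizes each signal window into discrete amplitude bins so that it reads like a token sequence.

```python
# Assumed signal-to-token preprocessing for a BERT-style encoder.
import numpy as np

def signal_to_tokens(signal, n_bins=100):
    """Map a 1D vibration signal to integer tokens via amplitude bins."""
    bins = np.linspace(signal.min(), signal.max(), n_bins - 1)
    return np.digitize(signal, bins)  # values in [0, n_bins - 1]

t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 25 * t) + 0.1 * np.random.randn(512)
tokens = signal_to_tokens(signal)
print(tokens[:10])  # these ids would feed a BERT-style encoder
```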

Findings

The experimental results, obtained on the Qatar University grandstand simulator, show that SigBERT outperforms existing models in damage detection accuracy. The method can handle environmental fluctuations and offers high reliability for non-destructive monitoring of structural health. Quantifiable results, such as a 99% accuracy rate and an F1 score of 0.99, underline the effectiveness of the proposed model.

Originality/value

SigBERT presents a significant advancement in SHM by integrating deep learning with a robust transformer model. The method offers improved performance in both computational efficiency and diagnostic accuracy, making it suitable for real-world operational environments.

Details

International Journal of Structural Integrity, vol. 15 no. 5
Type: Research Article
ISSN: 1757-9864

Open Access.
Article
Publication date: 8 February 2024

Ana Isabel Lopes, Edward C. Malthouse, Nathalie Dens and Patrick De Pelsmacker

Abstract

Purpose

Engaging in webcare, i.e. responding to online reviews, can positively affect consumer attitudes, intentions and behavior. Research is often scarce or inconsistent regarding the effects of specific webcare strategies on business performance. Therefore, this study tests whether and how several webcare strategies affect hotel bookings.

Design/methodology/approach

We apply machine learning classifiers to secondary data (webcare messages) to classify webcare variables, which are then included in a regression analysis of the effect of these strategies on hotel bookings, controlling for possible confounds such as seasonality and hotel-specific effects.
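
A minimal sketch of the regression stage, assuming the webcare strategies have already been classified into dummy variables (the column names and data are made up), could use month and hotel fixed effects:

```python
# Regression with seasonality and hotel fixed effects via C() dummies.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "bookings":     [120, 95, 130, 80, 140, 90, 110, 85],
    "compensation": [1, 0, 0, 1, 1, 0, 1, 0],
    "apology":      [0, 1, 0, 0, 1, 1, 0, 1],
    "month":        ["jan", "jan", "feb", "feb", "mar", "mar", "apr", "apr"],
    "hotel":        ["a", "b", "a", "b", "a", "b", "a", "b"],
})

model = smf.ols("bookings ~ compensation + apology + C(month) + C(hotel)",
                data=df).fit()
print(model.params)
```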

Findings

The strategies that have a positive effect on bookings are directing reviewers to a private channel, being defensive, offering compensation and having managers sign the response. Webcare strategies to be avoided are apologies, merely asking for more information, inviting customers for another visit and adding informal non-verbal cues. Strategies that do not appear to affect future bookings are expressing gratitude, personalizing and having staff members (rather than managers) sign webcare.

Practical implications

These findings help managers optimize their webcare strategy for better business results and develop automated webcare.

Originality/value

We look into several commonly used and studied webcare strategies and their effect on actual business outcomes, whereas most previous research is experimental or looks into a very limited set of strategies.

Details

Journal of Service Management, vol. 35 no. 6
Type: Research Article
ISSN: 1757-5818

Access Restricted.
Article
Publication date: 26 January 2022

Xingyu Ken Chen, Jin-Cheon Na, Luke Kien-Weng Tan, Mark Chong and Murphy Choy

Abstract

Purpose

The COVID-19 pandemic has spurred a concurrent outbreak of false information online. Debunking false information about a health crisis is critical as misinformation can trigger protests or panic, which necessitates a better understanding of it. This exploratory study examined the effects of debunking messages on a COVID-19-related public chat on WhatsApp in Singapore.

Design/methodology/approach

To understand the effects of debunking messages about COVID-19 on WhatsApp conversations, the authors studied how the source credibility (i.e. characteristics of a communicator that affect the receiver's acceptance of the message) of different debunking message types relates to the length of the conversation, sentiments towards various aspects of the crisis and the information distortion in a message thread. Deep learning techniques, knowledge graphs (KGs) and content analyses were used to perform aspect-based sentiment analysis (ABSA) of the messages and to measure information distortion.
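
ABSA can be sketched as scoring a message against each aspect of the crisis separately. The zero-shot NLI pipeline below is a generic stand-in, not the paper's model, and the aspect list is an assumption.

```python
# Stand-in aspect-based sentiment analysis via zero-shot NLI.
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
message = "The lockdown rules are confusing, but the vaccines work well."
for aspect in ["lockdown", "vaccines"]:   # assumed crisis aspects
    result = clf(message,
                 candidate_labels=["positive", "negative", "neutral"],
                 hypothesis_template=f"The sentiment towards {aspect} is {{}}.")
    print(aspect, result["labels"][0])
```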

Findings

Debunking messages with higher source credibility (e.g. providing evidence from authoritative sources like health authorities) help close a discussion thread earlier. Shifts in sentiments towards some aspects of the crisis highlight the value of ABSA in monitoring the effectiveness of debunking messages. Finally, debunking messages with lower source credibility (e.g. stating that the information is false without any substantiation) are likely to increase information distortion in conversation threads.

Originality/value

The study supports the importance of source credibility in debunking and an ABSA approach in analysing the effect of debunking messages during a health crisis, which have practical value for public agencies during a health crisis. Studying differences in the source credibility of debunking messages on WhatsApp is a novel shift from the existing approaches. Additionally, a novel approach to measuring information distortion using KGs was used to shed insights on how debunking can reduce information distortions.

Details

Online Information Review, vol. 46 no. 6
Type: Research Article
ISSN: 1468-4527

Access Restricted.
Article
Publication date: 23 September 2024

Bernardo Cerqueira de Lima, Renata Maria Abrantes Baracho, Thomas Mandl and Patricia Baracho Porto

Abstract

Purpose

Social media platforms that disseminated scientific information to the public during the COVID-19 pandemic highlighted the importance of scientific communication. Content creators in the field, as well as researchers who study the impact of scientific information online, are interested in how people react to these information resources and how they judge them. This study aims to devise a framework for extracting large social media datasets and finding specific feedback on content delivery, enabling scientific content creators to gain insights into how the public perceives scientific information.

Design/methodology/approach

To collect public reactions to scientific information, the study focused on Twitter users who are doctors, researchers, science communicators or representatives of research institutes, and processed their replies for two years from the start of the pandemic. The study aimed at developing a solution, powered by topic modeling enhanced by manual validation and other machine learning techniques such as word embeddings, that is capable of filtering massive social media datasets in search of documents related to reactions to scientific communication. The architecture developed in this paper can be replicated to find documents related to any niche topic in social media data. As a final step of the framework, we also fine-tuned a large language model to perform the classification task with even more accuracy, forgoing the need for further human validation after the first step.
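
The embedding-based filtering idea might be sketched as follows: embed a handful of seed examples of the niche theme and keep documents close to their centroid. The encoder checkpoint, seed texts and threshold are stand-in assumptions.

```python
# Assumed centroid-similarity filter for a niche topic.
import numpy as np
from sentence_transformers import SentenceTransformer

enc = SentenceTransformer("all-MiniLM-L6-v2")
seeds = ["Great explanation, the thread on variants was very clear.",
         "Thanks doctor, your video made the trial results easy to follow."]
docs = ["This chart about vaccine efficacy finally made sense to me.",
        "Lunch was great today!"]

centroid = enc.encode(seeds, normalize_embeddings=True).mean(axis=0)
doc_vecs = enc.encode(docs, normalize_embeddings=True)
scores = doc_vecs @ centroid  # cosine-like similarity to the niche theme
keep = [d for d, s in zip(docs, scores) if s > 0.3]  # assumed threshold
print(keep)
```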

Findings

We provide a framework that receives a large document dataset and, with a small degree of human validation at different stages, filters out the documents in the corpus that are relevant to a very underrepresented niche theme, with much higher precision than traditional state-of-the-art machine learning algorithms. Performance was improved even further by fine-tuning a BERT-based large language model, which allows such a model to classify even larger unseen datasets in search of reactions to scientific communication without further manual validation or topic modeling.

Research limitations/implications

The challenges of scientific communication are even greater with the rampant increase of misinformation on social media and the difficulty of competing in the saturated attention economy of the social media landscape. Our study aimed at creating a solution that scientific content creators can use to locate and understand constructive feedback toward their content and how it is received, which can be hidden as a minor subject among hundreds of thousands of comments. By leveraging an ensemble of techniques ranging from heuristics to state-of-the-art machine learning algorithms, we created a framework that can detect texts related to very niche subjects in very large datasets, given just a small number of example texts on the subject as input.

Practical implications

With this tool, scientific content creators can sift through their social media following and quickly understand how to adapt their content to their current users’ needs and standards of content consumption.

Originality/value

This study aimed to find reactions to scientific communication in social media. We applied three methods with human intervention and compared their performance. This study shows, for the first time, the topics of interest discussed in Brazil during the COVID-19 pandemic.

Details

Data Technologies and Applications, vol. 59 no. 1
Type: Research Article
ISSN: 2514-9288
