Bharathiraja Balasubramanian, Praveen Kumar Ramanujam, Ranjith Ravi Kumar, Chakravarthy Muninathan and Yogendran Dhinakaran
Abstract
Purpose
The purpose of this paper is to describe the production of biodiesel from waste cooking oil, which serves as an alternative fuel in the absence of conventional fuels such as diesel and petrol. Although much research has been carried out using non-edible crops such as Jatropha and Pongamia, cooking oil used in bulk quantities is discarded as waste. This waste oil can be reused because it is rich in triglycerides, which, when combined with an alcohol in the presence of an enzyme catalyst, yield fatty acid esters (biodiesel).
Design/methodology/approach
The lipase-producing strain Rhizopus oryzae and purified lipase were immobilized and treated with waste cooking oil for the production of fatty acid methyl esters (FAME). For the purified lipase, reaction parameters such as temperature, time, oil-to-acyl-acceptor ratio and enzyme concentration were considered; for Rhizopus oryzae, pH, olive oil concentration and agitation speed (rpm) were considered in the optimization studies. The responses generated from each run were evaluated and analyzed through the central composite design of response surface methodology, and the optimized reaction conditions were thus determined.
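As an illustrative aside, the minimal sketch below shows how a second-order response surface of the kind fitted in such optimization studies can be maximized numerically; the factor names, run data and use of scikit-learn are assumptions for illustration only, and the design shown (a two-level factorial with center points) is a simplification of a full central composite design.

```python
# Minimal response-surface sketch: fit a quadratic model to hypothetical
# design runs and locate the conversion optimum numerically.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical coded factors: temperature, oil:solvent ratio, enzyme conc.
X = np.array([
    [-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
    [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
    [0, 0, 0], [0, 0, 0],
])
y = np.array([70.2, 75.1, 72.8, 80.5, 74.0, 82.3, 78.9, 88.4, 91.0, 90.6])

# Second-order (quadratic) response-surface model.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Maximize predicted conversion by minimizing its negative within the
# coded factor bounds.
res = minimize(lambda x: -model.predict(poly.transform([x]))[0],
               x0=[0.0, 0.0, 0.0], bounds=[(-1, 1)] * 3)
print("optimal coded factors:", res.x.round(2))
print("predicted conversion: %.1f%%" % -res.fun)
```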
Findings
A high conversion (94.01 percent) was obtained for methanol compared to methyl acetate (91.11 percent) and ethyl acetate (90.06 percent) through the lipase-catalyzed reaction at an oil-to-solvent ratio of 1:3 and an enzyme concentration of 10 percent at 30°C after 24 h. Similarly, for methanol, a high conversion (83.76 percent) was obtained using Rhizopus oryzae at an optimum pH of 5.5, an olive oil concentration of 25 g/L and 150 rpm, compared to methyl acetate (81.09 percent) and ethyl acetate (80.49 percent).
Originality/value
This research work shows that the acyl acceptors methyl acetate and ethyl acetate, novel solvents for biodiesel production, can also be used to obtain high yields, comparable to those of methanol, under optimized conditions.
Veerraju Gampala, Praful Vijay Nandankar, M. Kathiravan, S. Karunakaran, Arun Reddy Nalla and Ranjith Reddy Gaddam
Abstract
Purpose
The purpose of this paper is to analyze and build a deep learning model that can furnish statistics of COVID-19 and forecast pandemic outbreaks using the Kaggle open research COVID-19 data set. As up-to-date COVID-19 data are collected by governments, deep learning techniques can be used to predict future outbreaks of coronavirus. The existing long short-term memory (LSTM) model is fine-tuned to forecast the outbreak of COVID-19 with better accuracy, and an empirical data exploration with advanced visualization has been made to comprehend the outbreak of coronavirus.
Design/methodology/approach
This research work presents a fine-tuned LSTM deep learning model using three hidden layers, 200 LSTM unit cells, the ReLU activation function, the Adam optimizer, mean square error as the loss function, 200 epochs and finally one dense layer to predict one value each time.
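A minimal sketch of an LSTM of this shape in Keras follows; the look-back window, data pipeline and exact layer arrangement are assumptions, since only the hyperparameters listed above are reported here.

```python
# Sketch of the described architecture: three LSTM hidden layers of 200
# units, ReLU activation, Adam optimizer, MSE loss, one dense output unit.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 14  # assumed look-back window (days of case counts)

model = keras.Sequential([
    keras.Input(shape=(WINDOW, 1)),
    layers.LSTM(200, activation="relu", return_sequences=True),
    layers.LSTM(200, activation="relu", return_sequences=True),
    layers.LSTM(200, activation="relu"),
    layers.Dense(1),  # predicts one value each time
])
model.compile(optimizer="adam", loss="mse")

# Toy training data standing in for the Kaggle COVID-19 series.
X = np.random.rand(100, WINDOW, 1)
y = np.random.rand(100, 1)
model.fit(X, y, epochs=200, verbose=0)
print(model.predict(X[:1]).shape)  # -> (1, 1)
```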
Findings
LSTM is found to be more effective in forecasting future values. Hence, the fine-tuned LSTM model produces accurate predictions when applied to the COVID-19 data set.
Originality/value
To the authors' knowledge, this is the first time a fine-tuned LSTM model has been developed and tested on the COVID-19 data set to forecast the pandemic outbreak.
Federico Barravecchia, Luca Mastrogiacomo and Fiorenzo Franceschini
Abstract
Purpose
The aim of this study is to enhance product quality management by proposing a framework for the classification of anomalies in the digital voice of customer (VoC), i.e. user feedback on product/service usage gathered from online sources such as online reviews. By categorizing significant deviations in the content of digital VoC, the research seeks to provide actionable insights for quality improvement.
Design/methodology/approach
The study proposes the application of topic modeling algorithms, in particular the structural topic model, to large datasets of digital VoC, enabling the identification and classification of customer feedback into distinct topics. This approach helps to systematically analyze deviations from expected feedback patterns, providing early detection of potential quality issues or shifts in customer preferences. By focusing on anomalies in digital VoC, the study offers a dynamic framework for improving product quality and enhancing customer satisfaction.
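The study applies the structural topic model; as a rough stand-in for that step, the Python sketch below runs plain latent Dirichlet allocation (LDA) over a handful of review snippets to show how digital VoC can be grouped into topics. The example texts and parameters are invented for illustration, not taken from the study.

```python
# Stand-in for the structural topic model: plain LDA over digital VoC text.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [  # hypothetical digital VoC snippets
    "battery drains fast after the last update",
    "great camera but the battery life is poor",
    "delivery was late and the box arrived damaged",
    "packaging was damaged, slow shipping",
    "camera quality is excellent in low light",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# Print the top terms per topic to label the feedback themes.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```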
Findings
The research categorizes anomalies into spike, level, trend and seasonal types, each with distinct characteristics and implications for quality management. Case studies illustrate how these anomalies can signal critical shifts in customer sentiment and behavior, highlighting the importance of targeted responses to maintain or enhance product quality.
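As a sketch of how one of these anomaly types (a spike) might be flagged in a topic-prevalence series, the snippet below applies a rolling z-score to a synthetic weekly series; the window and threshold are illustrative assumptions, not the study's procedure.

```python
# Flag spike anomalies in a topic-prevalence series with a rolling z-score.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prevalence = pd.Series(0.10 + 0.01 * rng.standard_normal(52))  # weekly share
prevalence.iloc[30] = 0.25  # injected spike (e.g. sudden defect complaints)

# Compare each week against statistics of the preceding 8 weeks only.
mu = prevalence.rolling(window=8).mean().shift(1)
sd = prevalence.rolling(window=8).std().shift(1)
z = (prevalence - mu) / sd

spikes = prevalence[z > 3]  # assumed threshold
print(spikes)  # week 30 should be flagged
```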
Research limitations/implications
Despite its contributions, the study has some limitations. Its reliance on historical data means the findings may not hold in rapidly changing markets. Additionally, text mining techniques may miss implicit customer sentiment.
Practical implications
The findings suggest that companies can enhance their quality tracking tools by integrating digital VoC anomaly detection into their standard practices, potentially leading to more responsive and effective quality management systems.
Originality/value
This paper introduces a novel framework for interpreting digital VoC anomalies within the Quality 4.0 context. By integrating text mining techniques with traditional quality tracking, it offers a new approach for leveraging customer feedback to drive continuous improvement.
Suhang Yang, Tangrui Chen and Zhifeng Xu
Abstract
Purpose
Recycled aggregate self-compacting concrete (RASCC) has the potential for sustainable resource utilization and has been widely applied. Predicting the compressive strength (CS) of RASCC is challenging due to its complex composite nature and nonlinear behavior.
Design/methodology/approach
This study comprehensively evaluated commonly used machine learning (ML) techniques, including artificial neural networks (ANN), random trees (RT), bagging (BG) and random forests (RF), for predicting the CS of RASCC. The results indicate that RF and ANN models typically have advantages, with higher R2 values and lower root mean square error (RMSE), mean square error (MSE) and mean absolute error (MAE) values.
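A condensed sketch of such a model comparison follows; the synthetic mix-design features, data and use of scikit-learn are assumptions standing in for the study's actual RASCC dataset.

```python
# Compare RF and ANN regressors on synthetic mix-design data with the
# metrics used in the study (R2, RMSE, MSE, MAE).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 5))  # e.g. cement, water, RA replacement ratio ...
y = 30 + 25 * X[:, 0] - 10 * X[:, 2] + rng.normal(0, 2, 300)  # toy CS (MPa)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [
    ("RF", RandomForestRegressor(random_state=0)),
    ("ANN", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)),
]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name}: R2={r2_score(y_te, pred):.3f} "
          f"RMSE={mse ** 0.5:.2f} MSE={mse:.2f} "
          f"MAE={mean_absolute_error(y_te, pred):.2f}")
```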
Findings
The combination of ML and the Shapley additive explanations (SHAP) interpretability algorithm provides physical rationality, allowing engineers to adjust mix proportions based on parameter analysis to predict and design RASCC. The sensitivity analysis of the ML models indicates that ANN's interpretability is weaker than that of the tree-based algorithms (RT, BG and RF). ML regression technology has high accuracy, good interpretability and great potential for predicting the CS of RASCC.
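A minimal sketch of the SHAP step with a tree model is shown below; the data are synthetic and the features are placeholders, since the actual RASCC mix parameters are not listed here.

```python
# SHAP importances for a tree model, mirroring the interpretability step.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))  # placeholder mix-design features
y = 20 * X[:, 0] - 8 * X[:, 3] + rng.normal(0, 1, 200)

model = RandomForestRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Mean |SHAP| per feature approximates global feature importance.
print(np.abs(shap_values).mean(axis=0).round(3))
```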
Originality/value
ML regression technology has high accuracy, good interpretability and great potential for predicting the CS of RASCC.
Surabhi Singh, Shiwangi Singh, Alex Koohang, Anuj Sharma and Sanjay Dhir
Abstract
Purpose
The primary aim of this study is to detail the use of soft computing techniques in business and management research. Its objectives are as follows: to conduct a comprehensive scientometric analysis of publications in the field of soft computing, to explore the evolution of keywords, to identify key research themes and latent topics and to map the intellectual structure of soft computing in the business literature.
Design/methodology/approach
This research offers a comprehensive overview of the field by synthesising 43 years (1980–2022) of soft computing research from the Scopus database. It employs descriptive analysis, topic modelling (TM) and scientometric analysis.
Findings
This study's co-citation analysis identifies three primary categories of research in the field: the components, the techniques and the benefits of soft computing. Additionally, this study identifies 16 key study themes in the soft computing literature using TM, including decision-making under uncertainty, multi-criteria decision-making (MCDM), the application of deep learning in object detection and fault diagnosis, circular economy and sustainable development and a few others.
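To illustrate the co-citation step, the sketch below builds a small co-citation network with networkx from hypothetical reference lists and ranks references by centrality; the data and this simplified pipeline are assumptions, not the study's actual bibliometric workflow.

```python
# Toy co-citation analysis: two references are linked whenever the same
# article cites both; centrality hints at intellectually central works.
from itertools import combinations

import networkx as nx

# Hypothetical reference lists of four citing articles.
ref_lists = [
    ["Zadeh1965", "Mamdani1975", "Takagi1985"],
    ["Zadeh1965", "Takagi1985"],
    ["Holland1975", "Goldberg1989", "Zadeh1965"],
    ["Holland1975", "Goldberg1989"],
]

G = nx.Graph()
for refs in ref_lists:
    for a, b in combinations(sorted(refs), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)  # accumulate co-citation counts

# Rank references by degree centrality in the co-citation network.
for node, c in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {c:.2f}")
```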
Practical implications
This analysis offers a valuable understanding of soft computing for researchers and industry experts and highlights potential areas for future research.
Originality/value
This study uses scientific mapping and performance indicators to analyse a large corpus of 4,512 articles in the field of soft computing. It makes significant contributions to the intellectual and conceptual framework of soft computing research by providing a comprehensive overview of the soft computing literature covering a period of four decades and identifying significant trends and topics to direct future research.