Search results
M. Karthik, Solomon Oyebisi, Pshtiwan Shakor, Sathvik Sharath Chandra, L. Prajwal and U.S. Agrawal
Abstract
Purpose
This work aims to investigate the feasibility of recycling waste plastic (polyethylene terephthalate) as a coarse aggregate for producing blended cement concrete modified with fly ash and pond ash.
Design/methodology/approach
Blended cement concretes of low, medium and high controlled strength, modified with varied proportions of fly ash and pond ash, were produced. Manufactured sand and recycled plastic coarse aggregate (RPCA) replaced the normal fine and coarse aggregates. Concrete samples were tested for workability, mechanical and durability characteristics. Microstructural analysis was performed on the cement concrete blended with fly and pond ashes and compared to conventional concrete samples.
Findings
All concrete mixes showed better flowability, with values greater than 200 mm. In addition, the maximum flow time was approximately 8 s. The wet density of the blended cement concrete-RPCA-based concretes was approximately 30% lower than that of conventional concrete. The compressive strengths of the controlled-strength mixes at 7 and 28 days were within the specified ranges. While the conventional concrete had slightly higher permeability, the blended cement concrete-RPCA-based concretes had better thermal resistivity and lower thermal conductivity. Scanning electron microscopy analysis revealed densification of the microstructure due to the filler effects of fly and pond ashes.
Originality/value
This study establishes the prospects of substituting RPCA for normal coarse aggregate in the production of controlled low-strength blended cement concrete, offering the benefits of structural fill concrete, lower permeability and thermal conductivity, higher thermal resistivity and reduced density and shrinkage.
Saima Yaqoob, Jaharah A. Ghani, Nabil Jouini, Shalina Sheik Muhamad, Che Hassan Che Haron and Afifah Juri
Abstract
Purpose
This study aims to investigate the machining performance of CVD-coated carbide tools by considering the most crucial machinability aspects: cutting force, tool life, surface roughness and chip morphology in high-speed hard turning of AISI 4340 alloy steel under a sustainable minimum quantity lubrication (MQL) environment.
Design/methodology/approach
To analyze the performance of coated carbide tools under an MQL environment, machining tests were performed in accordance with the Taguchi L9 orthogonal array, accommodating three crucial machining parameters: cutting speed (V = 300–400 m/min), feed rate (F = 0.1–0.2 mm/rev) and depth of cut (DOC = 0.2–0.4 mm). The measured or calculated values obtained in each experimental run were validated against normality assumptions before drawing any statistical inferences. Taguchi signal-to-noise (S/N) ratio and analysis of variance methodologies were used to examine the effect of the machining variables on the performance outcomes.
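As an illustration of the workflow this abstract describes (not the authors' code), the sketch below builds the standard L9(3^3) orthogonal array over the quoted parameter ranges and computes the smaller-is-better S/N ratio; the cutting-force responses are placeholders, not measured data.

```python
# Hypothetical worked example of a Taguchi L9 analysis (not the authors' code).
import numpy as np

# Standard L9(3^3) orthogonal array: 9 runs, 3 factors, 3 levels (coded 0/1/2).
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

factors = {
    "V (m/min)":  [300, 350, 400],   # cutting speed
    "F (mm/rev)": [0.1, 0.15, 0.2],  # feed rate
    "DOC (mm)":   [0.2, 0.3, 0.4],   # depth of cut
}

# Placeholder cutting forces (N), one per run.
force = np.array([185.0, 210, 240, 205, 235, 220, 230, 215, 250])

# "Smaller is better" S/N ratio: S/N = -10*log10(mean(y^2)).
# With one replicate per run, mean(y^2) reduces to y^2.
sn = -10 * np.log10(force ** 2)

# Mean S/N per factor level; the factor with the widest spread (delta)
# contributes most to the response, mirroring the ANOVA ranking.
for col, name in enumerate(factors):
    means = [sn[L9[:, col] == lv].mean() for lv in range(3)]
    print(f"{name}: {[round(m, 2) for m in means]}, delta = {max(means) - min(means):.2f}")
```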
Findings
The quantitative analysis revealed that the depth of cut exerted the most significant influence on cutting force, with a contribution of 60.72%. Cutting speed was identified as the primary variable affecting tool life, with a 47.58% contribution, while feed rate had the most dominant impact on surface roughness, with an overall contribution of 89.95%. The lowest cutting force (184.55 N) and the longest tool life (7.10 min) were achieved with low machining parameters at V = 300 m/min, F = 0.1 mm/rev and DOC = 0.2 mm. Conversely, the lowest surface roughness (496 nm) was achieved with high cutting speed, low feed rate and moderate depth of cut at V = 400 m/min, F = 0.1 mm/rev and DOC = 0.3 mm. Moreover, microscopic examination of the chips revealed serrated chip formation under all machining conditions; the degree of serration increased incrementally with cutting speed and feed rate.
Research limitations/implications
The study is limited to examining the effect of the machining parameters within the stated ranges of cutting speed, feed rate and depth of cut, as well as the other stated parameters.
Practical implications
Practitioners may consider adopting this machining technique to create a more sustainable working environment and to eliminate the disposal cost of used metal cutting fluid.
Social implications
By applying this machining technique, diseases caused by metal cutting fluid exposure among machinists will be significantly reduced, thereby improving their quality of life.
Originality/value
Hard turning is commonly carried out with advanced cutting tools such as ceramics, cubic boron nitride and polycrystalline cubic boron nitride to attain exceptional surface finish. However, the high cost of these tools necessitates exploration of alternative approaches. Therefore, this study investigates the potential of using cost-effective, multilayer-coated carbide tools under MQL conditions to achieve comparable surface quality.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/ILT-01-2024-0013/
Abstract
Purpose
An efficient e-waste management system is developed, aided by deep learning techniques. A smart bin system using Internet of Things (IoT) sensors is designed, in which the sensors detect the level of waste in the dustbin and the collected data are stored in the blockchain. An adaptive deep Markov random field (ADMRF) method is implemented to determine the weight of the waste, and its performance is boosted by optimizing its parameters with the improved coronavirus herd immunity optimization algorithm (ICVHIOA). The main objective of the developed ADMRF-based waste weight prediction is to minimize the root mean square error (RMSE) and mean absolute error (MAE) at testing time. If a bin is more than 80% full, an alert message is sent directly to the waste collector. Optimal route selection is carried out using the developed ICVHIOA for efficient collection of waste from the smart bins, with the objectives of reducing distance and time so as to minimize operational cost and environmental impact. The collected waste is then considered for recycling. The performance of the implemented IoT- and blockchain-based smart dustbin is evaluated by comparing it with other existing smart dustbins for e-waste management.
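As an illustration only, the following sketch shows the two pieces of the pipeline the abstract states explicitly: the 80%-full alert rule and the RMSE/MAE objectives. The ADMRF predictor and ICVHIOA optimizer are not reproduced; notify and the sample numbers are hypothetical.

```python
# Minimal sketch of the smart-bin alert rule and the stated error metrics
# (RMSE, MAE); the ADMRF/ICVHIOA components themselves are not shown here.
import numpy as np

FILL_ALERT_THRESHOLD = 0.80  # alert the collector when a bin is >80% full

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def check_bin(fill_level, notify):
    """fill_level in [0, 1] from the IoT level sensor; notify is hypothetical."""
    if fill_level > FILL_ALERT_THRESHOLD:
        notify("Bin above 80% - schedule collection")

# Example: score a hypothetical weight predictor on held-out readings.
y_true = [12.0, 7.5, 20.1]
y_pred = [11.2, 8.0, 19.5]
print("RMSE:", rmse(y_true, y_pred), "MAE:", mae(y_true, y_pred))
```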
Design/methodology/approach
The developed e-waste management system collects waste and helps avoid diseases caused by dumped waste. Disposal and recycling of e-waste are necessary to decrease pollution and to manufacture new products from the waste.
Findings
The RMSE of the implemented framework was 33.65% better than that of a convolutional neural network (CNN), 27.12% better than a recurrent neural network (RNN), 22.27% better than ResNet and 9.99% better than long short-term memory (LSTM).
Originality/value
The proposed e-waste management system achieved enhanced performance in both weight prediction and optimal route selection when compared with conventional methods.
S.M. Sayem, Azharul Islam, Mohammad Rajib Uddin and Jarin Sadia Promy
Abstract
Purpose
The study aims to identify the determinants of customer satisfaction in the electronic commerce (e-commerce) industry in Bangladesh. It also investigates whether acceptance of IT innovation mediates the relationship between the determinants of e-commerce and customer satisfaction.
Design/methodology/approach
A survey questionnaire was designed and distributed among the customers of e-commerce businesses. Data were collected from 408 respondents, mostly from urban areas of the country. The collected data were analysed using the partial least squares approach in SmartPLS4. First, the measurement model was applied to determine the validity and reliability of the dataset. Then, the structural model was used to test the hypotheses.
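The paper's analysis was run in SmartPLS4, a GUI tool. Purely as an illustration of the kind of reliability check a measurement model involves, here is a minimal sketch computing Cronbach's alpha on synthetic item scores; the indicator count and noise level are assumptions.

```python
# Illustrative reliability check (Cronbach's alpha) on synthetic data;
# not the study's SmartPLS4 output.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(408, 1))                # 408 respondents, as in the study
items = latent + 0.5 * rng.normal(size=(408, 4))  # 4 correlated indicators (assumed)
print(f"alpha = {cronbach_alpha(items):.2f}")     # > 0.7 is commonly deemed acceptable
```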
Findings
The results showed that customer service, perceived ease of use and customer trust in e-commerce services have a significant positive impact on customer satisfaction. The acceptance of IT innovation, which showed a positive influence on customer satisfaction, enhanced customer satisfaction when accompanied by perceived ease of use and digital literacy.
Practical implications
The results offer valuable insights for e-commerce businesses in designing their products and services and in adopting policies to achieve long-term customer loyalty.
Originality/value
This is the first study that incorporates IT innovation acceptance as a mediating variable. Although a number of factors have been identified as the determinants of customer satisfaction, the specific mechanism of IT innovation acceptance as a mediator between predictors and customer satisfaction is unique in this study.
Dean Neu and Gregory D. Saxton
Abstract
Purpose
This study is motivated to provide a theoretically informed, data-driven assessment of the consequences associated with the participation of non-human bots in social accountability movements; specifically, the anti-inequality/anti-corporate #OccupyWallStreet conversation stream on Twitter.
Design/methodology/approach
A latent Dirichlet allocation (LDA) topic modeling approach as well as XGBoost machine learning algorithms are applied to a dataset of 9.2 million #OccupyWallStreet tweets in order to analyze not only how the speech patterns of bots differ from other participants but also how bot participation impacts the trajectory of the aggregate social accountability conversation stream. The authors consider two research questions: (1) do bots speak differently than non-bots and (2) does bot participation influence the conversation stream.
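A minimal sketch of such a two-stage pipeline (LDA topic features feeding a boosted-tree classifier), assuming scikit-learn and xgboost and using toy tweets with a hypothetical bot label; it is not the authors' code.

```python
# Two-stage sketch: LDA topic proportions as features, then XGBoost.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from xgboost import XGBClassifier

tweets = [
    "occupy wall street we are the 99 percent",
    "banks got bailed out we got sold out",
    "follow and retweet for free followers now",   # bot-like placeholder
    "click this link for instant followers",       # bot-like placeholder
]
is_bot = [0, 0, 1, 1]  # hypothetical labels

# Stage 1: LDA topic proportions per tweet.
counts = CountVectorizer(stop_words="english").fit_transform(tweets)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Stage 2: gradient-boosted classifier on the topic features.
clf = XGBClassifier(n_estimators=10, max_depth=2).fit(topics, is_bot)
print(clf.predict(topics))
```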
Findings
The results indicate that bots do speak differently than non-bots and that bots exert both weak form and strong form influence. Bots also steadily become more prevalent. At the same time, the results show that bots also learn from and adapt their speaking patterns to emphasize the topics that are important to non-bots and that non-bots continue to speak about their initial topics.
Research limitations/implications
These findings help improve understanding of the consequences of bot participation within social media-based democratic dialogic processes. The analyses also raise important questions about the increasing importance of apparently nonhuman actors within different spheres of social life.
Originality/value
The current study is the first, to the authors’ knowledge, that uses a theoretically informed Big Data approach to simultaneously consider the micro details and aggregate consequences of bot participation within social media-based dialogic social accountability processes.
Manju Priya Arthanarisamy Ramaswamy and Suja Palaniswamy
Abstract
Purpose
The aim of this study is to investigate the subject-independent emotion recognition capabilities of EEG and peripheral physiological signals, namely electrooculogram (EOG), electromyography (EMG), electrodermal activity (EDA), temperature, plethysmograph and respiration. The experiments are conducted on both modalities independently and in combination. The study ranks the physiological signals based on the prediction accuracy obtained on test data using time- and frequency-domain features.
Design/methodology/approach
The DEAP dataset is used in this experiment. Time- and frequency-domain features of EEG and physiological signals are extracted, followed by correlation-based feature selection. Classifiers, namely Naïve Bayes, logistic regression, linear discriminant analysis, quadratic discriminant analysis, logit boost and stacking, are trained on the selected features. Based on the performance of the classifiers on the test set, the best modality for each dimension of emotion is identified.
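As a hedged illustration of this pipeline, the sketch below extracts frequency-domain band-power features with Welch's method, ranks them by correlation with the label and trains one of the listed classifiers; the signals are synthetic stand-ins for DEAP recordings.

```python
# Sketch on synthetic signals (not DEAP data): Welch log band power ->
# correlation-based feature ranking -> one of the listed classifiers.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
fs = 128                                  # DEAP's downsampled rate
X_raw = rng.normal(size=(40, fs * 4))     # 40 trials x 4 s of one channel
y = rng.integers(0, 2, size=40)           # low/high label after a cut-off score

def log_band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return np.log(pxx[(f >= lo) & (f < hi)].sum())

# Frequency-domain features: theta, alpha and beta band power per trial.
bands = [(4, 8), (8, 13), (13, 30)]
X = np.array([[log_band_power(s, lo, hi) for lo, hi in bands] for s in X_raw])

# Correlation-based selection: keep the features most correlated with the label.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
X_sel = X[:, np.argsort(corr)[::-1][:2]]

clf = LogisticRegression().fit(X_sel, y)
print("training accuracy:", clf.score(X_sel, y))
```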
Findings
The experimental results with EEG as one modality and all physiological signals as another modality indicate that EEG signals are better at arousal prediction than physiological signals by 7.18%, while physiological signals are better at valence prediction than EEG signals by 3.51%. The valence prediction accuracy of EOG is superior to zygomaticus electromyography (zEMG) and EDA by 1.75%, at the cost of a higher number of electrodes. This paper concludes that valence can be measured from the eyes (EOG) while arousal can be measured from changes in blood volume (plethysmograph). The sorted order of physiological signals based on arousal prediction accuracy is plethysmograph, EOG (hEOG + vEOG), vEOG, hEOG, zEMG, tEMG, temperature, EMG (tEMG + zEMG), respiration, EDA, while based on valence prediction accuracy the sorted order is EOG (hEOG + vEOG), EDA, zEMG, hEOG, respiration, tEMG, vEOG, EMG (tEMG + zEMG), temperature and plethysmograph.
Originality/value
Many of the emotion recognition studies in the literature are subject-dependent, and the limited subject-independent studies report the average leave-one-subject-out (LOSO) validation result as accuracy. The work reported in this paper sets the baseline for subject-independent emotion recognition using the DEAP dataset by clearly specifying the subjects used in the training and test sets. In addition, this work specifies the cut-off score used to classify the scale as low or high in the arousal and valence dimensions. Generally, statistical features are used for emotion recognition with physiological signals as a modality, whereas in this work, time- and frequency-domain features of physiological signals and EEG are used. This paper concludes that valence can be identified from EOG while arousal can be predicted from the plethysmograph.
Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye
Abstract
Purpose
Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Deep learning approaches have gained popularity over traditional methods in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced approach, dynamically refining and amplifying model features to further elevate diagnostic capabilities. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.
Design/methodology/approach
To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
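A minimal PyTorch sketch of the CBAM block (channel attention followed by spatial attention) applied to a ResNet50-sized feature map, assuming the standard reduction ratio of 16 and a 7x7 spatial kernel; it is not the authors' implementation.

```python
# CBAM sketch: refine "what" via channel attention, then "where" via
# spatial attention, on a feature map shaped like ResNet50's last stage.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()
    def forward(self, x):
        x = x * self.ca(x)      # channel attention first
        return x * self.sa(x)   # then spatial attention

feat = torch.randn(1, 2048, 7, 7)  # ResNet50's final feature map shape
print(CBAM(2048)(feat).shape)      # torch.Size([1, 2048, 7, 7])
```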
Findings
The ResNet50-CBAM model outperformed existing deep learning classification methods such as the convolutional neural network (CNN), achieving a superior performance of 99.43%, 99.01%, 98.7% and 99.25% in accuracy, recall, precision and AUC, respectively, when compared to the existing classification methods using the same dataset.
Practical implications
Since the ResNet-CBAM fusion can capture spatial context while enhancing feature representation, it can be integrated into brain tumor classification software platforms for physicians, supporting enhanced clinical decision-making and improved brain tumor classification.
Originality/value
This research has not been published anywhere else.
Heru Agus Santoso, Brylian Fandhi Safsalta, Nanang Febrianto, Galuh Wilujeng Saraswati and Su-Cheng Haw
Abstract
Purpose
Plant cultivation holds a pivotal role in agriculture, necessitating precise disease identification for the overall health of plants. This research conducts a comprehensive comparative analysis between two prominent deep learning algorithms, convolutional neural network (CNN) and DenseNet121, with the goal of enhancing disease identification in tomato plant leaves.
Design/methodology/approach
The dataset employed in this investigation is a fusion of primary data and publicly available data, covering 13 distinct disease labels and a total of 18,815 images for model training. The data pre-processing workflow prioritized activities such as normalizing pixel dimensions, implementing data augmentation and achieving dataset balance, which were subsequently followed by the modeling and testing phases.
Findings
Experimental findings elucidated the superior performance of the DenseNet121 model over the CNN model in disease classification on tomato leaves. The DenseNet121 model attained a training accuracy of 98.27%, a validation accuracy of 87.47% and average recall, precision and F1-score metrics of 87%, 88% and 87%, respectively. The ultimate aim was to implement the optimal classifier in a mobile application, namely Tanamin.id; therefore, DenseNet121 was the preferred choice.
Originality/value
The integration of private and public data significantly contributes to determining the optimal method. The CNN method achieves a training accuracy of 90.41% and a validation accuracy of 83.33%, whereas the DenseNet121 method excels with a training accuracy of 98.27% and a validation accuracy of 87.47%. The DenseNet121 architecture, comprising 121 layers, a global average pooling (GAP) layer and a dropout layer, showcases its effectiveness. Leveraging categorical_crossentropy as the loss function and the stochastic gradient descent (SGD) optimizer with a learning rate of 0.001 guides the course of the training process. The experimental results unequivocally demonstrate the superior performance of DenseNet121 over CNN.
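A minimal Keras sketch of the quoted configuration (DenseNet121 backbone, GAP, dropout, SGD with learning rate 0.001 and categorical_crossentropy over the 13 labels); the dropout rate, input size and random initialization are assumptions not stated in the abstract.

```python
# Hedged sketch of the stated training configuration; assumed values are noted.
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras import layers, models, optimizers

base = DenseNet121(include_top=False, weights=None, input_shape=(224, 224, 3))
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),          # the GAP layer
    layers.Dropout(0.5),                      # dropout layer; rate assumed
    layers.Dense(13, activation="softmax"),   # 13 disease labels
])
model.compile(optimizer=optimizers.SGD(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```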
Karthik Padamata and Rama Devi Vangapandu
Abstract
Purpose
In the process of providing quality healthcare services, identifying the important healthcare attributes and their operational performance is crucial in the healthcare industry. Highlighting this, the authors have aimed to find the importance of certain healthcare attributes and their respective performance from the customers’ perspective in the Indian private tertiary healthcare facilities by conducting an importance-performance analysis (IPA).
Design/methodology/approach
For this study, the authors have derived 10 healthcare attributes from the literature and responses regarding their importance and performance were taken from 350 inpatients from 6 hospitals.
Findings
The analysis identified the most and least important and the highest- and lowest-performing healthcare attributes as perceived by the patients. In terms of attribute importance, the doctors' competencies and the provision of safe and effective patient care were ranked most important, whereas the doctors' competencies and accessibility were rated high in terms of performance. In addition, the importance ranks and performance scores helped in the development of a two-dimensional IPA grid.
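As an illustration of how such a grid is typically drawn (not the authors' data), a minimal sketch plotting importance against performance with grand-mean quadrant lines; the attribute scores below are hypothetical.

```python
# Illustrative importance-performance (IPA) grid with hypothetical scores.
import matplotlib.pyplot as plt

attrs = {
    # attribute: (mean importance, mean performance); values are placeholders
    "Doctors' competencies": (4.8, 4.5),
    "Safe & effective care": (4.7, 4.0),
    "Accessibility":         (4.0, 4.4),
    "Waiting time":          (4.2, 3.2),
}

imp = [v[0] for v in attrs.values()]
perf = [v[1] for v in attrs.values()]

fig, ax = plt.subplots()
ax.scatter(perf, imp)
for name, (i, p) in attrs.items():
    ax.annotate(name, (p, i))
# Grand means split the grid into the four IPA quadrants
# (e.g. high importance / low performance = "concentrate here").
ax.axhline(sum(imp) / len(imp), linestyle="--")
ax.axvline(sum(perf) / len(perf), linestyle="--")
ax.set_xlabel("Performance")
ax.set_ylabel("Importance")
plt.show()
```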
Practical implications
The IPA grid helped in identifying the areas for improvement, hence determining the need for implementation of significant strategies in the process of cost-effective high-quality healthcare service provision.
Originality/value
There is a paucity of IPA studies with a focus on the Indian healthcare system in identifying and demonstrating the healthcare components that need to be addressed for improvement. Emphasizing the gap, this is one of the pioneering studies that captured various healthcare attributes’ importance and their respective performance from the lens of hospital inpatients, which helped in the development of an IPA grid by clearly outlining the areas that need attention, especially in the post-pandemic scenario.
Prashant Srivastava, Karthik N.S. Iyer, Yu (Jade) Chu and Mohammed Rawwas
Abstract
Purpose
Borrowing from the dynamic capabilities theory and augmented by the relational view, the study investigates the criticality of supply chain agility in delivering operational performance while understanding the determinant role of key cross-firm resources. Additionally, based on the contingency theory, the interactive influence of two critical context factors, supply uncertainty and product complexity, is examined to enrich the understanding of the contingent nature of the operational performance implications.
Design/methodology/approach
The study draws its conclusions from the survey data collected from a 152-respondent sample of executives from US manufacturing firms. The empirical data analyses using partial least square structural equation modeling (PLS-SEM) relate agility to operational performance enhancements while incorporating the moderating effects of contextual factors.
Findings
The study relates agility capability to operational performance enhancements, while resource specificity and resource complementarity emerge as significant determinants of the capability. Results on the contingent impact of contextual factors suggest differential influences of supply uncertainty and product complexity on the agility–performance relationship: while the former enhances, the latter detracts from the relationship.
Originality/value
The study’s contributions suggest theory extensions into supply chains as contexts, reinforcing the importance of market-responsive capabilities and the foundational nature of supply chains as repositories of vital cross-firm resources. The contingent nature of the agility–performance relationship accents the importance of market context factors.