Magesh S., Niveditha V.R., Rajakumar P.S., Radha RamMohan S. and Natrayan L.
Abstract
Purpose
The ongoing coronavirus (COVID-19) pandemic has disrupted human lives all over the world, and this global crisis is very difficult to confront because the infection is transmitted by physical contact. As no vaccine or medical treatment is available to date, the only solution is to detect COVID-19 cases, block transmission, isolate the infected and protect the susceptible population. In this scenario, pervasive computing becomes essential, as it is environment-centric and data acquisition via smart devices provides a better way to analyse the disease across various parameters.
Design/methodology/approach
For data collection, an infrared thermometer, Hikvision's thermographic camera and an acoustic device are deployed. Data imputation is carried out by principal component analysis. A mathematical susceptible-infected-recovered (SIR) model is implemented for classifying COVID-19 cases, and a recurrent neural network (RNN) with long short-term memory (LSTM) is used to predict the COVID-19 disease.
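As an illustration of the SIR component referred to above, the following is a minimal sketch of the compartmental dynamics; the population size, contact rate and recovery rate are assumptions chosen for demonstration and are not taken from the paper.

```python
import numpy as np

def simulate_sir(population, beta, gamma, infected0, days):
    """Forward-Euler integration of the SIR compartment model."""
    s, i, r = population - infected0, float(infected0), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population   # S -> I transitions
        new_recoveries = gamma * i                   # I -> R transitions
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return np.array(history)

# Hypothetical parameters: contact rate beta, recovery rate gamma.
trajectory = simulate_sir(population=10_000, beta=0.35, gamma=0.1,
                          infected0=5, days=120)
print("peak infections:", round(trajectory[:, 1].max()))
```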
Findings
Machine learning models are very efficient in predicting diseases. In the proposed research work, besides the contribution of smart devices, an artificial intelligence detector is deployed to reduce false alarms. The mathematical SIR model is integrated with machine learning techniques for better classification, and implementing the RNN with long short-term memory (LSTM) furnishes better prediction by retaining the previous history.
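To picture the RNN-LSTM prediction component, the sketch below shows a minimal PyTorch sequence model that maps a sliding window of previous daily counts to the next value; the window length, hidden size and use of PyTorch are assumptions for illustration, not the configuration reported by the authors.

```python
import torch
import torch.nn as nn

class CasePredictor(nn.Module):
    """Single-layer LSTM mapping a window of daily case counts to the next count."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):              # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # prediction for the next time step

model = CasePredictor()
window = torch.randn(8, 14, 1)         # 8 samples of 14-day windows (dummy data)
print(model(window).shape)             # torch.Size([8, 1])
```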
Originality/value
The proposed research collected COVID-19 data using three types of sensors for temperature sensing and for detecting the respiratory rate. After pre-processing, 300 instances were taken for the experimental results, considering the demographic features: sex, patient age, temperature, finding and clinical trials. Classification is performed using the SIR model, and 188 confirmed cases are finally predicted using the RNN with LSTM model.
Reham Reda, Mohamed Saad, Mohamed Zaky Ahmed and Hoda Abd-Elkader
Abstract
Purpose
This paper aims to monitor, evaluate and adjust the joint quality of dissimilar friction stir welded AA2024-T3 and AA7075-T6 Al alloys.
Design/methodology/approach
Taguchi design of experiments and ANOVA analysis were applied. Tensile testing, visual inspection and macro- and microstructure investigations were carried out at each welding condition. In addition, the grain size of the stir zone and the heat input were measured.
Findings
Using Taguchi analysis, the optimum values of tool rotary speed, welding speed and axial load were 1,200 rpm, 100 mm/min and 1,300 kg, respectively, yielding the maximum tensile strength of the joints of 427 MPa. ANOVA analysis indicated that the most significant parameter on the joint strength is the tool rotary speed, followed by welding speed and axial load, with contributions of 67, 27 and 2 per cent, respectively. Best mixing between Al alloys in the stir zone with no defects was observed at moderate speeds because of proper heat input and grain size, resulting in high strength.
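The percentage contributions quoted above come from an ANOVA decomposition of the experimental responses. The sketch below shows, on invented tensile-strength values for a small two-factor design, how larger-is-better signal-to-noise ratios and percent contributions are typically computed; it is not the authors' orthogonal array or their measured data, so the numbers will not match the 67/27/2 per cent reported.

```python
import numpy as np
import pandas as pd

# Hypothetical responses (tensile strength, MPa) for a small design.
runs = pd.DataFrame({
    "rpm":   [1000, 1000, 1200, 1200, 1400, 1400],
    "speed": [  80,  120,   80,  120,   80,  120],
    "uts":   [ 385,  370,  427,  410,  395,  380],
})

# Larger-is-better S/N ratio for each single-replicate run: -10*log10(1/y^2).
runs["sn"] = -10 * np.log10(1.0 / runs["uts"] ** 2)

grand_mean = runs["uts"].mean()
ss_total = ((runs["uts"] - grand_mean) ** 2).sum()

def percent_contribution(factor):
    """ANOVA-style contribution: between-level sum of squares over total sum of squares."""
    levels = runs.groupby(factor)["uts"].agg(["mean", "count"])
    ss_factor = (levels["count"] * (levels["mean"] - grand_mean) ** 2).sum()
    return 100 * ss_factor / ss_total

for factor in ("rpm", "speed"):
    print(factor, runs.groupby(factor)["sn"].mean().to_dict(),
          f"contribution ~ {percent_contribution(factor):.1f}%")
```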
Originality/value
A relation between structure characteristics of the joint, the process parameters and the joint strength was established to control the joint quality.
Steven Buchanan and Zamzam Husain
Abstract
Purpose
The purpose is to provide insight into the social media related information behaviours of Muslim women within Arab society, and to explore issues of societal constraint and control, and impact on behaviours.
Design/methodology/approach
The study conducted semi-structured interviews with Muslim women resident within the capital city of a nation within the Arabian Peninsula.
Findings
Social media provides the study participants with an important source of information and social connection, and a medium for personal expression. However, use is constrained within sociocultural boundaries, and monitored by husbands and/or male relatives. Pseudonymous accounts and carefully managed privacy settings are used to circumvent boundaries and pursue needs, but not without risk of social transgression. The authors provide evidence of systematic marginalisation, but also of resilience and the agency to overcome it. Self-protective acts of secrecy and deception are employed not only to cope with small-world life, but also to circumvent boundaries and move between social and information worlds.
Research limitations/implications
Findings should not be considered representative of Muslim women as a whole, as Muslim women are not a homogenous group, and Arabian Peninsula nations are variously more conservative or liberal than others.
Practical implications
Findings contribute to practical and conceptual understanding of digital literacy with implications for education programmes including social, moral and intellectual aspects.
Originality/value
Findings contribute to conceptual and practical understanding of information poverty, evidencing structural inequalities as a major contributory factor, and that self-protective information behaviours, often considered reductive, can also be expansive in nature.
Rajakumar B.R., Gokul Yenduri, Sumit Vyas and Binu D.
Abstract
Purpose
This paper aims to propose a new assessment system module for handling the comprehensive answers written through the answer interface.
Design/methodology/approach
The working principle comprises three major phases. Preliminary semantic processing: in the pre-processing work, the keywords are extracted from each answer given by the course instructor; this answer is considered the key for evaluating the answers written by the e-learners. Keyword and semantic processing of e-learners for hierarchical-clustering-based ontology construction: for each answer given by each student, the keywords and the semantic information are extracted and clustered (hierarchical clustering) using a new improved rider optimization algorithm known as Rider with Randomized Overtaker Update (RR-OU). Ontology matching evaluation: once the ontology structures are complete, a new alignment procedure is used to find the similarity between two different documents. Moreover, the objective defined in this work focuses on how exactly the matching process is done for evaluating the document. Finally, the e-learners are classified based on their grades.
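The keyword-clustering step can be pictured with a standard agglomerative routine over keyword vectors, as sketched below on invented learner answers; note that the paper drives this clustering with its RR-OU rider optimization and builds full ontologies, neither of which is reproduced here, so TF-IDF plus hierarchical linkage is only a stand-in.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

# Invented learner answers; in the paper these come from the answer interface.
answers = [
    "photosynthesis converts light energy into chemical energy",
    "plants use sunlight to make glucose during photosynthesis",
    "newton's second law relates force mass and acceleration",
]

# Keyword representation via TF-IDF (a stand-in for the keyword/semantic extraction step).
vectors = TfidfVectorizer(stop_words="english").fit_transform(answers).toarray()

# Agglomerative (hierarchical) clustering of the answer vectors.
tree = linkage(vectors, method="average", metric="cosine")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 2]: the two photosynthesis answers group together
```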
Findings
On observing the outcomes, the proposed model shows a lower relative mean squared error when the weights were (0.5, 0, 0.5); it was 71.78% and 16.92% better than the error values attained for (0, 0.5, 0.5) and (0.5, 0.5, 0). On examining the outcomes, the error values attained for (1, 0, 0) were found to be lower than the values when the weights were (0, 0, 1) and (0, 1, 0). Here, the mean absolute error (MAE) for weight (1, 0, 0) was 33.99% and 51.52% better than the MAE values for weights (0, 0, 1) and (0, 1, 0). In the overall error analysis, the mean absolute percentage error of the implemented RR-OU model was 3.74% and 56.53% better than the k-means and collaborative filtering + Onto + sequential pattern mining models, respectively.
Originality/value
This paper adopts the latest optimization algorithm called RR-OU for proposing a new assessment system module for handling the comprehensive answers written through the answer interface. To the best of the authors’ knowledge, this is the first work that uses RR-OU-based optimization for developing a new ontology alignment-based online assessment of e-learners.
Ranjeet Yadav and Ashutosh Tripathi
Abstract
Purpose
Multiple-input multiple-output (MIMO) has emerged as one of the most noteworthy technologies in recent wireless applications because of its powerful ability to improve bandwidth efficiency and performance, i.e. by exploiting its unique spatial multiplexing capability and spatial diversity gain. For enhanced communication in next-generation networks, MIMO and orthogonal frequency division multiple access systems are combined to facilitate spatial multiplexing on time-frequency resource blocks (RBs). This paper aims to propose a novel approach for maximizing the throughput of cell-edge users and cell-center users.
Design/methodology/approach
In this work, the specified multi-objective function is reformulated as a single objective function, which is solved by the introduction of a new improved algorithm. This optimization problem is resolved by fine-tuning certain parameters such as the power assigned per RB, the cell-center user, the cell-edge user and the RB allocation. The fine-tuning of parameters is attained by a new improved Lion algorithm (LA), termed the Lion with new cub generation (LA-NCG) model. Finally, the betterment of the presented approach is validated against existing models in terms of signal-to-interference-plus-noise ratio, throughput and so on.
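As a point of reference for the throughput objective being maximized, the snippet below sums Shannon-capacity rates over allocated resource blocks for given per-RB power and channel conditions; the 180 kHz RB bandwidth, the SINR model and all numbers are illustrative assumptions, and the LA-NCG search itself is not reproduced.

```python
import numpy as np

RB_BANDWIDTH_HZ = 180e3   # typical LTE resource-block bandwidth (assumption)

def user_throughput(power_per_rb, channel_gain, noise_plus_interference):
    """Sum-rate over allocated RBs: B * log2(1 + SINR) per resource block."""
    sinr = power_per_rb * channel_gain / noise_plus_interference
    return np.sum(RB_BANDWIDTH_HZ * np.log2(1.0 + sinr))

# Hypothetical allocation: 4 RBs for a cell-centre user, 4 for a cell-edge user.
centre = user_throughput(np.full(4, 0.2), np.array([8.0, 7.5, 9.0, 8.2]), 0.1)
edge   = user_throughput(np.full(4, 0.4), np.array([0.9, 1.1, 0.8, 1.0]), 0.1)
print(f"centre ~ {centre / 1e6:.2f} Mbit/s, edge ~ {edge / 1e6:.2f} Mbit/s")
```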
Findings
On examining the outputs, the adopted LA-NCG model for 4BS was 66.67%, 66.67% and 20% superior to the existing joint-processing coordinated multipoint-based dual decomposition method (JC-DDM), fractional programming (FP) and LA models. In addition, the throughput of the conventional JC-DDM, FP and LA models lies around 10, 45 and 35, respectively, at the 100th iteration, whereas the presented LA-NCG scheme accomplishes a higher throughput of 58. Similarly, the throughput of the adopted scheme observed for 8BS was 59.68%, 44.19% and 9.68% superior to the existing JC-DDM, FP and LA models. Thus, the enhancement of the adopted LA-NCG model is validated effectively by the attained outcomes.
Originality/value
This paper adopts the latest optimization algorithm, called LA-NCG, to establish a novel approach for maximizing the throughput of cell-edge users and cell-center users. This is the first work that uses LA-NCG-based optimization to fine-tune parameters such as the power assigned per RB, the cell-center user, the cell-edge user and the RB allocation.
Rajakumar Krishnan, Arunkumar Thangavelu, P. Prabhavathy, Devulapalli Sudheer, Deepak Putrevu and Arundhati Misra
Abstract
Purpose
Extracting suitable features to represent an image based on its content is a very tedious task, especially in remote sensing, where high-resolution images contain a variety of objects on the Earth's surface. The Mahalanobis distance metric is used to measure the similarity between the query and database images; the image with the lowest distance is indexed at the top as the most relevant to the query.
Design/methodology/approach
This paper aims to develop an automatic feature extraction system for remote sensing image data. Haralick texture features based on the Contourlet transform are fused with statistical features extracted from QuadTree (QT) decomposition to form the feature set representing the input data. The extracted features retrieve similar images from large image datasets using an image-based query through the web-based user interface.
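The flavour of this retrieval pipeline can be sketched with off-the-shelf tools: grey-level co-occurrence (Haralick-style) texture properties plus simple block statistics as features, and Mahalanobis-distance ranking of database images against a query. The paper computes Haralick features in the Contourlet domain and statistics from a QT decomposition; the spatial-domain stand-ins below (scikit-image >= 0.19 and SciPy assumed, random images in place of real scenes) do not replicate that.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from skimage.feature import graycomatrix, graycoprops

def texture_features(image_u8):
    """Haralick-style GLCM properties plus crude quadrant statistics."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = [graycoprops(glcm, p)[0, 0]
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    # Crude stand-in for QuadTree statistics: mean and std of the four quadrants.
    h, w = image_u8.shape
    quads = [image_u8[:h // 2, :w // 2], image_u8[:h // 2, w // 2:],
             image_u8[h // 2:, :w // 2], image_u8[h // 2:, w // 2:]]
    stats = [s for q in quads for s in (q.mean(), q.std())]
    return np.array(props + stats)

rng = np.random.default_rng(0)
database = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
query = database[3]

feats = np.stack([texture_features(img) for img in database])
inv_cov = np.linalg.pinv(np.cov(feats, rowvar=False))   # for Mahalanobis distance
q = texture_features(query)

ranking = np.argsort([mahalanobis(q, f, inv_cov) for f in feats])
print("top matches:", ranking[:5])   # index 3 (the query itself) should rank first
```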
Findings
The performance of the developed retrieval system has been analyzed using precision, recall and F1 score. The proposed feature vector gives better performance, with 0.69 precision for the top 50 relevant retrieved results, than other existing multiscale-based feature extraction methods.
Originality/value
The main contribution of this paper is developing a texture feature vector in a multiscale domain by combining the Haralick texture properties in the Contourlet domain with statistical features from QT decomposition. Only 207 features are required to represent an image, a much lower dimensionality than other texture methods, and the performance is superior to other state-of-the-art methods.
Abstract
Purpose
Stock market forecasters are focusing on creating a reliable approach for predicting the stock price. The fundamental principle of effective stock market prediction is not only to produce the maximum outcomes but also to reduce unreliable stock price estimates. In the stock market, sentiment analysis enables people to make educated decisions regarding investment in a business, and stock analysis identifies the business of an organization or a company. In fact, the prediction of stock prices is more complex owing to their highly volatile nature, which varies with a wide range of investor sentiment, economic and political factors, changes in leadership and other factors. This prediction often becomes ineffective when considering only historical data or textual information; attempts are therefore made to make the prediction more precise by combining news sentiment with the stock price information.
Design/methodology/approach
This paper introduces a prediction framework via sentiment analysis, whereby both stock data and news sentiment data are considered. From the stock data, technical indicator-based features such as moving average convergence divergence (MACD), relative strength index (RSI) and moving average (MA) are extracted. At the same time, the news data are processed to determine the sentiments through the following steps: (1) pre-processing, where keyword extraction and sentiment categorization take place; (2) keyword extraction, where WordNet and the sentiment categorization process are applied; (3) feature extraction, where the proposed holoentropy-based features are extracted; and (4) classification, where a deep neural network returns the sentiment output. To make the system more accurate at predicting the sentiment, the training of the neural network (NN) is carried out by a self-improved whale optimization algorithm (SIWOA). Finally, an optimized deep belief network (DBN) is used to predict the stock, considering the features of the stock data and the sentiment results from the news data. Here, the weights of the DBN are tuned by the new SIWOA.
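For reference, the technical indicators named above can be computed from a closing-price series roughly as follows; the periods (12/26/9 for MACD, 14 for RSI, 20 for the moving average) are common defaults taken as assumptions, not necessarily the settings used in the paper.

```python
import pandas as pd

def add_indicators(close: pd.Series) -> pd.DataFrame:
    """MACD, RSI and simple moving average from a closing-price series."""
    ema12 = close.ewm(span=12, adjust=False).mean()
    ema26 = close.ewm(span=26, adjust=False).mean()
    macd = ema12 - ema26
    signal = macd.ewm(span=9, adjust=False).mean()

    delta = close.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    rsi = 100 - 100 / (1 + gain / loss)

    ma20 = close.rolling(20).mean()
    return pd.DataFrame({"macd": macd, "signal": signal, "rsi": rsi, "ma20": ma20})

# Usage with dummy prices; real input would be a company's daily closing prices.
prices = pd.Series(range(100, 160)).astype(float)
print(add_indicators(prices).tail())
```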
Findings
The performance of the adopted scheme is compared with existing models in terms of certain measures. The stock dataset includes two companies, Reliance Communications and Relaxo Footwear. In addition, each company has three datasets: (a) daily, from 1-1-2019 to 1-12-2020; (b) monthly, from Jan 2000 to Dec 2020; and (c) yearly, from the year 2000. Moreover, the adopted NN + DBN + SIWOA model was compared with the traditional classifiers LSTM, NN + RF, NN + MLP and NN + SVM, and with the existing optimization algorithms NN + DBN + MFO, NN + DBN + CSA, NN + DBN + WOA and NN + DBN + PSO, correspondingly. Further, the performance was calculated for learning percentages of 60, 70, 80 and 90 in terms of measures such as MAE, MSE and RMSE for the six datasets. On observing the graph, the MAE of the adopted NN + DBN + SIWOA model was 91.67, 80, 91.11 and 93.33% superior to the existing classifiers LSTM, NN + RF, NN + MLP and NN + SVM, respectively, for dataset 1. The proposed NN + DBN + SIWOA method holds the minimum MAE value (∼0.21) at learning percentage 80 for dataset 1, whereas the traditional models hold values of NN + DBN + CSA (∼1.20), NN + DBN + MFO (∼1.21), NN + DBN + PSO (∼0.23) and NN + DBN + WOA (∼0.25), respectively. From the table, it is clear that the RMSE of the proposed NN + DBN + SIWOA model was 3.14, 1.08, 1.38 and 15.28% better than the existing classifiers LSTM, NN + RF, NN + MLP and NN + SVM, respectively, for dataset 6. In addition, the MSE of the adopted NN + DBN + SIWOA method attains lower values (∼54944.41) for dataset 2 than other existing schemes, namely NN + DBN + CSA (∼9.43), NN + DBN + MFO (∼56728.68), NN + DBN + PSO (∼2.95) and NN + DBN + WOA (∼56767.88), respectively.
Originality/value
This paper has introduced a prediction framework via sentiment analysis, whereby both the stock data and the news sentiment data are considered. From the stock data, technical indicator-based features such as MACD, RSI and MA are extracted. The proposed work is therefore well suited to stock market prediction.
A. Arun Negemiya, S. Rajakumar and V. Balasubramanian
Abstract
Purpose
The purpose of this paper is to develop an empirical relationship for predicting the strength of titanium to austenitic stainless steel joints fabricated by the diffusion bonding (DB) process. Process parameters such as bonding pressure, bonding temperature and holding time play the main role in deciding the joint strength.
Design/methodology/approach
In this study, a three-factor, five-level central composite rotatable design was used to conduct the minimum number of experiments involving all the combinations of parameters.
Findings
An empirical relationship was developed to predict the lap shear strength (LSS) of the joints incorporating DB process parameters. The developed empirical relationship was optimized using particle swarm optimization (PSO). The optimized value discovered through PSO was compared with the response surface methodology (RSM). The joints produced using bonding pressure of 14 MPa, bonding temperature of 900°C and holding time of 70 min exhibited a maximum LSS of 150.51 MPa in comparison with other joints. This was confirmed by constructing response graphs and contour plots.
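To illustrate how PSO can search such an empirical relationship, the sketch below runs a basic particle swarm over a made-up quadratic lap-shear-strength surface in (pressure, temperature, time), centred for convenience on the optimum reported above; the surface coefficients, bounds and PSO settings are invented and do not reproduce the paper's fitted RSM model.

```python
import numpy as np

rng = np.random.default_rng(1)

def lss(x):
    """Hypothetical quadratic response surface for lap shear strength (MPa)."""
    p, t, h = x[..., 0], x[..., 1], x[..., 2]   # pressure (MPa), temperature (C), time (min)
    return 150 - 0.5 * (p - 14) ** 2 - 0.002 * (t - 900) ** 2 - 0.01 * (h - 70) ** 2

bounds = np.array([[10.0, 18.0], [850.0, 950.0], [50.0, 90.0]])
n_particles, n_iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5

pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), lss(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    val = lss(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("best parameters:", gbest.round(2), "predicted LSS:", round(float(lss(gbest)), 2))
```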
Originality/value
When the DB parameters are optimized using both RSM and PSO, PSO gives the more accurate result compared with RSM. Also, a sensitivity analysis is carried out to identify the most influential parameter for the DB process.
J. Rajakumar, P. Saikrishnan and A. Chamkha
Abstract
Purpose
The purpose of this paper is to consider axisymmetric mixed convection flow of water over a sphere with variable viscosity and Prandtl number and an applied magnetic field.
Design/methodology/approach
The non-similar solutions have been obtained from the origin of the streamwise co-ordinate to the point of zero skin friction using quasilinearization technique with an implicit finite-difference scheme.
Findings
The effect of M is not notable on the temperature and heat transfer coefficient when λ is large. The skin friction coefficient and velocity profile are enhanced with the increase of the MHD parameter M when λ is small. Viscous dissipation has no significant effect on the skin friction coefficient under the MHD effect. For M = 1, the movement of the slot, slot suction and slot injection have no effect on flow separation. The slot suction and the movement of the slot in the downstream direction delay the point of zero skin friction for M = 0.
Originality/value
The present results are original and new for water boundary-layer flow over a sphere in mixed convection with MHD effect and non-uniform mass transfer. This study would therefore be useful for analysing the skin friction and heat transfer coefficients on a sphere in mixed convection water boundary-layer flow with the MHD effect.
Asha Sukumaran and Thomas Brindha
Abstract
Purpose
Humans are gifted with the ability to recognize others by their uniqueness, together with other demographic characteristics such as ethnicity (or race), gender and age. Over the decades, a vast number of researchers in the psychological, biological and cognitive sciences have explored how the human brain characterizes, perceives and memorizes faces, and certain computational advancements have been developed to gain several insights into this issue.
Design/methodology/approach
This paper intends to propose a new race detection model using face shape features. The proposed model includes two key phases, namely (a) feature extraction and (b) detection. Feature extraction is the initial stage, where the face color and shape-based features are mined: specifically, maximally stable extremal regions (MSER) and speeded-up robust features (SURF) are extracted as shape features, and dense color features are extracted as the color feature. Since the extracted features are huge in dimension, they are reduced under the principal component analysis (PCA) approach, the strongest model for solving the "curse of dimensionality". Then, the dimension-reduced features are subjected to a deep belief neural network (DBN), where the race gets detected. Further, to make the proposed framework more effective with respect to prediction, the weight of the DBN is fine-tuned with a new hybrid algorithm referred to as the lion mutated and updated dragon algorithm (LMUDA), which is the conceptual hybridization of the lion algorithm (LA) and the dragonfly algorithm (DA).
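The feature-extraction and dimensionality-reduction stages can be pictured with off-the-shelf components, as in the sketch below using OpenCV's MSER detector, a simple per-channel colour histogram and scikit-learn PCA on synthetic images; SURF (available only in opencv-contrib) and the DBN classifier with LMUDA weight tuning are omitted, so this is only an illustrative stand-in for the paper's pipeline.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def face_features(bgr_image):
    """Simple shape (MSER) and dense colour descriptors for one face image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    regions, _ = cv2.MSER_create().detectRegions(gray)
    areas = [len(r) for r in regions] or [0]
    shape_part = [len(regions), float(np.mean(areas)), float(np.max(areas))]
    # Dense colour feature: per-channel histogram over the whole image.
    color_part = [np.histogram(bgr_image[:, :, c], bins=8, range=(0, 256))[0]
                  for c in range(3)]
    return np.hstack([shape_part, *color_part]).astype(float)

# Synthetic stand-in images; real input would be cropped face photographs.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (96, 96, 3), dtype=np.uint8) for _ in range(30)]
features = np.stack([face_features(img) for img in images])

# Dimensionality reduction before the (not shown) DBN classifier.
reduced = PCA(n_components=10).fit_transform(features)
print(reduced.shape)   # (30, 10)
```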
Findings
The performance of the proposed work is compared with other state-of-the-art models in terms of accuracy and error performance. Moreover, LMUDA attains high accuracy at the 100th iteration with 90% training, which is 11.1, 8.8, 5.5 and 3.3% better than the performance when the learning percentage (LP) = 50%, 60%, 70% and 80%, respectively. More particularly, the performance of the proposed DBN + LMUDA is 22.2, 12.5 and 33.3% better than the traditional classifiers DCNN, DBN and LDA, respectively.
Originality/value
This paper achieves the objective of detecting human race from faces. Particularly, MSER and SURF features are extracted as shape features, and dense color features are extracted as the color feature. As a novelty, to make the race detection more accurate, the weight of the DBN is fine-tuned with a new hybrid algorithm referred to as LMUDA, which is the conceptual hybridization of LA and DA.