Swetha K., P.V.Y. Jayasree and Vijay Saradhi
Abstract
Purpose
The purpose of this paper is to provide a miniaturized antenna design for a mobile phone. The second design is an eight-port 2 × 4 element multiple-input multiple-output (MIMO) antenna with a substrate size of 20 × 40 mm², which fits easily within smartphone handsets for fifth-generation (5G) technology.
Design/methodology/approach
In this work, the first design is a conventional patch antenna whose dual-band characteristics are obtained by characteristic mode analysis. In this method, orthogonal modes are derived for 28 GHz and 38 GHz, and both orthogonal modes are then excited by the finite integration technique full-wave method with a 50 Ω single-port coaxial feed line.
Findings
In this configuration, improved return loss, high gain, wider bandwidths and a low envelope correlation coefficient (ECC) are obtained, as evaluated with the full-wave Computer Simulation Technology (CST) Studio Suite.
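For a two-element pair, the ECC reported by such simulations can be approximated from the S-parameters under a lossless-antenna assumption. The sketch below uses this standard approximation; the S-parameter values are purely illustrative and are not taken from the paper.

```python
import numpy as np

def ecc_from_sparams(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port antenna from
    S-parameters (standard lossless-antenna approximation)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

# Illustrative values for a well-matched, well-isolated element pair
s11, s22 = 0.1 + 0.05j, 0.12 - 0.03j
s21 = s12 = 0.02 + 0.01j
ecc = ecc_from_sparams(s11, s21, s12, s22)
print(ecc)  # small value, far below the usual 0.5 MIMO threshold
```

For good MIMO diversity performance, ECC values well below 0.5 are typically sought.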
Originality/value
In this arrangement, the performance metrics of the antenna are analyzed using the electromagnetic simulator CST Studio Suite.
Ersin Bahar and Gurhan Gurarslan
Abstract
Purpose
The purpose of this study is to introduce a new numerical scheme with no stability condition and high-order accuracy for the solution of two-dimensional coupled groundwater flow and transport simulation problems with regular and irregular geometries and compare the results with widely acceptable programs such as Modular Three-Dimensional Finite-Difference Ground-Water Flow Model (MODFLOW) and Modular Three-Dimensional Multispecies Transport Model (MT3DMS).
Design/methodology/approach
The newly proposed numerical scheme is based on the method of lines (MOL) approach and uses high-order approximations in both space and time. Quintic B-spline (QBS) functions are used in space to transform the partial differential equations representing the relevant physical phenomena into a system of ordinary differential equations. This system is then solved with the DOPRI5 algorithm, which requires no stability condition. The obtained results are compared with those of the MODFLOW and MT3DMS programs to verify the accuracy of the proposed scheme.
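The MOL approach can be sketched on a simple 1-D diffusion problem: discretize in space, then hand the resulting ODE system to a Dormand-Prince integrator (SciPy's "RK45" method is the DOPRI5 5(4) pair). The central-difference discretization below stands in for the paper's quintic B-spline collocation, and the test problem is illustrative, not one of the paper's groundwater cases.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-D diffusion u_t = D * u_xx with u = 0 at both ends,
# semi-discretized by central differences (MOL).
D, n = 0.1, 51
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u0 = np.sin(np.pi * x)  # initial condition

def rhs(t, u):
    du = np.zeros_like(u)
    du[1:-1] = D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return du  # boundary nodes stay fixed

# "RK45" is the Dormand-Prince 5(4) embedded pair, i.e. DOPRI5.
sol = solve_ivp(rhs, (0.0, 1.0), u0, method="RK45", rtol=1e-8, atol=1e-10)

# Exact solution: u(x, t) = exp(-D * pi^2 * t) * sin(pi * x)
exact = np.exp(-D * np.pi**2 * 1.0) * np.sin(np.pi * x)
err = np.max(np.abs(sol.y[:, -1] - exact))
print(err)  # dominated by the O(dx^2) spatial error
```

The adaptive step-size control of DOPRI5 is what removes the explicit stability condition a fixed-step explicit scheme would impose.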
Findings
The results indicate that the proposed numerical scheme can successfully simulate the two-dimensional coupled groundwater flow and transport problems with complex geometry and parameter structures. All the results are in good agreement with the reference solutions.
Originality/value
To the best of the authors' knowledge, the QBS-DOPRI5 method is used for the first time for solving two-dimensional coupled groundwater flow and transport problems with complex geometries and can be extended to high-dimensional problems. In the future, considering the success of the proposed numerical scheme, it can be used successfully for the identification of groundwater contaminant source characteristics.
Kalyan Nagaraj, Biplab Bhattacharjee, Amulyashree Sridhar and Sharvani GS
Abstract
Purpose
Phishing is one of the major threats affecting businesses worldwide in current times. Organizations and customers face the hazards arising out of phishing attacks because of anonymous access to vulnerable details. Such attacks often result in substantial financial losses. Thus, there is a need for effective intrusion detection techniques to identify and possibly nullify the effects of phishing. Classifying phishing and non-phishing web content is a critical task in information security protocols, and foolproof mechanisms have yet to be implemented in practice. The purpose of the current study is to present an ensemble machine learning model for classifying phishing websites.
Design/methodology/approach
A publicly available data set comprising 10,068 instances of phishing and legitimate websites was used to build the classifier model. Feature extraction was performed by deploying a group of methods, and the relevant features extracted were used for building the model. A twofold ensemble learner was developed by feeding the results of a random forest (RF) classifier into a feedforward neural network (NN). Performance of the ensemble classifier was validated using k-fold cross-validation. The twofold ensemble learner was implemented as a user-friendly, interactive decision support system for classifying websites as phishing or legitimate.
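The twofold ensemble idea of feeding RF outputs into a feedforward NN can be sketched as follows. The synthetic data set, feature counts and hyperparameters are assumptions for illustration, not the study's 10,068-instance data or tuned configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the phishing/legitimate website data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: random forest produces class-probability features.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
p_tr, p_te = rf.predict_proba(X_tr), rf.predict_proba(X_te)

# Stage 2: feedforward NN consumes RF probabilities alongside raw features.
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
nn.fit(np.hstack([X_tr, p_tr]), y_tr)
acc = nn.score(np.hstack([X_te, p_te]), y_te)
print(f"RF_NN test accuracy: {acc:.3f}")
```

Stacking the RF probabilities as extra inputs lets the NN learn where to trust or correct the first-stage classifier.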
Findings
Experimental simulations were performed to assess and compare the performance of the ensemble classifiers. The statistical tests estimated that the RF_NN model gave superior performance, with an accuracy of 93.41 per cent and a minimal mean squared error of 0.000026.
Research limitations/implications
The research data set used in this study is publicly available and easy to analyze. Comparative analysis with other real-time data sets of recent origin must be performed to ensure generalization of the model against various security breaches. Different variants of phishing threats must be detected rather than focusing solely on phishing website detection.
Originality/value
The twofold ensemble model is not applied for classification of phishing websites in any previous studies as per the knowledge of authors.
Virendra Kumar Verma, Sachin S. Kamble, L. Ganapathy and Pradeep Kumar Tarei
Abstract
Purpose
The purpose of this study is to identify, analyse and model the post-processing barriers of 3D-printed medical models (3DPMM) printed by fused deposition modelling to overcome these barriers for improved operational efficiency in the Indian context.
Design/methodology/approach
The methodology used interpretive structural modelling (ISM), cross-impact matrix multiplication applied to classification (MICMAC) analysis and the decision-making trial and evaluation laboratory (DEMATEL) method to understand the hierarchical and contextual relations among the post-processing barriers.
Findings
A total of 11 post-processing barriers were identified in this study using ISM, literature review and experts’ input. The MICMAC analysis identified support material removal, surface finishing, cleaning, inspection and issues with quality consistency as significant driving barriers for post-processing. MICMAC also identified linkage barriers as well as dependent barriers. The ISM digraph model was developed using a final reachability matrix, which would help practitioners specifically tackle post-processing barriers. Further, the DEMATEL method allows practitioners to emphasize the causal effects of post-processing barriers and guides them in overcoming these barriers.
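The MICMAC classification above derives each barrier's driving and dependence power from the final reachability matrix: row sums give driving power, column sums give dependence power, and the two together place each barrier in a quadrant. A minimal sketch with a hypothetical five-barrier matrix (the paper's actual matrix covers 11 barriers and is not reproduced here):

```python
import numpy as np

# Toy final reachability matrix: entry [i, j] = 1 means barrier i
# drives (reaches) barrier j. Values are hypothetical.
R = np.array([
    [1, 1, 1, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 0, 1],
    [0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
])

driving = R.sum(axis=1)     # how many barriers each one drives
dependence = R.sum(axis=0)  # how many barriers drive it

# MICMAC quadrants: high driving / low dependence -> driver (independent);
# high / high -> linkage; low / high -> dependent; low / low -> autonomous.
mid = R.shape[0] / 2
for i, (d, p) in enumerate(zip(driving, dependence)):
    quad = ("driver" if d > mid and p <= mid else
            "linkage" if d > mid and p > mid else
            "dependent" if p > mid else "autonomous")
    print(f"barrier {i}: driving={d}, dependence={p} -> {quad}")
```

Driver barriers are the ones practitioners should tackle first, since resolving them propagates relief down the ISM hierarchy.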
Research limitations/implications
A few post-processing barriers may have been overlooked by the Indian experts that could be important from other countries' perspectives.
Practical implications
The presented ISM model and DEMATEL provide directions for operation managers in planning operational strategies for overcoming post-processing issues in the medical 3D-printing industry. Also, managers may formulate operational strategies based on the driving and dependence power of post-processing barriers as well as the causal effects relationships of the barriers.
Originality/value
This study contributes to identifying, analyzing and modelling the post-processing barriers of 3DPMM through a combined ISM and DEMATEL methodology, which has not yet been reviewed. This study also contributes to decision makers developing suitable strategies to overcome the post-processing barriers for improved operational efficiency.
Preeti Godabole and Girish Bhole
Abstract
Purpose
The main purpose of the paper is the timing analysis of mixed critical applications on multicore systems to identify an efficient task scheduling mechanism that achieves three main objectives: improving schedulability, achieving reliability and minimizing the number of cores used. The rise in transient faults in embedded systems due to the use of low-cost processors has led to the use of fault-tolerant scheduling and mapping techniques.
Design/methodology/approach
The paper opted for a simulation-based study. The simulation of mixed critical applications, such as air traffic control systems and synthetic workloads, is carried out using the LITMUS^RT real-time testbed on an Ubuntu machine. Heuristic algorithms for task allocation based on utilization factors and task criticalities are proposed for partitioned approaches with multiple objectives.
Findings
Both partitioned earliest deadline first (EDF) with the utilization-based heuristic and EDF with virtual deadlines (EDF-VD) with the criticality-based heuristic perform well: they schedule the air traffic system with a 98% success ratio (SR) using only three processor cores, with transient faults handled by active backups of the tasks. With synthetic task loads, the proposed criticality-based heuristic works well with EDF-VD, achieving an SR of 94%. The proposed heuristics are validated against both global and partitioned scheduling approaches, considering active backups to make the system reliable. SR improves by 11% compared with the global approach and by 17% compared with the partitioned fixed-priority approach, with only three processor cores being used.
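A utilization-based allocation heuristic of the kind evaluated above can be sketched as first-fit-decreasing bin packing over task utilizations; under partitioned EDF, each core remains schedulable while its total utilization stays at or below 1. The task set and the per-core bound below are illustrative assumptions, not the paper's air-traffic workload.

```python
# First-fit-decreasing partitioning by utilization (a sketch of a
# utilization-based allocation heuristic for partitioned EDF).
def partition(tasks, n_cores):
    """tasks: list of (name, wcet, period); returns (core lists, loads)."""
    cores = [[] for _ in range(n_cores)]
    load = [0.0] * n_cores
    # Place heaviest tasks first: utilization u = wcet / period.
    for name, wcet, period in sorted(tasks, key=lambda t: -(t[1] / t[2])):
        u = wcet / period
        for c in range(n_cores):
            if load[c] + u <= 1.0:  # per-core EDF schedulability bound
                cores[c].append(name)
                load[c] += u
                break
        else:
            raise ValueError(f"task {name} does not fit on {n_cores} cores")
    return cores, load

tasks = [("T1", 2, 10), ("T2", 3, 5), ("T3", 4, 8),
         ("T4", 1, 4), ("T5", 6, 12), ("T6", 1, 20)]
cores, load = partition(tasks, 3)
print(cores, [round(l, 2) for l in load])
```

Criticality-based variants would sort or reserve capacity by task criticality instead of raw utilization, and fault tolerance would additionally place an active backup of each task on a different core.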
Research limitations/implications
The simulations of mixed critical tasks are carried out on a Linux-based real-time kernel and are generalizable to Linux-based environments.
Practical implications
The rise in transient faults in embedded systems due to the use of low-cost processors has led to the use of fault-tolerant scheduling and mapping techniques.
Originality/value
This paper fulfills an identified need to have multi-objective task scheduling in a mixed critical system. The timing analysis helps to identify performance risks and assess alternative architectures used to achieve reliability in terms of transient faults.
Fátima García-Martínez, Diego Carou, Francisco de Arriba-Pérez and Silvia García-Méndez
Abstract
Purpose
Material extrusion is one of the most commonly used approaches among the available additive manufacturing processes. Despite its popularity and related technical advancements, process reliability and quality assurance remain only partially solved. In particular, the surface roughness caused by this process is a key concern. To overcome this constraint, experimental plans have been exploited to optimize surface roughness in recent years. However, this empirical trial-and-error process is extremely time- and resource-consuming. Thus, this study aims to avoid using large experimental programs to optimize surface roughness in material extrusion.
Design/methodology/approach
This research provides an in-depth analysis of the effect of several printing parameters: layer height, printing temperature, printing speed and wall thickness. The proposed data-driven predictive modeling approach takes advantage of Machine Learning (ML) models to automatically predict surface roughness based on the data gathered from the literature and the experimental data generated for testing.
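Such a data-driven approach can be sketched as a regression model cross-validated over the four printing parameters. Everything below, including the synthetic data, the assumed roughness relation and the random forest choice, is illustrative rather than the paper's actual data or model selection.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-in data (the real study pools literature + lab data).
rng = np.random.default_rng(0)
n = 300
layer_height = rng.uniform(0.1, 0.3, n)  # mm
temperature = rng.uniform(190, 230, n)   # degrees C
speed = rng.uniform(30, 90, n)           # mm/s
wall = rng.uniform(0.8, 2.0, n)          # mm
X = np.column_stack([layer_height, temperature, speed, wall])
# Assumed (not real) relation: roughness grows with layer height and speed.
Ra = 40 * layer_height + 0.05 * speed + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, Ra,
                         cv=KFold(10, shuffle=True, random_state=0))
corr = np.corrcoef(Ra, pred)[0, 1]
mape = np.mean(np.abs((Ra - pred) / Ra)) * 100
print(f"correlation={corr:.2f}, MAPE={mape:.1f}%")
```

Cross-validated correlation and mean absolute percentage error are the same two metrics the study reports, which makes this structure a natural template for reproducing its evaluation.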
Findings
Using ten-fold cross-validation of data gathered from the literature, the proposed ML solution attains a 0.93 correlation with a mean absolute percentage error of 13%. When testing with the authors' own data, the correlation drops to 0.79, while the mean absolute percentage error improves to 8%. Thus, the solution for predicting surface roughness in extrusion-based printing offers competitive results given the variability of the analyzed factors.
Research limitations/implications
There are limitations in obtaining large volumes of reliable data, and the variability of the material extrusion process is relatively high.
Originality/value
Although ML is not a novel methodology in additive manufacturing, the use of published data from multiple sources has barely been exploited to train predictive models. As available manufacturing data continue to increase on a daily basis, the ability to learn from these large volumes of data is critical in future manufacturing and science. Specifically, the power of ML helps model surface roughness with limited experimental tests.
Dinesh Kumar Anguraj, S. Balasubramaniyan, E. Saravana Kumar, J. Vakula Rani and M. Ashwin
Abstract
Purpose
The purpose of the research is to concentrate on the most important smart metropolitan applications: smart living, smart security and smart sustainability. Among these, power management and security are the most important problems in the current metropolitan context.
Design/methodology/approach
A smart metropolitan area utilizes recent innovative technologies to improve living, security and sustainability. The aim of this study is to recognize and resolve the difficulties in metropolitan-area applications.
Findings
The main aim of this study is to reduce the metropolitan area's primary energy consumption, recharge electric vehicles and increase the lifetime of smart street lights.
Originality/value
The hybrid renewable-energy street light applies smart solutions to infrastructure and facilities in rural and metropolitan areas to improve them. This study applies smart metropolitan street lighting, powered by renewable solar and wind-turbine energy, to existing areas. In future, the smart street light work can be extended elsewhere.
Hima Bindu Valiveti, Anil Kumar B., Lakshmi Chaitanya Duggineni, Swetha Namburu and Swaraja Kuraparthi
Abstract
Purpose
Road accidents, as inadvertent mishaps, can be detected automatically and alerts sent instantly through the collaboration of image processing techniques and on-road video surveillance systems. However, relying exclusively on visual information, especially under adverse conditions such as night time, dark areas and unfavourable weather (snowfall, rain and fog) that reduce visibility, leads to uncertainty. The main goal of the proposed work is certainty about accident occurrence.
Design/methodology/approach
The authors of this work propose a method for detecting road accidents by analyzing audio signals to identify hazardous situations such as tire skidding and car crashes. The motive of this project is to build a simple and complete audio event detection system using signal feature extraction methods to improve its detection accuracy. The experimental analysis is carried out on a publicly available real-time data set consisting of audio samples such as car crashes and tire skidding. Temporal features of the recorded audio signal, namely energy, volume and zero crossing rate (ZCR); spectral features, namely spectral centroid, spectral spread, spectral roll-off factor and spectral flux; and psychoacoustic features, namely energy sub-band ratio and gammatonegram, are computed. The extracted features are pre-processed, then trained and tested using support vector machine (SVM) and k-nearest neighbour (KNN) classification algorithms for accurate prediction of accident occurrence over various SNR ranges. The combination of the gammatonegram with temporal and spectral features proves superior to existing detection techniques.
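Two of the features named above, the zero crossing rate and the spectral centroid, can be sketched directly in NumPy. The pure test tone is illustrative; the study works with real crash and skid recordings.

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose sign changes."""
    return np.mean(np.abs(np.diff(np.sign(x))) > 0)

def spectral_centroid(x, sr):
    """Magnitude-weighted mean frequency of the spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)  # one second of a clean 440 Hz tone

zcr = zero_crossing_rate(tone)      # roughly 2 * 440 / 16000 per sample
centroid = spectral_centroid(tone, sr)
print(zcr, centroid)
```

Impulsive events such as crashes raise broadband energy (pushing the centroid up), while skids produce sustained high-ZCR noise, which is why these features help discriminate the two classes.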
Findings
Temporal, spectral and psychoacoustic features and the gammatonegram of the recorded audio signal are extracted. A high-level feature vector is generated based on the centroid, and the extracted features are classified with machine learning algorithms such as SVM, KNN and decision tree (DT). The collected audio samples span varied SNR ranges, and the accuracy of the classification algorithms is thoroughly tested.
Practical implications
Denoising the audio samples for accurate feature extraction was a tedious chore.
Originality/value
The existing literature cites extraction of temporal and spectral features followed by the application of classification algorithms. For accurate classification, the authors construct a high-level vector from all four extracted feature sets: temporal, spectral, psychoacoustic and gammatonegram. The classification algorithms are employed on samples collected at varied SNR ranges.
Naga Swetha R, Vimal K. Shrivastava and K. Parvathi
Abstract
Purpose
The mortality rate due to skin cancers has been increasing over the past decades. Early detection and treatment of skin cancers can save lives. However, owing to the visual resemblance between normal skin and lesions and to blurred lesion borders, skin cancer diagnosis has become a challenging task even for skilled dermatologists. Hence, the purpose of this study is to present an image-based automatic approach for multiclass skin lesion classification and compare the performance of various models.
Design/methodology/approach
In this paper, the authors have presented a multiclass skin lesion classification approach based on transfer learning of deep convolutional neural networks. The following pre-trained models have been used: VGG16, VGG19, ResNet50, ResNet101, ResNet152, Xception and MobileNet, and their performances on skin cancer classification have been compared.
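The pre-trained CNNs named above require a deep learning framework, so the sketch below illustrates only the core transfer-learning idea: a frozen feature extractor plus a small trainable head. The fixed random projection standing in for the backbone and the synthetic seven-class data (in place of HAM10000) are explicit simplifications; a real pipeline would extract features with, for example, VGG16 or ResNet50.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_in, d_feat, n_classes = 700, 64, 32, 7

# "Pre-trained backbone": a frozen random projection with ReLU,
# standing in for a CNN feature extractor whose weights stay fixed.
W_backbone = rng.normal(size=(d_in, d_feat))

# Synthetic 7-class data with a class-dependent signal.
X = rng.normal(size=(n, d_in))
y = rng.integers(0, n_classes, n)
X[np.arange(n), y] += 4.0
feats = np.maximum(X @ W_backbone, 0.0)  # frozen features

# Only the lightweight classification head is trained.
head = LogisticRegression(max_iter=2000).fit(feats[:500], y[:500])
acc = head.score(feats[500:], y[500:])
print(f"held-out accuracy of the trained head: {acc:.2f}")
```

Freezing the backbone is what keeps the computational cost low: only the small head's parameters are fit to the target data set.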
Findings
The experiments have been performed on the HAM10000 dataset, which contains 10,015 dermoscopic images across seven skin lesion classes. A categorical accuracy of 83.69%, top-2 accuracy of 91.48% and top-3 accuracy of 96.19% have been obtained.
Originality/value
Early detection and treatment of skin cancer can save millions of lives. This work demonstrates that the transfer learning can be an effective way to classify skin cancer images, providing adequate performance with less computational complexity.
Abstract
Purpose
The purpose of this paper was to determine factors that could be used to create different authentication requirements for diverse online banking customers based on their risk profile. Online security remains a challenge for ensuring safe transacting on the Internet. User authentication, a human-centric process, is regarded as the basis of computer security and hence of secure access to online banking services. The increased use of technology to enforce additional actions can improve the quality of authentication, and hence online security, but often at the expense of usability.
Design/methodology/approach
A web-based survey was designed to determine online consumers' competence regarding secure online behaviour, which was used to quantify their online behaviour as more or less secure. The browsers used by consumers, as well as their demographic data, were correlated with respondents' security profiles to test for any significant variance in practice that could inform differentiated authentication.
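A correlation between browser choice and security profile of this kind can be tested, for example, with a chi-square test of independence on a contingency table. The counts below are purely hypothetical, not the survey's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table:
# rows = browser used, columns = security profile of the respondent.
#                 more secure   less secure
table = np.array([[120,          80],    # Browser A
                  [ 60,         140],    # Browser B
                  [ 90,          70]])   # Browser C

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")
# A small p-value indicates that security behaviour differs by browser,
# which would support browser-based differentiation of authentication.
```

The same test applied to demographic variables would support differentiation after individual identification.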
Findings
A statistically significant difference between behaviours based on some of the dependent variables was evident from the analysis. Based on the results, a case can be made for different authentication methods for online banking customers based both on the browser selected (before individual identification) and on demographic data (after identification) to ensure a safer online environment.
Originality/value
The research can be used by the financial services sector to improve online security, where required, without necessarily reducing usability for more “security inclined” customers.