Search results
1 – 10 of 101
R.V. Maheswari, B. Vigneshwaran and L. Kalaivani
Abstract
Purpose
The purpose of this paper is to investigate the condition of insulation in high-voltage equipment using partial discharge (PD) measurements. It proposes methods to eliminate several types of noise, such as white noise, random noise and discrete spectral interference, which severely pollute PD signals. The study aims to remove this noise from the PD signal effectively while preserving the signal features.
Design/methodology/approach
This paper employs the fast Fourier transform, the discrete wavelet transform and the translation-invariant wavelet transform (TIWT) for denoising PD signals. A simulated damped exponential pulse and a damped oscillatory pulse with low- and high-level noise, together with a measured PD signal, are considered for this analysis. The conventional wavelet denoising approach is also improved by estimating an automated global optimum threshold value using a genetic algorithm (GA). The statistical parameters are evaluated and compared. Among these methods, the GA-based TIWT approach provides robustness and reduces computational complexity.
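As one illustration of the wavelet side of this pipeline, the following sketch applies soft-threshold wavelet denoising to a simulated damped oscillatory pulse. It assumes the PyWavelets package, and the universal threshold used here is a simple stand-in for the paper's GA-optimised global threshold and its translation-invariant (cycle-spinning) variant.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with a universal threshold.

    Illustrative stand-in for the paper's GA-optimised,
    translation-invariant thresholding.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(signal)]

# Example: a damped oscillatory PD-like pulse buried in white noise.
t = np.linspace(0, 1e-5, 1024)
pulse = np.exp(-t / 2e-6) * np.sin(2 * np.pi * 1e6 * t)
noisy = pulse + 0.2 * np.random.randn(t.size)
clean = wavelet_denoise(noisy)
```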
Findings
This paper provides effective condition monitoring of power apparatus using the GA-based TIWT approach. The method yields a low mean square error and low pulse amplitude distortion, as well as a high reduction in noise level, owing to its robustness and reduced computational complexity. It suggests that the approach works well both for signals immersed in noise and for noise immersed in signals.
Research limitations/implications
Because of the chosen PD signals, the research results may not extend to multiple discharges. Therefore, researchers are encouraged to test the proposed propositions further.
Practical implications
The paper includes implications for the development of online testing for equipment analysis and diagnostics during normal operating conditions. Corrective actions can be planned and implemented, resulting in reduced unscheduled downtime.
Social implications
Because PD-based indications often appear well in advance of insulation failure, asset managers can monitor them over time and make informed strategic decisions regarding the repair or replacement of equipment. These predictive diagnostics help society prioritize investments before an unexpected outage occurs.
Originality/value
This paper provides an enhanced study of condition monitoring of HV power apparatus, by which the lifetime of insulation can be increased through preventive measures.
Anan Zhang, Cong He, Maoyi Sun, Qian Li, Hong Wei Li and Lin Yang
Abstract
Purpose
Noise abatement is one of the key techniques for partial discharge (PD) online measurement and monitoring. However, enhancing the efficiency of PD signal noise suppression is challenging. Hence, this study aims to improve the efficiency of PD signal noise abatement.
Design/methodology/approach
In this approach, the time–frequency characteristics of the PD signal are obtained from the fast kurtogram and the S-transform time–frequency spectrum, and these characteristics are used to optimize the parameters of the over-complete dictionary used for signal matching. Subsequently, matching atoms are selected self-adaptively when Matching Pursuit (MP) is used to analyze PD signals, so that little of the noise component is represented in the sparse decomposition.
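A minimal sketch of greedy matching pursuit over a small damped-sinusoid dictionary follows, showing how a sparse decomposition keeps the pulse while leaving most of the noise unrepresented. The dictionary and pulse here are illustrative assumptions, not the kurtogram/S-transform-optimised dictionary of the paper.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy MP: repeatedly pick the unit-norm dictionary atom most
    correlated with the residual and subtract its contribution."""
    residual = signal.copy()
    approx = np.zeros_like(signal)
    for _ in range(n_atoms):
        correlations = dictionary @ residual        # atoms are rows
        k = np.argmax(np.abs(correlations))
        coeff = correlations[k]
        approx += coeff * dictionary[k]
        residual -= coeff * dictionary[k]
    return approx

# Illustrative dictionary: damped sinusoid atoms at several frequencies/decays.
n = 512
t = np.arange(n) / n
atoms = []
for f in (5, 10, 20, 40):
    for tau in (0.05, 0.1, 0.2):
        a = np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
        atoms.append(a / np.linalg.norm(a))
D = np.array(atoms)

pd_pulse = np.exp(-t / 0.1) * np.sin(2 * np.pi * 20 * t)
noisy = pd_pulse + 0.3 * np.random.randn(n)
denoised = matching_pursuit(noisy, D, n_atoms=5)
```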
Findings
De-noising of PD signals was achieved efficiently. Simulation and experimental results show that the proposed method has good adaptability and a significant noise abatement effect compared with Empirical Mode Decomposition, wavelet thresholding and global sparse decomposition with MP.
Originality/value
A self-adaptive noise abatement method based on sparse signal representation and the MP algorithm is proposed to improve the efficiency of PD signal noise suppression, which is significant for online PD measurement.
Hanuman Reddy N., Amit Lathigara, Rajanikanth Aluvalu and Uma Maheswari V.
Abstract
Purpose
Cloud computing (CC) refers to the use of virtualization technology to share computing resources through the internet. Task scheduling (TS) is used to assign computational resources to requests with a high volume of pending processing. CC relies on load balancing to ensure that resources such as servers and virtual machines (VMs) running on real servers share the same amount of load. VMs are an important part of virtualization, where physical servers are transformed into VMs and act as physical servers during the process. A user’s request or data transmission in a cloud data centre may cause a VM to become under- or overloaded with data.
Design/methodology/approach
With a large number of VMs or jobs, conventional scheduling has a long makespan and becomes very difficult, so an approach that balances cloud loads without increasing implementation time or resource consumption is needed. In this research, equilibrium optimization is first used to cluster the VMs into underloaded and overloaded VMs. Underloaded VMs are then used to improve load balance and resource utilization in the second stage. A hybrid of the BAT and artificial bee colony (ABC) algorithms performs TS using a multi-objective-based system. The VM manager makes VM migration decisions to provide load balance among physical machines (PMs). When one PM is overburdened and another is underburdened, the decision to migrate VMs is made under the appropriate conditions, achieving balanced load and reduced energy usage in the PMs. Manta ray foraging (MRF) is used to migrate VMs, and its decisions are based on a variety of factors.
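A minimal sketch of the under/overloaded classification and migration step described above, on hypothetical PM and VM load figures; a plain threshold rule and a largest-VM-first move stand in for the equilibrium optimization clustering, the BAT–ABC scheduler and the MRF migration decision.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhysicalMachine:
    name: str
    capacity: float                       # total CPU capacity (arbitrary units)
    vm_loads: List[float] = field(default_factory=list)

    @property
    def utilisation(self) -> float:
        return sum(self.vm_loads) / self.capacity

def classify(pms, low=0.3, high=0.8):
    """Split PMs into under- and overloaded sets with a threshold rule
    (stand-in for the equilibrium-optimization clustering)."""
    under = [p for p in pms if p.utilisation < low]
    over = [p for p in pms if p.utilisation > high]
    return under, over

def migrate(pms, low=0.3, high=0.8):
    """Move the largest VM from each overloaded PM to the least-loaded
    underloaded PM (stand-in for the MRF-based migration decision)."""
    under, over = classify(pms, low, high)
    for src in over:
        if not under:
            break
        dst = min(under, key=lambda p: p.utilisation)
        vm = max(src.vm_loads)
        src.vm_loads.remove(vm)
        dst.vm_loads.append(vm)

pms = [PhysicalMachine("pm1", 100, [50, 45]), PhysicalMachine("pm2", 100, [10])]
migrate(pms)
print([(p.name, round(p.utilisation, 2)) for p in pms])
```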
Findings
The proposed approach provides the best possible scheduling for both VMs and PMs. For task completion, the improved whale optimization algorithm for cloud TS takes 42 s, the enhanced multi-verse optimizer 48 s, hybrid electro search with a genetic algorithm 50 s, adaptive benefit factor-based symbiotic organisms search 38 s and, finally, the proposed model 30 s, which shows the better performance of the proposed model.
Originality/value
A user’s request or data transmission in a cloud data centre may cause the VMs to become under- or overloaded with data. To identify the load on the VMs, the EQ algorithm is initially used for the clustering process. The hybrid BAT–ABC algorithm is then implemented to assess how well the proposed method works when the system is very busy. After the TS process, VM migration occurs at the final stage, where the optimal VM is identified using the MRF algorithm. The experimental analysis is carried out using various metrics, such as execution time, transmission time, makespan over various iterations, resource utilization and load fairness. Load fairness is determined from the system load and depends on how long each task takes to complete; a cloud system can achieve greater load fairness when tasks finish in less time.
B. Maheswari and Rajganesh Nagarajan
Abstract
Purpose
A new Chatbot system is implemented to provide both voice-based and text-based communication to address student queries without delay. Initially, the input texts are gathered from the chat and fed to pre-processing techniques such as tokenization, stemming and stop-word removal. The pre-processed data are then given to natural language processing (NLP) for feature extraction, where XLNet and Bidirectional Encoder Representations from Transformers (BERT) are utilized to extract the features. From these extracted features, target-based fused feature pools are obtained. Intent detection is then carried out to extract answers related to the user queries via Enhanced 1D Convolutional Neural Networks with Long Short-Term Memory (E1DCNN-LSTM), where the parameters are optimized using Position Averaging of Binary Emperor Penguin Optimizer with Colony Predation Algorithm (PA-BEPOCPA). Finally, the answers are extracted based on the intent of a particular student’s teaching materials, such as video, image or text. The implementation results are analyzed against different recently developed Chatbot detection models to validate the effectiveness of the newly developed model.
Design/methodology/approach
A smart NLP-based model is developed to help educational institutions enable easy interaction between students and teachers, with highly accurate predictions for a given query. This research work aims to design a new educational Chatbot to assist the teaching-learning process using NLP. The input data are gathered from the user through chats and given to the pre-processing stage, where tokenization, stemming and stop-word removal are applied. The output of the pre-processing stage is given to the feature extraction phase, where XLNet and BERT are used. The features from XLNet and BERT are given to a target-based fused feature pool, and the best features are selected using the developed PA-BEPOCPA to maximize the correlation coefficient. The selected features are given to E1DCNN-LSTM to implement the educational Chatbot with high accuracy and precision.
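A minimal sketch of the feature extraction and fusion step, assuming the Hugging Face transformers package; mean pooling and simple concatenation stand in for the target-based fused feature pool, and the PA-BEPOCPA selection and E1DCNN-LSTM intent classifier are not shown.

```python
import torch
from transformers import AutoTokenizer, AutoModel

def sentence_features(text, model_name):
    """Mean-pooled transformer embedding for one chat message."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)             # (dim,)

query = "When is the next lecture on data structures?"
bert_vec = sentence_features(query, "bert-base-uncased")
xlnet_vec = sentence_features(query, "xlnet-base-cased")
fused = torch.cat([bert_vec, xlnet_vec])             # simple fused feature pool
```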
Findings
The investigation results show that the implemented model achieves a maximum accuracy 57% higher than Bidirectional Long Short-Term Memory (BiLSTM), 58% higher than the One-Dimensional Convolutional Neural Network (1DCNN), 59% higher than LSTM and 62% higher than the Ensemble model for the given dataset.
Originality/value
The prediction accuracy was high in this proposed deep learning-based educational Chatbot system when compared with various baseline works.
Kavitha D., Nandagopal R. and Uma Maheswari B.
Abstract
Purpose
The purpose of this paper is to empirically investigate the impact of board characteristics such as size, independence, busyness and duality on the extent of discretionary disclosures of listed Indian firms.
Design/methodology/approach
A disclosure index with 110 items was constructed to assess the discretionary disclosures in the annual reports of listed firms. The study measured disclosure using 1,024 firm-year observations over 8 years from 2009 to 2016. Board characteristics such as size, independence, busyness and duality have been used in the study as indicators of corporate governance.
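A minimal sketch, on entirely hypothetical data, of how an unweighted disclosure index of this kind is scored and regressed on board characteristics; the variable names and the OLS specification are illustrative and not the study's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1024                                    # firm-year observations, as in the study

# Hypothetical data: 110 binary disclosure items plus board characteristics.
items = rng.integers(0, 2, size=(n, 110))
df = pd.DataFrame({
    "disclosure_index": items.mean(axis=1),          # unweighted index in [0, 1]
    "board_size": rng.integers(5, 16, size=n),
    "independence": rng.uniform(0.2, 0.8, size=n),   # share of independent directors
    "busyness": rng.integers(0, 6, size=n),          # other directorships held
    "duality": rng.integers(0, 2, size=n),           # CEO also chairs the board
})

X = sm.add_constant(df[["board_size", "independence", "busyness", "duality"]])
model = sm.OLS(df["disclosure_index"], X).fit()
print(model.summary())
```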
Findings
The results indicate that while the proportion of independent directors positively impacts the extent of discretionary disclosures, boards with duality and the busyness of the director have a negative impact. The size of the board does not significantly impact the extent of disclosures.
Research limitations/implications
This study examines the discretionary disclosures made only in the annual reports. Future studies could examine information disclosed in other media. Moreover, this study uses an un-weighted self-constructed disclosure index, which is subject to its inherent limitations.
Originality/value
This study has examined the impact of the “busyness” of the director on the extent of disclosures. This variable has not been explored in prior studies. The significance of the variable indicates that the number of directorships held affects the efficiency with which a director performs his/her role on the board. The study reiterates the need for firms and policymakers to focus on improving board independence and to move away from leadership structures with duality.
Purva Mujumdar and J. Uma Maheswari
Abstract
Purpose
The design phase is generally characterized by two-way, multiple information exchanges/overlaps between interdependent entities. In this paper, entity is a generic term representing teams, components, activities or parameters. Existing approaches either capture only a single overlap or lack practical application in representing multiple overlaps. The beeline diagramming method (BDM) network is efficient in representing multiple overlaps for construction projects. However, it considers any entity as indivisible and cannot distinguish partial criticality of entities. In reality, the design phase in any construction project is driven on a need basis and often has numerous interruptions. Hence, there is a need to develop an alternative network analysis for BDM for interruptible execution. The paper aims to discuss these issues.
Design/methodology/approach
A pilot study is conducted to formulate hypothetical examples. Subsequently, these hypothetical BDM examples are analyzed to trace a pattern of criticality. This pattern study, together with the existing precedence diagramming method network analysis, enabled the derivation of new equations for the forward pass, backward pass and float. Finally, the proposed concepts are applied to two design cases and reviewed with design experts.
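A minimal sketch of the classical precedence-network forward and backward passes from which such new equations are derived; the activities and durations are hypothetical, and the beeline overlaps and interruptible execution handled by the paper are deliberately not modelled here.

```python
from collections import defaultdict

# Hypothetical activity durations and finish-to-start precedence links.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start/finish.
ES, EF = {}, {}
for act in ("A", "B", "C", "D"):                # topological order
    ES[act] = max((EF[p] for p in preds[act]), default=0)
    EF[act] = ES[act] + durations[act]

# Backward pass: latest finish/start, anchored at the project finish.
succs = defaultdict(list)
for act, ps in preds.items():
    for p in ps:
        succs[p].append(act)
project_end = max(EF.values())
LF, LS = {}, {}
for act in ("D", "C", "B", "A"):                # reverse topological order
    LF[act] = min((LS[s] for s in succs[act]), default=project_end)
    LS[act] = LF[act] - durations[act]

total_float = {a: LS[a] - ES[a] for a in durations}   # zero float = critical
print(total_float)                                    # {'A': 0, 'B': 2, 'C': 0, 'D': 0}
```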
Findings
The proposed network analysis for BDM is efficient for interruptible entity execution.
Practical implications
The proposed BDM network is an information-intensive network that enables the design participants to view the project holistically. Application to two distinct cases emphasizes that the concept is generic and can be applied to any project that is characterized with beelines.
Originality/value
An alternate network analysis for BDM is investigated for interruptible entity execution. This study also clarifies the related concepts – interdependency, iteration, overlaps and multiple information exchanges/linkages.
Abstract
While the rapid increase in the demand for food combined with the limited availability of cropland has forced the adoption of input-intensive farming practices to increase yields, there are serious long-term ecological implications, including degradation of biodiversity. It is increasingly recognised that ensuring agricultural sustainability under changing climatic conditions requires a change in the production system along with the necessary policies and institutional arrangements. In this context, this chapter examines whether climate-smart agriculture (CSA) can facilitate adaptation and mitigation practices by improving resource utilisation efficiency in India. Such an attempt has special significance, as the existing studies offer very limited discussion of three main aspects, viz. resource productivity, adaptation practices and mitigation strategies, in a comprehensive manner. Based on insights from the existing studies, this chapter points out that CSA can potentially make a significant contribution to enhancing resource productivity, adaptation practices, mitigation strategies and food security, especially among land-constrained farmers who are highly prone to environmental shocks. In this connection, a staggered trench irrigation structure has facilitated rainwater harvesting, local irrigation and livelihood generation in West Bengal. However, it is necessary to revisit the existing approaches to the promotion of CSA and the dissemination of information on the design of local adaptation strategies. This chapter also proposes a change in the food system from climate-sensitive agriculture to CSA through the integration of technologies, institutions and policies.
Ambaji S. Jadhav, Pushpa B. Patil and Sunil Biradar
Abstract
Purpose
Diabetic retinopathy (DR) is a central cause of blindness all over the world. DR is difficult to diagnose in its early stages, and the detection procedure can be time-consuming even for qualified experts. Nowadays, intelligent disease detection techniques are widely accepted for progress analysis and recognition of various diseases. Therefore, a computer-aided diagnosis scheme based on intelligent learning approaches is proposed for diagnosing DR effectively using a benchmark dataset.
Design/methodology/approach
The proposed DR diagnostic procedure involves four main steps: (1) image pre-processing, (2) blood vessel segmentation, (3) feature extraction, and (4) classification. Initially, the retinal fundus image is pre-processed with the help of Contrast Limited Adaptive Histogram Equalization (CLAHE) and an average filter. In the next step, blood vessel segmentation is carried out using optimized gray-level thresholding. Once the blood vessels are extracted, feature extraction is performed using the Local Binary Pattern (LBP), Texture Energy Measurement (TEM, based on Laws' texture energy) and two entropy computations, Shannon's entropy and Kapur's entropy. The collected features are fed to a Neural Network (NN) classifier with an optimized training algorithm. Both the gray-level thresholding and the NN are enhanced by the Modified Levy Updated Dragonfly Algorithm (MLU-DA), which operates to maximize the segmentation accuracy and to reduce the error between the predicted and actual outputs of the NN. Finally, this classification error demonstrates the efficiency of the proposed DR detection model.
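A minimal sketch of the pre-processing and one of the feature extractors in this pipeline, assuming OpenCV and scikit-image; the vessel segmentation, Laws' texture energy, entropy features and the MLU-DA-tuned neural network are not reproduced, and the file name is hypothetical.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def preprocess_and_describe(path, P=8, R=1):
    """CLAHE contrast enhancement, average filtering and an LBP histogram.

    Illustrative stand-in for the pipeline's pre-processing and one of its
    feature extractors; the remaining stages are not shown.
    """
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    enhanced = cv2.blur(enhanced, (3, 3))            # average filter
    lbp = local_binary_pattern(enhanced, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist                                      # feature vector for a classifier

features = preprocess_and_describe("fundus_example.png")  # hypothetical file name
```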
Findings
The overall accuracy of the proposed MLU-DA was 16.6% superior to that of conventional classifiers, and its precision was 22% better than LM-NN and 16.6% better than PSO-NN, GWO-NN and DA-NN. It is concluded that the implemented MLU-DA outperforms state-of-the-art algorithms in detecting DR.
Originality/value
This paper adopts the latest optimization algorithm, MLU-DA, combined with a neural network and optimal gray-level thresholding for detecting diabetic retinopathy. This is the first work to utilize an MLU-DA-based neural network for computer-aided diabetic retinopathy diagnosis.
Shahidha Banu S. and Maheswari N.
Abstract
Purpose
Background modelling plays an imperative role in moving-object detection, as the basis of foreground extraction during video analysis and surveillance in many real-time applications. It is usually done by background subtraction. This method relies on a mathematical model with a fixed feature as a static background, where the background image is fixed and the foreground object moves over it. This image is taken as the background model and is compared against every new frame of the input video sequence. In this paper, the authors present a renewed background modelling method for foreground segmentation. The principal objective of the work is to perform foreground object detection only in a premeditated region of interest (ROI). The ROI is calculated using the proposed algorithm, reducing and raising by half (RRH). In this algorithm, the coordinates of a circle with the frame width as the diameter are traversed to find the pixel difference. A change in pixel intensity is considered to be the foreground object, and its position is determined from the pixel location. Most techniques apply their updates to the pixels of the complete frame, which may increase the false rate; the proposed system addresses this flaw by restricting processing to the ROI (the only region where background subtraction is performed), thus extracting a correct foreground by exactly categorizing pixels as foreground and mining the precise foreground object. Broad experimental results and evaluation parameters of the proposed approach were compared against the most recent background subtraction approaches. Moreover, the efficiency of the authors’ method is analyzed in different situations to show that the method works for real-time videos as well as for videos in the 2014 change detection challenge dataset.
Design/methodology/approach
In this paper, the authors present a fresh background modelling method for foreground segmentation, performing foreground object detection only on the premeditated ROI. The region for foreground extraction is calculated using the proposed RRH algorithm. Most techniques apply their updates to the pixels of the complete frame, which may increase the false rate; the most challenging case is a slow-moving object that is absorbed into the background update too quickly to detect the foreground region. The anticipated system deals with this flaw by controlling the ROI (the only region where background subtraction is performed), thus extracting a correct foreground by exactly categorizing pixels as foreground and mining the precise foreground object.
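A minimal sketch of background subtraction restricted to a region of interest, assuming OpenCV and a hypothetical video file; a fixed rectangular ROI and simple frame differencing stand in for the RRH-derived circular ROI and the paper's update scheme.

```python
import cv2
import numpy as np

def foreground_in_roi(background, frame, roi, thresh=25):
    """Plain background subtraction restricted to a rectangular ROI:
    only pixels inside the ROI are compared against the background model."""
    x, y, w, h = roi
    bg_patch = cv2.cvtColor(background[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    fr_patch = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(fr_patch, bg_patch)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    full_mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    full_mask[y:y+h, x:x+w] = mask                  # foreground only inside ROI
    return full_mask

# Usage with a hypothetical video file: the first frame serves as the static model.
cap = cv2.VideoCapture("surveillance.avi")
ok, background = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    mask = foreground_in_roi(background, frame, roi=(100, 50, 320, 240))
```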
Findings
Originality/value
The algorithm used in this work was proposed by the authors and is used for the experimental evaluations.
Uma Maheswari Devi Parmata, Sankara Rao B. and Rajashekhar B.
Abstract
Purpose
The aim of this paper is to contribute to the services marketing literature by developing a scale based on Parasuraman’s SERVQUAL scale for the measurement of distributor perceived service quality at the distributor–manufacturer interface of the pharmaceutical supply chain.
Design/methodology/approach
Based on a literature review and discussions with experts, a questionnaire was designed around the widely used service quality measurement scale (SERVQUAL). A personal survey was conducted among selected distributors spread over three major cities of the Indian pharmaceutical market. The study used exploratory factor analysis to identify the critical factors of service quality, followed by confirmatory factor analysis (AMOS 20).
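A minimal sketch of the exploratory factor analysis step on hypothetical Likert-scale responses, using scikit-learn; the actual item pool, sample and the confirmatory factor analysis performed in AMOS 20 are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical Likert-scale responses: 200 distributors x 13 questionnaire items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 13)).astype(float)

# Exploratory factor analysis with four factors, mirroring the four
# dimensions reported (reliability, assurance, responsiveness, communication).
scaled = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
scores = fa.fit_transform(scaled)        # factor scores per respondent
loadings = fa.components_.T              # item-by-factor loading matrix (13 x 4)
```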
Findings
A valid scale with four dimensions (reliability, assurance, responsiveness and communication) and 13 items for measuring distributor-perceived service quality was developed, and it satisfied all the reliability and validity tests. The findings of the present study indicate that distributor-perceived service quality has an effect on satisfaction.
Practical implications
The proposed scale is an attempt to explore a less researched area. This study will give researchers further insight into measuring service quality at different phases of the pharmaceutical supply chain. The study is limited to three cities; it can be extended to other regions of the country. This study will help practicing managers measure service quality and improve performance in the pharmaceutical supply chain.
Social implications
Service quality in the pharmaceutical supply chain is very important, as it directly affects people's health, so the proposed scale can be used to control the quality of service.
Originality/value
The scale developed in this study can also be used for measuring distributor perceived service quality in other manufacturing sectors. This research provides direction and scope for further research to develop new concepts and models in measuring service quality in the supply chain.