Subhodeep Mukherjee, Ramji Nagariya, Manish Mohan Baral, Bharat Singh Patel, Venkataiah Chittipaka, K. Srinivasa Rao and U.V. Adinarayana Rao
Abstract
Purpose
The circular economy is a production and consumption model that encourages people to share, lease, reuse, repair, refurbish and recycle existing materials and products for as long as possible. The blockchain-based circular economy is being used in many industries worldwide, but Indian electronic MSMEs face many problems in adopting it. The research aims to identify the barriers electronic MSMEs face in adopting a blockchain-based circular economy, barriers that hold them back from achieving environmental sustainability in their operations.
Design/methodology/approach
Fifteen barriers are identified from the literature review and finalized with experts' opinions. These barriers are evaluated using interpretive structural modeling (ISM), MICMAC analysis and the fuzzy TOPSIS method.
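The TOPSIS ranking step used to grade barriers can be sketched as follows. This is a minimal crisp-TOPSIS sketch (the paper's fuzzy variant would first defuzzify triangular fuzzy scores into crisp values); the barrier scores and criterion weights below are hypothetical, not taken from the study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution.
    matrix: alternatives x criteria scores; benefit[j]: True if higher is better."""
    m = np.asarray(matrix, dtype=float)
    # vector-normalise each criterion column
    norm = m / np.sqrt((m ** 2).sum(axis=0))
    v = norm * np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)  # closeness coefficient in [0, 1]

# hypothetical severity scores for three barriers on two criteria
scores = [[7, 8], [9, 6], [5, 5]]
cc = topsis(scores, weights=[0.6, 0.4], benefit=[True, True])
ranking = np.argsort(-cc)  # most critical barrier first
```

The closeness coefficient orders the barriers; the barrier with the highest coefficient is flagged as most critical.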
Findings
Lack of support from distribution channels, lack of traceability mechanism and customer attitudes toward purchasing remanufactured goods are identified as the most critical barriers.
Practical implications
The study will benchmark the electronic MSMEs in achieving environmental sustainability in the blockchain-based circular economy.
Originality/value
It is a study that not only establishes a hierarchical relationship among the barriers to blockchain adoption in Indian electronic MSMEs but also verifies the results with the fuzzy TOPSIS method.
Kondapalli Siva Prasad, Chalamalasetti Srinivasa Rao and Damera Nageswara Rao
Abstract
Purpose
The purpose of this paper is to optimize the fusion zone grain size and hardness using Hooke and Jeeves Algorithm.
Design/methodology/approach
Experiments are conducted as per a four-factor, five-level central composite design matrix based on the response surface method. Empirical relations for predicting grain size and hardness are developed. The effects of the welding variables on grain size and hardness are studied. Grain size and hardness are optimized using the Hooke and Jeeves algorithm.
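The Hooke and Jeeves method named above is a derivative-free pattern search. A minimal sketch, assuming a simple quadratic stand-in for the paper's grain-size response surface (the objective and starting point are hypothetical):

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Derivative-free pattern search: explore each axis, then pattern move."""
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = list(x0)
    s = step
    for _ in range(max_iter):
        if s < tol:
            break
        new = explore(x, s)
        if f(new) < f(x):
            # pattern move: jump in the successful direction, then re-explore
            pattern = [2 * n - o for n, o in zip(new, x)]
            cand = explore(pattern, s)
            x = cand if f(cand) < f(new) else new
        else:
            s *= shrink  # no improvement: reduce the step size
    return x

# minimise a hypothetical response surface (stand-in for the grain-size model)
opt = hooke_jeeves(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [0.0, 0.0])
```

In the paper's setting, `f` would be the fitted empirical relation and the coordinates the coded welding parameters.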
Findings
The developed empirical relations can be effectively used to predict the grain size and hardness values of micro plasma arc welded Inconel 625 sheets. The grain size and hardness values obtained by the Hooke and Jeeves algorithm match the experimental values with great accuracy.
Research limitations/implications
The developed mathematical models are valid for 0.25 mm thick Inconel 625 sheets only.
Practical implications
In the present paper, only four important factors, namely peak current, back current, pulse rate and pulse width, are considered; however, one may also consider other parameters such as plasma gas flow rate and shielding gas flow rate.
Originality/value
The present work is highly useful to sheet metal industries manufacturing metal bellows, diaphragms, etc.
James S. Meka, Praveen B. Choppala, Joseph Noel Kombathula and Raj K. Kuvala
Abstract
Purpose
This paper presents an analytic report, with scientific rigor, on the positive impact of the government's welfare schemes and on the areas that need urgent public policy intervention.
Design/methodology/approach
Uddanam, in Srikakulam district of Andhra Pradesh, a conglomeration of an apportioned group of villages, has grappled with a severe and mysterious kidney disease epidemic since the 1980s, affecting agricultural communities. The region, once fondly called "Udyanam," translated as "Garden," for its richness in greenery and cashew and coconut trees, has now become "Uddanam," a land of death and despair. The residents of the region suffer from high rates of kidney failure and associated health complications owing to factors including environmental toxins and poor water quality. Despite several efforts by governments, the impact of governmental policy on improving conditions has been non-significant. The problem has been taken into serious consideration by the present Government of Andhra Pradesh, which introduced ground-breaking welfare initiatives to impede the prevalence of the disease and the deaths among patients. This paper presents an analytic report, with scientific rigor, on the positive impact of the government's welfare schemes and on the areas that need urgent public policy intervention.
Findings
This paper is the first to identify that, out of a total of 942 CKD patients interviewed uniformly at random from the Uddanam mandals, a majority of 86.06%, who belong to advanced stages, receive advanced governmental (free) medical care and soon succumb to the disease, while a minority of 13.94%, who belong to early stages of the disease, do not benefit directly from government welfare schemes and hence perpetually progress to advanced stages.
Research limitations/implications
The qualitative study conducted in this paper is not fully exhaustive; however, the samples are taken uniformly at random from the entire region of influence, which renders the results credible.
Practical implications
The key findings of this paper will provide a scientific basis for governmental and private health institutions to focus on providing sophisticated medical care for early-stage CKD patients, to further mitigate the mortality rate due to the disease in Uddanam.
Social implications
This paper will create a positive social impact for the CKD handling measures taken by governmental and private agencies, and will bring to light the most pressing issues that need immediate attention, which are of great concern to the international community and media.
Originality/value
This paper is original, and the contributions and findings presented herein have not been presented elsewhere. This paper is also the first to conjoin the impact of medical treatment for CKD at Uddanam with the use of digital technology, e.g. online consultation and online reports.
Kaladhar Gaddala and P. Sangameswara Raju
Abstract
Purpose
In general, optimal reactive power compensation can drastically enhance the performance of a distribution network by reducing power loss and by enhancing line loadability and the voltage profile. To date, various reactive power compensation models exist, including capacitor placement, the combined operation of on-load tap changers and capacitor banks, and the integration of DG. A more recent method is the allocation of distribution FACTS (DFACTS) devices. Even though DFACTS devices are usually used for power quality enhancement, they can be used for optimal reactive power compensation with greater effectiveness.
Design/methodology/approach
This paper introduces a power quality enhancement model based on a new hybrid optimization algorithm for selecting the precise unified power quality conditioner (UPQC) location and sizing. A new algorithm, rider optimization algorithm (ROA)-modified particle swarm optimization (PSO) in fitness basis (RMPF), is introduced for this optimal selection.
Findings
Through the performance analysis, it is observed that as the iterations increase, the cost function is gradually minimized. At the 40th iteration, the proposed method is 1.99 per cent better than ROA and the genetic algorithm (GA); 0.09 per cent better than GMDA and WOA; and 0.14, 0.57 and 1.94 per cent better than the Dragonfly algorithm (DA), worst solution linked whale optimization (WS-WU) and PSO, respectively. At the 60th iteration, the proposed method attains a lower cost function, which is 2.07, 0.08, 0.06, 0.09, 0.07 and 1.90 per cent superior to ROA, GMDA, DA, GA, WS-WU and PSO, respectively. Thus, the proposed model proves better than the other models.
Originality/value
This paper presents a technique for the optimal placement and sizing of UPQC. To the best of the authors' knowledge, this is the first work that introduces the RMPF algorithm to solve such optimization problems.
Loganathan Appaia, Padmanaban Muthu Krishnan and Sankaran Kalaiselvi
Abstract
Purpose
The purpose of this paper is the determination of reliability sampling plans in the Bayesian approach assuming that the lifetime distribution is exponential.
Design/methodology/approach
Sampling plans are used in manufacturing companies as a tool for carrying out sampling inspections in order to make decisions about the disposition of lots of finished products. If the quality characteristic considered is the lifetime of the products, the plan is known as a reliability sampling plan. In life testing, censoring schemes are adopted in order to save the time and cost of the life test. The inverted gamma distribution is employed as the natural conjugate prior for the average lifetime of the products. The sampling plans are developed assuming various probability distributions for the lifetime of the products.
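The mechanics of a reliability sampling plan (n, c) under the exponential lifetime assumption can be sketched as follows: a lot is accepted if at most c of n items on test fail before the censoring time t. The plan parameters and mean lives below are hypothetical, and this sketch omits the paper's Bayesian (inverted gamma prior) averaging over the mean life.

```python
from math import comb, exp

def accept_prob(n, c, t, theta):
    """P(accept): at most c of n items fail a life test of duration t,
    assuming exponential lifetimes with mean theta."""
    p = 1.0 - exp(-t / theta)  # P(a single item fails before t)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# hypothetical plan: n = 20 items on test for t = 100 h, accept if <= 2 failures
pa_good = accept_prob(20, 2, 100.0, theta=2000.0)  # high mean life -> high P(accept)
pa_bad = accept_prob(20, 2, 100.0, theta=200.0)    # low mean life -> low P(accept)
```

Evaluating this acceptance probability at the two quality levels p1 and p2 with the associated risks is what pins down the optimum (n, c).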
Findings
The optimum plans (n, c) are obtained for some sets of values of (p1, α, p2, β). The selection of sampling plans is illustrated through numerical examples.
Originality/value
The results obtained in this paper are original, and the study has been done for the first time in this regard. Reliability sampling plans are essential for making decisions to either accept or reject a lot based on the inspection of a sample.
Dharmendra B.V., Shyam Prasad Kodali and Nageswara Rao Boggarapu
Abstract
Purpose
The purpose of this paper is to adopt the multi-objective optimization technique for identifying a set of optimum abrasive water jet machining (AWJM) parameters to achieve maximum material removal rate (MRR) and minimum surface roughness.
Design/methodology/approach
Data from a few experiments as per Taguchi's orthogonal array are considered for achieving maximum MRR and minimum surface roughness (Ra) of Inconel 718. Analysis of variance is performed to understand the statistical significance of the AWJM input process parameters.
Findings
Empirical relations are developed for MRR and Ra in terms of the AWJM process parameters, and their adequacy is demonstrated through comparison with test results.
Research limitations/implications
The signal-to-noise ratio transformation should be applied to take into account the scatter in the repetition of tests in each test run. However, many researchers have adopted this transformation on a single output response of each test run, which has no added advantage other than additional computational work. This paper explains the impact of an insignificant process parameter on the selection of optimal process parameters, and it highlights the drawbacks and complexity of existing theories before resorting to new algorithms.
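The transformation under discussion can be illustrated with the two standard Taguchi S/N formulas: larger-the-better for a response such as MRR and smaller-the-better for one such as Ra. The repeated readings below are hypothetical, not taken from the paper.

```python
import math

def sn_larger_better(reps):
    """Taguchi S/N ratio (dB) for a larger-the-better response (e.g. MRR)."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in reps) / len(reps))

def sn_smaller_better(reps):
    """Taguchi S/N ratio (dB) for a smaller-the-better response (e.g. Ra)."""
    return -10.0 * math.log10(sum(y**2 for y in reps) / len(reps))

# hypothetical repeated readings from a single test run
sn_mrr = sn_larger_better([0.82, 0.85, 0.80])  # material removal rate readings
sn_ra = sn_smaller_better([1.9, 2.1, 2.0])     # surface roughness readings, microns
```

With a single reading per run the transformation is monotone in the response, which is why, as the paper argues, applying it then adds computation without changing the ranking.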
Practical implications
The Taguchi approach is quite simple and easy to apply to optimization problems, and poses no practical difficulties (if handled properly). There is no need to hunt for new algorithms to solve the multi-objective optimization of the AWJM process.
Originality/value
This paper deals with a case study, which demonstrates the simplicity of the Taguchi approach in solving multi-objective optimization problems with a small number of experiments.
Bhavya Swathi I., Suvarna Raju L. and Perumalla Janaki Ramulu
Abstract
Purpose
Friction stir processing (FSP) is overviewed with the process variables, along with the thermal aspect of different metals.
Design/methodology/approach
With its inbuilt advantages, FSP is used to reduce failures in the structural integrity of the body panels of automobiles, airplanes and lashing rails. FSP has excellent processability and surface treatability, with good corrosion resistance and high strength at elevated temperatures. Process parameters such as tool rotation speed, traverse speed, tool tilt angle, groove design, volume fraction and number of tool passes should be considered for generating a processed, defect-free surface on the workpiece.
Findings
The FSP process is used for modifying the surface by reinforcement with composites to improve the mechanical properties, and it results in ultrafine grain refinement of the microstructure. FSP uses frictional heat and mechanical deformation to achieve maximum performance using a low-cost tool, and the production time is also very short.
Originality/value
Y. Srinivasa Rao and M. Satyam
Abstract
The effects of material parameters and processing conditions on the resistance drop caused by high-voltage discharge in PVC-graphite thick film resistors are studied in this paper. The resistance drop increased with an increase in graphite aggregate size, which is a function of the material parameters and processing conditions. The resistance drop has been attributed to the dielectrophoretic motion of graphite particles in PVC upon the application of high voltages to the polymer thick film resistors.
Abstract
Purpose
The purpose of this paper is to consider the estimation of multicomponent stress-strength reliability. The system is regarded as alive only if at least s out of k (s < k) strengths exceed the stress. The reliability of such a system is obtained when the strength and stress variates follow the Erlang-truncated exponential (ETE) distribution with different shape parameters. The reliability is estimated using the maximum likelihood (ML) method of estimation when samples are drawn from the strength and stress distributions. The reliability estimators are compared asymptotically. A small-sample comparison of the reliability estimates is made through Monte Carlo simulation. Using real data sets, the authors illustrate the procedure.
Design/methodology/approach
The authors have developed multicomponent stress-strength reliability based on the ETE distribution. To estimate reliability, the parameters are estimated using the ML method.
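The s-out-of-k reliability being estimated can be sketched by Monte Carlo simulation. As an assumption of this sketch, the ETE(β, λ) distribution is replaced by a plain exponential, since its survival function has exponential form with rate β(1 − e^(−λ)); the rates and system size below are hypothetical.

```python
import random

def mc_reliability(s, k, rate_strength, rate_stress, trials=20000, seed=1):
    """Monte Carlo estimate of R(s,k) = P(at least s of k strengths exceed
    the common stress), with strengths and stress drawn as exponentials
    standing in for the ETE distribution (an assumption of this sketch)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        stress = rng.expovariate(rate_stress)
        exceed = sum(1 for _ in range(k)
                     if rng.expovariate(rate_strength) > stress)
        if exceed >= s:
            hits += 1
    return hits / trials

# hypothetical 1-out-of-3 system with strengths stochastically larger than stress
r = mc_reliability(s=1, k=3, rate_strength=0.5, rate_stress=2.0)
```

In the paper, the distribution parameters would first be replaced by their ML estimates from the strength and stress samples before evaluating R(s,k).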
Findings
The simulation results indicate that the average bias and average mean square error decrease as the sample size increases for both methods of estimation of reliability. The length of the confidence interval also decreases as the sample size increases, and the simulated actual coverage probability is close to the nominal value for all sets of parameters considered here. Using real data, the authors illustrate the estimation process.
Originality/value
This research work has been conducted independently, and the results are very useful for fresh researchers.
Rajyalakshmi K. and Nageswara Rao Boggarapu
Abstract
Purpose
Scatter in the outcome of repeated experiments is unavoidable due to measurement errors, in addition to the non-linear nature of the output responses with unknown influential input parameters. It is standard practice in the Taguchi approach to select an orthogonal array for tracing optimum input parameters by conducting a small number of experiments, and to confirm them through additional experimentation (if necessary). The purpose of this paper is to present a simple methodology, and its validation with existing test results, for finding the expected range of the output response by suggesting modifications to the Taguchi method.
Design/methodology/approach
The modified Taguchi approach is proposed to find the optimum process parameters and the expected range of the output response.
Findings
This paper presents a simple methodology, and its validation with existing test results, for finding the expected range of the output response by suggesting modifications to the Taguchi method.
Research limitations/implications
The adequacy of this methodology should be examined by considering test data on different materials and structures.
Originality/value
The introduction of Chauvenet's criterion, and the rejection of the signal-to-noise ratio transformation on repeated experiments of each test run, will provide fruitful results with less computational burden.
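Chauvenet's criterion, mentioned above, rejects a repeated reading when the expected number of equally deviant readings in a sample of size n falls below one half. A minimal sketch, with hypothetical repeated readings of one test run:

```python
import math

def chauvenet(data):
    """Keep only readings that pass Chauvenet's criterion: reject a reading
    whose expected count of equally-deviant values in n samples is < 0.5."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    def tail_prob(x):
        z = abs(x - mean) / std
        return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability
    return [x for x in data if n * tail_prob(x) >= 0.5]

# repeated readings of one test run, with one suspect value
kept = chauvenet([9.8, 10.1, 10.0, 9.9, 10.2, 14.0])
```

After the flagged reading is discarded, the run mean is recomputed from the retained readings before the optimum-tracing step.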