Search results
1 – 10 of 271
Maroof Naieem Qadri and S.M.K. Quadri
Abstract
Purpose
The purpose of this paper is to propose a model to map the on-premise computing system of the university with cloud computing for achieving an effective and reliable university e-governance (e-gov) system.
Design/methodology/approach
The proposed model integrates the university's internal e-gov system with cloud computing to achieve better reliability, accessibility and availability of e-gov services while keeping recurring expenditure low. The model has been implemented and tested on the e-gov system of the University of Kashmir (UOK); a case study of this implementation serves as the research methodology to discuss and demonstrate the proposed model.
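As a rough illustration of the hybrid mapping idea, the sketch below routes requests to the on-premise service first and falls back to a cloud-hosted replica when it is unreachable. It is a minimal sketch in Python, not the paper's actual mechanism, and both endpoint URLs are hypothetical.

```python
# Minimal sketch of on-premise-first routing with cloud failover.
# Both endpoints are hypothetical; the paper does not specify its
# mapping mechanism at this level of detail.
import urllib.request

ON_PREMISE = "http://egov.internal.uok.example/health"  # hypothetical
CLOUD_REPLICA = "http://egov.cloud.example/health"      # hypothetical

def pick_endpoint(timeout: float = 2.0) -> str:
    """Return the on-premise endpoint if healthy, else the cloud replica."""
    try:
        with urllib.request.urlopen(ON_PREMISE, timeout=timeout) as resp:
            if resp.status == 200:
                return ON_PREMISE
    except OSError:
        pass  # on-premise unreachable: fail over to the cloud replica
    return CLOUD_REPLICA

print("serving from:", pick_endpoint())
```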
Findings
According to the results of the practical implementation, the proposed model is well suited to e-governed systems: it delivered adequate cost savings and high availability (HA) with operational ease, while retaining the security needed to protect confidential information such as student details and grades.
Practical implications
By mapping the university's internal IT servers to external cloud computing services, the study shows how to achieve HA while reducing the cost incurred from using external clouds.
Originality/value
Because no established mapping model exists for an effective, low-cost, highly available university e-gov system, the proposed model closes this gap and provides guidelines for implementing a hybrid-mapped e-gov model for universities. The paper also reports the perceptions of its adoption at UOK for achieving high reliability, accessibility and uptime of its e-gov applications while keeping the recurring expenditure on cloud computing minimal.
Nesar Ahmad, M.U. Bokhari, S.M.K. Quadri and M.G.M. Khan
Abstract
Purpose
The purpose of this research is to incorporate the exponentiated Weibull testing‐effort functions into software reliability modeling and to estimate the optimal software release time.
Design/methodology/approach
This paper suggests a software reliability growth model (SRGM) based on the non-homogeneous Poisson process (NHPP) which incorporates the exponentiated Weibull (EW) testing-effort function.
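For concreteness, the usual shape of such a model is an NHPP mean value function driven by cumulative testing-effort; the abstract does not spell out the notation, so the symbols below are assumptions following the standard literature.

```latex
m(t) = a\bigl(1 - e^{-r\,W(t)}\bigr), \qquad
W(t) = \alpha\bigl(1 - e^{-\beta t^{\gamma}}\bigr)^{\theta}
```

Here $m(t)$ is the expected number of faults detected by time $t$, $a$ the expected initial fault content, $r$ the fault detection rate per unit effort, $\alpha$ the total testing-effort, and $\beta, \gamma, \theta$ the EW scale and shape parameters.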
Findings
Experimental results on actual data from three software projects are compared with other existing models; the comparison reveals that the proposed SRGM with EW testing-effort is more general and more effective.
Research limitations/implications
This paper presents an SRGM that assumes a constant error detection rate per unit testing-effort.
Practical implications
Software reliability growth modeling is one of the fundamental techniques for assessing software reliability quantitatively. The results obtained in this paper will be useful during the software testing process.
Originality/value
The present scheme has a flexible structure and may cover many of the earlier results on software reliability growth modeling. In general, this paper also provides a framework in which many software reliability growth models can be described.
Shabia Shabir Khan and S.M.K. Quadri
Abstract
Purpose
In the treatment of the most complex design issues, approaches based on classical artificial intelligence are inferior to those based on computational intelligence, particularly when dealing with vagueness, multiple objectives and a large space of possible solutions. In practical applications, computational techniques have given the best results, and research in this field is growing continuously. The purpose of this paper is to search for a general and effective intelligent tool for the prediction of patient survival after surgery. The present study constructs such intelligent computational models using different configurations, including data partitioning techniques, and evaluates them experimentally on a realistic medical data set for the prediction of survival in pancreatic cancer patients.
Design/methodology/approach
On the basis of experiments performed on data from various fields using different intelligent tools, the authors infer that combining the qualification aspects of a fuzzy inference system with the quantification aspects of an artificial neural network can yield an efficient and better model for prediction. The authors constructed three soft computing-based adaptive neuro-fuzzy inference system (ANFIS) models with different configurations and data partitioning techniques, with the aim of finding capable predictive tools that can deal with nonlinear and complex data. After evaluating the models over three shuffles of data (training set, test set and full set), the performances were compared to find the best design for predicting patient survival after surgery. The models were constructed and implemented using the MATLAB simulator.
Findings
On applying the hybrid intelligent neuro-fuzzy models with different configurations, the authors found them advantageous in predicting the survival of patients with pancreatic cancer. Experimental results and comparison between the constructed models show that the ANFIS model with Fuzzy C-means (FCM) partitioning predicts the class most accurately, with the lowest mean square error (MSE). Beyond MSE, the other evaluation measures for the FCM-partitioned model also surpass those of the remaining models. The results therefore suggest that the model can be applied in other biomedicine and engineering fields dealing with complex issues of imprecision and uncertainty.
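To make the FCM partitioning step concrete, here is a minimal self-contained NumPy sketch of fuzzy C-means; it illustrates the technique on synthetic data and is not the authors' MATLAB models or their medical data set.

```python
# Minimal fuzzy C-means (FCM) sketch: soft-partition the rows of X into
# c clusters, the partitioning scheme used by the best ANFIS configuration.
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Return (cluster centers, membership matrix) for X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m                              # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / d ** (2.0 / (m - 1.0))     # closer centers get higher weight
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:        # stop once memberships settle
            break
        U = U_new
    return centers, U

# Demo on two synthetic blobs standing in for (non-public) patient features.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4.0])
centers, U = fcm(X, c=2)
print("cluster centers:\n", centers)
```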
Originality/value
The originality of the paper includes a framework showing a two-way flow for fuzzy system construction, which the authors use in designing the three simulation models with different configurations, including the partitioning methods, for the prediction of patient survival after surgery. Several experiments were carried out using different shuffles of data to validate the parameters of the models, and the performances of the models were compared using evaluation measures such as MSE.
N. Ahmad, M.G.M. Khan, S.M.K. Quadri and M. Kumar
Abstract
Purpose
The purpose of this research paper is to discuss a software reliability growth model (SRGM) based on the non‐homogeneous Poisson process which incorporates the Burr type X testing‐effort function (TEF), and to determine the optimal release‐time based on cost‐reliability criteria.
Design/methodology/approach
It is shown that the Burr type X TEF can be expressed as a software development/testing-effort consumption curve. A weighted least squares estimation method is proposed to estimate the TEF parameters, and the SRGM parameters are estimated by the maximum likelihood estimation method. The standard errors and confidence intervals of the SRGM parameters are also obtained. Furthermore, optimal release-time determination based on cost-reliability criteria is discussed within this framework.
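As a sketch of that framework (the abstract gives no formulas, so the notation below is assumed from the standard cost-reliability literature), the Burr type X cumulative testing-effort and a typical release-time cost criterion take the form

```latex
W(t) = \alpha\bigl(1 - e^{-\beta t^{2}}\bigr)^{\gamma}, \qquad
C(T) = c_1\, m(T) + c_2\bigl[m(T_{LC}) - m(T)\bigr] + c_3 \int_0^T w(t)\,dt
```

where $c_1$ and $c_2$ are the costs of fixing a fault during testing and during operation, $c_3$ is the cost per unit testing-effort, $T_{LC}$ is the length of the software life cycle and $w(t) = W'(t)$; the optimal release time minimizes $C(T)$ subject to a reliability requirement.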
Findings
The performance of the proposed SRGM is demonstrated using actual data sets from three software projects. Results are compared with other traditional SRGMs, showing that the proposed model has better prediction capability and that the Burr type X TEF is suitable for incorporation into software reliability modelling. Results also reveal that the SRGM with Burr type X TEF estimates the number of initial faults better than the other traditional SRGMs.
Research limitations/implications
The paper presents the estimation method with equal weights. Future research may extend the present study to unequal weights.
Practical implications
The new SRGM may be useful in detecting more faults that are difficult to find during regular testing, and in assisting software engineers to improve their software development process.
Originality/value
The incorporated TEF is flexible and can be used to describe the actual expenditure patterns more faithfully during software development.
N. Ahmad, M.G.M. Khan and L.S. Rafi
Abstract
Purpose
The purpose of this paper is to investigate how to incorporate the exponentiated Weibull (EW) testing‐effort function (TEF) into inflection S‐shaped software reliability growth models (SRGMs) based on non‐homogeneous Poisson process (NHPP). The aim is also to present a more flexible SRGM with imperfect debugging.
Design/methodology/approach
This paper reviews the EW TEFs and discusses an inflection S-shaped SRGM with EW testing-effort to better describe the software fault detection phenomenon. The SRGM parameters are estimated by the weighted least squares estimation (WLSE) and maximum likelihood estimation (MLE) methods. Furthermore, the proposed models are also discussed under an imperfect debugging environment.
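For reference, the standard inflection S-shaped mean value function with a cumulative testing-effort $W(t)$ (here the EW TEF) substituted for calendar time reads as below; the symbols are assumptions, since the abstract does not list them.

```latex
m(t) = \frac{a\bigl(1 - e^{-r\,W(t)}\bigr)}{1 + \psi\, e^{-r\,W(t)}}
```

Here $a$ is the initial fault content, $r$ the detection rate per unit effort and $\psi$ the inflection parameter; imperfect debugging is typically modeled by letting $a$ grow with the number of faults detected.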
Findings
Experimental results from three actual data applications are analyzed and compared with other existing models. The findings reveal that the proposed SRGM has better performance and prediction capability. Results also confirm that the EW TEF is suitable for incorporation into inflection S-shaped NHPP growth models.
Research limitations/implications
This paper presents the WLSE results with equal weights. Future research may be carried out for unequal weights.
Practical implications
Software reliability modeling and estimation are a major concern in the software development process, particularly during the software testing phase, as unreliable software can cause a computer-system failure that can be hazardous. The results obtained in this paper may help software engineers, scientists and managers improve the software testing process.
Originality/value
The proposed SRGM has a flexible structure and may capture features of both exponential and S‐shaped NHPP growth models for failure phenomenon.
Bhawna Suri, Shweta Taneja and Hemanpreet Singh Kalsi
Abstract
This chapter discusses the twofold role of business intelligence (BI) in healthcare: strategic decision making for the organization and for its stakeholders. Visualization techniques of data mining are applied for the early and correct diagnosis of disease and for measuring the patient satisfaction quotient, and they also help the hospital identify its best performers.
In this chapter, the usefulness of BI is shown at two levels: the doctor level and the hospital level. As a case study, a hospital is taken that deals with three different diseases: breast cancer, diabetes and liver disorder. BI can be applied to take better strategic decisions in the context of the hospital and the growth of its departments. At the doctor level, on the basis of the various symptoms of the disease, the doctor can advise suitable treatment to the patient. At the hospital level, the best department among all can be identified; measures such as a patient's type of admission, whether patients continued their treatment with the hospital and the patient satisfaction quotient can also be calculated. The authors use methods such as correlation matrices, decision trees and mosaic plots to conduct this analysis.
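As a flavor of the doctor-level analysis, the sketch below fits a decision tree that maps disease measurements to a diagnosis. It uses scikit-learn's bundled breast cancer data set as a stand-in, since the hospital's data and the chapter's exact models are not public.

```python
# Illustrative decision tree for disease diagnosis (doctor-level analysis).
# The bundled breast cancer data set stands in for the hospital's records.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)  # shallow => readable rules
tree.fit(X_tr, y_tr)
print("held-out accuracy:", round(tree.score(X_te, y_te), 3))
```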
Chao Wang, Longfeng Zhao, André L.M. Vilela and Ming K. Lim
Abstract
Purpose
The purpose of this paper is to examine the publication characteristics and dynamic evolution of Industrial Management & Data Systems (IMDS) over the past 25 years, from volume 94, issue 1 (1994) through volume 118, issue 9 (2018), using a bibliometric analysis, and to identify the leading trends that have affected the journal during this time frame.
Design/methodology/approach
A bibliometric approach was used to provide a basic overview of IMDS, including the distribution of publications and citations, articles citing IMDS, top-cited papers and publication patterns. A complex network analysis was then employed to identify the most productive, influential and active authors, institutes and countries/regions. In addition, cluster analysis and an alluvial diagram were used to analyze author keywords.
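The keyword clustering rests on a co-occurrence network; the sketch below builds one with networkx from toy records, which stand in for the journal's actual article metadata.

```python
# Build a keyword co-occurrence network: nodes are author keywords, edge
# weights count how often two keywords appear in the same article.
from itertools import combinations
import networkx as nx

articles = [  # hypothetical author-keyword lists
    ["big data", "supply chain", "analytics"],
    ["analytics", "decision making"],
    ["big data", "analytics", "decision making"],
]

G = nx.Graph()
for keywords in articles:
    for a, b in combinations(sorted(set(keywords)), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Strongest co-occurrences first; cluster analysis would then run on G.
print(sorted(G.edges(data="weight"), key=lambda e: -e[2]))
```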
Findings
This study presents the basic bibliometric results for IMDS, focusing on its performance over the last 25 years, and reveals the most productive, influential and active authors, institutes and countries/regions in the journal. Moreover, the study detects at least five distinct keyword clusters and shows how themes have evolved through the intricate citation relationships in IMDS.
Originality/value
The main contribution of this paper is the use of multiple analysis techniques from the complex network paradigm to emphasize the time-evolving nature of the co-occurrence networks and to explore the variation of the collaboration networks in IMDS. For the first time, the evolution of research themes is revealed with a purely data-driven approach.
Wasim Ahmad Bhat and S.M.K. Quadri
Abstract
Purpose
The purpose of this paper is to explore the challenges posed by Big Data to current trends in computation, networking and storage technology at various stages of Big Data analysis. The work aims to bridge the gap between theory and practice, and highlight the areas of potential research.
Design/methodology/approach
The study employs a systematic and critical review of the relevant literature to explore the challenges posed by Big Data to hardware technology, and to assess the worthiness of hardware technology at various stages of Big Data analysis. Online computer databases were searched to identify the literature relevant to: Big Data requirements and challenges; and the evolution and current trends of hardware technology.
Findings
The findings reveal that even though current hardware technology has not evolved with the motivation to support Big Data analysis, it significantly supports Big Data analysis at all stages. However, the findings also point to some important shortcomings and challenges of current technology trends. These include: a lack of intelligent Big Data sources; the need for scalable real-time analysis capability; a lack of network support for latency-bound applications; the need for augmented network support for peer-to-peer networks; and a rethinking of cost-effective, high-performance storage subsystems.
Research limitations/implications
The study suggests that much research remains to be done in hardware technology if the full potential of Big Data is to be unlocked.
Practical implications
The study suggests that practitioners need to choose hardware infrastructure for Big Data meticulously, considering the limitations of current technology.
Originality/value
This research arms industry, enterprises and organizations with concise and comprehensive technical knowledge about the capability of current hardware technology trends to solve Big Data problems. It also highlights areas of potential research and immediate attention that researchers can exploit to explore new ideas and existing practices.
Abstract
Contemporary organisations are data-driven, with sophisticated and strong Information Technology (IT) supporting Business Intelligence (BI) systems. Owing to Industrial Revolution 4.0, businesses are subject to volatility, uncertainty, complexity and ambiguity (VUCA). The accuracy and agility of decision making (DM) play a key role in the success of contemporary organisations. Traditional methods of DM, i.e. those based on tacit knowledge, are no longer adequate in constantly changing business scenarios. Innovations in the IT domain have produced systems that gather and process business data at exponential speed. Context-driven analytics, along with computational capability and performance-driven visualisation, has become an implicit need for businesses. BI systems offer the capabilities of data-driven DM while allowing organisations to predict future business scenarios. Qualitative research is conducted in this chapter, using interviews, questionnaires and secondary data from previous research as data sources. Case studies are discussed to clarify the business use cases of BI systems and their impact on managerial DM. Theoretical foundations are stated on the basis of a thorough literature review of the available body of knowledge. The current environment demands data-driven DM in an organisation at all levels, i.e. strategic, tactical and operational. Heterogeneous data sources add unlimited value to decision support systems (DSSs). BI systems have become an integral part of the technology landscape and an essential element in managerial DM. Contemporary businesses have deployed BI systems in all functions.