Sathyaraj R, Ramanathan L, Lavanya K, Balasubramanian V and Saira Banu J
Abstract
Purpose
Big data is growing day by day to the point that conventional software tools face several problems in managing it. Moreover, the occurrence of imbalanced data in massive data sets is a major constraint for the research community.
Design/methodology/approach
The purpose of the paper is to introduce a big data classification technique using the MapReduce framework based on an optimization algorithm. The big data classification is enabled using the MapReduce framework, which utilizes the proposed optimization algorithm, named the chicken-based bacterial foraging (CBF) algorithm. The proposed algorithm is generated by integrating the bacterial foraging optimization (BFO) algorithm with the cat swarm optimization (CSO) algorithm. The proposed model executes the process in two stages, namely, the training and testing phases. In the training phase, the big data produced from different distributed sources is processed in parallel by the mappers, which perform preprocessing and feature selection based on the proposed CBF algorithm. The preprocessing step eliminates redundant and inconsistent data, whereas the feature selection step extracts the significant features from the preprocessed data to improve classification accuracy. The selected features are fed into the reducer for data classification using the deep belief network (DBN) classifier, which is trained using the proposed CBF algorithm so that the data are classified into various classes; at the end of the training process, each reducer presents its trained model. Thus, incremental data are handled effectively based on the models built in the training phase. In the testing phase, the incremental data are split into different subsets and fed into the different mappers for classification. Each mapper contains a trained model obtained from the training phase, which it uses to classify the incremental data. After classification, the outputs from the mappers are fused and fed into the reducer for the final classification.
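A minimal sketch of the training-phase mapper/reducer flow described above follows; all names (preprocess, select_features_cbf, StubDBN) are hypothetical placeholders and the logic is illustrative only, not the authors' CBF or DBN implementation.

```python
# Illustrative training-phase sketch; all names are hypothetical placeholders,
# not the authors' implementation of the CBF algorithm or the DBN.

def preprocess(split):
    """Remove redundant (duplicate) and inconsistent (incomplete) records."""
    seen, clean = set(), []
    for record in split:
        key = tuple(sorted(record.items()))
        if key not in seen and all(v is not None for v in record.values()):
            seen.add(key)
            clean.append(record)
    return clean

def select_features_cbf(records, n_features=8):
    """Stand-in for CBF-guided feature selection: keep a fixed subset of keys."""
    keys = sorted(records[0])[:n_features]
    return [{k: r[k] for k in keys} for r in records]

class StubDBN:
    """Placeholder for the DBN classifier that the CBF algorithm would train."""
    def fit(self, features, labels):
        self.majority = max(set(labels), key=labels.count)
        return self

    def predict(self, features):
        return [self.majority for _ in features]

def mapper(split):
    """Mapper phase: preprocessing followed by feature selection on one split."""
    return select_features_cbf(preprocess(split))

def reducer(selected_features, labels):
    """Reducer phase: train the classifier; each reducer emits its trained model."""
    return StubDBN().fit(selected_features, labels)
```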
Findings
The proposed method achieves the maximum accuracy and Jaccard coefficient on the epileptic seizure recognition database. The proposed CBF-DBN produces a maximal accuracy value of 91.129%, whereas the accuracy values of the existing neural network (NN), DBN and naive Bayes classifier-term frequency–inverse document frequency (NBC-TFIDF) are 82.894%, 86.184% and 86.512%, respectively. The proposed CBF-DBN likewise produces a maximal Jaccard coefficient of 88.928%, whereas the Jaccard coefficient values of the existing NN, DBN and NBC-TFIDF are 75.891%, 79.850% and 81.103%, respectively.
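For reference, a minimal sketch of the two reported metrics is given below; the abstract does not state how the Jaccard coefficient is aggregated over classes, so a mean per-class Jaccard index is assumed here purely for illustration.

```python
# Minimal metric sketch. The abstract does not specify how the Jaccard
# coefficient is aggregated over classes; a mean per-class index is assumed.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def jaccard(y_true, y_pred):
    """Mean per-class Jaccard index: |intersection| / |union| of index sets."""
    scores = []
    for c in set(y_true) | set(y_pred):
        true_idx = {i for i, y in enumerate(y_true) if y == c}
        pred_idx = {i for i, y in enumerate(y_pred) if y == c}
        union = true_idx | pred_idx
        scores.append(len(true_idx & pred_idx) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Example: accuracy([0, 1, 1, 2], [0, 1, 2, 2]) == 0.75
```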
Originality/value
In this paper, a big data classification method is proposed for categorizing massive data sets and meeting the constraints of huge data. The classification is performed on the MapReduce framework, with training and testing phases, so that the data are handled in parallel. In the training phase, the big data is partitioned into different subsets and fed to the mappers. In each mapper, a feature extraction step extracts the significant features, which are passed to the reducers for classifying the data. The DBN classifier is utilized for the classification, and the DBN is trained using the proposed CBF algorithm; the trained model is obtained as the output of this phase. In the testing phase, the incremental data are considered for classification: the new data are first split into subsets and fed to the mappers, which classify them using the trained models obtained from the training phase. The classified results from each mapper are fused and fed into the reducer for the final classification of the big data.
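A companion sketch of the testing-phase flow follows; the fusion rule is not detailed in the abstract, so the mappers' outputs are simply re-interleaved into the original record order here, and all names are hypothetical.

```python
# Testing-phase sketch. The fusion step is not detailed in the abstract, so the
# mappers' outputs are simply merged back into the original record order here.

def split_into_subsets(records, n_mappers):
    """Round-robin split of the incremental data across the mappers."""
    return [records[i::n_mappers] for i in range(n_mappers)]

def test_phase(incremental_data, trained_models):
    """Each mapper classifies its subset with a trained model from the training
    phase; the reducer fuses the outputs (assumed: re-interleave in order)."""
    n = len(trained_models)
    subsets = split_into_subsets(incremental_data, n)
    per_mapper = [model.predict(s) for model, s in zip(trained_models, subsets)]
    fused = [None] * len(incremental_data)
    for m, predictions in enumerate(per_mapper):
        for j, label in enumerate(predictions):
            fused[m + j * n] = label      # invert the round-robin split
    return fused

class ConstantModel:
    """Tiny stand-in for a trained model (any object with .predict works)."""
    def __init__(self, label):
        self.label = label
    def predict(self, records):
        return [self.label] * len(records)

# Example: two "trained" mappers classifying six incoming records.
print(test_phase(list(range(6)), [ConstantModel("a"), ConstantModel("b")]))
# -> ['a', 'b', 'a', 'b', 'a', 'b']
```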
Abstract
This article has been withdrawn as it was published elsewhere and accidentally duplicated. The original article can be seen here: 10.1108/eb014561. When citing the article, please cite: D. Walters, C.A. Rands, (1983), “Computers in Retailing”, International Journal of Physical Distribution & Materials Management, Vol. 13 Iss: 4, pp. 3-36.
Abstract
Since the first Volume of this Bibliography there has been an explosion of literature in all the main areas of business. The researcher and librarian have to be able to uncover specific articles devoted to certain topics, and this Bibliography is designed to help. Volume III, in addition to the annotated list of articles found in the two previous volumes, contains further features to help the reader. Each entry has been indexed according to the Fifth Edition of the SCIMP/SCAMP Thesaurus and thus provides a full subject index to facilitate rapid information retrieval. Each article has its own unique number, which is used in both the subject and author indexes. The first Volume of the Bibliography covered seven journals published by MCB University Press; this Volume now indexes 25 journals, indicating the greater depth, coverage and expansion of the subject areas concerned.
Wenrui Jin, Zhaoxu He and Qiong Wu
Abstract
Purpose
Due to the market trend toward low-volume, high-variety production, the manufacturing industry is paying close attention to improving its ability to hedge against variability. Therefore, in this paper the assembly line with limited resources is balanced in a robust way that performs well under all possible scenarios. The proposed model allows decision makers to minimize the a posteriori regret of the selected choice and hedge against the high cost caused by variability.
Design/methodology/approach
A generalized resource-constrained assembly line balancing problem (GRCALBP) with interval task-time data is modeled, and the objective is to find an assignment of tasks and resources to the workstations such that the maximum regret among all possible scenarios is minimized. To solve the problem, the regret evaluation, an exact solution method and an enhanced meta-heuristic, the Whale Optimization Algorithm (WOA), are proposed and analyzed. A problem-specific coding scheme and search mechanisms are incorporated.
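For clarity, the min-max regret objective can be written generically as follows; the notation is assumed for illustration and is not taken from the paper.

```latex
% Generic min-max regret objective (illustrative notation, not the paper's):
%   X        : feasible task-and-resource assignments to workstations
%   S        : task-time scenarios induced by the interval data
%   f(x, s)  : cost of assignment x under scenario s
%   f^{*}(s) : optimal cost attainable under scenario s
\[
  \min_{x \in X} \; \max_{s \in S} \Bigl( f(x, s) - f^{*}(s) \Bigr),
  \qquad
  f^{*}(s) = \min_{y \in X} f(y, s).
\]
```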
Findings
Theoretical analysis and computational experiments are conducted to evaluate the proposed methods and demonstrate their superiority. Satisfactory results show that the constraint generation technique-based exact method can efficiently solve instances of moderate size to optimality, and the performance of the WOA is enhanced by the modified search strategy.
Originality/value
For the first time, a min-max regret model is considered in a resource-constrained assembly line balancing problem. The traditional Whale Optimization Algorithm is modified to overcome its limited search capability and applied to discrete, constrained assembly line balancing problems.
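As an illustration only (the authors' problem-specific coding scheme and search mechanisms are not described in the abstract), one common way to adapt the WOA position update to a permutation encoding is to replace the continuous moves with guided swaps toward a reference sequence, as in the sketch below.

```python
# Illustrative discrete adaptation of the WOA update; this is not the authors'
# modified search strategy, only a generic permutation-based analogue.
import random

def move_toward(sequence, reference, strength):
    """Discrete analogue of the WOA 'encircling' move: pull a permutation
    toward a reference sequence with guided swaps."""
    seq = sequence[:]
    for i in range(len(seq)):
        if seq[i] != reference[i] and random.random() < strength:
            j = seq.index(reference[i])
            seq[i], seq[j] = seq[j], seq[i]
    return seq

def woa_step(population, best, a):
    """One iteration over a permutation-encoded population; the parameter a
    shrinks from 2 to 0 over iterations, as in the continuous WOA, shifting
    the search from exploration toward exploitation."""
    next_population = []
    for seq in population:
        if random.random() < 0.5:
            # Exploit: move toward the best-known sequence.
            next_population.append(move_toward(seq, best, strength=1.0 - a / 2.0))
        else:
            # Explore: move toward a randomly chosen member of the population.
            partner = random.choice(population)
            next_population.append(move_toward(seq, partner, strength=0.5))
    return next_population

# Example with three task sequences of five tasks each (purely illustrative):
pop = [[0, 1, 2, 3, 4], [4, 3, 2, 1, 0], [2, 0, 4, 1, 3]]
print(woa_step(pop, best=[0, 2, 1, 4, 3], a=1.0))
```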
D. Walters and C.A. Rands
Abstract
Along with organisations in other fields, retailers have been using computers in management systems since the mid‐1960s and, in some cases, much earlier. Over this period, there have been dramatic changes in the computer technology available for use by management, together with considerable accumulated experience in using it, particularly in retailing. However, this has, in many industries, been offset by an increase in the problems facing managers; in retailing, for instance, companies now have to face economies in which disposable incomes have been squeezed, whilst buying patterns are changing rapidly and becoming difficult to predict. A consequence of this is that, to survive today, retailers must be far better at product range planning, cash planning and control of capital than they needed to be in the 1960s. They may be helped in this by an increasing understanding of how to manage product range, cash flow or funds allocation problems, and also by the availability of more advanced computing facilities which allow managers to apply this understanding more effectively. These facilities range from the computers on offer (mainframes to micros) to data flow networks, automated data input, visual display terminals and specialist software for retail planning and control (e.g. distribution packages).
Abstract
This paper aims to explore the applicability of Systems Dynamics Methodology (SDM) to the formulation of long‐range cash flow policies. It also explains how the information generated from the model aids in understanding the behaviour of cash flow through time and helps in determining cash deficits and excess cash, including their timing, and the construction of cash budgets under different cash control policies. After a brief introduction which explains the basic ideas behind SDM, the structure of the model is developed and described and the results of a hypothetical example are analysed. This is followed by some comments on practical aspects of implementing the model in real life and its potential for cash flow planning and control.
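A toy stock-and-flow sketch of how an SDM-style cash model steps through time and flags deficits or excess cash is shown below; the structure and figures are illustrative only, not the paper's model.

```python
# Toy stock-and-flow sketch: cash is the stock, receipts and payments are the
# flows. Structure and figures are illustrative only, not the paper's model.

def simulate_cash(opening_cash, receipts, payments, min_cash=0.0):
    """Step the cash stock forward period by period and flag deficits/excess."""
    cash, history = opening_cash, []
    for month, (r, p) in enumerate(zip(receipts, payments), start=1):
        cash += r - p                              # net flow into the stock
        status = "deficit" if cash < min_cash else "excess"
        history.append((month, round(cash, 2), status))
    return history

# Example: a four-month horizon with a temporary squeeze in month 2.
for month, cash, status in simulate_cash(
        opening_cash=100.0,
        receipts=[80, 20, 90, 120],
        payments=[70, 150, 60, 80]):
    print(month, cash, status)
```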
Cornelia Dröge and Richard Germain
Abstract
Examines empirically which of a range of variables affect managers' perceptions of the management information system (MIS) designed to support logistics. The results suggest that the adoption of computer software, the use of specific informational control devices and some aspects of logistics organisation have an effect in both smaller and larger firms. Other variables, such as the title and tenure of the senior logistics executive, do not systematically predict variance in managers' perceptions of logistics MIS.
Mehdi Habibi, Yunus Dawji, Ebrahim Ghafar-Zadeh and Sebastian Magierowski
Abstract
Purpose
Nanopore-based molecular sensing and measurement, specifically DNA sequencing, is advancing at a fast pace. Some embodiments have matured from coarse particle counters to enabling full human genome assembly. This evolution has been powered not only by improvements in the sensors themselves but also by improvements in the assisting microelectronic CMOS readout circuitry closely interfaced to them. In this light, this paper aims to review established and emerging nanopore-based sensing modalities considered for DNA sequencing and the CMOS microelectronic methods currently in use.
Design/methodology/approach
Readout and amplifier circuits that are potentially appropriate for conditioning and converting nanopore signals for downstream processing are studied. Furthermore, arrayed CMOS readout implementations are examined, and the current status of nanopore sensor technology is reviewed as well.
Findings
Ion channel nanopore devices have unique properties compared with other electrochemical cells. Currently, biological nanopores are the only reported variants that can be used for actual DNA sequencing. The translocation rate of DNA through such pores, the current range at which these cells operate and the cell capacitance effect all impose the need for low-noise circuits in the signal detection process. The requirement for in-pixel low-noise circuits in turn poses challenges for the implementation of large arrays.
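A back-of-envelope sketch of why low-noise in-pixel amplifiers are needed at these signal levels is given below; all figures are assumed "typical" values chosen for illustration and are not taken from the paper.

```python
# Back-of-envelope noise budget; every figure below is an assumed "typical"
# value for illustration and is not taken from the paper.
import math

k_B, T = 1.380649e-23, 300.0               # Boltzmann constant (J/K), temperature (K)

translocation_rate = 450.0                 # nucleotides per second (assumed)
samples_per_base   = 10.0                  # oversampling factor (assumed)
bandwidth = translocation_rate * samples_per_base / 2.0   # ~Nyquist bandwidth, Hz

signal_current = 50e-12                    # pA-scale pore-current step (assumed)
R_feedback     = 500e6                     # transimpedance feedback resistor (assumed)

# Thermal (Johnson) current noise contributed by the feedback resistor alone.
i_noise_rms = math.sqrt(4 * k_B * T / R_feedback * bandwidth)

print(f"bandwidth       ~ {bandwidth:.0f} Hz")
print(f"resistor noise  ~ {i_noise_rms * 1e12:.2f} pA rms")
print(f"signal-to-noise ~ {20 * math.log10(signal_current / i_noise_rms):.1f} dB")
```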
Originality/value
The study presents an overview of the readout circuits used for signal acquisition in electrochemical cell arrays and investigates the specific requirements for implementing nanopore-type electrochemical cell amplifiers and their associated readout electronics.