Search results
1 – 10 of 12

Samar Ali Shilbayeh and Sunil Vadera
Abstract
Purpose
This paper aims to describe the use of a meta-learning framework for recommending cost-sensitive classification methods with the aim of answering an important question that arises in machine learning, namely, “Among all the available classification algorithms, and in considering a specific type of data and cost, which is the best algorithm for my problem?”
Design/methodology/approach
This paper describes the use of a meta-learning framework for recommending cost-sensitive classification methods. The framework is based on the idea of applying machine learning techniques to discover knowledge about the performance of different machine learning algorithms. It includes components that repeatedly apply different classification methods to data sets and measure their performance. The characteristics of the data sets, combined with the algorithms and their performance, provide the training examples. A decision tree algorithm is applied to the training examples to induce the knowledge, which can then be used to recommend algorithms for new data sets. The paper contributes to both meta-learning and cost-sensitive machine learning. Neither field is new; the contribution lies in building a recommender that recommends the optimal cost-sensitive approach for a given data problem. The proposed solution is implemented in WEKA and evaluated by applying it to different data sets and comparing the results with existing studies available in the literature. The results show that the developed meta-learning solution produces better results than METAL, a well-known meta-learning system. Unlike METAL, the developed solution takes the misclassification cost into consideration during the learning process.
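As a rough illustration of the loop the framework describes, the sketch below applies several candidate classifiers to a collection of data sets, records each data set's characteristics alongside the best performer, and trains a decision tree to recommend an algorithm for a new data set. It uses scikit-learn rather than WEKA, synthetic data sets, and plain accuracy in place of the paper's cost-sensitive performance measure; the candidate algorithms and meta-features are illustrative assumptions.

```python
# A minimal sketch of the meta-learning loop, assuming scikit-learn in place
# of WEKA; synthetic data sets and plain accuracy stand in for the paper's
# cost-sensitive performance measure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

candidates = {
    "tree": DecisionTreeClassifier(),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

meta_X, meta_y = [], []
for seed in range(20):  # repeatedly apply the classifiers to different data sets
    X, y = make_classification(n_samples=300, n_features=10, random_state=seed)
    meta_features = [X.shape[0], X.shape[1], float(np.var(X, axis=0).mean())]
    scores = {name: cross_val_score(clf, X, y, cv=3).mean()
              for name, clf in candidates.items()}
    meta_X.append(meta_features)                # data set characteristics ...
    meta_y.append(max(scores, key=scores.get))  # ... labelled with the best algorithm

# Induce the recommendation knowledge with a decision tree, as the paper does.
meta_learner = DecisionTreeClassifier().fit(meta_X, meta_y)

# Recommend an algorithm for a new, unseen data set.
X_new, _ = make_classification(n_samples=500, n_features=10, random_state=99)
print(meta_learner.predict([[500, 10, float(np.var(X_new, axis=0).mean())]]))
```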
Findings
The proposed solution is implemented in WEKA and evaluated by applying it to different data sets and comparing the results with existing studies available in the literature. The results show that the developed meta-learning solution produces better results than METAL, a well-known meta-learning system.
Originality/value
The paper presents new work. Meta-learning has been studied before, but this paper presents a new meta-learning framework that is cost-sensitive.
Afzal Sheikh, Sunil Vadera, Michael Ravey, Gary Lovatt and Grace Kelly
Abstract
Purpose
Over 200,000 young people in the UK embark on a smoking career annually, so continued effort is required to understand the types of interventions that are most effective in changing perceptions about smoking amongst teenagers. Several authors have proposed the use of social norms programmes, where correcting misconceptions of what is considered normal behaviour leads to improved behaviours. There are only a limited number of studies showing the effectiveness of such programmes in changing teenagers' perceptions of smoking habits, and hence this paper reports results from one of the largest social norms programmes to use a variety of interventions aimed at improving teenagers' perceptions of smoking.
Design/methodology/approach
A range of interventions was adopted across 57 programmes for year nine students, ranging from passive interventions such as posters and banners to active interventions such as student apps and enterprise days. Each programme consisted of a baseline survey, followed by interventions and a repeat survey to calculate the change in perception. A clustering algorithm was also used to reveal the impact of combinations of interventions.
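A hypothetical sketch of such a clustering step follows, assuming k-means over binary intervention-usage vectors; the intervention names, the synthetic usage data and the perception-change figures are invented for illustration and are not the study's data.

```python
# A hypothetical sketch of the clustering step: each programme is a binary
# vector of which interventions it used, plus its measured change in
# perception; k-means groups programmes so effective combinations stand out.
# Intervention names and all data below are invented, not the study's.
import numpy as np
from sklearn.cluster import KMeans

interventions = ["posters", "banners", "student_app", "enterprise_day", "newsletter"]
rng = np.random.default_rng(0)
usage = rng.integers(0, 2, size=(57, len(interventions)))  # 57 programmes, 0/1 usage
change = usage @ np.array([0.5, 0.4, 1.5, 2.0, 1.0]) + rng.normal(0, 0.5, 57)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(usage)
for c in range(3):
    members = kmeans.labels_ == c
    print(f"cluster {c}: mean change {change[members].mean():.2f}, "
          f"intervention rates {usage[members].mean(axis=0).round(2)}")
```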
Findings
The study reveals three main findings: the use of social norms is an effective means of changing perceptions, the level of interventions and change in perceptions are positively correlated, and that the most effective combinations of interventions include the use of interactive feedback assemblies, enterprise days, parent and student apps and newsletters to parents.
Originality/value
The paper presents results from one of the largest social norm programmes aimed at improving young people’s perceptions and the first to use clustering methods to reveal the impact of combinations of intervention.
Khairy A.H. Kobbacy and Sunil Vadera
Abstract
Purpose
The use of AI for operations management, with its ability to evolve solutions, handle uncertainty and perform optimisation continues to be a major field of research. The growing body of publications over the last two decades means that it can be difficult to keep track of what has been done previously, what has worked, and what really needs to be addressed. Hence, the purpose of this paper is to present a survey of the use of AI in operations management aimed at presenting the key research themes, trends and directions of research.
Design/methodology/approach
The paper builds upon our previous survey of this field, which covered the ten-year period 1995-2004. Like the previous survey, it uses Elsevier's Science Direct database as a source. The framework and methodology adopted for the survey are kept as similar as possible to enable continuity and comparison of trends. Thus, the application categories adopted are: design; scheduling; process planning and control; and quality, maintenance and fault diagnosis. Research on utilising neural networks, case-based reasoning (CBR), fuzzy logic (FL), knowledge-based systems (KBS), data mining and hybrid AI in the four application areas is identified.
Findings
The survey categorises over 1,400 papers, identifying the uses of AI in the four categories of operations management and concludes with an analysis of the trends, gaps and directions for future research. The findings include: the trends for design and scheduling show a dramatic increase in the use of genetic algorithms since 2003 that reflect recognition of their success in these areas; there is a significant decline in research on use of KBS, reflecting their transition into practice; there is an increasing trend in the use of FL in quality, maintenance and fault diagnosis; and there are surprising gaps in the use of CBR and hybrid methods in operations management that offer opportunities for future research.
Originality/value
This is the largest and most comprehensive study to classify research on the use of AI in operations management to date. The survey and trends identified provide a useful reference point and directions for future research.
Abstract
Purpose
The purpose of this paper is to raise awareness among manufacturing researchers and practitioners of the potential of Bayesian networks (BNs) to enhance decision making in those parts of the manufacturing domain where uncertainty is a key characteristic. In doing so, the paper describes the development of an intelligent decision support system (DSS) to help operators in Motorola to diagnose and correct faults during the process of product system testing.
Design/methodology/approach
The intelligent DSS combines BNs with an intelligent user interface to produce multimedia advice for operators.
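To give a flavour of the reasoning a BN-based diagnostic DSS performs, the toy sketch below computes fault posteriors by enumeration after a failed test. The fault names, priors and noisy-OR likelihoods are invented assumptions, not Motorola's model; a real system would use a BN library rather than hand enumeration.

```python
# A toy sketch of BN-style diagnosis: given an observed test failure, compute
# the posterior probability of each candidate fault by enumerating the joint
# distribution. Faults, priors and noisy-OR likelihoods are invented for
# illustration and are not Motorola's model.
PRIOR = {"solder_fault": 0.02, "component_fault": 0.05}
LEAK = 0.01  # assumed probability the test fails with no fault present

def p_test_fails(solder: int, component: int) -> float:
    # Noisy-OR: each present fault independently causes the test to fail.
    p_pass = (1 - 0.9 * solder) * (1 - 0.7 * component) * (1 - LEAK)
    return 1 - p_pass

joint = {}
for s in (0, 1):
    for c in (0, 1):
        p_s = PRIOR["solder_fault"] if s else 1 - PRIOR["solder_fault"]
        p_c = PRIOR["component_fault"] if c else 1 - PRIOR["component_fault"]
        joint[(s, c)] = p_s * p_c * p_test_fails(s, c)

evidence = sum(joint.values())  # P(test failed)
print("P(solder_fault | failure) =", sum(p for (s, _), p in joint.items() if s) / evidence)
print("P(component_fault | failure) =", sum(p for (_, c), p in joint.items() if c) / evidence)
```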
Findings
Surveys show that the system is effective in considerably reducing fault correction times for most operators and most fault types and in helping inexperienced operators to approach the performance levels of experienced operators.
Originality/value
Such efficiency improvements are of obvious value in manufacturing. In this particular case, additional benefit was derived when the product testing facility was moved from the UK to China as the system was able to help the new operators to get close to the historical performance level of experienced operators.
Mohamad Saraee, Seyed Vahid Moosavi and Shabnam Rezapour
Abstract
Purpose
This paper aims to present a practical application of Self Organizing Map (SOM) and decision tree algorithms to model a multi‐response machining process and to provide a set of control rules for this process.
Design/methodology/approach
SOM is a powerful artificial neural network approach for analysing and visualising high-dimensional data. Wire electrical discharge machining (WEDM) is a complex and expensive machining process in which many factors affect the outputs. In this work, after collecting a dataset based on a series of designed experiments, the paper applied SOM to the dataset to analyse the underlying relations between input and output variables, as well as interactions between input variables. The results are compared with those obtained from a decision tree algorithm.
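A hedged sketch of the two techniques on synthetic WEDM-like data follows; it assumes the third-party minisom package alongside scikit-learn, and the input variables, surrogate quality label and grid size are illustrative stand-ins for the paper's designed experiments.

```python
# Illustrative run of both techniques on synthetic WEDM-like data; assumes
# the third-party minisom package alongside scikit-learn. Input variables,
# the surrogate quality label and the grid size are invented stand-ins for
# the paper's designed experiments.
import numpy as np
from minisom import MiniSom
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(100, 3))             # e.g. current, pulse-on time, wire speed
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)  # surrogate good/bad output class

# SOM: project the experiments onto a 2-D grid to inspect variable interactions.
som = MiniSom(5, 5, 3, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(X, 500)
print("winning node of first experiment:", som.winner(X[0]))

# Decision tree: extract explicit control rules for the process output.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["current", "pulse_on", "wire_speed"]))
```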
Findings
Based on the analysis of the results obtained, the paper extracted interrelationships between variables as well as a set of control rules for predicting the process outputs. The results of new experiments based on these rules clearly demonstrate that the predictions are valid, interesting and useful.
Originality/value
To the best of the authors' knowledge, this is the first time SOM and decision trees have been successfully applied to the WEDM process.
Mohamed Zaki, Babis Theodoulidis and David Díaz Solís
Abstract
Purpose
Although the financial markets are regulated by robust systems and rules that control their efficiency and try to protect investors from various manipulation schemes, markets still suffer from frequent attempts to mislead or misinform investors in order to generate illegal profits. The impetus to effectively and systematically address such schemes presents many challenges to academia, industry and the relevant authorities. This paper aims to address these challenges.
Design/methodology/approach
The paper describes a case study on fraud detection using data mining techniques that help analysts to identify possible instances of touting based on spam e‐mails. Different data mining techniques such as decision trees, neural networks and linear regression are shown to offer great potential for this emerging domain. The application of these techniques is demonstrated using data from the Pink Sheets market.
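As a rough illustration of the technique families named above, the sketch below fits a decision tree and a neural network to classify suspected manipulation, and a linear regression to relate tout volume to a price indicator. The features and labels are fabricated stand-ins for the Pink Sheets data.

```python
# A sketch of the three technique families on fabricated tout data: a decision
# tree and a neural network classify suspected manipulation, and a linear
# regression relates cumulative tout volume to a price indicator. Features and
# labels are invented stand-ins for the Pink Sheets data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 3))  # cumulative tout volume, price change, trade volume
y = (X[:, 0] + 0.8 * X[:, 1] > 0.5).astype(int)  # 1 = suspected manipulation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (DecisionTreeClassifier(), MLPClassifier(max_iter=1000)):
    print(type(model).__name__, model.fit(X_tr, y_tr).score(X_te, y_te))

reg = LinearRegression().fit(X[:, :1], X[:, 1])  # price change vs. tout volume
print("regression slope:", reg.coef_[0])
```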
Findings
Results strongly suggest the cumulative effect of “stock touting” spam e‐mails is key to understanding the patterns of manipulations associated with touting e‐mail campaigns, and that data mining techniques can be used to facilitate fraud investigations of spam e‐mails.
Practical implications
The approach proposed and the paper's findings could be used retroactively to help the relevant authorities and organisations identify abnormal behaviours in the stock market. It could also be used proactively to warn analysts and stockbrokers of possible cases of market abuse.
Originality/value
This research studies the relationships between the cumulative volume of spam touts and a number of financial indicators using different supervised classification techniques. The paper aims to contribute to a better understanding of the market manipulation problem and provide part of a unified framework for the design and analysis of market manipulation systems.
Abstract
Purpose
The purpose of this research is to improve the efficiency of traditional scheduling methods and to explore a more effective approach to solving the scheduling problem in supply networks with genetic algorithms (GAs).
Design/methodology/approach
This paper develops two methods with GAs for detailed production scheduling in supply networks. The first method applies a GA to job shop scheduling in any node of the supply network. The second method is developed for collective scheduling in an industrial cluster using a modified GA (MGA). The objective is to minimize the total makespan. The proposed methods were verified through a series of experiments.
Findings
The suggested GAs can improve detailed production scheduling in supply networks. The results of the experiments show that the proposed MGA is a very efficient and effective algorithm, creating the manufacturing schedule for each factory and the transport operation schedule very quickly.
Research limitations/implications
For future research, an expert system will be adopted as an intelligent interface between an MRPII or ERP system and the MGA.
Originality/value
From a mathematical point of view, a supply network is a digraph that has loops, and the proposed GAs therefore take loops in supply networks into account. The MGA enables jobs to be divided between factories. The algorithm is based on operation codes, where each chromosome is a set of four-position genes. This encoding method covers both manufacturing operations and long transport operations.
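The sketch below illustrates one way such an operation-code chromosome could be decoded into a makespan, with each gene a four-position tuple (job, operation, factory, transport flag). The durations, the instance and the decoding rules are invented assumptions; a full MGA would evolve the gene order via crossover and mutation.

```python
# A sketch of decoding one such chromosome into a makespan. Each gene is a
# four-position tuple (job, operation, factory, is_transport); durations and
# the instance are invented, and a full MGA would evolve the gene order with
# crossover and mutation while repairing infeasible orderings.
DUR_MFG = {0: 3, 1: 4}  # manufacturing time per factory (assumed)
DUR_TRANSPORT = 2       # long transport operation time (assumed)

def makespan(chromosome):
    job_ready = {}  # time each job's previous operation finishes
    fac_ready = {}  # time each factory becomes free
    end = 0
    for job, op, fac, transport in chromosome:
        dur = DUR_TRANSPORT if transport else DUR_MFG[fac]
        start = max(job_ready.get(job, 0),
                    0 if transport else fac_ready.get(fac, 0))
        finish = start + dur
        job_ready[job] = finish
        if not transport:
            fac_ready[fac] = finish
        end = max(end, finish)
    return end

# Job 0: made in factory 0, transported, finished in factory 1; job 1 mirrors it.
chrom = [(0, 0, 0, False), (0, 1, None, True), (0, 2, 1, False),
         (1, 0, 1, False), (1, 1, None, True), (1, 2, 0, False)]
print("makespan:", makespan(chrom))
```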
Abstract
Purpose
Markov chains and queuing theory are widely used analysis, optimisation and decision-making tools in many areas of science and engineering. Real-life systems can be modelled and analysed for their steady-state and time-dependent behaviour, and performance measures such as the blocking probability of a system can be calculated by computing the probability distributions. A major hurdle in applying these tools to large, complex problems is the curse of dimensionality: models of even trivial real-life systems comprise millions of states and hence require large computational resources. This paper describes the various computational dimensions in Markov chain modelling and briefly reports on the author's experiences and the techniques developed to combat the curse of dimensionality.
Design/methodology/approach
The paper formulates the Markovian modelling problem mathematically and shows, using case studies, that it poses both storage and computational time challenges when applied to the analysis of large complex systems.
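For concreteness, the small sketch below performs the steady-state computation mentioned above for a toy three-state chain, solving pi P = pi subject to sum(pi) = 1 and reading a blocking probability off the "system full" state; the chain itself is illustrative, and the paper's techniques target chains many orders of magnitude larger.

```python
# A small numerical sketch of the steady-state computation for a toy 3-state
# chain: solve pi P = pi with sum(pi) = 1, then read the blocking probability
# off the "system full" state. The chain itself is illustrative.
import numpy as np

P = np.array([[0.7, 0.3, 0.0],   # transition matrix of a 3-state birth-death chain
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# Replace one balance equation with the normalisation constraint.
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print("steady-state distribution:", pi)
print("blocking probability (last state):", pi[-1])
```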
Findings
The paper demonstrates, using intelligent storage techniques together with concurrent and parallel computing methods, that it is possible to solve very large systems on single or multiple computers.
Originality/value
The paper develops an interesting case study to motivate the reader, and computes and visualises data for steady-state analysis of the system's performance across a set of seven scenarios. The methods reviewed in this paper allow efficient solution of very large Markov chains; contemporary methods cannot solve Markov models of the sizes considered here on similar computing machines.
Khairy A.H. Kobbacy, Hexin Wang and Wenbin Wang
Abstract
Purpose
Many supply contracts are employed in practice to improve the performance of supply chains. But there is a lack of research that can offer guidance to practitioners in choosing the best supply contract among a group of popular contracts. This paper aims to fill this gap by developing an intelligent rule‐based supply contract design system for choosing the best contract and its parameters from a supplier's point of view.
Design/methodology/approach
The approach used in this paper is based on the comparison of several supply contracts that are encountered in supply chain practice. The paper aims at identifying the conditions under which one supply contract outperforms another from the supplier's perspective. To facilitate the implementation of the decision‐making rules that are developed in this research, an intelligent decision support system is developed.
Findings
Six popular contracts are analysed: returns policy (RP), quantity discount (QD), target rebate (TR), backup agreement (BA), quantity flexibility (QF) and quantity commitment (QC). The main findings are: QD contracts generate larger expected profits for the supplier than TR contracts when demand is exogenous; an RP contract is better than a QD contract when the wholesale profit margin is sufficiently large; and the optimal QC contract always provides a higher expected service level than BA and QF contracts.
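A hedged sketch of how such decision rules might sit inside a rule-based selector follows; the thresholds and the rule ordering are illustrative simplifications, not the paper's calibrated conditions.

```python
# A sketch of encoding the findings above as a rule-based contract selector;
# thresholds and rule ordering are illustrative simplifications, not the
# paper's calibrated conditions.
def recommend_contract(demand_exogenous: bool, wholesale_margin: float,
                       service_level_critical: bool) -> str:
    if service_level_critical:
        return "QC"  # optimal QC yields a higher expected service level than BA/QF
    if wholesale_margin > 0.4:  # "sufficiently large" margin: assumed threshold
        return "RP"  # returns policy outperforms quantity discount
    if demand_exogenous:
        return "QD"  # quantity discount beats target rebate under exogenous demand
    return "TR"

print(recommend_contract(demand_exogenous=True, wholesale_margin=0.2,
                         service_level_critical=False))  # -> QD
```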
Originality/value
The paper presents an approach for developing an intelligent supply contract design system that can offer guidance to practitioners in choosing the best supply contract for a particular supplier.