Joerg Leukel and Vijayan Sugumaran
Abstract
Purpose
Process models specific to the supply chain domain are an important tool for the analysis of interorganizational interfaces and requirements of information technology (IT) systems supporting supply chain decision-making. The purpose of this study is to examine the effectiveness of supply chain process models for novice analysts in conveying domain semantics compared to alternative textual representations.
Design/methodology/approach
A laboratory experiment was conducted with graduate students serving as proxies for novice analysts. Participants were randomly assigned to either the diagram group, which worked with "thread diagrams" created with the modeling grammar of the Supply Chain Operations Reference (SCOR) model, or the text group, which worked with semantically equivalent textual representations. Domain understanding was measured through cognitively demanding information acquisition tasks for two different domains.
Findings
Diagram users were more accurate than text users in identifying product-related information and in organizing this information in a graph. The authors found considerable improvements in domain understanding, and using the diagrams was perceived to be as easy as using the texts.
Originality/value
The study's findings are unique in providing empirical evidence for supply chain process models being an effective representation for novice analysts. Such evidence is lacking in prior research because of the evaluation methods used, which are limited to scenario, case study and informed argument. This study adds the diagram user's perspective to that literature and provides a rigorous empirical evaluation by contrasting diagrammatic and textual representations.
Joerg Leukel, Julian González and Martin Riekert
Abstract
Purpose
Machine learning (ML) models are increasingly being used in industrial maintenance to predict system failures. However, less is known about how the time windows for reading data and making predictions affect performance. Therefore, the purpose of this research is to assess the impact of different sliding windows on prediction performance.
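The reading/prediction window setup described here can be sketched as follows. This is a minimal illustrative implementation, not the authors' actual pipeline: the function name, the univariate input, and the binary labeling rule are assumptions made for clarity.

```python
import numpy as np

def make_windows(series, failures, reading_w, prediction_w):
    """Build (features, label) pairs from a sensor series (hypothetical sketch).

    For each time step t, the reading window series[t-reading_w:t]
    supplies the features, and the label is 1 if any failure occurs
    within the prediction window [t, t+prediction_w).
    """
    X, y = [], []
    for t in range(reading_w, len(series) - prediction_w + 1):
        X.append(series[t - reading_w:t])          # past observations
        y.append(int(any(failures[t:t + prediction_w])))  # upcoming failure?
    return np.array(X), np.array(y)
```

Varying `reading_w` and `prediction_w` then yields the factorial design studied here: each combination produces a different training set, which can be fed to any classifier (e.g. random forest, SVM, logistic regression).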
Design/methodology/approach
The authors conducted a factorial experiment using high dimensional machine data covering two years of operation, taken from a real industrial case for the production of high-precision milled and turned parts. The impacts of different reading and prediction windows were tested for three ML algorithms (random forest, support vector machines and logistic regression) and four metrics (accuracy, precision, recall and F-score).
Findings
The results reveal (1) the critical role of the prediction window contingent upon the application domain, (2) a non-monotonic relationship between the reading window and performance, and (3) how sliding window selection can systematically be used to improve different facets of performance.
Originality/value
The study's findings advance the knowledge of ML-based failure prediction by highlighting how the systematic variation of two important yet understudied factors contributes to the development of more useful prediction models.