The present study aims to determine the infoecology of scientific articles in the field of smart manufacturing (SM). The researchers designed a general framework for the…
Abstract
Purpose
The present study aims to determine the infoecology of scientific articles in the field of smart manufacturing (SM). The researchers designed a general framework for investigating infoecology.
Design/methodology/approach
Qualitative and quantitative data collection methods were applied to gather data from Scopus and from experts. Bibliometric techniques, clustering and graph mining were applied to analyse the data using Scopus data analysis tools, VOSviewer and Excel.
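As a rough illustration of the keyword co-occurrence counting that underlies this kind of VOSviewer-based clustering (not the authors' actual pipeline), a minimal Python sketch over invented Scopus-style records might look as follows; the record format and keywords are assumptions.

```python
# Hedged sketch: build a keyword co-occurrence matrix from Scopus-style
# records, the kind of structure VOSviewer then clusters into "infocenoses".
# The record format and sample keywords below are illustrative assumptions.
from itertools import combinations
from collections import Counter

records = [
    {"keywords": ["IoT", "Big Data", "Cyber-Physical System"]},
    {"keywords": ["Embedded Systems", "IoT", "Flow Control"]},
    {"keywords": ["Big Data", "Cyber-Physical System", "Flow Control"]},
]

cooccurrence = Counter()
for record in records:
    # Count each unordered keyword pair once per document.
    for a, b in combinations(sorted(set(record["keywords"])), 2):
        cooccurrence[(a, b)] += 1

for (a, b), weight in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {weight}")
```

Pairs with the highest weights form the densest links in the resulting co-occurrence network.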
Findings
It is concluded that researchers paid more attention to "Flow Control", "Embedded Systems", "IoT", "Big Data" and "Cyber-Physical Systems" than to other infocenoses. Finally, a thematic model based on the infoecology of SM in Scopus is presented for future studies. As future work, designing a "research-related" metamodel for SM would benefit researchers by highlighting the main future research directions.
Practical implications
The results of the present study can be applied to the following issues:
(1) making decisions based on research and scientific evidence and conducting scientific research on real needs and issues in the field of SM;
(2) holding workshops on infoecology to determine research priorities with the participation of experts from related industries;
(3) determining the most important areas of research in order to improve the index of applied research;
(4) assisting in prioritizing research in the field of SM so as to select a set of research and technological activities and allocate resources to them effectively;
(5) helping to strengthen the relationship between research and technological activities and the economic and long-term goals of industry and society;
(6) helping to prioritize SM issues in research and technology in order to target the allocation of financial and human capital, solve the main challenges and take advantage of opportunities;
(7) helping to avoid fragmentation of work and providing educational infrastructure based on prioritized research needs; and
(8) supporting start-ups and the activities of knowledge-based companies based on research priorities in the field of SM.
Originality/value
The analysis results demonstrate that the information ecosystem of SM studies has developed dynamically over time. The continuous flow of scientific studies in this field brings continuous changes to its infoecology.
Noorullah Renigunta Mohammed and Moulana Mohammed
In eHealth text mining domains, cosine-based visual methods (VM) assess clusters more accurately than Euclidean-based ones and are therefore recommended for tweet data…
Abstract
Purpose
In eHealth text mining domains, cosine-based visual methods (VM) assess clusters more accurately than Euclidean-based ones and are therefore recommended for assessing clusters in tweet data models. However, such VM determine clusters from a single viewpoint, or none, which is less informative. The purpose of this study is to use multi-viewpoints (MVP) to provide a more informative cluster assessment of health-care tweet documents and to demonstrate visual analysis of cluster tendency.
Design/methodology/approach
In this paper, the authors proposed MVP-based VM that use traditional topic models together with visual techniques to find cluster tendency and partition the data for cluster validity, in order to propose health-care recommendations based on tweets. The authors demonstrated the effectiveness of the proposed methods on different real-time Twitter health-care data sets in an experimental study. They also compared the proposed models with the existing visual assessment of cluster tendency (VAT) and cVAT models using cluster validity indices and computational complexity; the examples suggest that MVP VM are more informative.
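For readers unfamiliar with VAT-style displays, the following Python sketch reorders a cosine dissimilarity matrix the way the classic single-viewpoint VAT algorithm does; it is a generic stand-in, not the authors' MVP-based method, and the toy document vectors and helper names (cosine_dissimilarity, vat_order) are illustrative assumptions.

```python
# Hedged sketch of the visual assessment of cluster tendency (VAT) idea
# with cosine dissimilarity, as a single-viewpoint stand-in for the
# MVP-based variants proposed in the paper; the vectors are invented.
import numpy as np

def cosine_dissimilarity(X):
    """Pairwise 1 - cosine similarity for the row vectors in X."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - (X @ X.T) / (norms * norms.T)

def vat_order(D):
    """Return a VAT-style reordering of a square dissimilarity matrix D."""
    n = D.shape[0]
    # Start from one endpoint of the most dissimilar pair.
    i, _ = np.unravel_index(np.argmax(D), D.shape)
    order, remaining = [i], set(range(n)) - {i}
    while remaining:
        # Attach the object closest to the already-ordered set.
        j = min(remaining, key=lambda r: min(D[r, o] for o in order))
        order.append(j)
        remaining.remove(j)
    return order

X = np.array([[1.0, 0.1, 0.0], [0.9, 0.2, 0.1],   # one topical group
              [0.0, 1.0, 0.9], [0.1, 0.8, 1.0]])  # another group
D = cosine_dissimilarity(X)
order = vat_order(D)
print(D[np.ix_(order, order)].round(2))  # block structure along the diagonal hints at clusters
```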
Findings
The authors proposed MVP-based VM that use traditional topic models together with visual techniques to find cluster tendency and partition the data for cluster validity, and on this basis proposed health-care recommendations based on tweets.
Originality/value
In this paper, the authors proposed a multi-viewpoints distance metric for topic model cluster tendency for the first time, together with a visual representation based on VAT images of hybrid topic models, to find cluster tendency and partition the data for cluster validity and thereby propose health-care recommendations based on tweets.
Diego Rojas, Juan Estrada, Kim P. Huynh and David T. Jacho-Chávez
The efficient distribution of bank notes is a first-order responsibility of central banks. The authors study the distribution patterns of bank notes with an administrative dataset…
Abstract
The efficient distribution of bank notes is a first-order responsibility of central banks. The authors study the distribution patterns of bank notes with an administrative dataset from the Bank of Canada's Currency Inventory Management Strategy. The single note inspection procedure generates a sample of 900 million bank notes in which the authors can trace how long a bank note stays in the market. The authors define the duration of the bank note circulation cycle as beginning on the date the bank note is first shipped by the Bank of Canada to a financial institution and ending when it is returned to the Bank of Canada. In addition, the authors provide information regarding where the bank note is shipped and later received, as well as the physical fitness of the bank note upon return to the Bank of Canada's distribution centers. K-prototypes clustering classifies bank notes into types. A hazard model estimates the duration of bank note circulation cycles based on their clusters and characteristics. An adaptive elastic net provides an algorithm for dimension reduction. It is found that while the distribution of the duration is affected by fitness measures, their effects are negligible when compared with the influence exerted by the clusters related to bank note denominations.
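A hedged sketch of this kind of pipeline is given below: k-prototypes clustering of mixed note attributes followed by a Cox proportional hazards model (one common choice of hazard model) for circulation duration. The column names, toy values and use of the third-party kmodes and lifelines packages are illustrative assumptions, not the Bank of Canada's actual code.

```python
# Hedged sketch: cluster mixed numeric/categorical note attributes with
# k-prototypes, then fit a Cox proportional hazards model on circulation
# duration. Requires the third-party kmodes and lifelines packages; all
# data values and column names are invented for illustration.
import numpy as np
import pandas as pd
from kmodes.kprototypes import KPrototypes   # mixed-type clustering
from lifelines import CoxPHFitter            # semi-parametric hazard model

# Toy bank note records: [fitness score (numeric), denomination, region].
notes = np.array([
    [0.91, "$20", "Ontario"], [0.45, "$20", "Quebec"],
    [0.88, "$100", "Ontario"], [0.40, "$5", "Prairies"],
    [0.75, "$50", "Quebec"], [0.30, "$5", "Prairies"],
    [0.95, "$100", "Ontario"], [0.55, "$20", "Quebec"],
], dtype=object)
notes[:, 0] = notes[:, 0].astype(float)

kproto = KPrototypes(n_clusters=2, random_state=0)
clusters = kproto.fit_predict(notes, categorical=[1, 2])

# Duration model: days in circulation, with return to the Bank as the event.
df = pd.DataFrame({
    "duration_days": [410, 250, 600, 190, 520, 170, 640, 300],
    "returned": [1, 1, 1, 1, 1, 1, 0, 1],   # 0 = still circulating (censored)
    "fitness": notes[:, 0].astype(float),
    "cluster": clusters,
})
cph = CoxPHFitter(penalizer=0.1)  # small ridge penalty keeps the toy fit stable
cph.fit(df, duration_col="duration_days", event_col="returned")
cph.print_summary()
```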
Lorenzo Ardito, Veronica Scuotto, Manlio Del Giudice and Antonio Messeni Petruzzelli
The purpose of this paper is to scrutinize and classify the literature linking Big Data analytics and management phenomena.
Abstract
Purpose
The purpose of this paper is to scrutinize and classify the literature linking Big Data analytics and management phenomena.
Design/methodology/approach
An objective bibliometric analysis is conducted, supported by subjective assessments based on the studies focused on the intertwining of Big Data analytics and management fields. Specifically, deeper descriptive statistics and document co-citation analysis are provided.
Findings
From the document co-citation analysis and its evaluation, four clusters depicting literature linking Big Data analytics and management phenomena are revealed: theoretical development of Big Data analytics; management transition to Big Data analytics; Big Data analytics and firm resources, capabilities and performance; and Big Data analytics for supply chain management.
Originality/value
To the best of the authors’ knowledge, this is one of the first attempts to comprehend the research streams which, over time, have paved the way to the intersection between Big Data analytics and management fields.
Francesco Ciampi, Giacomo Marzi, Stefano Demi and Monica Faraoni
Designing knowledge management (KM) systems capable of transforming big data into information characterised by strategic value is a major challenge faced nowadays by firms in…
Abstract
Purpose
Designing knowledge management (KM) systems capable of transforming big data into information characterised by strategic value is a major challenge faced nowadays by firms in almost all industries. However, in the managerial field, big data is now mainly used to support operational activities while its strategic potential is still largely unexploited. Based on these considerations, this study proposes an overview of the literature regarding the relationship between big data and business strategy.
Design/methodology/approach
A bibliographic coupling method is applied over a dataset of 128 peer-reviewed articles published from 2013 (the first year in which articles regarding the big data-business strategy relationship were published) to 2019. Thereafter, a systematic literature review is presented on 116 papers, which were found to be interconnected based on the VOSviewer algorithm.
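To make the coupling criterion concrete, here is a minimal Python sketch of bibliographic coupling, in which two articles are linked in proportion to the references they share; the article identifiers and reference lists are invented, and the study's actual analysis was performed with VOSviewer.

```python
# Hedged illustration of bibliographic coupling: the coupling strength of
# two articles is the number of references they have in common. All
# identifiers and reference lists below are made up for illustration.
from itertools import combinations

references = {
    "article_A": {"Chen2012", "Wamba2015", "McAfee2012"},
    "article_B": {"Wamba2015", "McAfee2012", "Davenport2013"},
    "article_C": {"Chen2012", "LaValle2011"},
}

coupling = {}
for (a, refs_a), (b, refs_b) in combinations(references.items(), 2):
    shared = refs_a & refs_b
    if shared:                    # only coupled pairs form network links
        coupling[(a, b)] = len(shared)

for pair, strength in sorted(coupling.items(), key=lambda kv: -kv[1]):
    print(pair, strength)
```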
Findings
This study discovers the existence of four thematic clusters. Three of the clusters relate to the following topics: big data and supply chain strategy; big data, personalisation and co-creation strategies; and big data, strategic planning and strategic value creation. The fourth cluster concerns the relationship between big data and KM and represents a ‘bridge’ between the other three clusters.
Research limitations/implications
Based on the bibliometric analysis and the systematic literature review, this study identifies relevant understudied topics and research gaps, which are suggested as future research directions.
Originality/value
This is the first study to systematise and discuss the literature concerning the relationship between big data and firm strategy.
Riccardo Rialti, Giacomo Marzi, Cristiano Ciappei and Donatella Busso
Recently, several manuscripts about the effects of big data on organizations used dynamic capabilities as their main theoretical approach. However, these manuscripts still lack…
Abstract
Purpose
Recently, several manuscripts about the effects of big data on organizations used dynamic capabilities as their main theoretical approach. However, these manuscripts still lack systematization. Consequently, the purpose of this paper is to systematize the literature on big data and dynamic capabilities.
Design/methodology/approach
A bibliometric analysis was performed on 170 manuscripts extracted from the Clarivate Analytics Web of Science Core Collection database. The bibliometric analysis was integrated with a literature review.
Findings
The bibliometric analysis revealed four clusters of papers on big data and dynamic capabilities: big data and supply chain management; knowledge management; decision making; and business process management and big data analytics. The systematic literature review helped to clarify each cluster's content.
Originality/value
To the authors’ best knowledge, minimal attention has been paid to systematizing the literature on big data and dynamic capabilities.
Ajree Ducol Malawani, Achmad Nurmandi, Eko Priyo Purnomo and Taufiqur Rahman
This paper aims to examine tweet posts regarding Typhoon Washi to argue for the usefulness of social media and big data as aids to post-disaster management. Through topic…
Abstract
Purpose
This paper aims to examine tweet posts regarding Typhoon Washi to argue for the usefulness of social media and big data as aids to post-disaster management. Through topic modelling and content analysis, this study examines the priorities of the victims expressed on Twitter and how those priorities changed over a year.
Design/methodology/approach
Social media, particularly Twitter, was the source of the data. Using big data technology, the gathered data were processed and analysed according to the objectives of the study. Topic modelling was used to cluster words into topics, and the clustered words were then used for content analysis to determine the needs of the victims. Word frequency counts were also used to determine which words were used repeatedly during the study period. To validate the data gathered online, government documents were requested and the concerned government agencies were interviewed.
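As a rough illustration of this topic-modelling step (not the study's actual corpus, preprocessing or model), the sketch below fits a small scikit-learn LDA model to a few invented Sendong/Washi-style tweets and prints the clustered words per topic.

```python
# Hedged, minimal illustration of topic modelling on tweets with LDA.
# The tweets are invented; a real study would use a much larger corpus
# and language-specific preprocessing.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "families need relief goods at the evacuation center #SendongPH",
    "still no housing for flood victims after typhoon washi",
    "water and food distribution ongoing in evacuation centers",
    "government promises housing projects for sendong survivors",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")  # e.g. relief-goods words vs housing words
```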
Findings
The findings of this study indicate that housing and relief goods were the victims' top priorities. Victims sought relief goods especially while in evacuation centres. In addition, the lack of a legal basis hinders government officials from integrating social media information into policymaking.
Research limitations/implications
This study reports only Twitter posts containing at least one of the keywords Sendong, SendongPH, Washi or TyphoonWashi. The keywords were determined based on the words that trended after Typhoon Washi struck.
Practical implications
For social media and big data to be adopted effectively, supporting and facilitating conditions are necessary. Structural, technical and financial support, as well as a legal framework, should be in place, and a positive attitude towards these tools should be maintained and sustained.
Originality/value
Although many studies have been conducted on the usefulness of social media in times of disaster, many of these focused on the use of social media as a medium that can efficiently spread information, and little has been done on how the government can use both social media and big data in collecting and analysing the needs of the victims. This study fills those gaps in the social big data literature.
This work can be used as a building block in other settings such as GPU, Map-Reduce or Spark environments. DDPML can also be deployed on other distributed systems such as P2P…
Abstract
Purpose
This work can be used as a building block in other settings such as GPU, Map-Reduce or Spark environments. DDPML can also be deployed on other distributed systems such as P2P networks, clusters, cloud computing platforms or other technologies.
Design/methodology/approach
In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environments and anticipate associated phenomena, as the data turn into knowledge that can later be used for prediction. This knowledge thus becomes a great asset in companies' hands, which is precisely the objective of data mining. With data and knowledge now produced in larger volumes and at a faster pace, the field has moved towards Big Data mining. The proposed work therefore mainly aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The problem raised in this work is how machine learning algorithms can be made to work in a distributed and parallel way at the same time without losing classification accuracy.

To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML). The work is divided into two parts. In the first, the authors propose a distributed architecture controlled by a Map-Reduce algorithm, which in turn depends on a random sampling technique. This distributed architecture is designed to handle big data processing in a manner that is coherent and efficient with the sampling strategy proposed in this work; it also allows the authors to verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB) and the partial learning bases for the first level (PLBL1) and the second level (PLBL2). The experimental results show the efficiency of the proposed solution without significant loss in classification results. In practical terms, the DDPML system is generally dedicated to big data mining processing and works effectively in distributed systems with a simple structure, such as client-server networks.
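As a generic sketch of the kind of stratified sampling plus parallel training described above (not the authors' DDPML implementation), one might combine a two-level stratified split with per-partition training and a simple voting reduce step, as in the following Python example; the data, model choice and split sizes are all assumptions.

```python
# Generic sketch, not the authors' DDPML system: build a representative
# learning base by stratified sampling, train classifiers on partitions in
# parallel (a map step), then combine predictions by majority vote (reduce).
import numpy as np
from multiprocessing import Pool
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def fit_partition(args):
    """Train one classifier on one data partition (one 'worker')."""
    X_part, y_part = args
    return DecisionTreeClassifier(random_state=0).fit(X_part, y_part)

if __name__ == "__main__":
    X, y = make_classification(n_samples=2000, n_classes=2, random_state=0)

    # Level-1 stratified sample: keep class proportions in a smaller base.
    X_rlb, _, y_rlb, _ = train_test_split(
        X, y, train_size=0.5, stratify=y, random_state=0)
    # Level-2: split the representative base into partitions for the workers.
    parts = list(zip(np.array_split(X_rlb, 4), np.array_split(y_rlb, 4)))

    with Pool(processes=4) as pool:
        models = pool.map(fit_partition, parts)

    # Majority vote across partition models, mimicking a reduce step.
    votes = np.stack([m.predict(X) for m in models])
    y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
    print("accuracy on full data:", (y_pred == y).mean())
```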
Findings
The authors obtained very satisfactory classification results.
Originality/value
The DDPML system is specially designed to handle big data mining classification smoothly.
Sunday Adewale Olaleye, Emmanuel Mogaji, Friday Joseph Agbo, Dandison Ukpabi and Akwasi Gyamerah Adusei
The data economy mainly relies on the surveillance capitalism business model, enabling companies to monetize their data. The surveillance allows for transforming private human…
Abstract
Purpose
The data economy mainly relies on the surveillance capitalism business model, enabling companies to monetize their data. The surveillance allows for transforming private human experiences into behavioral data that can be harnessed in the marketing sphere. This study focuses on investigating the domain of the data economy through the methodological lens of a quantitative bibliometric analysis of the published literature.
Design/methodology/approach
The bibliometric analysis seeks to unravel the trends and timelines of the emergence of the data economy, its conceptualization, scientific progression and thematic synergy that could predict the future of the field. A total of 591 records published between 2008 and June 2021 were analyzed with the web-based Biblioshiny app and VOSviewer version 1.6.16, using data from Web of Science and Scopus.
Findings
This study combined findable, accessible, interoperable and reusable (FAIR) data with the data economy and contributed to the literature on big data, information discovery and delivery by shedding light on the conceptual, intellectual and social structure of the data economy and by demonstrating the relevance of data as a key strategic asset for companies and academia now and in the future.
Research limitations/implications
Findings from this study provide a steppingstone for researchers who may engage in further empirical and longitudinal studies by employing, for example, a quantitative and systematic review approach. In addition, future research could expand the scope of this study beyond FAIR data and data economy to examine aspects such as theories and show a plausible explanation of several phenomena in the emerging field.
Practical implications
The researchers can use the results of this study as a steppingstone for further empirical and longitudinal studies.
Originality/value
This study confirmed the relevance of data to society and revealed some gaps to be addressed in future work.
Pulkit Tiwari, P. Vigneswara Ilavarasan and Sushil Punia
The purpose of this paper is to provide a systematic literature review on the technological aspects of smart cities and to give insights about current trends, sources of research…
Abstract
Purpose
The purpose of this paper is to provide a systematic literature review on the technological aspects of smart cities and to give insights about current trends, sources of research, contributing authors and countries. Understanding technical concepts such as information technology, big data analytics, the Internet of Things and blockchain is required to implement smart city models successfully.
Design/methodology/approach
The data were collected from the Scopus database, and analysis techniques such as bibliometric analysis, network analysis and content analysis were used to obtain research trends, publication growth, and the top contributing authors and nations in the domain of smart cities. These analytical techniques also identified various fields within the literature on smart cities and supported the design of a conceptual framework for Industry 4.0 adoption in a smart city.
Findings
The bibliometric analysis shows that research publications have increased significantly over the last couple of years. It is found that developing countries such as China are leading research on smart cities. The network analytics and article classification identified six domains within the literature on smart cities. A conceptual framework for the smart city is proposed for the successful implementation of Industry 4.0 technologies.
Originality/value
This paper explores the role of Industry 4.0 technologies in smart cities. Bibliometric data on publications from 2013 to 2018 were investigated using advanced analytical techniques. The paper reviews key technical concepts for the successful execution of a smart city model and, through a conceptual framework, outlines the various technical considerations required for implementing it.