Search results

1 – 50 of over 308,000
Article
Publication date: 2 October 2018

Alexander M. Soley, Joshua E. Siegel, Dajiang Suo and Sanjay E. Sarma


Abstract

Purpose

The purpose of this paper is to develop a model to estimate the value of information generated by and stored within vehicles to help people, businesses and researchers.

Design/methodology/approach

The authors provide a taxonomy for data within connected vehicles, as well as for actors that value such data. The authors create a monetary value model for different data generation scenarios from the perspective of multiple actors.

Findings

Actors value data differently depending on whether the information is kept within the vehicle or on peripheral devices. The model shows the US connected vehicle data market is worth between US$11.6bn and US$92.6bn.
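The bottom-up logic of such a valuation model can be sketched as follows. Every figure below is a hypothetical placeholder chosen only to show the mechanics; the actors, scenarios and inputs of the paper's actual model differ.

```python
# Illustrative bottom-up sketch of a per-actor data valuation model.
# All figures are hypothetical placeholders, not the paper's inputs.

FLEET_SIZE = 50_000_000  # hypothetical count of US connected vehicles

# hypothetical (low, high) annual value per vehicle, in USD, by actor type
VALUE_PER_VEHICLE = {
    "driver_services": (50, 400),
    "insurers": (80, 600),
    "advertisers": (40, 350),
    "researchers": (60, 500),
}

def market_range(fleet, per_vehicle):
    """Sum each actor's per-vehicle value range over the whole fleet."""
    low = sum(lo for lo, _ in per_vehicle.values()) * fleet
    high = sum(hi for _, hi in per_vehicle.values()) * fleet
    return low, high

low, high = market_range(FLEET_SIZE, VALUE_PER_VEHICLE)
print(f"US${low / 1e9:.1f}bn to US${high / 1e9:.1f}bn")
```

Summing per-actor ranges over the fleet yields a low/high market bracket of the same shape as the US$11.6bn–US$92.6bn estimate reported above.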

Research limitations/implications

This model estimates the value of vehicle data, but a lack of academic references for individual inputs makes finding reliable inputs difficult. The model performance is limited by the accuracy of the authors’ assumptions.

Practical implications

The proposed model demonstrates that connected vehicle data has higher value than people and companies are aware of, and therefore we must secure these data and establish comprehensive rules pertaining to data ownership and stewardship.

Social implications

Estimating the value of vehicle data will help companies understand the importance of responsible data stewardship, as well as drive individuals to become more responsible digital citizens.

Originality/value

This is the first paper to propose a model for computing the monetary value of connected vehicle data, as well as the first paper to provide an estimate of this value.

Details

Digital Policy, Regulation and Governance, vol. 20 no. 6
Type: Research Article
ISSN: 2398-5038

Article
Publication date: 22 August 2018

Lorna Uden and Pasquale Del Vecchio


Abstract

Purpose

This paper aims to define a conceptual framework for transforming Big Data into organizational value by focussing on the perspectives of service science and activity theory. In coherence with the agenda on evolutionary research on intellectual capital (IC), the study also provides momentum for researchers and scholars to explore emerging trends and implications of Big Data for IC management.

Design/methodology/approach

The paper adopts a qualitative and integrated research method based on a constructive review of existing literature related to IC management, Big Data, service science and activity theory to identify features and processes of a conceptual framework emerging at the intersection of previously identified research topics.

Findings

The proposed framework harnesses the power of Big Data, collectively created by the engagement of multiple stakeholders based on the concepts of service ecosystems, by using activity theory. The transformation of Big Data for IC management addresses the process of value creation based on a set of critical dimensions useful to identify goals, main actors and stakeholders, processes and motivations.

Research limitations/implications

The paper indicates how organizational values can be created from Big Data through the co-creation of value in service ecosystems. Activity theory is used as theoretical lens to support IC ecosystem development. This research is exploratory; the framework offers opportunities for refinement and can be used to spearhead directions for future research.

Practical implications

The paper proposes a framework for transforming Big Data into organizational values for IC management in the context of entrepreneurial universities as pivotal contexts of observation that can be replicated in different fields. The framework provides guidelines that can be used to help organizations intending to embark on the emerging paradigm of Big Data for IC management for their competitive advantages.

Originality/value

The paper’s originality is in bringing together research from Big Data, value co-creation from service ecosystems and activity theory to address the complex issues involved in IC management. A further element of originality offered involves integrating such multidisciplinary perspectives as a lens for shaping the complex process of value creation from Big Data in relationship to IC management. The concept of how IC ecosystems can be designed is also introduced.

Article
Publication date: 2 October 2017

Sarah Cheah and Shenghui Wang


Abstract

Purpose

This study aims to construct mechanisms of big data-driven business model innovation from the market, strategic and economic perspectives and core logic of business model innovation.

Design/methodology/approach

The authors applied deductive reasoning and case analysis method on manufacturing firms in China to validate the mechanisms.

Findings

The authors have developed an integrated framework to deduce the elements of big data-driven business model innovation. The framework comprises three elements: perspectives, business model processes and big data-driven business model innovations. Applying the framework to three Chinese companies makes it evident that business model innovation based on big data is a progressive and dynamic process.

Research limitations/implications

The case sample is relatively small, which is a typical trade-off in qualitative research.

Practical implications

A robust infrastructure that seamlessly integrates internet of things, front-end customer systems and back-end production systems is pivotal for companies. The management has to ensure its organization structure, climate and human resources are well prepared for the transformation.

Social implications

When provided with a convenient crowdsourcing platform to provide feedback and witness their suggestions being implemented, users are more likely to share insights about their use experience.

Originality/value

Extant studies of big data and business model innovation remain disparate. By adding a new dimension of intellectual and economic resource to the resource-based view, this paper posits an important link between big data and business model innovation. In addition, this study has contributed to the theoretical lens of value by contextualizing the value components of a business model and providing an integrated framework.

Details

Journal of Chinese Economic and Foreign Trade Studies, vol. 10 no. 3
Type: Research Article
ISSN: 1754-4408

Article
Publication date: 6 May 2021

Salvador Barragan


Abstract

Purpose

The purpose of this paper is to examine the possible implications of applying the infonomics methodology and measurement model within records and information management (RIM) to reduce organizations’ electronic footprint. By analyzing content using infonomics, it is possible for RIM managers in the private sector to keep only information with the highest value and change their behavior around keeping content beyond its infonomic value. This, in turn, may reduce the stress upon natural resources that are used in maintaining information data centers.

Design/methodology/approach

This paper examines different theories of evaluating information value and describes the role of infonomics in analyzing information as an asset to minimize its electronic footprint. Its focus is on the implications of applying a set of measurements that go beyond the information valuing models currently used in RIM; thereby, this study addresses how information that has superseded its business value may be eliminated.

Findings

This paper concludes that infonomics could elevate the RIM function and alter how RIM managers within the private sector value information. Further, the inclusion of infonomics into RIM models may create new roles for RIM managers and extend the influence and reach of RIM. It may also lead to valuing all content and eliminating content that no longer has any business value, reducing the need for large data storage centers that consume nonrenewable resources. Future developments must be watched and analyzed to see whether this becomes the norm.

Practical implications

This paper will be of interest to stakeholders responsible for valuing information, appraisal of information, life-cycle management, records management, InfoSec and big data analytics.

Originality/value

The work is original but parts of this subject have been previously addressed in another study.

Details

Records Management Journal, vol. 31 no. 3
Type: Research Article
ISSN: 0956-5698


Open Access
Article
Publication date: 8 July 2021

Johann Eder and Vladimir A. Shekhovtsov


Abstract

Purpose

Medical research requires biological material and data collected through biobanks in reliable processes with quality assurance. Medical studies based on data with unknown or questionable quality are useless or even dangerous, as evidenced by recent examples of withdrawn studies. Medical data sets consist of highly sensitive personal data, which has to be protected carefully and is available for research only after the approval of ethics committees. The purpose of this research is to propose an architecture to support researchers to efficiently and effectively identify relevant collections of material and data with documented quality for their research projects while observing strict privacy rules.

Design/methodology/approach

Following a design science approach, this paper develops a conceptual model for capturing and relating metadata of medical data in biobanks to support medical research.

Findings

This study describes the landscape of biobanks as federated medical data lakes, such as the collections of samples and their annotations in the European federation of biobanks (Biobanking and Biomolecular Resources Research Infrastructure – European Research Infrastructure Consortium, BBMRI-ERIC), and develops a conceptual model capturing schema information with quality annotation. This paper discusses the quality dimensions of data sets for medical research in depth and proposes representations of both the metadata and the data quality documentation, with the aim of supporting researchers in effectively and efficiently identifying suitable data sets for medical studies.

Originality/value

This novel conceptual model for metadata for medical data lakes has a unique focus on the high privacy requirements of the data sets contained in medical data lakes and also stands out in the detailed representation of data quality and metadata quality of medical data sets.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 10 April 2023

Natasja Van Buggenhout, Wendy Van den Broeck, Ine Van Zeeland and Jo Pierson


Abstract

Purpose

Media users daily exchange personal data for “free” personalised media. Is this a fair trade, or user “exploitation”? Do personalisation benefits outweigh privacy risks?

Design/methodology/approach

This study surveyed experts in three consecutive online rounds (e-Delphi). The authors explored personal data processing value for media, personalisation relevance, benefits and risks for users. The authors scrutinised the value-exchange between media and users and determined whether media communicate transparently, or use “dark patterns” to obtain more personal data.

Findings

Communication to users must be clear, correct and concise to prevent user deception. Experts disagree on “payment” with personal data for “free” personalised media. This study discerned obstacles and solutions for substantially balancing the interests of media and users (fair value exchange). Personal data processing must be transparent and profitable to both media and users. Media can agree sector-wide on personalisation transparency. Fair, secure and transparent information disclosure to media is possible through shared responsibility and effort.

Originality/value

This study’s innovative contribution is threefold. First, it focuses on the opinions of professional stakeholders in the value network. Second, it offers recommendations for clearly communicating personalised media value, benefits and risks to users, which allows media to create codes of conduct that increase user trust. Third, it expands the literature explaining how media realise personal data value, deal with stakeholder interests and position themselves in the data processing debate. This research improves understanding of personal data value, processing benefits and potential risks in a regional context and within the European regulatory framework.

Details

Digital Policy, Regulation and Governance, vol. 25 no. 3
Type: Research Article
ISSN: 2398-5038

Article
Publication date: 1 April 1986

Richard Pollard


Abstract

Relatively little microcomputer software has been designed specifically for the storage and retrieval of bibliographic data. Information retrieval packages for mainframes and minicomputers have been scaled down to run on microcomputers; however, these programs are expensive, unwieldy and inflexible. For this reason, microcomputer database management systems (DBMS) are often used as an alternative. In this article, criteria for evaluating DBMS used for the storage and retrieval of bibliographic data are discussed. Two popular types of microcomputer DBMS, file management systems and relational database management systems, are evaluated with respect to these criteria. File management systems are appropriate when a relatively small number of simple records are to be stored and retrieval time for multi‐valued data items is not a critical factor. Relational database management systems are indicated when large numbers of complex records are to be stored and retrieval time for multi‐valued data items is critical. However, successful use of relational database management systems often requires programming skills.
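The trade-off around multi-valued data items can be made concrete with a small relational sketch. The schema and sample rows are hypothetical, and SQLite stands in for the microcomputer relational DBMS of the era: a separate author table lets one bibliographic record hold any number of authors.

```python
# Hypothetical relational sketch of the multi-valued data point: a separate
# author table lets one bibliographic record hold any number of authors.
# SQLite stands in for a microcomputer relational DBMS; the schema and
# sample rows are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE item (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE item_author (item_id INTEGER REFERENCES item(id), author TEXT);
""")
con.execute("INSERT INTO item VALUES (1, 'Evaluating microcomputer DBMS')")
con.executemany("INSERT INTO item_author VALUES (1, ?)",
                [("Pollard",), ("Example, A.",)])

# join the record back together with all of its authors
row = con.execute("""
    SELECT title, GROUP_CONCAT(author, '; ')
    FROM item JOIN item_author ON item.id = item_author.item_id
    GROUP BY item.id
""").fetchone()
print(row)
```

A file management system would instead squeeze the authors into one fixed field, which is the retrieval limitation the article associates with that type of system.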

Details

The Electronic Library, vol. 4 no. 4
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 10 May 2023

Pietro Pavone, Paolo Ricci and Massimiliano Calogero


Abstract

Purpose

This paper aims to investigate the literature regarding the potential of big data to improve public decision-making processes and direct these processes toward the creation of public value. This paper presents a map of current knowledge in a sample of selected articles and explores the intersecting points between data from the private sector and the public dimension in relation to benefits for society.

Design/methodology/approach

A bibliometric analysis was performed to provide a retrospective review of published content in the past decade in the field of big data for the public interest. This paper describes citation patterns, key topics and publication trends.

Findings

The findings indicate a propensity in the current literature to deal with the issue of data value creation in the private dimension (data as input to improve business performance or customer relations). Research on data for the public good has so far been underestimated. Evidence shows that big data value creation is closely associated with a collective process in which multiple levels of interaction and data sharing develop between both private and public actors in data ecosystems that pose new challenges for accountability and legitimation processes.

Research limitations/implications

The bibliometric method focuses on academic papers. This paper does not include conference proceedings, books or book chapters. Consequently, a part of the existing literature was excluded from the investigation and further empirical research is required to validate some of the proposed theoretical assumptions.

Originality/value

Although this paper presents the main contents of previous studies, it highlights the need to systematize data-driven private practices for public purposes. This paper offers insights to better understand these processes from a public management perspective.

Details

Meditari Accountancy Research, vol. 32 no. 2
Type: Research Article
ISSN: 2049-372X

Article
Publication date: 5 October 2018

Jing Zeng and Zaheer Khan


Abstract

Purpose

The purpose of this paper is to examine how managers orchestrate, bundle and leverage resources from big data for value creation in emerging economies.

Design/methodology/approach

The authors ground the theoretical framework in two perspectives: resource management and entrepreneurial orientation (EO). The study utilizes an inductive, multiple-case research design to understand the process of creating value from big data.

Findings

The findings suggest that EO is a vital mechanism through which companies based in emerging economies can create value from big data by bundling and orchestrating resources, thus improving performance.

Originality/value

This is one of the first studies to integrate resource orchestration theory and EO in the context of big data and to explicate the utility of such theoretical integration in understanding value creation strategies through big data in emerging economies.

Details

Management Decision, vol. 57 no. 8
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 24 December 2024

Matti Haverila, Mohammad Osman Gani, Fariah Ahmed Dina and Muhammad Mohiuddin


Abstract

Purpose

This paper aims to examine the interrelationships between user-centric measures and their impact on the firm’s perceived financial performance as the respondents’ decision-making role changes.

Design/methodology/approach

The data were collected jointly with SurveyMonkey, a marketing research company, from marketing professionals working in companies with at least limited experience deploying big data marketing analytics (BDMA) applications. The respondents originated from Canada and the USA, and out of 970 responses in the initial sample, 236 came from respondents working in companies with at least limited experience in BDMA deployment. The data analysis used partial least squares structural equation modeling and necessary condition analysis.

Findings

All hypotheses except one were accepted. Perceived value for money positively and significantly impacted user satisfaction, which in turn positively and significantly impacted perceived financial performance. The decision-making role also positively and significantly impacted perceived value for money and user satisfaction, but not perceived financial performance.

Originality/value

The research contributes to understanding how the decision-maker’s role impacts the perceived user-related performance measures in the BDMA context.

Details

Journal of Systems and Information Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1328-7265

Article
Publication date: 21 October 2024

Anders Haug


Abstract

Purpose

Studies show that data quality (DQ) issues are extremely costly for companies. To address such issues, as a starting point, there is a need to understand what DQ is. In this context, the 1996 paper “Anchoring data quality dimensions in ontological foundations” by Wand and Wang has been highly influential on the understanding of DQ. However, the present study demonstrates that some of the assumptions made in their paper can be challenged. On this basis, this study seeks to develop clearer definitions.

Design/methodology/approach

The assumptions behind Wand and Wang’s DQ classification are discussed, on which basis three counter-propositions are formulated. These are investigated through a representation theoretical approach involving analyses of deficient mappings between real-world and information system states. On this basis, an intrinsic DQ classification is derived. A case study is conducted to investigate the value of the developed DQ classification.

Findings

The representation theoretical analysis and the case study support the three propositions. These give rise to the development of a DQ classification that includes three primary intrinsic DQ dimensions (accuracy, completeness and conciseness), which are associated with six primary value-level DQ deficiencies (inaccuracy, incorrectness, meaninglessness, incompleteness, absence and redundancy). The case study supports the value of extending Wand and Wang’s DQ classification with the three additional data deficiencies.

Research limitations/implications

By improving the conceptual clarity of DQ, this study provides future research with an improved basis for studies and discussions of DQ.

Originality/value

The study advances the understanding of DQ by providing additional clarity.

Details

Industrial Management & Data Systems, vol. 125 no. 1
Type: Research Article
ISSN: 0263-5577


Open Access
Article
Publication date: 12 November 2024

Hai-xi Jiang and Nan-ping Jiang


Abstract

Purpose

A more accurate comprehension of data elements and the exploration of new laws governing contemporary data in both theoretical and practical domains constitute a significant research topic.

Design/methodology/approach

Based on the perspective of evolutionary economics, this paper re-examines economic history and the existing literature to study: changes in the “connotation of production factors” in economics caused by the evolution of production factors; the economic paradoxes formed by data in the context of social production processes and business models, which traditional theoretical frameworks fail to solve; the disruptive innovation of the classical theory of value by multiple theories of value determination; and the conflicts between the data market monopoly, together with the resulting distribution of value, and the real economy and society.

Findings

The research indicates that contemporary advancements in data have catalyzed disruptive innovation in the field of economics.

Originality/value

This paper, grounded in academic research, identifies four novel issues arising from contemporary data that cannot be adequately addressed within the confines of the classical economic theoretical framework.

Details

China Political Economy, vol. 7 no. 2
Type: Research Article
ISSN: 2516-1652

Book part
Publication date: 29 September 2023

Torben Juul Andersen


Abstract

This chapter outlines how the comprehensive North American and European datasets were collected and explains the ensuing data cleaning process, outlining three alternative methods applied to deal with missing values: the complete case, the multiple imputation (MI) and the K-nearest neighbor (KNN) methods. The complete case method is the conventional approach adopted in many mainstream management studies. We further discuss the implied assumption underlying use of this technique, which is rarely assessed or tested in practice, and explain the alternative imputation approaches in detail. Use of North American data is the norm, but we also collected a European dataset, which is rarely done, to enable subsequent comparative analysis between these geographical regions. We introduce the structure of firms organized within different industry classification schemes for use in the ensuing comparative analyses and provide base information on missing values in the original and cleaned datasets. The calculated performance indicators derived from the sampled data are defined and presented. We show how the three alternative approaches considered to deal with missing values have significantly different effects on the calculated performance measures in terms of extreme estimate ranges and mean performance values.
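A toy sketch (not the chapter's code, with invented data) of why such approaches can yield different performance means: mean imputation, standing in here for the MI method, preserves the sample mean by construction, while KNN imputation fills gaps from similar firms and can shift it.

```python
# Toy example (not the chapter's code) of three missing-value strategies
# applied to one performance variable; all data values are invented.

# (firm_size, roa) pairs; None marks a missing performance value
records = [(10, 0.05), (12, 0.06), (50, 0.20), (52, None), (58, None), (60, 0.22)]

def complete_case(rows):
    """Drop incomplete records entirely (the conventional approach)."""
    vals = [r for _, r in rows if r is not None]
    return sum(vals) / len(vals)

def mean_impute(rows):
    """Fill gaps with the observed mean (preserves the mean by construction)."""
    vals = [r for _, r in rows if r is not None]
    mu = sum(vals) / len(vals)
    filled = [r if r is not None else mu for _, r in rows]
    return sum(filled) / len(filled)

def knn_impute(rows, k=2):
    """Fill each gap with the mean of the k firms closest in size."""
    known = [(s, r) for s, r in rows if r is not None]
    filled = []
    for s, r in rows:
        if r is None:
            nearest = sorted(known, key=lambda p: abs(p[0] - s))[:k]
            r = sum(v for _, v in nearest) / k
        filled.append(r)
    return sum(filled) / len(filled)

# KNN fills both gaps from the large-firm cluster, shifting the mean upward
print(complete_case(records), mean_impute(records), knn_impute(records))
```

On this toy data, complete-case and mean imputation both give a mean of 0.1325, while KNN gives roughly 0.158, mirroring the chapter's point that the choice of method materially changes calculated performance measures.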

Details

A Study of Risky Business Outcomes: Adapting to Strategic Disruption
Type: Book
ISBN: 978-1-83797-074-2

Book part
Publication date: 3 October 2018

Sotirios Zygiaris


Details

Database Management Systems
Type: Book
ISBN: 978-1-78756-695-8

Article
Publication date: 23 September 2024

Amilson de Araujo Durans and Emerson Wagner Mainardes


Abstract

Purpose

This study assesses whether the strategic orientation of financial institutions to provide value to customers influences the dimensions of personal data privacy perceived by consumers of banking services. We also analysed whether these dimensions directly influence the value in use and, indirectly, the reputation of financial institutions.

Design/methodology/approach

Based on the literature, a model was developed to verify the proposed relationships. To test the model, we collected data via an online questionnaire from 2,422 banking customers, with analysis using structural equation modelling with partial least squares estimation.

Findings

The results suggest that strategic value orientation tends to have a direct positive influence on the constructs knowledge, control, willingness to value privacy and trust in sharing personal information and a direct negative influence on the personal data privacy experience. Three dimensions of personal data privacy (knowledge, willingness to value privacy and trust in sharing personal information) tend to have a direct positive influence on value in use. The results showed that the dimensions of personal data privacy experience and control had a significant and negative impact on the value in use construct. Another finding is the positive influence of value in use on organizational reputation. Investing in strategic value orientation can generate consumer perceptions of personal data privacy, which is reflected in the value in use and reputation of banks.

Originality/value

This study is theoretically original because it brings up the organizational reputation of financial institutions based on the strategic orientation to offer value to customers, personal data privacy and the value in use of banking services. The study of these relationships is unprecedented in the literature.

Details

International Journal of Bank Marketing, vol. 43 no. 2
Type: Research Article
ISSN: 0265-2323


Open Access
Article
Publication date: 3 September 2024

Arturo Basaure, Juuso Töyli and Petri Mähönen


Abstract

Purpose

This study aims to investigate the impact of ex-ante regulatory interventions on emerging digital markets related to data sharing and combination practices. Specifically, it evaluates how such interventions influence market contestability by considering data network effects and the economic value of data.

Design/methodology/approach

The research uses agent-based modeling and simulations to analyze the dynamics of value generation and market competition related to the regulatory obligations on data sharing and combination practices.

Findings

Results show that while the promotion of data sharing through data portability and interoperability has a positive impact on the market, restricting data combination may damage value generation or, at best, have no positive impact even when it is imposed only on those platforms with very large market shares. More generally, the results emphasize the role of regulators in enabling the market through interoperability and service multihoming. Data sharing through portability fosters competition, while the usage of complementary data enhances platform value without necessarily harming the market. Service provider multihoming complements these efforts.
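A minimal sketch of the kind of agent-based simulation described (not the authors' model; the value function and all parameters are invented for illustration): users repeatedly choose the platform offering the highest data-driven value, and a portability switch decides whether a switching user's data moves with them. Whether portability reduces concentration depends on the chosen parameters; the sketch only shows the mechanics of such a model.

```python
# Illustrative agent-based sketch of platform competition with data network
# effects; the value function and parameters are invented, not the authors'.
import random

def simulate(portability, rounds=200, n_users=100, seed=1):
    rng = random.Random(seed)
    data = [1.0, 1.0, 1.0]                    # per-platform data stocks
    choice = [rng.randrange(3) for _ in range(n_users)]
    for _ in range(rounds):
        u = rng.randrange(n_users)
        # perceived value: concave in accumulated data, plus idiosyncratic taste
        utils = [d ** 0.5 + rng.uniform(0.0, 0.5) for d in data]
        new = utils.index(max(utils))
        if new != choice[u]:
            if portability:                   # the user's data moves along
                data[new] += 1.0
                data[choice[u]] = max(0.0, data[choice[u]] - 1.0)
            choice[u] = new
        data[new] += 0.1                      # usage generates fresh data
    shares = [choice.count(p) / n_users for p in range(3)]
    return max(shares)                        # leading platform's market share

print(simulate(portability=True), simulate(portability=False))
```

Comparing the leading platform's share across regulatory settings (and across seeds) is the basic contestability measurement the study's simulations perform at much greater fidelity.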

Research limitations/implications

Although agent-based modeling and simulations describe the dynamics of data markets and platform competition, they do not provide accurate forecasts of possible market outcomes.

Originality/value

This paper presents a novel approach to understanding the dynamics of data value generation and the effects of related regulatory interventions. In the absence of real-world data, agent-based modeling provides a means to understand the general dynamics of data markets under different regulatory decisions that have yet to be implemented. This analysis is timely given the emergence of regulatory concerns on how to stimulate a competitive digital market and a shift toward ex-ante regulation, such as the regulatory obligations to large gatekeepers set in the Digital Markets Act.

Details

Digital Policy, Regulation and Governance, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5038

Article
Publication date: 3 February 2022

Omm Al-Banin Feyzbakhsh, Fahimeh Babalhavaeji, Navid Nezafati, Nadjla Hariri and Fatemeh Nooshinfard

This study aimed to present a model for open-data management for developing innovative information flow in Iranian knowledge-based companies (businesses).


Abstract

Purpose

This study aimed to present a model for open-data management for developing innovative information flow in Iranian knowledge-based companies (businesses).

Design/methodology/approach

The study used a mixed (qualitative-quantitative) method, with interviews and questionnaires as the data collection tools. The qualitative part identified the influential components of open-data management (ecosystem) using the grounded theory method. A questionnaire was developed based on the qualitative results and the theoretical foundations; the quantitative part was conducted via an analytical survey, and the model was extracted using factor analysis integrated with the qualitative findings.

Findings

Seven main categories should be considered in open-data management (ecosystem): entrepreneurial incentives, sustainable value, innovative features, challenges and barriers, actors, business model and requirements. All of these categories have a significant relationship with open-data management.

Originality/value

The study focused on open-data management from an innovation-paradigm perspective and its role in developing innovative information flow. It aimed to identify the key components of the open-data ecosystem and open-data value creation, and the need to use the "open data" approach to develop data-driven and knowledge-based businesses in Iran – an emerging approach that has been largely ignored.

Details

Aslib Journal of Information Management, vol. 74 no. 3
Type: Research Article
ISSN: 2050-3806

Keywords

Access Restricted.
Article
Publication date: 5 March 2018

Jong Kyou Jeon

The purpose of this paper is to examine the relationship between trade integration and intra-regional business cycle synchronization using value-added trade data. Most empirical…


Abstract

Purpose

The purpose of this paper is to examine the relationship between trade integration and intra-regional business cycle synchronization using value-added trade data. Most empirical studies analyzing the relationship between trade integration and business cycle synchronization use gross trade data, which suffer from double-counting. Double-counting distorts the empirical results on the estimated relationship between trade integration and business cycle synchronization. This paper explores the relationship using value-added trade data, which are free from the distortions caused by double-counting.

Design/methodology/approach

Gross trade data on exports and imports are decomposed into sub-categories following Koopman et al. (2014). Then, value-added data on exports and imports without double-counted terms are built to measure value-added bilateral trade intensity and value-added intra-industry trade intensity. Using these value-added trade intensities, the author runs panel regressions for European and East Asian countries to examine how the intensities are correlated with output co-movements.

Findings

The paper finds that for European countries, the positive association between trade and business cycle co-movements is more evidently observed, and the role of intra-industry trade in increasing business cycle synchronization is also more clearly revealed by value-added trade data. On the other hand, for East Asian countries, value-added trade data reveal that it is very uncertain whether increased trade contributes to stronger synchronization of business cycles and whether intra-industry trade is truly the major factor deepening business cycle co-movements.

Research limitations/implications

First, the paper examines the relationship only by running static panel regression. There is a need to employ different methodologies such as instrumental variable regression or dynamic panel regression. Second, financial integration and policy coordination within a region are also other relevant factors which influence the intra-regional business cycle synchronization. There is a need to examine the relationship using value-added trade data with the variables measuring the degree of financial integration and policy coordination. Third, value-added trade data used in this paper has limited coverage of East Asian countries. There is also a need to extend the value-added data set to cover more countries and industries.

Originality/value

Most empirical literature studying the relationship between trade integration and business cycle synchronization rely on gross trade data. This paper would be the first attempt to study the relationship using value-added trade data. Duval et al. (2014) also use value-added data, but their value-added data are not supported by a solid accounting framework which decomposes a country’s gross exports into various value-added components by source and additional double-counted terms. Value-added data in this paper computed based on Koopman et al. (2014) are the total domestic value exports that are ultimately consumed abroad via final and intermediate exports. The author believes that value-added data in this paper are most relevant in estimating the relationship between trade integration and business cycle synchronization.

Details

Journal of Korea Trade, vol. 22 no. 1
Type: Research Article
ISSN: 1229-828X

Keywords

Access Restricted.
Article
Publication date: 18 May 2020

Eleni-Laskarina Makri, Zafeiroula Georgiopoulou and Costas Lambrinoudakis

This study aims to assist organizations to protect the privacy of their users and the security of the data that they store and process. Users may be the customers of the…


Abstract

Purpose

This study aims to assist organizations to protect the privacy of their users and the security of the data that they store and process. Users may be the customers of the organization (people using the offered services) or the employees (users who operate the systems of the organization). To be more specific, this paper proposes a privacy impact assessment (PIA) method that explicitly takes into account the organizational characteristics and employs a list of well-defined metrics as input, demonstrating its applicability to two hospital information systems with different characteristics.

Design/methodology/approach

This paper presents a PIA method that employs metrics and takes into account the peculiarities and other characteristics of the organization. The applicability of the method has been demonstrated on two Hospital Information Systems with different characteristics. The aim is to assist the organizations to estimate the criticality of potential privacy breaches and, thus, to select the appropriate security measures for the protection of the data that they collect, process and store.

Findings

The results of the proposed PIA method highlight the criticality of each privacy principle for every data set maintained by the organization. The method employed for the calculation of the criticality level takes into account the consequences that the organization may experience in case of a security or privacy violation incident on a specific data set, the weighting of each privacy principle and the unique characteristics of each organization. Thus, the results of the proposed PIA method offer a strong indication of the security measures and privacy-enforcement mechanisms that the organization should adopt to protect its data effectively.

Originality/value

The novelty of the method is that it handles security and privacy requirements simultaneously, as it uses the results of risk analysis together with those of a PIA. A further novelty of the method is that it introduces metrics for the quantification of the requirements and also that it takes into account the specific characteristics of the organization.

Details

Information & Computer Security, vol. 28 no. 4
Type: Research Article
ISSN: 2056-4961

Keywords

Access Restricted.
Article
Publication date: 30 July 2018

Mauro Romanelli

The aim of this study is to provide a conceptual framework to explain how museums sustain intellectual capital and promote value co-creation moving from designing virtual…


Abstract

Purpose

The aim of this study is to provide a conceptual framework to explain how museums sustain intellectual capital and promote value co-creation moving from designing virtual environments to introducing and managing Big Data.

Design/methodology/approach

This study is based on archival and qualitative data considering the literature related to the introduction of virtual environments and Big Data within museums.

Findings

Museums contribute to sustaining intellectual capital and promoting value co-creation by developing a Big Data-driven strategy and innovation.

Practical implications

By introducing and managing Big Data, museums contribute to creating a community by improving knowledge within cultural ecosystems while strengthening the users as active participants and the museum’s professionals as user-centred mediators.

Originality/value

As audience-driven and knowledge-oriented organisations moving from designing virtual environments to following a Big Data-driven strategy, museums should make organisational and strategic choices that drive change.

Details

Meditari Accountancy Research, vol. 26 no. 3
Type: Research Article
ISSN: 2049-372X

Keywords

Access Restricted.
Article
Publication date: 29 August 2019

Vivekanand Venkataraman, Syed Usmanulla, Appaiah Sonnappa, Pratiksha Sadashiv, Suhaib Soofi Mohammed and Sundaresh S. Narayanan

The purpose of this paper is to identify significant factors of environmental variables and pollutants that have an effect on PM2.5 through wavelet and regression analysis.


Abstract

Purpose

The purpose of this paper is to identify significant factors of environmental variables and pollutants that have an effect on PM2.5 through wavelet and regression analysis.

Design/methodology/approach

In order to provide a stable data set for regression analysis, multiresolution analysis using wavelets is conducted. For the sampled data, multicollinearity among the independent variables is removed using principal component analysis, and multiple linear regression analysis is conducted with PM2.5 as the dependent variable.
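The PCA-then-regression step described above can be sketched in a few lines of NumPy. This is a generic illustration on synthetic data, not the authors' data or exact pipeline (the wavelet multiresolution step is omitted, and the variable names are only stand-ins for pollutants and PM2.5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for correlated predictors (e.g. NO2, NOx, temperature)
n = 200
base = rng.normal(size=(n, 2))
X = np.column_stack([
    base[:, 0],
    base[:, 0] + 0.05 * rng.normal(size=n),  # nearly collinear with column 0
    base[:, 1],
])
y = 2.0 * base[:, 0] - 1.0 * base[:, 1] + 0.1 * rng.normal(size=n)  # stand-in for PM2.5

# Principal component analysis via SVD removes multicollinearity
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
keep = s > 1e-8 * s[0]          # drop near-degenerate directions
scores = Xc @ Vt[keep].T        # uncorrelated principal component scores

# Ordinary least squares regression on the component scores
A = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1.0 - resid.var() / y.var()
```

Regressing on the orthogonal component scores rather than the raw predictors is what keeps the coefficient estimates stable when predictors are nearly collinear.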

Findings

It is found that a few pollutants, such as NO2, NOx, SO2 and benzene, and environmental factors, such as ambient temperature, solar radiation and wind direction, affect PM2.5. The regression model developed has a high R2 value of 91.9 percent, and the residuals are stationary and uncorrelated, indicating a sound model.

Research limitations/implications

The research provides a framework for extracting stationary data and other important features, such as change points in mean and variance, from the sample data for regression analysis. The work needs to be extended across all areas in India; for other stationary data sets, different factors may affect PM2.5.

Practical implications

Control measures such as control charts can be implemented for significant factors.

Social implications

Rules and regulations on the significant factors can be made more stringent.

Originality/value

The originality of this paper lies in the integration of wavelets with regression analysis for air pollution data.

Details

International Journal of Quality & Reliability Management, vol. 36 no. 10
Type: Research Article
ISSN: 0265-671X

Keywords

Access Restricted.
Article
Publication date: 8 August 2018

Milla Ratia, Jussi Myllärniemi and Nina Helander

As the health care sector is changing rapidly, there is a growing need to develop new ways to make data-driven decisions, especially at the organizational level. Data utilization…


Abstract

Purpose

As the health care sector is changing rapidly, there is a growing need to develop new ways to make data-driven decisions, especially at the organizational level. Data utilization, like business intelligence (BI) activities, benefits health care organizations. The purpose of this paper is to study the potential of Big Data and the utilization of BI tools in creating value in the private health care industry in Finland.

Design/methodology/approach

Intellectual capital (IC) components and Möller et al.’s (2005) work on value capabilities are used as a framework to point out the roles of data utilization and BI tools in value creation. Thematic interviews enable understanding of the value creation based on Big Data potential and utilization of BI tools in the Finnish private health care industry.

Findings

The findings provide an understanding of the existing data sources and BI tools used in private health care. In addition, they provide an insight into the future-oriented Big Data potential, which can create new business concepts. The approach provides valuable insights for identifying the future needs of data utilization and creates an understanding of the current state within the private health care sector.

Originality/value

Data-driven value creation is one of the most discussed topics in the private health care sector. By analyzing the current data-source utilization, the challenges with data and BI-tool utilization, and the future vision and development roadmaps, the authors gain a better understanding of the IC components and value-creation capabilities.

Details

Meditari Accountancy Research, vol. 26 no. 3
Type: Research Article
ISSN: 2049-372X

Keywords

Access Restricted.
Article
Publication date: 12 August 2014

Sanat Agrawal, Deon J. de Beer and Yashwant Kumar Modi

This paper aims to convert surface data directly to a three-dimensional (3D) stereolithography (STL) part. The Geographic Information Systems (GIS) data available for a terrain…


Abstract

Purpose

This paper aims to convert surface data directly to a three-dimensional (3D) stereolithography (STL) part. The Geographic Information Systems (GIS) data available for a terrain describe only its surface; they carry no information about a solid model. The data therefore need to be converted into a 3D solid model before physical models can be made by additive manufacturing (AM).

Design/methodology/approach

A methodology has been developed that creates the wall and base of the part and tessellates the part with triangles. A program has been written that outputs the part in STL file format. The elevation data are interpolated, and any singularities present are removed. Extensive search techniques are used.
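The core tessellation step can be illustrated with a short Python sketch that splits each grid cell of an elevation raster into two triangles and writes them as an ASCII STL. This is only the surface-tessellation part under assumed unit grid spacing; the paper's full method also closes the solid with walls and a base, and the file name and sample heightmap are invented for illustration:

```python
import numpy as np

def heightmap_to_stl(z, path):
    """Write the top surface of a grid of elevations as an ASCII STL.

    z is a 2-D array of elevations on a unit grid. Each grid cell is
    split into two triangular facets; walls and base are omitted here.
    """
    rows, cols = z.shape
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = (j, i, z[i, j])
            b = (j + 1, i, z[i, j + 1])
            c = (j, i + 1, z[i + 1, j])
            d = (j + 1, i + 1, z[i + 1, j + 1])
            tris.append((a, b, c))   # split each grid cell into two triangles
            tris.append((b, d, c))
    with open(path, "w") as f:
        f.write("solid terrain\n")
        for tri in tris:
            # Facet normal from the cross product of two edge vectors
            n = np.cross(np.subtract(tri[1], tri[0]), np.subtract(tri[2], tri[0]))
            n = n / (np.linalg.norm(n) or 1.0)
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in tri:
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid terrain\n")
    return len(tris)

# Tiny illustrative heightmap: a 2x2 grid yields two facets
count = heightmap_to_stl(np.array([[0.0, 1.0], [1.0, 2.0]]), "terrain.stl")
```

Writing the STL directly from the elevation grid is what avoids the intermediate file formats (and their data loss) mentioned in the abstract.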

Findings

AM technologies are increasingly being used for terrain modeling. However, little work has been done on converting surface data into a 3D solid model. The present work addresses this gap.

Practical implications

The methodology removes the data loss associated with intermediate file formats. Terrain models can be created in less time and at lower cost. Intricate terrain geometries can be created with ease and great accuracy.

Social implications

The terrain models can be used for GIS education and for educating the community about catchment management, conservation management, etc.

Originality/value

The work allows direct and automated conversion of GIS surface data into a 3D STL part. It removes intermediate steps and any data loss associated with intermediate file formats.

Details

Rapid Prototyping Journal, vol. 20 no. 5
Type: Research Article
ISSN: 1355-2546

Keywords

Access Restricted.
Article
Publication date: 7 January 2022

Xiaobo Wu, Liping Liang and Siyuan Chen

As various different and even contradictory concepts are proposed to depict a firm's capabilities related to big data, and extant relevant research is fragmented and scattered in…


Abstract

Purpose

As various different and even contradictory concepts are proposed to depict a firm's capabilities related to big data, and extant relevant research is fragmented and scattered in several disciplines, there is currently a lack of holistic and comprehensive understanding of how big data alters value creation by facilitating firm capabilities. To narrow this gap, this study aims to synthesize current knowledge on the firm capabilities and transformation of value creation facilitated by big data.

Design/methodology/approach

The authors adopt an inductive and rigorous approach to conduct a systematic review of 185 works, following the “Grounded Theory Literature-Review Method”.

Findings

The authors introduce and develop the concept of big data competency; present an inductive framework that opens the black box of big data competency following the logic of the virtual value chain; provide a structure of big data competency consisting of two dimensions, namely big data capitalization and big data exploitation; and explain the evolution of the value creation structure from value chain to value network by connecting the attributes of big data competency (i.e. connectivity and complementarity) with the transformation of value creation (i.e. optimizing and pioneering).

Originality/value

The big data competency, an inclusive concept of firm capabilities to deal with big data, is proposed. Based on this concept, the authors highlight the significant contributions that extant research has made toward our understanding of how big data alters value creation by facilitating firm capabilities. Besides, the authors provide a future research agenda that academics can rely on to study the strategic management of big data.

Details

Management Decision, vol. 60 no. 3
Type: Research Article
ISSN: 0025-1747

Keywords

Access Restricted.
Article
Publication date: 6 November 2019

Jan Michael Nolin

Principled discussions on the economic value of data are frequently pursued through metaphors. This study aims to explore three influential metaphors for talking about the…


Abstract

Purpose

Principled discussions on the economic value of data are frequently pursued through metaphors. This study aims to explore three influential metaphors for talking about the economic value of data: data are the new oil, data as infrastructure and data as an asset.

Design/methodology/approach

With the help of conceptual metaphor theory, various meanings surrounding the three metaphors are explored. Meanings clarified or hidden through various metaphors are identified. Specific emphasis is placed on the economic value of ownership of data.

Findings

In discussions on data as an economic resource, the three metaphors serve separate purposes. The most used metaphor, data are the new oil, communicates that ownership of data could lead to great wealth. With data as infrastructure, by contrast, data have no intrinsic value; the profits generated from data resources therefore belong to those who process the data, not to those who own them. The data as an asset metaphor can be used to convince organizational leadership that they own data of great value.

Originality/value

This is the first scholarly investigation of metaphors communicating economic value of data. More studies in this area appear urgent, given the power of such metaphors, as well as the increasing importance of data in economics.

Details

Journal of Information, Communication and Ethics in Society, vol. 18 no. 1
Type: Research Article
ISSN: 1477-996X

Keywords

Access Restricted.
Article
Publication date: 10 November 2014

Vasily Bunakov, Catherine Jones, Brian Matthews and Michael Wilson

The purpose of this paper is to suggest an approach to data value considerations that is related to the generalized notion of authenticity and can be applied to the design of…


Abstract

Purpose

The purpose of this paper is to suggest an approach to data value considerations that is related to the generalized notion of authenticity and can be applied to the design of preservation policies. There has been considerable progress in the scalable architectures for policy-driven digital collection preservation as well as in modeling preservation costs. However, modeling the value of both digital artifacts and collections seems a more elusive topic that has yet to find a proper methodology and means of expression.

Design/methodology/approach

A top-down conceptual analysis was developed and the principles of information technology service management and quality management were applied to the domain of digital preservation. Then, in a bottom-up analysis, the various notions of authenticity in digital preservation projects, reference models and conceptual papers were reviewed.

Findings

The top-down and bottom-up analyses have a meeting point, establishing a close relation between the concepts of data authenticity and data value.

Originality/value

The generalized understanding of authenticity can support the design of sensible preservation policies and their application to the formation and long-term maintenance of digital collections.

Details

OCLC Systems & Services: International digital library perspectives, vol. 30 no. 4
Type: Research Article
ISSN: 1065-075X

Keywords

Open Access
Article
Publication date: 15 July 2022

Susanne Leitner-Hanetseder and Othmar M. Lehner

With the help of “self-learning” algorithms and high computing power, companies are transforming Big Data into artificial intelligence (AI)-powered information and gaining…


Abstract

Purpose

With the help of “self-learning” algorithms and high computing power, companies are transforming Big Data into artificial intelligence (AI)-powered information and gaining economic benefits. AI-powered information and Big Data (simply data henceforth) have quickly become some of the most important strategic resources in the global economy. However, their value is not (yet) formally recognized in financial statements, which leads to a growing gap between book and market values and thus limited decision usefulness of the underlying financial statements. The objective of this paper is to identify ways in which the value of data can be reported to improve decision usefulness.

Design/methodology/approach

Based on the authors' experience as both long-term practitioners and theoretical accounting scholars, the authors conceptualize and draw up a potential data value chain and show the transformation from raw Big Data to business-relevant AI-powered information along this chain.

Findings

Analyzing current International Financial Reporting Standards (IFRS) regulations and their applicability, the authors show that current regulations are insufficient to provide useful information on the value of data. Following this, the authors propose a Framework for AI-powered Information and Big Data (FAIIBD) Reporting. This framework also provides insights on the (good) governance of data with the purpose of increasing decision usefulness and connecting to existing frameworks even further. In the conclusion, the authors raise questions concerning this framework that may be worthy of discussion in the scholarly community.

Research limitations/implications

Scholars and practitioners alike are invited to follow up on the conceptual framework from many perspectives.

Practical implications

The framework can serve as a guide towards a better understanding of how to recognize and report AI-powered information and by that (a) limit the valuation gap between book and market value and (b) enhance decision usefulness of financial reporting.

Originality/value

This article proposes to regulators a conceptual framework within IFRS to better deal with the value of AI-powered information and to improve the good governance of (Big) data.

Details

Journal of Applied Accounting Research, vol. 24 no. 2
Type: Research Article
ISSN: 0967-5426

Keywords

Access Restricted.
Article
Publication date: 4 May 2012

Christopher Zakrzewicz, B. Wade Brorsen and Brian C. Briggeman

Consistent and reliable data on farmland values is critical to assessing the overall financial health of agricultural producers. However, little is known about the idiosyncrasies…


Abstract

Purpose

Consistent and reliable data on farmland values are critical to assessing the overall financial health of agricultural producers. However, little is known about the idiosyncrasies and similarities of the standard land-value data sources: US Department of Agriculture (USDA) estimates, Federal Reserve Bank land-value surveys and transaction prices. The purpose of this paper is to determine the differences and similarities of land-value movements across these three data sources.

Design/methodology/approach

In addition to Oklahoma transaction prices, two survey sources are considered: the USDA annual report and the quarterly Tenth District Survey of Agricultural Credit Conditions administered by the Federal Reserve Bank of Kansas City. The paper describes each data set and identifies differences in data sampling, collection, and reporting. Average values of Oklahoma farmland across data sources are examined. USDA estimates are regressed against quarterly Federal Reserve values across multiple states to determine the point in time represented by USDA estimates. Granger causality tests determine if Federal Reserve land value estimates anticipate movements in USDA land value estimates.
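The Granger causality step described above asks whether one land-value series helps predict another beyond its own history. A minimal bivariate sketch of such a test in NumPy, on synthetic series rather than the paper's USDA and Federal Reserve data (a production analysis would use a statistics package):

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic testing whether lags of x help predict y beyond lags of y.

    A minimal sketch of a bivariate Granger causality test, not the
    authors' exact multi-state specification.
    """
    T = len(y)
    rows = T - lags
    Y = y[lags:]
    # Lag matrices: columns are the series at t-1 .. t-lags
    ylags = np.column_stack([y[lags - k:T - k] for k in range(1, lags + 1)])
    xlags = np.column_stack([x[lags - k:T - k] for k in range(1, lags + 1)])
    ones = np.ones((rows, 1))

    def rss(A):
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return r @ r

    rss_r = rss(np.hstack([ones, ylags]))           # restricted: y lags only
    rss_u = rss(np.hstack([ones, ylags, xlags]))    # unrestricted: add x lags
    df_denom = rows - (1 + 2 * lags)
    return ((rss_r - rss_u) / lags) / (rss_u / df_denom)

# Synthetic example: x leads y, so x should "Granger-cause" y
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
```

A large F in one direction and a small F in the reverse direction is the pattern the paper reports for Federal Reserve values leading USDA values.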

Findings

It is found that all three data sources are highly correlated, but transaction prices tend to be higher, especially for irrigated cropland and ranchland. USDA land values are reported as representing land values on January 1, but a multi-state comparison with changes in quarterly Federal Reserve land values shows that they more closely represent first- and second-quarter land values. Given that first-quarter Federal Reserve Bank land values lead USDA land values and are published before the USDA release, Federal Reserve land values are a timely indicator of agricultural producers' financial position.

Originality/value

No previous research has addressed the topic of how various sources of agricultural land values compare.

Details

Agricultural Finance Review, vol. 72 no. 1
Type: Research Article
ISSN: 0002-1466

Keywords

Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems…


Abstract

Purpose

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on the ideas of statistics, system theory, machine learning and data mining, the present research focuses on "data quality diagnosis" and "index classification and stratification" and clarifies the classification standards and data-quality characteristics of index data; a data-quality diagnosis system of "data review – data cleaning – data conversion – data inspection" is established. Using a decision tree, an explanatory structural model, cluster analysis, K-means clustering and other methods, a classification and stratification system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, the scientific and standardized classification and hierarchical design of the index system can be realized.
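The unsupervised grouping step mentioned above (K-means over indicator features) can be sketched in plain NumPy. The feature matrix below is synthetic and the implementation is a generic Lloyd's iteration, not the authors' indicator data or code:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means: assign points to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two well-separated synthetic indicator groups
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, size=(20, 3)),
               rng.normal(5, 0.1, size=(20, 3))])
labels, centers = kmeans(X, 2)
```

Each cluster of indicators can then be treated as one stratum of the index system, which is how clustering reduces redundancy among indicators.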

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index-data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed based on systematic thinking about the whole and the individual; aiming at the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Keywords

Access Restricted.
Article
Publication date: 8 May 2017

Tingting Zhang, William Yu Chung Wang and David J. Pauleen

This paper aims to investigate the value of big data investments by examining the market reaction to company announcements of big data investments and tests the effect for firms…


Abstract

Purpose

This paper aims to investigate the value of big data investments by examining the market reaction to company announcements of big data investments and tests the effect for firms that are either knowledge intensive or not.

Design/methodology/approach

This study is based on an event study using data from two stock markets in China.
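An event study of this kind measures abnormal returns around announcement dates against a market model fitted on a pre-event window. A minimal sketch on synthetic daily returns; the firm, windows and injected announcement effect are illustrative, not the paper's Chinese stock-market data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily returns for a market index and one firm
T_est, T_event = 120, 5                  # estimation window, event window
mkt = rng.normal(0.0005, 0.01, size=T_est + T_event)
firm = 0.0002 + 1.2 * mkt + rng.normal(0, 0.005, size=T_est + T_event)
firm[T_est:] += 0.02                     # inject a positive announcement effect

# Market model estimated on the pre-event window: r_firm = alpha + beta * r_mkt
A = np.column_stack([np.ones(T_est), mkt[:T_est]])
(alpha, beta), *_ = np.linalg.lstsq(A, firm[:T_est], rcond=None)

# Abnormal returns and cumulative abnormal return (CAR) over the event window
ar = firm[T_est:] - (alpha + beta * mkt[T_est:])
car = ar.sum()
```

A significantly positive CAR is the kind of evidence the paper uses to conclude that the market values big data investment announcements for some groups of firms.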

Findings

When all listed firms in the sample are grouped together, the stock market shows an overall increase in stock prices when announcements of big data investments are revealed. Stock prices also increase for non-knowledge-intensive firms. However, the stock market does not seem to react to big data investment announcements when knowledge-intensive firms are tested alone.

Research limitations/implications

This study contributes to the literature on assessing the economic value of big data investments from the perspective of the big data information value chain by taking an unexpected change in stock price as the measure of the financial performance of the investment and by comparing market reactions between knowledge-intensive and non-knowledge-intensive firms. The findings can refine practitioners' understanding of the economic value of big data investments to different firms and guide their future investments in knowledge management to maximize the benefits along the big data information value chain. However, the findings of this study should be interpreted carefully when applying them to companies that are not publicly traded on the stock market or listed on other financial markets.

Originality/value

Based on the concept of big data information value chain, this study advances research on the economic value of big data investments. Taking the perspective of stock market investors, this study investigates how the stock market reacts to big data investments by comparing the reactions to knowledge-intensive firms and non-knowledge-intensive firms. The results may be particularly interesting to those publicly traded companies that have not previously invested in knowledge management systems. The findings imply that stock investors tend to believe that big data investment could possibly increase the future returns for non-knowledge-intensive firms.

Details

Journal of Knowledge Management, vol. 21 no. 3
Type: Research Article
ISSN: 1367-3270

Keywords

Article
Publication date: 5 April 2011

Turkka Näppilä, Katja Moilanen and Timo Niemi


Abstract

Purpose

The purpose of this paper is to introduce an expressive query language, called relational XML query language (RXQL), capable of dealing with heterogeneous Extensible Markup Language (XML) documents in data‐centric applications. In RXQL, data harmonization (i.e. the removal of heterogeneous factors from XML data) is integrated with typical data‐centric features (e.g. grouping, ordering, and aggregation).

Design/methodology/approach

RXQL is based on the XML relation representation, developed in the authors' previous work. This is a novel approach to unambiguously represent semistructured data relationally, which makes it possible in RXQL to manipulate XML data in a tuple‐oriented way, while XML data are typically manipulated in a path‐oriented way.
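The tuple-oriented (rather than path-oriented) manipulation the abstract describes can be illustrated with a small sketch. The XML snippets, function names and schema below are invented for illustration and are not RXQL syntax:

```python
# Sketch (not RXQL syntax): flatten heterogeneous XML into relational
# tuples, then manipulate the tuples with data-centric operations
# (grouping, aggregation). Element and attribute names are invented.
import xml.etree.ElementTree as ET
from collections import defaultdict

# Two documents with different structures for the same kind of facts.
doc_a = "<sales><sale region='north' amount='10'/><sale region='south' amount='5'/></sales>"
doc_b = "<report><item><region>north</region><amount>7</amount></item></report>"

def tuples_a(xml):
    for s in ET.fromstring(xml).iter("sale"):
        yield (s.get("region"), float(s.get("amount")))

def tuples_b(xml):
    for i in ET.fromstring(xml).iter("item"):
        yield (i.findtext("region"), float(i.findtext("amount")))

# Harmonization: both sources now share one relational schema.
relation = list(tuples_a(doc_a)) + list(tuples_b(doc_b))

# Tuple-oriented grouping and aggregation, no path navigation needed.
totals = defaultdict(float)
for region, amount in relation:
    totals[region] += amount

print(sorted(totals.items()))  # [('north', 17.0), ('south', 5.0)]
```

Once the heterogeneous documents are in one relation, grouping and aggregation work uniformly, which is the harmonization-plus-data-centric combination the abstract attributes to RXQL.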

Findings

The user is able to describe the result of an RXQL query straightforwardly based on non‐XML syntax. The analysis of this description, through the mechanism developed in this paper, affords the automatic construction of the query result. This feature significantly increases the declarativeness of RXQL compared to path‐oriented XML languages, where the user needs to control the construction of the result extensively.

Practical implications

The authors' formal specification of the construction of the query result can be considered as an abstract implementation of RXQL.

Originality/value

RXQL is a declarative query language capable of integrating data harmonization seamlessly with other data‐centric features in the manipulation of heterogeneous XML data. So far, these kinds of XML query languages have been missing. Obviously, the expressive power of RXQL can be achieved by computationally complete XML languages, such as XQuery. However, these are not actual query languages, and the query formulation in them usually presupposes programming skills that are beyond the ordinary end‐user.

Details

International Journal of Web Information Systems, vol. 7 no. 1
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 19 August 2019

Edward C. Malthouse, Alexander Buoye, Nathaniel Line, Dahlia El-Manstrly, Tarik Dogru and Jay Kandampully


Abstract

Purpose

The purpose of this paper is to assess the role of platforms in diffusing data value across multiple stakeholders.

Design/methodology/approach

Seminal theoretical and managerial work has been critically examined in order to justify the need for improving/extending the contemporary understanding of the data value creation process.

Findings

The results suggest that existing frameworks and conceptualizations of reciprocal data value provide incomplete understanding of the role of platforms in data value diffusion.

Research limitations/implications

This paper provides service researchers with a better understanding of the role of platforms in data value diffusion. Future research can develop and validate new frameworks that reflect the proposed extended/improved view of data value creation.

Practical implications

Service and hospitality managers will be able to more effectively manage the role of platforms in data value diffusion. Specifically, this paper proposes that, in order for data to become a source of competitive advantage, there must be a symbiotic relationship among all the stakeholders of the data ecosystem.

Originality/value

The authors discuss how data creates value for different stakeholders in the hospitality industry.

Details

Journal of Service Management, vol. 30 no. 4
Type: Research Article
ISSN: 1757-5818

Keywords

Article
Publication date: 8 August 2018

Matteo La Torre, Vida L. Botes, John Dumay, Michele Antonio Rea and Elza Odendaal


Abstract

Purpose

As Big Data is creating new underpinnings for organisations’ intellectual capital (IC) and knowledge management, this paper aims to analyse the implications of Big Data for IC accounting to provide new conceptual and practical insights about the future of IC accounting.

Design/methodology/approach

Based on a conceptual framework informed by decision science theory, the authors explain the factors supporting Big Data’s value and review the academic literature and practical evidence to analyse the implications of Big Data for IC accounting.

Findings

In reflecting on Big Data’s ability to supply new value for IC and its implications for IC accounting, the authors conclude that Big Data represents a new IC asset, and this provides a rationale for a renewed wave of interest in IC accounting. IC accounting can contribute to understanding the determinants of Big Data’s value, such as data quality, security and privacy issues, data visualisation and user interaction. In doing so, IC measurement, reporting and auditing need to keep focusing on how human capital and organisational and technical processes (structural capital) can unlock or even obstruct Big Data’s value for IC.

Research limitations/implications

The topic of Big Data in IC and accounting research is in its infancy; therefore, this paper acts at a normative level. While this represents a research limitation of the study, it is also a call for future empirical studies.

Practical implications

Once again, practitioners and researchers need to face the challenge of avoiding the trap of IC accountingisation to make IC accounting relevant for the Big Data revolution. Amid the euphoric and utopian views of the Big Data revolution, this paper contributes to enriching awareness of the practical factors underpinning Big Data’s value for IC and to fostering the cognitive and behavioural dynamics between data, IC information and user interaction.

Social implications

The paper is relevant to preparers, users and auditors of financial statements.

Originality/value

This paper aims to instill a novel debate on Big Data into IC accounting research by providing new avenues for future research.

Details

Meditari Accountancy Research, vol. 26 no. 3
Type: Research Article
ISSN: 2049-372X

Keywords

Article
Publication date: 2 April 2019

S.M. Riad Shams and Ludovico Solima


Abstract

Purpose

Although big data management research and practice have received enormous interest from academia and industry, the extant literature demonstrates that our understanding of this knowledge stream is limited and that challenges remain in fully capitalizing on its potential. One of the contemporary challenges is to accurately verify data veracity and to develop value from the verified data for an organization and its stakeholders. Consequently, the purpose of this paper is to develop insights on how to strategically deal with the contemporary challenges in the strategic management of big data related to data veracity and data value.

Design/methodology/approach

The inductive–constructivist approach is followed to develop insights at the intersection of dynamic capabilities theory and stakeholder relationship management concept, in order to strategically deal with the contemporary challenges in big data management, related to data veracity and data value.

Findings

At the intersection of dynamic capabilities theory and the stakeholder relationship management concept, an implication is acknowledged that has research and practical significance for strategically verifying data sources and their veracity and value. Following this implication, a framework of a data incubator is proposed to proactively develop insights on the veracity and value of data. Empirical insights are also presented in this study to support this initial framework.

Practical implications

For future research in strategic management of big data, academics will have contextual understanding on the particular interconnected and interdependent attributes from dynamic capabilities and stakeholder relationship management research streams to further enhance the understanding on big data management. For practice, these insights will be useful for executives to focus on specific attributes of the proposed data incubator to confirm data veracity and develop insights on how to design, deliver and evaluate stakeholder value based on the verified data.

Originality/value

Following a synthesis at the intersection of dynamic capabilities theory and stakeholder relationship management concept, this study introduces a data incubator to meaningfully deal with the big data management challenges, related to veracity and value of data.

Details

Management Decision, vol. 57 no. 8
Type: Research Article
ISSN: 0025-1747

Keywords

Article
Publication date: 28 September 2007

Alasdair J.G. Gray, Werner Nutt and M. Howard Williams


Abstract

Purpose

Distributed data streams are an important topic of current research. In such a setting, data values will be missed, e.g. due to network errors. This paper aims to allow this incompleteness to be detected and overcome with either the user not being affected or the effects of the incompleteness being reported to the user.

Design/methodology/approach

A model for representing the incomplete information has been developed that captures the information that is known about the missing data. Techniques for query answering involving certain and possible answer sets have been extended so that queries over incomplete data stream histories can be answered.

Findings

It is possible to detect when a distributed data stream is missing one or more values. When such data values are missing there will be some information that is known about the data and this is stored in an appropriate format. Even when the available data are incomplete, it is possible in some circumstances to answer a query completely. When this is not possible, additional meta‐data can be returned to inform the user of the effects of the incompleteness.
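The certain/possible-answer idea can be sketched as follows; the bounded representation of a missed value and the toy query below are illustrative assumptions, not the paper's formal model:

```python
# Sketch of certain vs. possible answers over an incomplete stream
# history. A missed value is stored with what is known about it
# (here, numeric bounds); this is an illustrative simplification.
history = [4.0, 7.0, (2.0, 3.0), 5.0]  # (low, high) marks a missed value

def query_sum(values):
    lo = sum(v[0] if isinstance(v, tuple) else v for v in values)
    hi = sum(v[1] if isinstance(v, tuple) else v for v in values)
    if lo == hi:
        # The incompleteness does not affect the user.
        return {"certain": lo}
    # Meta-data reporting the effect of the incompleteness.
    return {"possible": (lo, hi)}

print(query_sum(history))     # {'possible': (18.0, 19.0)}
print(query_sum([1.0, 2.0]))  # {'certain': 3.0}
```

When the known bounds pin the answer down, the user sees a certain answer; otherwise the returned range plays the role of the meta-data the abstract mentions.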

Research limitations/implications

The techniques and models proposed in this paper have only been partially implemented.

Practical implications

The proposed system is general and can be applied wherever there is a need to query the history of distributed data streams. The work in this paper enables the system to answer queries when there are missing values in the data.

Originality/value

This paper presents a general model of how to detect, represent, and answer historical queries over incomplete distributed data streams.

Details

International Journal of Web Information Systems, vol. 3 no. 1/2
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 24 January 2023

Dawn Holmes, Judith Zolkiewski and Jamie Burton


Abstract

Purpose

Despite data being a hot topic, little is known about how data can be successfully used in interactions in business-to-business relationships, specifically in the boundary spanning contexts of firms working together to use data and create value. Hence, this study aims to investigate the boundary spanning context of data-driven customer value projects to understand the outcomes of such activities, including the types of value created, how resulting value is shared between the interacting firms, the types of capabilities required for firms to deliver value from data and in what contexts different outcomes are created and different capabilities required.

Design/methodology/approach

Three abductive case studies were undertaken with firms from different business-to-business domains. Data were coded in NVivo and interpreted using template analysis and cross-case comparison. Findings were sense checked with the case study companies and other practitioners for accuracy, relevance and resonance.

Findings

The findings expand our understanding of firm interactions when extracting value from data, and this study presents 15 outcomes of value created by the firms in the study. This study illustrates the complex and intertwined nature of the process of value creation, which emphasises the need to understand the distinct types of outcomes of value creation and how they benefit the firms involved. The study goes further by categorising these outcomes as unilateral (one actor benefits), developmental (one actor benefits from the other) or bilateral (both actors benefit).

Research limitations/implications

This research is exploratory in nature. This study provides a basis for further exploration of how firm interactions surrounding the implementation of data-driven customer value projects can benefit the firms involved and offers some transferable knowledge which is of particular relevance to practitioners.

Practical implications

This research contributes to the understanding of data-driven customer-focused projects and offers some practical management tools. The identification of outcomes helps define project goals and helps connect these goals to strategy. The organisation of outcomes into themes and contexts helps managers allocate appropriate human resources to oversee projects, mitigating the impacts of a current lack of talent in this area. Additionally, using the findings of this research, firms can develop specific capabilities to exploit the project outcomes and the opportunities such projects provide. The findings can also be used to enhance relationships between firms and their customers, providing customer value.

Originality/value

This work builds on research that explores the creation of value from data and how value is created in boundary spanning contexts. This study expands existing work by providing greater insight into the mechanics and outcomes of value creation and by providing specific examples of value created. This study also offers some recommendations of capability requirements for firms undertaking such work.

Details

Journal of Business & Industrial Marketing, vol. 38 no. 6
Type: Research Article
ISSN: 0885-8624

Keywords

Article
Publication date: 5 June 2017

Sune Dueholm Müller and Preben Jensen


Abstract

Purpose

The development within storage and processing technologies combined with the growing collection of data has created opportunities for companies to create value through the application of big data. The purpose of this paper is to focus on how small and medium-sized companies in Denmark are using big data to create value.

Design/methodology/approach

The research is based on a literature review and on data collected from 457 Danish companies through an online survey. The paper looks at big data from the perspective of SMEs in order to answer the following research question: to what extent does the application of big data create value for small and medium-sized companies?

Findings

The findings show clear links between the application of big data and value creation. The analysis also shows that the value created through big data does not arise from data or technology alone but is dependent on the organizational context and managerial action. A holistic perspective on big data is advocated, not only focusing on the capture, storage, and analysis of data, but also leadership through goal setting and alignment of business strategies and goals, IT capabilities, and analytical skills. Managers are advised to communicate the business value of big data, adapt business processes to data-driven business opportunities, and in general act on the basis of data.

Originality/value

The paper provides researchers and practitioners with empirically based insights into how the application of big data creates value for SMEs.

Details

Business Process Management Journal, vol. 23 no. 3
Type: Research Article
ISSN: 1463-7154

Keywords

Open Access
Article
Publication date: 4 April 2023

Orlando Troisi, Anna Visvizi and Mara Grimaldi


Abstract

Purpose

Digitalization accelerates the need of tourism and hospitality ecosystems to reframe business models in line with a data-driven orientation that can foster value creation and innovation. Since the question of data-driven business models (DDBMs) in hospitality remains underexplored, this paper aims at (1) revealing the key dimensions of the data-driven redefinition of business models in smart hospitality ecosystems and (2) conceptualizing the key drivers underlying the emergence of innovation in these ecosystems.

Design/methodology/approach

The empirical research is based on semi-structured interviews collected from a sample of hospitality managers, employed in three different accommodation services, i.e. hotels, bed and breakfast (B&Bs) and guesthouses, to explore data-driven strategies and practices employed on site.

Findings

The findings make it possible to devise a conceptual framework that classifies the enabling dimensions of DDBMs in smart hospitality ecosystems. Here, the centrality of strategy conducive to the development of data-driven innovation is stressed.

Research limitations/implications

The study thus develops a conceptual framework that will serve as a tool to examine the impact of digitalization in other service industries. This study will also be useful for small and medium-sized enterprise (SME) managers who seek to understand the possibilities data-driven management strategies offer for stimulating innovation in their companies.

Originality/value

The paper reinterprets value creation practices in business models through the lens of data-driven approaches. In this way, this paper offers a new (conceptual and empirical) perspective to investigate how the hospitality sector at large can use the massive amounts of data available to foster innovation in the sector.

Details

European Journal of Innovation Management, vol. 26 no. 7
Type: Research Article
ISSN: 1460-1060

Keywords

Article
Publication date: 2 November 2018

Ossi Ylijoki and Jari Porras


Abstract

Purpose

The purpose of this paper is to present a process-theory-based model of big data value creation in a business context. The authors approach the topic from the viewpoint of a single firm.

Design/methodology/approach

The authors reflect current big data literature in two widely used value creation frameworks and arrange the results according to a process theory perspective.

Findings

The model, consisting of four probabilistic processes, provides a “recipe” for converting big data investments into firm performance. The recipe helps practitioners understand the ingredients and complexities that may promote or hinder the performance impact of big data in a business context.

Practical implications

The model acts as a framework which helps to understand the necessary conditions and their relationships in the conversion process. This helps to focus on success factors which promote positive performance.

Originality/value

Using well-established frameworks and process components, the authors synthesize big data value creation-related papers into a holistic model which explains how big data investments translate into economic performance, and why the conversion sometimes fails. While the authors rely on existing theories and frameworks, they claim that the arrangement and application of the elements to the big data context is novel.

Details

Business Process Management Journal, vol. 25 no. 5
Type: Research Article
ISSN: 1463-7154

Keywords

Article
Publication date: 11 March 2025

Desheng Liu, Mingzhu Li, Mingsheng Li and Jing Shi


Abstract

Purpose

Data assets and digital resources (DADRs) are among the world’s most valuable resources, yet their economic value is often underrepresented in GDP statistics and corporate financial statements. This underrepresentation stems from several factors, such as the complexities of valuing data assets, the absence of standardized accounting principles for data and other intangible assets and conflicting views on the need for such accounting. In this study, we strive to reconcile conflicting views by empirically investigating whether such accounting is necessary from the perspective of investors, namely, do investors care about the accounting treatment of DADR?

Design/methodology/approach

We leverage a unique event and adopt a well-established event-study approach to examine investors’ responses to a recent regulatory announcement regarding the accounting treatment of data assets. In August 2023, China’s Ministry of Finance introduced the Interim Provisions on the Accounting Treatment of Enterprise Data Resources (hereafter referred to as the Interim Provisions), marking the world’s first formalized framework for data asset accounting. This event provides an ideal context for this inquiry.

Findings

Our findings indicate that markets respond positively to the announcement, particularly for firms with more DADR proxied in different ways. However, the positive market reaction is significantly smaller for companies with higher levels of intangible asset intensity. This result aligns with the emerging literature, suggesting that firms with high intangible intensity experience greater information asymmetry and reduced value relevance of financial statements due to inadequate accounting treatment of intangibles. Moreover, the economic implications are notable. A long–short portfolio strategy, which involves buying stocks of firms in the top quartile of DADR proxies and selling those in the bottom quartile, yields an annualized cumulative abnormal return (CAR) of over 3.00%.
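The long-short event-study logic can be sketched on synthetic data; the DADR proxy, effect size and return model below are invented for illustration and do not reproduce the paper's sample or results:

```python
# Illustrative sketch of the long-short logic on synthetic data (the
# paper's Chinese sample and DADR proxies are not reproduced): estimate
# market-model abnormal returns around the event, then compare the top
# and bottom DADR quartiles.
import numpy as np

rng = np.random.default_rng(0)
n_firms, est_win, evt_win = 40, 100, 5
market = rng.normal(0.0005, 0.01, est_win + evt_win)
dadr = rng.uniform(0, 1, n_firms)          # hypothetical DADR proxy
beta = rng.uniform(0.8, 1.2, n_firms)
noise = rng.normal(0, 0.01, (n_firms, est_win + evt_win))
returns = beta[:, None] * market + noise
# Simulated positive announcement effect, stronger for high-DADR firms.
returns[:, est_win:] += (0.01 * dadr)[:, None]

# Market model fitted on the estimation window, abnormal returns after.
ab = np.empty((n_firms, evt_win))
for i in range(n_firms):
    b, a = np.polyfit(market[:est_win], returns[i, :est_win], 1)
    ab[i] = returns[i, est_win:] - (a + b * market[est_win:])
car = ab.sum(axis=1)                        # cumulative abnormal returns

top = car[dadr >= np.quantile(dadr, 0.75)].mean()
bot = car[dadr <= np.quantile(dadr, 0.25)].mean()
print(f"long-short CAR spread: {top - bot:.4f}")
```

Buying the top-quartile firms and selling the bottom-quartile firms turns the cross-sectional relation between the DADR proxy and CARs into a single spread, the statistic the abstract annualizes.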

Originality/value

The novel insights from this study help reconcile conflicting views on the need for accounting treatment of data and other intangible assets because investors care about the accounting of data assets. Moreover, our research indicates an urgent need for the development of clear accounting guidelines for data and other intangibles, which would improve the consistency and reliability of financial reporting, benefiting all stakeholders. Finally, our findings hold important implications for regulators and accounting standard setters, especially given the ongoing debates regarding accounting for intangible assets.

Details

Journal of Accounting Literature, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-4607

Keywords

Article
Publication date: 12 August 2014

Yu-Ting Cheng and Chih-Ching Yang


Abstract

Purpose

Constructing a fuzzy control chart with interval-valued fuzzy data is an important topic in the fields of medicine, sociology, economics, service and management. It is particularly relevant when the data illustrate uncertainty and inconsistency or are incomplete, which is often the case with real data. Traditionally, a variables control chart is used to detect process shifts in real-valued data. However, when the real data are composed of interval-valued fuzzy numbers, it is not feasible to use such a traditional statistical process control (SPC) approach to monitor the fuzzy control chart. The purpose of this paper is to propose a designed standardized fuzzy control chart for interval-valued fuzzy data sets.

Design/methodology/approach

The general statistical principles used on the standardized control chart are applied to fuzzy control chart for interval-valued fuzzy data.

Findings

When the real data are composed of interval-valued fuzzy numbers, it is not feasible to use a traditional SPC approach to monitor the fuzzy control chart. This study proposes a designed standardized fuzzy control chart for an interval-valued fuzzy data set of vegetable prices in Taiwan from January 2009 to September 2010, obtained from the Council of Agriculture, Executive Yuan. Empirical studies are used to illustrate the application of the designed standardized fuzzy control chart. More related practical phenomena can be explained by this appropriate definition of a fuzzy control chart.

Originality/value

This paper uses a simpler approach to construct the standardized interval-valued chart for fuzzy data based on traditional standardized control chart which is easy and straightforward. Moreover, the control limit of the designed standardized fuzzy control chart is an interval with (LCL, UCL), which consists of the conventional range of classical standardized control chart.
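A minimal sketch of the standardized-chart idea for interval-valued observations, under the simplifying assumption that each interval endpoint is standardized against its own series (the paper's fuzzy-set machinery and data are not reproduced):

```python
# Minimal sketch: standardize each interval endpoint against its own
# series, so the plotting statistic is itself an interval and the
# classical standardized limits (LCL, UCL) = (-3, +3) apply.
# Data are illustrative, not the paper's vegetable-price series.
lows  = [9.5, 10.1, 9.8, 10.4, 9.9]
highs = [10.5, 11.0, 10.6, 11.2, 10.7]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

m_lo, s_lo = mean(lows), sd(lows)
m_hi, s_hi = mean(highs), sd(highs)
z = [((l - m_lo) / s_lo, (h - m_hi) / s_hi) for l, h in zip(lows, highs)]

LCL, UCL = -3.0, 3.0
in_control = all(LCL <= zl <= UCL and LCL <= zh <= UCL for zl, zh in z)
print(in_control)
```

Because both endpoints are standardized, each plotted point is an interval compared against the conventional standardized limits, mirroring the interval control limit the abstract describes.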

Details

Management Decision, vol. 52 no. 7
Type: Research Article
ISSN: 0025-1747

Keywords

Article
Publication date: 11 March 2025

Masresha Belete Asnakew, Melkam Ayalew Gebru, Wuditu Belete, Takele Abebe and Yeshareg Baye Simegn


Abstract

Purpose

This study aims to identify the determinants of single-family residential property values and fill a gap by analyzing respondents’ willingness-to-pay/receive data alongside real transaction data. Ordinal logistic regression and ordinary least squares regression were used.

Design/methodology/approach

Ordinal logistic regression effectively analyzes willingness-to-pay/receive data, accommodating the ordered nature of property value responses while incorporating multiple influencing factors. Ordinary least squares regression quantifies the impact of continuous and categorical predictors on real transaction data.

Findings

Findings revealed strong associations between property values and several variables. Analysis of willingness-to-pay/accept data from 232 respondents showed significant impacts of factors such as the number of rooms, site area, construction material, property orientation, property age and proximity to bus stations and the central business district (p < 0.05). Similarly, ordinary least squares regression analysis of the transaction data confirmed the significance of most of these factors, except for property orientation; this discrepancy may indicate differing preferences in the local market or reporting inconsistencies and demands further investigation. Variables such as views and proximity to wetlands, roads, green areas, religious institutions and schools were statistically insignificant across both data sets (p > 0.05).
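A hedonic regression of the least-squares kind applied to the transaction data can be sketched on synthetic data; the variables kept here (rooms, site area, age) and all coefficients are illustrative assumptions, not the study's estimates:

```python
# Illustrative hedonic OLS sketch on synthetic data (the study's
# transaction data and estimates are not reproduced): regress price
# on number of rooms, site area and property age.
import numpy as np

rng = np.random.default_rng(1)
n = 200
rooms = rng.integers(1, 6, n).astype(float)
area = rng.uniform(100, 500, n)   # site area
age = rng.uniform(0, 40, n)       # property age
# Assumed "true" relationship for the simulation only.
price = 50 + 20 * rooms + 0.5 * area - 1.0 * age + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), rooms, area, age])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(np.round(coef, 2))  # close to the simulated [50, 20, 0.5, -1.0]
```

The fitted coefficients recover the simulated marginal effects, which is the sense in which such a regression "quantifies the impact" of each predictor on transaction prices.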

Practical implications

It provides a robust basis for housing and urban development strategies. The stakeholders such as real estate developers, urban planners and policymakers are encouraged to incorporate these findings into housing policies, land value capture initiatives and urban planning frameworks to enhance residential property value and align with sustainable urban development goals.

Originality/value

This study contributes original insights into single-family residential property valuation by integrating willingness-to-pay and transaction data, substantiating the determinants of property value.

Details

International Journal of Housing Markets and Analysis, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1753-8270

Keywords

Book part
Publication date: 18 September 2006

Joel A.C. Baum and Bill McKelvey

Abstract

The potential advantage of extreme value theory in modeling management phenomena is the central theme of this paper. The statistics of extremes have played only a very limited role in management studies, despite the disproportionate emphasis on unusual events in the world of managers. An overview of this theory and related statistical models is presented, and illustrative empirical examples are provided.
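One standard technique from this theory, the block-maxima approach, can be sketched as follows; the Gumbel method-of-moments fit and the simulated data are illustrative choices, not drawn from the chapter:

```python
# Sketch of the block-maxima approach from extreme value theory:
# take the maximum of each block of ordinary observations and fit an
# extreme value distribution to those maxima. Here a Gumbel fit by
# method of moments is used for simplicity; data are simulated.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(500, 100))   # 500 blocks of 100 observations
block_maxima = data.max(axis=1)

# Gumbel (GEV with shape 0) method-of-moments estimates.
euler_gamma = 0.5772156649
scale = block_maxima.std(ddof=1) * np.sqrt(6) / np.pi
loc = block_maxima.mean() - euler_gamma * scale

# Probability of exceeding an unusually high level under the fit.
x = 4.0
p_exceed = 1 - np.exp(-np.exp(-(x - loc) / scale))
print(round(float(loc), 2), round(float(scale), 2))
```

The fitted distribution models the unusual events themselves rather than the bulk of the data, which is the advantage the abstract claims for managerial phenomena dominated by extremes.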

Details

Research Methodology in Strategy and Management
Type: Book
ISBN: 978-0-76231-339-6

Book part
Publication date: 15 March 2021

Irfan Khan

Abstract

In the age of data, enterprises have more information available to them than ever before, yet many organizations still struggle to harness its full potential. In this chapter, we explore the data value equation and how it translates into an end-to-end data management strategy that enables enterprises to turn their business data into business value. Starting with the concept of “amount,” the chapter looks at the challenge of storing big data. The second element of the equation relates to the “quality” of data and its fundamental role in enabling confident decision-making. Finally, the third element of the equation focuses on the importance of the consumption of that data in analytics tools that not only visualize the data but proactively help users uncover, explore, and act on insights. By yielding the highest value at every stage of this equation, businesses can see more, understand more, and do more with their data.

Details

The Machine Age of Customer Insight
Type: Book
ISBN: 978-1-83909-697-6

Keywords

Book part
Publication date: 14 June 2002

Alex R. Hoen

Abstract

Details

An Input-output Analysis of European Integration
Type: Book
ISBN: 978-0-44451-088-4

Book part
Publication date: 11 June 2009

Anca E. Cretu and Roderick J. Brodie

Abstract

Companies in all industries are searching for new sources of competitive advantage as competition in their marketplaces becomes increasingly intense. The resource-based view of the firm explains the sources of sustainable competitive advantage. From this perspective, relational assets (i.e., the assets resulting from a firm’s contacts in the marketplace) enable competitive advantage. The relational assets examined in this work are brand image and corporate reputation, as components of brand equity, and customer value. This paper explores how they create value. Despite the relatively large literature describing the benefits to firms of having strong brand equity and delivering customer value, no research has validated the linkage of the brand equity components, brand image and corporate reputation, simultaneously in the customer value–customer loyalty chain. This work presents a model for testing these relationships for consumer goods in a business-to-business context. The results demonstrate the differential roles of brand image and corporate reputation in perceived quality, customer value, and customer loyalty. Brand image influences the perceived quality of the products and the additional services, whereas corporate reputation acts beyond brand image in shaping customer value and customer loyalty. The effects of corporate reputation are also validated on different samples. The results demonstrate the importance of managing the brand equity facets, brand image and corporate reputation, given their differential impacts on perceived quality, customer value, and customer loyalty. The results also show that companies should not limit themselves to investing only in brand image. Maintaining and enhancing corporate reputation can have a stronger impact on customer value and customer loyalty, and can create a differential competitive advantage.

Details

Business-To-Business Brand Management: Theory, Research and Executive Case Study Exercises
Type: Book
ISBN: 978-1-84855-671-3

Article
Publication date: 14 December 2020

Morten Brinch, Jan Stentoft and Dag Näslund


Abstract

Purpose

While big data creates business value, knowledge on how value is created remains limited and research is needed to discover big data’s value mechanism. The purpose of this paper is to explore value creation capabilities of big data through an alignment perspective.

Design/methodology/approach

The paper is based on a single case study of a service division of a large Danish wind turbine generator manufacturer based on 18 semi-structured interviews.

Findings

A strategic alignment framework comprising human, information technology, organization, performance, process and strategic practices is used as a basis to identify 15 types of alignment capabilities and their interdependent variables fostering the value creation of big data. The alignment framework is accompanied by seven propositions for obtaining alignment of big data in service processes.

Research limitations/implications

The study provides empirical grounding for how alignment capabilities affect a company's ability to create value from big data, as identified in a service supply chain.

Practical implications

Service supply chains and big data are complex matters. Therefore, understanding how alignment affects a company's ability to create value from big data may help the company overcome the challenges of big data.

Originality/value

The study demonstrates how value from big data can be created following an alignment logic. In doing so, both critical and complementary alignment capabilities are identified.

Details

Supply Chain Management: An International Journal, vol. 26 no. 3
Type: Research Article
ISSN: 1359-8546

Keywords

Open Access
Article
Publication date: 14 August 2017

Xiu Susie Fang, Quan Z. Sheng, Xianzhi Wang, Anne H.H. Ngu and Yihong Zhang


Abstract

Purpose

This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase.

Design/methodology/approach

In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query stream to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is also proposed.

Findings

Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. Future research directions regarding GrandBase construction and extension are also discussed.

Originality/value

To harness the wisdom of Big Data for modern society, numerous KBs have been constructed to feed massive knowledge-driven applications with Resource Description Framework triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and differently structured data sources (i.e. the knowledge extraction problem) and reconciling the conflicts that reside in the sources (i.e. the truth discovery problem). Tremendous research effort has been devoted to both problems. However, the existing KBs are far from being comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume each predicate has only one true value. This paper focuses on the problem of generating actionable knowledge from Big Data. A system consisting of two phases, namely, knowledge extraction and truth discovery, is proposed to construct a broader KB, called GrandBase.
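The multi-valued truth discovery problem the abstract describes can be illustrated with a naive per-value voting baseline: instead of forcing a single winning value per predicate, each candidate value is accepted independently when enough sources support it. This is only a minimal sketch of the problem setting, not GrandBase's graph-based approach; the function name, threshold, and data are hypothetical.

```python
from collections import defaultdict

def multi_valued_truth_discovery(claims, threshold=0.5):
    """Baseline truth discovery for multi-valued predicates.

    claims: list of (source, predicate, value) triples, where a
    predicate is e.g. the pair (entity, attribute). A value is kept
    if the fraction of sources reporting on that predicate which
    assert it exceeds `threshold`.
    """
    # Group observations: which sources reported on each predicate,
    # and which sources asserted each candidate value.
    sources_per_pred = defaultdict(set)
    votes = defaultdict(lambda: defaultdict(set))
    for source, pred, value in claims:
        sources_per_pred[pred].add(source)
        votes[pred][value].add(source)

    truths = {}
    for pred, candidates in votes.items():
        n = len(sources_per_pred[pred])
        # Accept every value whose support ratio clears the threshold,
        # so a predicate may legitimately have several true values.
        truths[pred] = sorted(
            v for v, supporters in candidates.items()
            if len(supporters) / n > threshold
        )
    return truths

claims = [
    ("web1", ("Alice", "child"), "Bob"),
    ("web2", ("Alice", "child"), "Bob"),
    ("web2", ("Alice", "child"), "Carol"),
    ("kb1",  ("Alice", "child"), "Carol"),
    ("web1", ("Alice", "child"), "Dave"),  # weakly supported claim
]
print(multi_valued_truth_discovery(claims))
# → {('Alice', 'child'): ['Bob', 'Carol']}
```

A single-truth approach would have had to pick either Bob or Carol; per-value voting keeps both while still rejecting the weakly supported Dave.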

Details

PSU Research Review, vol. 1 no. 2
Type: Research Article
ISSN: 2399-1747

Keywords

Article
Publication date: 16 January 2023

Atiyeh Seifian, Mohamad Bahrami, Sajjad Shokouhyar and Sina Shokoohyar


Abstract

Purpose

This study uses the resource-based view (RBV) and isomorphism to investigate the influence of data-based resources (i.e. bigness of data, data accessibility (DA) and data completeness (DC)) on big data analytics (BDA) use under the moderation effect of organizational culture (i.e. IT proactive climate). It also analyzes the possible relationship between BDA implementation and value creation.

Design/methodology/approach

The research model was validated empirically through a cross-sectional procedure gathering survey-based responses. Data obtained from a sample of 190 IT executives with relevant educational backgrounds and experience in the field of big data and business analytics were analyzed using structural equation modeling.

Findings

BDA usage can generate significant value if supported by proper levels of DA and DC, which are benefits obtained from the bigness of data (high volume, variety and velocity of data). In addition, data-driven benefits have stronger impacts on BDA usage in firms with higher levels of IT proactive climate.

Originality/value

The present paper extends the existing literature by demonstrating the facilitating characteristics of data-based resources (i.e. DA and DC) for BDA implementation, which can be intensified by an established IT proactive climate in the firm. Additionally, it provides further theoretical and practical insights, which are illustrated in the paper.

Details

Benchmarking: An International Journal, vol. 30 no. 10
Type: Research Article
ISSN: 1463-5771

Keywords

Article
Publication date: 6 June 2018

Morten Brinch


Abstract

Purpose

The value of big data in supply chain management (SCM) is typically motivated by the improvement of business processes and decision-making practices. However, the value associated with big data in SCM is not well understood. The purpose of this paper is to address this poorly understood aspect of big data's value in SCM from a business process perspective.

Design/methodology/approach

A content-analysis-based literature review has been completed, in which an inductive and three-level coding procedure has been applied on 72 articles.

Findings

By identifying and defining constructs, a big data SCM framework is offered using business process theory and value theory as lenses. Value discovery, value creation and value capture represent different value dimensions and bring a multifaceted view on how to understand and realize the value of big data.

Research limitations/implications

This study further elucidates the big data and SCM literature by adding insights into how the value of big data in SCM can be conceptualized. As a limitation, the constructs and associated measures need further empirical evidence.

Practical implications

Practitioners could adopt the findings for conceptualization of strategies and educational purposes. Furthermore, the findings give guidance on how to discover, create and capture the value of big data.

Originality/value

Extant SCM theory has provided various views on big data. This study synthesizes big data and brings a multifaceted view of its value from a business process perspective. Construct definitions, measures and research propositions are introduced as an important step to guide future studies and research designs.

Details

International Journal of Operations & Production Management, vol. 38 no. 7
Type: Research Article
ISSN: 0144-3577

Keywords
