Abstract
Purpose
eXplainable artificial intelligence (XAI) is an evaluation framework that allows users to understand artificial intelligence (AI) processes and increases the reliability of AI-produced results. XAI assists managers in making better decisions by providing transparency and interpretability in AI systems. This study explores the development of XAI in business management research.
Design/methodology/approach
This study collects and analyzes business management research related to XAI, using common management keywords as the basis. We used the success/failure system to explore research guidelines for XAI in business management.
Findings
The study found significant growth in XAI research within business management. This research will be discussed from various management disciplinary perspectives to help scholars understand the current research directions. Additionally, we utilize a success/failure system to explore how this theory can be applied to artificial intelligence and business management research.
Originality/value
The success/failure system offers a comprehensive framework encompassing the evolution of the cosmos, nature, and ecology. This theory can offer valuable insights for business management in XAI and competitive societies, governments, and enterprises, enabling them to formulate effective strategies for the future.
Citation
Chang, T.-S. and Bau, D.-Y. (2024), "eXplainable artificial intelligence (XAI) in business management research: a success/failure system perspective", Journal of Electronic Business & Digital Economics, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/JEBDE-07-2024-0019
Publisher
Emerald Publishing Limited
Copyright © 2024, Tsung-Sheng Chang and Dong-Yih Bau
License
Published in Journal of Electronic Business & Digital Economics. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
Artificial intelligence (AI) research and technology began to gain traction following the Dartmouth Workshop in 1956, during which researchers deliberated the possibility of creating machines that could mimic human minds (Howard, 2019). In the 1970s and 1980s, AI development garnered minimal attention. However, since 2010, advancements in computer hardware efficiency and the emergence of algorithms and deep learning have spurred significant improvements in learning from data and handling complex tasks. These advances have fostered a proliferation of AI applications, encompassing image and speech recognition and the rapid generation of various data types to meet user needs.
AI generates outcomes based on the ideas or queries individuals provide as input. As the user provides more detailed descriptions of their conditions, the AI's response typically becomes more precise and comprehensive, meeting the user's expectations. Generative AI can greatly enhance work productivity for businesses and organizations by significantly reducing the time and cost associated with manual content creation and writing (Wu, Mou, Li, & Xu, 2020; Korzynski et al., 2023; Zarifhonarvar, 2024).
Nevertheless, many issues surround the information currently generated by AI. Some scholars argue that generative AI-produced content lacks accuracy and rationality, with specific information being outdated compared to what individuals can access through internet searches (Dwivedi, Kshetri, et al., 2023; Gupta, Akiri, Aryal, Parker, & Praharaj, 2023; Zarifhonarvar, 2024). Therefore, to demystify AI’s operations as a black box, there is an increasing emphasis on eXplainable Artificial Intelligence (XAI) concepts and principles.
The XAI aims to enhance AI by providing results through a transparent and explainable decision-making process, which may encompass visualizations, data sources, statistics, and more. The distinction between XAI and traditional AI lies primarily in the transparency and interpretability of the decision-making processes. While traditional AI often operates as a “black box,” providing little insight into how decisions are made, XAI focuses on making these processes understandable and transparent to users (Rai, 2020). Some more mature machine learning algorithms provide interpretable analysis, such as Shapley additive explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). These techniques help in explaining the reasoning behind AI decisions, making them more accessible to human understanding (Černevičienė & Kabašinskas, 2024; Dwivedi, Dave, et al., 2023). The additional information provided by XAI empowers users to comprehend the context of the content, ensures the integrity of the AI’s analyses, and instills confidence in the content generated by the AI (Langer et al., 2021). For instance, in the context of a student or academic research report, software tools such as AI Text Classifier and Hive moderation can be employed to ascertain whether generative AI was used in the article.
Additionally, Turnitin AI detection can be utilized to identify instances of plagiarism. The Turnitin system furnishes the user with a list of similar sources, enabling them to discern the extent of relevance between their article and others and to pinpoint any issues precisely. The results generated by this tool are persuasive and supported by detailed information. XAI offers users information on “interpretation”, “transparency”, and “explainability” (Bunn, 2020; Von Eschenbach, 2021). These three terms belong to distinct categories in their definitions, ranging from basic to advanced. Various arguments exist for explainability and transparency, but their common purpose is augmenting individual decision-making and use patterns.
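To make the SHAP technique mentioned above more concrete, the sketch below computes exact Shapley values for a tiny model in pure Python by enumerating feature coalitions. The toy "credit risk" model, its feature names, and the baseline are our illustrative assumptions, not from the cited studies; production SHAP libraries approximate these values efficiently for real models.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, instance, baseline):
    """Exact Shapley values for a small feature set by enumerating coalitions.
    `predict` maps a feature dict to a score; features absent from a coalition
    take their baseline values."""
    features = list(instance)
    n = len(features)
    values = {}
    for f in features:
        others = [g for g in features if g != f]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Classic Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: (instance[g] if g in subset or g == f else baseline[g])
                          for g in features}
                without_f = {g: (instance[g] if g in subset else baseline[g])
                             for g in features}
                phi += weight * (predict(with_f) - predict(without_f))
        values[f] = phi
    return values

# Hypothetical linear "credit risk" scorer: debt raises risk, income lowers it.
def risk(x):
    return 0.5 * x["debt"] - 0.3 * x["income"]

instance = {"debt": 4.0, "income": 2.0}
baseline = {"debt": 0.0, "income": 0.0}
print(shapley_values(risk, instance, baseline))
# For a linear model, each Shapley value equals that feature's additive contribution.
```

Attributions of this kind are what lets a manager see *why* a score was produced, rather than only the score itself.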
In today’s business environment, data-driven decision-making is essential for effective management (Paschek, Luminosu, & Negrut, 2020). Traditional data analysis tools, however, no longer suffice to address the growing complexity and uncertainty in modern business operations. XAI presents a transformative opportunity for businesses by automating and optimizing decision-making processes, enhancing transparency, and providing clear insights into AI-generated outcomes. XAI enables organizations to harness AI for evaluation and planning while equipping employees—even those with limited experience—to make more accurate judgments based on explainable AI content (Albahri et al., 2023; Karyamsetty, Khan, & Nayyar, 2024). This capability proves crucial in improving the quality of managerial decisions across various business domains such as supply chain management, production, and customer service. By cultivating a deeper understanding of task-related contexts and potential risks, XAI strengthens decision-making reliability and offers businesses increased flexibility and a competitive edge in tackling future challenges.
Currently, business management employs bibliometrics, reviews, and meta-analysis to analyze the evolution of the AI research field. For instance, Han et al. (2021) extensively reviewed the literature to assess AI’s role in business-to-business marketing. Sestino and De Mauro (2022) conducted text mining on the Elsevier and Scopus databases, summarizing 3,780 AI-related studies in business and marketing. Their study categorized these studies into three themes: implications, applications, and methods, through a systematic literature review. Cubric (2020) combined bibliometrics and meta-analysis to scrutinize AI-related studies, evaluating the quality of papers published between 2000 and 2019 in the context of business management and AI.
However, it is worth noting that there needs to be more discourse on XAI, with most scholars still exploring traditional AI applications. Some experts posit that XAI possesses the potential to spearhead the next generation of AI and should warrant academic attention (Ali et al., 2023; Arrieta et al., 2020; Minh, Wang, Li, & Nguyen, 2022). Consequently, this study aims to investigate the current state of XAI within the academic realm of business management. We will leverage recent literature statistics on XAI to assess its status. Furthermore, we will employ the success/failure system theory to appraise the XAI and business management research.
The success/failure system, introduced by Bau (2018a), differs from conventional business theories in a key way: it views failure factors, reasons, and events not merely as setbacks, but as strategic opportunities for development and the creation of new business activities. The success/failure systems do not assume any specific factor as inherently correct or indicative of success. Instead, it treats successful and failed outcomes as valuable sources of insight, offering a balanced perspective on organizational decision-making.
Past models of human decision-making show that even when information is transparent and explainable, human decision-making often leads to varying interpretations, both positive and negative. For instance, a scholar specializing in technology adoption behavior and stock price prediction might apply for a university position. Some may view this individual as a multidisciplinary talent, while others may perceive a lack of focus, potentially leading to a rejection. Even in the context of XAI, where transparency and interpretability are critical, the final decision still relies on human judgment. Thus, the theory allows us to explore how organizations can integrate AI-driven insights while remaining aware of potential risks.
We hope this theory can offer a novel perspective and serve as a reference for future studies. We will explore the following two research questions to achieve our research objectives.
What is the current situation of studies between XAI and business management research?
How can success/failure systems serve as an academic basis between XAI and business management research?
2. XAI in business management research
This section elucidates the data collection process and presents the results. We collected data from the web to elucidate the current state of research on XAI as applied in business management. Exploring the importance of XAI in business management research can help scholars develop future research directions.
2.1 Data collection process
Due to the limitations and resource constraints in our research, we examined two distinct databases: Elsevier ScienceDirect and EBSCO-Business Source Corporate Plus. We conducted separate evaluations for the results obtained from each database. The data were collected on October 1, 2023, representing a static record. Given that XAI is an emerging AI topic, our observation period spans from 2016 to 2023; no relevant records appear before 2016.
We searched for specific keywords because business is the focus of our discussion. According to the Journal Citation Reports, the business and management categories fall under the broader economics and business group. Therefore, we employed “business,” “economics,” and “management” as the basis for our keyword combinations in the search and statistical analysis. Initially, we used “business management” as a keyword, but it yielded few results. Using the broader keyword “management” alone brought in research from various unrelated fields, such as engineering, complicating the dataset. Similarly, using “XAI” as a keyword introduced many specialized terms, leading us to avoid such abbreviated keywords to prevent the inclusion of irrelevant articles. Our final keyword combinations were: “Explainable Artificial Intelligence” and “business,” “Explainable Artificial Intelligence” and “economics,” and “Explainable Artificial Intelligence” and “marketing.” Among these, marketing represents the most established and widely studied field within business management.
To perform content analysis, we initially classified the journals into major recurring publication types and their corresponding disciplines, including the Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). We cross-checked the content to ascertain the associated research areas. Subsequently, we sought confirmation from three professors in the Faculty of Management to ensure the absence of categorization bias.
2.2 Number of XAI studies in business management
Figure 1 displays the number of papers on XAI in business management from 2016 to October 2023. In 2020, the topic of XAI gained increased attention. While there is no explosive surge in the volume of studies, the average annual growth rates of papers for the first keyword combination between 2021 and 2023 are strikingly high at 110%, 69%, and 64%, respectively. These figures signify the emergence of XAI as a significant and noteworthy trend in scientific research.
When searching Elsevier ScienceDirect for the keyword “Explainable Artificial Intelligence” in isolation, we find 2,770 studies employing this term in 2016, followed by 7,910 until 2020, 11,643 in 2021, and 15,315 in 2022, with a further increase to 17,740 in 2023. The trend in EBSCO-Business Source Corporate Plus is also ascending annually. After conducting a time trend analysis, we estimate that by 2025, the number of articles will surpass 20,000 per year. Compared to all studies, there are few studies on XAI in business management, possibly due to the absence of mature products supporting corporate activities and the scarcity of indicative cases.
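As a transparency-minded sketch (our own illustration, not the study's actual method, which is unspecified beyond "time trend analysis"), a simple least-squares fit over the yearly counts reported above for 2021-2023 already projects past 20,000 articles per year by 2025:

```python
# Yearly counts of "Explainable Artificial Intelligence" studies from the text.
years = [2021, 2022, 2023]
counts = [11643, 15315, 17740]

# Ordinary least-squares line: slope and intercept from the three points.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(counts) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

projection_2025 = intercept + slope * 2025
print(round(projection_2025))  # well above the 20,000-per-year threshold
```

A linear trend is, of course, a conservative assumption for a field in its growth phase; any saturation or acceleration would change the projection.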
2.3 Current XAI topics in business management studies
While we employed keywords to conduct the literature search, it was unavoidable that specific papers related to computer science would surface. Therefore, we observe the first keyword combination as a content analysis. Table 1 provides a summary of the outcomes. In summary, the predominant types of current research encompass bibliometrics, reviews, conceptual exploration, and trend discussions. These articles guide scholars in XAI’s future directions (e.g. Angelov, Soares, Jiang, Arnold, & Atkinson, 2021; Brasse, Broder, Förster, Klier, & Sigler, 2023; Islam, Ahmed, Barua, & Begum, 2022).
In business management, finance and accounting emerge as XAI's most explored research areas. These studies encompass the utilization of AI in finance, stock market forecasting, risk analysis, and related topics, offering direct assistance to supervisors in enhancing financial management effectiveness (Lachuer & Jabeur, 2022; Zhang, Cho, & Vasarhelyi, 2022). For example, Zhang, Wu, Qu, and Chen (2022) propose an XAI approach that integrates an ensemble method and an interpretation framework for financial distress prediction. The LightGBM model achieved the highest AUC (0.92), providing explanations that help individual companies identify key factors leading to financial distress, while counterfactual explanations offer strategies for improvement. Second, researchers have conducted numerous studies on business operations, particularly in digital transformation, work productivity, and supply chain management, exploring how XAI can assist enterprises in addressing current challenges (Johnson, Albizri, Harfouche, & Tutun, 2023; Senoner, Netland, & Feuerriegel, 2022). For example, Bhatia and Albarrak (2023) propose a model called XAI-based Faster RCNN (Regions with Convolutional Neural Networks) to evaluate the contents of food items using web systems and QR codes. The system retrieves detailed nutritional information and uncovers hidden details embedded in QR codes, enhancing food safety and quality by ensuring the accuracy of food security information.
Furthermore, our findings reveal a significant body of research that employs XAI to facilitate sustainable business operations, assisting organizations in managing resources such as electricity, water, and waste (Ha, Sah, Park, & Lee, 2022; Maarif, Saleh, Habibi, Fitriyani, & Syafrudin, 2023). Other studies span information systems research and analyses of social network sites, arguing that embracing the potential of XAI is crucial for staying ahead in today's digital landscape (Leichtmann, Humer, Hinterreiter, Streit, & Mara, 2023; Yılmaz Benk, Badur, & Mardikyan, 2022). Conversely, we observed a comparatively lower volume of research papers on XAI in marketing, retail, and customer service. This trend may be attributed to the fewer innovative XAI applications in these business activities. XAI can nonetheless be applied across many business activities; we have compiled some current technologies and potential business applications in Table 2.
Regarding other research paper categories, transportation topics dominate, followed by law and ethics. It is worth noting that these patterns emerge because specific categories unrelated to business management were excluded based on keyword criteria. Although the number of papers in this survey is small, this does not imply that XAI research in these fields is scarce.
We utilized Python to conduct semantic analysis on the gathered research, encompassing paper titles, abstracts, and keywords. We categorized significant terms and visualized them in word clouds (see Figure 2). Among these terms, “Model” is the most frequent, occurring 846 times, followed by “Learn” with 665 instances and “Data” with 637 occurrences. Notably, we observed that several words are more closely linked to business management within these texts. Specifically, “Predict” appears 448 times, “Decision” 344 times, “Technology” 177 times, “Manage” 147 times, and “Industry” 134 times. “Risk” is mentioned 118 times, “Market” and “Trust” appear 74 times, “Financial” 73 times, “Privacy” 68 times, and “Ethics” 46 times.
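The core of the term-counting step can be sketched as follows. This is an illustrative reconstruction, not our actual analysis script: the tokenizer, stopword list, and toy corpus are assumptions, and the study's real pipeline also applied stemming (e.g. "Learn", "Manage") before counting.

```python
import re
from collections import Counter

def term_frequencies(texts, stopwords=frozenset({"the", "a", "an", "of", "and", "in", "for", "to"})):
    """Count lowercase terms across paper titles, abstracts, and keywords.
    The regex tokenizer and stopword list here are illustrative choices."""
    counter = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counter.update(t for t in tokens if t not in stopwords)
    return counter

# Hypothetical two-document corpus standing in for the collected papers.
corpus = [
    "An explainable model to predict financial risk",
    "Deep learning model for supply chain decision support",
]
freq = term_frequencies(corpus)
print(freq.most_common(3))  # "model" tops this toy corpus
```

Frequency tables of this kind feed directly into the word-cloud visualization described above.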
In addition to the data described above, we observe several key discussions in the existing literature. First, there are relatively few empirical studies on XAI in business management. Recent studies, such as Masialeti, Talaei-Khoei, and Yang (2024), analyze portfolios of AI applications, while earlier research remains limited in scope. Most current academic discussions focus predominantly on generative AI, likely due to the limited adoption of XAI products within organizations. Second, numerous studies propose potential future developments for XAI in specific management fields, such as smart cities (Javed et al., 2023) and supply chain management (Mugurusi & Oluka, 2021). These studies identify potential research directions based on scholars’ experiences and existing literature but lack solid theoretical foundations. Although they guide future scholars on relevant topics, they fail to provide a specific theoretical framework to support these directions. Third, much of the existing literature remains focused on technical discussions, with limited exploration of XAI’s practical applications in business management. While technical details are essential, solely focusing on technology overlooks the broader value of XAI in enhancing decision-making processes and management efficiency. Technology-oriented studies often fail to thoroughly examine how XAI influences organizational decision-making or improves operational performance.
In sum, the current literature on XAI’s application in business management is still in its early stages. Future research should explore applying XAI technology effectively within business environments and expand the theoretical framework to offer more concrete guidance for scholars and practitioners. Additionally, further empirical research on XAI applications across various management domains will drive the advancement of theory and practice in this emerging field.
3. Success/failure system perspective
This section introduces the success/failure system and explains why this theory has been used to explain the development of XAI and business management research. We will use the success/failure system to present different propositions for XAI and business management research, which will serve as a reference for future scholars and researchers.
3.1 Success/failure system
The success/failure system is a concept proposed by Bau (2018a). In his research (Bau, 2018b, 2019a), this system (theory) can explain the evolutions across ecology, nature, and the cosmos. This theory is mainly derived from Einsteinian science. Einstein (Einstein & Calaprice, 2011) believed that “nature [the universe] is a perfect structure, seen from the standpoint of reason and logical analysis.” Einstein’s principle theory aims to uncover universal laws in biology, addressing crucial questions in the field. One significant contribution of this hypothesis is clarifying humanity’s position in the universe teeming with life and the subsequent imperative to preserve and sustain our planet, Earth. Additionally, the hypothesis catalyzes the realms of physics and biology, presenting a scientific breakthrough. This integration has the potential to ignite interdisciplinary curiosity, bringing science closer to its ultimate goal of unity.
Due to the rigorous nature of Einsteinian science in revealing the laws of nature, any alteration or deviation from the fundamental principles is deemed incorrect. In other words, when applying the principle theory to study the mesocosmos, an individual will eventually and consensually reveal the success/failure system as the law of nature at the mesocosmic level we live in. The success/failure system holds great significance as it represents a natural law that governs the mesocosmic level within our vast universe (Bau, 2019a). Thus, the primary contribution of the success/failure system in Einsteinian science is that it forms part of a theory of the universe. Whereas the success/failure system governs the mesocosmic level, cosmic inertia is a law of nature that accounts for the universe as a single significant whole.
The success/failure system is a theoretical concept that can offer valuable insights and methods for exploring research in business management and XAI. In the realm of business management, this system allows for the evaluation of a business or organization’s performance. Managers can better understand their business’s functioning and make more informed decisions by analyzing various factors contributing to success and failure.
3.2 Success/failure system and XAI
In order to develop the success/failure system hypothesis as a principle theory, it is crucial to consider a wide range of complex and interconnected facts, collectively known as general facts. This general fact encompasses various mathematical concepts, including the interdependence of conditions for success and causes of failure. By treating this general fact as an axiom and employing discrete mathematical reasoning, we can construct a hypothetico-deductive system to establish the principle of the success/failure system, which can be described as follows (see Figure 3):
- (1)
Suppose we acknowledge the general fact that various factors influence the success of a business.
- (2)
In the case of a successful enterprise, there are typically multiple critical success factors at play. These factors, in turn, can be attributed to other sub-factors that influence success. All of these factors contribute to the success of the business. The same is true for artificial intelligence: the results generated by iteration are composed of one layer after another. We cannot fully believe that the result of AI is an absolute answer (value).
- (3)
As we observe these key success factors, we realize that their usefulness and effectiveness stem primarily from their interdependencies with other success factors. Consequently, these factors form a network system that enables the company to achieve its goals. Similarly, machine learning and deep learning involve many algorithms and iterations, a network-like evolutionary derivation process, and the number of these iterations and variations should be reportable back to the user.
- (4)
The principle of the success/failure system can be explained using mathematics (Bau, 2019b): Partial Ordering (PO) conditions for success = Partial Ordering (PO) causes of failure. Therefore, we cannot assume that an XAI's or a firm's success is solely the result of the presence of success factors; it is also plausible that certain failure factors shape the outcomes, thereby establishing a complex success/failure system. For an organization, the occurrence of either failure or success factors is a fact of life that is difficult to correct. However, XAI should be able to correct this by making system adjustments to realize the logic of the success/failure system.
In summary, in traditional AI responses, users only get their answers, making it challenging for them to assess the credibility of those answers. AI generates results that users can accept (success) or discard (failure). XAI transforms the black box within which machine learning and deep learning operate into a white box, revealing the evidence and the source of the reasoning. XAI's outcomes are evaluated with context and data weightings, empowering users to directly assess which (success) factors they accept or reject.
Considering the success/failure system, we advocate reporting AI results and letting users directly recalibrate the untrustworthy (failure) or success factors. This system supports the adjustment of different factors, as outlined in point 4, empowering users to enhance the effectiveness of AI in aiding and facilitating tasks. We call this concept the “Calibration of XAI.”
Calibration in XAI describes how individuals interpret the output information of XAI results. They subsequently adjust, correct, and test, hoping to obtain better results from artificial intelligence. The efficacy of calibration is not solely determined by artificial intelligence. When people encounter interpretable and transparent information, they may feel inclined, or compelled, to retrain and re-execute the XAI, leading to repeated calibration behavior in pursuit of the optimal solution. Thus, we recommend that AI software provide services for interpreting data. It need not confirm the correctness or failure of its content; instead, it should inform the user about the weight of these factors in forming the result, allowing them to assess the content's correctness and significance independently. The system must offer functionality enabling users to reapply interpretable data as calibrated artificial intelligence.
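The calibration loop described above can be sketched conceptually as follows. Every name, the weight report, and the down-weighting rule are our illustrative assumptions (not a product API): the system exposes each factor's share of the result, the user marks untrustworthy (failure) factors, and the model is re-run.

```python
def explain(weights):
    """The XAI step: report each factor's share of the result."""
    total = sum(abs(w) for w in weights.values())
    return {f: abs(w) / total for f, w in weights.items()}

def calibrate(weights, rejected, damping=0.5):
    """The user step: shrink the influence of factors the user distrusts.
    The damping rule is a stand-in for retraining with corrected inputs."""
    return {f: (w * damping if f in rejected else w) for f, w in weights.items()}

# Hypothetical factor weights behind an AI recommendation.
weights = {"market_trend": 0.6, "outdated_source": 0.3, "risk_signal": 0.1}

report = explain(weights)                          # transparent weight report
weights = calibrate(weights, {"outdated_source"})  # user rejects one factor
report = explain(weights)                          # re-run: shares rebalance
print(report)
```

In practice this cycle repeats: each new report may prompt further rejection and retraining until the user accepts the result.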
This study proposes to utilize the success/failure system to analyze or explain business management and XAI applications. Additionally, we provide directions for further academic development. Our objective is to enhance contributions to the evolution of social science by employing this system concept or law.
3.3 Success/failure system in business management
Early research in business management extensively examined the factors contributing to success and failure. For example, Belassi and Tukel (1996) researched project management activities to investigate critical factors contributing to project success or failure; their objective was to develop a framework that facilitates project evaluation. Malik and Khan (2021) discuss the key factors for the success and failure of enterprise resource planning (ERP) systems in developing countries. However, these studies treated failure factors only in terms of their negative effects, and few have explored their importance to success.
Most theories in business management, such as the Technology Acceptance Model, Resource-based Theory, Dynamic Capabilities, and Innovation Diffusion, generally promote a positive outlook. They offer valuable insights and frameworks for understanding and analyzing various social phenomena. Thus, many business management studies mostly explore the success factors that bring about favorable performance in terms of their research objectives. Only a few studies consider failure, such as Toots' case study of the Estonian e-participation portal Osale.ee. Researchers discovered that e-participation systems face three challenging factors (Toots, 2019); the inherent complexity of these challenges renders e-participation systems susceptible to failure. Kaur, Dhir, Singh, Sahu, and Almotairi (2020) adopted the Innovation Resistance Theory to explore barriers to mobile payment use and to study negative predictors.
The achievement of business goals should consider the coexistence of success and failure factors. Because failure factors are assumed not to bring success, their importance is easily overlooked. Cecez-Kecmanovic, Kautz, and Abrahall (2014) mentioned that success or failure factors may cause the success or failure of project product development. Some scholars also mention the existence of this phenomenon (Fowler & Horan, 2007; Rese & Baier, 2011; Luo & Chang, 2023), and some attribute failure factors to the formation of key factors (Bennett & Snyder, 2017; Say & Vasudeva, 2020). Concerning the success/failure system, certain success factors can exert an influence on the failure system.
The success/failure system is a theory that can integrate positive and negative factors; for negative factors, however, there will be a negative relationship in the constructed model. In recent years, reports that use failure as a factor in developing academic research have received increasing attention, such as Castro-Arce and Vanclay (2020) and Nambisan, Wright, and Feldman (2019). A failure factor can be a key that positively affects success and may affect other success factors, thus triggering a chain reaction. For example, during the Covid-19 pandemic, supply chain disruptions occurred worldwide (Moosavi, Fathollahi-Fard, & Dulebenets, 2022), yet some companies obtained new orders because of production disruptions in regional countries. When examining cases of technological development, it becomes evident that factors such as high technology costs and the absence of government support often lead many enterprises to turn their interest toward other technologies. In short, some companies' success is due to the problems/failures of certain factors, and only through such opportunities can companies succeed.
Building on the concepts discussed above, we examine the development of Kentucky Fried Chicken (KFC) in Asia as an illustrative example. KFC’s decision to sell Portuguese-style egg tarts was a strategic move to align its offerings with local tastes in various Asian markets. This story originates from Lord Stow’s Bakery in Macau, where British expatriate Andrew Stow crafted a unique version of the Portuguese “pastel de nata” in 1989. Stow’s version gained significant popularity, sparking an egg tart trend throughout East Asia (St Cavish, 2017; Tong, 2017). At the same time, KFC introduced roasted chicken to appeal to Asian consumers. However, the production process differed substantially from their signature fried chicken, requiring significant investment in new equipment. Unfortunately, the sales of roasted chicken did not meet expectations, jeopardizing the return on investment in this new infrastructure (Li, 2024; Lichtenberg, 2012). KFC Asia sought alternative products that could leverage the existing equipment more effectively. The introduction of egg tarts capitalized on the product’s rising local popularity and allowed KFC to use its existing equipment without further capital expenditure. In 1999, Margaret Wong, Andrew Stow’s ex-wife, sold the recipe rights to KFC, initially launching the egg tarts in Hong Kong and Taiwan before expanding to mainland China and other regions (St Cavish, 2017). The egg tart quickly became a signature dessert for KFC across Asia, prompting the chain to build a dedicated egg tart factory in China to meet growing demand. By 2010, KFC had sold millions of egg tarts, reinforcing the brand’s connection with local consumers who appreciated freshly baked goods.
In the academic development of business management, the success/failure system can be conceptualized as illustrated in Figure 4. This framework is beneficial for research that examines how organizations achieve their business goals. We propose a three-tiered structure incorporating various sub-factors influencing success and failure. These sub-factors provide deeper insights into the underlying causes of success or failure within an organization, allowing researchers and practitioners to identify critical areas that impact organizational performance and outcomes. The first level contains the sub-factors; these constructs should not relate directly to the goals. This level explains the antecedents that shape the core success and failure factors, and the relationships between constructs at this level may be positive or negative. Moreover, a single construct can affect several factors at the second level.
The second level is the core success/failure system. Unlike other theories, its factors must positively affect the third level. Although success and failure factors may be related, the success/failure system focuses primarily on the dependency relations within a system, considering both the system’s overall success or failure and the performance of its individual components. We therefore suggest that, when constructing the second level, the relationship between these two factors should not be modeled, as doing so makes the model unnecessarily complicated when explaining the relationship between success and failure. Adding confounding factors, however, is acceptable.
The third level involves discussing the enterprise’s (organization) goals, which typically revolve around performance, profitability, and value creation. Constructing this model can support the formulation of operational and management strategies and enable adjustments to the operational framework. This model can be applied to various research topics, including marketing, strategy, information systems, organization, and policy, providing fresh insights for academic advancement.
According to the success/failure system construction model, it is advisable to refrain from formulating hypotheses with a mediation effect, as this can lead to confusion. A moderator effect, on the other hand, is acceptable and is best situated between the second and third levels.
The three-tiered structure of the success/failure system differs from existing frameworks: the model incorporates both the factors (success/failure) and their underlying causes, offering a holistic view of organizational dynamics. This perspective allows researchers to capture more complex relationships within organizations and provides fresh avenues for academic inquiry. By incorporating organizational goals at the third level, the model can inform strategic decision-making and the development of operational frameworks. Researchers can apply this model to investigate how businesses adjust strategies to enhance performance and value.
4. Success/failure system to evaluate XAI in business management
From a business perspective, XAI enables enterprises and financial institutions to provide clear and transparent explanations throughout the analysis and decision-making process; this allows both customers and internal decision-makers to understand the AI model’s decision-making logic and predictive outcomes, helping organizations effectively manage and mitigate risks. For instance, in Japan’s financial sector, XAI technology is widely used in credit scoring and loan approval processes. According to research by The Japan Research Institute (2022), when a customer’s loan application is declined, the system explains the underlying risks and provides a predictive model recommending how the customer can reduce them. By submitting an appropriate improvement plan, customers still have an opportunity to pass the review. This transparency enhances customer satisfaction and strengthens trust in the bank’s decision-making process. Additionally, Hitachi Consulting applies XAI in enterprise data analysis to help companies identify potential success and risk factors (Generative AI Media, 2024; Hitachi, 2020). XAI’s visual and transparent inferences reveal risks and failure factors that traditional analysis tools often overlook, thus improving corporate decision-making accuracy and risk-control capabilities. This study examines the three levels of an organization from top to bottom: decision-making, management, and operations.
Business management regularly faces intricate decisions encompassing market strategy, supply chain management, and human resources. It is feasible for AI to facilitate automated corporate decision-making to support corporate operations and management, although many of these decisions tend to address structural issues. XAI technology offers the potential to enhance decision-making and to enable a new generation of decision support systems and business intelligence tools that can be quantified, evaluated, and used for scenario simulation. Moreover, within the context of a success/failure system, XAI requires calibration to ensure its credibility: it must undergo quantitative evaluation, simulate diverse scenarios, and execute calibration procedures. Analyzing XAI’s performance across various decision-making scenarios allows us to identify its success and failure factors, ultimately enhancing decision support systems and management efficiency.
At the management level, XAI can assist in various tasks, including production, marketing planning, financial forecasting, alerting, and customer relationship management. With XAI’s guidance, promptly addressing task requirements and gaining a more precise understanding of the core issue become crucial. For instance, companies utilize big data to analyze customer reviews and social media discussions, employing semantic analysis to categorize text, which necessitates recategorization and interpretation. Negative words are typically disregarded; in fact, they should be analyzed and compared against negative language patterns to assess customer sentiment. Applying this principle, companies can design responses to customers that align more closely with their expectations.
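The review-analysis step described above can be sketched with a minimal lexicon-based classifier in which negative words are counted rather than discarded. The word lists and scoring rule below are illustrative assumptions, not the method of any cited study.

```python
# Minimal lexicon-based sentiment scoring for customer reviews.
# The lexicons and the scoring rule are illustrative assumptions.

POSITIVE = {"great", "fast", "friendly", "reliable", "love"}
NEGATIVE = {"slow", "broken", "rude", "refund", "disappointed"}

def sentiment_score(review: str) -> float:
    """Return a score in [-1, 1]; negative words are counted, not discarded."""
    words = [w.strip(".,!?").lower() for w in review.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

reviews = [
    "Great service and fast delivery, love it!",
    "Slow shipping and the item arrived broken.",
]
scores = [sentiment_score(r) for r in reviews]
print(scores)  # first review scores positive, second negative
```

In practice a company would replace the toy lexicons with a trained semantic model, but the principle is the same: the negative vocabulary carries the signal needed to align responses with customer expectations.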
Regarding operation, the application scope of XAI can be more diversified, encompassing emerging enhancements in production management, equipment maintenance, real-time customer service, membership management, and more. However, in current applications, the integration of XAI technologies often lacks comprehensive implementation strategies and sufficient customization to meet the specific needs of each business. This limitation makes it challenging to deliver innovative management solutions and services that can effectively bring added value to the enterprise.
For example, consider the inventory management of a retail store. In the past, when the system indicated the need for restocking, the store would simply begin restocking early. Under the success/failure principle, when a product is out of stock, consumers face roughly three choices: first, purchase a substitute; second, visit another store; or third, abandon the purchase. In the second and third cases, the store gains nothing. Consequently, a store employing an intelligent inventory system can provide restocking information promptly, evaluate potential product substitutes, and determine whether price reductions could attract consumers to make a purchase. Furthermore, companies can leverage the XAI concept in marketing management to design innovative experiential services.
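The stock-out response just described can be expressed as a simple decision rule; the function name, thresholds, and margin-based criterion below are hypothetical choices for illustration, not part of any deployed system.

```python
# Sketch of the stock-out response logic described above.
# Thresholds, argument names and the margin rule are illustrative assumptions.

def stockout_response(substitute_available: bool,
                      substitute_margin: float,
                      discount_margin: float,
                      min_margin: float = 0.05) -> str:
    """Choose an action when a product is out of stock, aiming to keep
    the customer from visiting another store or abandoning the purchase."""
    if substitute_available and substitute_margin >= min_margin:
        return "offer_substitute"      # keep the sale in-store
    if discount_margin >= min_margin:
        return "discount_preorder"     # price cut on the back-ordered item
    return "notify_restock_date"       # at least retain the customer relationship

print(stockout_response(True, 0.12, 0.08))   # offer_substitute
```

An XAI layer would additionally explain *why* a given action was chosen (e.g. which margin threshold was binding), which is what distinguishes this from a plain rule engine.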
From the standpoint of strategy management, how can failure be a contributing factor? Because a failure factor can alter the original success factors or the existing environment and thereby create opportunities. Kakkar and Chitrao (2021) proposed that successful innovation is built upon consumers’ failure to accept innovation. The fable of “The Hare and the Tortoise” offers a simple illustration: if the enterprise plays the role of the tortoise, it can succeed by causing the hare to fail, and making the hare fail is the principle of competition.
In addition, the black box problem remains a significant challenge for XAI, but it is not entirely insurmountable. By integrating existing XAI technologies, such as SHAP, LIME, and Grad-CAM, a certain level of interpretability can be achieved in business applications, allowing for the identification of key success and failure factors that impact performance. However, the design of this framework should prioritize addressing specific practical objectives, rather than attempting to fully resolve all black box concerns. Future research should further investigate how this framework can adapt to various challenges and propose potential solutions to the black box problem, ultimately enhancing its practical value and effectiveness.
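To make the attribution idea behind methods such as SHAP concrete, the sketch below computes exact Shapley values for a hypothetical three-feature credit-scoring model (the setting used in Table 2). The weights, feature names, applicant values, and baseline are assumptions for illustration only; a real application would use the SHAP library against a trained model.

```python
from itertools import combinations
from math import factorial

# Exact Shapley attributions for a toy credit-scoring model.
# Model weights, feature names and baseline are illustrative assumptions.

FEATURES = ["income", "debt_ratio", "late_payments"]

def score(x):
    # Hypothetical linear credit score: higher is better.
    return 2.0 * x["income"] - 3.0 * x["debt_ratio"] - 1.5 * x["late_payments"]

def shapley(f, x, baseline):
    """Exact Shapley value per feature: the weighted average marginal
    contribution over all coalitions, with absent features set to baseline."""
    n = len(FEATURES)
    phi = {}
    for i in FEATURES:
        others = [j for j in FEATURES if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = {j: x[j] if (j in S or j == i) else baseline[j] for j in FEATURES}
                without_i = {j: x[j] if j in S else baseline[j] for j in FEATURES}
                total += weight * (f(with_i) - f(without_i))
        phi[i] = total
    return phi

applicant = {"income": 5.0, "debt_ratio": 0.8, "late_payments": 3.0}
baseline = {"income": 4.0, "debt_ratio": 0.4, "late_payments": 0.0}
phi = shapley(score, applicant, baseline)
print(phi)  # negative attributions flag the failure factors behind a rejection
```

From a success/failure system perspective, the negatively attributed features are exactly the failure factors a bank would surface to a declined applicant, while positive attributions identify the success factors to preserve.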
Observing the historical context, the industry will always face a recession or a considerable challenge. In the past 30 years, the development of the Internet and information technology has caused many large enterprises in the 20th century to encounter problems today (e.g. Barnes & Noble, Kodak). Enterprises are in an environment of continuous competition, and the wrong choice of strategy may lead to failure. The choice of expansion strategy is a method to avoid entry failure, including the extension of technology, product, and market, as well as the reframing of existing product fields.
Several studies have proposed future directions for XAI and how it can guide business activities, with some proposals grounding future research directions in past research themes. We instead propose a success/failure system and a new line of strategic thinking in which artificial intelligence is expected to reshape business activities.
5. Conclusion
This study contributes as follows: First, it provides a comprehensive review of current research papers in business management, offering insights into the volume of research output in this domain. With XAI emerging as a significant trend, we map the trajectory of its integration into business management research. Second, this study explores the case of XAI in business management. The success/failure system framework offers a novel perspective beyond traditional methodologies: it enables a structured analysis of dependency relations, providing a new way to identify the interactions between success and failure factors.
Future research on the success/failure system could focus on expanding its application across various industries, especially in areas where failure is not typically seen as a strategic opportunity, such as technology adoption and crisis management. Empirical validation through case studies or surveys would help confirm its effectiveness. Integrating the success/failure system with established business theories, like innovation resistance or organizational learning, could offer a broader theoretical framework. XAI could optimize its application in business contexts, such as financial distress prediction, by enhancing its interpretability and real-time decision support capabilities. Additionally, exploring the effectiveness of XAI across different fields, including supply chain management and innovation resistance, would further validate its utility. Moreover, adapting XAI models to meet the cultural and market-specific demands across different regions would improve global applicability. Finally, future studies should emphasize how XAI can enhance transparency and build trust among external stakeholders, such as regulators and customers. These directions would further strengthen the role of XAI in business applications and extend its impact.
This study’s limitations are as follows: First, we could not access an extensive array of databases due to Internet resource constraints and therefore could not conduct a comprehensive journal survey. Nonetheless, it is essential to note that the number of papers on XAI in business management research has witnessed a notable increase. Second, the content analysis of the papers in this study relies on the authors’ past experiences and the perspectives of other scholars, potentially introducing personal bias into the analysis. Third, this study observed that some articles in the database exhibited similarities despite being published in different journals. These articles were still included in our count, as we found no justification for excluding them solely based on their diverse sources. Although such instances were not frequent, they did affect the statistical results. Fourth, this study’s keywords did not include alternative terms such as “company,” “firm,” or “corporate,” which could have expanded the search results; the number of relevant papers may therefore be higher than reported. Fifth, this study systematically examines the potential advancement of XAI in business research from the success/failure system perspective. Depending on the theoretical framework applied, different outcomes may be expected. The success/failure system theory builds upon the concept proposed by Bau (2018a, b); however, this concept still requires empirical support. We anticipate that future research in business management or XAI will construct new research models based on the concepts outlined in this study, thereby establishing a solid academic foundation.
Figures
Basic statistics of the disciplines classification
Disciplines | EBSCO | ScienceDirect | Total |
---|---|---|---|
Information systems and E-commerce | 6 | 20 | 26 |
Management and decision sciences | 7 | 10 | 17 |
Finance and accounting | 14 | 21 | 35 |
Marketing, customer service, and retail | 6 | 10 | 16 |
Bibliometrics, reviews, and trend discussions | 18 | 29 | 47 |
Travel, leisure and hospitality | 3 | 7 | 10 |
Business operations, B2B, and digital transformation | 8 | 21 | 29 |
Sustainability and energy | 4 | 23 | 27 |
Knowledge management, innovation management, and human resources | 6 | 10 | 16 |
Healthcare and biotech | 2 | 26 | 28 |
Computer science | 6 | 180 | 186 |
Others – transportation, law, ethics, education, government, and agriculture | 5 | 27 | 32 |
Source(s): Authors' own work
Examples of XAI methods and success/failure system in commercial applications
XAI method | Explanation approach | Business example | Success/Failure system perspective |
---|---|---|---|
SHAP (shapley additive explanations) | Based on game theory, SHAP attributes the contribution of each variable to the model’s prediction, providing an interpretation of the output | In the financial sector, SHAP is used in credit assessment models to explain why a customer was rejected for a loan | SHAP identifies critical success (loan approval) or failure (loan rejection) factors, allowing for a granular analysis of each variable’s impact, helping to improve decision-making and risk management processes |
LIME (local interpretable model-agnostic explanations) | LIME simplifies the model locally, offering users insights into how the AI operates on a specific prediction point | LIME explains why AI has diagnosed a patient with a disease in healthcare | The success/failure system helps to analyze success (correct diagnosis) and failure (misdiagnosis) factors, guiding the refinement of diagnostic models to improve accuracy and interpretability |
Decision tree | Decision trees utilize a series of binary (yes/no) questions to split data and generate predictions, offering high interpretability | Decision trees predict customer purchasing behavior in retail, helping optimize product sales strategies | The success/failure system helps analyze which decision paths led to success (increased sales) and failure (lost sales), enabling companies to refine their strategies and improve customer satisfaction and profitability |
Large language models (LLMs) | While powerful, LLMs are challenging to interpret due to their opaque reasoning process, requiring additional tools for explanation and transparency | In customer service, LLMs are used to respond to customer inquiries automatically | The success/failure system can analyze successful (effective responses) and unsuccessful (ineffective responses) LLM outputs, helping businesses adjust and improve response quality, thereby enhancing customer experience |
Deep learning explanation models (e.g. Grad-CAM) | Visualization techniques like Grad-CAM highlight focus areas in image recognition, revealing how deep learning models operate internally | In manufacturing, deep learning models detect product defects, and visualization tools show which areas the model focuses on for defect detection | The success/failure system helps analyze which parts of the defect detection are successful and which fail, allowing manufacturers to optimize defect detection models and improve quality control processes |
Source(s): Authors' own work
Disclosure statement: No potential conflict of interest was reported by the author(s).
References
Albahri, A. S., Duhaim, A. M., Fadhel, M. A., Alnoor, A., Baqer, N. S., Alzubaidi, L., Deveci, M.… (2023). A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Information Fusion, 96, 156–191. doi: 10.1016/j.inffus.2023.03.008.
Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., … Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain trustworthy artificial intelligence. Information Fusion, 99, 101805. doi: 10.1016/j.inffus.2023.101805.
Angelov, P. P., Soares, E. A., Jiang, R., Arnold, N. I., & Atkinson, P. M. (2021). Explainable artificial intelligence: An analytical review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(5), e1424. doi: 10.1002/widm.1424.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. doi: 10.1016/j.inffus.2019.12.012.
Bau, D. Y. (2018a). The success/failure system hypothesis. International Journal of Advanced Scientific Research and Management, 3(3), 30–34. doi: 10.36282/ijasrm/3.3.2018.496.
Bau, D. Y. (2018b). The cosmos with the success/failure system. International Journal of Advanced Scientific Research and Management, 3(12), 94–97. doi: 10.36282/ijasrm/3.12.2018.1044.
Bau, D. Y. (2019a). The logic of the success/failure system. International Journal of Advanced Scientific Research and Management, 4(2), 254–258. doi: 10.36282/ijasrm/4.2.2019.1199.
Bau, D. Y. (2019b). The mesocosmos: The success/failure system. International Journal of Advanced Scientific Research and Management, 4(4), 1–6. doi: 10.36282/ijasrm/4.4.2019.1296.
Belassi, W., & Tukel, O. I. (1996). A new framework for determining critical success/failure factors in projects. International Journal of Project Management, 14(3), 141–151. doi: 10.1016/0263-7863(95)00064-x.
Bennett, V. M., & Snyder, J. (2017). The empirics of learning from failure. Strategy Science, 2(1), 1–12. doi: 10.1287/stsc.2016.0020.
Bhatia, S., & Albarrak, A. S. (2023). A blockchain-driven food supply chain management using QR code and XAI-faster RCNN architecture. Sustainability, 15(3), 2579. doi: 10.3390/su15032579.
Brasse, J., Broder, H. R., Förster, M., Klier, M., & Sigler, I. (2023). Explainable artificial intelligence in information systems: A review of the status quo and future research directions. Electronic Markets, 33(1), 26. doi: 10.1007/s12525-023-00644-5.
Bunn, J. (2020). Working in contexts for which transparency is important: A recordkeeping view of explainable artificial intelligence (XAI). Records Management Journal, 30(2), 143–153. doi: 10.1108/RMJ-08-2019-0038.
Castro-Arce, K., & Vanclay, F. (2020). Transformative social innovation for sustainable rural development: An analytical framework to assist community-based initiatives. Journal of Rural Studies, 74, 45–54. doi: 10.1016/j.jrurstud.2019.11.010.
Cecez-Kecmanovic, D., Kautz, K., & Abrahall, R. (2014). Reframing success and failure of information systems. MIS Quarterly, 38(2), 561–588. doi: 10.25300/misq/2014/38.2.11.
Černevičienė, J., & Kabašinskas, A. (2024). Explainable artificial intelligence (XAI) in finance: A systematic literature review. Artificial Intelligence Review, 57(8), 216. doi: 10.1007/s10462-024-10854-8.
Cubric, M. (2020). Drivers, barriers and social considerations for AI adoption in business and management: A tertiary study. Technology in Society, 62, 101257. doi: 10.1016/j.techsoc.2020.101257.
Dwivedi, R., Dave, D., Naik, H., Singhal, S., Omer, R., Patel, P., … Ranjan, R. (2023a). Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9), 1–33. doi: 10.1145/3561048.
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., … Wright, R. (2023b). So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. doi: 10.1016/j.ijinfomgt.2023.102642.
Einstein, A., & Calaprice, A. (2011). The ultimate quotable Einstein, collected and edited by Alice Calaprice. Princeton: Princeton University Press.
Fowler, J. J., & Horan, P. (2007). Are information systems’ success and failure factors related? An exploratory study. Journal of Organizational and End User Computing, 19(2), 1–22. doi: 10.4018/joeuc.2007040101.
Generative AI Media (2024). “What is XAI (explainable AI)? An accessible overview of the concept, the benefits and drawbacks of adoption, and use cases” [in Japanese]. Available from: https://gen-ai-media.guga.or.jp/glossary/xai/ (accessed 10 October 2024).
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy. IEEE Access, 11, 80218–80245. doi: 10.1109/ACCESS.2023.3300381.
Ha, T., Sah, Y. J., Park, Y., & Lee, S. (2022). Examining the effects of power status of an explainable artificial intelligence system on users’ perceptions. Behaviour and Information Technology, 41(5), 946–958. doi: 10.1080/0144929X.2020.1846789.
Han, R., Lam, H. K., Zhan, Y., Wang, Y., Dwivedi, Y. K., & Tan, K. H. (2021). Artificial intelligence in business-to-business marketing: A bibliometric analysis of current research status, development and future directions. Industrial Management and Data Systems, 121(12), 2467–2497. doi: 10.1108/IMDS-05-2021-0300.
Hitachi, Ltd (2020). Launch of the “AI adoption support service,” leveraging explainable AI (XAI) to support the application of AI to business systems and their continuous operation and improvement [in Japanese]. Available from: https://www.hitachi.co.jp/New/cnews/month/2020/01/0127.html (accessed 10 October 2024).
Howard, J. (2019). Artificial intelligence: Implications for the future of work. American Journal of Industrial Medicine, 62(11), 917–926. doi: 10.1002/ajim.23037.
Islam, M. R., Ahmed, M. U., Barua, S., & Begum, S. (2022). A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Applied Sciences, 12(3), 1353. doi: 10.3390/app12031353.
Javed, A. R., Ahmed, W., Pandya, S., Maddikunta, P. K. R., Alazab, M., & Gadekallu, T. R. (2023). A survey of explainable artificial intelligence for smart cities. Electronics, 12(4), 1020. doi: 10.3390/electronics12041020.
Johnson, M., Albizri, A., Harfouche, A., & Tutun, S. (2023). Digital transformation to mitigate emergency situations: Increasing opioid overdose survival rates through explainable artificial intelligence. Industrial Management and Data Systems, 123(1), 324–344. doi: 10.1108/IMDS-04-2021-0248.
Kakkar, S., & Chitrao, P. V. (2021). Consumer resistance to innovations in ornamental gold jewellery. Academy of Marketing Studies Journal, 25(1), 1–16.
Karyamsetty, H. J., Khan, S. A., & Nayyar, A. (2024). Envisioning toward modernization of society 5.0—a prospective glimpse on status, opportunities, and challenges with XAI. In F. Al-Turjman, A. Nayyar, M. Naved, A. K. Singh, & M. Bilal (Eds), XAI Based Intelligent Systems for Society 5.0 (pp. 223–267). doi: 10.1016/B978-0-323-95315-3.00005-X.
Kaur, P., Dhir, A., Singh, N., Sahu, G., & Almotairi, M. (2020). An innovation resistance theory perspective on mobile payment solutions. Journal of Retailing and Consumer Services, 55, 102059. doi: 10.1016/j.jretconser.2020.102059.
Korzynski, P., Mazurek, G., Altmann, A., Ejdys, J., Kazlauskaite, R., Paliszkiewicz, J., … Ziemba, E. (2023). Generative artificial intelligence as a new context for management theories: Analysis of ChatGPT. Central European Management Journal, 31(1), 3–13. doi: 10.1108/CEMJ-02-2023-0091.
Lachuer, J., & Jabeur, S. B. (2022). Explainable artificial intelligence modeling for corporate social responsibility and financial performance. Journal of Asset Management, 23(7), 619–630. doi: 10.1057/s41260-022-00291-z.
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., … Baum, K. (2021). What do we want from explainable artificial intelligence (XAI)?–A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, 103473. doi: 10.1016/j.artint.2021.103473.
Leichtmann, B., Humer, C., Hinterreiter, A., Streit, M., & Mara, M. (2023). Effects of Explainable Artificial Intelligence on trust and human behavior in a high-risk decision task. Computers in Human Behavior, 139, 107539. doi: 10.1016/j.chb.2022.107539.
Li, S. (2024). KFC’s development strategy in China. Advances in Economics Management and Political Sciences, 88(1), 70–75. doi: 10.54254/2754-1169/88/20240909.
Lichtenberg, A.L. (2012). A historical review of five of the top fast food restaurant chains to determine the secrets of their success (p. 361). CMC Senior Theses.
Luo, X. R., & Chang, F. -K. (2023). Toward a design theory of strategic enterprise management business intelligence (SEMBI) capability maturity model. Journal of Electronic Business and Digital Economics, 2(2), 159–190. doi: 10.1108/JEBDE-11-2022-0041.
Maarif, M. R., Saleh, A. R., Habibi, M., Fitriyani, N. L., & Syafrudin, M. (2023). Energy usage forecasting model based on long short-term memory (LSTM) and eXplainable artificial intelligence (XAI). Information, 14(5), 265. doi: 10.3390/info14050265.
Malik, M.O., & Khan, N. (2021). Analysis of ERP implementation to develop a strategy for its success in developing countries. Production Planning and Control, 32(12), 1020–1035. doi: 10.1080/09537287.2020.1784481.
Masialeti, M., Talaei-Khoei, A., & Yang, A. T. (2024). Revealing the role of explainable AI: How does updating AI applications generate agility-driven performance?. International Journal of Information Management, 77, 102779. doi: 10.1016/j.ijinfomgt.2024.102779.
Minh, D., Wang, H. X., Li, Y. F., & Nguyen, T. N. (2022). Explainable artificial intelligence: A comprehensive review. Artificial Intelligence Review, 55(5), 3503–3568. doi: 10.1007/s10462-021-10088-y.
Moosavi, J., Fathollahi-Fard, A. M., & Dulebenets, M. A. (2022). Supply chain disruption during the COVID-19 pandemic: Recognizing potential disruption management strategies. International Journal of Disaster Risk Reduction, 75, 102983. doi: 10.1016/j.ijdrr.2022.102983.
Mugurusi, G., & Oluka, P. N. (2021). Towards explainable artificial intelligence (XAI) in supply chain management: A typology and research agenda. In A. Dolgui, A. Bernard, D. Lemoine, G. von Cieminski, & D. Romero (Eds), IFIP International Conference on Advances in Production Management Systems, Cham (pp. 32–38). Springer.
Nambisan, S., Wright, M., & Feldman, M. (2019). The digital transformation of innovation and entrepreneurship: Progress, challenges and key themes. Research Policy, 48(8), 103773. doi: 10.1016/j.respol.2019.103826.
Paschek, D., Luminosu, C. T., & Negrut, M. L. (2020). Data—the important prerequisite for AI decision-making for business. In G. Prostean, J. Lavios Villahoz, L. Brancu, & G. Bakacsi (Eds), Innovation in Sustainable Management and Entrepreneurship: 2019 International Symposium in Management (pp. 539–551). Cham: Springer. doi: 10.1007/978-3-030-44711-3_40.
Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. doi: 10.1007/s11747-019-00710-5.
Rese, A., & Baier, D. (2011). Success factors for innovation management in networks of small and medium enterprises. R&D Management, 41(2), 138–155. doi: 10.1111/j.1467-9310.2010.00620.x.
Say, G., & Vasudeva, G. (2020). Learning from digital failures? The effectiveness of firms’ divestiture and management turnover responses to data breaches. Strategy Science, 5(2), 117–142. doi: 10.1287/stsc.2020.0106.
Senoner, J., Netland, T., & Feuerriegel, S. (2022). Using explainable artificial intelligence to improve process quality: Evidence from semiconductor manufacturing. Management Science, 68(8), 5704–5723. doi: 10.1287/mnsc.2021.4190.
Sestino, A., & De Mauro, A. (2022). Leveraging artificial intelligence in business: Implications, applications and methods. Technology Analysis and Strategic Management, 34(1), 16–29. doi: 10.1080/09537325.2021.1883583.
St Cavish, C. (2017). [Eat it]: KFC’s egg tarts. SmartShanghai. Available from: https://www.smartshanghai.com/articles/dining/eat-it-kfcs-egg-tarts (accessed 31 August 2024).
The Japan Research Institute (2022). AI fairness and explanation AI (XAI) overview and trend. Available from: https://www.jri.co.jp/MediaLibrary/file/advanced/advanced-technology/pdf/14496.pdf (accessed 10 October 2024).
Tong, C. H. (2017). Pastel de Nata: Marco da Gastronomia de Macau. Portugal: Universidade do Minho. Dissertations & Theses.
Toots, M. (2019). Why E-participation systems fail: The case of Estonia’s Osale. ee. Government Information Quarterly, 36(3), 546–559. doi: 10.1016/j.giq.2019.02.002.
Von Eschenbach, W. J. (2021). Transparency and the black box problem: Why we do not trust AI. Philosophy and Technology, 34(4), 1607–1622. doi: 10.1007/s13347-021-00477-0.
Wu, Y., Mou, Y., Li, Z., & Xu, K. (2020). Investigating American and Chinese subjects’ explicit and implicit perceptions of AI-generated artistic work. Computers in Human Behavior, 104, 106186. doi: 10.1016/j.chb.2019.106186.
Yılmaz Benk, G., Badur, B., & Mardikyan, S. (2022). A new 360° framework to predict customer lifetime value for multi-category e-commerce companies using a multi-output deep neural network and explainable artificial intelligence. Information, 13(8), 373. doi: 10.3390/info13080373.
Zarifhonarvar, A. (2024). Economics of ChatGPT: A labor market view on the occupational impact of artificial intelligence. Journal of Electronic Business and Digital Economics, 3(2), 100–116. doi: 10.1108/JEBDE-10-2023-0021.
Zhang, C. A., Cho, S., & Vasarhelyi, M. (2022). Explainable artificial intelligence (XAI) in auditing. International Journal of Accounting Information Systems, 46, 100572. doi: 10.1016/j.accinf.2022.100572.
Zhang, Z., Wu, C., Qu, S., & Chen, X. (2022). An explainable artificial intelligence approach for financial distress prediction. Information Processing and Management, 59(4), 102988. doi: 10.1016/j.ipm.2022.102988.
Acknowledgements
The authors would like to express their sincere gratitude to the anonymous peer reviewers for their insightful comments and constructive suggestions. Additionally, the authors also extend their appreciation to the editor for their professional guidance and efficient handling of the manuscript throughout the review process.
Corresponding author
About the authors
Tsung-Sheng Chang is an associate professor in the Department of Accounting and Information Management at Da-Yeh University in Taiwan, ROC. He served at Hewlett-Packard in Taiwan. He holds Ph.D. from the Department of Information Management at National Chung Cheng University. He has received the Project Management Training Award from the Chinese National Project Management Association (NPMA); his current research interests are AI in organization, information system and behavior, ACG culture, and online social issue. His articles appeared in the Technological Forecasting and Social Change, Behaviour and Information Technology, Journal of Enterprise Information Management, Journal of Manufacturing Technology Management, Journal of Retailing and Consumer Services, and International Journal of Computational Intelligence Systems, among others.
Dong-Yih Bau served in the Department of Accounting and Information Management at Da-Yeh University, Taiwan. He received his PhD in information systems from the University of Paris XI in France. He had served as a nuclear engineer at the Taiwan Power Company and as the director of the Da-Yeh University Library. His research interests include expert systems, tourism information management, and modern Einsteinian science. He has published many articles in international journals such as Expert Systems, Cornell Hospitality Quarterly, Cyberpsychology Behavior and Social Networking.