Abstract
Purpose
Artificial intelligence (AI) has gained significant momentum in recent years. Among AI-infused systems, one prominent application is context-aware systems. Although the fusion of AI and context awareness has given birth to personalized and timely AI-powered context-aware systems, several challenges remain. Given the “black box” nature of AI, the authors propose that human–AI collaboration is essential for AI-powered context-aware services to reduce uncertainty and evolve. To this end, this study aims to advance a research agenda for facilitators and outcomes of human–AI collaboration in AI-powered context-aware services.
Design/methodology/approach
Synthesizing the extant literature on AI and context awareness, the authors advance a theoretical framework that not only differentiates among the three phases of AI-powered context-aware services (i.e. context acquisition, context interpretation and context application) but also outlines plausible research directions for each stage.
Findings
The authors delve into the role of human–AI collaboration and derive future research questions from two directions, namely, the effects of AI-powered context-aware services design on human–AI collaboration and the impact of human–AI collaboration.
Originality/value
This study contributes to the extant literature by identifying knowledge gaps in human–AI collaboration for AI-powered context-aware services and putting forth research directions accordingly. In turn, their proposed framework yields actionable guidance for AI-powered context-aware service designers and practitioners.
Citation
Jiang, N., Liu, X., Liu, H., Lim, E.T.K., Tan, C.-W. and Gu, J. (2023), "Beyond AI-powered context-aware services: the role of human–AI collaboration", Industrial Management & Data Systems, Vol. 123 No. 11, pp. 2771-2802. https://doi.org/10.1108/IMDS-03-2022-0152
Publisher
Emerald Publishing Limited
Copyright © 2022, Emerald Publishing Limited
1. Introduction
Artificial intelligence (AI) denotes the “ability of a machine to perform cognitive functions that we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, decision-making, and even demonstrating creativity” (Rai et al., 2019, p. 1). Advances in AI have given rise to its growing application in daily life, such as personal assistants and self-driving systems. According to a report from the McKinsey Global Institute, around 70% of enterprises will have embraced at least one kind of AI technology by 2030 [1]. According to market research conducted by Tractica, a marketing research firm, the revenue of the global AI software market is expected to increase from around US$10.1 billion in 2018 to around US$126.0 billion by 2025 [2].
Among AI-infused systems, one prominent application is context-aware systems, which are designed to deliver tailored and timely services based on users' context (Hong et al., 2009). Notably, the fusion of AI and context awareness has given birth to intelligent AI-powered context-aware systems. For example, with the aid of AI, smartphones are becoming increasingly proficient at delivering tailored recommendations about the applications users might require based on their immediate context and usage habits (Sarker et al., 2020). At the same time, even though AI possesses the capability to comprehend contextual information and formulate appropriate recommendations, its complexity and unpredictability could, at times, culminate in “behaviors that can be disruptive, confusing, offensive, and even dangerous” (Amershi et al., 2019, p. 1). Because it is challenging to assess the reliability of judgments from AI-powered context-aware systems, scholars have alluded to the importance of incorporating human–AI collaboration in such systems to bolster their accuracy and reduce the damage they may cause (Kaur et al., 2021). Human–AI collaboration describes the process of developing a socio-technical system in which AI and humans can collaborate in a mutually beneficial fashion by exploiting individual strengths and coevolving through complementary improvement (Loske and Klumpp, 2021). Specifically, human–AI collaboration might reduce uncertainty when context-aware systems collect and interpret contextual data, culminating in enhanced performance of AI-powered context-aware services.
Despite past studies acknowledging the significance of human–AI collaboration (Jarrahi, 2018; Puranam, 2021; Ramchurn et al., 2021), related research is still in its infancy. To address this knowledge gap, we conducted a thorough literature review on human–AI collaboration and extracted 51 relevant articles from the international academic database EBSCOhost Research Databases (see Appendix). Our review of published articles on human–AI collaboration points to two research streams: human–AI collaboration cultivation and human–AI collaboration outcomes. Whereas the research stream on human–AI collaboration cultivation concentrates on deciphering the conditions under which human–AI collaboration emerges and can be facilitated, its counterpart on human–AI collaboration outcomes places emphasis on the consequences of human–AI collaboration. At the same time, prior research on human–AI collaboration cultivation either remains at a conceptual level or is confined to a narrow set of human–AI collaborative tasks, such as the cocreation of drawings (Oh et al., 2018), thereby generating limited insights into actionable design guidelines for human–AI collaboration in AI-powered services. Likewise, published articles on human–AI collaboration outcomes are rather fragmented, rendering it difficult to arrive at a holistic picture of the consequences arising from such collaborative actions.
In light of the current state of research on human–AI collaboration, we endeavor to bridge the two abovementioned research streams by delineating system facilitators across distinct phases of AI-powered context-aware services and producing a holistic view of human–AI collaboration outcomes. Specifically, we strive to answer two research questions (RQs):
RQ1. How does the design of AI-powered context-aware services influence human–AI collaboration?
RQ2. What are the impacts of human–AI collaboration?
To answer the two RQs, we delve into the role of human–AI collaboration to advance a future research agenda in two directions, namely, the effects of AI-powered context-aware service design on human–AI collaboration and the impact of human–AI collaboration on individuals and organizations. To do so, we delineate AI-powered context-aware services into three stages (i.e. context acquisition, context interpretation and context application) and elucidate the factors to be considered in each stage, following which the consequences of human–AI collaboration, together with future research directions, are discussed.
2. Theoretical foundation
2.1 Context-aware services
The rapid growth of mobile technologies, combined with advancements in the Internet of Things, has made context-aware services one of the emerging technologies of recent years (Perera et al., 2014). A context-aware system is a system that uses context to provide relevant information and/or services to its users, where relevancy depends on the users' task (Dey et al., 2001). Consistent with Dey et al. (2001), we embrace the commonly recognized definition of “context” as “any information that can characterize the situation to an entity that is considered relevant to the interaction between the user and the application. An entity is a person, place, or object that is considered relevant to the interaction between user and the application, including the user and applications themselves” (Dey et al., 2001, p. 100). In a nutshell, context-aware services recognize and interpret aspects of users' context (e.g. location, current time, type of device and weather) and deliver services intelligently based on users' needs (Liu et al., 2016). Table 1 outlines prior studies that attempted to explain the process of developing context-aware services (Bernardos et al., 2008; Fischer, 2012), as well as our partition.
For instance, Bernardos et al. (2008) identified three phases in a context management system: context acquisition, information processing, and reasoning and decision. Similarly, Fischer (2012) proposed three steps in a framework for context-aware systems: context discovery and acquisition, context representation and management, and context utilization. Although there are some nuances, the core of delivering context-aware services lies in three steps: context acquisition, context interpretation and context application, as depicted in Figure 1.
To elaborate on the three stages of AI-powered context-aware services, we provide the following scenario as a running example throughout the remainder of the paper:
Sebastian is a 28-year-old man who lives alone and relies on an intelligent home system in his daily life. He always gets up at 7:30 am and then spends around 15 minutes in the bathroom. The alarm goes off automatically a few minutes before he gets out of bed. Once the smart coffee maker detects that he has come out of the bathroom, it brews a cup of coffee in his favorite flavor. Before Sebastian heads out the door, the smart closet recommends suitable outfits based on his preferences and the weather outside. The system can also turn on the air conditioner in advance when it concludes, based on GPS data, that Sebastian is coming home from work. On weekends, if no urgent meeting appears in his calendar, the system does not set an alarm. Apart from managing his house more efficiently, the intelligent system also helps Sebastian stay healthy. When he falls asleep, sensors embedded in the bed monitor his breathing, heart rate, body temperature and pressure level, adjusting the mattress firmness and the air conditioner temperature accordingly.
Context acquisition refers to the process of acquiring the required contextual information. To provide the right services at the right time, in the right place and in the right way, collecting contextual information from physical, logical or virtual sensors is essential for context-aware systems (Yurur et al., 2016). Mobile devices are equipped with various physical sensors, such as temperature sensors, GPS, accelerometers, pressure sensors and microphones, so context like location, speed and temperature is readily accessible. Virtual sensors provide opportunities to collect higher-level contextual information such as calendar details and data derived from social media. In addition, logical sensors fuse data from physical and virtual sensors to infer complex user statuses such as activity recognition (Yurur et al., 2016). Consider the scenario mentioned earlier: to ascertain Sebastian's exact location, physical sensors such as GPS and accelerometers are employed in the context acquisition stage to detect motion and pinpoint his position. Information from his calendar is also helpful for setting the alarm after checking his schedule.
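To make the three sensor classes concrete, the following minimal Python sketch fuses a physical fix (GPS), a physical motion signal (accelerometer) and a virtual source (calendar) into a single context record, acting as a simple logical sensor. All names, thresholds and the data model are illustrative assumptions, not part of any cited system.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative data model: one fused context record per acquisition cycle.
@dataclass
class ContextRecord:
    timestamp: datetime
    location: str           # from a physical sensor (GPS)
    is_moving: bool         # from a physical sensor (accelerometer)
    next_event: str | None  # from a virtual sensor (calendar)

def acquire_context(gps_fix: str, accel_magnitude: float,
                    calendar_events: list[tuple[datetime, str]]) -> ContextRecord:
    """Fuse physical and virtual sensor data into one record, playing the
    role of a simple 'logical sensor'."""
    now = datetime.now()
    upcoming = [title for start, title in calendar_events if start > now]
    return ContextRecord(
        timestamp=now,
        location=gps_fix,
        is_moving=accel_magnitude > 1.5,  # threshold chosen for illustration
        next_event=upcoming[0] if upcoming else None,
    )
```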
Once the required contextual information is collected, the next step is context interpretation, in which the system models the context and derives insights through statistical approaches. After gathering context from various sources, the system needs to reconcile them and produce a coherent representation for information processing in the next step. Bernardos et al. (2008) contended that this stage aims at “preparing the informational picture to provide the system with enough information to reason about” (p. 612). In Sebastian's case, a large number of sensors are built into the intelligent home system, and the core of the system lies in how to interpret data from multiple sources. For example, the system receives context like “his location is the kitchen”, “the current time is 8:00 am” and “there is no motion in the kitchen”. These contextual details are then represented as factors and parameters for the reasoning algorithms. Within the extant literature, there are several context modeling techniques, such as key-value modeling and ontology-based modeling (Sezer et al., 2018). The intelligent system may infer Sebastian's desire to have breakfast from observations like “Sebastian is static in the kitchen at 8:00 am”; alternatively, he may simply be reading online news in the kitchen. Combined with motion recognition from the camera, the system can gain greater confidence in identifying Sebastian's status and intentions, thus delivering personalized services.
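As a toy illustration of key-value context modeling and rule-based inference, the sketch below encodes Sebastian's kitchen scenario as key-value pairs and ranks candidate intentions by confidence. The rules and confidence values are invented for illustration; a deployed system would learn them from data.

```python
# Key-value context for Sebastian's kitchen scenario.
context = {"location": "kitchen", "time": "08:00", "motion": False}

# Each rule: (condition over context, inferred intention, confidence).
rules = [
    (lambda c: c["location"] == "kitchen" and c["time"] < "09:00" and not c["motion"],
     "have_breakfast", 0.7),
    (lambda c: c["location"] == "kitchen" and not c["motion"],
     "read_news", 0.3),
]

def interpret(ctx: dict) -> list[tuple[str, float]]:
    """Return candidate intentions ranked by confidence; extra evidence
    (e.g. camera-based motion recognition) would re-rank these."""
    matches = [(intent, conf) for cond, intent, conf in rules if cond(ctx)]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

print(interpret(context))  # [('have_breakfast', 0.7), ('read_news', 0.3)]
```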
The final stage is context application, which we define as the process of discovering, adapting and personalizing context-aware services for users based on contextual data (Grifoni et al., 2018). At this stage, a context-aware system fulfills its mission by developing personalized services for end users in terms of what services to deliver, when to deliver them and how to deliver them. Considering the alarm setting in Sebastian's scenario, when Sebastian encounters something unexpected, he may set the alarm himself, and the system delivers the service reactively. Yet, if Sebastian forgets to set an alarm and the system detects daily routines in his calendar, the alarm may go off proactively. On top of that, a context-aware system also needs to interpret user feedback to foster the evolution of context awareness. For instance, when Sebastian ignores recommendations, the ignoring behavior can be interpreted as negative feedback. Based on such feedback, reinforcement learning can update user profiles and even adjust parameters in the context acquisition stage for the next round (Augusto et al., 2017).
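A minimal sketch of this feedback loop, assuming a bandit-style update in which an ignored recommendation counts as zero reward: the propensity score for a proactive service is nudged downward. The learning rate, service names and scores are hypothetical.

```python
# Hypothetical propensity scores for delivering each service proactively.
profile = {"proactive_alarm": 0.8, "auto_coffee": 0.6}
ALPHA = 0.2  # learning rate, chosen for illustration

def update_profile(service: str, accepted: bool) -> None:
    """Nudge the propensity toward 1 on acceptance and toward 0 on ignoring,
    treating the user's reaction as a reward signal."""
    reward = 1.0 if accepted else 0.0
    profile[service] += ALPHA * (reward - profile[service])

update_profile("proactive_alarm", accepted=False)  # Sebastian ignored the alarm
print(profile["proactive_alarm"])  # ~0.64: the system becomes less proactive
```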
2.2 AI-powered context-aware services
Context-aware services are increasingly fueled by AI. For instance, Sarker et al. (2020) built a smartphone app prediction model based on principal component analysis, an unsupervised machine learning technique that reduces the dimensionality of contextual data. In this vein, smartphones can proactively recommend the apps a user needs based on the user's current context and usage history. Ample research has asserted that AI plays an instrumental role in promoting the efficiency and performance of context-aware systems (del Carmen Rodríguez-Hernández and Ilarri, 2021; Unger et al., 2020). Nonetheless, little preceding research has detailed the specific role that AI plays in context-aware systems. This research thus attempts to outline how AI affects each of the three phases of context acquisition, context interpretation and context application for context-aware systems.
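In the spirit of Sarker et al.'s (2020) approach, the sketch below applies principal component analysis with scikit-learn to compress multidimensional context vectors; the feature encoding and data are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Invented usage snapshots: columns encode contextual dimensions such as
# hour of day, location cluster, battery level and day of week.
X = np.array([
    [8, 0, 90, 1],
    [8, 0, 85, 2],
    [19, 1, 40, 1],
    [20, 1, 35, 5],
])

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)       # compressed context features
print(pca.explained_variance_ratio_)   # variance retained per component
```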
In the context acquisition stage, the contextual factors taken into account by context-aware systems are growing more sophisticated and extend well beyond simple considerations such as geographic location or time (van Engelenburg et al., 2019). AI techniques have the capability to broaden the sources of users' contextual information. For example, visual AI technologies could translate photos and videos into insights and extract contextual information (Sarker, 2022). Apple has acquired Emotient, Inc., a business that analyzes facial expressions to understand an individual's emotions using AI technologies (Bastug et al., 2017). Apart from obtaining raw data from sensors and user input directly, loosely related information can be fed into machine learning algorithms to infer users' context. In Sebastian's scenario, a smart kitchen with AI capabilities could infer his health conditions by collecting loosely related information, such as how much heat he generates in the kitchen at certain times of the day, the types of containers in the refrigerator and the types of kitchen waste. Furthermore, technological development in the Internet of Things, allied with the incorporation of AI, has led to the coordination of large-scale networks of devices (Arsénio et al., 2014), which might broaden context to the collective level. In sum, AI capabilities can broaden the sources of contextual information available to the system in the context acquisition phase.
It is widely acknowledged that AI techniques are suitable for rule mining and context inference in the context interpretation phase (Almagrabi and Al-Otaibi, 2020; del Carmen Rodríguez-Hernández and Ilarri, 2021). For example, to locate similar users, the context-aware recommender system in the study by Amoretti et al. (2017) utilized k-means clustering algorithms. In addition, Coppola et al. (2009) developed an architecture for context-aware services in which AI techniques (e.g. rule-based systems, Bayesian networks and ontologies) were utilized to infer more abstract context from more concrete data. Similarly, as time evolves, users' habitual patterns can be recognized through an architecture embedded with AI. In Sebastian's case, the fact that he consistently wakes up at 7:30 am does not mean that the alarm clock rule is unbreakable. Through continuous learning of his behaviors, the system may discover that he rises 30 minutes earlier each Monday due to weekly meetings.
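Along the lines of the k-means approach in Amoretti et al. (2017), the following sketch clusters invented user-context vectors so that users in the same cluster can be treated as similar; the features and values are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented user-context vectors: average wake-up hour, commute minutes,
# weekend activity level.
users = np.array([
    [7.5, 30, 0.2],
    [7.6, 25, 0.3],
    [9.0, 5, 0.9],
    [9.2, 0, 0.8],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print(kmeans.labels_)  # users sharing a label are treated as "similar"
```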
In the final stage of context application, AI techniques not only provide users with personalized context-aware information and services but also optimize the timing and manner of their delivery. Motti et al.'s (2012) road map for how machine learning algorithms support context-aware adaptation highlights two aspects: context inference and presentation of context-aware services. If a user consistently resizes the interface of a recommender system, for instance, the system may evolve its presentation style and adjust the default size correspondingly. In the same vein, Sebastian's AI-equipped smart house tracks and detects his behavioral patterns and lifestyle over the long term, hence presenting distinct services for weekdays and weekends: on weekdays the coffee maker brews a cup of coffee automatically, but on weekends Sebastian prefers to prepare breakfast himself.
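One plausible (hypothetical) realization of this presentation adaptation is an exponential moving average that drifts the default interface size toward the sizes a user repeatedly chooses:

```python
# Hypothetical adaptation of a default interface size via an exponential
# moving average of the sizes the user repeatedly chooses.
def adapt_default_size(default: float, observed_resizes: list[float],
                       smoothing: float = 0.3) -> float:
    for size in observed_resizes:
        default = (1 - smoothing) * default + smoothing * size
    return default

print(adapt_default_size(400.0, [520.0, 530.0, 525.0]))  # drifts toward ~482
```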
2.3 Human considerations for AI-powered context-aware services
The incorporation of AI capabilities into context-aware systems has been shown to be helpful (Coppola et al., 2009; Unger et al., 2020), yet context-aware systems still face a number of obstacles (Kaminskas and Ricci, 2012; Verbert et al., 2012). To elaborate on the potential challenges faced by AI-powered context-aware services, we examine each of the three stages of context-aware service delivery (i.e. context acquisition, context interpretation and context application).
Since AI capabilities allow the extraction of user context from many sources during the context acquisition phase, mismatches between distinct sources may threaten the correctness of contextual information. For example, a system might simultaneously detect movement from the user's smartwatch and stillness from the user's mobile phone. If the user does not provide further information, it is difficult to reach a conclusion about the user's actual status. Therefore, it would be beneficial if users were able to validate their context under such circumstances, which would reduce ambiguity and signal the relevance and value of contextual information.
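A minimal sketch of such cross-source reconciliation, assuming two invented boolean signals: when the sources conflict, the system defers to the user instead of guessing.

```python
# Invented signals: a smartwatch and a phone each report whether the user moves.
def resolve_activity(watch_says_moving: bool, phone_says_moving: bool) -> str:
    if watch_says_moving == phone_says_moving:
        return "moving" if watch_says_moving else "still"
    # Conflicting evidence: defer to the human rather than guess.
    answer = input("We detected conflicting motion signals. Are you moving? [y/n] ")
    return "moving" if answer.strip().lower().startswith("y") else "still"
```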
In the context interpretation stage, the black-box nature of AI algorithms might leave users confused about the learning processes, such that they have no clue why they are receiving certain context-aware services. Luo et al. (2020) claimed that although context-aware systems are becoming more and more intelligent, they are prone to bugs. The gap between raw sensor data and a relevant interpretation of context may impair the provision of subsequent context-aware services. In this case, allowing users to reedit the contextual data improves AI interpretation and provides a more customized user experience. The running mode of a music recommender, for instance, may generate playlists that correspond to a user's running pace. Yet one-size-fits-all rules, such as “recommend fast-paced music as running speed rises”, may not be appropriate for certain users, particularly those who prefer tranquil music when exercising at high intensity. By updating the context interpretation logic and fixing interpretation errors, users can supplement AI-powered context-aware systems in an effective manner. To minimize possible mistakes, prompt feedback from users is a low-cost and effective way to evolve the context-aware adaptation engine.
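The sketch below illustrates this idea with the music recommender example: a default pace-to-tempo rule can be overridden per user, so the user edits the interpretation logic itself rather than merely rating outputs. Rule names and thresholds are invented.

```python
# Default interpretation rule: faster pace -> faster music (invented threshold).
def default_rule(pace_kmh: float) -> str:
    return "fast_tempo" if pace_kmh > 9 else "medium_tempo"

user_overrides: dict = {}  # user_id -> replacement rule

def pick_playlist(user_id: str, pace_kmh: float) -> str:
    rule = user_overrides.get(user_id, default_rule)
    return rule(pace_kmh)

# The user edits the interpretation logic instead of merely rating outputs:
user_overrides["runner_1"] = lambda pace: "calm_tempo" if pace > 9 else "medium_tempo"
print(pick_playlist("runner_1", 11.0))  # calm_tempo
```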
Previous research has recognized the role of human augmentation, such as providing relevant information to AI-powered systems (Andrews et al., 2022; Metcalf et al., 2019). Apart from human augmentation, it is acknowledged that both humans and AI have their own unique strengths (Ramchurn et al., 2021). AI likewise has the ability to augment humans across the three stages of context-aware services. In the context acquisition phase, AI methods could be utilized to add another information source (e.g. image analysis) with the goal of reducing ambiguity about users' context. In the context interpretation stage, AI-based systems might develop user-specific adaptation mechanisms. In the context application stage, AI-based systems may provide users with a confidence level for outcomes, helping them decide whether to accept the information or services they receive. In particular, humans have the capacity to process information intuitively and with social skills, while AI excels in speed and scalability. In a similar vein, Miller (2018) highlighted that humans and machines should work symbiotically to augment each other's capabilities. Moreover, Woo (2020) asserted that human–machine cocreation is vital for the success of AI-oriented systems, as humans and machines enhance their capabilities separately. Hence, as context evolves dynamically with time (Mishra et al., 2019), a hybrid human–machine effort, that is, human–AI collaboration, is imperative in AI-powered context-aware services for personal life and organizational business. We summarize the role of human–AI collaboration in Table 2.
2.4 Human–AI collaboration for AI-powered services
Recent advancements in AI make it possible for people and technology to work together and integrate seamlessly (Korteling et al., 2021). As a result, more research has started to investigate the potential for AI and humans to work together in collaborative settings (Andrews et al., 2022; Nakahashi and Yamada, 2021; Peeters et al., 2021). Metaphors such as “teammates”, “partners” and “collaborators” in related studies highlight a high degree of cooperation, complementarity and equality in human–AI collaboration (Korteling et al., 2021). To gain a comprehensive understanding of human–AI collaboration from the extant literature, we conducted a thorough literature review through the EBSCO database across all subjects. We identified an initial corpus of peer-reviewed academic studies using the terms “human-AI”, “AI-human”, “Artificial Intelligence-human” and “human-Artificial Intelligence” in the EBSCO database, yielding 520 studies. We then excluded papers that did not pertain to AI-based systems based on their abstracts, resulting in 252 studies. Following that, we examined the identified papers and eliminated publications unrelated to human–AI collaboration. In the end, we retained 51 relevant articles in total (see Appendix).
Our review of human–AI collaboration reveals two research streams: human–AI collaboration cultivation and human–AI collaboration outcomes. The human–AI collaboration cultivation research stream centers on facilitating human–AI collaboration. A significant portion of the studies in this stream propose conceptual premises for human–AI collaboration, ranging from shared mental models (Andrews et al., 2022) to collective responsibility (Cañas, 2022) and calibrated trust (Okamura and Yamada, 2020). In addition, a minority of researchers are developing prototypes or frameworks to deploy human–AI collaboration. For instance, Strobelt et al. (2022) designed a prototype that visualizes a more comprehensive picture of the AI system and enables users to modify model constraints, with the goal of enhancing human–AI collaboration. However, the existing body of research is either restricted to relatively specific, fragmented tasks or focuses on the conceptual foundations of human–AI collaboration, thereby providing little insight into a comprehensive and actionable design of AI-powered services for human–AI collaboration. The human–AI collaboration outcomes research stream, in turn, endeavors to unearth the outcomes of human–AI collaboration for individuals, organizations and society. For example, Sturm et al. (2021) examined how businesses may combine human learning and machine learning to improve organizational learning. Yet the fragmented structure of this field's studies makes it difficult to produce an integrated picture of the phenomenon.
3. Research directions
Conceivably, human–AI collaboration is prominent in AI-powered context-aware services. Yet, to the best of our knowledge, no research has examined the role of human–AI collaboration in context-aware systems, which necessitates an actionable and holistic framework for designers seeking to combine AI and context-awareness technologies and enhance human–AI collaboration. In line with the prior literature (Peeters et al., 2021; Schoonderwoerd et al., 2022), we define human–AI collaboration as the process of developing a socio-technical system in which human intelligence and AI can effectively collaborate by exploiting their individual strengths and coevolve via mutual improvement (Loske and Klumpp, 2021). A higher degree of human–AI collaboration indicates that users are willing to promote the performance of AI by contributing their own intelligence, while the AI-enabled system keeps learning from users to augment human intelligence. Following the two research streams in the domain of human–AI collaboration, we advance a framework for human–AI collaboration in AI-powered context-aware services that comprises two facets: (1) How does the design of AI-powered context-aware services at different stages influence human–AI collaboration? (2) What are the impacts of human–AI collaboration? The resulting theoretical framework and possible RQs are depicted in Figure 2.
3.1 Effects of the design of AI-powered context-aware services on human–AI collaboration
Table 3 summarizes the salient AI-related topics for the design of AI-powered context-aware services at different stages and the future RQs they may address. The RQs are identified from three perspectives: design, mechanisms and boundary. The “design” perspective explores challenges centered on the design of AI-powered context-aware services, such as “how to design explanation approaches for human–AI collaboration in AI-powered context-aware systems?” The “mechanisms” perspective focuses on the underlying mechanisms linking system design and human–AI collaboration, such as “how do explanation approaches influence human–AI collaboration in AI-powered context-aware systems?” The “boundary” perspective attends to the moderating effects on the relationships between system design and human–AI collaboration, for instance, “what user, system and task characteristics moderate the effects of explanation approaches on human–AI collaboration in AI-powered context-aware systems?” Specifically, we propose the following RQs.
3.1.1 How does the design of AI-powered context-aware services in the phase of context acquisition influence human–AI collaboration?
During the context acquisition phase, the context-aware system tries to collect users' personal context, either by detecting it implicitly or by requesting it from the user explicitly. Users' preferences and contexts may be gleaned from a wide range of loosely related data collected by AI-powered systems, which might easily trigger privacy concerns. Simultaneously, the system might build more accurate user profiles for context-aware services if users expressly designate their context. On the one hand, the system tries to avoid bothering users too much; on the other hand, user disclosure helps ensure that user profiles are accurate enough for customization. We thus believe that independence, which measures the extent to which tasks in an AI-powered context-aware system are performed by the system without human intervention, plays a significant role in this phase. We posit that human–AI collaboration may be influenced by the degree of independence between humans and AI when they engage in interactions, yet few studies have examined the underlying mechanisms of this impact. We hence propose the following RQs linked to the mechanisms and boundaries of the impacts of independence on human–AI collaboration:
How does the system independence level influence human–AI collaboration in AI-powered context-aware systems?
What user, system and task characteristics moderate the effect of system independence on human–AI collaboration in AI-powered context-aware systems?
AI-powered systems should determine their acceptable degree of independence in the context acquisition phase, which implies a trade-off between AI participation and user involvement. It might be argued that privacy-sensitive contextual information, such as health problems, is better provided by the user, whereas other contextual information can be gathered automatically by the system. For example, AI-powered context-aware systems may find it challenging to identify users' affective and social cues, in which case it would be preferable to tilt the balance toward user involvement rather than AI participation. We therefore propose the following RQs regarding the boundaries of system independence:
How to identify the boundary between AI and humans during the process of context acquisition in AI-powered context-aware systems?
Under what circumstances is a system's automatic collection of context better than users providing their context in AI-powered context-aware systems?
Users may be reluctant to share their context in order to avoid privacy concerns or a disruptive user experience during their interactions. Nevertheless, if users believe that they share a goal with the system, they might be inspired to cooperate with AI-powered context-aware systems. In addition, both users and AI might contribute contextual information at the same time; thus, it is vital to study ways to combine data from AI and users coherently. We therefore propose the following RQs regarding the design of system independence:
How to motivate users to provide context information in synergy with the AI-powered context-aware systems?
How to integrate contextual data from AI and users coherently for AI-powered context-aware systems?
What principles should be used to identify the ultimate context for AI-powered context-aware systems when user data and AI data do not match?
3.1.2 How does the design of AI-powered context-aware services in the phase of context interpretation influence human–AI collaboration?
Ample prior research has recognized the role of explainability in human–AI collaboration (Naiseh et al., 2021; Ramchurn et al., 2021; Zerilli et al., 2022). As contended by Naiseh et al. (2021), “human-AI collaborative decision-making tools are being increasingly applied in critical domains such as healthcare … An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to the users” (p. 1857). Notably, to promote the performance of AI, explainable artificial intelligence (XAI), which aims to present understandable justifications of outputs or procedures to end users, has seen explosive growth over the last several years (Holzinger et al., 2021; Shin, 2021). During context interpretation, AI algorithms are often seen as “black boxes”, so users might have no clue how their contextual information is interpreted, which may deter them from cooperating with AI-based systems. Consequently, in accordance with Naiseh et al. (2021), we propose that the degree of explainability of a context-aware system should be considered for human–AI collaboration during the context interpretation phase.
In accordance with the definition of “interpretability” stemming from Doshi-Velez and Kim (2017), we define the explainability of a context-aware system as its capability to explain its processes or outcomes in a way that is understandable to users. Users may place greater faith in AI if a context-aware system provides features that make the underlying algorithms transparent to end users. We thus consider the following RQs regarding the mechanisms and boundaries of explainability on human–AI collaboration:
How do explanation approaches influence human–AI collaboration in AI-powered context-aware systems?
What user, system and task characteristics moderate the effect of explanation approaches on human–AI collaboration in AI-powered context-aware systems?
Adadi and Berrada (2018) summarized four motivations for explainability: explain to justify, explain to control, explain to improve and explain to discover, showing possibilities for different explanation approaches. Similarly, Lim et al. (2009) suggested that user experience with context-aware systems varies depending on the kind of explanation the system provides: why explanations (Why did the system do something?) or why-not explanations (Why did the system not do something?). It is still unclear how to achieve human–AI collaboration in a context-aware system via the configuration of various explanation features, or how these varied explanation features facilitate human–AI cooperation in different ways. All of these questions concerning the design of explanation features for human–AI collaboration are interesting directions for future research (a toy sketch of why/why-not explanation generation follows the questions below):
How to design explanation approaches for human–AI collaboration in AI-powered context-aware systems?
Do various explanation approaches (“why” and “why not” explanations) influence human–AI collaboration differently in AI-powered context-aware systems?
How can the quality of the explanations be evaluated in terms of human–AI collaboration in AI-powered context-aware systems?
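As promised above, here is a toy sketch of generating why and why-not explanations (in the sense of Lim et al., 2009) from a rule trace; the rule, wording and context are invented for illustration.

```python
# A single invented rule driving a context-aware action.
rules = {
    "brew_coffee": lambda c: c["location"] == "kitchen" and c["time"] >= "07:45",
}

def explain(action: str, ctx: dict) -> str:
    """Produce a 'why' explanation when the rule fired, 'why not' otherwise."""
    if rules[action](ctx):
        return f"Why '{action}'? Its rule matched your context: {ctx}."
    return f"Why not '{action}'? Its rule did not match your context: {ctx}."

print(explain("brew_coffee", {"location": "bedroom", "time": "07:50"}))
# -> a "why not" explanation, since the user is not in the kitchen
```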
The scrutability of a context-aware system allows users to correct system assumptions and adjust the model of context interpretation (Pu et al., 2012). Based on Cheverst et al. (2005), we define scrutability as the ability of a context-aware system to enable “a user to interrogate her user model in order to understand the system's behavior” (p. 236). For instance, Setten et al. (2004) designed a recommender system that enables users to indicate the type of context used for generating recommendations. Owing to the inherent mismatch between how context-aware systems represent the physical world and how users perceive it, a growing level of scrutability augments users' control over the context-aware system, thereby fostering human–AI collaboration. However, although scrutability features have been discussed in the prior literature (Balog et al., 2019), their role in human–AI collaboration remains unclear. Several directions for future research remain, for example: how to design features that promote the level of scrutability in context-aware systems? Does the level of scrutability impact human–AI collaboration via control beliefs or other psychological mechanisms? We thus propose the following RQs:
How to design scrutability features for human–AI collaboration in AI-powered context-aware systems?
When can users tune their own user models, and how do we set and measure the responsibilities of AI and humans in AI-powered context-aware systems?
How do scrutability features influence human–AI collaboration in AI-powered context-aware systems?
What user, system and task characteristics moderate the effects of scrutability features on human–AI collaboration in AI-powered context-aware systems?
3.1.3 How does the design of AI-powered context-aware services in the phase of context application influence human–AI collaboration?
In the context application phase of an AI-powered context-aware system, users may voice their opinions on the suitability of information and services, as well as on the manner in which they are presented and delivered. Practically, e-commerce mobile applications such as Amazon allow users to tap the “like” or “dislike” button next to recommendations while they browse the product list. Furthermore, a growing number of studies have begun to investigate the role of interactive feedback in AI-based systems. For instance, Cai et al. (2019) validated that interactive dynamic feedback can augment human intelligence during decision-making in a deep neural network (DNN)-backed content-based image retrieval (CBIR) system. In their context, doctors employ a medical image (e.g. an x-ray image) to search an AI-powered system for similar images that support their diagnoses. After receiving imperfect results, doctors can leverage three refinement tools to express their feedback: refine-by-region (select a region of interest), refine-by-example (set an example image for retrieval) and refine-by-concept (specify a clinical concept). Such feedback not only helps users improve their mental models but also lets them probe algorithmic processes over several hypothesis-test cycles, with the potential to strengthen human–AI collaboration. Users are able to convey their varied perspectives on AI-based systems via several feedback tools; for example, users may indicate the probability of their willingness to accept the system output (Shrestha et al., 2019). However, it remains unclear how to design feedback tools that elicit human–AI collaboration in AI-powered context-aware systems, as well as the mechanisms and boundaries of such impacts. We thus propose the following RQs (a minimal sketch of such refinement tools follows the questions below):
How to design interactive feedback tools to promote human–AI collaboration in AI-powered context-aware systems?
How to design feedback features to promote shared control between AI and users in AI-powered context-aware systems?
How to design feedback features to mitigate unpredictable AI behaviors in AI-powered context-aware systems?
What user, system and task characteristics moderate the effect of feedback features on human–AI collaboration in AI-powered context-aware systems?
How do feedback features influence human–AI collaboration in AI-powered context-aware systems?
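The sketch below reduces the three refinement tools described by Cai et al. (2019) to simple filters over an invented candidate-image data model; it is meant only to make the feedback operations concrete, not to reproduce their DNN-backed system.

```python
# Invented candidate-image data model; each tool narrows the candidate set.
candidates = [
    {"id": "img1", "regions": ["left_lung"], "similar_to": ["img9"], "concepts": ["opacity"]},
    {"id": "img2", "regions": ["right_lung"], "similar_to": [], "concepts": []},
]

def refine_by_region(items: list[dict], region: str) -> list[dict]:
    return [c for c in items if region in c["regions"]]

def refine_by_example(items: list[dict], example_id: str) -> list[dict]:
    return [c for c in items if example_id in c["similar_to"]]

def refine_by_concept(items: list[dict], concept: str) -> list[dict]:
    return [c for c in items if concept in c["concepts"]]

print(refine_by_concept(candidates, "opacity"))  # keeps img1 only
```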
Another essential factor to be taken into account at the context application stage is adaptability, that is, the capacity of an AI-powered context-aware system to help users “recognize and adapt fluidly to new and dynamic scenario” (Dubey et al., 2020, p. 4). AI-based systems with high adaptability can modify their output in a timely manner when circumstances change. A higher level of adaptability assists users in recognizing their shared goals and common ground with the system. As such, the impact of the level of adaptability on human–AI collaboration seems a fruitful avenue for future research. We hence propose the following RQs on the mechanisms and boundaries of such impacts:
How does the level of system adaptability influence human–AI collaboration in AI-powered context-aware systems?
What user, system and task characteristics moderate the effect of system adaptability on human–AI collaboration in AI-powered context-aware systems?
In addition, as the context changes, AI-based systems with high adaptability evolve their services to better suit the new circumstances. For example, a smartphone automatically adjusts the brightness of the screen depending on the time of day and lowers the amount of blue light emitted in the evening. However, since limited research has explored how to design system adaptability for human–AI collaboration, we propose the following RQ:
How to design system adaptability for human–AI collaboration in AI-powered context-aware systems?
3.2 Positive impacts of human–AI collaboration on individuals and organizations
Huang et al. (2019) characterized three goals of colearning for human–AI collaboration: “mutual understanding”, “mutual benefits” and “mutual growth”. In line with their study, we divide the impacts of human–AI collaboration into three types. Mutual understanding refers to the ability of learning entities (human or AI) to anticipate others and to be anticipated by others. Mutual benefits describes the system achieving superior results that neither humans nor AI can achieve alone. Mutual growth refers to the extent to which AI and humans feel they coevolve over time. Table 4 summarizes the research directions in terms of the positive impacts of human–AI collaboration on individuals and organizations. Human–AI collaboration fosters users' awareness of their shared control and common ground with the system. We assert that how human–AI collaboration influences users' perceptions and decision-making processes deserves further investigation. Consequently, we propose the following RQs regarding the “mutual understanding” of humans and AI:
How does human–AI collaboration influence users' perceived AI mental model in AI-powered context-aware systems?
How does human–AI collaboration influence users' perceived shared control in AI-powered context-aware systems?
To guide the design of human–machine teaming systems, Mcdermott et al. (2018) described detailed requirements for system developers. In their research, they proposed the concept of “calibrated trust” for autonomous system design, which is salient when “technology provides indicators enabling human operators to understand when and how much to trust an automated partner in context” (p. 2). Numerous studies have explored the antecedents and outcomes of trust in the information systems (IS) community (Panniello et al., 2016; Seymour et al., 2021). However, owing to the characteristics of AI-powered context-aware services, trust toward such services differs significantly. Even a perfect algorithm is unable to determine how a user interprets the physical environment; hence, AI-powered context-aware systems tend to provide services that are fraught with ambiguity. In this vein, users may build varying levels of trust depending on the context in which they use the services. We thus define calibrated trust as the extent to which users are able to decide when and how much to trust AI-powered context-aware systems. To calibrate users' trust, Mcdermott et al. (2018) suggested three requirements: providing the information source, elaborating automation mechanisms and providing a confidence level. In addition, when an AI-powered context-aware system offers high transparency and explainability, users are more likely to evaluate whether the intent of the AI aligns with their own, fostering positive human–AI affective relationships. Consequently, we propose the following RQs related to the “mutual benefits” of humans and AI:
How does human–AI collaboration influence users' trust calibration in AI-powered context-aware systems?
How does human–AI collaboration influence users' affective relationship with AI in AI-powered context-aware systems?
Moreover, Yang et al. (2020) claimed that creating user-system coevolution and tailored outputs are two challenges that must be overcome to enhance the user experience in human–AI interaction. Notably, AI-based systems will “grow” as their capabilities evolve, absorbing new insights from fresh data and becoming “better” than ever before. In line with Huang et al. (2019), we define mutual growth as the extent to which AI and humans feel they coevolve over time. We thus propose the following RQ related to the “mutual growth” of humans and AI:
How does human–AI collaboration influence users' perceptions of mutual growth with AI-powered context-aware systems?
The use of AI as an IT strategy is becoming more commonplace in businesses, and a growing number of studies are beginning to explore how employees in organizations cooperate with AI in their work (Budhwar et al., 2022; McNeese et al., 2021; Pandey et al., 2022). Initially, humans and AI are two distinct entities with different mental models; they then foster mutual understanding through increased interactions and continuous learning over time. For instance, McNeese et al. (2021) indicated that trust in team members in human–machine teaming is associated with team performance. Yet, related research is still in its early stages, with the majority of studies being conceptual in nature. It is unknown, empirically, how human–AI teams influence the policies, priorities and culture of a business. Notably, employees may be uncertain as to whether AI should be seen as a colleague or a leader.
Moreover, Robert et al. (2020, p. 25) defined an AI audit as “an inspection of the AI's underlying logic, decision criteria, and data sources in an attempt to validate the AI”. The IT department of an organization may periodically conduct large-scale computer simulations or code checking. Notably, how to design AI audits for an organization is still an open question. In sum, it is promising to examine the following questions related to the “mutual understanding” of humans and AI in organizations:
How does human–AI collaboration influence an organization's policies, priorities and culture?
How does human–AI collaboration influence employees' social roles in an organization?
How to design AI audits for human–AI collaboration in organizations?
AI has been utilized to manage employees and complete organizational tasks in various ways in the workplace (Terziyan et al., 2018). For instance, it can prescreen candidates' resumes (Budhwar et al., 2022), and AI-enabled chatbots may assist new employees in settling into their positions. In addition, physical robots are capable of cooperating with workers to complete a task jointly (Tóth et al., 2022). Yet, it is uncertain how human–AI collaboration influences organizational performance. We propose the following RQ in terms of the “mutual benefits” of humans and AI:
How does human–AI collaboration improve organizational performance?
Human–AI collaboration has the potential to boost the mutual growth of AI-powered context-aware systems and their users, allowing context-aware systems to grasp how people reason about the environment, culture and social interactions, which they can incorporate into their intelligence. Kaplan and Haenlein (2019) described three kinds of AI-based systems characterized by cognitive intelligence, social intelligence and emotional intelligence. Specifically, cognitive, emotional and social intelligence are connected to specific cognitive, emotional and social skills that may be learned and imitated by both humans and AI. There has been little empirical research investigating the consequences of human–AI collaboration for the cognitive, emotional and social intelligence of AI-enabled systems. Hence, we propose the following RQ on the “mutual growth” of humans and AI in organizations:
How does human–AI collaboration influence AI-powered systems' cognitive intelligence, emotional intelligence and social intelligence as time evolves in organizations?
3.3 Negative impacts of human–AI collaboration on individuals and organizations
Apart from positive outcomes, it is plausible that negative impacts, such as ethical challenges, arise in AI-powered context-aware services as a result of human–AI collaboration. In particular, as Lepri et al. (2018) indicated, algorithmic results should not be biased, discriminatory or unfair. However, an AI-based resume filtering system developed by Amazon was found to be biased against women [3]: because the majority of historical candidates were male, the system leaned toward male candidates as time evolved. AI-based systems learn discrimination from humans, and human–AI collaboration may impose human discrimination on human–AI teams. Research regarding fairness has begun to attract attention (Ferrer et al., 2021). Consequently, we propose the following RQ: How to reduce and correct AI discrimination in AI-powered context-aware systems?
Numerous AI-enabled systems generate services automatically. Because the system can infer user context from loosely related information and deliver services promptly, users may be unaware of why they are receiving such services and powerless to stop them, resulting in a loss of control over their usage experience. We propose the RQ: How to reduce and correct perceptions of “loss of control” in AI-powered context-aware systems?
In addition, intelligent AI-powered context-aware services are expected to perform complicated tasks on behalf of users. Yet, relying too much on AI-enabled technologies may hinder people's cognitive and social skills. We thus propose that “dependence lock-in” is an undesirable outcome of overreliance on AI-powered context-aware services (Kalimeri and Tjostheim, 2020). The “dependence lock-in” problem means that users who rely too much on human–AI collaboration may diminish their potential to accomplish tasks individually, which may in turn degrade their capacity to make decisions on their own. Hence, if AI-based systems encounter failures, users might have difficulty solving tasks independently. Examining lock-in problems and how to prevent them is promising; we thus propose the following RQ: How to reduce and correct “dependence lock-in” in AI-powered context-aware systems? Table 5 summarizes possible future research directions concerning the negative impacts of human–AI collaboration on individuals and organizations.
4. Conclusions
AI is of great significance in our daily lives, and researchers and practitioners face challenges in designing AI-powered context-aware services. We draw attention to the process underlying context-aware systems and emphasize the role of human–AI collaboration. In this study, we propose future RQs and theoretical perspectives in two directions: the effects of AI-powered context-aware service design on human–AI collaboration and the impacts of human–AI collaboration. Through our discussion, we hope to open up avenues for both theory and practice.
Developmental phases of context-aware services
Source | Phase 1 | Phase 2 | Phase 3 | Phase 4
---|---|---|---|---
Bernardos et al. (2008) | Context acquisition | Information processing | Reasoning and decision |
Fischer (2012) | Context discovery and acquisition | Context representation and management | Context utilization |
Perera et al. (2014) | Context acquisition | Context modeling | Context reasoning | Context dissemination
Grifoni et al. (2018) | Context acquisition | Context representation | Context adaptation |
Our partition | Context acquisition | Context interpretation | Context application |
Role of human–AI collaboration in context-aware systems

Phase | Definition | Role of artificial intelligence (AI) | Challenges | Human augments AI | AI augments human
---|---|---|---|---|---
Context acquisition | The process of acquiring the required contextual information | Broadens the sources of contextual information (e.g. image analysis) | Mismatches between distinct sources may threaten the correctness of contextual information | Users validate and supply their context | Provide information source to users
Context interpretation | The process of modeling the context and deriving insights from statistical approaches | Performs rule mining and context inference | The black-box nature of algorithms may confuse users about how context is interpreted | Users reedit contextual data and fix interpretation errors | Elaborate on adaptation mechanisms to users
Context application | The process of delivering context-aware services based on users' needs and profiles | Personalizes services and optimizes the timing and manner of delivery | Delivered services and their presentation may not suit users' needs | Users provide feedback on the delivered services | Provide confidence level for results to users
Design of AI-powered context-aware services on human–AI collaboration

Stage | Research topic | Design RQs | Mechanism RQs | Boundary RQs
---|---|---|---|---
Context acquisition | Independence | How to motivate users to provide context information in synergy with the system? How to integrate contextual data from AI and users coherently? What principles should identify the ultimate context when user data and AI data do not match? | How does the system independence level influence human–AI collaboration? | What user, system and task characteristics moderate the effect of system independence? How to identify the boundary between AI and humans during context acquisition? Under what circumstances is automatic context collection better than user-provided context?
Context interpretation | Explainability | How to design explanation approaches for human–AI collaboration? Do “why” and “why not” explanations influence human–AI collaboration differently? How can the quality of explanations be evaluated? | How do explanation approaches influence human–AI collaboration? | What user, system and task characteristics moderate the effect of explanation approaches?
Context interpretation | Scrutability | How to design scrutability features for human–AI collaboration? When can users tune their own user models, and how to set and measure the responsibilities of AI and humans? | How do scrutability features influence human–AI collaboration? | What user, system and task characteristics moderate the effects of scrutability features?
Context application | Adaptability | How to design system adaptability for human–AI collaboration? | How does the level of system adaptability influence human–AI collaboration? | What user, system and task characteristics moderate the effect of system adaptability?
Context application | Feedback | How to design interactive feedback tools to promote human–AI collaboration? How to design feedback features to promote shared control between AI and users? How to design feedback features to mitigate unpredictable AI behaviors? | How do feedback features influence human–AI collaboration? | What user, system and task characteristics moderate the effect of feedback features?
Positive impact of human–AI collaboration on individuals and organizations

Goal of human–AI collaboration | Individual level | Organizational level
---|---|---
Mutual understanding | How does human–AI collaboration influence users' perceived AI mental model? How does it influence users' perceived shared control? | How does human–AI collaboration influence an organization's policies, priorities and culture? How does it influence employees' social roles? How to design AI audits for human–AI collaboration?
Mutual benefit | How does human–AI collaboration influence users' trust calibration? How does it influence users' affective relationship with AI? | How does human–AI collaboration improve organizational performance?
Mutual growth | How does human–AI collaboration influence users' perceptions of mutual growth with AI-powered context-aware systems? | How does human–AI collaboration influence AI-powered systems' cognitive, emotional and social intelligence over time?
Negative impact of human–AI collaboration on individuals and organizations
Research topic | Possible future research questions |
---|---|
AI discrimination | How to reduce and correct AI discrimination from AI-powered context-aware systems? |
Loss of control | How to reduce and correct perceptions of “loss of control” for AI-powered context-aware systems? |
Lock-in | How to reduce and correct “dependence lock-in” for AI-powered context-aware systems? |
Summary of the extant literature on human–AI collaboration
Author(s) | Context of human–AI collaboration | Methods | Research stream(s) (cultivation and/or outcomes) | Insights on human–AI collaboration
---|---|---|---|---
Loske and Klumpp (2021) | Human–AI collaboration in outbound logistics of grocery retailing | Fuzzy data envelopment analysis (DEA) | ✓ | For the vehicle routing problems examined, outbound logistics activities benefit more from human–AI collaboration than from human-only or AI-only approaches
Pynadath et al. (2018) | Human–machine teams in autonomous systems | Conceptual | ✓ | Transparent communication can boost teammates' situation awareness, strengthen trust and enhance the performance of the human–automation team
Lebovitz et al. (2022) | Human–AI augmentation for medical diagnosis | Ethnographic field study | ✓ |
Fügener et al. (2021) | Human–AI complementarity | Online experiment | ✓ |
Schoonderwoerd et al. (2022) | Co-learning in human–AI teams | Experiment | ✓ |
Korteling et al. (2021) | Human–AI teaming | Conceptual | ✓ |
Okamura and Yamada (2020) | Collaboration between human and autonomous AI agents | Online experiment with a web-based drone simulator | ✓ |
Chen and Krishnamurthy (2020) | Human–AI collaboration for mind mapping | Algorithm design | ✓ | Comparative analysis of human–AI cooperation and human–human collaboration in mind mapping
Baki Kocaballi et al. (2020) | AI documentation assistants | Co-design workshops with general practitioners | ✓ | AI documentation assistants for care consultations still need human supervision; to guarantee patient safety, doctor safety and quality of care, a variety of human–AI collaboration models will need to be developed and tested
van den Broek et al. (2021) | Human–AI hybrids for hiring | Ethnographic field study | ✓ | Human–AI hybrids are the outcome of a mutual learning process in which intense interaction with the AI drove users to reflect on how they created knowledge
Silva de Oliveira et al. (2022) | Human–AI hybrid systems | Conceptual | ✓ | Contributes to the development of cognitive systems by offering recommendations for the appropriate application of hybrid human–AI knowledge
Grover et al. (2020) | Human-in-the-loop planning | Prototype design | ✓ | Proposes a decision support system for sequential planning problems that keeps a human in the loop
Puranam (2021) | Human–AI collaborative decision-making | Conceptual | ✓ | Proposes typologies for examining various forms of human–algorithm division of labor and the learning configurations in which they are grouped
Andrews et al. (2022) | Human–AI teams | Literature review | ✓ | Shared mental models between humans and AI are vital for team performance
Liu et al. (2022) | The human–AI scoring system | Experiment | ✓ | Radiologists may increase the accuracy of COVID-19 severity assessment by using the human–AI scoring system
Asan et al. (2020) | Human–AI collaboration in healthcare | Conceptual | ✓ | Optimal trust, whereby humans and AI each maintain a degree of skepticism about the other's judgments, is needed for human–AI collaboration in healthcare
Sturm et al. (2021) | Coordination of human learning and machine learning | Agent-based simulations | ✓ ✓ |
Metcalf et al. (2019) | Artificial swarm intelligence | Conceptual | ✓ | Instead of focusing on how AI may replace humans, artificial swarm intelligence (ASI) encourages people to consider how it could augment human skills
Naiseh et al. (2021) | Human–AI collaborative decision-making | Multistage qualitative research method | ✓ | Examines the design of interactive explanations to support trust calibration and provides design principles and methods for such explanations
Longoni and Cian (2022) | AI-based recommendations | Experiments | ✓ | The word-of-machine effect describes the belief that AI recommenders are more competent than humans at utilitarian tasks but less competent at hedonic ones; human–AI collaboration would reduce this effect
Knop et al. (2022) | AI-enabled clinical decision support systems | Literature review | ✓ | Offers a thorough overview of crucial factors, such as training data quality, explainability and medical expertise, that affect how medical professionals interact and collaborate with AI-based clinical decision support systems
Siemon (2022) | AI-based systems for idea evaluation | Experiments | ✓ | In a collaboration scenario, having an AI-based system evaluate ideas reduces apprehension about an upcoming evaluation, whereas human evaluation increases it
Pandey et al. (2022) | Human-in-the-loop machine learning | Experiments | ✓ | Models human annotation errors in stream processing and develops a method for mitigating them
Ramchurn et al. (2021) | Trustworthy human–AI partnerships | Conceptual | ✓ | Discusses prerequisites for trustworthy human–AI partnerships, such as human–AI team simulations and explanation features for verifying and validating human–AI relationships
Jarrahi (2018) | Human–AI symbiosis in organizational decision-making | Conceptual | ✓ ✓ | Analyzes how humans and AI might work together in organizational decision-making processes that are often marked by ambiguity and complexity
Mirbabaie et al. (2022) | Human–AI collaboration at the workplace | Survey + interview | ✓ | Identifies three indicators of AI identity threat arising from human–AI collaboration in the workplace: changes to job, loss of status position and AI identity
McNeese et al. (2021) | Human–autonomy teaming | Experiments | ✓ ✓ | Trusting a teammate (autonomous agent or human) is related to improved team performance, and such trust may grow over time
Jia et al. (2022) | Human–AI teaming | Prototype design | ✓ | Proposes a visual explainable active learning approach to enhance human–AI cooperation
Burton et al. (2020) | AI-based systems | Literature review | ✓ | Organizes the proposed causes of, and remedies for, algorithm aversion into five categories; human-in-the-loop decision-making is one potential remedy
Oh et al. (2018) | Human–AI cocreation | Prototype design + survey | ✓ ✓ |
Cai et al. (2019) | Human–AI collaborative medical decision-making | Interviews + qualitative laboratory study | ✓ | In addition to case-specific justifications for AI models, physicians needed upfront knowledge of a model's fundamental, global properties to collaborate successfully with AI
Nakahashi and Yamada (2021) | The human–agent team | Markov decision process | ✓ ✓ | Models a collaborative agent that provides implicit guidance to balance human autonomy and task performance
Cabitza et al. (2021) | Human–AI interaction in collaborative medical tasks | Simulation | ✓ | Effective interaction protocols may yield better team decision-making performance than individual agents achieve alone
Maadi et al. (2021) | Human-in-the-loop machine learning for medical applications | Literature review | ✓ ✓ | Summarizes the roles of humans in the machine learning process and how they interact with machine learning approaches
Steyvers et al. (2022) | Human–AI hybrid systems | Bayesian combination model | ✓ | Differentiating the mistakes made by humans and machines across multiple class labels could improve hybrid human–machine performance
Sundar (2020) | Human–AI interaction in mediated communication | Conceptual | ✓ ✓ | Various affordances are predicted to influence user engagement and experience according to how much they allow users to interact with the system, give them a sense of control and offer them tangible benefits
Shrestha et al. (2019) | Human and AI-based organizational decision-making | Conceptual | ✓ ✓ | Compares human and AI-based decision-making along different dimensions and offers a paradigm detailing how the two might be combined to optimize organizational decision-making
Zerilli et al. (2022) | Human–AI teams | Conceptual | ✓ | Discusses the role that different forms of algorithmic transparency play in rebuilding trust when humans and AI work together
Newton et al. (2022) | Human–bot teams in open-source software development | Experiments | ✓ ✓ |
Sowa et al. (2021) | Human–AI collaboration in the workplace | Interviews | ✓ | Human–AI cooperation was shown to boost productivity
Peeters et al. (2021) | Hybrid collective intelligence | Conceptual | ✓ ✓ | The impacts of AI on society are debated from three angles: the techno-centric, human-centric and collective intelligence-centric views
Strobelt et al. (2022) | Human–AI collaboration for data-backed text generation | Prototype design | ✓ | Designs a prototype, GenNI (Generation Negotiation Interface), to enhance human–AI collaboration in text generation
Terziyan et al. (2018) | AI in Industry 4.0 | Prototype design | ✓ ✓ | Proposes the Pi-Mind technology, which captures and duplicates human decision models and allows humans and AI to share responsibility for the consequences of their decisions
Cañas (2022) | Human–AI collaboration | Conceptual | ✓ ✓ | Cosupervision and coresponsibility between collaborators (i.e. human and AI) are required for human–AI collaboration
Wei et al. (2022) | Human–AI collaborative conversational systems | Quasi-experimental design | ✓ | Develops a hierarchical categorization of human–AI collaborative conversations that takes users' informational demands into account
Rajpurkar et al. (2022) | AI in health and medicine | Literature review | ✓ ✓ | Human–AI collaboration is a promising future research direction for the medical AI domain
Veitch and Alsos (2022) | Human–AI interaction in autonomous ship systems | Literature review | ✓ | Human–machine collaboration could lead to better system performance than either counterpart could accomplish by operating alone
Yu and Li (2022) | Human–AI collaborative work | Online experiment | ✓ | Workers' trust in AI is affected by the transparency of AI decision-making in human–AI collaborative work environments
Rai et al. (2019) | Human–AI hybrids in digital platforms | Conceptual | ✓ | Human–AI hybrids may range from substitution (AI replaces people) to augmentation (humans and AI enhance each other) to assemblage (AI and humans work as an integrated entity)
Raisch and Krakowski (2021) | AI in organizations | Literature review | ✓ ✓ | Whereas automation refers to AI taking over human tasks, augmentation refers to people collaborating with machines to complete a task; automation and augmentation are not mutually exclusive but inextricably linked
Askarisichani et al. (2022) | The human–AI nexus in group decision-making | Literature review | ✓ | Discusses significant factors in managing human–AI decision-making in group environments
Notes
References
Adadi, A. and Berrada, M. (2018), “Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)”, IEEE Access, Vol. 6, pp. 52138-52160.
Almagrabi, A.O. and Al-Otaibi, Y.D. (2020), “A survey of context-aware messaging-addressing for sustainable internet of things (IoT)”, Sustainability, Vol. 12 No. 10, p. 4105.
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P.N., Inkpen, K., Teevan, J., Kikin-Gil, R. and Horvitz, E. (2019), “Guidelines for human-AI interaction”, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, pp. 1-13.
Amoretti, M., Belli, L. and Zanichelli, F. (2017), “UTravel: smart mobility with a novel user profiling and recommendation approach”, Pervasive and Mobile Computing, Vol. 38, pp. 474-489.
Andrews, R.W., Lilly, J.M., Srivastava, D. and Feigh, K.M. (2022), “The role of shared mental models in human-AI teams: a theoretical review”, Theoretical Issues in Ergonomics Science, pp. 1-47 (In press).
Arsénio, A., Serra, H., Francisco, R., Nabais, F., Andrade, J. and Serrano, E. (2014), “Internet of intelligent things: bringing artificial intelligence into things and communication networks”, in Xhafa, F. and Bessis, N. (Eds), Inter-Cooperative Collective Intelligence: Techniques and Applications, Springer, Berlin, Heidelberg, pp. 1-37.
Asan, O., Bayrak, A.E. and Choudhury, A. (2020), “Artificial intelligence and human trust in healthcare: focus on clinicians”, Journal of Medical Internet Research, Vol. 22 No. 6, e15154.
Askarisichani, O., Bullo, F., Friedkin, N.E. and Singh, A.K. (2022), “Predictive models for human-AI nexus in group decision making”, Annals of the New York Academy of Sciences, Vol. 1514 No. 1, pp. 70-81.
Augusto, J., Aztiria, A., Kramer, D. and Alegre, U. (2017), “A survey on the evolution of the notion of context-awareness”, Applied Artificial Intelligence, Vol. 31 Nos 7-8, pp. 613-642.
Baki Kocaballi, A., Ijaz, K., Laranjo, L., Quiroz, J.C., Rezazadegan, D., Tong, H.L., Willcock, S., Berkovsky, S. and Coiera, E. (2020), “Envisioning an artificial intelligence documentation assistant for future primary care consultations: a co-design study with general practitioners”, Journal of the American Medical Informatics Association, Vol. 27 No. 11, pp. 1695-1704.
Balog, K., Radlinski, F. and Arakelyan, S. (2019), “Transparent, scrutable and explainable user models for personalized recommendation”, Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Association for Computing Machinery, New York, NY, pp. 265-274.
Bastug, E., Bennis, M., Medard, M. and Debbah, M. (2017), “Toward interconnected virtual reality: opportunities, challenges, and enablers”, IEEE Communications Magazine, Vol. 55 No. 6, pp. 110-117.
Bernardos, A.M., Tarrío, P. and Casar, J.R. (2008), “A data fusion framework for context-aware mobile services”, 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, IEEE, Seoul, pp. 606-613.
Budhwar, P., Malik, A., De Silva, M.T.T. and Thevisuthan, P. (2022), “Artificial intelligence – challenges and opportunities for international HRM: a review and research agenda”, The International Journal of Human Resource Management, Vol. 33 No. 6, pp. 1065-1097.
Burton, J.W., Stein, M.K. and Jensen, T.B. (2020), “A systematic review of algorithm aversion in augmented decision making”, Journal of Behavioral Decision Making, Vol. 33 No. 2, pp. 220-239.
Cabitza, F., Campagner, A. and Sconfienza, L.M. (2021), “Studying human-AI collaboration protocols: the case of the Kasparov's law in radiological double reading”, Health Information Science and Systems, Vol. 9 No. 1, p. 8.
Cai, C.J., Winter, S., Steiner, D., Wilcox, L. and Terry, M. (2019), “‘Hello AI’: uncovering the onboarding needs of medical practitioners for human–AI collaborative decision-making”, in Lampinen, A., Gergle, D. and Shamma, D.A. (Eds), Proceedings of the ACM on Human-Computer Interaction, Association for Computing Machinery, New York, NY, Vol. 3, pp. 1-24.
Cañas, J.J. (2022), “AI and ethics when human beings collaborate with AI agents”, Frontiers in Psychology, Vol. 13, 836650.
Chen, T.-J.J. and Krishnamurthy, V.R. (2020), “Investigating a mixed-initiative workflow for digital mind-mapping”, Journal of Mechanical Design, Vol. 142 No. 10, pp. 101404-101416.
Cheverst, K., Byun, H.E., Fitton, D., Sas, C., Kray, C. and Villar, N. (2005), “Exploring issues of user model transparency and proactive behaviour in an office environment control system”, User Modelling and User-Adapted Interaction, Vol. 15 Nos 3-4, pp. 235-273.
Coppola, P., Mea, V.D., Di Gaspero, L., Lomuscio, R., Mischis, D., Mizzaro, S., Nazzi, E., Scagnetto, I. and Vassena, L. (2009), “AI techniques in a context-aware ubiquitous environment”, in Hassanien, A.-E., Abawajy, J.H., Abraham, A. and Hagras, H. (Eds), Pervasive Computing, Springer, London, pp. 157-180.
del Carmen Rodríguez-Hernández, M. and Ilarri, S. (2021), “AI-based mobile context-aware recommender systems from an information management perspective: progress and directions”, Knowledge-Based Systems, Vol. 215, 106740.
Dey, A.K., Abowd, G.D. and Salber, D. (2001), “A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications”, Human-Computer Interaction, Vol. 16 No. 2, pp. 97-166.
Doshi-Velez, F. and Kim, B. (2017), “A roadmap for a rigorous science of interpretability”, ArXiv Preprint ArXiv:1702.08608v1, pp. 1-13, doi: 10.48550/arXiv.1702.08608.
Dubey, A., Abhinav, K., Jain, S., Arora, V. and Puttaveerana, A. (2020), “HACO: a framework for developing human-AI teaming”, Proceedings of the 13th Innovations in Software Engineering Conference (formerly known as India Software Engineering Conference), Association for Computing Machinery, New York, NY, pp. 1-9.
van Engelenburg, S., Janssen, M. and Klievink, B. (2019), “Designing context-aware systems: a method for understanding and analysing context in practice”, Journal of Logical and Algebraic Methods in Programming, Vol. 103, pp. 79-104.
Ferrer, X., van Nuenen, T., Such, J.M., Cote, M. and Criado, N. (2021), “Bias and discrimination in AI: a cross-disciplinary perspective”, IEEE Technology and Society Magazine, Vol. 40 No. 2, pp. 72-80.
Fischer, G. (2012), “Context-aware systems: the ‘right’ information, at the ‘right’ time, in the ‘right’ place, in the ‘right’ way, to the ‘right’ person”, in Tortora, G., Levialdi, S. and Tucci, M. (Eds), Proceedings of the International Working Conference on Advanced Visual Interfaces, Association for Computing Machinery, New York, NY, pp. 287-294.
Fügener, A., Grahl, J., Gupta, A. and Ketter, W. (2021), “Will humans-in-the-loop become borgs? Merits and pitfalls of working with AI”, MIS Quarterly, Vol. 45 No. 3, pp. 1527-1556.
Grifoni, P., D'Ulizia, A. and Ferri, F. (2018), “Context-awareness in location based services in the big data era”, in Skourletopoulos, G. (Ed.), Mobile Big Data, Springer, Cham, Vol. 10, pp. 85-127.
Grover, S., Sengupta, S., Chakraborti, T., Mishra, A.P. and Kambhampati, S. (2020), “RADAR: automated task planning for proactive decision support”, Human-Computer Interaction, Vol. 35 Nos 5/6, pp. 387-412.
Holzinger, A., Malle, B., Saranti, A. and Pfeifer, B. (2021), “Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI”, Information Fusion, Vol. 71, pp. 28-37.
Hong, J.Y., Suh, E.H. and Kim, S.J. (2009), “Context-aware systems: a literature review and classification”, Expert Systems with Applications, Vol. 36 No. 4, pp. 8509-8522.
Huang, Y.-C., Cheng, Y.-T., Chen, L.-L. and Hsu, J.Y. (2019), “Human-AI co-learning for data-driven AI”, ArXiv Preprint ArXiv:1910.12544, doi: 10.48550/arXiv.1910.12544.
Jarrahi, M.H. (2018), “Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making”, Business Horizons, Vol. 61 No. 4, pp. 577-586.
Jia, S., Li, Z., Chen, N. and Zhang, J. (2022), “Towards visual explainable active learning for zero-shot classification”, IEEE Transactions on Visualization and Computer Graphics, Vol. 28 No. 1, pp. 791-801.
Kalimeri, K. and Tjostheim, I. (2020), “Artificial intelligence and concerns about the future: a case study in Norway”, in Streitz, N. and Konomi, S. (Eds), Distributed, Ambient and Pervasive Interactions: 8th International Conference, Springer, Cham, pp. 273-284.
Kaminskas, M. and Ricci, F. (2012), “Contextual music information retrieval and recommendation: state of the art and challenges”, Computer Science Review, Vol. 6 Nos 2-3, pp. 89-119.
Kaplan, A. and Haenlein, M. (2019), “Siri, Siri, in my hand: who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence”, Business Horizons, Vol. 62 No. 1, pp. 15-25.
Kaur, D., Uslu, S. and Durresi, A. (2021), “Requirements for trustworthy artificial intelligence – a review”, in Barolli, L., Li, K.F., Enokido, T. and Takizawa, M. (Eds), International Conference on Network-Based Information Systems, Springer, Cham, Vol. 1264, pp. 105-115.
Knop, M., Weber, S., Mueller, M. and Niehaves, B. (2022), “Human factors and technological characteristics influencing the interaction of medical professionals with artificial intelligence-enabled clinical decision support systems: literature review”, JMIR Human Factors, Vol. 9 No. 1, e28639.
Korteling, J.E.H., van de Boer-Visschedijk, G.C., Blankendaal, R.A., Boonekamp, R.C. and Eikelboom, A.R. (2021), “Human- versus artificial intelligence”, Frontiers in Artificial Intelligence, Vol. 4, pp. 1-13.
Lebovitz, S., Lifshitz-Assaf, H. and Levina, N. (2022), “To engage or not to engage with AI for critical judgments: how professionals deal with opacity when using AI for medical diagnosis”, Organization Science, Vol. 33 No. 1, pp. 126-148.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A. and Vinck, P. (2018), “Fair, transparent, and accountable algorithmic decision-making processes”, Philosophy and Technology, Vol. 31, pp. 611-627.
Lim, B.Y., Dey, A.K. and Avrahami, D. (2009), “Why and why not explanations improve the intelligibility of context-aware intelligent systems”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, pp. 2119-2128.
Liu, D., Tong, C., Liu, Y., Yuan, Y. and Ju, C. (2016), “Examining the adoption and continuous usage of context-aware services: an empirical study on the use of an intelligent tourist guide”, Information Development, Vol. 32 No. 3, pp. 608-621.
Liu, M., Lv, W., Yin, B., Ge, Y. and Wei, W. (2022), “The human-AI scoring system: a new method for CT-based assessment of COVID-19 severity”, Technology and Health Care, Vol. 30 No. 1, pp. 1-10.
Longoni, C. and Cian, L. (2022), “Artificial intelligence in utilitarian vs hedonic contexts: the ‘word-of-machine’ effect”, Journal of Marketing, Vol. 86 No. 1, pp. 91-108.
Loske, D. and Klumpp, M. (2021), “Human-AI collaboration in route planning: an empirical efficiency-based analysis in retail logistics”, International Journal of Production Economics, Vol. 241, 108236.
Luo, C., Goncalves, J., Velloso, E. and Kostakos, V. (2020), “A survey of context simulation for testing mobile context-aware applications”, ACM Computing Surveys, Vol. 53 No. 1, pp. 1-39.
Maadi, M., Akbarzadeh Khorshidi, H. and Aickelin, U. (2021), “A review on human-AI interaction in machine learning and insights for medical applications”, International Journal of Environmental Research and Public Health, Vol. 18 No. 4, p. 2121.
McDermott, P., Dominguez, C., Kasdaglis, N., Ryan, M. and Nelson, A. (2018), “Human-machine teaming systems engineering guide”, MITRE Corporation, available at: https://www.mitre.org/publications/technical-papers/human-machine-teaming-systems-engineering-guide
McNeese, N.J., Demir, M., Chiou, E.K. and Cooke, N.J. (2021), “Trust and team performance in human–autonomy teaming”, International Journal of Electronic Commerce, Vol. 25 No. 1, pp. 51-72.
Metcalf, L., Askay, D.A. and Rosenberg, L.B. (2019), “Keeping humans in the loop: pooling knowledge through artificial swarm intelligence to improve business decision making”, California Management Review, Vol. 61 No. 4, pp. 84-109.
Miller, S. (2018), “AI: augmentation, more so than automation”, Asian Management Insights, Vol. 5 No. 1, pp. 1-20.
Mirbabaie, M., Brünker, F., Möllmann Frick, N.R.J. and Stieglitz, S. (2022), “The rise of artificial intelligence – understanding the AI identity threat at the workplace”, Electronic Markets, Vol. 32 No. 1, pp. 73-99.
Mishra, M., Mannaru, P., Sidoti, D., Bienkowski, A., Zhang, L. and Pattipati, K.R. (2019), “Context-driven proactive decision support for hybrid teams”, AI Magazine, Vol. 40 No. 3, pp. 41-57.
Motti, V.G., Mezhoudi, N. and Vanderdonckt, J. (2012), “Machine learning in the support of context-aware adaptation”, Proceedings of the Workshop on Context-Aware Adaptation of Service Front-Ends, Pisa.
Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R. (2021), “Explainable recommendation: when design meets trust calibration”, World Wide Web, Vol. 24 No. 5, pp. 1857-1884.
Nakahashi, R. and Yamada, S. (2021), “Balancing performance and human autonomy with implicit guidance agent”, Frontiers in Artificial Intelligence, Vol. 4, 736321.
Newton, O.B., Saadat, S., Song, J., Fiore, S.M. and Sukthankar, G. (2022), “EveryBOTy counts: examining human-machine teams in open source software development”, Topics in Cognitive Science, pp. 1-35 (In press).
Oh, C., Song, J., Choi, J., Kim, S., Lee, S. and Suh, B. (2018), “I lead, you help but only with enough details: understanding the user experience of co-creation with artificial intelligence”, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, pp. 1-13.
Okamura, K. and Yamada, S. (2020), “Adaptive trust calibration for human-AI collaboration”, PLoS ONE, Vol. 15 No. 2, pp. 1-20.
Pandey, R., Purohit, H., Castillo, C. and Shalin, V.L. (2022), “Modeling and mitigating human annotation errors to design efficient stream processing systems with human-in-the-loop machine learning”, International Journal of Human Computer Studies, Vol. 160, 102772.
Panniello, U., Gorgoglione, M. and Tuzhilin, A. (2016), “CARSs we trust: how context-aware recommendations affect customers’ trust and other business performance measures of recommender systems”, Information Systems Research, Vol. 27 No. 1, pp. 182-196.
Peeters, M.M.M., van Diggelen, J., van den Bosch, K., Bronkhorst, A., Neerincx, M.A., Schraagen, J.M. and Raaijmakers, S. (2021), “Hybrid collective intelligence in a human–AI society”, AI and Society, Vol. 36 No. 1, pp. 217-238.
Perera, C., Zaslavsky, A., Christen, P. and Georgakopoulos, D. (2014), “Context aware computing for the internet of things: a survey”, IEEE Communications Surveys and Tutorials, Vol. 16 No. 1, pp. 414-454.
Pu, P., Chen, L. and Hu, R. (2012), “Evaluating recommender systems from the user's perspective: survey of the state of the art”, User Modeling and User-Adapted Interaction, Vol. 22 Nos 4/5, pp. 317-355.
Puranam, P. (2021), “Human–AI collaborative decision-making as an organization design problem”, Journal of Organization Design, Vol. 10 No. 2, pp. 75-80.
Pynadath, D.V., Barnes, M.J., Wang, N. and Chen, J.Y.C. (2018), “Transparency communication for machine learning in human-automation interaction”, in Zhou, J. and Chen, F. (Eds), Human and Machine Learning, Springer International Publishing, pp. 75-90.
Rai, A., Constantinides, P. and Sarker, S. (2019), “Editor's comments: next-generation digital platforms: toward human-AI hybrids”, MIS Quarterly, Vol. 43 No. 1, pp. iii-ix.
Raisch, S. and Krakowski, S. (2021), “Artificial intelligence and management: the automation–augmentation paradox”, Academy of Management Review, Vol. 46 No. 1, pp. 192-210.
Rajpurkar, P., Chen, E., Banerjee, O. and Topol, E.J. (2022), “AI in health and medicine”, Nature Medicine, Vol. 28 No. 1, pp. 31-38.
Ramchurn, S.D., Stein, S. and Jennings, N.R. (2021), “Trustworthy human-AI partnerships”, iScience, Vol. 24 No. 8, 102891.
Robert, L.P., Pierce, C., Marquis, L., Kim, S. and Alahmad, R. (2020), “Designing fair AI for managing employees in organizations: a review, critique, and design agenda”, Human-Computer Interaction, Vol. 35 Nos 5-6, pp. 545-575.
Sarker, I.H. (2022), “AI-based modeling: techniques, applications and research issues towards automation, intelligent and smart systems”, SN Computer Science, Vol. 3 No. 2, pp. 1-20.
Sarker, I.H., Abushark, Y.B. and Khan, A.I. (2020), “ContextPCA: predicting context-aware smartphone apps usage based on machine learning techniques”, Symmetry, Vol. 12 No. 4, p. 499.
Schoonderwoerd, T.A.J., van Zoelen, E.M., van den Bosch, K. and Neerincx, M.A. (2022), “Design patterns for human-AI co-learning: a wizard-of-Oz evaluation in an urban-search-and-rescue task”, International Journal of Human Computer Studies, Vol. 164, 102831.
van Setten, M., Pokraev, S. and Koolwaaij, J. (2004), “Context-aware recommendations in the mobile tourist application COMPASS”, in De Bra, P.M.E. and Nejdl, W. (Eds), International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, Springer, Berlin, Heidelberg, pp. 235-244.
Seymour, M., Yuan, L., Dennis, A.R. and Riemer, K. (2021), “Have we crossed the uncanny valley? Understanding affinity, trustworthiness, and preference for realistic digital humans in immersive environments”, Journal of the Association for Information Systems, Vol. 22 No. 3, pp. 591-617.
Sezer, O.B., Dogdu, E. and Ozbayoglu, A.M. (2018), “Context-aware computing, learning, and big data in internet of things: a survey”, IEEE Internet of Things Journal, Vol. 5 No. 1, pp. 1-27.
Shin, D. (2021), “The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI”, International Journal of Human-Computer Studies, Vol. 146, 102551.
Shrestha, Y.R., Ben-Menahem, S.M. and von Krogh, G. (2019), “Organizational decision-making structures in the age of artificial intelligence”, California Management Review, Vol. 61 No. 4, pp. 66-83.
Siemon, D. (2022), “Let the computer evaluate your idea: evaluation apprehension in human-computer collaboration”, Behaviour and Information Technology, pp. 1-19 (In press).
Silva de Oliveira, C., Sanin, C. and Szczerbicki, E. (2022), “Smart knowledge engineering for cognitive systems: a brief overview”, Cybernetics and Systems, Vol. 53 No. 5, pp. 384-402.
Sowa, K., Przegalinska, A. and Ciechanowski, L. (2021), “Cobots in knowledge work: human–AI collaboration in managerial professions”, Journal of Business Research, Vol. 125, pp. 135-142.
Steyvers, M., Tejeda, H., Kerrigan, G. and Smyth, P. (2022), “Bayesian modeling of human-AI complementarity”, Proceedings of the National Academy of Sciences of the United States of America, Vol. 119 No. 11, e2111547119.
Strobelt, H., Kinley, J., Krueger, R., Beyer, J., Pfister, H. and Rush, A.M. (2022), “GenNI: human-AI collaboration for data-backed text generation”, IEEE Transactions on Visualization and Computer Graphics, Vol. 28 No. 1, pp. 1106-1116.
Sturm, T., Gerlach, J.P., Pumplun, L., Mesbah, N., Peters, F., Tauchert, C., Nan, N. and Buxmann, P. (2021), “Coordinating human and machine learning for effective organizational learning”, MIS Quarterly, Vol. 45 No. 3, pp. 1581-1602.
Sundar, S.S. (2020), “Rise of machine agency: a framework for studying the psychology of human–AI interaction (HAII)”, Journal of Computer-Mediated Communication, Vol. 25 No. 1, pp. 74-88.
Terziyan, V., Gryshko, S. and Golovianko, M. (2018), “Patented intelligence: cloning human decision models for Industry 4.0”, Journal of Manufacturing Systems, Vol. 48, pp. 204-217.
Tóth, Z., Caruana, R., Gruber, T. and Loebbecke, C. (2022), “The dawn of the AI robots: towards a new framework of AI robot accountability”, Journal of Business Ethics, Vol. 178, pp. 895-916.
Unger, M., Tuzhilin, A. and Livne, A. (2020), “Context-aware recommendations based on deep learning frameworks”, ACM Transactions on Management Information Systems, Vol. 11 No. 2, pp. 1-15.
van den Broek, E., Sergeeva, A. and Huysman, M. (2021), “When the machine meets the expert: an ethnography of developing AI for hiring”, MIS Quarterly, Vol. 45 No. 3, pp. 1557-1580.
Veitch, E. and Alsos, O.A. (2022), “A systematic review of human-AI interaction in autonomous ship systems”, Safety Science, Vol. 152, 105778.
Verbert, K., Manouselis, N., Ochoa, X., Wolpers, M., Drachsler, H., Bosnic, I. and Duval, E. (2012), “Context-aware recommender systems for learning: a survey and future challenges”, IEEE Transactions on Learning Technologies, Vol. 5 No. 4, pp. 318-335.
Wei, Y., Lu, W., Cheng, Q., Jiang, T. and Liu, S. (2022), “How humans obtain information from AI: categorizing user messages in human-AI collaborative conversations”, Information Processing and Management, Vol. 59 No. 2, 102838.
Woo, W.L. (2020), “Future trends in I&M: human-machine co-creation in the rise of AI”, IEEE Instrumentation and Measurement Magazine, Vol. 23 No. 2, pp. 71-73.
Yang, Q., Steinfeld, A., Rosé, C. and Zimmerman, J. (2020), “Re-examining whether, why, and how human-AI interaction is uniquely difficult to design”, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, pp. 1-13.
Yu, L. and Li, Y. (2022), “Artificial intelligence decision-making transparency and employees' trust: the parallel multiple mediating effect of effectiveness and discomfort”, Behavioral Sciences, Vol. 12 No. 5, p. 127.
Yurur, O., Liu, C.H., Sheng, Z., Leung, V.C.M., Moreno, W. and Leung, K.K. (2016), “Context-awareness for mobile sensing: a survey and future directions”, IEEE Communications Surveys and Tutorials, Vol. 18 No. 1, pp. 68-93.
Zerilli, J., Bhatt, U. and Weller, A. (2022), “How transparency modulates trust in artificial intelligence”, Patterns, Vol. 3 No. 4, 100455.