Human-like communication in conversational agents: a literature review and research agenda

Michelle M.E. Van Pinxteren (Department of International Relationship Management, Zuyd University of Applied Sciences, Maastricht, The Netherlands)
Mark Pluymaekers (Department of International Relationship Management, Zuyd University of Applied Sciences, Maastricht, The Netherlands)
Jos G.A.M. Lemmink (Department of Marketing and Supply Chain Management, Maastricht University School of Business and Economics, Maastricht, The Netherlands)

Journal of Service Management

ISSN: 1757-5818

Article publication date: 18 June 2020

Issue publication date: 24 September 2020

Abstract

Purpose

Conversational agents (chatbots, avatars and robots) are increasingly substituting human employees in service encounters. Their presence offers many potential benefits, but customers are reluctant to engage with them. A possible explanation is that conversational agents do not make optimal use of communicative behaviors that enhance relational outcomes. The purpose of this paper is to identify which human-like communicative behaviors used by conversational agents have positive effects on relational outcomes and which additional behaviors could be investigated in future research.

Design/methodology/approach

This paper presents a systematic review of 61 articles that investigated the effects of communicative behaviors used by conversational agents on relational outcomes. A taxonomy is created of all behaviors investigated in these studies, and a research agenda is constructed on the basis of an analysis of their effects and a comparison with the literature on human-to-human service encounters.

Findings

The communicative behaviors can be classified along two dimensions: modality (verbal, nonverbal, appearance) and footing (similarity, responsiveness). Regarding the research agenda, it is noteworthy that some categories of behaviors show mixed results and some behaviors that are effective in human-to-human interactions have not yet been investigated in conversational agents.

Practical implications

By identifying potentially effective communicative behaviors in conversational agents, this study assists managers in optimizing encounters between conversational agents and customers.

Originality/value

This is the first study that develops a taxonomy of communicative behaviors in conversational agents and uses it to identify avenues for future research.

Citation

Van Pinxteren, M.M.E., Pluymaekers, M. and Lemmink, J.G.A.M. (2020), "Human-like communication in conversational agents: a literature review and research agenda", Journal of Service Management, Vol. 31 No. 2, pp. 203-225. https://doi.org/10.1108/JOSM-06-2019-0175

Publisher

Emerald Publishing Limited

Copyright © 2020, Emerald Publishing Limited

License

Licensed re-use rights only


Introduction

The current service industry is gradually evolving to become technology-driven rather than human-driven. Due to developments in artificial intelligence (AI) and information and communication technology (ICT), technology is integrated into service encounters in many shapes and forms (Larivière et al., 2017; De Keyser et al., 2019; Wirtz et al., 2018). Consider, for example, self-service technologies such as self-checkout counters or mobile apps. Since these technologies encourage customers to produce service outcomes independently of a human service employee (Meuter et al., 2005), they promise to bring benefits to both customers and service providers (Rust and Huang, 2014). But while developers are starting to overcome technological barriers, psychological barriers on the customer side are becoming apparent (Åkesson et al., 2014; Lian, 2018). Customers need time to get acquainted with these new forms of service, which are often experienced as impersonal and lacking human touch (Dabholkar et al., 2003; Makarem et al., 2009).

To overcome these obstacles, conversational agents are increasingly deployed in service encounters (Bolton et al., 2018; De Keyser et al., 2019). Conversational agents are “systems that mimic human conversation” using communication channels such as speech and text, but also facial expressions and gestures (Laranjo et al., 2018, p. 1248; Radziwill and Benton, 2017). Conversational agents roughly fall into three categories: chatbots without embodiment, virtually embodied avatars and physically embodied robots. The deployment of conversational agents in service encounters is growing exponentially in sectors such as hospitality, banking, entertainment and health care and also shows a gradual increase in other industries (Botanalytics, 2018; Lester et al., 2004). Examples include chatbots teaching languages in schools (Fryer and Carpenter, 2006), avatars recommending products in e-commerce (Qiu and Benbasat, 2010) and robots assisting the elderly in health care (Čaić et al., 2018). Despite their technological progress and potential added value for social presence in automated service encounters, in reality, conversational agents hardly seem to foster relationships (Marinova et al., 2017). This lack of success may be due to the fact that conversational agents do not yet make optimal use of the communicative behaviors that humans use to enhance relational outcomes. Indeed, several authors have suggested that to be utilized to their full potential, conversational agents should communicate more like humans (Fink, 2012; Wang et al., 2007).

In recent years, many researchers have investigated how relational outcomes are affected by the implementation of human-like communicative behaviors (e.g. the use of body movements, humor or communication style) in conversational agents (Groom et al., 2009; Keeling et al., 2010; Niculescu and Banchs, 2019). However, this research is scattered across disciplines (e.g., AI, psychology, marketing, computer science and communication science), and a clear overview of investigated behaviors and their effects is currently missing (Cowell and Stanney, 2005; van Doorn et al., 2017). This is problematic, both for service managers who would like to optimize interactions between conversational agents and customers and for academics who would like to do research in this area and identify promising avenues for future investigations.

Therefore, the current study first creates an overview of human-like communicative behaviors that have already been investigated in conversational agents and of their effects. Second, a research agenda is constructed that points to potentially effective communicative behaviors that have not yet been explored, as well as to remarkable findings in earlier studies that require further investigation. To this end, we first conduct a systematic literature review across different disciplines to identify which communicative behaviors have already been investigated in conversational agents with the goal of enhancing relational outcomes. Subsequently, we create a taxonomy of these behaviors using open and axial coding, so that in the next step, we can analyze which categories of behaviors have been investigated most frequently and how the effects on relational outcomes differ per category. For categories that are relatively underresearched, we search the literature on human-to-human (H2H) service encounters for potentially effective communicative behaviors that have thus far been overlooked in the human-to-machine (H2M) literature. By following these steps, we aim to provide insight into which communicative behaviors used by conversational agents have positive effects on relational outcomes (Brandtzaeg and Følstad, 2017) and which additional behaviors or variables could be investigated in future research (Cassell, 2000).

Conceptual background

In this section, we will first explain how communicative behaviors affect relational mediators including trust, rapport and liking in H2H service encounters. Subsequently, we will discuss why communicative behaviors are theorized to have similar effects in service encounters between humans and machines, such as conversational agents (H2M). Finally, we will raise the issue that despite their ability to display communicative behaviors, conversational agents currently do not optimally establish relationships with customers, which provides the rationale for the current study.

Communicative behaviors in H2H service encounters

Numerous studies in the field of relationship marketing have recognized the importance of communication to the overall success of the service provider (Lin and Lin, 2017; Palmatier et al., 2006). Service encounters typically comprise some form of communication during which the customer infers a perception of the service employee (e.g. competence, friendliness) from verbal and nonverbal communicative behaviors that the employee displays, such as the use of gestures and expressions (Specht et al., 2007; Sundaram and Webster, 2000). These perceptions affect relational mediators such as trust and rapport, which in turn affect intention to use, word of mouth, loyalty and cooperation (see Figure 1; Hennig-Thurau et al., 2002; Palmatier et al., 2006). According to communication accommodation theory (Giles et al., 1991), efforts from the service employee to communicatively adapt to the customer's needs contribute to a more positive perception of the service employee. Therefore, service employees strategically utilize communicative behaviors to actively steer customers' perceptions in the desired direction (Cronin and Taylor, 1994; Gremler and Gwinner, 2008). For example, customer service agents using the pronoun “I” (first-person singular) instead of “you” (second-person singular) or “we” (first-person plural) are perceived as more empathetic by customers (Packard et al., 2014). Perceived empathy, in turn, positively affects the quality of the relationship between the customer and the service provider, which is important as strong relationships drive intention to use the service again, word of mouth, loyalty and cooperation (Crosby et al., 1990; Palmatier et al., 2006).

In both service management and marketing literature, relationship quality is theorized to mediate the relationship between customer perceptions and relational outcomes (Moliner, 2009; Palmatier et al., 2006). Relationship quality is an important multidimensional variable made up of key determinants that reflect the overall nature of an exchange relationship between customer and service provider (Hennig-Thurau et al., 2002). However, researchers disagree on which ones best capture the construct (see Vieira et al. (2008) for a comprehensive overview). For example, Morgan and Hunt (1994) propose that commitment (a combination of liking and emotional closeness) together with trust best captures relationship quality, yet others suggest it is either trust (Sirdeshmukh et al., 2002) or commitment (Anderson and Weitz, 1992). Furthermore, some authors argue that a combination of commitment, trust and relationship satisfaction (Wulf et al., 2001) or rapport (Gremler and Gwinner, 2000) provides a more suitable definition. The main focus of this study will be on communicative behaviors that improve the relationship quality between conversational agents and customers. Therefore, the operationalization of relationship quality in this study includes all these determinants, which from now on will be called relational mediators: “trust,” “commitment,” “rapport,” “satisfaction,” “liking” and “emotional closeness” (see Figure 1).

Communicative behaviors in H2M service encounters

Implementing human-like communicative behaviors in conversational agents builds upon the “Computers Are Social Actors” or “CASA” paradigm (Nass et al., 1994; Nass and Reeves, 1996). In a series of studies, several authors have demonstrated that humans tend to attribute essential human capacities and traits, such as personality, feelings or rational thought, to machines (e.g. Bickmore and Cassell, 2001). As a result, machines are treated as social interaction partners, able to engage in meaningful interaction. This tendency is called anthropomorphism and is “a process of inductive inference” by which humans try to rationalize and predict machines' behavior (Epley et al., 2007).

When humans apply human attributes and traits to nonhuman agents, this also steers their perceptions of these agents. For example, Nass et al. (1995) found that computers using strong language are perceived as having a dominant personality. Holtgraves et al. (2007) demonstrated that computers using the first name of the interlocutor as a form of politeness are perceived as more skilled. In addition, Tzeng (2004) revealed that computers apologizing for their mistakes are perceived as more sensitive and less mechanical. Together with anthropomorphism theory, the CASA paradigm has fueled the development of conversational agents (chatbots, avatars, robots) that utilize the same myriad of communicative behaviors that humans use to establish relationships (Fink, 2012). Examples are chatbots that engage in social praise (Kaptein et al., 2011), avatars mimicking head and torso movements (Hale and Hamilton, 2016) and nodding robots (Broadbent et al., 2013). Despite these efforts, conversational agents do not seem to establish satisfactory relationships with customers, which raises several questions (Everett et al., 2017; Morgan, 2017; Polani, 2017).

First, it has been questioned whether all of these communicative behaviors work as intended when used by conversational agents (Brandtzaeg and Følstad, 2017). Second, multiple authors have voiced the need to broaden the theoretical lens on relationships with conversational agents in a service context (van Doorn et al., 2017; Marinova et al., 2017; Wirtz et al., 2018). Although communicative behaviors have been acknowledged to play an important role in these relationships (Bickmore and Cassell, 2001), a clear research agenda outlining both current research and avenues for future research is lacking.

Methodology and results

The first step in the current study consisted of gathering relevant studies on the use of communicative behaviors by conversational agents and its effects on relational mediators and outcomes. The search process is described in the next section and visualized in Figure 2.

The search process and the sample

An initial search in Google Scholar and Web of Science was conducted using a combination of search terms that included terminology for different types of conversational agents, communicative behaviors and relational mediators. Due to the lack of consensus regarding the definitions of conversational agents, multiple terms for these agents were gathered in the literature and included (see Figure 2; McTear et al., 2016; Radziwill and Benton, 2017). Regarding the communicative behaviors, the terms “communication technique,” “communication strategy” and “interface” were included as well because communicative behaviors are commonly referred to as “strategies,” “techniques” or “interfaces” in the literature on robotics and user interface design (Duffy, 2003; Torrey et al., 2013). Lastly, the search included terms for the relational mediators as specified in Figure 2. Retrieved articles were screened and added to the data set if they investigated the effects of one or more communicative behavior(s) on one or more relational mediator(s) in the context of chatbots, avatars, robots or a combination of agent types. An example of an article that was excluded is Cassell and Bickmore (2000), as it did not report the effects of communicative behavior(s) on relational mediators but rather described how multiple behaviors can be integrated. Grey literature identified in this search, including dissertations, theses and conference proceedings, was also included for screening.

Due to the novelty of the topic, the initial search resulted in a small number of articles. Therefore, backward snowball sampling was used to gather more relevant studies (Lecy and Beatty, 2012). The reference lists of the articles obtained in the search were carefully inspected, and articles that met the criteria were included. This was an iterative process: reference lists of articles obtained through snowballing were also inspected until saturation was reached. This approach has several advantages. First, reviews that rely solely on keyword searches are often hampered by cognitive biases because they are limited to the most common keywords in a particular area (Vieira et al., 2008). In contrast, an analysis of reference lists offers a more comprehensive and objective approach to mapping the literature on a specific topic.

The obtained data set is visualized in Figure 3. The relatively high number of conference papers emphasizes the novelty of this topic across various fields. All studies were published between 1998 and 2018, with four studies published between 1998 and 2003, 17 between 2003 and 2008, 17 between 2008 and 2013 and 23 between 2013 and 2018.

A taxonomy of communicative behaviors investigated in conversational agents

Next, an inventory of the independent, mediating and dependent variables was constructed. On average, the experimental studies investigated the effect of two independent variables on four dependent variables. Regarding mediation effects, 12 out of the 54 experimental studies tested a mediated relationship. Commonly investigated mediating variables included anthropomorphism (Waytz et al., 2014), similarity (Vugt et al., 2010) and social presence (Lee et al., 2006a; Lee et al., 2006b). Of the experimental studies, 26 out of the 54 investigated a moderation effect. The most common variables included as moderators were gender (e.g. Kaptein et al., 2011; Siegel et al., 2009), personality (e.g. Cassell and Bickmore, 2000; Lee et al., 2006b) or the interaction between communicative behaviors (e.g. Kanda et al., 2007). Some less common moderators included loneliness (Lee et al., 2006a) and task difficulty (Stanton and Stevens, 2014).

After creating the aforementioned inventory of all investigated communicative behaviors, we used a combination of open and axial coding to establish whether categories of behaviors could be distinguished. Following grounded theory (Wolfswinkel et al., 2013), the independent variable(s) of each study was (were) labeled by the researcher using open coding. This entailed that each independent variable received a label describing its main theme on a more abstract level. For example, the label “etiquette” was assigned to the independent variable in the study by Parasuraman and Miller (2004), who manipulated a robot to be either interruptive or impatient. If a study examined multiple communicative behaviors that were different in nature, multiple labels were generated, since grounded theory prescribes that literature should be analyzed per theme, not per study (Wolfswinkel et al., 2013). For example, a study by Kaptein et al. (2011) manipulated both statements of positive feedback and mimicry of response time in a chatbot. Although humans use both of these communicative behaviors to come across as more socially intelligent, they tap into different themes and thus received the separate labels “social praise” and “response time.”

In order to identify categories of labels, axial coding was applied. More specifically, all labels were compared on similarities and differences and subsequently categorized by two separate researchers. Small differences in axial coding were discussed until agreement was reached. Some studies fell under multiple categories [1]; in those cases, they received labels in multiple categories. The axial coding resulted in a taxonomy discerning nine categories in which all studies but four could be placed (see Table 1). The colored dots behind each label indicate which type of conversational agent was investigated in the respective study. As can be seen in Table 1, the two axes that emerged were “modality” and “footing.” These two axes will be explained in more detail below.

Modality

The modality axis in the taxonomy classifies the nature of the communicative behavior(s) under investigation and distinguishes three categories, namely verbal behavior, nonverbal behavior and appearance characteristics. In research on H2H communication, modality is considered a particular mode in which communication is expressed (e.g. gestures, eye gaze) and can be used to convey a message or show social conventions (Cassell, 2001). In service management research, it is common to distinguish between two categories of modality, namely verbal versus nonverbal behavior, in which verbal behavior involves written and spoken language, whereas nonverbal behavior does not (vocal sounds that are not words, such as sighs, are considered nonverbal) (Holmqvist et al., 2017; Sundaram and Webster, 2000). However, in research on H2M interaction, it is more common to make a distinction between verbal behavior, nonverbal behavior and appearance characteristics, as appearance characteristics can be fully changed in conversational agents, whereas humans can only change them partly. Moreover, a study by Bergmann et al. (2012) showed that appearance characteristics, such as human-like appearance (vs cartoonish appearance), have different effects than nonverbal behaviors, such as gestures. Therefore, this distinction was explicitly included in our taxonomy.

Footing

Footing describes the grounds on which communicative behaviors aim to establish relationships. According to Goffman (1979), the way humans interpret the world is determined by the mental structures or “frames” used. These frames provide the context of how situations or phenomena are interpreted (e.g. gain or loss frame). For a service encounter, this means that both the customer and the service provider have their own interpretation of the encounter. In social encounters, humans present their mental frames to each other by means of communicative choices, for example, by using particularly positive or negative words. The alignment of mental frames between the customer and the service provider is what Goffman (1979) calls footing (see also Giles, 2016). This alignment is important because living up to the customers' expectations greatly shapes the perceived quality of the service and the relationship between the customer and service provider (Cronin and Taylor, 1994). During the axial coding, we found that conversational agents employ three broad categories of communicative behaviors to align with the user. These three categories (human similarity, individual similarity and responsiveness) will now be discussed in more detail.

First, similarity includes all communicative behaviors that aim to make the customer feel more similar to the agent. In our taxonomy, we discern two types of similarity, as some behaviors try to achieve similarity to humans in general, for example, mimicking a human face (Broadbent et al., 2013), while others focus on similarity to the individual user, for example, mimicking the user's face (Vugt et al., 2010). The idea that similarity between a conversational agent and a customer benefits relational outcomes is explained by theories from social psychology. The similarity attraction theory (Byrne, 1997), the social identity theory (Tajfel, 1974) and the self-categorization theory (Turner and Reynolds, 2011) combine to support the idea that people are attracted to, prefer and support relationships with similar others. In the field of robotics, however, the literature suggests that although similarity to humans is an important driver of relational outcomes, too much resemblance can hinder these outcomes. For example, the uncanny valley theory (Mori et al., 2012) argues that humans prefer machines similar to humans in appearance and functioning, until a certain tipping point. Therefore, the degree of both human similarity and individual similarity between a conversational agent and a customer should be chosen with caution.

Responsiveness was the third footing category identified. Responsiveness is defined as “behaving in a sensitive manner that is supportive of another person's needs” (Hoffman et al., 2014, p. 1) and includes behaviors such as partner affirmation, communal sharing and social support (Reis, 2007). Perceived responsiveness refers to “a sense of felt understanding, validation and caring” (Reis, 2007, p. 78). In service encounters, responsiveness can be enhanced by behaviors that signal that the service employee listens and understands the customer, like nodding, expressions of concern and emotion, and asking questions (Maisel et al., 2008). Humans vary greatly in their expression of responsive behaviors, suggesting that responsiveness is not an innate human trait, as opposed to, for example, memory or movement (Maisel et al., 2008). The idea that responsiveness between a service provider and a customer benefits service outcomes is also supported by research in social psychology. In a relationship context, these behaviors have been found to follow from and foster relationship well-being, but also to decrease sadness and anxiety (Canevello and Crocker, 2010; Maisel et al., 2008). In H2H service encounters, responsive behaviors such as listening attentively (de Ruyter and Wetzels, 2000) or asking questions (Gremler and Gwinner, 2008) have been found to correlate positively with service satisfaction. In conversational agents, such responsive behavior can also be integrated into the interface. For example, Parasuraman and Miller (2004) have manipulated etiquette in a chatbot by changing its tendency to interrupt during a conversation, whereas Keeling et al. (2010) have built an avatar using a socially oriented communication style.

An analysis of significant effects per category

In order to investigate which communicative behaviors yield positive effects when used by conversational agents, the effects of the communicative behaviors in each category of the taxonomy were carefully inspected and noticeable differences and similarities were identified. Categories that contained many positive effects were colored light green in Table 1, indicating that communicative behaviors from these categories yield promising results. Categories characterized by mixed results were colored light yellow. The behaviors in these categories yield some significant results on relational mediators and outcomes, but the effects are more complex than initially suggested. Categories with fewer than five studies were considered too empty to draw conclusions. We will discuss the green cells first, followed by the yellow cells.

Categories with mainly positive effects

All in all, there were three categories of behaviors that showed predominantly positive effects on relational outcomes: appearance characteristics grounded in human similarity, appearance characteristics grounded in individual similarity and nonverbal behaviors grounded in responsiveness.

Generally speaking, appearance characteristics grounded in human similarity exert positive effects on relational mediators and outcomes (Broadbent et al., 2013; Nowak and Rauh, 2005), particularly when the conversational agent is physically present or even touchable (Bainbridge et al., 2008; Lee et al., 2006a). Robots thus seem to have an advantage over avatars and chatbots thanks to their physical embodiment. However, if appearance becomes too human-like, users experience an eerie sensation, which can cause them to dislike the conversational agent (Bartneck et al., 2007). Furthermore, multiple studies showed that positive effects were only achieved if users' expectations of the robot's behavior, evoked by its human-like appearance, were met (Luo et al., 2006; McBreen and Jack, 2001).

Similarly, appearance characteristics grounded in individual similarity have mainly positive effects on relational mediators (e.g. Paiva et al., 2005; Qiu and Benbasat, 2010). Again, these effects were found to be stronger when users experienced feelings of social presence (Qiu and Benbasat, 2010) and identification with the conversational agent (Kim et al., 2012). Although ethnic appearance features that were similar to the individual user did yield significant positive effects, the effects found for gender were mixed, with some studies reporting a preference for agents of the same gender and others reporting the opposite (Qiu and Benbasat, 2010; Nowak and Rauh, 2005). A possible explanation for this contradiction is provided by Powers et al. (2005), who argue that personas play an important role in interactions with conversational agents. Their study showed that women used fewer words to explain dating norms for females to a female robot than to a male robot. This suggests that the gender of the robot, as inferred from its appearance, activates a persona, which in turn affects user perceptions.

Finally, the category nonverbal behavior grounded in responsiveness, which included behaviors such as gaze and nodding, was also found to yield significant positive effects (Gratch et al., 2007; Kaptein et al., 2011). For example, Kanda et al. (2007) showed that a robot performing behaviors that signal active listening, including nodding and making eye contact, was evaluated more positively than its nonlistening counterpart. Other nonverbal forms of active listening, including prompt response time, were found to affect user evaluations of intelligence, friendliness and liking, though only for female users (Kaptein et al., 2011). Parasuraman and Miller (2004) even found that good etiquette, manipulated as not interrupting the interlocutor, could overcome the negative effects caused by the otherwise unreliable behavior of a chatbot.

For practitioners who wish to optimize interactions between conversational agents and customers, implementing communicative behaviors from one of the aforementioned categories appears to be a relatively safe and effective option. However, they should take heed of the uncanny valley effect and possible interactions between communicative behaviors and the gender of both the customer and the conversational agent. We will return to this in the section on managerial implications below.

Categories with mixed results

There were also three categories that showed mixed results: nonverbal behaviors grounded in human similarity, nonverbal behaviors grounded in individual similarity and verbal behaviors grounded in responsiveness.

Regarding the category nonverbal behaviors grounded in human similarity, effects on relational mediators and outcomes were largely dependent on other variables. For example, van den Brule et al. (2014) demonstrated that gestures performed by a robot enhanced user trust, however, only if the gestures were predictive of the robot's behavior. In addition, Cowell and Stanney (2005) found that users interacting with an avatar demonstrating trust-inducing nonverbal behaviors, such as eye contact and paralanguage, trusted the avatar more than users interacting with an avatar lacking these behaviors. However, there was no concordant increase in perceived trust if the avatar was able to change its posture and gestures over time. Krämer et al. (2007) also found positive effects of gestures: an avatar showing self-touching behaviors, such as scratching itself, was perceived more positively than its nongesturing counterpart.

An explanation of the mixed effects of different body movements is provided by Pejsa et al. (2015). According to this study, coordinated movements of the eyes and head enable an avatar to convey more information and establish a stronger affiliation with the user, whereas upper body movement enables the avatar to grab the user's attention and direct it through the environment. In line with this, Stanton and Stevens (2014) found that a robot demonstrating gaze movements increased users' trust in the robot, however, only when the robot helped to carry out a difficult task. This suggests that in contrast to facial movements, body movements do not serve relational purposes. Another explanation might lie in the congruency of the movements with other communicative behaviors. Salem et al. (2011) found that the effects of gesturing on liking and intentions to use the robot again were particularly pronounced if the robot's gestures were incongruent with its speech for half of the time. Similar results were obtained by Groom et al. (2009), who found that users liked an avatar more when it moved in sync with its speech at some times and out of sync at others. This seems to indicate that especially unexpected gestures induce positive perceptions of the conversational agent. However, implementing such features should be done cautiously, as incongruence in gestures was also found to hinder task performance (Salem et al., 2013). On a related note, von der Pütten et al. (2010) demonstrated the importance of implementing these features realistically in a given conversation. Features that do not make sense in the conversation at hand negatively impact users' evaluations of the agent.

For nonverbal behaviors grounded in similarity to the individual, the effects on relational mediators and outcomes were also mixed. For example, Siegel et al. (2009) manipulated a robot's gender through its voice and found that users rated a voice of the opposite sex as more credible, trustworthy and engaging than a voice of their own sex. This suggests that customers prefer a robot that is dissimilar to them in terms of gender. For personality, however, Lee et al. (2006b) found that similarity to the user in loudness and facial expressions increased enjoyment. Furthermore, when looking at movements in particular, results were mixed and dependent on the type of movements mimicked. For example, Bailenson and Yee (2005) found that users had a more positive perception of an avatar when it mimicked their head movements (pitch, yaw and roll), whereas a study by Hale and Hamilton (2016) found only weak effects of mimicry of the torso and head on rapport.

Finally, verbal behaviors grounded in responsiveness, such as affect support and social praise, had positive effects on relational mediators including liking, but only under particular circumstances (Bickmore and Cassell, 2001; Derrick and Ligon, 2014; Klein et al., 2002; Kulms et al., 2014). For example, Derrick and Ligon (2014) showed that male and female users differ in their preferences for social praise used by an avatar: whereas ingratiation techniques, such as self-presentation, increased liking of the avatar among men, self-promotion increased the attractiveness of the avatar among women. Furthermore, Strait et al. (2014) showed that a robot using polite speech is perceived as more considerate and likable, but only when evaluated by a bystander and not by the direct user. In addition, Cassell and Bickmore (2003) found that small talk is an important driver of trust in avatars, but only for extraverted users. Lastly, Lee et al. (2010) demonstrated that service recovery strategies were successful in reducing the negative impact of a robotic breakdown; however, users with a relational orientation responded best to an apology, while those with a utilitarian orientation responded best to compensation.

The use of humor was also placed under verbal responsiveness, as it too conveys empathy (Hampes, 2010). However, it is difficult to make clear statements about its effectiveness, as humor is very personal. In general, humor used by conversational agents had positive effects on relational mediators such as trust and liking (e.g. Mirnig et al., 2017). For example, Sjobergh and Araki (2009) showed that a joke was perceived as funnier when told by a robot than when users read it by themselves. Furthermore, Niculescu and Banchs (2019) found that a chatbot telling fun facts was liked more than its nonhumorous counterpart. Humor was also found to have negative effects under some conditions. For example, Tay et al. (2016) found that nondisparaging jokes are liked more when told by a human, whereas disparaging jokes are perceived as less disgusting when told by a robot. Humor, therefore, can help establish relationships, but certainly not all types of humor are appropriate for this purpose.

The categories showing mixed results are probably more interesting for academics than for practitioners, as they point to the existence of important moderators such as personal preferences or personality characteristics. More attention will be devoted to these behaviors when we present the research agenda in the section on theoretical implications below.

An analysis of potential additional behaviors from the H2H literature

The overview of communicative behaviors investigated in conversational agents (see Table 1) allowed us to conduct a targeted search of the literature on H2H service encounters for additional, thus far “overlooked” behaviors that could be investigated in H2M interactions. Considering the magnitude of the H2H literature, we restricted our search to review articles. We searched Google Scholar and Web of Science for review articles on the effects of communicative behaviors displayed by service employees on one of the relational mediators mentioned in Figure 2. The search yielded three review studies and one overview paper based on the critical incident technique.

The first article used for comparison was a review by Boles et al. (2000) on the communicative behaviors service employees use to build relationships with customers. The second was a review by Swan et al. (1999), who focused specifically on how service employees build trust relationships. Lastly, the third review, by Gremler and Gwinner (2000), and the overview article by Gremler and Gwinner (2008) both investigated how service employees establish rapport in H2H service encounters. The labels in Table 1 were compared to the communicative behaviors mentioned in these four articles, and potential additional behaviors were noted down. Below, we discuss which “overlooked” behaviors we identified. First, we do this for the categories that were relatively under-researched in the H2M literature (fewer than five labels in Table 1), with the exception of the category appearance characteristics grounded in responsiveness; for this particular category, no overlooked behaviors emerged from the H2H literature. Thereafter, we also provide some additional suggestions for the other categories.

Overlooked behaviors for categories containing fewer than five labels

First of all, only one article in our taxonomy belonged to the category verbal behaviors grounded in similarity to humans: Richards and Bransky's (2014) study on cognitive recall, in which recall was found to increase both the user's enjoyment of interacting with the agent and the believability of the character over multiple service encounters. Therefore, expressions that signal other human cognitive processes, such as thinking out loud (e.g. “let me think,” “I was thinking”) and processing (e.g. “please give me a second to process that”), might also be interesting to investigate in future research. This is in line with the idea put forward by Gremler and Gwinner (2008) that being attentive to the customer helps to build rapport.

Secondly, studies that aim to establish relationships through human-like appearance characteristics primarily seemed to focus on conversational agents having a human-like (instead of a cartoon-like) appearance (e.g. Parise et al., 1999; Bartneck et al., 2009) and, in particular, a human-like face (e.g. Broadbent et al., 2013). Although not many other appearance characteristics are mentioned in the H2H literature, Gremler and Gwinner (2008) do refer to a study by Wood (2006), who showed that appropriate attire can influence customer perceptions of expertise and thereby trust. Furthermore, Swan et al. (1999) mention the importance of appearance in coming across as competent and benevolent. Therefore, future research might investigate more specific and detailed appearance cues in conversational agents, such as attire, build or posture.

Third, the studies in the taxonomy that investigated nonverbal behaviors grounded in individual similarity focused particularly on head movements (Bailenson and Yee, 2005; Hale and Hamilton, 2016) and voice cues that signal similarity in terms of gender (Siegel et al., 2009) or personality (Lee et al., 2006b). However, Gremler and Gwinner (2008) mention various other nonverbal behaviors that, when mimicked, have been found to foster rapport with customers in service encounters. These behaviors include posture, speech rate, gestures, breathing patterns and facial expressions.

Finally, studies looking at behaviors that establish relationships through individual similarity in verbal communication were scarce in the taxonomy. Only communication style (Li and Mao, 2015) and personality (introvert or extravert voice) (Lee et al., 2006b) were investigated in this category. Although these variables seemed to increase engagement and enjoyment, individual verbal language use can be mimicked in many more ways (Gremler and Gwinner, 2008). Consider, for example, “linguistic mimicry,” or mimicry of individual word usage, which has been found to help develop positive relationships between customers and salespersons. A related but distinct concept called “linguistic style matching” (the use of certain types of function words in similar frequencies) has also been found to play an important role in relationship initiation and stability (Ireland et al., 2011). Furthermore, Gremler and Gwinner (2008) mention common grounding behaviors, which are verbal conversational techniques to establish common ground with the user. These techniques include, for example, pointing out similarities in lifestyle or interests.
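To make the notion of linguistic style matching more concrete, the sketch below is our own illustration (not taken from any of the cited studies): it computes a simplified similarity score in the spirit of Ireland et al. (2011), comparing how similarly two speakers use categories of function words. The category names and word lists are hypothetical toy examples; operational research would rely on validated function-word dictionaries.

```python
# Illustrative sketch only: a simplified linguistic style matching (LSM) score.
# The function-word categories below are toy examples, not a validated lexicon.
FUNCTION_WORDS = {
    "pronouns": {"i", "you", "we", "they", "it"},
    "articles": {"a", "an", "the"},
    "conjunctions": {"and", "but", "or"},
}

def category_rates(text):
    """Return each function-word category's share of all words in the text."""
    words = text.lower().split()
    total = len(words) or 1  # avoid division by zero for empty input
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in FUNCTION_WORDS.items()}

def lsm_score(text_a, text_b):
    """Average per-category similarity: 1 - |pa - pb| / (pa + pb + eps)."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    eps = 1e-4  # keeps the ratio defined when both rates are zero
    sims = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + eps)
            for c in FUNCTION_WORDS]
    return sum(sims) / len(sims)
```

Two utterances with identical function-word profiles score 1.0, while stylistically divergent utterances score lower; a conversational agent could, in principle, use such a signal to adapt its wording toward the user's style.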

Overlooked behaviors for categories containing more than five labels

First, in our taxonomy we found that communicative behaviors in conversational agents that aim to establish similarity with humans primarily focus on nonverbal behaviors. However, several nonverbal behaviors investigated in the H2H literature remain overlooked. According to Sundaram and Webster (2000), communicative nonverbal behavior can be divided into paralanguage, kinesics (e.g. body movement) and proxemics (e.g. interpersonal distance). Research on nonverbal behavior in conversational agents has focused mainly on kinesics and less on paralinguistic cues such as loudness, rate and pitch, or on proxemics, all of which could be interesting avenues for future research.

Research on communicative behaviors grounded in responsiveness is strongly focused on verbal behaviors. In the nonverbal category, only a few behaviors can be found, including cooperative gestures and listening behaviors such as nodding (e.g. Gratch et al., 2007; Kanda et al., 2007). This provides an opportunity for future research, as Gremler and Gwinner (2008) identify several other responsive nonverbal behaviors. For example, they refer to a study by Wood (2006), who showed that a friendly smile is an important communicative cue for increasing perceived trustworthiness. Furthermore, nonverbal displays of empathy with the customer are considered crucial for establishing rapport (Gremler and Gwinner, 2000), but remain largely unexplored in the H2M literature. Therefore, future research might want to investigate the effects of facial expressions of care and concern in avatars or humanoid robots.

Most studies in the category of verbal responsiveness focused on politeness (Portela and Granell-Canut, 2017) or humor (Torrey et al., 2013). However, perceptions of politeness can be induced by behaviors other than those mentioned in the taxonomy; examples include expressions of cheerfulness or unexpected honesty (Gremler and Gwinner, 2008). With regard to humor, Gremler and Gwinner (2008) point out the importance of (mutual) laughter in service encounters, not only humor per se, and add that humor should be related to the service situation. These might be useful additions for the investigation of humor in conversational agents. Furthermore, Gremler and Gwinner (2008) note that empathy includes not only expressions of care but also expressions that signal an understanding of the customer's problems, which they refer to as cognitive empathy. Such expressions can be strengthened by back-channel responses such as “um-hmm” to induce perceptions of listening and thereby influence rapport (Gremler and Gwinner, 2008).

Conclusion and discussion

Given the reluctance of customers to engage with conversational agents, the question has been raised which communicative behaviors, if any, conversational agents can use to build better relationships (Brandtzaeg and Følstad, 2017). This study aims to answer that question by creating both an overview of the communicative behaviors that have already been investigated in conversational agents and their effects, and a research agenda that points to overlooked communicative behaviors and to thought-provoking findings in existing studies that call for further research.

Taken together, the findings of our literature review show that the use of certain communicative behaviors by conversational agents has significant positive effects on relevant relational mediators and outcomes such as intention to use, word of mouth, loyalty and increased cooperation (e.g. Richards and Bransky, 2014; Waytz et al., 2014). Other behaviors, however, show effects that are less clear and straightforward than one would expect based on anthropomorphism theory (Epley et al., 2007) or the CASA paradigm (Nass and Reeves, 1996). Furthermore, several communicative behaviors that had already been identified in the literature on H2H service encounters have not yet been investigated in conversational agents. This suggests that there are several promising avenues for future research, which are outlined in the next section.

Theoretical implications: a research agenda

This study has several theoretical implications. First, conducting research on conversational agents in a service setting requires a multidisciplinary approach, drawing on theories and previous research from the fields of AI, robotics, psychology, service management, communication science and others. This is the first study to analyze studies from these different subdisciplines and draw up a taxonomy that allows comparison across fields, categories of communicative behaviors and types of conversational agents. Using this taxonomy, researchers interested in the impact of AI on services can conduct more targeted research into the communicative behaviors that can be implemented in conversational agents to enhance the overall customer experience.

More specifically, we advise researchers to focus on (1) categories of communicative behaviors that show mixed results in our review and (2) communicative behaviors from the H2H literature that have not yet been investigated in conversational agents. With respect to (1), several complexities emerged from our research. First and foremost, user characteristics such as gender (Derrick and Ligon, 2014), personality (Cassell and Bickmore, 2003) or relationship orientation (Lee et al., 2010) have been shown to interfere with the intended effects of communicative behaviors used by conversational agents. This is particularly true for verbal behaviors grounded in responsiveness. For example, Derrick and Ligon (2014) showed that men and women have different preferences for the use of social praise by conversational agents, and Cassell and Bickmore (2003) showed that small talk evokes trust in extroverts but not in introverts. This suggests that humans differ not only in their responsiveness toward others but also in their receptiveness to responsiveness from others. Initial attempts to map user expectations and preferences can be found in usability studies such as Brandtzaeg and Følstad (2018) and Baron (2015); it is desirable that more research be conducted in this area. For example, experimental research could investigate how the preferences of specific user groups, such as users with high and low affinity for technology, moderate the effects of particular communicative behaviors. Furthermore, the interplay between appearance characteristics and verbal and nonverbal behaviors should be explored further, as some studies show that endowing a conversational agent with human-like appearance characteristics also raises user expectations of the agent's verbal and nonverbal behavior (e.g. Luo et al., 2006; McBreen and Jack, 2001).

In the H2H literature, several potential additional behaviors are mentioned that have not yet been investigated in conversational agents. The full overview was provided in the results section, but we highlight a few particularly promising ones here: mimicry of language use or communication style; mimicry of gestures, posture, speech rate or facial expressions; the use of common grounding behaviors; the use of nonverbal expressions of empathy; and the use of back-channel responses that signal active listening.

Managerial implications: communicative behaviors worth considering

The systematic review reported in this study sheds more light on the effects of communicative behaviors in conversational agents, which can help service managers enhance the experience of customers interacting with them. The green cells in Table 1 list several behaviors that have been shown to have positive effects when used by conversational agents. These include (but are not restricted to) human-like appearance, similarity in appearance to the customer, the use of etiquette, the use of cooperative gestures and the use of laughter. However, even for some of these behaviors, implementation can come at a cost, such as decreased task performance by the user (Salem et al., 2013), attention shifts (Pejsa et al., 2015) or the activation of undesirable personas (Powers et al., 2005). Therefore, we strongly advise service managers to carefully consider whether the ends justify the means.

Limitations

Of course, our study also has some limitations. First, this literature review is a snapshot of a research field in flux. Technological innovations are moving faster than ever before, which implies that the practical state of the art may already be one step ahead. We hope this overview and research agenda encourage scholars to shed more light on ongoing developments.

Second, we have combined research on chatbots, avatars and robots under the common denominator of conversational agents. Although there is theoretical justification for doing so, we may have oversimplified the differences between agent types. For example, the effects of communicative behaviors tested in a robot cannot be viewed in isolation from its physical embodiment. Therefore, it is hard to compare the effects of similar communicative behaviors across agent types.

Third, we used review articles as a basis for comparison between the H2M and the H2H literature. Suitable review articles were scarce, however, which suggests that a more detailed analysis of individual studies could have provided a more complete picture. Nevertheless, we hope to have provided a useful framework for such a comparison and encourage other authors to extend our work with more specific studies in the future.

Figures

Figure 1: The role of communicative behaviors in service relationships

Figure 2: Search procedures

Figure 3: Diagram of included studies

Table 1: Taxonomy of communicative behaviors in conversational agents ( = chatbot,  = avatar,  = robot,  = combination)

Note

1.

Seven studies received two labels in different categories and one study received three. This explains why 70 labels instead of 61 are displayed in Table 1.

References

Note: References with an asterisk were included in the literature review

Åkesson, M., Edvardsson, B. and Tronvoll, B. (2014), “Regular issue paper: customer experience from a self-service system perspective”, Journal of Service Management, Vol. 25 No. 5, pp. 677-698.

Anderson, E. and Weitz, B. (1992), “The use of pledges to build and sustain commitment in distribution channels”, Journal of Marketing Research, Vol. 29 No. 1, pp. 18-34.

Bailenson, J.N. and Yee, N. (2005) *, “Digital chameleons: automatic assimilation of nonverbal gestures in immersive virtual environments”, Psychological Science, Vol. 16 No. 10, pp. 814-819.

Bainbridge, W.A., Hart, J., Kim, E.S. and Scassellati, B. (2008) *, “The effect of presence on human-robot interaction”, RO-MAN 2008 - the 17th IEEE International Symposium on Robot and Human Interactive Communication, IEEE, pp. 701-706.

Baron, N.S. (2015), “Shall we talk? Conversing with humans and robots”, The Information Society, Vol. 31 No. 3, pp. 257-264.

Bartneck, C., Kanda, T., Ishiguro, H. and Hagita, N. (2007) *, “Is the uncanny valley an uncanny cliff?”, RO-MAN 2007 - the 16th IEEE International Symposium on Robot and Human Interactive Communication, IEEE, pp. 368-373.

Bartneck, C., Kanda, T., Mubin, O. and Al Mahmud, A. (2009), “Does the design of a robot influence its animacy and perceived intelligence?”, International Journal of Social Robotics, Vol. 1 No. 2, pp. 195-204.

Bergmann, K., Eyssel, F. and Kopp, S. (2012), “A second chance to make a first impression? How appearance and nonverbal behavior affect perceived warmth and competence of virtual agents over time”, Intelligent Virtual Agents, Vol. 7502, pp. 126-138.

Bickmore, T. and Cassell, J. (2001) *, “Relational agents: a model and implementation of building user trust”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '01, ACM Press, New York, pp. 396-403.

Boles, J.S., Johnson, J.T. and Barksdale, H.C. (2000), “How salespeople build quality relationships: a replication and extension”, Journal of Business Research, Vol. 48 No. 1, pp. 75-81.

Bolton, R.N., McColl-Kennedy, J.R., Cheung, L., Gallan, A., Orsingher, C., Witell, L. and Zaki, M. (2018), “Customer experience challenges: bringing together digital, physical and social realms”, Journal of Service Management, Vol. 29 No. 5, pp. 776-808.

Botanalytics (2018), “The top industries driving chatbot innovation”, available at: https://botanalytics.co/blog/2018/02/07/top-chatbot-industries-driving-chatbot-innovation/ (accessed 1 February 2019).

Brandtzaeg, P.B. and Følstad, A. (2017), “Why people use chatbots”, Insci 2017, Vol. 9934, pp. 377-392.

Brandtzaeg, P.B. and Følstad, A. (2018), “Chatbots”, Interactions, Vol. 25 No. 5, pp. 38-43.

Broadbent, E., Kumar, V., Li, X., Sollers, J., Stafford, R.Q., MacDonald, B.A. and Wegner, D.M. (2013) *, “Robots with display screens: a robot with a more humanlike face display is perceived to have more mind and a better personality”, PloS One, Vol. 8 No. 8, e72589.

Byrne, D. (1997), “An overview (and underview) of research and theory within the attraction paradigm”, Journal of Social and Personal Relationships, Vol. 14 No. 3, pp. 417-431.

Čaić, M., Odekerken-Schröder, G. and Mahr, D. (2018), “Service robots: value co-creation and co-destruction in elderly care networks”, Journal of Service Management, Vol. 29 No. 2, pp. 178-205.

Canevello, A. and Crocker, J. (2010), “Creating good relationships: responsiveness, relationship quality, and interpersonal goals”, Journal of Personality and Social Psychology, Vol. 99 No. 1, pp. 78-106.

Cassell, J. and Bickmore, T. (2000), “External manifestations of trustworthiness in the interface”, Communications of the ACM, Vol. 43 No. 12, pp. 50-56.

Cassell, J. and Bickmore, T. (2003) *, “Negotiated collusion: modeling social language and its relationship effects in intelligent agents”, User Modelling and User-Adapted Interaction, Vol. 13 Nos 1-2, pp. 89-132.

Cassell, J. (2000), “More than just another pretty face: embodied conversational interface agents”, Communications of the ACM, Vol. 43, pp. 70-78.

Cassell, J. (2001), “Conversational agents representation and intelligence in user interfaces”, AI Magazine, Vol. 22 No. 4, pp. 67-84.

Cowell, A.J. and Stanney, K.M. (2005) *, “Manipulation of non-verbal interaction style and demographic embodiment to increase anthropomorphic computer character credibility”, International Journal of Human-Computer Studies, Vol. 62 No. 2, pp. 281-306.

Cronin, J.J. and Taylor, S.A. (1994), “SERVPERF versus SERVQUAL: reconciling performance-based and perceptions-minus-expectations measurement of service quality”, Journal of Marketing, Vol. 58 No. 1, p. 125.

Crosby, L.A., Evans, K.R. and Cowles, D. (1990), “Relationship quality in services selling: an interpersonal influence perspective”, Journal of Marketing, Vol. 54 No. 3, p. 68.

Dabholkar, P.A., Michelle Bobbitt, L. and Lee, E. (2003), “Understanding consumer motivation and behavior related to self‐scanning in retailing”, International Journal of Service Industry Management, Vol. 14 No. 1, pp. 59-95.

De Keyser, A., Köcher, S., Alkire (née Nasr), L., Verbeeck, C. and Kandampully, J. (2019), “Frontline Service Technology infusion: conceptual archetypes and future research directions”, Journal of Service Management, Vol. 30 No. 1, pp. 156-183.

de Ruyter, K. and Wetzels, M.G.M. (2000), “The impact of perceived listening behavior in voice-to-voice service encounters”, Journal of Service Research, Vol. 2 No. 3, pp. 276-284.

Derrick, D.C. and Ligon, G.S. (2014) *, “The affective outcomes of using influence tactics in embodied conversational agents”, Computers in Human Behavior, Vol. 33, pp. 39-48.

Duffy, B.R. (2003), “Anthropomorphism and the social robot”, Robotics and Autonomous Systems, Vol. 42 Nos 3-4, pp. 177-190.

Epley, N., Waytz, A. and Cacioppo, J.T. (2007), “On seeing human: a three-factor theory of anthropomorphism”, Psychological Review, Vol. 114 No. 4, pp. 864-886.

Everett, J., Pizarro, D. and Crockett, M. (2017), “Why are we reluctant to trust robots?”, available at: https://www.theguardian.com/science/head-quarters/2017/apr/24/why-are-we-reluctant-to-trust-robots (accessed 16 November 2017).

Fink, J. (2012), Anthropomorphism and Human Likeness in the Design of Robots and Human-Robot Interaction, pp. 199-208.

Fryer, L. and Carpenter, R. (2006), “Emerging technologies: bots as language learning tools”, Language Learning and Technology, Vol. 10 No. 3, pp. 8-14.

Giles, H., Coupland, N. and Coupland, J. (1991), “Accommodation theory: communication, context, and consequence”, in Giles, H., Coupland, J. and Coupland, N. (Eds), Contexts of Accommodation, Cambridge University Press, Cambridge, pp. 1-68.

Giles, H. (2016), “Communication accommodation theory”, The International Encyclopedia of Communication Theory and Philosophy, John Wiley & Sons, Hoboken, NJ, pp. 1-7.

Goffman, E. (1979), “Footing”, Semiotica, Vol. 25 Nos 1-2, pp. 1-30.

Gratch, J., Wang, N., Okhmatovskaia, A., Lamothe, F., Morales, M., van der Werf, R.J. and Morency, L.-P. (2007) *, “Can virtual humans be more engaging than real ones?”, 12th International Conference on Human-Computer Interaction, pp. 286-297.

Gremler, D.D. and Gwinner, K.P. (2000), “Customer-employee rapport in service relationships”, Journal of Service Research, Vol. 3 No. 1, pp. 82-104.

Gremler, D.D. and Gwinner, K.P. (2008), “Rapport-building behaviors used by retail employees”, Journal of Retailing, Vol. 84 No. 3, pp. 308-324.

Groom, V., Nass, C., Chen, T., Nielsen, A., Scarborough, J.K. and Robles, E. (2009) *, “Evaluating the effects of behavioral realism in embodied agents”, International Journal of Human Computer Studies, Elsevier, Vol. 67 No. 10, pp. 842-849.

Hale, J. and Hamilton, A.F.D.C. (2016) *, “Testing the relationship between mimicry, trust and rapport in virtual reality conversations”, Scientific Reports, Vol. 6 No. 1, p. 35295.

Hampes, W.P. (2010), “The relation between humor styles and empathy”, Europe's Journal of Psychology, Vol. 6 No. 3, pp. 131-138.

Hennig-Thurau, T., Gwinner, K.P. and Gremler, D.D. (2002), “Understanding relationship marketing outcomes: an integration of relational benefits and relationship quality”, Journal of Service Research, Vol. 4 No. 3, pp. 230-247.

Hoffman, G., Birnbaum, G.E., Vanunu, K., Sass, O. and Reis, H.T. (2014) *, “Robot responsiveness to human disclosure affects social impression and appeal”, Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction - HRI '14, ACM Press, New York, pp. 1-8.

Holmqvist, J., Van Vaerenbergh, Y. and Grönroos, C. (2017), “Language use in services: recent advances and directions for future research”, Journal of Business Research, Elsevier, Vol. 72, pp. 114-118.

Holtgraves, T.M., Ross, S.J., Weywadt, C.R. and Han, T.L. (2007) *, “Perceiving artificial social agents”, Computers in Human Behavior, Vol. 23 No. 5, pp. 2163-2174.

Ireland, M.E., Slatcher, R.B., Eastwick, P.W., Scissors, L.E., Finkel, E.J. and Pennebaker, J.W. (2011), “Language style matching predicts relationship initiation and stability”, Psychological Science, Vol. 22 No. 1, pp. 39-44.

Kanda, T., Kamasima, M., Imai, M., Ono, T., Sakamoto, D., Ishiguro, H. and Anzai, Y. (2007) *, “A humanoid robot that pretends to listen to route guidance from a human”, Autonomous Robots, Vol. 22 No. 1, pp. 87-100.

Kaptein, M., Markopoulos, P., de Ruyter, B. and Aarts, E. (2011) *, “Two acts of social intelligence: the effects of mimicry and social praise on the evaluation of an artificial agent”, AI and Society, Vol. 26 No. 3, pp. 261-273.

Keeling, K., McGoldrick, P. and Beatty, S. (2010) *, “Avatars as salespeople: communication style, trust, and intentions”, Journal of Business Research, Elsevier, Vol. 63 No. 8, pp. 793-800.

Kim, C., Lee, S.G. and Kang, M. (2012) *, “I became an attractive person in the virtual world: users' identification with virtual communities and avatars”, Computers in Human Behavior, Vol. 28 No. 5, pp. 1663-1669.

Klein, J., Moon, Y. and Picard, R.W. (2002) *, “This computer responds to user frustration”, Interacting with Computers, ACM Press, New York, Vol. 14 No. 2, pp. 119-140.

Krämer, N.C., Simons, N. and Kopp, S. (2007) *, “The effects of an embodied conversational agent's nonverbal behavior on user's evaluation and behavioral mimicry”, in International Workshop on Intelligent Virtual Agents, Springer, Berlin, Heidelberg, pp. 238-251.

Kulms, P., Kopp, S. and Krämer, N.C. (2014) *, “Let's be serious and have a laugh: can humor support cooperation with a virtual agent?”, Interacting with Computers, Vol. 14, pp. 250-259.

Laranjo, L., Dunn, A.G., Tong, H.L., Kocaballi, A.B., Chen, J., Bashir, R., Surian, D., Gallego, B., Magrabi, F., Lau, A.Y.S. and Coiera, E. (2018), “Conversational agents in healthcare: a systematic review”, Journal of the American Medical Informatics Association, Vol. 25 No. 9, pp. 1248-1258.

Larivière, B., Bowen, D., Andreassen, T.W., Kunz, W., Sirianni, N.J., Voss, C., Wünderlich, N.V. and De Keyser, A. (2017), “‘Service Encounter 2.0’: an investigation into the roles of technology, employees and customers”, Journal of Business Research, Vol. 79, pp. 238-246.

Lecy, J.D. and Beatty, K.E. (2012), “Representative literature reviews using constrained snowball sampling and citation network analysis”, SSRN Electronic Journal, January, available at: https://doi.org/10.2139/ssrn.1992601.

Lee, K.M., Jung, Y., Kim, J. and Kim, S.R. (2006a) *, “Are physically embodied social agents better than disembodied social agents?: the effects of physical embodiment, tactile interaction, and people's loneliness in human–robot interaction”, International Journal of Human-Computer Studies, Vol. 64 No. 10, pp. 962-973.

Lee, K.M., Peng, W., Jin, S.A. and Yan, C. (2006b) *, “Can robots manifest personality?: an empirical test of personality recognition, social responses, and social presence in human-robot interaction”, Journal of Communication, Vol. 56 No. 4, pp. 754-772.

Lee, M.K., Kiesler, S., Forlizzi, J., Srinivasa, S. and Rybski, P. (2010) *, “Gracefully mitigating breakdowns in robotic services”, 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, pp. 203-210.

Lester, J., Branting, K. and Mott, B. (2004), “Conversational agents”, The Practical Handbook of Internet Computing, Chapman and Hall/CRC, Florida, pp. 220-240.

Li, M. and Mao, J. (2015) *, “Hedonic or utilitarian? Exploring the impact of communication style alignment on user's perception of virtual health advisory services”, International Journal of Information Management, Vol. 35 No. 2, pp. 229-243.

Lian, J.-W. (2018), “Why is self-service technology (SST) unpopular? Extending the IS success model”, Library Hi Tech, p. LHT-01-2018-0015.

Lin, C.-Y. and Lin, J.-S.C. (2017), “The influence of service employees' nonverbal communication on customer-employee rapport in the service encounter”, Journal of Service Management, Vol. 28 No. 1, pp. 107-132.

Luo, J.T., McGoldrick, P., Beatty, S. and Keeling, K.A. (2006) *, “On-screen characters: their design and influence on consumer trust”, Journal of Services Marketing, Vol. 20 No. 2, pp. 112-124.

Maisel, N.C., Gable, S.L. and Strachman, A. (2008), “Responsive behaviors in good times and in bad”, Personal Relationships, Vol. 15 No. 3, pp. 317-338.

Makarem, S.C., Mudambi, S.M. and Podoshen, J.S. (2009), “Satisfaction in technology-enabled service encounters”, Journal of Services Marketing, Vol. 23 No. 3, pp. 134-144.

Marinova, D., de Ruyter, K., Huang, M.H., Meuter, M.L. and Challagalla, G. (2017), “Getting smart: learning from technology-empowered frontline interactions”, Journal of Service Research, Vol. 20 No. 1, pp. 29-42.

McBreen, H.M. and Jack, M.A. (2001) *, “Evaluating humanoid synthetic agents in e-retail applications”, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 31 No. 5, pp. 394-405.

McTear, M., Callejas, Z. and Griol, D. (2016), “Conversational interfaces: devices, wearables, virtual agents, and robots”, The Conversational Interface, Springer International Publishing, Cham, pp. 283-308.

Meuter, M.L., Bitner, M.J., Ostrom, A.L. and Brown, S.W. (2005), “Choosing among alternative service delivery modes: an investigation of customer trial of self-service technologies”, Journal of Marketing, Vol. 69 No. 2, pp. 61-83.

Mirnig, N., Stollnberger, G., Giuliani, M. and Tscheligi, M. (2017) *, “Elements of humor”, Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI '17, ACM Press, New York, NY, pp. 211-212.

Moliner, M.A. (2009), “Loyalty, perceived value and relationship quality in healthcare services”, Journal of Service Management, Vol. 20 No. 1, pp. 76-97.

Morgan, R.M. and Hunt, S.D. (1994), “The commitment-trust theory of relationship marketing”, Journal of Marketing, Vol. 58 No. 3, pp. 20-38.

Morgan, B. (2017), “10 things robots can't do better than humans”, available at: https://www.forbes.com/sites/blakemorgan/2017/08/16/10-things-robots-cant-do-better-than-humans/#547f6ffcc83d.

Mori, M., MacDorman, K. and Kageki, N. (2012), “The uncanny valley [from the field]”, IEEE Robotics and Automation Magazine, Vol. 19 No. 2, pp. 98-100.

Nass, C. and Reeves, B. (1996), “The media equation: how people treat computers, television and new media like real people and places”, IEEE Spectrum, Vol. 34 No. 3, pp. 9-10.

Nass, C., Steuer, J. and Tauber, E.R. (1994), “Computers are social actors”, Conference Companion on Human Factors in Computing Systems - CHI '94, ACM Press, New York, p. 204.

Nass, C., Moon, Y., Fogg, B.J., Reeves, B. and Dryer, D.C. (1995), “Can computer personalities be human personalities?”, International Journal of Human-Computer Studies, Vol. 43 No. 2, pp. 223-239.

Niculescu, A.I. and Banchs, R.E. (2019) *, “Humor intelligence for virtual agents”, 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, pp. 285-297.

Nowak, K.L. and Rauh, C. (2005) *, “The influence of the avatar on online perceptions of anthropomorphism, androgyny, credibility, homophily, and attraction”, Journal of Computer-Mediated Communication, Vol. 11 No. 1, pp. 153-178.

Packard, G., Moore, S.G. and McFerran, B. (2014), How Can ‘I’ Help ‘You’? The Impact of Personal Pronoun Use in Customer-Firm Agent Interactions, Marketing Science Institute Research Report No. 14-110.

Paiva, A., Dias, J., Sobral, D., Aylett, R., Woods, S., Hall, L. and Zoll, C. (2005) *, “Learning by feeling: evoking empathy with synthetic characters”, Applied Artificial Intelligence, Vol. 19 Nos 3-4, pp. 235-266.

Palmatier, R.W., Dant, R.P., Grewal, D. and Evans, K.R. (2006), “Factors influencing the effectiveness of relationship marketing: a meta-analysis”, Journal of Marketing, Vol. 70 No. 4, pp. 136-153.

Parasuraman, R. and Miller, C.A. (2004) *, “Trust and etiquette in high-criticality automated systems”, Communications of the ACM, Vol. 47 No. 4, pp. 51-55.

Parise, S., Kiesler, S., Sproull, L. and Waters, K. (1999) *, “Cooperating with life-like interface agents”, Computers in Human Behavior, Vol. 15 No. 2, pp. 123-142.

Pejsa, T., Andrist, S., Gleicher, M. and Mutlu, B. (2015) *, “Gaze and attention management for embodied conversational agents”, ACM Transactions on Interactive Intelligent Systems, Vol. 5 No. 1, pp. 1-34.

Polani (2017), “Emotionless chatbots are taking over customer service – and it's bad news for consumers”, available at: https://theconversation.com/emotionless-chatbots-are-taking-over-customer-service-and-its-bad-news-for-consumers-82962 (accessed 2 October 2018).

Portela, M. and Granell-Canut, C. (2017) *, “A new friend in our smartphone?”, Proceedings of the XVIII International Conference on Human Computer Interaction - Interacción '17, ACM Press, New York, NY, pp. 1-7.

Powers, A., Kramer, A.D.I., Lim, S., Kuo, J., Lee, S.L. and Kiesler, S. (2005) *, “Eliciting information from people with a gendered humanoid robot”, ROMAN 2005: IEEE International Workshop on Robot and Human Interactive Communication, IEEE, pp. 158-163.

Qiu, L. and Benbasat, I. (2010) *, “A study of demographic embodiments of product recommendation agents in electronic commerce”, International Journal of Human-Computer Studies, Vol. 68 No. 10, pp. 669-688.

Radziwill, N.M. and Benton, M.C. (2017), “Evaluating quality of chatbots and intelligent conversational agents”, available at: http://arxiv.org/abs/1704.04579.

Reis, H.T. (2007), “Steps toward the ripening of relationship science”, Personal Relationships, Vol. 14, pp. 1-23.

Richards, D. and Bransky, K. (2014) *, “ForgetMeNot: what and how users expect intelligent virtual agents to recall and forget personal conversational content”, International Journal of Human-Computer Studies, Vol. 72 No. 5, pp. 460-476.

Rust, R.T. and Huang, M.-H. (2014), “The service revolution and the transformation of marketing science”, Marketing Science, Vol. 33 No. 2, pp. 206-221.

Salem, M., Eyssel, F., Rohlfing, K., Kopp, S. and Joublin, F. (2011) *, “Effects of gesture on the perception of psychological anthropomorphism: a case study with a humanoid robot”, Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 7072, LNAI, pp. 31-41.

Salem, M., Eyssel, F., Rohlfing, K., Kopp, S. and Joublin, F. (2013) *, “To err is human(-like): effects of robot gesture on perceived anthropomorphism and likability”, International Journal of Social Robotics, Vol. 5 No. 3, pp. 313-323.

Siegel, M., Breazeal, C. and Norton, M.I. (2009) *, “Persuasive robotics: the influence of robot gender on human behavior”, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, pp. 2563-2568.

Sirdeshmukh, D., Singh, J. and Sabol, B. (2002), “Consumer trust, value, and loyalty in relational exchanges”, Journal of Marketing, Vol. 66 No. 1, pp. 15-37.

Sjöbergh, J. and Araki, K. (2009) *, “Robots make things funnier”, Annual Conference of the Japanese Society for Artificial Intelligence, Springer, Berlin, Heidelberg, pp. 306-313.

Specht, N., Fichtel, S. and Meyer, A. (2007), “Perception and attribution of employees' effort and abilities: the impact on customer encounter satisfaction”, International Journal of Service Industry Management, Vol. 18 No. 5, pp. 534-554.

Stanton, C. and Stevens, C.J. (2014) *, “Robot pressure: the impact of robot eye gaze and lifelike bodily movements upon decision-making and trust”, International Conference on Social Robotics, Springer, Cham, pp. 330-339.

Strait, M., Canning, C. and Scheutz, M. (2014) *, “Let me tell you! Investigating the effects of robot communication strategies in advice-giving situations based on robot appearance, interaction modality and distance”, Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction - HRI '14, ACM Press, New York, NY, pp. 479-486.

Sundaram, D.S. and Webster, C. (2000), “The role of nonverbal communication in service encounters”, Journal of Services Marketing, Vol. 14 No. 5, pp. 378-391.

Swan, J.E., Bowers, M.R. and Richardson, L.D. (1999), “Customer trust in the salesperson: an integrative review and meta-analysis of the empirical literature”, Journal of Business Research, Vol. 44 No. 2, pp. 93-107.

Tajfel, H. (1974), “Social identity and intergroup behavior”, Information (International Social Science Council), Vol. 13 No. 2, pp. 65-93.

Tay, B.T.C., Low, S.C., Ko, K.H. and Park, T. (2016) *, “Types of humor that robots can play”, Computers in Human Behavior, Vol. 60, pp. 19-28.

Torrey, C., Fussell, S.R. and Kiesler, S. (2013) *, “How a robot should give advice”, 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, pp. 275-282.

Turner, J.C. and Reynolds, K.J. (2011), “Self-categorization theory”, Handbook of Theories in Social Psychology, 1st ed., pp. 399-417.

Tzeng, J.Y. (2004), “Toward a more civilized design: studying the effects of computers that apologize”, International Journal of Human-Computer Studies, Vol. 61 No. 3, pp. 319-345.

van den Brule, R., Dotsch, R., Bijlstra, G., Wigboldus, D.H.J. and Haselager, P. (2014) *, “Do robot performance and behavioral style affect human trust?: a multi-method approach”, International Journal of Social Robotics, Vol. 6 No. 4, pp. 519-531.

van Doorn, J., Mende, M., Noble, S.M., Hulland, J., Ostrom, A.L., Grewal, D. and Petersen, J.A. (2017), “Domo arigato Mr. Roboto: emergence of automated social presence in organizational frontlines and customers' service experiences”, Journal of Service Research, Vol. 20 No. 1, pp. 43-58.

Vieira, A.L., Winklhofer, H. and Ennew, C.T. (2008), “Relationship quality: a literature review and research agenda”, Journal of Customer Behaviour, Vol. 7 No. 4, pp. 269-291.

von der Pütten, A.M., Krämer, N.C., Gratch, J. and Kang, S. (2010) *, “‘It doesn't matter what you are!’ Explaining social effects of agents and avatars”, Computers in Human Behavior, Vol. 26 No. 6, pp. 1641-1650.

Van Vugt, H.C., Bailenson, J.N., Hoorn, J.F. and Konijn, E.A. (2010) *, “Effects of facial similarity on user responses to embodied agents”, ACM Transactions on Computer-Human Interaction, Vol. 17 No. 2, pp. 1-27.

Wang, L.C., Baker, J., Wagner, J.A. and Wakefield, K. (2007), “Can a retail web site be social?”, Journal of Marketing, Vol. 71 No. 3, pp. 143-157.

Waytz, A., Heafner, J. and Epley, N. (2014) *, “The mind in the machine: anthropomorphism increases trust in an autonomous vehicle”, Journal of Experimental Social Psychology, Vol. 52, pp. 113-117.

Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S. and Martins, A. (2018), “Brave new world: service robots in the frontline”, Journal of Service Management, Vol. 29 No. 5, pp. 907-931.

Wolfswinkel, J.F., Furtmueller, E. and Wilderom, C.P.M. (2013), “Using grounded theory as a method for rigorously reviewing literature”, European Journal of Information Systems, Vol. 22 No. 1, pp. 45-55.

Wood, J.A. (2006), “NLP revisited: nonverbal communications and signals of trustworthiness”, Journal of Personal Selling and Sales Management, Vol. 26 No. 2, pp. 197-204.

Wulf, K.D., Odekerken-Schröder, G. and Iacobucci, D. (2001), “Investments in consumer relationships: a cross-country and cross-industry exploration”, Journal of Marketing, Vol. 65 No. 4, pp. 33-50.

Further reading

Bailenson, J.N., Swinth, K., Hoyt, C., Persky, S., Dimov, A. and Blascovich, J. (2005) *, “The independent and interactive effects of embodied-agent appearance and behavior on self-report, cognitive, and behavioral markers of copresence in immersive virtual environments”, Presence: Teleoperators and Virtual Environments, Vol. 14 No. 4, pp. 379-393.

Cramer, H., Evers, V., Ramlal, S., Van Someren, M., Rutledge, L., Stash, N., Aroyo, L. and Wielinga, B. (2008) *, “The effects of transparency on trust in and acceptance of a content-based art recommender”, User Modeling and User-Adapted Interaction, Vol. 18 No. 5, pp. 455-496.

Dotsch, R. and Wigboldus, D.H.J. (2008) *, “Virtual prejudice”, Journal of Experimental Social Psychology, Vol. 44 No. 4, pp. 1194-1198.

Fischer, K., Lohan, K.S. and Foth, K. (2012) *, “Levels of embodiment”, Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI '12, ACM Press, New York, NY, p. 463.

Lisetti, C., Amini, R., Yasavur, U. and Rishe, N. (2013) *, “I can help you change! An empathic virtual agent delivers behavior change health interventions”, ACM Transactions on Management Information Systems, Vol. 4 No. 4, pp. 1-28.

Looije, R., Neerincx, M.A. and De Lange, V. (2008) *, “Children's responses and opinion on three bots that motivate, educate and play”, Journal of Physical Agents, Vol. 2 No. 2, pp. 13-20.

Mathur, M.B. and Reichling, D.B. (2016) *, “Navigating a social world with robot partners: a quantitative cartography of the Uncanny Valley”, Cognition, Vol. 146, pp. 22-32.

Mutlu, B., Shiwa, T., Kanda, T., Ishiguro, H. and Hagita, N. (2009) *, “Footing in human-robot conversations: how robots might shape participant roles using gaze cues”, Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction - HRI '09, pp. 61-68.

Powers, A., Kiesler, S., Fussell, S. and Torrey, C. (2007) *, “Comparing a computer agent with a humanoid robot”, Proceeding of the ACM/IEEE International Conference on Human-Robot Interaction - HRI '07, ACM Press, New York, p. 145.

Riegelsberger, J., Sasse, M.A. and McCarthy, J.D. (2005) *, “Do people trust their eyes more than ears?”, CHI '05 Extended Abstracts on Human Factors in Computing Systems - CHI '05, p. 1745.

Wang, H.C. and Doong, H.S. (2010) *, “Argument form and spokesperson type: the recommendation strategy of virtual salespersons”, International Journal of Information Management, Vol. 30 No. 6, pp. 493-501.

Yuksel, B.F., Collisson, P. and Czerwinski, M. (2017) *, “Brains or beauty: how to engender trust in user-agent interactions”, ACM Transactions on Internet Technology, Vol. 17 No. 1, pp. 1-20.

Acknowledgements

This research was supported by the Province of Limburg, The Netherlands, under grant number SAS-2014-02207.

Corresponding author

Michelle M.E. Van Pinxteren can be contacted at: michelle.vanpinxteren@zuyd.nl
