Telematics and Informatics, Volume 86, Article 102071

Emojifying chatbot interactions: An exploration of emoji utilization in human-chatbot communications

Shubin Yu 1, Luming Zhao 2

1 Department of Communication and Culture, BI Norwegian Business School, Room C4I-021, Nydalsveien 37, 0484 Oslo, Norway
Publication type: Journal Article
Publication date: 2024-02-01
Scimago quartile: Q1
SJR: 1.827
CiteScore: 17.0
Impact factor: 7.6
ISSN: 0736-5853, 1879-324X
Subject areas: Electrical and Electronic Engineering; Computer Networks and Communications
Abstract
The prevalence of chatbots in human–computer communication has increased significantly. Emojis, as a form of emotional disclosure, have attracted considerable attention for their potential to boost satisfaction with chatbot service. However, how and when emoji use increases satisfaction with chatbots has not been fully examined. This paper aims to fill this gap and contribute to the rapidly evolving field of human-chatbot communication research. Through three experiments, it investigates the role of emojis in enhancing chatbot interactions. The results reveal that emojis heighten chatbots' perceived warmth but do not necessarily augment their perceived competence. This warmth-promoting effect increases service satisfaction and is more pronounced when chatbots serve hedonic purposes and are pre-programmed rather than highly autonomous. However, the warmth effect of emojis is not as strong for chatbots as it is for humans. In unraveling the pathway through which emojis augment service satisfaction, this study also extends the Stereotype Content Model (SCM) and advances the new wave of the Computers Are Social Actors (CASA) paradigm. It thereby lays pathways for further research on the role of emotionally simulated interactions in automated technologies.
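The abstract describes a moderated mediation pattern: emoji use raises perceived warmth, warmth raises satisfaction, and chatbot purpose (hedonic vs. utilitarian) conditions the first link. As a rough illustration of how such a pattern could be probed, the sketch below fits two regressions on simulated data; the variable names, coefficients, and data are assumptions for illustration only and are not the authors' materials or analysis code.

```python
# Illustrative sketch only: simulated data; variable names and effects are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
emoji = rng.integers(0, 2, n)            # 0 = no emoji, 1 = emoji condition
hedonic = rng.integers(0, 2, n)          # 0 = utilitarian, 1 = hedonic chatbot purpose
warmth = 3 + 0.8 * emoji + 0.4 * emoji * hedonic + rng.normal(0, 1, n)
satisfaction = 2 + 0.9 * warmth + rng.normal(0, 1, n)
df = pd.DataFrame(dict(emoji=emoji, hedonic=hedonic,
                       warmth=warmth, satisfaction=satisfaction))

# Path a (moderated): does emoji use raise perceived warmth, more so for hedonic bots?
path_a = smf.ols("warmth ~ emoji * hedonic", data=df).fit()
# Paths b and c': does warmth carry the effect of emoji use on satisfaction?
path_b = smf.ols("satisfaction ~ warmth + emoji", data=df).fit()
print(path_a.summary().tables[1])
print(path_b.summary().tables[1])
```

A full analysis would typically add bootstrapped indirect effects and the autonomy moderator; they are omitted here for brevity.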
Agnihotri A., Bhattacharya S.
2024-06-01 citations by CoLab: 36 Abstract  
Leveraging the Computers Are Social Actors theory, this study explores traits of artificial intelligence-based chatbots that make them appear trustworthy, drive consumers to forgive the firm for a service failure, and reduce their propensity to spread negative word-of-mouth against the firm. Across two scenario-based studies with UK consumers, one in a utilitarian product category (n = 586) and another in a hedonic product category (n = 508), and a qualitative study, the findings suggest that the perceived safety of chatbots enhances consumers' perceptions of their ability and empathy, while anthropomorphism enhances their benevolence and integrity; that is, the three traits of chatbots affect components of trustworthiness differently. Further, these traits have a positive influence on customer forgiveness and a negative influence on negative word-of-mouth.
Haase J., Hanel P.H.
2023-12-01 citations by CoLab: 56 Abstract  
A widespread view is that Artificial Intelligence cannot be creative. We tested this assumption by comparing human-generated ideas with those generated by six Generative Artificial Intelligence (GAI) chatbots: alpa.ai, Copy.ai, ChatGPT (versions 3 and 4), Studio.ai, and YouChat. Humans and a specifically trained AI independently assessed the quality and quantity of ideas. We found no qualitative difference between AI and human-generated creativity, although there are differences in how ideas are generated. Interestingly, 9.4% of humans were more creative than the most creative GAI, GPT-4. Our findings suggest that GAIs are valuable assistants in the creative process. Continued research and development of GAI in creative tasks is crucial to fully understand this technology's potential benefits and drawbacks in shaping the future of creativity. Finally, we discuss the question of whether GAIs are capable of being “truly” creative.
Edwards C., Edwards A., Rijhwani V.
Language Sciences scimago Q1 wos Q2
2023-09-01 citations by CoLab: 4 Abstract  
What happens when a social robot attempts to accommodate its communicative behavior towards the human interlocutor? The present experiment seeks to expand understanding of how people evaluate social robots when they (the social robots) engage in cases of over- and under-accommodation during interactions. Additionally, the current study partially replicates and extends earlier work examining nonaccommodation. Results indicated that some relationships between the stereotype content model dimensions (warmth and competence) and the evaluation outcomes were mediated by perceived accommodation. Moreover, the social robot in the overaccommodative communication condition was evaluated more positively than the social robot in the underaccommodative communication condition. As such, it is better for a social robot to be considered overaccommodative than underaccommodative.
Liu D., Lv Y., Huang W.
Technology in Society scimago Q1 wos Q1
2023-05-01 citations by CoLab: 33 Abstract  
Prior research has shown that humor can positively affect service recovery in face-to-face interactions. However, the efficacy of humor when chatbots address service failures in virtual environments remains unclear. Through three experiments in different populations, this paper found that chatbots' use of humorous emojis can increase consumers' willingness to continue using chatbots after service failures (i.e., reuse intention), and it identified the underlying mechanism: the degree to which consumers perceive the chatbot as intelligent (i.e., perceived intelligence) partially mediates the relationship between humorous emoji use and reuse intention. Further, how people form impressions about others based on limited information (i.e., implicit personality) significantly moderates the path from humorous emoji use to perceived intelligence, and perceived intelligence is more likely to mediate for people who see challenging situations as opportunities (i.e., incremental theorists). In conclusion, this paper provides empirical evidence supporting the potential benefits of humorous emojis in chatbot service recovery and offers guidance to online retailers on leveraging digital technology for effective consumer engagement.
Wang K., Chih W., Honora A.
2023-04-01 citations by CoLab: 30 Abstract  
This research investigates the impact of using emojis (i.e., the pleading-face emoji) on customer forgiveness in relation to handling complaints on social media. Specifically, this research proposes that perceived firm sincerity and perceived firm empathy play mediating roles in the relationship between emoji use and customer forgiveness. In addition, the research identifies the moderating role of communication style in the proposed relationship. Results of three experimental studies indicated that the presence of the emoji in online complaint handling leads to an increase in perceived firm sincerity, which in turn increases perceived firm empathy and, subsequently, leads to customer forgiveness. The serial mediation effects (use of emojis → perceived firm sincerity → perceived firm empathy → customer forgiveness) were moderated by the service provider’s communication style. Specifically, the serial mediation effect occurs when an informal communication style, but not a formal communication style, is used. Furthermore, no difference exists in the research findings across different service types (hedonic vs. utilitarian).
Dinh C., Park S.
Electronic Commerce Research scimago Q1 wos Q2
2023-01-06 citations by CoLab: 41 Abstract  
As chatbots become more advanced and popular, marketing research has paid enormous attention to the antecedents of consumer adoption of chatbots. This has become increasingly relevant because chatbots can help mitigate the fear and loneliness caused by the global pandemic. Therefore, unlike previous work that focused on design factors, we theorize that social presence serves a mediating role between consumer motivations (i.e., hedonic and utilitarian) and intention to use a chatbot service based on self-determination theory. Our results from a structural equation model (n = 377) indicate that hedonic (but not utilitarian) motivation significantly affects chatbots’ social presence, ultimately influencing intention to use the chatbot service. We also found that fear of COVID-19 amplifies the effect of social presence on intention to use the chatbot service. In this dynamic, we found an additional moderated moderation effect of generational cohorts (i.e., baby boomers and Generations X, Y, and Z) in experiencing different levels of fear of COVID-19. Overall, our findings emphasize the importance of motivation-matching features for consumer adoption of chatbot services. Our findings also indicate that marketers may utilize the fear element to increase adoption of chatbot services, especially when targeting the young generations (e.g., Generation Z).
Yu S., Xiong J., Shen H.
Journal of Consumer Psychology scimago Q1 wos Q1
2022-11-14 citations by CoLab: 27 Abstract  
This research investigates consumers’ perceptions and evaluations of robot service agents compared with human service agents when service requests are rejected. Six studies were conducted. The results show that when consumers receive a rejection of their service request, they evaluate the service less negatively if the service is handled by a chatbot agent versus a human agent. The reason is that consumers have lower expectations that robots will be able to provide flexible services to them. Consequently, their dissatisfaction with the request rejection is lower when the service is handled by robots. However, the aforementioned effect is not observed (1) when consumers have not experienced the service yet, (2) when their service request has been accepted, or (3) when the service agent conveys emotions to apologize for request rejection.
Diederich S., Brendel A.B., Morana S., Kolbe L.
2022-03-26 citations by CoLab: 124 Abstract  
Conversational agents (CAs), described as software with which humans interact through natural language, have increasingly attracted interest in both academia and practice because of improved capabilities driven by advances in artificial intelligence and, specifically, natural language processing. CAs are used in contexts such as people's private lives, education, and healthcare, as well as in organizations to innovate or automate tasks, for example in marketing, sales, or customer service. In addition to these application contexts, CAs take on different forms in terms of their embodiment, the communication mode, and their (often human-like) design. Despite their popularity, many CAs are unable to fulfill expectations, and fostering a positive user experience is challenging. To better understand how CAs can be designed to fulfill their intended purpose and how humans interact with them, a number of studies focusing on human-computer interaction have been carried out in recent years, which have contributed to our understanding of this technology. However, a structured overview of this research is currently lacking, thus impeding the systematic identification of research gaps and knowledge on which future studies can build. To address this issue, we conducted an organizing and assessing review of 262 studies, applying a sociotechnical lens to analyze CA research regarding user interaction, context, agent design, as well as CA perceptions and outcomes. This study contributes an overview of the status quo of CA research, identifies four research streams through cluster analysis, and proposes a research agenda comprising six avenues and sixteen directions to move the field forward.
Song X., Xu B., Zhao Z.
Information and Management scimago Q1 wos Q1
2022-03-01 citations by CoLab: 75 Abstract  
• We explore the relationship between humans and artificial intelligence based on the theory of love. • Human users can develop intimacy and passion with an intelligent assistant. • Intimacy and passion can promote users' commitment to and usage of an intelligent assistant. • Users' intimacy and passion are anteceded by the performance efficacy and emotional capability of an intelligent assistant. Along with the development of artificial intelligence (AI), more IT applications based on AI are being created. A personal intelligent assistant is an AI application that provides information, education, consulting, or entertainment to users. Because of their high levels of cognitive and emotional capability, we assume that users can form humanlike relationships with intelligent assistants; we therefore develop a research model based on the theory of love. Data were collected from users of intelligent assistants through a survey. The results indicate that users can develop intimacy and passion for an AI application similar to those experienced with human beings. These feelings are related to users' commitment, which promotes the usage of an intelligent assistant; they are influenced by AI factors (performance efficacy and emotional capability) and moderated by human trust disposition.
Horstmann A.C., Krämer N.C.
2022-01-16 citations by CoLab: 12 Abstract  
Since social robots are rapidly advancing and thus increasingly entering people’s everyday environments, interactions with robots also progress. For these interactions to be designed and executed successfully, this study considers insights of attribution theory to explore the circumstances under which people attribute responsibility for the robot’s actions to the robot. In an experimental online study with a 2 × 2 × 2 between-subjects design (N = 394), people read a vignette describing the social robot Pepper either as an assistant or a competitor and its feedback, which was either positive or negative during a subsequently executed quiz, to be generated autonomously by the robot or to be pre-programmed by programmers. Results showed that feedback believed to be autonomous leads to more attributed agency, responsibility, and competence to the robot than feedback believed to be pre-programmed. Moreover, the more agency is ascribed to the robot, the better the evaluation of its sociability and the interaction with it. However, only the valence of the feedback affects the evaluation of the robot’s sociability and the interaction with it directly, which points to the occurrence of a fundamental attribution error.
Mohamad Suhaili S., Salim N., Jambli M.N.
2021-12-01 citations by CoLab: 103 Abstract  
• This review conducts a quantitative analysis of state-of-the-art service chatbots. • Deep learning and reinforcement learning dominate the most used chatbot design techniques. • The Twitter dataset emerges as the most popular dataset for chatbot evaluation. • Accuracy is the most frequently used performance evaluation metric for chatbots. Chatbots, or conversational agents, are the next significant technological leap in the field of conversational services: they enable a device to communicate with a user upon receiving requests in natural language. The device uses artificial intelligence and machine learning to respond to the user with automated responses. While this is a relatively new area of study, the application of this concept has increased substantially over the last few years. The technology is no longer limited to merely emulating human conversation but is also increasingly used to answer questions, either in academic environments or in commercial uses, such as situations requiring assistants to seek reasons for customer dissatisfaction or recommending products and services. The primary purpose of this literature review is to identify and study the existing literature on cutting-edge technology in developing chatbots in terms of research trends, components and techniques, datasets and domains used, as well as the evaluation metrics most used between 2011 and 2020. Using the standard SLR guidelines designed by Kitchenham, this work adopts a systematic literature review approach and utilizes five prestigious scientific databases for identifying, extracting, and analyzing all relevant publications during the search. The related publications were filtered based on inclusion/exclusion criteria and quality assessment to obtain the final set of reviewed papers. The results of the review indicate that deep learning and reinforcement learning architectures are the most used techniques for understanding users' requests and generating appropriate responses. We also found that the Twitter dataset (open domain) is the most popular dataset used for evaluation, followed by the Airline Travel Information Systems (ATIS) dataset (closed domain) and the Ubuntu Dialog Corpora (technical support). The review also indicates that the open domain provided by the Twitter dataset, airline travel, and technical support are the most common domains for chatbots. Moreover, the metrics most often utilized for evaluating chatbot performance (in descending order of popularity) were found to be accuracy, F1-score, BLEU (Bilingual Evaluation Understudy), recall, human evaluation, and precision.
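For the evaluation metrics this review names (accuracy, F1-score, precision, recall, and BLEU), a minimal sketch of how they are commonly computed is shown below. The toy intent labels and reference/candidate sentences are invented for illustration and do not come from the review.

```python
# Toy illustration of common chatbot evaluation metrics; all data are made up.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Intent-classification style evaluation (did the bot pick the right intent?)
y_true = ["book_flight", "refund", "refund", "greeting", "book_flight"]
y_pred = ["book_flight", "refund", "greeting", "greeting", "refund"]
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("F1       :", f1_score(y_true, y_pred, average="macro", zero_division=0))

# Response-generation style evaluation with BLEU (reference vs. generated reply)
reference = [["your", "refund", "has", "been", "processed"]]
candidate = ["your", "refund", "was", "processed"]
print("BLEU     :", sentence_bleu(reference, candidate,
                                  smoothing_function=SmoothingFunction().method1))
```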
Kervyn N., Fiske S.T., Malone C.
2021-11-22 citations by CoLab: 62 Abstract  
People form impressions about brands as they do about social groups. The Brands as Intentional Agents Framework (BIAF), derived a decade ago from the Stereotype Content Model (SCM), identifies two dimensions of consumers' brand perception: warmth (worthy intentions) and competence (ability). The BIAF dimensions and their predictive validity have replicated the general primacy of warmth (intentions) and developed the congruence principle of fit to context. BIAF domains include various brands, product design, and countries as origins of products and as travel destinations. Brand anthropomorphism plays a role in perceiving brands' morality, personality, and humanity. Consumer–brand relations follow from anthropomorphism: perceived brand-self congruence, brand trust, and brand love. Corporate social (ir)responsibility and human relations, especially warm, worthy intent, interplay with the BIAF dimensions, as do service marketing, service recovery, and digital marketing. Case studies show that customer loyalty, especially to warm brands, corresponds to profits, charitable donations, and healthcare usage. As the SCM and BIAF evolve, research potential concerns the dimensions and beyond. BIAF has stood the tests of time, targets (brands, products, and services), and alternative theory (brand personality, brand relationships), all being compatible. Understanding how people view corporations as analogous to social groups advances theory and practice in consumer psychology.
Łukasik A., Gut A.
Frontiers in Psychology scimago Q2 wos Q2 Open Access
2025-04-09 citations by CoLab: 0 PDF Abstract  
The rapid integration of artificial agents—robots, avatars, and chatbots—into human social life necessitates a deeper understanding of human-AI interactions and their impact on social interaction. Artificial agents have become integral across various domains, including healthcare, education, and entertainment, offering enhanced efficiency, personalization, and emotional connectivity. However, their effectiveness in providing successful social interaction is influenced by various factors that affect both their reception and human responses during interaction. The present article explores how different forms of these agents influence processes essential for social interaction, such as attributing mental states and intentions and shaping emotions. The goal of this paper is to analyze the roles that artificial agents can and cannot assume in social environments, the stances humans adopt toward them, and the dynamics of human-artificial agent interactions. Key factors associated with the artificial agent's design, such as physical appearance, adaptability to human behavior, user beliefs and knowledge, transparency of social cues, and the uncanny valley phenomenon, have been selected as factors that significantly influence social interaction in AI contexts.
Sestino A., Rizzo C., Irgang L., Stehlíková B.
2025-03-11 citations by CoLab: 0 Abstract  
Purpose The utilization of virtual agents, particularly chatbots, within healthcare and medical contexts is witnessing exponential growth owing to their capacity to provide comprehensive support to patients throughout their healthcare journey, reshaping healthcare business processes. Such transformation in healthcare service delivery is enabled by digital entities able to offer a preliminary screening and consultation platform, facilitating patients' interactions with real medical professionals. However, when redesigning processes through the integration of new technologies, the reactions of end users cannot be neglected. Thus, the purpose of this paper is to investigate how both chatbots' features and patients' individual differences may shape a redesigned or renewed service in the healthcare sector. Design/methodology/approach Through two experimental studies (Study 1 and Study 2), we examined the impact of chatbot tone of voice (formal vs informal) on patients' behavioral responses, particularly their intention to use. Findings Our investigation sheds light on chatbots' characteristics in terms of perceived warmth, denoting the friendliness and empathy conveyed by the chatbot, and competence, reflecting its effectiveness in addressing user queries or tasks, when chatbots are used to reshape the service delivery process. It also sheds light on the moderating role of emotional receptivity seeking, indicating that the emotionality and non-verbal communication between doctor and patient cannot be overlooked even in innovative digital environments. Practical implications Managers and marketers could leverage insights from this study to tailor chatbot interactions, optimizing tone of voice to enhance patient engagement and satisfaction. By focusing on perceived warmth and competence, they can design more effective digital health solutions. Additionally, recognizing the role of emotional receptivity can guide strategies for integrating chatbots in a way that maintains a human touch in patient communications. Social implications The findings underscore the relevance of chatbots in improving patient care, making digital interactions more empathetic and responsive. This relevance extends to business process management by demonstrating how integrating emotionally intelligent chatbots may contribute to better service delivery through personalized and effective healthcare experiences. Originality/value The originality of this paper lies in considering chatbots' and end users' characteristics to strategically plan the redesign of healthcare service processes. It examines chatbots' perceived warmth and competence in reshaping service delivery processes and highlights the moderating role of emotional receptivity seeking, emphasizing the importance of emotional communication in digital healthcare environments.
Roy R., Naidoo V.
Journal of Services Marketing scimago Q1 wos Q2
2025-03-10 citations by CoLab: 0 Abstract  
Purpose Chatbots are increasingly deployed in services and marketing applications, although they are often met with scepticism. To explore how such scepticism can be reduced, this study aims to examine how materialism and social judgment influence human–chatbot interactions. Design/methodology/approach The authors conduct one pre-test, two laboratory experiments and one simulated field study to test the premises. Findings The studies show that when material pursuit is guided by positive (negative) values, subjects prefer a chatbot that is perceived as warm (competent) over one perceived as competent (warm). This, in turn, leads to favourable purchase decisions for services, with perceived homophily mediating this effect. Research limitations/implications The work addresses the call for more research on how human–robot interactions can be improved in a services context. While the findings are novel, they are not without limitations, which in turn lay a path for future research. Practical implications The findings have implications for deriving more strategic value from how marketing and service managers can improve the interface design in human–chatbot interactions. Originality/value The propositions demonstrate a novel framing in suggesting that positive (vs negative) values underpinning material pursuit can lead to a preference for perceived warm (vs competent) chatbots, which further guides favourable decision-making.
Li Z., Wu C., Li J., Yuan Q.
2025-02-17 citations by CoLab: 0 Abstract  
Purpose Chatbots are increasingly embodied in business and IS contexts to enhance customer and user experience. Despite wide interest in chatbots among business and IS academics, there are surprisingly no comprehensive reviews revealing the knowledge structure of chatbot research in these areas. Design/methodology/approach This study employed a mixed-method approach that combines systematic review and bibliometric analysis to provide a comprehensive synthesis of chatbot research. The sample was obtained in December 2023 after searching across six databases: EBSCOhost, PsycINFO, Web of Science, Scopus, ACM Digital Library and IEEE Computer Society Digital Library. Findings This study reveals major trends in publications, countries, article performance and cluster distribution of chatbot research. We also identify the key themes of chatbot research, which mainly concern how users interact with chatbots and the consequences, such as users' cognition and behavior. Moreover, several important research agendas are discussed to address limitations in current chatbot research in the business and IS fields. Originality/value The present review is one of the first attempts to systematically reveal the ongoing knowledge map of chatbots in the business and IS fields, making important contributions and providing useful resources for future chatbot research and practice.
Xiao Y., Yu S.
2025-02-01 citations by CoLab: 7 Abstract  
Imagine a world where chatbots are the first responders to crises, efficiently addressing concerns and providing crucial information. ChatGPT has demonstrated the capability of GenAI (Generative Artificial Intelligence)-powered chatbots when deployed to answer crisis-related questions in a timely and cost-efficient manner, thus replacing humans in crisis communication. However, public reactions to such messages remain unknown. To address this problem, this study recruited participants (N1 = 399, N2 = 189, and N3 = 121) and conducted two online vignette experiments and a qualitative survey. The results suggest that, when organizations fail to handle crisis-related requests, stakeholders exhibit higher satisfaction and lower responsibility attribution to chatbots providing instructing (vs. adjusting) information, as they are perceived to be more competent. However, when organizations satisfy requests, chatbots that provide adjusting (vs. instructing information) lead to higher satisfaction and lower responsibility attribution due to higher perceived competence. The second experiment involving a public emergency crisis scenario reveals that, regardless of the information provided (instructing or adjusting), stakeholders exhibit greater satisfaction and positive attitudes toward high-competence (vs. low-competence) chatbots. The qualitative study further confirms the experimental findings and offers insights to improve crisis chatbots. These findings contribute to the literature by extending situational crisis communication theory to nonhuman touchpoints and providing a deeper understanding of using chatbots in crisis communication through the lens of machine heuristics. The study also offers practical guidance for organizations to strategically integrate chatbots and human agents in crisis management based on context.
Meng H., Lu X., Xu J.
Behavioral Sciences scimago Q2 wos Q2 Open Access
2025-01-23 citations by CoLab: 1 PDF Abstract  
Artificial intelligence (AI) chatbots have been widely adopted in customer service, playing a crucial role in improving service efficiency, enhancing user experience, and elevating satisfaction levels. Current research on the impact of chatbots on consumers’ purchase decisions primarily focuses on linguistic communication features, with limited exploration into the non-verbal social cues employed by chatbots. By conducting three scenario-based experiments, this study investigates the mechanisms through which chatbot response strategies (proactive vs. reactive) and the use of emojis (yes vs. no) influence users’ purchase intention. The findings suggest that proactive response strategies by chatbots are more effective in strengthening users’ purchase intention compared to reactive strategies. Psychological distance and performance expectancy serve as significant mediators in this relationship. Additionally, the use of emojis moderates the effect of chatbot response strategies on psychological distance, while its moderating effect on performance expectancy is not significant. This study offers new insights into non-verbal social cues in chatbots, revealing the psychological mechanisms underlying the influence of chatbot response strategies on users’ purchase decisions and contributing to the limited evidence on visual symbols as moderating factors. Furthermore, the findings provide practical recommendations for businesses on optimizing chatbot interaction strategies to enhance user experience.
Singh D., Kunja S.R.
2025-01-08 citations by CoLab: 0 Abstract  
Over the past two decades, technology adoption has surged, with businesses leveraging innovations like chatbots to enhance customer experiences. Sustainability concerns have also escalated, particularly in the hospitality sector. Recognizing the importance of resource mindfulness and catering to digitally savvy, eco-conscious young consumers, there is a need to find a way to use technology to encourage pro-environmental behavior among hotel guests. Therefore, this study investigates how anthropomorphic chatbot concierges encourage willingness to partake in sustainable practices among young hotel guests. With a sample size of 346, the study confirms a positive and significant relationship between anthropomorphic chatbot concierge and the willingness of guests to partake in sustainable practices, which is mediated by hedonic motivation and positive experience. The findings contribute to the SEEK Model and value theory, offering practical insights for marketers to develop a sustainable competitive edge and cultivate an environmentally responsible brand image.
Wang D., Guo J., Zheng K.
2024-12-06 citations by CoLab: 0 Abstract  
Considering the difficulties faced by libraries in AI implementation and the research gap regarding users' social-emotional responses to AI applications in libraries, this study evaluated university students' awareness of AI-generated versus human-created feedback (a simplified Turing test) in the scenario of reference services in university libraries. A user test was carried out with 146 Chinese university students on 5 tasks from different subject areas. Results showed students' limited ability to distinguish AI from human agents, with an accuracy rate of 49.59%. Many of them mistook human agents for AI (33.7%). The study analyzed factors affecting the judgment along three dimensions: the information about the task, the AI technology, and the user. We found that the more complex the task, the more likely the students were to judge the feedback as AI-generated. When students felt the agent was knowledgeable and able to solve the task, they were more likely to judge the agent as an AI. Students also felt the AI-generated explanations were more helpful than the human-created ones. Students with lower AI literacy tended to judge the AI agent as a human. The study benefits AI implementation in libraries by confirming the ability of AI to provide expert services, raising the alarm about the possibility of libraries being replaced by AI, and calling for improvements in cultivating AI literacy.
Liu G., Liu M.W., Zhu Q.
Marketing Letters scimago Q1 wos Q3
2024-11-23 citations by CoLab: 0 Abstract  
AI conversational agents are increasingly prevalent in marketing practices. A recent effort to make AI conversational agents humanlike is the use of conversational fillers, such as uh, um, and hmm. While computer science research has shown that conversational fillers used by AI agents generally improve users' interactional experiences, their effects in marketing remain unexplored. This research examines AI conversational fillers in a sales and promotion context and shows that they trigger consumers' suspicion of ulterior motives, thereby decreasing their purchase intentions, an effect moderated by organization type (for-profit vs. nonprofit). We conducted one field experiment and four online experiments with different modalities, products, and languages to provide empirical support for the hypotheses. In so doing, this research contributes to research on chatbot natural language and AI persuasion by showing that the persuasion knowledge model can come into play in consumers' interactions with AI agents.
