Edition Sales Excellence, pages 533-568

Dienstleistungsroboter im Handel – Einsatzmöglichkeiten und verantwortungsbewusster Einsatz [Service Robots in Retail – Deployment Opportunities and Responsible Use]

Ruth Stock-Homburg 1
Merlind Knof 1
Jerome Kirchhoff 1
Judith Simone Heinisch 2
Andreas Ebert 3
Philip Busch 1
Klaus David 2
Janine Wendt 1
Indra Spiecker gen. Döhmann 3
Oskar von Stryk 1
Martin Hannig 1
Publication type: Book Chapter
Publication date: 2023-03-10
ISSN: 2662-9208, 2662-9216
Abstract
Anthropomorphic service robots are becoming increasingly popular. The more capable they become and the more deeply they are integrated into our everyday lives, the more important the responsible design of human-robot interaction (HRI) becomes. Human dignity, transparency, privacy, data protection, and compliance are of central importance for the responsible use of anthropomorphic service robots. This contribution describes the tasks of service robots in retail and provides an interdisciplinary overview of the current state of research on the responsible design of HRI, with particular attention to the four disciplines of ethics, law, psychology, and engineering. In addition, an interdisciplinary framework for the design of responsible HRI with anthropomorphic service robots is developed and presented. Finally, scientific implications are derived and further research directions concerning the responsible design of HRI with anthropomorphic service robots are formulated.
Li S., Deng W.
2022-07-01 citations by CoLab: 814 Abstract  
With the transition of facial expression recognition (FER) from laboratory-controlled to challenging in-the-wild conditions and the recent success of deep learning techniques in various fields, deep neural networks have increasingly been leveraged to learn discriminative representations for automatic FER. Recent deep FER systems generally focus on two important issues: overfitting caused by a lack of sufficient training data and expression-unrelated variations, such as illumination, head pose, and identity bias. In this survey, we provide a comprehensive review of deep FER, including datasets and algorithms that provide insights into these intrinsic problems. First, we introduce the available datasets that are widely used in the literature and provide accepted data selection and evaluation principles for these datasets. We then describe the standard pipeline of a deep FER system with the related background knowledge and suggestions for applicable implementations for each stage. For the state-of-the-art in deep FER, we introduce existing novel deep neural networks and related training strategies that are designed for FER based on both static images and dynamic image sequences and discuss their advantages and limitations. Competitive performances and experimental comparisons on widely used benchmarks are also summarized. We then extend our survey to additional related issues and application scenarios. Finally, we review the remaining challenges and corresponding opportunities in this field as well as future directions for the design of robust deep FER systems.
Rawal N., Stock-Homburg R.M.
2022-06-24 citations by CoLab: 38 Abstract  
Facial expressions are an ideal means of communicating one's emotions or intentions to others. This overview focuses on human facial expression recognition as well as robotic facial expression generation. For human facial expression recognition, both recognition on predefined datasets and recognition in real time are covered. For robotic facial expression generation, both hand-coded and automated methods are covered, i.e., a robot's facial expressions are generated by moving its features (eyes, mouth) either by hand-coding or automatically using machine learning techniques. There are already plenty of studies that achieve high accuracy for emotion expression recognition on predefined datasets, but the accuracy for facial expression recognition in real time is comparatively lower. In the case of expression generation in robots, while most robots are capable of making basic facial expressions, there are not many studies that enable them to do so automatically. In this overview, state-of-the-art research in facial emotion expressions during human–robot interaction is discussed, leading to several possible directions for future research.
Kansizoglou I., Bampis L., Gasteratos A.
2022-04-01 citations by CoLab: 76 Abstract  
The advancement of Human-Robot Interaction (HRI) drives research into the development of advanced emotion identification architectures that fathom audio-visual (A-V) modalities of human emotion. State-of-the-art methods in multi-modal emotion recognition mainly focus on the classification of complete video sequences, leading to systems with no online potentialities. Such techniques are capable of predicting emotions only when the videos are concluded, thus restricting their applicability in practical scenarios. This article provides a novel paradigm for online emotion classification, which exploits both audio and visual modalities and produces a responsive prediction when the system is confident enough. We propose two deep Convolutional Neural Network (CNN) models for extracting emotion features, one for each modality, and a Deep Neural Network (DNN) for their fusion. In order to conceive the temporal quality of human emotion in interactive scenarios, we train in cascade a Long Short-Term Memory (LSTM) layer and a Reinforcement Learning (RL) agent –which monitors the speaker– thus stopping feature extraction and making the final prediction. The comparison of our results on two publicly available A-V emotional datasets viz., RML and BAUM-1s, against other state-of-the-art models, demonstrates the beneficial capabilities of our work.
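The cascade this abstract describes (per-modality feature extractors, a fusion network, and a monitor that stops processing once confident) can be illustrated with a deliberately simplified sketch. Everything below is a hypothetical stand-in, not the authors' implementation: the fixed random projections replace the trained CNNs and fusion DNN, and a plain running-average confidence check replaces the trained LSTM/RL monitor.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fixed random projections as stand-ins for the trained audio/visual CNNs
# and the fusion DNN (illustrative only, not trained models).
W_AUDIO = rng.standard_normal((16, 8))
W_VISUAL = rng.standard_normal((32, 8))
W_FUSION = rng.standard_normal((16, 6))  # 6 emotion classes

def audio_features(frame):
    return np.tanh(frame @ W_AUDIO)      # audio "CNN" embedding

def visual_features(frame):
    return np.tanh(frame @ W_VISUAL)     # visual "CNN" embedding

def fuse_and_classify(a_emb, v_emb):
    fused = np.concatenate([a_emb, v_emb])   # late fusion of both modalities
    logits = fused @ W_FUSION
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

def online_predict(stream, confidence=0.9):
    """Return (class, step): emit a prediction as soon as the running
    class estimate is confident enough, mimicking the monitor that
    halts feature extraction early."""
    running = np.zeros(6)
    for t, (a, v) in enumerate(stream, start=1):
        running += fuse_and_classify(audio_features(a), visual_features(v))
        probs = running / t
        if probs.max() >= confidence:
            return int(probs.argmax()), t    # responsive early prediction
    return int(probs.argmax()), t            # fall back: end of sequence
```

On a stream of (audio frame, visual frame) pairs, the loop emits a label early when one class dominates; otherwise it waits until the sequence ends, which is the online behavior the article contrasts with offline, whole-video classification.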
Stock-Homburg R.M., Kirchhoff J., Heinisch J.S., Ebert A., Busch P., Rawal N., David K., Wendt J., Spiecker Gen. Döhmann I., Von Stryk O., Hannig M., Knof M.
2022-01-11 citations by CoLab: 6
Stock-Homburg R.
2021-06-04 citations by CoLab: 85 Abstract  
Knowledge production within the interdisciplinary field of human–robot interaction (HRI) with social robots has accelerated, despite the continued fragmentation of the research domain. Together, these features make it hard to remain at the forefront of research or assess the collective evidence pertaining to specific areas, such as the role of emotions in HRI. This systematic review of state-of-the-art research into humans’ recognition and responses to artificial emotions of social robots during HRI encompasses the years 2000–2020. In accordance with a stimulus–organism–response framework, the review advances robotic psychology by revealing current knowledge about (1) the generation of artificial robotic emotions (stimulus), (2) human recognition of robotic artificial emotions (organism), and (3) human responses to robotic emotions (response), as well as (4) other contingencies that affect emotions as moderators.
Du S., Xie C.
Journal of Business Research
2021-05-01 citations by CoLab: 198 Abstract  
Products and services empowered by artificial intelligence (AI) are becoming widespread in today’s marketplace. However, consumers have mixed feelings about AI technologies due to the numerous ethical challenges associated with the development and deployment of AI. Drawing upon prior research on the moral significance of technology and the emerging literature on AI, we delineate three key dimensions of AI-enabled products (i.e., multi-functionality, interactivity, and AI intelligence stage) that have relevance for ethical implications and adopt a socio-technical approach to provide a multi-layered ethical analysis of AI products at the product, consumer, and society levels. Some key ethical issues identified in the paper include AI biases, ethical design, consumer privacy, cybersecurity, individual autonomy and wellbeing, and unemployment. Companies need to engage in corporate social responsibility (CSR) to shape the future of ethical AI; drawing upon stakeholder theory and institutional theory, we develop a conceptual framework on AI-related CSR, highlighting the product-, company-, and institutional-environment-specific factors that influence firms’ socially responsible actions in the domain of AI and discussing the subsequent outcomes for firms, consumers, and society. We include a section on the future research agenda for AI ethics and firm CSR in this important domain.
Erdelyi O.J., Erdelyi G.
2021-04-06 citations by CoLab: 4 Abstract  
Confidence in the regulatory environment is crucial to enable responsible AI innovation and foster the social acceptance of these powerful new technologies. One notable source of uncertainty, however, is that the existing legal liability system is unable to assign responsibility where potentially harmful conduct and/or the harm itself are unforeseeable, yet some instantiations of AI and/or the harms they may trigger are not foreseeable in the legal sense. The unpredictability of how courts would handle such cases makes the risks involved in the investment and use of AI difficult to calculate with confidence, creating an environment that is not conducive to innovation and may deprive society of some benefits AI could provide. To tackle this problem, we propose to draw insights from financial regulatory best practices and establish a system of AI guarantee schemes. We envisage the system to form part of the broader market-structuring regulatory frameworks, with the primary function of providing a readily available, clear, and transparent funding mechanism to compensate claims that are either extremely hard or impossible to realize via conventional litigation. We propose that it be at least partially industry-funded. Funding arrangements should depend on whether the scheme would pursue other potential policy goals aimed more broadly at controlling the trajectory of AI innovation to increase economic and social welfare worldwide. Because of the global relevance of the issue, rather than focusing on any particular legal system, we trace relevant developments across multiple jurisdictions and engage in a high-level, comparative conceptual debate around the suitability of the foreseeability concept to limit legal liability.
The paper also refrains from confronting the intricacies of the case law of specific jurisdictions for now and—recognizing the importance of this task—leaves this to further research in support of the legal system’s incremental adaptation to the novel challenges of present and future AI technologies. This article appears in the special track on AI and Society.
Holland J., Kingston L., McCarthy C., Armstrong E., O’Dwyer P., Merz F., McConnell M.
Robotics (Open Access)
2021-03-11 citations by CoLab: 152 Abstract  
Traditionally, advances in robotic technology have been in the manufacturing industry due to the need for collaborative robots. However, this is not the case in the service sectors, especially in the healthcare sector. The lack of emphasis put on the healthcare sector has led to new opportunities in developing service robots that aid patients with illnesses, cognition challenges and disabilities. Furthermore, the COVID-19 pandemic has acted as a catalyst for the development of service robots in the healthcare sector in an attempt to overcome the difficulties and hardships caused by this virus. The use of service robots is advantageous as they not only prevent the spread of infection and reduce human error but also allow front-line staff to reduce direct contact, focusing their attention on higher priority tasks and creating separation from direct exposure to infection. This paper presents a review of various types of robotic technologies and their uses in the healthcare sector. The reviewed technologies are a collaboration between academia and the healthcare industry, demonstrating the research and testing needed in the creation of service robots before they can be deployed in real-world applications and use cases. We focus on how robots can provide benefits to patients, healthcare workers, customers, and organisations during the COVID-19 pandemic. Furthermore, we investigate the emerging focal issues of effective cleaning, logistics of patients and supplies, reduction of human errors, and remote monitoring of patients to increase system capacity, efficiency, and resource equality in hospitals and related healthcare environments.
Nesset B., Robb D.A., Lopes J., Hastie H.
2021-03-08 citations by CoLab: 34 Abstract  
Robots are rapidly gaining acceptance, as the general public, industry and researchers are starting to understand their utility, for example for delivery to homes or in hospitals. However, it is key to understand how to instil the appropriate amount of trust in the user. One aspect of a trustworthy system is its ability to explain actions and be transparent, especially in the face of potentially serious errors. Here, we study the various aspects of transparency of interaction and its effect in a scenario where a robot is performing triage when a suspected Covid-19 patient arrives at a hospital. Our findings consolidate prior work showing a main effect of robot errors on trust, but also show that this effect depends on the level of transparency. Furthermore, our findings indicate that high interaction transparency leads to participants making better informed decisions on their health based on their interaction. Such findings on transparency could inform interaction design and thus lead to greater adoption of robots in key areas, such as health and well-being.
Zech H.
ERA Forum
2021-01-07 citations by CoLab: 33 Abstract  
Liability for AI is the subject of a lively debate. Whether new liability rules should be introduced or not and how these rules should be designed hinges on the function of liability rules. Mainly, they create incentives for risk control, varying with their requirements – especially negligence versus strict liability. In order to do so, they have to take into account who is actually able to exercise control. In scenarios where a clear allocation of risk control is no longer possible, social insurance might step in. This article discusses public policy considerations concerning liability for artificial intelligence (AI). It first outlines the major risks associated with current developments in information technology (IT) (1.). Second, the implications for liability law are discussed. Liability rules are conceptualized as an instrument for risk control (2.). Negligence liability and strict liability serve different purposes, making strict liability the rule of choice for novel risks (3.). The key question is, however, who should be held liable (4.). Liability should follow risk control. In future scenarios where individual risk attribution is no longer feasible, social insurance might be an alternative (5.). Finally, the innovation function of liability rules is stressed, affirming that appropriate liability rules serve as a stimulus for innovation, not as an impediment (6.).
Fusté-Forné F., Jamal T.
2021-01-02 citations by CoLab: 49 Abstract  
Research on the relationship between automation services and tourism has been rapidly growing in recent years and has led to a new service landscape where the role of robots is gaining both practical and research attention. This paper builds on previous reviews and undertakes a comprehensive analysis of the research literature to discuss opportunities and challenges presented by the use of service robots in hospitality and tourism. Management and ethical issues are identified and it is noted that practical and ethical issues (roboethics) continue to lack attention. Going forward, new directions are urgently needed to inform future research and practice. Legal and ethical issues must be proactively addressed, and new research paradigms developed to explore the posthumanist and transhumanist transitions that await. In addition, closer attention to the potential of “co-creation” for addressing innovations in enhanced service experiences in hospitality and tourism is merited. Among others, responsibility, inclusiveness and collaborative human-robot design and implementation emerge as important principles to guide future research and practice in this area.
Vasylkovskyi V., Guerreiro S., Sequeira J.S.
2020-11-01 citations by CoLab: 14 Abstract  
Social robots can cause privacy concerns for humans, both because they collect personal data while moving around and because humans can perceive them as social actors. Nowadays, robots can record and transmit (private) data in a human-readable format, which is identified as a problem in human-robot interaction (HRI). Privacy is an individual's right to have control over their data. With this problem in mind, we present BlockRobot - a new privacy-by-design conceptual model for access control of personal data based on Blockchain (BC) technology. With this solution, we ensure data privacy rights by giving each user control over their data in a decentralized manner without the need for an intermediate data controller. This BC solution grants confidentiality, integrity, and non-repudiation of data transparently and fairly to every user. As proof of concept, we demonstrate the initial implementation of a Decentralized Application (DAP) based on EOS Blockchain integrated with robotic events that contain private data. This paper details the experiments conducted with a Social Robot (SR) in a non-lab environment, and the collected data is analyzed using process mining techniques.
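The privacy-by-design idea sketched in this abstract (users retain verifiable, tamper-evident control over access to their data, without a central controller) can be illustrated with a toy hash-chained consent log. This is a hypothetical stand-in in plain Python, not the paper's EOS/BlockRobot implementation: `ConsentLedger` and its methods are invented names, and a real deployment would replace the local list with a decentralized ledger.

```python
import hashlib
import json

class ConsentLedger:
    """Append-only, hash-chained log of data-access decisions.

    Each entry commits to its predecessor's hash, so any later tampering
    with a past consent record breaks the chain and is detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def _digest(self, payload, prev_hash):
        # Canonical JSON keeps the hash independent of dict ordering.
        blob = json.dumps({"payload": payload, "prev": prev_hash},
                          sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def record(self, user, data_event, granted):
        # Link the new entry to the hash of the previous one.
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = {"user": user, "event": data_event, "granted": granted}
        self.entries.append({"payload": payload, "prev": prev,
                             "hash": self._digest(payload, prev)})

    def verify(self):
        # Recompute the chain; any edited entry invalidates everything after it.
        prev = self.GENESIS
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            if entry["hash"] != self._digest(entry["payload"], prev):
                return False
            prev = entry["hash"]
        return True
```

The chaining supplies only integrity and non-repudiation; the decentralized storage and access enforcement that the paper attributes to the blockchain layer are deliberately not modeled here.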
Rajabiyazdi F., Jamieson G.A.
2020-10-11 citations by CoLab: 18 Abstract  
Humans often have difficulty accomplishing tasks when working with automation whose inner workings are concealed. Researchers suggest that allowing humans to see into the inner workings of automation will lead to better understanding of, trust in, reliance on, and joint task completion with the automation, as well as better situation awareness. We identified and compared four transparency models that assist researchers in designing and conducting empirical studies by guiding them on what, how, and when information on or about automation should be disclosed. The results of this review will assist researchers with understanding, identifying, and employing suitable transparency models for their applications.
