Planning for Social Interaction in a Robot Bartender Domain
Top 10% of 2013 papers (by citations)
Abstract
A robot coexisting with humans must not only be able to perform physical tasks, but must also be able to interact with humans in a socially appropriate manner. In many social settings, this involves the use of social signals like gaze, facial expression, and language. In this paper, we describe an application of planning to task-based social interaction using a robot that must interact with multiple human agents in a simple bartending domain. We show how social states are inferred from low-level sensors, using vision and speech as input modalities, and how we use the knowledge-level PKS planner to construct plans with task, dialogue, and social actions, as an alternative to current mainstream methods of interaction management. The resulting system has been evaluated in a real-world study with human subjects.
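To make the planning component of the abstract concrete, the following is a minimal, illustrative sketch of classical forward-search planning over a toy bartender domain. The predicates (`seeking`, `greeted`, `ordered`, `served`) and action names are invented for illustration; the actual system uses the knowledge-level PKS planner, which additionally reasons about the agent's knowledge and sensing, not the STRIPS-style search shown here.

```python
# Toy STRIPS-style bartender domain: each action is
# (name, preconditions, add effects, delete effects).
# Hypothetical predicates/actions for illustration only; the paper's
# system uses PKS, a knowledge-level planner.
ACTIONS = [
    ("greet(c1)",     {"seeking(c1)"}, {"greeted(c1)"}, set()),
    ("ask_drink(c1)", {"greeted(c1)"}, {"ordered(c1)"}, set()),
    ("serve(c1)",     {"ordered(c1)"}, {"served(c1)"},  {"seeking(c1)"}),
]

def plan(state, goal):
    """Breadth-first forward search; adequate for this toy domain."""
    frontier = [(frozenset(state), [])]
    seen = {frozenset(state)}
    while frontier:
        s, steps = frontier.pop(0)
        if goal <= s:
            return steps
        for name, pre, add, dele in ACTIONS:
            if pre <= s:
                ns = frozenset((s - dele) | add)
                if ns not in seen:
                    seen.add(ns)
                    frontier.append((ns, steps + [name]))
    return None

print(plan({"seeking(c1)"}, {"served(c1)"}))
# -> ['greet(c1)', 'ask_drink(c1)', 'serve(c1)']
```

The point of the sketch is the interleaving of social actions (`greet`, `ask_drink`) with the physical task action (`serve`) in a single plan, which mirrors the paper's claim that task, dialogue, and social actions can be handled by one planner rather than a separate interaction manager.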