Saving the Robot or the Human? Robots Who Feel Deserve Moral Care
Abstract
Robots are becoming an integral part of society, yet our moral stance toward these non-living objects remains unclear. In two experiments, we investigated whether anthropomorphic appearance and anthropomorphic attributions modulate people's utilitarian decision making about robotic agents. In Study 1, participants were presented with moral dilemmas in which the to-be-sacrificed agent was either a human, a human-like robot, or a machine-like robot. These victims were described in either neutral or anthropomorphic priming stories. Study 2 teased apart anthropomorphic attributions of agency and affect. Results indicate that although machine-like robots were sacrificed significantly more often than humans and human-like robots, the effect of humanized priming was the same for all three agent types (Study 1), and this effect was driven mainly by the attribution of affective states rather than agency (Study 2). That is, when people attribute affective states to robots, they are less likely to sacrifice them in order to save humans.