HELLOU, Mehdi, VINANZI, Samuele and CANGELOSI, Angelo (2023). Bayesian Theory of Mind for False Belief Understanding in Human-Robot Interaction. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 1893-1900. [Book Section]
Documents
PDF (4MB): RO_MAN_2023_HRI_Experiment_in_false_beliefs_task.pdf - Accepted Version
Available under License Creative Commons Attribution.
Abstract
In order to achieve widespread adoption of social robots in the near future, we need to design intelligent systems that are able to autonomously understand our beliefs and preferences. This will lay the foundation for a new generation of robots able to navigate the complexities of human societies. To reach this goal, we look into Theory of Mind (ToM): the cognitive ability to understand other agents’ mental states. In this paper, we rely on a probabilistic ToM model to detect when a human holds false beliefs, with the purpose of driving the decision-making process of a collaborative robot. In particular, we recreate an established psychology experiment involving the search for a toy that can be secretly displaced by a malicious individual. The results obtained in simulated experiments show that the agent is able to predict human mental states and detect when false beliefs have arisen. We then explore this set-up in a real-world human-robot interaction to assess the feasibility of such an experiment with a humanoid social robot.
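The abstract does not give implementation details, but the core idea it describes, tracking a human's possibly false belief about a toy's location, can be illustrated with a minimal sketch. The Python code below is an assumed simplification, not the authors' model: it keeps a probability distribution over where the human believes the toy to be, updates it with Bayes' rule only when the human actually observes the toy, and flags a false belief when the human's most probable location differs from the ground truth known to the robot. The class name FalseBeliefTracker, the location labels, and the observation-noise parameter are all hypothetical.

"""Minimal illustrative sketch of Bayesian false-belief tracking (assumed,
not the authors' model). A toy sits in one of several locations, a
'malicious' agent may secretly move it, and the robot maintains a
distribution over where the HUMAN believes the toy is."""

import numpy as np

LOCATIONS = ["box_A", "box_B", "box_C"]  # hypothetical location labels


class FalseBeliefTracker:
    def __init__(self, n_locations: int, obs_noise: float = 0.1):
        # Uniform prior over where the human believes the toy to be.
        self.belief = np.full(n_locations, 1.0 / n_locations)
        self.obs_noise = obs_noise

    def human_observes(self, location_idx: int) -> None:
        """Bayesian update of the human-belief distribution after the human
        sees the toy at location_idx (observation assumed slightly noisy)."""
        n = len(self.belief)
        likelihood = np.full(n, self.obs_noise / (n - 1))
        likelihood[location_idx] = 1.0 - self.obs_noise
        posterior = likelihood * self.belief
        self.belief = posterior / posterior.sum()

    def detect_false_belief(self, true_location_idx: int) -> bool:
        """True if the human's most probable believed location differs from
        the ground-truth location known to the robot."""
        return int(np.argmax(self.belief)) != true_location_idx


if __name__ == "__main__":
    tracker = FalseBeliefTracker(len(LOCATIONS))
    tracker.human_observes(0)   # human sees the toy placed in box_A
    true_location = 2           # toy is then secretly moved to box_C
    print(tracker.detect_false_belief(true_location))  # -> True

Running the script prints True once the toy has been displaced without the human seeing it, mirroring the false-belief condition described in the abstract; if the human were shown the new location, a further human_observes call would realign the belief and the flag would clear.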