Robot, did you read my mind? Modelling Human Mental States to Facilitate Transparency and Mitigate False Beliefs in Human-Robot Collaboration

ANGELOPOULOS, Georgios, HELLOU, Mehdi, VINANZI, Samuele, ROSSI, Alessandra, ROSSI, Silvia and CANGELOSI, Angelo (2025). Robot, did you read my mind? Modelling Human Mental States to Facilitate Transparency and Mitigate False Beliefs in Human-Robot Collaboration. ACM Transactions on Human-Robot Interaction. [Article]

Documents
PDF (Accepted Version): Using_Human_Mental_States_for_Facilitating_Transparency_and_Mitigating_False_Beliefs_in_Human_Robot_Collaboration.pdf
Available under License Creative Commons Attribution.
Abstract
Providing a robot with the capability to understand and effectively adapt its behaviour based on human mental states is a critical challenge in Human-Robot Interaction, since it can significantly improve the quality of interaction between humans and robots. In this work, we investigate whether considering human mental states in a robot's decision-making process improves the transparency of its behaviours and mitigates false beliefs humans may hold about the environment during collaborative scenarios. We used Bayesian inference within a Hierarchical Reinforcement Learning algorithm to incorporate human desires and beliefs into the robot's decision-making processes and to monitor its decisions. This approach, which we refer to as Hierarchical Bayesian Theory of Mind, is an upgraded version of the original Bayesian Theory of Mind, a probabilistic model capable of reasoning about a rational agent's actions. The model enabled us to track the mental states of a human observer, even when the observer held false beliefs, thereby benefiting collaboration in a multi-goal task and interaction with the robot. In addition to a qualitative evaluation, we conducted a between-subjects study (110 participants) to evaluate the robot's perceived Theory of Mind and its effects on transparency and false beliefs in different settings. Results indicate that a robot that considers human desires and beliefs increases its transparency and reduces misunderstandings. These findings show the importance of endowing robots with Theory of Mind capabilities and demonstrate how these skills can enhance robot behaviour, particularly in human-robot collaboration, paving the way for more effective robotic applications.
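The core of the approach the abstract describes is Bayesian inference over a human's mental states from observed actions. The following is a minimal sketch of that idea, not the authors' implementation: the goals, actions, action values, and the rationality parameter `beta` below are illustrative assumptions, showing only how a posterior over an agent's goals can be updated from one observed action using a Boltzmann-rational action likelihood.

```python
import math

def goal_posterior(prior, action_values, observed_action, beta=2.0):
    """Update a belief over an agent's goals after observing one action.

    prior: dict goal -> P(goal)
    action_values: dict goal -> dict action -> value of that action under the goal
    observed_action: the action the agent was seen taking
    beta: Boltzmann rationality; higher means the agent is assumed more rational
    """
    likelihoods = {}
    for goal, values in action_values.items():
        # Boltzmann-rational action likelihood: P(a | g) ∝ exp(beta * Q(g, a))
        z = sum(math.exp(beta * v) for v in values.values())
        likelihoods[goal] = math.exp(beta * values[observed_action]) / z
    # Bayes' rule: P(g | a) ∝ P(a | g) * P(g)
    unnorm = {g: prior[g] * likelihoods[g] for g in prior}
    total = sum(unnorm.values())
    return {g: p / total for g, p in unnorm.items()}

# Illustrative example: two candidate goals; moving "left" is only
# valuable under goal "A", so observing "left" shifts belief toward "A".
prior = {"A": 0.5, "B": 0.5}
values = {"A": {"left": 1.0, "right": 0.0},
          "B": {"left": 0.0, "right": 1.0}}
posterior = goal_posterior(prior, values, "left")
```

In a hierarchical variant, a similar update would run at each level of the task decomposition (subgoals within goals), which is what allows the model to track an observer's beliefs even when they are false with respect to the true state of the environment.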

