Robot, did you read my mind? Modelling Human Mental States to Facilitate Transparency and Mitigate False Beliefs in Human-Robot Collaboration

ANGELOPOULOS, Georgios, HELLOU, Mehdi, VINANZI, Samuele, ROSSI, Alessandra, ROSSI, Silvia and CANGELOSI, Angelo (2025). Robot, did you read my mind? Modelling Human Mental States to Facilitate Transparency and Mitigate False Beliefs in Human-Robot Collaboration. ACM Transactions on Human-Robot Interaction, 15 (1): 1, 1-29. [Article]

Documents
Vinanzi-RobotDidYou(VoR).pdf - Published Version
Available under License Creative Commons Attribution.

Abstract
Providing a robot with the capability to understand and effectively adapt its behaviour based on human mental states is a critical challenge in Human-Robot Interaction, since it can significantly improve the quality of interaction between humans and robots. In this work, we investigate whether considering human mental states in the decision-making process of a robot improves the transparency of its behaviours and mitigates potential human false beliefs about the environment during collaborative scenarios. We used Bayesian inference within a Hierarchical Reinforcement Learning algorithm to include human desires and beliefs in the decision-making processes of the robot, and to monitor the robot's decisions. This approach, which we refer to as Hierarchical Bayesian Theory of Mind, is an upgraded version of the initial Bayesian Theory of Mind, a probabilistic model capable of reasoning about a rational agent's actions. The model enabled us to track the mental states of a human observer, even when the observer held false beliefs, thereby benefiting the collaboration in a multi-goal task and the interaction with the robot. In addition to a qualitative evaluation, we conducted a between-subjects study (110 participants) to evaluate the robot's perceived Theory of Mind and its effects on transparency and false beliefs in different settings. Results indicate that a robot which considers human desires and beliefs increases its transparency and reduces misunderstandings. These findings show the importance of endowing robots with Theory of Mind capabilities and demonstrate how these skills can enhance their behaviours, particularly in human-robot collaboration, paving the way for more effective robotic applications.
Plain Language Summary

What is it about?

The study explored the integration of human mental states into robotic decision-making to improve transparency and reduce false beliefs in Human-Robot Interaction (HRI). It employed a Hierarchical Reinforcement Learning algorithm with Bayesian inference to incorporate human desires and beliefs into the robot's decision-making process. This methodology, termed Hierarchical Bayesian Theory of Mind, builds upon the Bayesian Theory of Mind to allow robots to track human observers' mental states, even when false beliefs are present. The research included a qualitative evaluation and a between-subjects study involving 110 participants to assess the robot's perceived Theory of Mind and its impact on interaction transparency and false beliefs. Findings indicated that considering human mental states improves robot transparency and mitigates human false beliefs in collaborative scenarios. The study emphasized the importance of understanding users’ mental states to enhance cooperation between humans and robots.
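The Bayesian reasoning described above can be illustrated with a toy goal-inference loop: the robot maintains a posterior over which goal the human desires and updates it after each observed action, via posterior ∝ likelihood × prior. This is a minimal sketch of the general idea only, not the paper's implementation; the goal names, likelihood values, and the `update_belief` helper are all illustrative assumptions.

```python
# Toy Bayesian goal inference (illustrative sketch, not the paper's code):
# the robot tracks a posterior over the human's desired goal and updates
# it after each observed human action.

GOALS = ["red_block", "blue_block", "green_block"]  # hypothetical goals

def update_belief(prior, likelihoods):
    """One Bayesian update: posterior(g) ∝ P(action | g) * prior(g)."""
    unnormalised = {g: likelihoods[g] * prior[g] for g in prior}
    total = sum(unnormalised.values())
    return {g: p / total for g, p in unnormalised.items()}

# Start from a uniform prior over the human's desired goal.
belief = {g: 1.0 / len(GOALS) for g in GOALS}

# Each observed action is summarised by assumed likelihoods P(action | goal),
# e.g. a reach that mostly points toward the red block.
observations = [
    {"red_block": 0.7, "blue_block": 0.2, "green_block": 0.1},
    {"red_block": 0.8, "blue_block": 0.1, "green_block": 0.1},
]

for obs in observations:
    belief = update_belief(belief, obs)

print(max(belief, key=belief.get))  # → red_block
```

After two actions consistent with reaching for the red block, the posterior concentrates on that goal; a false-belief setting would correspond to the human's likelihood model disagreeing with the true state of the environment, which the robot can detect once its own observations diverge from the inferred beliefs.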

Why is it important?

This study is important as it addresses a critical challenge in Human-Robot Interaction (HRI) by integrating human mental states into robotic decision-making, thereby enhancing interaction quality. By employing a Hierarchical Bayesian Theory of Mind approach, the research advances our understanding of how robots can adapt their behavior based on human beliefs and desires. This integration is crucial for improving transparency and mitigating false beliefs in collaborative scenarios, ultimately leading to more effective and natural interactions between humans and robots. The findings have significant implications for robotics applications, including assistive technology, autonomous vehicles, and social robotics, where understanding and predicting human behavior is essential for seamless collaboration.

Key Takeaways:

1. Improved Interaction Quality: The study shows that considering human mental states in robotic decision-making enhances the transparency of robot behaviors, leading to better interaction quality and reducing the potential for misunderstandings.

2. Effective Collaboration: By utilizing a Hierarchical Bayesian Theory of Mind, robots can track human mental states even in false-belief contexts, thereby improving collaboration in multi-goal tasks and enhancing the overall user experience.

3. Applicability in Complex Environments: The research highlights the benefits of integrating Theory of Mind in robots, enabling them to predict human behaviors more accurately, which is particularly advantageous in complex environments requiring adaptive and intuitive robotic responses.
