KANG, Chen, ALITAI, Madina, WANG, Yiting, CAI, Xiaochi, MA, Ruidong, CANGELOSI, Angelo and SHANGGUAN, Zhegong (2026). Fluid-Xpress: Emotion-Aware Dual-Loop Framework for Empathic Facial Reaction in HRI. In: HRI Companion '26: Companion Proceedings of the 21st ACM/IEEE International Conference on Human-Robot Interaction. ACM, 425-429. [Book Section]
Ma-Fluid-XpressEmotion-Aware(VoR).pdf - Published Version
Available under License Creative Commons Attribution.
Abstract
Large Language Model (LLM)-driven social robots face two key challenges: inference latency creates unnatural silences during which an expressively static robot appears disengaged, and LLMs rarely account for the user's facial affect as a continuously evolving process. We present Fluid-Xpress, an emotion-aware dual-loop framework for empathic facial reactions in human-robot interaction. The framework features: (1) a Macro-Micro dual-loop architecture that decouples real-time non-verbal feedback from LLM verbal processing, enabling continuous affective backchanneling during inference latency; (2) a Temporal Affective Engine using metrics such as MSSD to capture emotional dynamics and detect complex states like cognitive overload and masked emotions; and (3) a Risk-Adaptive Strategy that prioritizes immediate intervention during high-arousal states. A pilot study (N=8) showed that Fluid-Xpress significantly improved arousal stability (p < .05), mood improvement (p < .05), expression awareness (p < .01), and perceived empathy (p < .05) compared to a baseline, providing preliminary support for emotion-aware non-verbal feedback in embodied social robots.
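The abstract's Temporal Affective Engine relies on metrics such as MSSD (mean squared successive difference) to quantify how volatile an affect signal is over time. As a rough illustration only, a minimal sketch of MSSD over an arousal time series might look like the following; the function name, the example values, and the idea of applying it to an arousal trace are illustrative assumptions, not details taken from the paper.

```python
def mssd(series):
    """Mean squared successive difference of a 1-D numeric sequence.

    Higher values indicate a more volatile (rapidly changing) signal;
    a value near zero indicates a stable signal.
    """
    if len(series) < 2:
        return 0.0
    diffs = [(b - a) ** 2 for a, b in zip(series, series[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical arousal traces sampled over a short interaction window.
volatile = [0.1, 0.7, 0.2, 0.8, 0.25]   # large jumps between samples
stable = [0.30, 0.32, 0.31, 0.33, 0.32]  # small jumps between samples

print(mssd(volatile))  # noticeably larger than the stable trace
print(mssd(stable))
```

A dynamics-based metric like this complements a per-frame emotion classifier: two users with the same average arousal can differ sharply in MSSD, which is the kind of temporal distinction the abstract credits with detecting states such as cognitive overload.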