Multimodal Emotion Recognition Software to Facilitate Human-Robot Interaction

BAMFORTH, Joshua (2024). Multimodal Emotion Recognition Software to Facilitate Human-Robot Interaction. Doctoral, Sheffield Hallam University. [Thesis]

Documents
PDF (edited for copyright reasons): Bamforth_2025_MPhil_MultimodalEmotionRecognition_Edited.pdf - Accepted Version, 16MB. Available under a Creative Commons Attribution Non-commercial No Derivatives licence.

PDF (VoR not available): Bamforth_2025_MPhil_MultimodalEmotionRecognition(VoR).pdf - Accepted Version, 16MB. Restricted to repository staff only.

Archive (ZIP, Appendix A files): Bamforth_2025_MPhil_MultimodalEmotionRecognition_Supplementary.zip - Supplemental Material, 2GB. Available under a Creative Commons Attribution Non-commercial No Derivatives licence.
Abstract
Emotion recognition is a key enabler of effective human-robot interaction (HRI), allowing robots to respond appropriately to users’ emotional states. However, many current approaches rely on a single modality or on multimodal fusion techniques that are computationally intensive and unsuitable for widely available, resource-constrained robotic platforms. This presents a significant barrier to deploying emotionally aware robots in real-world settings such as healthcare, education, and assistive technology. This thesis addresses the challenge by evaluating two independent, low-resource emotion recognition approaches: facial emotion recognition and text-based sentiment analysis. The goal is to assess their individual effectiveness, feasibility, and potential to support emotionally intelligent behaviour without relying on full multimodal integration. A literature review contextualises the work within existing research on visual, auditory, and gesture-based emotion recognition. Experimental evaluations explore the accuracy and efficiency of both modalities in constrained environments using a robotic platform. The results demonstrate that both facial and text-based emotion recognition methods can operate effectively in isolation, offering practical solutions for real-time deployment on low-power systems. These findings suggest that strategic use of unimodal methods can enhance a robot's emotional responsiveness while avoiding the complexity of multimodal systems. The thesis concludes by identifying future research directions, including real-world testing, improved on-device processing, and lightweight integration strategies.
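
The abstract does not specify which models the thesis uses, so the following is only an illustrative sketch of what a low-resource, unimodal text-based sentiment approach can look like in practice: NLTK's rule-based VADER analyser needs no GPU and runs comfortably on constrained hardware. The threshold values and the coarse label mapping below are assumptions for illustration, not the method described in the thesis.

    # Illustrative sketch only (not the thesis's implementation): a lightweight,
    # rule-based text sentiment analyser suitable for low-power robotic platforms.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # small lexicon, no GPU needed

    analyser = SentimentIntensityAnalyzer()

    def classify_utterance(text: str) -> str:
        # Map VADER's compound score to a coarse emotional valence.
        # The +/-0.05 thresholds are the conventional VADER defaults,
        # used here purely for illustration.
        compound = analyser.polarity_scores(text)["compound"]
        if compound >= 0.05:
            return "positive"
        if compound <= -0.05:
            return "negative"
        return "neutral"

    print(classify_utterance("I'm really glad you remembered my appointment!"))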