BAMFORTH, Joshua (2024). Multimodal Emotion Recognition Software to Facilitate Human-Robot Interaction. Doctoral, Sheffield Hallam University. [Thesis]
Documents
PDF (Edited for copyright reasons)
Bamforth_2025_MPhil_MultimodalEmotionRecognition_Edited.pdf - Accepted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.
Download (16MB)
PDF (VoR not available)
Bamforth_2025_MPhil_MultimodalEmotionRecognition(VoR).pdf - Accepted Version
Restricted to Repository staff only
Download (16MB)
Archive (ZIP) (Appendix A files)
Bamforth_2025_MPhil_MultimodalEmotionRecognition_Supplementary.zip - Supplemental Material
Available under License Creative Commons Attribution Non-commercial No Derivatives.
Download (2GB)
Abstract
Emotion recognition is a key enabler of effective human-robot interaction (HRI), allowing robots to respond appropriately to users’ emotional states. However, many current approaches rely on a single modality or on multimodal fusion techniques that are computationally intensive and unsuitable for widely available, resource-constrained robotic platforms. This presents a significant barrier to deploying emotionally aware robots in real-world settings such as healthcare, education, and assistive technology.
This thesis addresses the challenge by evaluating two independent, low-resource emotion recognition approaches: facial emotion recognition and text-based sentiment analysis. The goal is to assess their individual effectiveness, feasibility, and potential to support emotionally intelligent behaviour without relying on full multimodal integration.
A literature review contextualises the work within existing research on visual, auditory, and gesture-based emotion recognition. Experimental evaluations explore the accuracy and efficiency of both modalities in constrained environments using a robotic platform.
Results demonstrate that both facial and text-based emotion recognition methods can operate effectively in isolation, offering practical solutions for real-time deployment on low-power systems. These findings suggest that strategic use of unimodal methods can enhance robot emotional responsiveness while avoiding the complexity of multimodal systems. The thesis concludes by identifying future research directions, including real-world testing, improved on-device processing, and lightweight integration strategies.
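
For context, the following is a minimal sketch of what the two unimodal, on-device pipelines described in the abstract might look like in practice. The library and model choices here (OpenCV Haar cascades for cheap face detection, a small pretrained Hugging Face sentiment model) are illustrative assumptions, not the implementation evaluated in the thesis.

    # Sketch of two independent low-resource pipelines: facial and text-based.
    # NOTE: library/model choices are assumptions for illustration only.
    import cv2
    from transformers import pipeline

    # Facial pipeline: a Haar cascade keeps face detection cheap enough for
    # low-power hardware; the per-face emotion classifier is left as a stub.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def detect_faces(frame):
        """Return bounding boxes (x, y, w, h) of faces in a BGR frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Text pipeline: a small distilled sentiment model, run fully on-device.
    sentiment = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    print(sentiment("I really enjoyed talking with the robot today."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

Because each pipeline runs independently, a robot can use whichever modality is available at a given moment, which is the kind of strategic unimodal use the abstract argues for.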