ZOUGHALIAN, Kavyan, TARAKLI, Imene, HTET, Aung, BAMFORTH, Joshua, JIMÉNEZ-RODRIGUEZ, Alejandro, MARCHANG, Jims and DI NUOVO, Alessandro (2026). A ROS-Based Multi-Modal Architecture for Fall Detection and Response with a Social Robot. In: 2026 IEEE/SICE International Symposium on System Integration (SII). IEEE, 1677-1682. [Book Section]
Documents
robot_comp_paper (7).pdf - Accepted Version (PDF, 904kB)
Available under License Creative Commons Attribution.
Abstract
Falls are a leading cause of injury in older adults, requiring detection systems that are both sensitive and reliable. We present a multi-modal robotic framework that integrates wearable sensing, vision-based verification, and dialogue-driven assessment. A smartwatch streams inertial data, with thresholds tuned through pilot testing to maximise fall sensitivity. Vision verification is performed using a fine-tuned YOLOv11 model, while Whisper ASR and a lightweight GPT-based classifier enable simple verbal checks of user responsiveness. Our tuned thresholds outperformed published baselines (F1 = 0.857), and the vision module achieved strong detection performance (mAP@0.5 = 0.827). In integrated trials, the system reached a 90.6% success rate with a mean end-to-end response time of 43.5 seconds. These results show that combining complementary modalities enhances robustness and moves socially assistive robots toward interactive fall response in real-world care.
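The wearable stage described above streams inertial data and triggers on thresholds tuned in pilot testing. A minimal sketch of such a threshold trigger is shown below; the specific threshold values and the free-fall-then-impact heuristic are illustrative assumptions, not the paper's tuned parameters.

```python
import math

# Hypothetical thresholds: the paper tunes its values via pilot testing,
# but those values are not given in the abstract, so the numbers below
# are illustrative only.
IMPACT_G = 2.5      # impact spike threshold (in g)
FREEFALL_G = 0.6    # near-free-fall dip threshold (in g)

def magnitude(ax, ay, az):
    """Acceleration magnitude in g from a 3-axis sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples):
    """Flag a fall when a near-free-fall dip is followed by an impact spike.

    `samples` is a sequence of (ax, ay, az) tuples in g, as streamed from
    a smartwatch IMU. Returns True on the first dip-then-spike pattern,
    else False.
    """
    seen_freefall = False
    for ax, ay, az in samples:
        m = magnitude(ax, ay, az)
        if m < FREEFALL_G:
            seen_freefall = True
        elif seen_freefall and m > IMPACT_G:
            return True
    return False
```

In the full architecture, a positive trigger from this stage would hand off to the vision and dialogue stages for verification rather than raising an alarm directly.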