Context-Aware Complex Human Activity Recognition Using Hybrid Deep Learning Models

OMOLAJA, Adebola, OTEBOLAKU, Abayomi and ALFOUDI, Ali (2022). Context-Aware Complex Human Activity Recognition Using Hybrid Deep Learning Models. Applied Sciences, 12 (18), 9305.

PDF: applsci-12-09305-v2.pdf - Published Version (3MB). Creative Commons Attribution.
PDF: applsci-1866365-supplementary.pdf - Supplemental Material (824kB). Creative Commons Attribution.
Official URL: https://www.mdpi.com/2076-3417/12/18/9305
Open Access URL: https://www.mdpi.com/2076-3417/12/18/9305/pdf?vers... (Published version)
Link to published version: https://doi.org/10.3390/app12189305

Abstract

Smart devices, such as smartphones and smartwatches, are promising platforms for the automatic recognition of human activities. However, it is difficult to accurately monitor complex human activities on these platforms due to interclass pattern similarities, which occur when different human activities exhibit similar signal patterns or characteristics. Current smartphone-based recognition systems depend on traditional sensors, such as accelerometers and gyroscopes, which are built into these devices. Beyond the information from these traditional sensors, such systems lack the contextual information needed to support automatic activity recognition. In this article, we explore environmental contexts, such as illumination (light conditions) and noise level, to support sensory data obtained from the traditional sensors using a hybrid of Convolutional Neural Network and Long Short-Term Memory (CNN–LSTM) learning models. The models performed sensor fusion by augmenting low-level sensor signals with rich contextual data to improve recognition accuracy and generalization. Two sets of experiments were performed to validate the proposed solution. The first set used triaxial inertial sensing signals to train baseline models, while the second combined the inertial signals with contextual information from environmental sensors. The results demonstrate that hybrid deep learning models enriched with contextual information, such as environmental noise level and light conditions, achieved better recognition accuracy than the traditional baseline activity recognition models trained without contextual information.
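The fusion step described in the abstract — augmenting low-level inertial windows with per-window environmental context before feeding them to the CNN–LSTM — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window length (128 samples), step size, channel layout, and the two context features (illumination and noise level) are assumptions chosen for the example.

```python
import numpy as np

def make_windows(signal, win_len, step):
    """Slice a (T, C) multichannel signal into overlapping windows
    of shape (N, win_len, C)."""
    starts = range(0, signal.shape[0] - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

def fuse_context(windows, context):
    """Broadcast per-window context features (N, K) across the time axis
    and concatenate them to the sensor channels: (N, win_len, C + K)."""
    n, win_len, _ = windows.shape
    ctx = np.repeat(context[:, None, :], win_len, axis=1)
    return np.concatenate([windows, ctx], axis=-1)

# Toy data: 500 samples of triaxial accelerometer + gyroscope (6 channels)
rng = np.random.default_rng(0)
inertial = rng.standard_normal((500, 6))
windows = make_windows(inertial, win_len=128, step=64)  # (6, 128, 6)

# One illumination reading and one noise-level reading per window
context = rng.standard_normal((windows.shape[0], 2))
fused = fuse_context(windows, context)                  # (6, 128, 8)
print(windows.shape, fused.shape)
```

The fused tensor then has the same (window, time, channel) layout a CNN–LSTM expects, with the contextual readings available at every time step alongside the raw inertial channels.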

Item Type: Article
Additional Information: Article version: VoR. From MDPI via Jisc Publications Router. Licence for VoR version of this article: https://creativecommons.org/licenses/by/4.0/. Peer reviewed: TRUE. Journal IDs: eissn 2076-3417. Article IDs: publisher-id: applsci-12-09305. History: submitted 29-07-2022; accepted 13-09-2022; published online 16-09-2022; collection 09-2022.
Uncontrolled Keywords: Article, simple activities, complex activities, context awareness, deep hybrid learning, sensors, smart devices, human activity recognition
Identification Number: https://doi.org/10.3390/app12189305
SWORD Depositor: Colin Knott
Depositing User: Colin Knott
Date Deposited: 10 Oct 2022 12:02
Last Modified: 12 Oct 2023 09:47
URI: https://shura.shu.ac.uk/id/eprint/30809
