Context-aware complex human activity recognition using hybrid deep learning models

OTEBOLAKU, Abayomi and OMOLAJA, Adebola (2022). Context-aware complex human activity recognition using hybrid deep learning models. [Pre-print] (Unpublished)

Preprints have not been peer-reviewed. They should not be relied on to guide clinical practice or health related behaviour and should not be regarded as conclusive or be reported in news media as established information.
PDF: Otebolaku-Context-awareComplex(Pre-print).pdf - Pre-print (1MB)
Licence: Creative Commons Attribution.
Official URL: https://www.preprints.org/manuscript/202203.0363/v...
Open Access URL: https://www.preprints.org/manuscript/202203.0363/v... (Published version)

    Abstract

    Smart devices such as smartphones and smartwatches are promising platforms for the automatic recognition of human activities. However, complex human activities are difficult to monitor accurately because of inter-class pattern similarity, which occurs when different activities exhibit similar signal patterns or characteristics. Current smartphone-based recognition systems rely on traditional built-in sensors such as the accelerometer and gyroscope, and beyond these sensors they lack the contextual information needed to support automatic activity recognition. In this article, we explore environment contexts such as illumination (light conditions) and noise level to supplement sensory data obtained from the traditional sensors, using a hybrid of Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) learning models. The models perform sensor fusion by augmenting the low-level sensor signals with rich contextual data to improve the recognition and generalisation ability of the proposed solution. Two sets of experiments were performed to validate the proposed solution. The first set used inertial sensing data alone, whilst the second set of extensive experiments combined inertial signals with contextual information from environment sensing data. The results demonstrate that hybrid deep learning models augmented with contextual information, such as environment noise level and illumination, achieve better recognition accuracy than traditional activity recognition models without contextual information.
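    The fusion approach described in the abstract, where a CNN extracts local motion patterns from inertial windows, an LSTM models their temporal dependencies, and contextual features (noise level, illumination) are fused before classification, can be sketched as follows. This is a structural illustration only: all layer sizes, the late-fusion point, and the untrained random weights are assumptions for exposition, not the authors' exact architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def conv1d(x, w, b):
        """Valid 1-D convolution over the time axis with ReLU.
        x: (T, C_in), w: (K, C_in, C_out), b: (C_out,) -> (T-K+1, C_out)."""
        K = w.shape[0]
        T = x.shape[0] - K + 1
        out = np.stack([np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
                        for t in range(T)])
        return relu(out)

    def lstm_last_hidden(x, Wx, Wh, b):
        """Run a single-layer LSTM and return the final hidden state.
        x: (T, D), Wx: (D, 4H), Wh: (H, 4H), b: (4H,)."""
        H = Wh.shape[0]
        h, c = np.zeros(H), np.zeros(H)
        for t in range(x.shape[0]):
            z = x[t] @ Wx + h @ Wh + b
            i, f, g, o = z[:H], z[H:2 * H], z[2 * H:3 * H], z[3 * H:]
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h

    # Illustrative dimensions (assumptions, not taken from the paper):
    # 128-sample window, 6 inertial channels (3-axis accel + gyro),
    # kernel 5, 16 conv filters, 32 LSTM units, 2 context features, 8 activities.
    T, C_IN, K, C_OUT, H, N_CTX, N_ACT = 128, 6, 5, 16, 32, 2, 8

    # Random, untrained weights -- this sketch shows data flow, not accuracy.
    w_conv = rng.normal(0, 0.1, (K, C_IN, C_OUT)); b_conv = np.zeros(C_OUT)
    Wx = rng.normal(0, 0.1, (C_OUT, 4 * H)); Wh = rng.normal(0, 0.1, (H, 4 * H))
    b_l = np.zeros(4 * H)
    W_out = rng.normal(0, 0.1, (H + N_CTX, N_ACT)); b_out = np.zeros(N_ACT)

    def classify(window, context):
        """window: (T, 6) accelerometer+gyroscope; context: (2,) noise, light."""
        feats = conv1d(window, w_conv, b_conv)    # CNN: local motion patterns
        h = lstm_last_hidden(feats, Wx, Wh, b_l)  # LSTM: temporal dependencies
        fused = np.concatenate([h, context])      # late fusion with context data
        logits = fused @ W_out + b_out
        e = np.exp(logits - logits.max())
        return e / e.sum()                        # softmax over activity classes

    probs = classify(rng.normal(size=(T, C_IN)), np.array([0.4, 0.7]))
    print(probs.shape)  # (8,) -- one probability per activity class
    ```

    Here context is concatenated after the recurrent layer (late fusion); the paper's models could equally fuse context at the input, which would simply mean appending the context values to each timestep before the convolution.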

    Item Type: Pre-print
    Identification Number: https://doi.org/10.20944/preprints202203.0363.v1
    SWORD Depositor: Symplectic Elements
    Depositing User: Symplectic Elements
    Date Deposited: 22 Aug 2022 10:12
    Last Modified: 22 Aug 2022 10:12
    URI: http://shura.shu.ac.uk/id/eprint/30631

