Wearable-based behaviour interpolation for semi-supervised human activity recognition
Duan, Haoran; Wang, Shidong; Ojha, Varun; Wang, Shizheng; Huang, Yawen; Long, Yang; Ranjan, Rajiv; Zheng, Yefeng
Authors
Haoran Duan haoran.duan@durham.ac.uk (PGR Student, Doctor of Philosophy)
Shidong Wang
Varun Ojha
Shizheng Wang
Yawen Huang
Dr Yang Long yang.long@durham.ac.uk (Associate Professor)
Rajiv Ranjan
Yefeng Zheng
Abstract
While traditional feature engineering for Human Activity Recognition (HAR) involves a trial-and-error process, deep learning has emerged as the preferred method for learning high-level representations of sensor-based human activities. However, most deep learning-based HAR methods require a large amount of labelled data, and extracting HAR features from unlabelled data for effective deep learning training remains challenging. We therefore introduce a deep semi-supervised HAR approach, MixHAR, which uses labelled and unlabelled activities concurrently. MixHAR employs a linear interpolation mechanism to blend labelled and unlabelled activities while addressing both inter- and intra-activity variability. We identify a unique challenge, the activity-intrusion problem that arises during mixing, and propose a mixing calibration mechanism in the feature embedding space to mitigate it. Additionally, we rigorously evaluate five popular deep semi-supervised learning techniques on HAR, establishing a benchmark for deep semi-supervised HAR. Our results demonstrate that MixHAR significantly improves performance, underscoring the potential of deep semi-supervised techniques in HAR.
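The linear interpolation described in the abstract is in the spirit of mixup-style semi-supervised learning. The sketch below illustrates that general idea in PyTorch, assuming soft pseudo-labels for the unlabelled windows and a Beta-distributed mixing coefficient; `interpolate_batches`, its parameters, and the pseudo-labelling step are illustrative assumptions, not the paper's exact MixHAR formulation, and the paper's mixing calibration in the embedding space is not shown here.

```python
# Minimal sketch of mixup-style interpolation between a labelled and an
# unlabelled batch of sensor windows. This is an assumed illustration of
# the general technique, not the MixHAR algorithm itself.
import torch
import torch.nn.functional as F

def interpolate_batches(x_lab, y_lab, x_unlab, model, num_classes, alpha=0.75):
    """Blend a labelled batch with an unlabelled batch of equal size.

    x_lab, x_unlab: (B, C, T) sensor windows; y_lab: (B,) integer labels.
    Targets for the unlabelled windows are the model's own soft
    predictions (pseudo-labels).
    """
    with torch.no_grad():
        # Soft pseudo-labels for the unlabelled windows.
        y_unlab = F.softmax(model(x_unlab), dim=1)
    y_lab_onehot = F.one_hot(y_lab.long(), num_classes).float()

    # Sample a mixing coefficient from Beta(alpha, alpha) and bias it
    # toward the labelled side.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)

    # Linear interpolation of both inputs and targets.
    x_mix = lam * x_lab + (1.0 - lam) * x_unlab
    y_mix = lam * y_lab_onehot + (1.0 - lam) * y_unlab
    return x_mix, y_mix
```

Clamping the coefficient toward the labelled side keeps each mixed target dominated by a ground-truth label, a common design choice in mixup-based semi-supervised methods such as MixMatch.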
Citation
Duan, H., Wang, S., Ojha, V., Wang, S., Huang, Y., Long, Y., …Zheng, Y. (2024). Wearable-based behaviour interpolation for semi-supervised human activity recognition. Information Sciences, 665, Article 120393. https://doi.org/10.1016/j.ins.2024.120393
| Field | Value |
|---|---|
| Journal Article Type | Article |
| Acceptance Date | Feb 28, 2024 |
| Online Publication Date | Mar 5, 2024 |
| Publication Date | 2024-04 |
| Deposit Date | May 16, 2024 |
| Publicly Available Date | May 16, 2024 |
| Journal | Information Sciences |
| Print ISSN | 0020-0255 |
| Publisher | Elsevier |
| Peer Reviewed | Peer Reviewed |
| Volume | 665 |
| Article Number | 120393 |
| DOI | https://doi.org/10.1016/j.ins.2024.120393 |
| Public URL | https://durham-repository.worktribe.com/output/2441901 |
Files
Published Journal Article (PDF, 2 MB)
Licence: http://creativecommons.org/licenses/by-nc-nd/4.0/
Publisher Licence URL: http://creativecommons.org/licenses/by-nc-nd/4.0/
Copyright Statement
© 2024 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
You might also like
EfficientTDNN: Efficient Architecture Search for Speaker Recognition (2022, Journal Article)
Dynamic Unary Convolution in Transformers (2023, Journal Article)
CTNeRF: Cross-time Transformer for dynamic neural radiance field from monocular video (2024, Journal Article)
Rules for Expectation: Learning to Generate Rules via Social Environment Modeling (2023, Journal Article)