Sensors and Actuators, A: Physical, volume 350, pages 114150

Multimodal data-based deep learning model for sitting posture recognition toward office workers’ health promotion

Publication type: Journal Article
Publication date: 2023-02-01
Scimago quartile: Q1
SJR: 0.788
CiteScore: 8.1
Impact factor: 4.1
ISSN: 0924-4247, 1873-3069
Metals and Alloys
Surfaces, Coatings and Films
Electronic, Optical and Magnetic Materials
Condensed Matter Physics
Electrical and Electronic Engineering
Instrumentation
Abstract
Recognizing sitting posture is important for preventing the development of work-related musculoskeletal disorders in office workers. Multimodal data, i.e., infrared maps and pressure maps, have been leveraged to achieve accurate recognition while preserving privacy and remaining unobtrusive in daily use. Existing studies on sitting posture recognition rely on handcrafted features combined with machine learning models for multimodal data fusion, which depends heavily on domain knowledge. Therefore, a deep learning model is proposed to fuse the multimodal data and recognize sitting posture. The model comprises modality-specific backbones, a cross-modal self-attention module, and multi-task learning-based classification. Experiments on data from 20 participants verify the effectiveness of the proposed model, which achieves a 93.08% F1-score. This high performance indicates that the proposed model is promising for sitting posture-related applications.
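The abstract describes fusing infrared-map and pressure-map features with a cross-modal self-attention module. As an illustration only, the sketch below implements generic scaled dot-product self-attention over two stacked modality tokens in NumPy; the function name, dimensions, and random weights are hypothetical and do not reproduce the paper's actual architecture or trained backbones.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_self_attention(infrared_feat, pressure_feat, Wq, Wk, Wv):
    """Fuse two modality feature vectors via scaled dot-product
    self-attention over the stacked modality tokens (illustrative)."""
    tokens = np.stack([infrared_feat, pressure_feat])   # (2, d)
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))                # (2, 2) cross-modal weights
    return (attn @ V).mean(axis=0)                      # fused feature, shape (d,)

# Hypothetical backbone outputs: one feature vector per modality.
rng = np.random.default_rng(0)
d = 8
ir_feat = rng.standard_normal(d)       # e.g., infrared-map backbone output
pr_feat = rng.standard_normal(d)       # e.g., pressure-map backbone output
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

fused = cross_modal_self_attention(ir_feat, pr_feat, Wq, Wk, Wv)
print(fused.shape)  # (8,)
```

In a full model, the fused vector would feed the multi-task classification heads; here it simply demonstrates how attention lets each modality weight information from the other before fusion.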

