Multiscale DCNN Ensemble Applied to Human Activity Recognition Based on Wearable Sensors
Jessica Sena, Jesimon Barreto Santos, W. R. Schwartz
Published 2018 in the European Signal Processing Conference

ABSTRACT
Sensor-based Human Activity Recognition (HAR) provides valuable knowledge to many areas. Recently, wearable devices have gained traction as a relevant source of data. However, two issues remain: the large number of heterogeneous sensors available and the temporal nature of the sensor data. To handle these issues, we propose a multimodal approach that processes each sensor separately and, through an ensemble of Deep Convolutional Neural Networks (DCNNs), extracts information from multiple temporal scales of the sensor data. In this ensemble, we use a convolutional kernel with a different height for each DCNN. Since the number of rows in the sensor data reflects the data captured over time, each kernel height corresponds to a temporal scale from which patterns can be extracted. Consequently, our approach can extract patterns ranging from simple movements, such as a wrist twist when picking up a spoon, to complex movements such as the human gait. This multimodal and multitemporal approach outperforms previous state-of-the-art works on seven important datasets under two different protocols. In addition, we show that our proposed set of kernels improves sensor-based HAR in another multi-kernel approach, the widely employed Inception network.
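The abstract describes an ensemble in which each DCNN applies a convolutional kernel of a different height over a sensor's time window, so each branch captures a different temporal scale. As a rough illustration of that idea, here is a minimal PyTorch sketch for a single sensor; the kernel heights, filter counts, pooling, and fusion by feature concatenation are placeholder assumptions for illustration, since this record does not give the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TemporalBranch(nn.Module):
    """One DCNN of the ensemble: a 2D convolution whose kernel height spans
    `height` consecutive time steps and whose width covers all sensor axes,
    so each branch observes a different temporal scale."""
    def __init__(self, height: int, n_axes: int, n_filters: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(height, n_axes))
        self.act = nn.ReLU()
        self.pool = nn.AdaptiveMaxPool2d((1, 1))  # collapse time into one descriptor

    def forward(self, x):                # x: (batch, 1, time_steps, n_axes)
        z = self.act(self.conv(x))       # (batch, n_filters, time_steps - height + 1, 1)
        return self.pool(z).flatten(1)   # (batch, n_filters)

class MultiTemporalEnsemble(nn.Module):
    """Several kernel heights in parallel; branch features are concatenated
    and fed to a shared classifier (fusion scheme is an assumption)."""
    def __init__(self, n_axes: int, n_classes: int, heights=(3, 5, 9, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            [TemporalBranch(h, n_axes) for h in heights])
        self.classifier = nn.Linear(32 * len(heights), n_classes)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.classifier(feats)

# Example: one tri-axial accelerometer, windows of 128 samples.
window = torch.randn(8, 1, 128, 3)    # (batch, channel, time, axes)
model = MultiTemporalEnsemble(n_axes=3, n_classes=6)
logits = model(window)                # (8, 6)
```

In this sketch, a small height (e.g., 3 time steps) can respond to short patterns such as a wrist twist, while a larger height (e.g., 15) covers longer spans such as a gait cycle; the multimodal aspect would follow by instantiating one such ensemble per sensor and fusing their outputs.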
PUBLICATION RECORD
- Venue: European Signal Processing Conference
- Publication date: 2018-09-01
- Fields of study: Computer Science, Engineering
- Source metadata: Semantic Scholar