Human activity recognition (HAR) is a very active field of research, and many techniques have been developed in recent years. Deep learning techniques applied to 3D and 4D signals such as images and videos have proven effective, achieving significant classification accuracy. Recently, these techniques have also been applied to 1D signals, such as those produced by accelerometers and gyroscopes, in order to recognize human Activities of Daily Living (ADLs). However, training accurate and reliable deep learning models requires large amounts of sample data. Moreover, creating a dataset suitable for deep learning techniques is an onerous process that requires the involvement of a significant number of possibly heterogeneous subjects. Publicly available datasets are few and, with rare exceptions, contain few subjects. Furthermore, the datasets are heterogeneous and therefore cannot be used together directly. The goal of our work is the definition of a platform to support long-term data collection for training HAR algorithms. The platform, termed Continuous Learning Platform (CLP), aims to integrate datasets of inertial signals in order to make available to the scientific community a large dataset of homogeneous signals and, when possible, to enrich it with context information (e.g., characteristics of the subject, device position, and so on). Moreover, the platform has been designed to provide additional services, such as the deployment of activity recognition models and online signal labelling services. The architecture has been defined and some of the main components have been developed in order to verify the soundness of the approach.
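As an illustration of the kind of 1D inertial data such models consume, the following is a minimal sketch of the sliding-window segmentation commonly applied to accelerometer streams before training. The `sliding_windows` helper, window length, and sampling rate below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sliding_windows(signal, window_size, step):
    """Segment a multi-axis inertial signal into fixed-size, possibly
    overlapping windows, as is common before training HAR models.

    signal: array of shape (n_samples, n_axes)
    returns: array of shape (n_windows, window_size, n_axes)
    """
    starts = range(0, len(signal) - window_size + 1, step)
    return np.stack([signal[s:s + window_size] for s in starts])

# e.g., 10 s of 3-axis accelerometer data sampled at 50 Hz
acc = np.random.randn(500, 3)
windows = sliding_windows(acc, window_size=128, step=64)
print(windows.shape)  # (6, 128, 3): 6 half-overlapping windows
```

Each window (here 2.56 s of data with 50% overlap, a common choice in the HAR literature) would then be paired with an activity label and fed to a classifier.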
Ferrari, A., Micucci, D., Mobilio, M., Napoletano, P. (2019). A Framework for Long-Term Data Collection to Support Automatic Human Activity Recognition. In Ambient Intelligence and Smart Environments. IOS Press [10.3233/AISE190067].
A Framework for Long-Term Data Collection to Support Automatic Human Activity Recognition
Ferrari Anna;Micucci Daniela;Mobilio Marco;Napoletano Paolo
2019
File: AISE-26-AISE190067.pdf (open access, Publisher's Version / Version of Record, 336.37 kB, Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.