Buzzelli, M., Bianco, S., Napoletano, P. (2023). Unified Framework for Identity and Imagined Action Recognition From EEG Patterns. IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 53(3), 529-537 [10.1109/THMS.2023.3267898].
Unified Framework for Identity and Imagined Action Recognition From EEG Patterns
Buzzelli, Marco; Bianco, Simone; Napoletano, Paolo
2023
Abstract
We present a unified deep learning framework for the recognition of user identity and of imagined actions from electroencephalography (EEG) signals, for application as a brain-computer interface. Our solution exploits a novel shifted-subsampling preprocessing step as a form of data augmentation, and a matrix representation that encodes the inherent local spatial relationships of multielectrode EEG signals. The resulting image-like data are fed to a convolutional neural network that processes the local spatial dependencies, and then analyzed by a bidirectional long short-term memory module that focuses on temporal relationships. Our solution is compared against several state-of-the-art methods, showing comparable or superior performance on different tasks. Specifically, we achieve accuracy levels above 90% for both action and user classification. In terms of user identification, we reach a 0.39% equal error rate in the case of known users and gestures, and 6.16% in the more challenging case of unknown users and gestures. Preliminary experiments are also conducted to direct future work toward everyday applications relying on a reduced set of EEG electrodes.
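The abstract describes shifted subsampling as a data-augmentation step. The paper's exact formulation is not given here, but a minimal sketch of the general idea, assuming it amounts to keeping every k-th sample starting from each of the k possible temporal offsets, is:

```python
import numpy as np

def shifted_subsample(signal: np.ndarray, factor: int) -> list:
    """Split a time-major EEG recording into `factor` subsampled copies,
    each starting at a different temporal offset.

    Each copy keeps every `factor`-th sample; together the copies cover
    the whole recording and can serve as augmented training examples.
    (Illustrative only: the function name and this exact scheme are
    assumptions, not taken from the paper.)
    """
    return [signal[offset::factor] for offset in range(factor)]

# Toy example: 12 time steps, subsampling factor 3 -> three 4-sample copies.
x = np.arange(12)
copies = shifted_subsample(x, 3)
for c in copies:
    print(c.tolist())
```

With `factor=3` the single 12-step signal yields three shifted copies (`[0, 3, 6, 9]`, `[1, 4, 7, 10]`, `[2, 5, 8, 11]`), tripling the number of training sequences while lowering their temporal resolution.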
File: Buzzelli-2023-IEEE Transact Human-Machine Sys-VoR.pdf
Description: Article
Attachment type: Publisher's Version (Version of Record, VoR)
License: All rights reserved
Size: 1.84 MB
Format: Adobe PDF
Access: Archive administrators only (View/Open: request a copy)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.