vEEGNet: Learning Latent Representations to Reconstruct EEG Raw Data via Variational Autoencoders

Cisotto G.; Zoppis I.; Manzoni S. L.
2024

Abstract

Electroencephalographic (EEG) data are complex, multi-dimensional time-series that are useful in many applications, ranging from the diagnosis of epilepsy to the driving of brain-computer interface systems. Their classification is still a challenging task, due to the inherent within- and between-subject variability as well as their low signal-to-noise ratio. The reconstruction of raw EEG data is even more difficult because of the high temporal resolution of these signals. Recent literature has proposed numerous machine and deep learning models that can classify, e.g., different types of movements with an accuracy in the range of 70% to 80% (with 4 classes), whereas only a limited number of works have targeted the reconstruction problem, with very limited results. In this work, we propose vEEGNet, a deep learning architecture with two modules: an unsupervised module based on a variational autoencoder (VAE) that extracts a latent representation of the multi-channel EEG data, and a supervised module based on a feed-forward neural network that classifies different movements. To build the encoder and the decoder of the VAE, we exploited the well-known EEGNet network, introduced in 2016 and specifically designed to process EEG data. We implemented two slightly different architectures of vEEGNet, showing state-of-the-art classification performance and the ability to reconstruct both low-frequency and middle-range components of the raw EEG. Although preliminary, this work is promising: we found that the low-frequency reconstructed signals are consistent with the so-called motor-related cortical potentials, a very specific and well-known family of motor-related EEG patterns, and we improved over previous literature by also reconstructing faster EEG components. Further investigations are needed to explore the potential of vEEGNet in reconstructing the full EEG data, generating new samples, and studying the relationship between classification and reconstruction performance.
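For readers who want a concrete picture of the two-module design described in the abstract, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a VAE whose encoder and decoder use EEGNet-style temporal and depthwise spatial convolutions, with a small feed-forward classifier operating on the latent vector. The 22-channel / 512-sample input shape, latent dimension, kernel lengths, layer sizes, and loss weighting are all assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VEEGNetSketch(nn.Module):
    # Hypothetical sketch: VAE with EEGNet-style convolutions + FFNN classifier on the latent code.
    def __init__(self, n_channels=22, n_samples=512, latent_dim=64, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, (1, 65), padding=(0, 32), bias=False),    # temporal filters
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, (n_channels, 1), groups=8, bias=False),  # depthwise spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 16)),                                    # temporal down-sampling
            nn.Flatten(),
        )
        enc_out = 16 * (n_samples // 16)
        self.to_latent = nn.Linear(enc_out, 2 * latent_dim)           # outputs mean and log-variance
        self.classifier = nn.Sequential(                              # supervised feed-forward head
            nn.Linear(latent_dim, 32), nn.ELU(), nn.Linear(32, n_classes))
        self.from_latent = nn.Linear(latent_dim, enc_out)
        self.decoder = nn.Sequential(                                 # rough mirror of the encoder
            nn.ConvTranspose2d(16, 8, (n_channels, 1), bias=False),
            nn.ELU(),
            nn.Conv2d(8, 1, (1, 65), padding=(0, 32)))
        self.n_samples = n_samples

    def forward(self, x):                                             # x: (B, 1, n_channels, n_samples)
        mu, logvar = self.to_latent(self.encoder(x)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)       # reparameterization trick
        logits = self.classifier(z)                                   # movement classification
        h = self.from_latent(z).view(x.size(0), 16, 1, self.n_samples // 16)
        h = F.interpolate(h, size=(1, self.n_samples))                # undo temporal pooling
        return self.decoder(h), logits, mu, logvar

def vae_classifier_loss(x, x_rec, logits, labels, mu, logvar, beta=1.0):
    # Reconstruction (MSE) + KL divergence + cross-entropy, optimized jointly.
    recon = F.mse_loss(x_rec, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl + F.cross_entropy(logits, labels)

The sketch only illustrates the general encoder/latent/decoder-plus-classifier structure; details such as separable convolutions, dropout, and the exact latent dimensionality of the two vEEGNet variants reported in the chapter are omitted.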
Book chapter or essay
AI; CNN; Complex systems; Deep learning; EEG; Inter-Subject variability; Latent space; Machine learning; Reconstruction; Time-Series; Variational autoencoder;
English
Information and Communication Technologies for Ageing Well and e-Health 9th International Conference, ICT4AWE 2023, Prague, Czech Republic, April 22–24, 2023, Revised Selected Papers
Ziefle, M.; Lozano, M. D.; Mulvenna, M.
26-Jul-2024
2024
9783031627521
2087 CCIS
Springer
114
129
Zancanaro, A., Cisotto, G., Zoppis, I., Manzoni, S. (2024). vEEGNet: Learning Latent Representations to Reconstruct EEG Raw Data via Variational Autoencoders. In M. Ziefle, M. D. Lozano, M. Mulvenna (Eds.), Information and Communication Technologies for Ageing Well and e-Health: 9th International Conference, ICT4AWE 2023, Prague, Czech Republic, April 22–24, 2023, Revised Selected Papers (pp. 114-129). Springer [10.1007/978-3-031-62753-8_7].
reserved
Files in this record:
File: Zancanaro-2024-ICT4AWE-VoR.pdf
Description: proof
Attachment type: Submitted Version (Pre-print)
License: All rights reserved
Size: 1.48 MB
Format: Adobe PDF
Access: archive administrators only (request a copy)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/513959
Citations
  • Scopus: 0
  • Web of Science (ISI): ND