
Introzzi, L., Monti, D., Petilli, M., Marelli, M. (2024). Neural Semantic Decoding through Distributional Semantics: Comparing Linear Models and Neural Networks. Presented at: Giornata sul Pensiero, Messina, Italy.

Neural Semantic Decoding through Distributional Semantics: Comparing Linear Models and Neural Networks

Introzzi, L. (First); Petilli, M. (Penultimate); Marelli, M. (Last)
2024

Abstract

Decoding linguistic information from electrical brain states aims to understand how language is represented in the brain. In this work we combined electrophysiology and distributional semantics: the former provides large datasets in which semantic content is encoded in the neural signal; the latter provides a quantitative theory of meaning. We used data from the Kiloword megastudy, modeled meaning with fastText, and studied the extent to which semantic information could be recovered from the neural data. We modeled the relationship between the two first with linear models and then with convolutional neural networks, using a Leave-Two-Out procedure to assess how well our models could distinguish a target word from a random word selected as a control. Our models show that semantic information can be decoded from the neural signal, that it is distributed over the time window of the N400 ERP component, and that neural networks extract more information than linear models. Our work ultimately supports distributional semantics, showing that the linguistic information encoded in semantic vectors can be recovered from brain signals.
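The Leave-Two-Out evaluation described above can be sketched in a few lines: the decoder predicts a semantic vector from the neural signal, and a trial counts as correct when that prediction is closer (e.g. by cosine similarity) to the target word's vector than to a randomly selected control word's vector. This is a minimal illustrative sketch, not the authors' implementation; the function names and the toy 3-dimensional vectors are hypothetical stand-ins for fastText embeddings.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def leave_two_out_correct(predicted, target_vec, control_vec):
    """True if the decoder's predicted semantic vector is closer to the
    target word's vector than to the random control word's vector."""
    return cosine(predicted, target_vec) > cosine(predicted, control_vec)

# Toy example: a prediction that leans toward the target's direction
pred = np.array([0.9, 0.1, 0.0])     # hypothetical decoder output
target = np.array([1.0, 0.0, 0.0])   # hypothetical target-word vector
control = np.array([0.0, 1.0, 0.0])  # hypothetical control-word vector
print(leave_two_out_correct(pred, target, control))  # → True
```

Averaging this binary outcome over many target/control pairs yields a decoding accuracy whose chance level is 50%, which is what makes the comparison between linear models and convolutional networks straightforward.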
abstract + slide
distributional semantics; computational modelling; neural networks; semantic decoding; EEG; ERP
English
Giornata sul Pensiero
2024
2024
none
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/547503