Thanks to recent progress in the management of judicial proceedings, especially the introduction of audio/video recording systems, semantic retrieval has become a realistic key challenge. In this context, an emotion recognition engine that analyzes the vocal signatures of the actors involved in judicial proceedings could provide useful annotations for the semantic retrieval of multimedia clips. With respect to the generation of semantic emotional tags in the judicial domain, two main contributions are given: (1) the construction of an Italian emotional database for the annotation of Italian proceedings; (2) the investigation of a hierarchical classification system, based on a risk minimization method, able to recognize emotional states from vocal signatures. In order to estimate the degree of affect, we compared the proposed classification method with traditional ones, highlighting, in terms of classification accuracy, the improvements given by the hierarchical learning approach.
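A hierarchical classifier of the kind described above can be sketched as a small tree of risk-minimization learners, where a root classifier first separates neutral from emotional speech and a second-level classifier distinguishes among the emotional states. The sketch below is purely illustrative: the paper's actual hierarchy, acoustic features, and learner configuration are not given here, so the class split, the synthetic "vocal signature" features, and the choice of an SVM (a standard risk-minimization-based learner) are all assumptions.

```python
# Hypothetical two-level hierarchical emotion classifier.
# ASSUMPTIONS (not from the paper): SVM as the risk-minimization learner,
# a neutral-vs-emotional root split, and synthetic 8-dimensional features
# standing in for real acoustic descriptors (pitch, energy, etc.).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic feature vectors for three states: 0 = neutral, 1 = anger, 2 = sadness.
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, size=300)
X[y == 1] += 1.5  # shift class means so the synthetic classes are separable
X[y == 2] -= 1.5

# Level 1: neutral vs. emotional.
root = SVC(kernel="rbf").fit(X, (y != 0).astype(int))

# Level 2: among emotional samples only, anger vs. sadness.
emotional = y != 0
leaf = SVC(kernel="rbf").fit(X[emotional], y[emotional])

def predict(x):
    """Route one sample down the hierarchy: root split, then leaf split."""
    x = x.reshape(1, -1)
    if root.predict(x)[0] == 0:
        return 0                     # classified as neutral at the root
    return int(leaf.predict(x)[0])   # otherwise refined at the leaf

preds = np.array([predict(x) for x in X])
print("training accuracy:", (preds == y).mean())
```

The appeal of this structure, as the abstract suggests, is that each node solves an easier sub-problem than a single flat multi-class classifier, which is where the reported accuracy improvements would come from.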
Archetti, F., Fersini, E., Arosio, G., Messina, V. (2008). Audio-based Emotion Recognition for Advanced Automatic Retrieval in Judicial Domain. In ICT4JUSTICE, 1st International Conference on ICT Solutions for Justice.
Audio-based Emotion Recognition for Advanced Automatic Retrieval in Judicial Domain
Archetti, Francesco Antonio; Fersini, Elisabetta; Messina, Vincenzina
2008