
Multimodal classification of sexist advertisements

Gasparini, F.; Fersini, E.; Corchs, S.
2018

Abstract

Advertisements, especially in online social media, are often based on visual and/or textual persuasive messages, frequently showing women as subjects. Some of these advertisements create a biased portrayal of women, ultimately resulting in content that is sexist and, in some cases, misogynistic. In this paper we give a first insight into the field of automatic detection of sexist multimedia content, proposing both a unimodal and a multimodal approach. In the unimodal approach we propose binary classifiers based on different visual features to automatically detect sexist visual content. In the multimodal approach both visual and textual features are considered. We created a manually labeled database of sexist and non-sexist advertisements, composed of two main datasets: the first containing 423 advertisements whose images were labeled as sexist (or non-sexist) with respect to their visual content alone, and the second comprising 192 advertisements labeled as sexist or non-sexist according to visual and/or textual cues. We adopted the first dataset to train a visual classifier. Finally, we show that a multimodal approach combining the trained visual classifier with a textual one achieves good classification performance on the second dataset, reaching 87% recall and 75% accuracy, significantly higher than the performance obtained by each of the corresponding unimodal approaches.
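
The abstract does not specify how the visual and textual classifiers are combined. Purely as an illustration, and not as the method used in the paper, the sketch below shows one common way to fuse two unimodal binary classifiers via a weighted late fusion of their predicted probabilities; the feature choices (precomputed visual descriptors, TF-IDF text features), the classifier families, and the fusion weight are all assumptions.

# Illustrative late-fusion sketch (scikit-learn). The descriptors, models and
# fusion rule are assumptions for exposition, not taken from the paper.
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer

def train_visual(X_visual, y):
    # Binary classifier on precomputed visual feature vectors (one row per ad).
    clf = SVC(probability=True)  # probability estimates are needed for score fusion
    clf.fit(X_visual, y)
    return clf

def train_textual(texts, y):
    # Binary classifier on the advertisement text, using TF-IDF unigrams/bigrams.
    vectorizer = TfidfVectorizer(lowercase=True, ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(texts), y)
    return vectorizer, clf

def predict_fused(visual_clf, vectorizer, text_clf, X_visual, texts, w=0.5):
    # Late fusion: weighted average of the two "sexist"-class probabilities.
    p_visual = visual_clf.predict_proba(X_visual)[:, 1]
    p_text = text_clf.predict_proba(vectorizer.transform(texts))[:, 1]
    return (w * p_visual + (1.0 - w) * p_text >= 0.5).astype(int)

In a setup like this, the visual model can be trained on image-only labels and reused unchanged once advertisement text becomes available, with the fusion weight w tuned on a held-out validation set.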
Type: paper
Keywords: Image Classification, Multimodal Classification, Sexist Advertising, Text Analysis
Language: English
Conference: International Conference on Signal Processing and Multimedia Applications, 26-28 July 2018
Proceedings: ICETE 2018 - Proceedings of the 15th International Joint Conference on e-Business and Telecommunications - Volume 2
ISBN: 9789897583193
Publication year: 2018
Pages: 399-406
URL: https://www.scitepress.org/PublicationsDetail.aspx?ID=8ZIZEuLq1J4=&t=1
Gasparini, F., Erba, I., Fersini, E., Corchs, S. (2018). Multimodal classification of sexist advertisements. In ICETE 2018 - Proceedings of the 15th International Joint Conference on e-Business and Telecommunications - Volume 2 (pp.399-406). SciTePress [10.5220/0006859405650572].
Files in this item: none.


Use this identifier to cite or link to this item: https://hdl.handle.net/10281/219428
Citations
  • Scopus 14