Balducci, G., Rizzi, G., & Fersini, E. (2023). Bias Mitigation in Misogynous Meme Recognition: A Preliminary Study. In Proceedings of the 9th Italian Conference on Computational Linguistics (pp. 1-7). CEUR-WS.
Bias Mitigation in Misogynous Meme Recognition: A Preliminary Study
Balducci G.; Rizzi G.; Fersini E.
2023
Abstract
In this paper, we address the problem of automatic misogynous meme recognition by dealing with potentially biased elements that could lead to unfair models. In particular, a bias estimation technique is proposed to identify the textual and visual elements that unintentionally affect the model prediction, together with a naive bias mitigation strategy. The proposed approach achieves good recognition performance with promising generalization capabilities.
| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| Balducci-2023-CLiC-it-CEUR Workshop Proceedings-VoR.pdf (open access) | This volume and its papers are published under the Creative Commons License Attribution 4.0 International (CC BY 4.0). | Publisher's Version (Version of Record, VoR) | Creative Commons | 1.26 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.