
Friday May 12, 2023
CVPR 2023 - Model-Agnostic Gender Debiased Image Captioning
In this episode we discuss Model-Agnostic Gender Debiased Image Captioning by Yusuke Hirota, Yuta Nakashima, and Noa Garcia. The paper examines gender bias in image captioning models and proposes a framework named LIBRA to mitigate it. Prior attempts to address this problem focused on the people in images, but they still produced gender-stereotypical words and harmed gender prediction. The researchers hypothesize that there are two types of bias: exploiting context to predict gender, and a skewed probability of generating gender-stereotypical words. The proposed framework learns from synthetic data to reduce both types of bias, correcting gender misclassification and replacing gender-stereotypical words with more neutral ones.
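To make the second debiasing step concrete, here is a minimal illustrative sketch (not the paper's LIBRA implementation) of replacing gender-stereotypical words in a generated caption with more neutral alternatives; the substitution table below is hypothetical, whereas LIBRA learns such corrections from synthetically biased caption data.

```python
# Hypothetical substitution table for illustration only; LIBRA learns its
# corrections from synthetic biased samples rather than a hand-written list.
NEUTRAL_SUBSTITUTIONS = {
    "businessman": "businessperson",
    "housewife": "homemaker",
    "policeman": "police officer",
}

def neutralize_caption(caption: str) -> str:
    """Replace known gender-stereotypical words with neutral alternatives."""
    out = []
    for word in caption.split():
        out.append(NEUTRAL_SUBSTITUTIONS.get(word.lower(), word))
    return " ".join(out)

print(neutralize_caption("a businessman talking on a phone"))
```

A real system would also need the first kind of correction, fixing the predicted gender itself when the model has inferred it from context rather than from the person in the image.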