Friday May 12, 2023

CVPR 2023 - Model-Agnostic Gender Debiased Image Captioning

In this episode we discuss Model-Agnostic Gender Debiased Image Captioning by Yusuke Hirota, Yuta Nakashima, and Noa Garcia. The paper addresses gender bias in image captioning models and proposes a framework named LIBRA to mitigate it. The authors observe that captioning models trained on images of people tend to generate gender-stereotypical words and to misclassify gender. They hypothesize that two types of bias are at work: bias that exploits context to predict gender, and bias in the probability of generating gender-stereotypical words. The proposed framework learns from synthetic data to reduce both types of bias, correcting gender misclassification and changing gender-stereotypical words to more neutral ones.
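To make the synthetic-data idea concrete, here is a minimal, purely illustrative sketch (the word lists, function names, and swap rules are assumptions for this blurb, not from the paper): starting from a clean caption, we synthesize the two kinds of biased captions described above, so that a caption-editing model could learn to map each biased version back to the clean one.

```python
# Illustrative swap tables (hypothetical, not the paper's vocabulary).
GENDER_SWAP = {"man": "woman", "woman": "man", "he": "she", "she": "he",
               "his": "her", "her": "his", "boy": "girl", "girl": "boy"}

# Toy mapping from a neutral word to a gender-stereotypical alternative.
STEREOTYPE_SWAP = {"cooking": "baking"}

def swap_gender(caption: str) -> str:
    """Simulate gender misclassification by flipping gendered words."""
    return " ".join(GENDER_SWAP.get(w, w) for w in caption.split())

def inject_stereotype(caption: str) -> str:
    """Simulate biased word choice by swapping in a stereotypical word."""
    return " ".join(STEREOTYPE_SWAP.get(w, w) for w in caption.split())

def make_synthetic_pairs(caption: str):
    """Build (biased, clean) pairs a debiasing model could train on."""
    return [(swap_gender(caption), caption),
            (inject_stereotype(caption), caption)]
```

For example, from "a woman is cooking" this yields the misgendered caption "a man is cooking" and the stereotyped caption "a woman is baking", each paired with the original as the correction target.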

