
Thursday May 11, 2023
CVPR 2023 - Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures
In this episode we discuss Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures by Eugenia Iofinova, Alexandra Peste, and Dan Alistarh. The paper investigates the relationship between neural network pruning and the bias it induces in Convolutional Neural Networks (CNNs) for computer vision. The authors show that highly sparse models (with less than 10% of the weights remaining) can maintain accuracy without increasing bias relative to their dense counterparts. However, at even higher sparsities, pruned models exhibit greater uncertainty in their outputs, as well as increased correlations, both of which are linked to increased bias. The authors propose easy-to-use criteria to determine whether pruning will increase bias and to identify the samples most susceptible to biased predictions.
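For listeners who want a concrete picture of the two ingredients mentioned in the summary, here is a minimal sketch (not the authors' exact procedure) that magnitude-prunes a CNN to roughly 10% remaining weights and scores per-sample predictive uncertainty; the toy network, the 90% pruning amount, and the use of softmax entropy as the uncertainty measure are illustrative assumptions.

```python
# Illustrative only: global magnitude pruning of a toy CNN plus a per-sample
# uncertainty score, loosely mirroring the concepts discussed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

class TinyCNN(nn.Module):
    """Small stand-in CNN; the paper works with standard vision models."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(torch.flatten(x, 1))

model = TinyCNN()

# Global magnitude pruning: zero out 90% of conv/linear weights,
# i.e. keep about 10% of the weights, as in the "highly sparse" regime.
params_to_prune = [
    (m, "weight") for m in model.modules()
    if isinstance(m, (nn.Conv2d, nn.Linear))
]
prune.global_unstructured(
    params_to_prune, pruning_method=prune.L1Unstructured, amount=0.90
)

@torch.no_grad()
def predictive_entropy(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Softmax entropy per sample; higher values mean more uncertain outputs,
    a simple proxy for flagging samples at risk of unreliable predictions."""
    probs = F.softmax(model(x), dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

x = torch.randn(4, 3, 32, 32)        # stand-in batch of 32x32 RGB images
print(predictive_entropy(model, x))  # one uncertainty score per sample
```

In this sketch, comparing the entropy scores of the pruned model against the dense model on the same inputs is one simple way to surface the samples whose predictions degrade most under pruning.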