
Saturday May 13, 2023
CVPR 2023 - Feature Separation and Recalibration for Adversarial Robustness
In this episode we discuss Feature Separation and Recalibration for Adversarial Robustness by Woo Jae Kim, Yoonki Cho, Junsik Jung, and Sung-Eui Yoon. The paper proposes Feature Separation and Recalibration (FSR), a novel approach for improving the robustness of deep neural networks against adversarial attacks. FSR first disentangles the non-robust feature activations, which are responsible for model mispredictions under adversarial attacks, from the robust activations; it then recalibrates the non-robust activations to restore potentially useful cues for correct predictions, rather than simply deactivating them. Extensive experiments show that FSR outperforms traditional deactivation techniques and improves the robustness of existing adversarial training methods by up to 8.57% with minimal computational overhead.
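The separate-then-recalibrate idea described above can be illustrated with a minimal sketch. This is not the paper's actual architecture (which learns the separation and recalibration with small trainable subnetworks inside the backbone); it is a toy NumPy version with hypothetical scalar parameters `w_sep` and `w_rec` standing in for those learned components, shown only to make the data flow concrete.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def fsr_forward(features, w_sep, w_rec):
    """Toy feature separation and recalibration.

    features : array of feature activations
    w_sep    : stand-in for the learned separation component
    w_rec    : stand-in for the learned recalibration component
    """
    # Separation: score each activation; high scores mark "robust" activations.
    scores = sigmoid(features * w_sep)
    robust = scores * features
    non_robust = (1.0 - scores) * features

    # Recalibration: adjust the non-robust activations to recover useful
    # cues instead of discarding (zeroing out) them.
    recalibrated = non_robust + np.tanh(non_robust * w_rec)

    # Recombine robust and recalibrated activations for the next layer.
    return robust + recalibrated


rng = np.random.default_rng(0)
features = rng.standard_normal((2, 8))
out = fsr_forward(features, w_sep=1.5, w_rec=0.5)
```

The contrast with deactivation approaches is visible in the last two steps: a deactivation method would return `robust` alone, while this sketch keeps a recalibrated version of the non-robust part in the output.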