
Saturday May 06, 2023
CVPR 2023 - On Distillation of Guided Diffusion Models
In this episode we discuss On Distillation of Guided Diffusion Models by Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans.

Affiliations:
- Chenlin Meng, Stefano Ermon: Stanford University
- Robin Rombach: Stability AI & LMU Munich
- Ruiqi Gao, Diederik Kingma, Jonathan Ho, Tim Salimans: Google Research, Brain Team

The paper proposes an approach for distilling classifier-free guided diffusion models, a class of high-resolution image generation models, into faster models that require far fewer sampling steps. The method proceeds in two stages: first, a single student model is trained to match the output of the combined conditional and unconditional teacher models; then, that student is progressively distilled into a diffusion model that needs fewer and fewer sampling steps. On ImageNet 64x64 and CIFAR-10, the distilled models generate images visually comparable to the original while being up to 256 times faster to sample from. For diffusion models trained in latent space, the approach produces high-fidelity images using as few as 1 to 4 denoising steps on the ImageNet 256x256 and LAION datasets, and it is also effective for text-guided image editing and inpainting.
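To make the first stage concrete, here is a minimal PyTorch sketch of distilling the classifier-free guided output into a single guidance-weight-conditioned student. The model signatures (teacher(x_t, t, cond), student(x_t, t, cond, w)), the guidance-weight range, and the cosine noise schedule are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of stage-1 distillation (illustrative, not the paper's code).
# Assumptions: teacher and student are epsilon-prediction networks; t is a
# batch of times in [0, 1]; cond is a conditioning signal (None = dropped);
# the cosine schedule below stands in for whatever schedule is actually used.
import torch

def cosine_schedule(t):
    # alpha_t, sigma_t for a simple cosine noise schedule (assumed), shape (B,).
    return torch.cos(t * torch.pi / 2), torch.sin(t * torch.pi / 2)

def guided_eps(teacher, x_t, t, cond, w):
    # Classifier-free guidance: combine the conditional and unconditional
    # teacher predictions as (1 + w) * eps_cond - w * eps_uncond.
    eps_cond = teacher(x_t, t, cond)
    eps_uncond = teacher(x_t, t, None)  # unconditional branch
    w = w.view(-1, 1, 1, 1)
    return (1 + w) * eps_cond - w * eps_uncond

def stage1_loss(student, teacher, x0, t, cond, w_range=(0.0, 4.0)):
    # Sample a guidance weight per example so one student covers a range of w.
    w = torch.empty(x0.shape[0], device=x0.device).uniform_(*w_range)
    # Forward-diffuse the clean image x0 to time t.
    alpha, sigma = cosine_schedule(t)
    alpha, sigma = alpha.view(-1, 1, 1, 1), sigma.view(-1, 1, 1, 1)
    x_t = alpha * x0 + sigma * torch.randn_like(x0)
    # The w-conditioned student matches the two-pass guided teacher in one pass.
    with torch.no_grad():
        target = guided_eps(teacher, x_t, t, cond, w)
    return torch.mean((student(x_t, t, cond, w) - target) ** 2)
```

In the second stage (not sketched here), the step count is halved repeatedly: each new student is trained so that one of its sampling steps matches two steps of the previous model.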