Friday, May 12, 2023

CVPR 2023 highlight paper - DiffRF: Rendering-Guided 3D Radiance Field Diffusion

In this episode we discuss DiffRF: Rendering-Guided 3D Radiance Field Diffusion by Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, and Matthias Nießner. The paper introduces DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models. Unlike existing diffusion-based methods that operate on images, latent codes, or point cloud data, DiffRF directly generates volumetric radiance fields. The model addresses the difficulty of obtaining ground-truth radiance field samples by pairing the denoising formulation with a rendering loss. DiffRF learns multi-view consistent priors that enable free-view synthesis and accurate shape generation, and it naturally supports conditional generation such as masked completion or single-view 3D synthesis at inference time.
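To make the core training idea more concrete, here is a minimal, hypothetical PyTorch sketch of a denoising diffusion objective on explicit radiance-field grids combined with a rendering loss on the denoised estimate. The names (denoiser, render_views), the grid layout, the noise schedule, and the loss weighting are illustrative assumptions for this episode's summary, not the authors' actual implementation.

import torch
import torch.nn.functional as F

def diffusion_schedule(T=1000, device="cpu"):
    # Linear beta schedule; the cumulative product of (1 - beta) gives the
    # closed-form forward process q(x_t | x_0).
    betas = torch.linspace(1e-4, 0.02, T, device=device)
    return torch.cumprod(1.0 - betas, dim=0)

def training_step(denoiser, render_views, x0, gt_images, cameras,
                  alphas_cumprod, lambda_render=0.1):
    # x0:        (B, C, D, H, W) explicit radiance-field volumes (hypothetical layout)
    # gt_images: ground-truth renderings of x0 from the given cameras
    B = x0.shape[0]
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)

    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
    a_bar = alphas_cumprod[t].view(B, 1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    # Standard DDPM objective: predict the injected noise.
    pred_noise = denoiser(x_t, t)
    loss_diff = F.mse_loss(pred_noise, noise)

    # Recover an estimate of x0 and supervise it with a rendering loss, so the
    # model is guided by 2D observations rather than relying solely on clean
    # radiance-field ground truth.
    x0_hat = (x_t - (1.0 - a_bar).sqrt() * pred_noise) / a_bar.sqrt()
    rendered = render_views(x0_hat, cameras)  # assumed differentiable volume renderer
    loss_render = F.mse_loss(rendered, gt_images)

    return loss_diff + lambda_render * loss_render

The sketch only illustrates the pairing of the two objectives discussed in the episode; the network architecture and rendering details are left abstract.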
