Conference Paper: DDP: Diffusion model for dense visual prediction

Title: DDP: Diffusion model for dense visual prediction
Authors: Ji, Yuanfeng; Chen, Zhe; Xie, Enze; Hong, Lanqing; Liu, Xihui; Liu, Zhaoqiang; Lu, Tong; Li, Zhenguo; Luo, Ping
Issue Date: 2-Oct-2023
Abstract

We propose a simple, efficient, yet powerful framework for dense visual predictions based on the conditional diffusion pipeline. Our approach follows a "noise-to-map" generative paradigm for prediction by progressively removing noise from a random Gaussian distribution, guided by the image. The method, called DDP, efficiently extends the denoising diffusion process into the modern perception pipeline. Without task-specific design and architecture customization, DDP generalizes easily to most dense prediction tasks, e.g., semantic segmentation and depth estimation. In addition, DDP shows attractive properties such as dynamic inference and uncertainty awareness, in contrast to previous single-step discriminative methods. We show top results on three representative tasks across six diverse benchmarks. Without tricks, DDP achieves state-of-the-art or competitive performance on each task compared with specialist counterparts: for example, semantic segmentation (83.9 mIoU on Cityscapes), BEV map segmentation (70.6 mIoU on nuScenes), and depth estimation (0.05 REL on KITTI). We hope that our approach will serve as a solid baseline and facilitate future research.
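The "noise-to-map" paradigm in the abstract can be pictured as a conditional sampling loop: start from a map-shaped tensor of pure Gaussian noise and repeatedly denoise it, conditioning each step on features from the input image. The sketch below is a minimal illustration only; the function names, the latent-map channel count, and the linear blending schedule are all assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def ddp_inference(image_feats, denoiser, steps=3, seed=0):
    """Noise-to-map sampling sketch (hypothetical API, not the DDP codebase).

    Starts from a random Gaussian "map" and progressively removes noise,
    conditioning every step on the encoded image features.
    """
    rng = np.random.default_rng(seed)
    h, w = image_feats.shape[0], image_feats.shape[1]
    c = 4                                        # latent map channels (assumed)
    x = rng.standard_normal((h, w, c))           # pure-noise starting map
    for t in reversed(range(1, steps + 1)):
        alpha = t / steps                        # illustrative linear noise schedule
        pred = denoiser(x, image_feats, t)       # predict the clean map, guided by the image
        # Blend toward the prediction as the noise level decreases
        x = alpha * x + (1 - alpha) * pred
    return pred

# Toy usage with a stand-in "denoiser" that simply damps its input
feats = np.zeros((8, 8, 16))
out = ddp_inference(feats, lambda x, f, t: x * 0.5, steps=4)
print(out.shape)  # (8, 8, 4)
```

Because the number of refinement steps is a runtime argument rather than a property of the weights, such a loop can trade compute for accuracy on the fly, which is one way to read the "dynamic inference" property claimed in the abstract.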


Persistent Identifier: http://hdl.handle.net/10722/337773


DC Field | Value | Language
dc.contributor.author | Ji, Yuanfeng | -
dc.contributor.author | Chen, Zhe | -
dc.contributor.author | Xie, Enze | -
dc.contributor.author | Hong, Lanqing | -
dc.contributor.author | Liu, Xihui | -
dc.contributor.author | Liu, Zhaoqiang | -
dc.contributor.author | Lu, Tong | -
dc.contributor.author | Li, Zhenguo | -
dc.contributor.author | Luo, Ping | -
dc.date.accessioned | 2024-03-11T10:23:47Z | -
dc.date.available | 2024-03-11T10:23:47Z | -
dc.date.issued | 2023-10-02 | -
dc.identifier.uri | http://hdl.handle.net/10722/337773 | -
dc.description.abstract | We propose a simple, efficient, yet powerful framework for dense visual predictions based on the conditional diffusion pipeline. Our approach follows a "noise-to-map" generative paradigm for prediction by progressively removing noise from a random Gaussian distribution, guided by the image. The method, called DDP, efficiently extends the denoising diffusion process into the modern perception pipeline. Without task-specific design and architecture customization, DDP generalizes easily to most dense prediction tasks, e.g., semantic segmentation and depth estimation. In addition, DDP shows attractive properties such as dynamic inference and uncertainty awareness, in contrast to previous single-step discriminative methods. We show top results on three representative tasks across six diverse benchmarks. Without tricks, DDP achieves state-of-the-art or competitive performance on each task compared with specialist counterparts: for example, semantic segmentation (83.9 mIoU on Cityscapes), BEV map segmentation (70.6 mIoU on nuScenes), and depth estimation (0.05 REL on KITTI). We hope that our approach will serve as a solid baseline and facilitate future research. | -
dc.language | eng | -
dc.relation.ispartof | IEEE International Conference on Computer Vision 2023 (02/10/2023-06/10/2023, Paris) | -
dc.title | DDP: Diffusion model for dense visual prediction | -
dc.type | Conference_Paper | -
