
Conference Paper: NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

Title: NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
Authors: Wang, P; Liu, L; Liu, Y; Theobalt, C; Komura, T; Wang, WP
Issue Date: 2021
Publisher: Curran Associates
Citation: Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th conference on neural information processing systems (Neurips 2021), v. 34, p. 27171-27183
Abstract: We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR [Niemeyer et al., 2020] and IDR [Yariv et al., 2020], require foreground masks as supervision, easily get trapped in local minima, and therefore struggle to reconstruct objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF [Mildenhall et al., 2020] and its variants, use volume rendering to produce a neural scene representation that is robust to optimize, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because the representation lacks sufficient surface constraints. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e., bias) for surface reconstruction, and therefore propose a new formulation that is free of bias to first order of approximation, leading to more accurate surface reconstruction even without mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
Persistent Identifier: http://hdl.handle.net/10722/319919
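The unbiased weighting the abstract describes can be sketched numerically: along a ray, NeuS maps sampled SDF values through a logistic CDF and turns the drop in that CDF across each segment into an opacity, which is then accumulated front to back into rendering weights that peak at the zero crossing of the SDF. The sketch below is illustrative only; the function names, the fixed sharpness parameter `s` (learnable in the actual method), and the sampling setup are assumptions, not the authors' released implementation.

```python
import numpy as np

def logistic_cdf(x, s):
    # Phi_s(x): logistic CDF with sharpness s (fixed here for illustration;
    # NeuS learns s during training).
    return 1.0 / (1.0 + np.exp(-s * x))

def neus_weights(sdf, s=64.0):
    """Discrete NeuS-style rendering weights from SDF samples along one ray.

    sdf: (n+1,) SDF values at consecutive sample points, front to back.
    Returns (n,) weights w_i = alpha_i * prod_{j<i} (1 - alpha_j).
    """
    phi = logistic_cdf(sdf, s)
    # Segment opacity: relative drop of Phi_s across the segment,
    # clamped to [0, 1] so it stays a valid opacity.
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)
    # Accumulated transmittance before each segment.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return alpha * trans

# A ray crossing the surface (SDF changes sign) concentrates weight
# around the zero-level set, which is the source of the low-bias behavior.
sdf = np.linspace(1.0, -1.0, 65)   # 65 samples, zero crossing mid-ray
w = neus_weights(sdf)
```

Because the opacity is defined from the change in Phi_s(f) rather than from a density sampled at points, the weight maximum aligns with the SDF zero crossing to first order, which is the bias-free property the abstract refers to.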


DC Field: Value
dc.contributor.author: Wang, P
dc.contributor.author: Liu, L
dc.contributor.author: Liu, Y
dc.contributor.author: Theobalt, C
dc.contributor.author: Komura, T
dc.contributor.author: Wang, WP
dc.date.accessioned: 2022-10-14T05:22:09Z
dc.date.available: 2022-10-14T05:22:09Z
dc.date.issued: 2021
dc.identifier.citation: Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th conference on neural information processing systems (Neurips 2021), v. 34, p. 27171-27183
dc.identifier.uri: http://hdl.handle.net/10722/319919
dc.language: eng
dc.publisher: Curran Associates
dc.relation.ispartof: Advances in Neural Information Processing Systems: 35th conference on neural information processing systems (Neurips 2021)
dc.title: NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
dc.type: Conference_Paper
dc.identifier.email: Komura, T: taku@cs.hku.hk
dc.identifier.authority: Komura, T=rp02741
dc.identifier.authority: Wang, WP=rp00186
dc.identifier.hkuros: 338926
dc.identifier.volume: 34
dc.identifier.spage: 27171
dc.publisher.place: United States
dc.identifier.eisbn: 9781713845393
