Article: Anisotropic Convolutional Neural Networks for RGB-D based Semantic Scene Completion

Title: Anisotropic Convolutional Neural Networks for RGB-D based Semantic Scene Completion
Authors: Li, Jie; Wang, Peng; Han, Kai; Liu, Yu
Keywords: 3D scene understanding; anisotropic convolution; Context modeling; Convolution; dimensional decomposition convolution; Kernel; semantic scene completion; Semantics; Solid modeling; Task analysis; Three-dimensional displays
Issue Date: 2021
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Abstract: Semantic Scene Completion (SSC) is a computer vision task aiming to simultaneously infer the occupancy and semantic labels for each voxel in a scene from partial information consisting of a depth image and/or an RGB image. As a voxel-wise labeling task, the key for SSC is how to effectively model the visual and geometrical variations to complete the scene. To this end, we propose the Anisotropic Network, with novel convolutional modules that can model varying anisotropic receptive fields voxel-wisely in a computationally efficient manner. The basic idea to achieve such anisotropy is to decompose 3D convolution into consecutive dimensional convolutions, and determine the dimension-wise kernels on the fly. One module, termed kernel-selection anisotropic convolution, adaptively selects the optimal kernel for each dimensional convolution from a set of candidate kernels, and the other module, termed kernel-modulation anisotropic convolution, modulates a single kernel for each dimension to derive a more flexible receptive field. By stacking multiple such modules, the 3D context modeling capability and flexibility can be further enhanced. Moreover, we present a new end-to-end trainable framework to approach the SSC task, avoiding the expensive TSDF pre-processing used by existing methods. Extensive experiments on SSC benchmarks show the advantage of the proposed methods.
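The decomposition described in the abstract can be illustrated with a minimal sketch: a 3D convolution is replaced by three consecutive 1-D convolutions, one per axis, and each axis's kernel is softly selected from a set of candidates. This is an illustrative sketch only, not the paper's implementation; the function names, the softmax-based blending, and the fixed selection scores are assumptions (in the paper, the per-dimension kernel selection is predicted on the fly for each voxel).

```python
# Sketch (assumed, not the paper's code): dimensional decomposition of a
# 3D convolution plus soft kernel selection per axis.
import math

def conv1d_along_axis(volume, kernel, axis):
    """Apply a 1-D kernel along one axis of a 3-D nested-list volume,
    with zero padding so the output keeps the input shape."""
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    r = len(kernel) // 2
    out = [[[0.0] * W for _ in range(H)] for _ in range(D)]
    for z in range(D):
        for y in range(H):
            for x in range(W):
                acc = 0.0
                for k, w in enumerate(kernel):
                    dz = dy = dx = 0
                    if axis == 0:
                        dz = k - r
                    elif axis == 1:
                        dy = k - r
                    else:
                        dx = k - r
                    zz, yy, xx = z + dz, y + dy, x + dx
                    if 0 <= zz < D and 0 <= yy < H and 0 <= xx < W:
                        acc += w * volume[zz][yy][xx]
                out[z][y][x] = acc
    return out

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    t = sum(e)
    return [v / t for v in e]

def kernel_selection_conv(volume, candidates, scores):
    """One anisotropic step: for each axis, blend candidate 1-D kernels
    by softmax-normalised selection scores, then convolve the axes in
    sequence. `candidates` is a list of equal-length 1-D kernels;
    `scores` holds one score list per axis (fixed here; predicted
    adaptively in the paper)."""
    out = volume
    for axis in range(3):
        w = softmax(scores[axis])
        blended = [sum(wi * kern[j] for wi, kern in zip(w, candidates))
                   for j in range(len(candidates[0]))]
        out = conv1d_along_axis(out, blended, axis)
    return out
```

Because the three 1-D passes are composed sequentially, each axis can in principle receive a different effective kernel size, which is what makes the receptive field anisotropic while keeping the cost linear in the kernel length rather than cubic.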
Persistent Identifier: http://hdl.handle.net/10722/311518
ISSN: 0162-8828
2021 Impact Factor: 24.314
2020 SCImago Journal Rankings: 3.811


DC Field: Value
dc.contributor.author: Li, Jie
dc.contributor.author: Wang, Peng
dc.contributor.author: Han, Kai
dc.contributor.author: Liu, Yu
dc.date.accessioned: 2022-03-22T11:54:08Z
dc.date.available: 2022-03-22T11:54:08Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/311518
dc.description.abstract: Semantic Scene Completion (SSC) is a computer vision task aiming to simultaneously infer the occupancy and semantic labels for each voxel in a scene from partial information consisting of a depth image and/or a RGB image. As a voxel-wise labeling task, the key for SSC is how to effectively model the visual and geometrical variations to complete the scene. To this end, we propose the Anisotropic Network, with novel convolutional modules that can model varying anisotropic receptive fields voxel-wisely in a computationally efficient manner. The basic idea to achieve such anisotropy is to decompose 3D convolution into consecutive dimensional convolutions, and determine the dimension-wise kernels on the fly. One module, termed kernel-selection anisotropic convolution, adaptively selects the optimal kernel for each dimensional convolution from a set of candidate kernels, and the other module, termed kernel-modulation anisotropic convolution, modulates a single kernel for each dimension to derive more flexible receptive field. By stacking multiple such modules, the 3D context modeling capability and flexibility can be further enhanced. Moreover, we present a new end-to-end trainable framework to approach the SSC task avoiding the expensive TSDF pre-processing as in existing methods. Extensive experiments on SSC benchmarks show the advantage of the proposed methods.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.subject: 3D scene understanding
dc.subject: anisotropic convolution
dc.subject: Context modeling
dc.subject: Convolution
dc.subject: dimensional decomposition convolution
dc.subject: Kernel
dc.subject: semantic scene completion
dc.subject: Semantics
dc.subject: Solid modeling
dc.subject: Task analysis
dc.subject: Three-dimensional displays
dc.title: Anisotropic Convolutional Neural Networks for RGB-D based Semantic Scene Completion
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TPAMI.2021.3081499
dc.identifier.scopus: eid_2-s2.0-85106746781
dc.identifier.eissn: 1939-3539