Article: NeRFBuff: Fast Neural Rendering via Inter-frame Feature Buffering

Title: NeRFBuff: Fast Neural Rendering via Inter-frame Feature Buffering
Authors: Liu, Anran; Liu, Yuan; Long, Xiaoxiao; Wang, Peng; Lin, Cheng; Luo, Ping; Wang, Wenping
Keywords: Coherence; Image color analysis; Memory management; neural radiance fields; Neural rendering; Real-time systems; Rendering (computer graphics); rendering acceleration; Three-dimensional displays; Trajectory
Issue Date: 1-Jan-2024
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Visualization and Computer Graphics, 2024, p. 1-14
Abstract: Neural radiance fields (NeRF) have demonstrated impressive performance in novel view synthesis, but are still slow to render complex scenes at a high resolution. We introduce a novel method to boost the NeRF rendering speed by utilizing the temporal coherence between consecutive frames. Rather than computing features of each frame entirely from scratch, we reuse the coherent information (e.g., density and color) computed from the previous frames to help render the current frame, which significantly boosts rendering speed. To effectively manage the coherent information of previous frames, we introduce a history buffer with a multiple-plane structure, which is built online and updated from old frames to new frames. We name this buffer the multiple plane buffer (MPB). With this MPB, a new frame can be efficiently rendered using the warped features from previous frames. Extensive experiments on the NeRF-Synthetic, LLFF, and Mip-NeRF-360 datasets demonstrate that our method significantly boosts rendering efficiency and achieves 4× speedup on real-world scenes compared to the baseline methods while preserving competitive rendering quality.
Persistent Identifier: http://hdl.handle.net/10722/350888
ISSN: 1077-2626
2023 Impact Factor: 4.7
2023 SCImago Journal Rankings: 2.056
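The abstract above is this record's only technical content; the core idea it describes (cache coherent per-plane features such as density and color from previous frames in a multiple plane buffer, warp them into the current view, and fall back to the full NeRF network only where nothing usable is cached) can be sketched in a few lines. The sketch below is purely illustrative and rests on assumptions not stated in this record: the names MultiPlaneBuffer, expensive_nerf_features, and render_with_buffer are hypothetical, the buffer uses fixed fronto-parallel plane depths, the warp is a standard plane-induced homography, and reuse follows a simple "recompute only cache misses" policy; the paper's actual MPB construction and update rules are not described here.

```python
import numpy as np

class MultiPlaneBuffer:
    """Caches per-plane features (e.g. density and color) computed for past frames."""

    def __init__(self, num_planes, h, w, feat_dim):
        self.depths = np.linspace(2.0, 10.0, num_planes)       # assumed fixed plane depths
        self.feats = np.zeros((num_planes, h, w, feat_dim), np.float32)
        self.valid = np.zeros((num_planes, h, w), dtype=bool)  # which cells hold cached data

    def warp(self, K, R, t):
        """Re-align every cached plane to the new camera with a plane-induced homography.

        For a fronto-parallel plane at depth d (normal n = [0, 0, 1]) the standard
        homography is H = K (R - t n^T / d) K^{-1}; we back-warp with nearest lookup.
        """
        n = np.array([0.0, 0.0, 1.0])
        _, h, w = self.valid.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x (h*w)
        K_inv = np.linalg.inv(K)
        for i, d in enumerate(self.depths):
            H = K @ (R - np.outer(t, n) / d) @ K_inv
            src = np.linalg.inv(H) @ pix                       # new pixel -> old pixel
            src = (src[:2] / src[2]).round().astype(int)
            inb = (src[0] >= 0) & (src[0] < w) & (src[1] >= 0) & (src[1] < h)
            feats_new = np.zeros_like(self.feats[i])
            valid_new = np.zeros_like(self.valid[i])
            feats_new.reshape(-1, feats_new.shape[-1])[inb] = self.feats[i][src[1][inb], src[0][inb]]
            valid_new.reshape(-1)[inb] = self.valid[i][src[1][inb], src[0][inb]]
            self.feats[i], self.valid[i] = feats_new, valid_new

def expensive_nerf_features(plane_idx, ys, xs, feat_dim):
    """Stand-in for a full radiance-field network query (the slow path)."""
    return np.random.rand(len(ys), feat_dim).astype(np.float32)

def render_with_buffer(buf, K, R, t):
    """Render one frame: reuse warped cached features, recompute only the cache misses."""
    buf.warp(K, R, t)
    for i in range(len(buf.depths)):
        miss_ys, miss_xs = np.where(~buf.valid[i])
        if miss_ys.size:                                       # slow path only where needed
            buf.feats[i][miss_ys, miss_xs] = expensive_nerf_features(i, miss_ys, miss_xs,
                                                                     buf.feats.shape[-1])
            buf.valid[i][miss_ys, miss_xs] = True
    return buf.feats.mean(axis=0)                              # placeholder per-plane blend

# Frame 0 recomputes everything; frame 1 (small camera shift) is mostly cache hits.
buf = MultiPlaneBuffer(num_planes=8, h=60, w=80, feat_dim=4)
K = np.array([[80.0, 0.0, 40.0], [0.0, 80.0, 30.0], [0.0, 0.0, 1.0]])
img0 = render_with_buffer(buf, K, np.eye(3), np.zeros(3))
img1 = render_with_buffer(buf, K, np.eye(3), np.array([0.05, 0.0, 0.0]))
```

The appeal of a planar buffer in this kind of scheme is that each cached plane can be re-aligned to a new camera with a single 3×3 homography, so reusing it costs far less than re-evaluating the radiance field at every sample.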

DC Field | Value | Language
dc.contributor.author | Liu, Anran | -
dc.contributor.author | Liu, Yuan | -
dc.contributor.author | Long, Xiaoxiao | -
dc.contributor.author | Wang, Peng | -
dc.contributor.author | Lin, Cheng | -
dc.contributor.author | Luo, Ping | -
dc.contributor.author | Wang, Wenping | -
dc.date.accessioned | 2024-11-06T00:30:27Z | -
dc.date.available | 2024-11-06T00:30:27Z | -
dc.date.issued | 2024-01-01 | -
dc.identifier.citation | IEEE Transactions on Visualization and Computer Graphics, 2024, p. 1-14 | -
dc.identifier.issn | 1077-2626 | -
dc.identifier.uri | http://hdl.handle.net/10722/350888 | -
dc.description.abstract | Neural radiance fields (NeRF) have demonstrated impressive performance in novel view synthesis, but are still slow to render complex scenes at a high resolution. We introduce a novel method to boost the NeRF rendering speed by utilizing the temporal coherence between consecutive frames. Rather than computing features of each frame entirely from scratch, we reuse the coherent information (e.g., density and color) computed from the previous frames to help render the current frame, which significantly boosts rendering speed. To effectively manage the coherent information of previous frames, we introduce a history buffer with a multiple-plane structure, which is built online and updated from old frames to new frames. We name this buffer the multiple plane buffer (MPB). With this MPB, a new frame can be efficiently rendered using the warped features from previous frames. Extensive experiments on the NeRF-Synthetic, LLFF, and Mip-NeRF-360 datasets demonstrate that our method significantly boosts rendering efficiency and achieves 4× speedup on real-world scenes compared to the baseline methods while preserving competitive rendering quality. | -
dc.language | eng | -
dc.publisher | Institute of Electrical and Electronics Engineers | -
dc.relation.ispartof | IEEE Transactions on Visualization and Computer Graphics | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Coherence | -
dc.subject | Image color analysis | -
dc.subject | Memory management | -
dc.subject | neural radiance fields | -
dc.subject | Neural rendering | -
dc.subject | Real-time systems | -
dc.subject | Rendering (computer graphics) | -
dc.subject | rendering acceleration | -
dc.subject | Three-dimensional displays | -
dc.subject | Trajectory | -
dc.title | NeRFBuff: Fast Neural Rendering via Inter-frame Feature Buffering | -
dc.type | Article | -
dc.identifier.doi | 10.1109/TVCG.2024.3393715 | -
dc.identifier.scopus | eid_2-s2.0-85191881701 | -
dc.identifier.spage | 1 | -
dc.identifier.epage | 14 | -
dc.identifier.eissn | 1941-0506 | -
dc.identifier.issnl | 1077-2626 | -
