Article: NeRFBuff: Fast Neural Rendering via Inter-frame Feature Buffering
Field | Value
---|---
Title | NeRFBuff: Fast Neural Rendering via Inter-frame Feature Buffering
Authors | Liu, Anran; Liu, Yuan; Long, Xiaoxiao; Wang, Peng; Lin, Cheng; Luo, Ping; Wang, Wenping
Keywords | Coherence; Image color analysis; Memory management; neural radiance fields; Neural rendering; Real-time systems; Rendering (computer graphics); rendering acceleration; Three-dimensional displays; Trajectory
Issue Date | 1-Jan-2024
Publisher | Institute of Electrical and Electronics Engineers
Citation | IEEE Transactions on Visualization and Computer Graphics, 2024, p. 1-14
Abstract | Neural radiance fields (NeRF) have demonstrated impressive performance in novel view synthesis, but are still slow to render complex scenes at high resolution. We introduce a novel method to boost NeRF rendering speed by exploiting the temporal coherence between consecutive frames. Rather than computing the features of each frame entirely from scratch, we reuse the coherent information (e.g., density and color) computed for previous frames to help render the current frame, which significantly boosts rendering speed. To effectively manage the coherent information of previous frames, we introduce a history buffer with a multiple-plane structure, which is built online and updated from old frames to new frames. We name this buffer the multiple-plane buffer (MPB). With the MPB, a new frame can be efficiently rendered using features warped from previous frames. Extensive experiments on the NeRF-Synthetic, LLFF, and Mip-NeRF-360 datasets demonstrate that our method significantly boosts rendering efficiency, achieving a 4× speedup on real-world scenes over baseline methods while preserving competitive rendering quality.
Persistent Identifier | http://hdl.handle.net/10722/350888
ISSN | 1077-2626 (2023 Impact Factor: 4.7; 2023 SCImago Journal Rankings: 2.056)
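The abstract describes caching per-plane features (e.g., density and color) from previous frames and reprojecting them to accelerate rendering of the next frame. The sketch below illustrates that buffering idea only; the class name, methods, and the shift-based "warp" are assumptions for illustration, not the paper's implementation (which would apply a per-plane homography derived from the camera motion).

```python
import numpy as np

class MultiPlaneBuffer:
    """Toy inter-frame feature buffer: cached features from previous
    frames live on a stack of depth planes, each with a validity mask."""

    def __init__(self, num_planes, height, width, feat_dim):
        # One feature image per depth plane, plus a mask marking which
        # pixels hold valid cached features.
        self.planes = np.zeros((num_planes, height, width, feat_dim))
        self.valid = np.zeros((num_planes, height, width), dtype=bool)

    def update(self, plane_idx, ys, xs, feats):
        # Write freshly computed per-sample features into the buffer.
        self.planes[plane_idx, ys, xs] = feats
        self.valid[plane_idx, ys, xs] = True

    def warp(self, plane_idx, dy, dx):
        # Stand-in for reprojection: shift the cached plane by the pixel
        # motion the camera move induces at that plane's depth. Invalid
        # (mask == False) pixels must be recomputed from scratch.
        feats = np.roll(self.planes[plane_idx], (dy, dx), axis=(0, 1))
        mask = np.roll(self.valid[plane_idx], (dy, dx), axis=(0, 1))
        return feats, mask

# Cache a feature at pixel (1, 1) of plane 0, then "move" the camera
# one pixel to the right and fetch the warped cache for the new frame.
buf = MultiPlaneBuffer(num_planes=2, height=4, width=4, feat_dim=3)
buf.update(0, np.array([1]), np.array([1]), np.array([[0.2, 0.5, 0.9]]))
feats, mask = buf.warp(0, dy=0, dx=1)
assert mask[1, 2] and np.allclose(feats[1, 2], [0.2, 0.5, 0.9])
```

Pixels where the warped mask is False have no usable history and fall back to a full NeRF evaluation, which is where the claimed speedup comes from: only the cache misses pay the full cost.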
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Anran | - |
dc.contributor.author | Liu, Yuan | - |
dc.contributor.author | Long, Xiaoxiao | - |
dc.contributor.author | Wang, Peng | - |
dc.contributor.author | Lin, Cheng | - |
dc.contributor.author | Luo, Ping | - |
dc.contributor.author | Wang, Wenping | - |
dc.date.accessioned | 2024-11-06T00:30:27Z | - |
dc.date.available | 2024-11-06T00:30:27Z | - |
dc.date.issued | 2024-01-01 | - |
dc.identifier.citation | IEEE Transactions on Visualization and Computer Graphics, 2024, p. 1-14 | - |
dc.identifier.issn | 1077-2626 | - |
dc.identifier.uri | http://hdl.handle.net/10722/350888 | - |
dc.description.abstract | Neural radiance fields (NeRF) have demonstrated impressive performance in novel view synthesis, but are still slow to render complex scenes at high resolution. We introduce a novel method to boost NeRF rendering speed by exploiting the temporal coherence between consecutive frames. Rather than computing the features of each frame entirely from scratch, we reuse the coherent information (e.g., density and color) computed for previous frames to help render the current frame, which significantly boosts rendering speed. To effectively manage the coherent information of previous frames, we introduce a history buffer with a multiple-plane structure, which is built online and updated from old frames to new frames. We name this buffer the multiple-plane buffer (MPB). With the MPB, a new frame can be efficiently rendered using features warped from previous frames. Extensive experiments on the NeRF-Synthetic, LLFF, and Mip-NeRF-360 datasets demonstrate that our method significantly boosts rendering efficiency, achieving a 4× speedup on real-world scenes over baseline methods while preserving competitive rendering quality. | -
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.ispartof | IEEE Transactions on Visualization and Computer Graphics | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Coherence | - |
dc.subject | Image color analysis | - |
dc.subject | Memory management | - |
dc.subject | neural radiance fields | - |
dc.subject | Neural rendering | - |
dc.subject | Real-time systems | - |
dc.subject | Rendering (computer graphics) | - |
dc.subject | rendering acceleration | - |
dc.subject | Three-dimensional displays | - |
dc.subject | Trajectory | - |
dc.title | NeRFBuff: Fast Neural Rendering via Inter-frame Feature Buffering | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/TVCG.2024.3393715 | - |
dc.identifier.scopus | eid_2-s2.0-85191881701 | - |
dc.identifier.spage | 1 | - |
dc.identifier.epage | 14 | - |
dc.identifier.eissn | 1941-0506 | - |
dc.identifier.issnl | 1077-2626 | - |