
Article: LightFormer: Light-Oriented Global Neural Rendering in Dynamic Scene

Title: LightFormer: Light-Oriented Global Neural Rendering in Dynamic Scene
Authors: Ren, Haocheng; Huo, Yuchi; Peng, Yifan; Sheng, Hongtao; Xue, Weidong; Huang, Hongxiang; Lan, Jingzhen; Wang, Rui; Bao, Hujun
Issue Date: 19-Jul-2024
Publisher: ACM
Citation: Transactions on Graphics, 2024, v. 43, n. 4
Abstract

The generation of global illumination in real time has been a long-standing challenge in the graphics community, particularly in dynamic scenes with complex illumination. Recent neural rendering techniques have shown great promise by utilizing neural networks to represent the illumination of scenes and then decoding the final radiance. However, incorporating object parameters into the representation may limit their effectiveness in handling fully dynamic scenes. This work presents a neural rendering approach, dubbed LightFormer, that can generate realistic global illumination for fully dynamic scenes, including dynamic lighting, materials, cameras, and animated objects, in real time. Inspired by classic many-lights methods, the proposed approach focuses on the neural representation of light sources in the scene rather than the entire scene, leading to better overall generalizability. The neural prediction is achieved by leveraging the virtual point lights and shading clues for each light. Specifically, two stages are explored. In the light encoding stage, each light generates a set of virtual point lights in the scene, which are then encoded into an implicit neural light representation, along with screen-space shading clues like visibility. In the light gathering stage, a pixel-light attention mechanism composites all light representations for each shading point. Given the geometry and material representation, in tandem with the composed light representations of all lights, a lightweight neural network predicts the final radiance. Experimental results demonstrate that the proposed LightFormer can yield reasonable and realistic global illumination in fully dynamic scenes with real-time performance.
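To make the two-stage pipeline described in the abstract more concrete, the following minimal NumPy sketch illustrates the general idea of the light gathering stage: per-shading-point query features attend over a set of per-light representations, and a small decoder maps the composed result to radiance. This is an illustrative, assumption-based sketch rather than the paper's implementation; the function names (gather_lights, decode_radiance), feature dimensions, and network shapes are hypothetical.

# Illustrative sketch only (hypothetical names and shapes, not the paper's code).
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gather_lights(pixel_feat, light_feats):
    # Pixel-light attention: pixel_feat (P, D) acts as queries, light_feats (L, D)
    # as keys/values; returns one composed light representation per shading point.
    scores = pixel_feat @ light_feats.T / np.sqrt(pixel_feat.shape[-1])  # (P, L)
    weights = softmax(scores, axis=-1)
    return weights @ light_feats  # (P, D)

def decode_radiance(composed, gbuffer_feat, w1, w2):
    # Lightweight two-layer MLP mapping composed light features plus
    # geometry/material features to RGB radiance per shading point.
    h = np.maximum(0.0, np.concatenate([composed, gbuffer_feat], axis=-1) @ w1)
    return h @ w2  # (P, 3)

# Toy usage with random weights, purely to show the data flow.
rng = np.random.default_rng(0)
P, L, D, H = 4, 3, 16, 32                 # shading points, lights, feature dim, hidden dim
pixel_feat   = rng.normal(size=(P, D))    # per-pixel query features
light_feats  = rng.normal(size=(L, D))    # implicit neural light representations
gbuffer_feat = rng.normal(size=(P, D))    # geometry/material features
w1 = rng.normal(size=(2 * D, H)) * 0.1
w2 = rng.normal(size=(H, 3)) * 0.1
radiance = decode_radiance(gather_lights(pixel_feat, light_feats), gbuffer_feat, w1, w2)
print(radiance.shape)  # (4, 3)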


Persistent Identifier: http://hdl.handle.net/10722/361856
ISSN: 0730-0301
2023 Impact Factor: 7.8
2023 SCImago Journal Rankings: 7.766

 

DC Field | Value | Language
dc.contributor.author | Ren, Haocheng | -
dc.contributor.author | Huo, Yuchi | -
dc.contributor.author | Peng, Yifan | -
dc.contributor.author | Sheng, Hongtao | -
dc.contributor.author | Xue, Weidong | -
dc.contributor.author | Huang, Hongxiang | -
dc.contributor.author | Lan, Jingzhen | -
dc.contributor.author | Wang, Rui | -
dc.contributor.author | Bao, Hujun | -
dc.date.accessioned | 2025-09-17T00:31:14Z | -
dc.date.available | 2025-09-17T00:31:14Z | -
dc.date.issued | 2024-07-19 | -
dc.identifier.citation | Transactions on Graphics, 2024, v. 43, n. 4 | -
dc.identifier.issn | 0730-0301 | -
dc.identifier.uri | http://hdl.handle.net/10722/361856 | -
dc.description.abstract | The generation of global illumination in real time has been a long-standing challenge in the graphics community, particularly in dynamic scenes with complex illumination. Recent neural rendering techniques have shown great promise by utilizing neural networks to represent the illumination of scenes and then decoding the final radiance. However, incorporating object parameters into the representation may limit their effectiveness in handling fully dynamic scenes. This work presents a neural rendering approach, dubbed LightFormer, that can generate realistic global illumination for fully dynamic scenes, including dynamic lighting, materials, cameras, and animated objects, in real time. Inspired by classic many-lights methods, the proposed approach focuses on the neural representation of light sources in the scene rather than the entire scene, leading to better overall generalizability. The neural prediction is achieved by leveraging the virtual point lights and shading clues for each light. Specifically, two stages are explored. In the light encoding stage, each light generates a set of virtual point lights in the scene, which are then encoded into an implicit neural light representation, along with screen-space shading clues like visibility. In the light gathering stage, a pixel-light attention mechanism composites all light representations for each shading point. Given the geometry and material representation, in tandem with the composed light representations of all lights, a lightweight neural network predicts the final radiance. Experimental results demonstrate that the proposed LightFormer can yield reasonable and realistic global illumination in fully dynamic scenes with real-time performance. | -
dc.language | eng | -
dc.publisher | ACM | -
dc.relation.ispartof | Transactions on Graphics | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.title | LightFormer: Light-Oriented Global Neural Rendering in Dynamic Scene | -
dc.type | Article | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.1145/3658229 | -
dc.identifier.volume | 43 | -
dc.identifier.issue | 4 | -
dc.identifier.issnl | 0730-0301 | -
