LightFormer: Light-Oriented Global Neural Rendering in Dynamic Scene


Haocheng Ren1, Yuchi Huo1, Yifan Peng2, Hongtao Sheng1, Weidong Xue1, Hongxiang Huang1, Jingzhen Lan1, Rui Wang1, Hujun Bao1

1State Key Laboratory of CAD & CG, Zhejiang University 2University of Hong Kong

ACM Transactions on Graphics (SIGGRAPH 2024)

Paper | Supp | Video | Interactive viewer



Abstract

The generation of global illumination in real time has been a long-standing challenge in the graphics community, particularly in dynamic scenes with complex illumination. Recent neural rendering techniques have shown great promise by utilizing neural networks to represent the illumination of scenes and then decoding the final radiance. However, incorporating object parameters into the representation may limit their effectiveness in handling fully dynamic scenes. This work presents a neural rendering approach, dubbed LightFormer, that can generate realistic global illumination in real time for fully dynamic scenes, including dynamic lighting, materials, cameras, and animated objects. Inspired by classic many-lights methods, the proposed approach focuses on the neural representation of the light sources in the scene rather than the entire scene, leading to better overall generalizability. The neural prediction is achieved by leveraging virtual point lights and per-light shading clues. Specifically, two stages are explored. In the light encoding stage, each light generates a set of virtual point lights in the scene, which are then encoded into an implicit neural light representation, along with screen-space shading clues such as visibility. In the light gathering stage, a pixel-light attention mechanism composites all light representations for each shading point. Given the geometry and material representation, in tandem with the composed light representations of all lights, a lightweight neural network predicts the final radiance. Experimental results demonstrate that the proposed LightFormer can yield reasonable and realistic global illumination in fully dynamic scenes with real-time performance.
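The light gathering stage can be illustrated with a minimal sketch of the pixel-light attention idea: per-shading-point features act as queries over the set of per-light representations, and a softmax-weighted sum composites them into a single feature that a decoder network would then turn into radiance. The array shapes, feature dimension, and dot-product scoring below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pixel_light_attention(pixel_feats, light_feats):
    """Composite per-light representations for each shading point.

    pixel_feats: (P, D) query features per shading point
                 (stand-in for geometry/material/shading clues).
    light_feats: (L, D) implicit neural light representations, one per light.
    Returns:     (P, D) a single composited light feature per shading point,
                 which a lightweight decoder network would map to radiance.
    """
    d = pixel_feats.shape[1]
    # Scaled dot-product scores between every pixel and every light: (P, L).
    scores = pixel_feats @ light_feats.T / np.sqrt(d)
    # Attention weights over lights sum to 1 for each shading point.
    weights = softmax(scores, axis=-1)
    # Weighted sum of light representations: (P, D).
    return weights @ light_feats
```

Because the weights for each pixel sum to one, the output is a convex combination of the light representations; in the real system the queries, keys, and values would come from learned projections rather than raw features.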



Acknowledgements

We thank all reviewers for their insightful comments. We also thank Chuankun Zheng and Yuzhi Liang for helpful discussions, and He Zhu for preparing 3D scenes.


Citation


@article{ren2024lightformer,
  title={LightFormer: Light-Oriented Global Neural Rendering in Dynamic Scene},
  author={Ren, Haocheng and Huo, Yuchi and Peng, Yifan and Sheng, Hongtao and Huang, Hongxiang and Xue, Weidong and Lan, Jingzhen and Wang, Rui and Bao, Hujun},
  journal={ACM Transactions on Graphics},
  volume={43},
  number={4},
  pages={1--14},
  year={2024},
  publisher={ACM New York, NY}
}