NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field

Eurographics Symposium on Rendering (EGSR) 2022


Zhong Li¹, Liangchen Song², Celong Liu¹, Junsong Yuan², Yi Xu¹
¹OPPO US Research Center, Palo Alto, California, USA
²University at Buffalo

Paper arXiv Code Data

In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Unlike previous light field approaches, which require dense view sampling to reliably render novel views, our method renders novel views by sampling rays and querying the color of each ray directly from the network, thus enabling high-quality light field rendering with a sparser set of training images. Per-ray depth can optionally be predicted by the network, enabling applications such as auto refocus. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection, while maintaining an interactive frame rate and a small memory footprint.
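As a concrete illustration of the formulation above, the following is a minimal sketch of such a scene-specific fully connected network in PyTorch, mapping a two-plane 4D ray coordinate to an RGB color with an optional depth head. The layer widths, depth, and activations are assumptions for illustration, not the exact architecture used in the paper.

```python
# Minimal sketch: a fully connected network that maps a two-plane 4D ray
# coordinate (u, v, s, t) to an RGB color and, optionally, a per-ray depth.
# Layer widths, depth, and activations are illustrative assumptions.
import torch
import torch.nn as nn

class NeuLFSketch(nn.Module):
    def __init__(self, hidden_dim=256, num_layers=8):
        super().__init__()
        layers = [nn.Linear(4, hidden_dim), nn.ReLU()]
        for _ in range(num_layers - 1):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
        self.backbone = nn.Sequential(*layers)
        self.rgb_head = nn.Linear(hidden_dim, 3)    # color branch
        self.depth_head = nn.Linear(hidden_dim, 1)  # optional depth branch

    def forward(self, uvst):
        # uvst: (N, 4) two-plane ray coordinates
        h = self.backbone(uvst)
        rgb = torch.sigmoid(self.rgb_head(h))       # colors in [0, 1]
        depth = self.depth_head(h)                  # per-ray scene depth
        return rgb, depth
```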

Demo


Real-Time Demo


Overview


For a set of rays sampled from the training images, their 4D coordinates and corresponding color values can be obtained. The input to NeuLF is the 4D coordinate of a ray (the query), and the output is its RGB color and scene depth. By minimizing the differences between the predicted colors and the ground-truth colors, NeuLF faithfully learns the mapping between a ray's 4D coordinate and its color. We also add a depth branch so that the network learns per-ray scene depth through self-supervised losses. A sketch of this training setup is given below.
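The sketch below illustrates, under the same assumptions as the network above, how rays could be converted to two-plane 4D coordinates and supervised with a photometric loss. The plane positions (z = 0 and z = 1) and the L2 loss are illustrative choices; the self-supervised depth losses described above are omitted.

```python
# Sketch of ray parameterization and one training step (assumed setup).
import torch

def ray_to_uvst(origins, directions, z_uv=0.0, z_st=1.0):
    """Intersect rays with two parallel planes z = z_uv and z = z_st
    and return the 4D two-plane coordinates (u, v, s, t)."""
    t_uv = (z_uv - origins[:, 2]) / directions[:, 2]
    t_st = (z_st - origins[:, 2]) / directions[:, 2]
    uv = origins[:, :2] + t_uv[:, None] * directions[:, :2]
    st = origins[:, :2] + t_st[:, None] * directions[:, :2]
    return torch.cat([uv, st], dim=-1)  # (N, 4)

def training_step(model, optimizer, origins, directions, gt_colors):
    """One optimization step: predict ray colors and fit them to ground truth."""
    uvst = ray_to_uvst(origins, directions)
    pred_rgb, _ = model(uvst)
    loss = torch.mean((pred_rgb - gt_colors) ** 2)  # photometric L2 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```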

Results Video


Qualitative Results


Qualitative results on test views from the Shiny dataset. Our method captures more details in the reflective and refractive areas of the scenes.

BibTeX


@inproceedings{li2022neulf,
  booktitle = {Eurographics Symposium on Rendering},
  title     = {NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field},
  author    = {Li, Zhong and Song, Liangchen and Liu, Celong and Yuan, Junsong and Xu, Yi},
  year      = {2022},
}

@article{li2021neulf,
  title   = {NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field},
  author  = {Li, Zhong and Song, Liangchen and Liu, Celong and Yuan, Junsong and Xu, Yi},
  journal = {arXiv e-prints},
  pages   = {arXiv--2105},
  year    = {2021}
}

Related Work


Modeling light fields with neural networks has also been studied in the following works: