Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field

ACM MM 2023

Zhong Li1  Liangchen Song2  Zhang Chen1  Xiangyu Du1  Lele Chen1  Junsong Yuan2  Yi Xu1

1OPPO US Research Center
2University at Buffalo

[Paper]      [Dataset]      [Code]      [Video]     


A neural 4D light field approach that simultaneously relights and synthesizes novel viewpoints of complex scenes.

Abstract


In this paper, we address the problem of simultaneous relighting and novel view synthesis of a complex scene from multi-view images with a limited number of light sources. We propose an analysis-synthesis approach called Relit-NeuLF. Following the recent neural 4D light field network (NeuLF), Relit-NeuLF first leverages a two-plane light field representation to parameterize each ray in a 4D coordinate system, enabling efficient learning and inference. Then, we recover the spatially-varying bidirectional reflectance distribution function (SVBRDF) of a 3D scene in a self-supervised manner. A DecomposeNet learns to map each ray to its SVBRDF components: albedo, normal, and roughness. Based on the decomposed BRDF components and conditioning light directions, a RenderNet learns to synthesize the color of the ray. To self-supervise the SVBRDF decomposition, we encourage the predicted ray color to be close to the physically-based rendering result using the microfacet model. Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data, and outperforms state-of-the-art methods.
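As background, the sketch below illustrates one common convention for the two-plane parameterization the abstract refers to: a ray is identified by its intersections (u, v) and (s, t) with two parallel planes. The plane depths and axis conventions here are our own illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def two_plane_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Parameterize a ray by its intersections with two parallel planes.

    Intersecting the ray with the planes z = z_uv and z = z_st yields the
    4D light-field coordinate (u, v, s, t) used as network input.
    The plane depths here are illustrative; they are fixed per scene.
    """
    t_uv = (z_uv - origin[2]) / direction[2]   # ray parameter at first plane
    t_st = (z_st - origin[2]) / direction[2]   # ray parameter at second plane
    u, v = (origin + t_uv * direction)[:2]
    s, t = (origin + t_st * direction)[:2]
    return np.array([u, v, s, t])

# Example: a ray from a camera center through a pixel direction
ray_o = np.array([0.0, 0.0, -2.0])
ray_d = np.array([0.1, -0.05, 1.0])
ray_d /= np.linalg.norm(ray_d)
print(two_plane_coords(ray_o, ray_d))
```

Because every ray reduces to a single 4D input, the network can be queried per pixel without volumetric sampling, which is what makes learning and inference efficient.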

Approach


An overview of our proposed Relit-NeuLF. The input is the 4D coordinate of a ray and a light direction. The output is the RGB radiance of the ray under the light direction.

Our DecomposeNet first takes the 4D ray coordinate as input and outputs the SVBRDF parameters. Next, the SVBRDF parameters, together with the light direction, are fed into an implicit rendering network (RenderNet) to synthesize the target color. The network is trained end-to-end with a photometric loss and a self-supervised rendering loss.
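The sketch below is a minimal PyTorch rendition of this two-network pipeline. All layer widths, activations, and output heads are our guesses for illustration; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class DecomposeNet(nn.Module):
    """Maps a 4D ray coordinate to SVBRDF components (layer sizes are guesses)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.albedo = nn.Linear(hidden, 3)   # per-ray diffuse albedo
        self.normal = nn.Linear(hidden, 3)   # per-ray surface normal
        self.rough = nn.Linear(hidden, 1)    # per-ray roughness

    def forward(self, uvst):
        h = self.trunk(uvst)
        albedo = torch.sigmoid(self.albedo(h))
        normal = nn.functional.normalize(self.normal(h), dim=-1)
        rough = torch.sigmoid(self.rough(h))
        return albedo, normal, rough

class RenderNet(nn.Module):
    """Predicts RGB radiance from SVBRDF components and a light direction."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + 1 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, albedo, normal, rough, light_dir):
        x = torch.cat([albedo, normal, rough, light_dir], dim=-1)
        return self.mlp(x)

decompose, render = DecomposeNet(), RenderNet()
uvst = torch.randn(8, 4)                                        # batch of 4D ray coordinates
light = nn.functional.normalize(torch.randn(8, 3), dim=-1)      # batch of light directions
rgb = render(*decompose(uvst), light)                           # (8, 3) predicted radiance
```

In training, the predicted radiance would be supervised by the captured pixel color (photometric loss), while a microfacet-model render of the same decomposed components supplies the self-supervised rendering loss.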

LumiView Dataset


We used Blender’s physically based path tracer to render three textured objects: a synthetic face, a wood train, and a face mask. We set up 5 × 5 camera views on the front hemisphere, placed 105 directional light sources around the full sphere, and rendered at a resolution of 800 × 800 pixels. Our rendered data can be downloaded here.
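For concreteness, the snippet below shows one plausible way to lay out such a capture rig in code: a grid of cameras on the front hemisphere and roughly uniform light directions on the full sphere. The angular ranges, radius, and Fibonacci sampling are illustrative assumptions, not the dataset's exact configuration.

```python
import numpy as np

def hemisphere_cameras(n_az=5, n_el=5, radius=4.0):
    """Place an n_az x n_el grid of cameras on the front hemisphere.

    Angular ranges and radius are illustrative, not the dataset's values.
    """
    az = np.linspace(-np.pi / 3, np.pi / 3, n_az)   # azimuth sweep
    el = np.linspace(-np.pi / 6, np.pi / 3, n_el)   # elevation sweep
    cams = [radius * np.array([np.cos(e) * np.sin(a),
                               np.sin(e),
                               np.cos(e) * np.cos(a)])
            for e in el for a in az]
    return np.stack(cams)                           # (25, 3) camera positions

def fibonacci_sphere(n=105):
    """Distribute n directional lights roughly uniformly on the full sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increment
    y = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - y * y)
    return np.stack([r * np.cos(phi), y, r * np.sin(phi)], axis=-1)

print(hemisphere_cameras().shape, fibonacci_sphere().shape)
```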

Relighting and Novel View Synthesis Results


Our Relit-NeuLF model can generate rendering results under novel viewpoints and novel lighting directions. Here, we show qualitative relighting results for different synthetic and real data.

HDRI Relighting Results


We demonstrate the ability of our method to synthesize visually pleasing relighting under arbitrary HDRI environment maps. Because our method can recover the reflectance under novel lighting directions with high angular resolution, we can relight the object by treating an HDRI environment map as a collection of one-light-at-a-time (OLAT) lighting conditions.
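The sketch below outlines this OLAT-style relighting idea: the HDRI-relit image is a sum of single-light renders weighted by the environment radiance along each light direction. The function names (render_olat, env_map) and the uniform solid-angle weighting are hypothetical simplifications.

```python
import numpy as np

def relight_with_hdri(render_olat, light_dirs, env_map):
    """Relight by summing OLAT renders weighted by environment radiance.

    render_olat(d) -> HxWx3 image of the scene lit by one directional
    light d; light_dirs (Nx3) samples the sphere; env_map(d) returns the
    HDRI radiance along d. All names and sampling here are illustrative.
    """
    image = None
    # Solid-angle weight for a uniform sample of N directions on the sphere
    d_omega = 4.0 * np.pi / len(light_dirs)
    for d in light_dirs:
        olat = render_olat(d)                  # one-light-at-a-time render
        contrib = env_map(d) * olat * d_omega  # weight by env radiance
        image = contrib if image is None else image + contrib
    return image

# Example with a dummy Lambertian-style renderer and a white environment:
dirs = np.random.randn(105, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
img = relight_with_hdri(lambda d: np.ones((4, 4, 3)) * max(d[2], 0.0),
                        dirs, lambda d: np.ones(3))
print(img.shape)
```

This works because light transport is linear in the illumination: the render under any environment map is a weighted sum of renders under its constituent directional lights.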

BibTex

@inproceedings{li2023relitneulf,
  title={Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field},
  author={Li, Zhong and Song, Liangchen and Chen, Zhang and Du, Xiangyu and Chen, Lele and Yuan, Junsong and Xu, Yi},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  year={2023}
}

Acknowledgements

We wish to convey our gratitude to our previous intern, Yuqi Ding, for his foundational efforts on the dome structure and the development of the multi-lighting system. Website adapted from Dreambooth.