NeRFFaceLighting: Implicit and Disentangled Face Lighting Representation Leveraging Generative Prior in Neural Radiance Fields

 

Kaiwen Jiang1,2          Shu-Yu Chen1           Hongbo Fu3       Lin Gao1*

 

1 Institute of Computing Technology, Chinese Academy of Sciences

 

2 Beijing Jiaotong University

 

3 City University of Hong Kong      

 

* Corresponding author  

 

 

 

Accepted by ACM Transactions on Graphics

 

 

 

 

Figure: Our NeRFFaceLighting method achieves disentangled and 3D-aware lighting control with realistic shading at real-time rendering speed. We construct two separate latent spaces: one for geometry and appearance (leftmost diagram) and the other for lighting (rightmost diagram). Samples are generated from the geometry and appearance latent space, and their lighting conditions are controlled solely by samples from the lighting latent space. The first row shows an example of generated samples, and the second row shows an example of real portraits. (a) and (d) show the extracted geometry and the pseudo-albedo. (b) and (e) show the portraits under their own lighting conditions with different poses. (c) and (f) show the portraits whose lighting conditions and camera poses are changed simultaneously. The lighting condition in (e) is the same as that of the input target image. Throughout the article, lighting conditions are visualized as a sphere placed at the bottom-right corner of each portrait. Original image courtesy of Aminatk.

 

 

Abstract

 

3D-aware portrait lighting control is an emerging and promising domain, thanks to recent advances in generative adversarial networks and neural radiance fields. Existing solutions typically try to decouple lighting from geometry and appearance for disentangled control, using an explicit lighting representation (e.g., Lambertian or Phong). However, they are either limited to constrained lighting conditions (e.g., directional light) or demand supervision for intrinsic components (e.g., the albedo) from datasets that are difficult to obtain. To address these limitations, we propose NeRFFaceLighting, which explores an implicit representation for portrait lighting built on the pretrained tri-plane representation. We approach this disentangled lighting-control problem by distilling the shading from the original fused representation of appearance and lighting (i.e., one tri-plane) into disentangled representations (i.e., two tri-planes), with a conditional discriminator supervising the lighting effects. We further carefully design regularization terms to reduce the ambiguity of this decomposition and to improve generalization to unseen lighting conditions. Moreover, our method can be extended to enable 3D-aware relighting of real portraits. Through extensive quantitative and qualitative evaluations, we demonstrate the superior 3D-aware lighting control of our model compared to alternative and existing solutions.
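
To illustrate the disentangled two-tri-plane design described above, the sketch below shows how a pseudo-albedo (sampled from an appearance tri-plane) and a shading term (sampled from a shading tri-plane) could be combined per 3D point, so that changing only the shading tri-plane relights the portrait while density and albedo stay fixed. This is a minimal PyTorch sketch, not the released implementation: the function and module names (sample_triplane, DisentangledRadianceField), the head architectures, and the assumption that the shading tri-plane is synthesized by a generator branch conditioned on a sampled lighting code are all illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_triplane(planes, xyz):
    # planes: (B, 3, C, H, W) tri-plane features; xyz: (B, N, 3) points in [-1, 1].
    # Project each point onto the XY, XZ, and YZ planes, bilinearly sample
    # features, and aggregate by summation.
    coords = [xyz[..., [0, 1]], xyz[..., [0, 2]], xyz[..., [1, 2]]]
    feats = []
    for i, uv in enumerate(coords):
        grid = uv.unsqueeze(2)                                 # (B, N, 1, 2)
        f = F.grid_sample(planes[:, i], grid, align_corners=False)  # (B, C, N, 1)
        feats.append(f.squeeze(-1).permute(0, 2, 1))           # (B, N, C)
    return sum(feats)

class DisentangledRadianceField(nn.Module):
    # Appearance tri-plane -> density + pseudo-albedo;
    # shading tri-plane (assumed conditioned on a lighting code upstream) -> shading;
    # final radiance = albedo * shading.
    def __init__(self, feat_dim=32):
        super().__init__()
        self.albedo_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.Softplus(), nn.Linear(64, 1 + 3))
        self.shading_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.Softplus(), nn.Linear(64, 1))

    def forward(self, app_planes, shade_planes, xyz):
        app_feat = sample_triplane(app_planes, xyz)
        shade_feat = sample_triplane(shade_planes, xyz)
        sigma_albedo = self.albedo_head(app_feat)
        sigma = F.softplus(sigma_albedo[..., :1])              # volume density
        albedo = torch.sigmoid(sigma_albedo[..., 1:])          # pseudo-albedo in [0, 1]
        shading = F.softplus(self.shading_head(shade_feat))    # non-negative shading
        color = albedo * shading                               # relit radiance
        return sigma, color, albedo

In such a design, only shade_planes depends on the lighting latent space, which is what makes swapping lighting codes leave geometry and pseudo-albedo untouched.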

 

 

 

 

Video

 

 

 

 

Paper

 


PDF

 

 

 

Code

 

PyTorch    Jittor

 

 

 

BibTex

 

@article{NeRFFaceLighting,
    author = {Jiang, Kaiwen and Chen, Shu-Yu and Fu, Hongbo and Gao, Lin},
    title = {NeRFFaceLighting: Implicit and Disentangled Face Lighting Representation Leveraging Generative Prior in Neural Radiance Fields},
    journal = {ACM Trans. Graph.},
    year = {2023},
    volume = {42},
    articleno = {35},
    numpages = {18}
}