NeRO: Neural Geometry and BRDF Reconstruction of
Reflective Objects from Multiview Images

SIGGRAPH 2023 (ACM TOG)


Yuan Liu1, Peng Wang1, Cheng Lin2, Xiaoxiao Long1, Jiepeng Wang1, Lingjie Liu3,4, Taku Komura1, Wenping Wang5

1The University of Hong Kong    2Tencent Games     3University of Pennsylvania      4Max Planck Institute for Informatics      5Texas A&M University

Abstract


NeRO reconstructs both the shape and the BRDF of reflective objects from multiview images alone.

We present a neural rendering-based method called NeRO for reconstructing the geometry and the BRDF of reflective objects from multiview images captured in an unknown environment. Multiview reconstruction of reflective objects is extremely challenging because specular reflections are view-dependent and thus violate multiview consistency, the cornerstone of most multiview reconstruction methods. Recent neural rendering techniques can model the interaction between environment lights and object surfaces to fit these view-dependent reflections, making it possible to reconstruct reflective objects from multiview images. However, accurately modeling environment lights in neural rendering is intractable, especially when the geometry is unknown. Most existing neural rendering methods that can model environment lights only consider direct lights and rely on object masks to reconstruct objects with weak specular reflections. These methods therefore fail on reflective objects, especially when no object mask is available and the object is illuminated by indirect lights. We propose a two-step approach to tackle this problem. First, by applying the split-sum approximation and the integrated directional encoding to approximate the shading effects of both direct and indirect lights, we accurately reconstruct the geometry of reflective objects without any object masks. Then, with the object geometry fixed, we use more accurate sampling to recover the environment lights and the BRDF of the object. Extensive experiments demonstrate that our method accurately reconstructs the geometry and the BRDF of reflective objects from only posed RGB images, without knowing the environment lights or the object masks.
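The split-sum approximation mentioned above factors the specular rendering integral into two precomputable terms: the environment light prefiltered over the specular lobe, and an integral of the BRDF alone. A minimal NumPy sketch of this shading step is below; it is not the authors' implementation, and `prefiltered_env` and `env_brdf_lut` are hypothetical stand-ins for the learned environment-light network and the BRDF lookup table.

```python
import numpy as np

def split_sum_specular(normal, view_dir, roughness, f0,
                       prefiltered_env, env_brdf_lut):
    """Shade one surface point with the split-sum approximation.

    normal, view_dir: 3-vectors (view_dir points from surface to camera).
    roughness: scalar in [0, 1]; f0: Fresnel reflectance at normal incidence.
    prefiltered_env(direction, roughness) -> RGB radiance of the environment
        light prefiltered over the specular lobe (hypothetical stand-in).
    env_brdf_lut(n_dot_v, roughness) -> (scale, bias) from the precomputed
        BRDF integration table (hypothetical stand-in).
    """
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)

    # Dominant lobe direction: the view direction mirrored about the normal.
    r = 2.0 * np.dot(n, v) * n - v

    # First sum: environment light integrated against the lobe shape,
    # which widens as roughness increases.
    light = prefiltered_env(r, roughness)

    # Second sum: the BRDF integral, a 2D function of (n.v, roughness).
    n_dot_v = max(np.dot(n, v), 1e-4)
    scale, bias = env_brdf_lut(n_dot_v, roughness)

    return light * (f0 * scale + bias)
```

Because both factors depend only on a reflection direction, a roughness, and `n·v`, they can be queried without Monte Carlo sampling at every training step, which is what makes the first geometry-reconstruction stage tractable.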


Comparison on Shape Reconstruction


In comparison with: COLMAP [1], NeuS [2], Ref-NeRF [3], NvDiffRecMC [4]

[1] Pixelwise View Selection for Unstructured Multi-View Stereo. ECCV 2016.
[2] NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. NeurIPS 2021.
[3] Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields. CVPR 2022.
[4] Shape, Light, and Material Decomposition from Images Using Monte Carlo Rendering and Denoising. NeurIPS 2022.


Comparison on Relighting


In comparison with: MII [1], NeILF [2], NvDiffRecMC [3]

[1] Modeling Indirect Illumination for Inverse Rendering. CVPR 2022.
[2] NeILF: Neural Incident Light Field for Physically-Based Material Estimation. ECCV 2022.
[3] Shape, Light, and Material Decomposition from Images Using Monte Carlo Rendering and Denoising. NeurIPS 2022.


More Shape Reconstruction



More Relighting



Citation


@article{liu2023nero,
  title={NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images},
  author={Liu, Yuan and Wang, Peng and Lin, Cheng and Long, Xiaoxiao and Wang, Jiepeng and Liu, Lingjie and Komura, Taku and Wang, Wenping},
  journal={ACM Transactions on Graphics (TOG)},
  year={2023}
}