Neural Rays for Occlusion-aware Image-based Rendering

CVPR 2022


Yuan Liu1, Sida Peng2, Lingjie Liu3, Qianqian Wang4, Peng Wang1, Christian Theobalt3, Xiaowei Zhou2, Wenping Wang5

1The University of Hong Kong    2Zhejiang University    3Max Planck Institute for Informatics    4Cornell University    5Texas A&M University

Abstract


NeuRay can be used for novel view synthesis without per-scene training, or with only a few training steps on the scene.
*The results below are generated on a custom scene without any training on that scene.

We present a new neural representation, called Neural Ray (NeuRay), for the novel view synthesis task. Recent works construct radiance fields from image features of input views to render novel view images, which enables generalization to new scenes. However, due to occlusions, a 3D point may be invisible to some input views. For such a 3D point, these generalization methods include inconsistent image features from the invisible views, which interferes with the radiance field construction. To solve this problem, we predict the visibility of 3D points to the input views within our NeuRay representation. This visibility enables the radiance field construction to focus on image features from visible views, which significantly improves rendering quality. Meanwhile, a novel consistency loss is proposed to refine the visibility in NeuRay when finetuning on a specific scene. Experiments demonstrate that our approach achieves state-of-the-art performance on the novel view synthesis task when generalizing to unseen scenes, and outperforms per-scene optimization methods after finetuning.
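To make the occlusion-aware idea from the abstract concrete, below is a minimal sketch of how predicted per-view visibility could be used to down-weight image features from views that cannot see a 3D point before aggregating them for radiance field construction. This is not the official NeuRay implementation: the function name, tensor shapes, and the simple normalized weighted mean are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the official NeuRay code. It shows the core
# idea of suppressing image features from input views in which a 3D point is
# occluded, using hypothetical tensor shapes.
import torch

def aggregate_visible_features(features: torch.Tensor,
                               visibility: torch.Tensor) -> torch.Tensor:
    """Visibility-weighted mean of per-view image features.

    features:   (num_views, num_points, feat_dim) features of each 3D sample
                point projected into every input view.
    visibility: (num_views, num_points) predicted probability that each point
                is visible (unoccluded) in each input view.
    Returns:    (num_points, feat_dim) aggregated features in which occluded
                views contribute little, so the radiance field is built mostly
                from consistent, visible observations.
    """
    w = visibility.unsqueeze(-1)                         # (V, P, 1)
    w = w / w.sum(dim=0, keepdim=True).clamp_min(1e-6)   # normalize over views
    return (w * features).sum(dim=0)                     # (P, F)
```

Without such weighting, a plain mean over views would mix in features from occluded views, which is exactly the inconsistency the abstract describes.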


Generalization results



Finetuned results



Citation


@inproceedings{liu2022neuray,
  title={Neural Rays for Occlusion-aware Image-based Rendering},
  author={Liu, Yuan and Peng, Sida and Liu, Lingjie and Wang, Qianqian and Wang, Peng and Theobalt, Christian and Zhou, Xiaowei and Wang, Wenping},
  booktitle={CVPR},
  year={2022}
}