1 The University of Hong Kong
2 Tencent Games
3 University of Pennsylvania
4 Texas A&M University
*The first two authors contributed equally.
†Corresponding authors.
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image. Using pretrained large-scale 2D diffusion models, the recent work Zero123 demonstrates the ability to generate plausible novel views from a single-view image of an object. However, maintaining consistency in geometry and colors for the generated images remains a challenge. To address this issue, we propose a synchronized multiview diffusion model that models the joint probability distribution of multiview images, enabling the generation of multiview-consistent images in a single reverse process. SyncDreamer synchronizes the intermediate states of all the generated images at every step of the reverse process through a 3D-aware feature attention mechanism that correlates the corresponding features across different views. Experiments show that SyncDreamer generates images with high consistency across different views, making it well-suited for various 3D generation tasks such as novel view synthesis, text-to-3D, and image-to-3D.
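To give a rough, hedged sense of the synchronization idea described in the abstract, the sketch below denoises all views jointly and lets each view's features attend to every other view before predicting noise. It is a minimal illustration, not the released SyncDreamer code: names such as `SyncedDenoiser` and `reverse_step` are hypothetical, and the flattened cross-view attention here is only a stand-in for the paper's 3D-aware feature attention.

```python
import torch
import torch.nn as nn

class SyncedDenoiser(nn.Module):
    """Toy noise predictor that couples all N views at each reverse step (illustrative only)."""
    def __init__(self, n_views=16, dim=64):
        super().__init__()
        self.encode = nn.Conv2d(3, dim, 3, padding=1)
        # Shared attention correlating features across views (stand-in for 3D-aware attention).
        self.cross_view_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.decode = nn.Conv2d(dim, 3, 3, padding=1)

    def forward(self, x_t, t):
        # x_t: (N, 3, H, W) noisy images of all N views at the same timestep t.
        n, _, h, w = x_t.shape
        feats = self.encode(x_t)                        # (N, dim, H, W)
        tokens = feats.flatten(2).permute(0, 2, 1)      # (N, H*W, dim)
        # Stack every view into one sequence so each token can attend to all other views.
        joint = tokens.reshape(1, n * h * w, -1)
        synced, _ = self.cross_view_attn(joint, joint, joint)
        synced = synced.reshape(n, h * w, -1).permute(0, 2, 1).reshape(n, -1, h, w)
        return self.decode(synced)                      # predicted noise for every view

@torch.no_grad()
def reverse_step(model, x_t, t, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM-style update applied to all views at once, keeping them in sync."""
    eps = model(x_t, t)
    x0_pred = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()
    return alpha_bar_prev.sqrt() * x0_pred + (1 - alpha_bar_prev).sqrt() * eps
```

Because every reverse step sees all views together, geometry and color decisions made for one view propagate to the others, which is the intuition behind generating multiview-consistent images in a single reverse process.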
Reverse process of SyncDreamer's multiview diffusion.
SyncDreamer enables generating 3D models from 2D designs and hand drawings, including sketches, Chinese ink paintings, oil paintings, and so on.
Given the same single-view image, SyncDreamer allows generating different instances using different random seeds.
Test images are downloaded from the Internet and some of them are from Genshin Impact Wiki.
@article{liu2023syncdreamer,
title={SyncDreamer: Generating Multiview-consistent Images from a Single-view Image},
author={Liu, Yuan and Lin, Cheng and Zeng, Zijiao and Long, Xiaoxiao and Liu, Lingjie and Komura, Taku and Wang, Wenping},
journal={arXiv preprint arXiv:2309.03453},
year={2023}
}