DiffuScene
Paper | arXiv | Video | Project Page
This is the repository that contains the source code for the paper:
DiffuScene: Denoising Diffusion Models for Generative Indoor Scene Synthesis (CVPR 2024)
- We present DiffuScene, a diffusion model for diverse and realistic indoor scene synthesis.
License Issues
Due to licensing issues, our code is not available at the moment. We will make it available as soon as these issues are resolved.
Relevant Research
Please also check out the following papers that explore similar ideas:
- LEGO-Net: Learning Regular Rearrangements of Objects in Rooms. [homepage]
- Learning 3D Scene Priors with 2D Supervision. [homepage]
- Sceneformer: Indoor Scene Generation with Transformers. [homepage]
- ATISS: Autoregressive Transformers for Indoor Scene Synthesis. [homepage]
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization. [pdf]
- Indoor Scene Generation from a Collection of Semantic-Segmented Depth Images. [pdf]
- Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models. [pdf]
Citation
If you find DiffuScene useful for your work, please cite:
@inproceedings{tang2024diffuscene,
title={DiffuScene: Denoising Diffusion Models for Generative Indoor Scene Synthesis},
author={Tang, Jiapeng and Nie, Yinyu and Markhasin, Lev and Dai, Angela and Thies, Justus and Nie{\ss}ner, Matthias},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2024}
}
Contact Jiapeng Tang for questions, comments, and bug reports.