# Generation3D

3D Shape Generation Baselines in PyTorch.
## Features
- Hack of DataParallel for balanced memory usage
- More Models WIP
- Configurable model parameters
- Customizable model, dataset
## Representation
- 💎 Polygonal Mesh
- 👾 Volumetric
- 🎲 Point Cloud
- 🎯 Implicit Function
- 💊 Primitive
## Input Observation
- 🏞 RGB Image
- 📡 Depth Image
- 👾 Voxel
- 🎲 Point Cloud
- 🎰 Unconditional Random
## Evaluation Metrics
- Chamfer Distance
- F-score
- IoU
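The three metrics above can be sketched in a few lines of NumPy/SciPy; the function names and the squared-distance/threshold conventions here are my assumptions, not necessarily the exact definitions this repo uses.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    d_pq = cKDTree(q).query(p)[0]  # nearest-neighbour distances p -> q
    d_qp = cKDTree(p).query(q)[0]  # nearest-neighbour distances q -> p
    return float((d_pq ** 2).mean() + (d_qp ** 2).mean())

def f_score(p, q, tau=0.01):
    """F-score at threshold tau: harmonic mean of precision and recall."""
    d_pq = cKDTree(q).query(p)[0]
    d_qp = cKDTree(p).query(q)[0]
    precision = float((d_pq < tau).mean())  # predicted points close to GT
    recall = float((d_qp < tau).mean())     # GT points close to prediction
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def voxel_iou(a, b):
    """Intersection over union of two boolean occupancy grids."""
    return float(np.logical_and(a, b).sum() / np.logical_or(a, b).sum())
```

A prediction identical to the ground truth gives Chamfer distance 0, F-score 1, and IoU 1; papers differ on whether Chamfer distances are squared and on the F-score threshold, so check each baseline's convention before comparing numbers.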
## Model Zoo
- [x] 💎 Pixel2Mesh
- [x] 🎯 DISN
- [x] 👾 3DGAN
- [ ] 👾 Voxel-based methods
- [ ] 🎲 Point-cloud-based methods
## Get Started

### Environment

- Ubuntu 16.04 / 18.04
- PyTorch 1.3.1
- CUDA 10
- conda > 4.6.2

Use Anaconda to install all dependencies:

```
conda env create -f environment.yml
```
### Train

```
CUDA_VISIBLE_DEVICES=<gpus> python train.py --options <config>
```

### Predict

```
CUDA_VISIBLE_DEVICES=<gpus> python predictor.py --options <config>
```
### Evaluation [WIP]
## Custom Guide

- Custom scheduler for the training/inference loop: add your code in `scheduler` and inherit the base class.
- Custom models: add them in `models/zoo`.
- Custom config options: add them in `utils/config`.
- Custom datasets: add them in `datasets/data`.
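The extension points above all follow the same inherit-the-base-class pattern. A minimal sketch of what a custom scheduler might look like; the class names and the `step` interface are illustrative assumptions, not this repo's actual API:

```python
from abc import ABC, abstractmethod

class BaseScheduler(ABC):
    """Hypothetical base class for training/inference loops (stand-in for
    whatever base class lives in the repo's scheduler module)."""

    def __init__(self, options):
        self.options = options  # parsed config options

    @abstractmethod
    def step(self, epoch):
        """Run one epoch of training or inference."""

class MyScheduler(BaseScheduler):
    """A custom scheduler would subclass the base and override step()."""

    def step(self, epoch):
        # Real code would run the model here; we just report the state.
        return f"epoch {epoch} with lr={self.options['lr']}"
```

Custom models, config options, and datasets would follow the same pattern in their respective modules.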
## External

- Chamfer Distance
## Baselines

### Pixel2Mesh 🏞 💎
- Input: RGB Image
- Representation: Mesh
- Output: Mesh (camera view)
### DISN 🏞 🎯
- Input: RGB Image
- Representation: SDF
- Post-processing: Marching Cubes
- Output: Mesh (camera view)
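The Marching Cubes post-processing step converts the SDF that DISN predicts into a triangle mesh by extracting the zero level set. A small sketch using scikit-image on an analytic sphere SDF (this repo's actual extraction code may use a different library or resolution):

```python
import numpy as np
from skimage import measure  # scikit-image's marching cubes

# Sample a signed distance function on a regular grid.
# Here: a sphere of radius 0.5 centred at the origin, in place of a
# network-predicted SDF.
n = 32
xs = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract the zero level set (the surface) as vertices and triangle faces.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
```

`verts` and `faces` can then be written out as a mesh (e.g. OBJ); grid resolution trades off surface fidelity against memory and extraction time.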
### 3DGAN 🎰 👾
- Input: Random Noise
- Representation: Volumetric
- Output: Voxel
## Acknowledgements

Our work is based on the codebase of an unofficial Pixel2Mesh framework. The Chamfer loss code is based on ChamferDistancePytorch.
## Official baseline code

- DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction
- Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images
- Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
## License

Please follow the license of the official implementation of each model.