# Fast-MVSNet

PyTorch implementation of our CVPR 2020 paper:

**Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement**

Zehao Yu, Shenghua Gao
## How to Use

```bash
git clone git@github.com:svip-lab/FastMVSNet.git
```
## Installation

```bash
pip install -r requirements.txt
```
## Training

1. Download the preprocessed DTU training data from MVSNet and unzip it to `data/dtu`.

2. Train the network:

   ```bash
   python fastmvsnet/train.py --cfg configs/dtu.yaml
   ```

   You can change the batch size in the configuration file to fit your hardware.
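Since the test command accepts a `TEST.WEIGHT` override on the command line, the config system may also accept `KEY VALUE` overrides at train time. A hypothetical example (the key name `TRAIN.BATCH_SIZE` is an assumption; check `configs/dtu.yaml` for the actual key):

```shell
# Hypothetical command-line override; verify the key name in configs/dtu.yaml
python fastmvsnet/train.py --cfg configs/dtu.yaml TRAIN.BATCH_SIZE 4
```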
## Testing

1. Download the rectified images from the DTU benchmark and unzip them to `data/dtu/Eval`.

2. Test with the pretrained model:

   ```bash
   python fastmvsnet/test.py --cfg configs/dtu.yaml TEST.WEIGHT outputs/pretrained.pth
   ```
## Depth Fusion

The per-view depth maps must be fused with `tools/depthfusion.py` to obtain the complete point cloud. Please refer to MVSNet for more details.

```bash
python tools/depthfusion.py -f dtu -n flow2
```
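Conceptually, the fusion step back-projects each view's depth map into 3D and merges points that are consistent across views. A minimal NumPy sketch of the back-projection step (illustrative only, not the repo's actual implementation; `backproject_depth` is a hypothetical helper):

```python
import numpy as np

def backproject_depth(depth, K):
    """Back-project a depth map into 3D points in the camera frame.

    depth: (H, W) depth map; K: 3x3 camera intrinsics.
    Returns an (H*W, 3) array of points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))            # pixel coordinate grids
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)          # homogeneous pixels
    pix = pix.reshape(-1, 3).T                                 # 3 x (H*W)
    rays = np.linalg.inv(K) @ pix                              # viewing rays per pixel
    pts = rays * depth.reshape(1, -1)                          # scale each ray by its depth
    return pts.T
```

In the full fusion pipeline these per-view points would then be filtered by photometric and geometric consistency before being merged into one cloud, as described in MVSNet.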
## Acknowledgements

Most of the code is borrowed from PointMVSNet. We thank Rui Chen for his great work and repos.
## Citation

Please cite our paper if you find this work useful:

```bibtex
@inproceedings{Yu_2020_fastmvsnet,
  author    = {Zehao Yu and Shenghua Gao},
  title     = {Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement},
  booktitle = {CVPR},
  year      = {2020}
}
```