
Unrealistic training times (~5 days per scene)

Open VisionaryMind opened this issue 2 years ago • 0 comments

This paper is simply brilliant; however, the implementation appears to require at least 4-5 parallel GPUs. On a single GPU (RTX 2080), about one set of coarse / fine samples + illuminations is generated per hour. The args file shows fine_samples set to 128. I presume this means, at least on my hardware, that this model will take well over 5 days to train, versus ~30 minutes with the latest photogrammetry tools (with GPU-supported mesh lighting). Is there any way to make results more accessible without spending $10k+ on hardware / cloud services? Even with, say, 5 Tesla V100s, the training time for a small scene would still be over 15 hours. The value-time tradeoff is uneven. Surely there must be a way to close this gap.
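For what it's worth, here is the back-of-envelope arithmetic behind the "well over 5 days" figure. It assumes training time scales roughly linearly with the number of fine sample sets, and `HOURS_PER_SAMPLE_SET` is just my observed throughput on an RTX 2080, not a measured benchmark:

```python
# Rough single-GPU training-time estimate (assumptions: ~1 coarse/fine
# sample set generated per hour on an RTX 2080, fine_samples = 128 as
# in the repo's args file, and linear scaling in sample count).
HOURS_PER_SAMPLE_SET = 1.0  # observed on my RTX 2080
FINE_SAMPLES = 128          # from the args file

total_hours = FINE_SAMPLES * HOURS_PER_SAMPLE_SET
total_days = total_hours / 24
print(f"~{total_days:.1f} days per scene")  # ~5.3 days
```

The same linear assumption across 5 V100s (even granting perfect scaling) still lands well above 15 hours, which is where my estimate above comes from.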

VisionaryMind avatar May 11 '22 14:05 VisionaryMind