xiemeilong
@niujinshuchong Because the training results were very poor with the default configuration, I took this parameter value of 0.0001 from your example, and it improved the result. Moreover, changing it to 0.001...
@niujinshuchong Even when I set my pipeline.datamanager.train_num_rays_per_batch to 256, I still receive a CUDA out of memory error.
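For context, here is a sketch of the kind of command I am running, with both ray batches reduced to save memory (the data parser and path below are placeholders, not my actual dataset):
```
# Sketch only: reduce both train and eval ray batches to lower GPU memory use.
# The data parser and path are placeholders for my actual data.
ns-train neus-facto \
  --pipeline.datamanager.train-num-rays-per-batch 256 \
  --pipeline.datamanager.eval-num-rays-per-batch 256 \
  sdfstudio-data --data ./data/my-scene
```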
@niujinshuchong I am sorry; as it is an electrical facility, the images are not allowed to be made public.
@niujinshuchong The upper part of the object was not captured. Could this be the reason why the SDF model failed to reconstruct it well?
Today I took new photos and once again achieved excellent results using Nerfacto, while Neus-Facto didn't work well. I can share this data with you.
```
ns-train neus-facto --pipeline.model.sdf-field.inside-outside False...
```
@niujinshuchong https://drive.google.com/file/d/1Yv4cI9OebVoWIxVlYQc_YYafiVnv7jni/view?usp=sharing
It is very slow on my RTX 3060 12G while running BakedAngelo; it took one hour to complete only 1%.
@XianSifan --pipeline.model.sdf-field.log2_hashmap_size 20
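For clarity, a sketch of where that flag goes in the full command (I am assuming the bakedangelo method from the earlier comment; substitute your own method name and keep your other flags as before):
```
# Sketch only: set the hash map size explicitly; other flags stay as you already use them.
ns-train bakedangelo --pipeline.model.sdf-field.log2_hashmap_size 20
```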
Reducing the log2-hashmap-size results in improved performance. The Train Iter (time) is directly proportional to the number of graphics cards. Output of nvidia-smi topo -m:
```
        GPU0    GPU1    GPU2    GPU3    NIC0...
```
@niujinshuchong Even when reducing the log2-hashmap-size to 4, multiple GPUs still perform slower than a single GPU.

- 1x 4090: log2-hashmap-size=4, train-num-rays-per-batch 2048, eval-num-rays-per-batch 512
- 4x 4090: log2-hashmap-size=4, train-num-rays-per-batch 2048...
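For reference, a sketch of the two commands behind this comparison. The multi-GPU flag name --machine.num-gpus is my assumption and may differ by version, and the remaining flags for the 4-GPU run are assumed to match the single-GPU run:
```
# Single-GPU run (1x 4090)
ns-train neus-facto \
  --pipeline.model.sdf-field.log2_hashmap_size 4 \
  --pipeline.datamanager.train-num-rays-per-batch 2048 \
  --pipeline.datamanager.eval-num-rays-per-batch 512

# Multi-GPU run (4x 4090); --machine.num-gpus is an assumption and may be named
# differently in other versions. Remaining flags assumed to match the run above.
ns-train neus-facto \
  --machine.num-gpus 4 \
  --pipeline.model.sdf-field.log2_hashmap_size 4 \
  --pipeline.datamanager.train-num-rays-per-batch 2048
```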