chen9run
Hello, I'd like to ask: when training LoFTR on MegaDepth, how should I set the path to the cfg_1513_-1_0.2_0.8_0.15_reduced_v2 file from the train-data folder on the LoFTR cloud drive?
Which directory should the cfg_1513_-1_0.2_0.8_0.15_reduced_v2 file be placed in? Many thanks!
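(For anyone hitting the same question, a minimal sketch of how such an index folder might be wired in, assuming it plays the role of the per-scene .npz index root of the training split; the `data/megadepth/index/...` layout and file names below are assumptions to check against your local checkout, not the repository's confirmed convention.)

```python
from pathlib import Path

# Hypothetical layout -- adjust to wherever the downloaded train-data was unpacked.
DATA_ROOT = Path("data/megadepth")

# Assumption: cfg_1513_-1_0.2_0.8_0.15_reduced_v2 holds the per-scene .npz index
# files and is pointed to as the "npz root" of the training split in the data config.
TRAIN_NPZ_ROOT = DATA_ROOT / "index" / "cfg_1513_-1_0.2_0.8_0.15_reduced_v2"
TRAIN_LIST_PATH = DATA_ROOT / "index" / "trainvaltest_list" / "train_list.txt"

# Quick sanity check before launching training.
assert TRAIN_NPZ_ROOT.is_dir(), f"missing npz root: {TRAIN_NPZ_ROOT}"
assert TRAIN_LIST_PATH.is_file(), f"missing scene list: {TRAIN_LIST_PATH}"
print(f"{len(list(TRAIN_NPZ_ROOT.glob('*.npz')))} scene index files found")
```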
With an image size of 640 I get auc@5=0.500 / auc@10=0.665 / auc@20=0.791 on the validation set during training, but when I load the saved ckpt and evaluate on the test set the accuracy drops a lot, to only auc@5=0.379 / auc@10=0.530 / auc@20=0.657.
Killed!
May I ask whether this is normal? Memory usage keeps growing as training goes on. Is my 64 GB of RAM simply not enough, so the fix is to upgrade the RAM, or is the problem at some other level?
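(A generic way to tell a genuinely growing working set from a leak is to log the process RSS every few hundred steps; the sketch below uses psutil and a hypothetical training loop, not the actual LoFTR training script.)

```python
import os
import psutil

_proc = psutil.Process(os.getpid())

def log_rss(step: int, every: int = 200) -> None:
    """Print resident-set size so a steady upward trend (a leak) becomes visible."""
    if step % every == 0:
        rss_gb = _proc.memory_info().rss / 1024 ** 3
        print(f"step {step}: host RSS = {rss_gb:.2f} GiB")

# Hypothetical usage inside the training loop:
# for step, batch in enumerate(train_loader):
#     ...
#     log_rss(step)
```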
help! Epoch 0: 0%| | 16/38300 [00:12
Hello, I hit an out-of-memory error when epoch 0 reaches 67%. Could you offer some insight? I'm using two 2080 Ti cards with batch_size=1 and TRAIN_IMG_SIZE=640. RuntimeError: CUDA out of memory. Tried to allocate 314.00 MiB (GPU 1; 10.76 GiB total capacity; 8.53 GiB already allocated; 179.44 MiB free; 9.46 GiB reserved in total by...
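(As a general illustration only: one common way to fit training onto an 11 GB card is mixed-precision training. The snippet below is a plain-PyTorch AMP sketch with placeholder model/optimizer/batch names, not LoFTR's own Lightning setup.)

```python
import torch

# Generic mixed-precision training step; placeholder names, shown only to
# illustrate one way to cut activation memory on 11 GB GPUs.
scaler = torch.cuda.amp.GradScaler()

def train_step(model, batch, optimizer):
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():       # forward pass in fp16 where safe
        loss = model(batch)               # placeholder: model returns a scalar loss
    scaler.scale(loss).backward()         # scaled backward to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.detach()
```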
The work is very exciting! I'd like to ask about the scaling process in the following code. Why divide by 4 here? Isn't ins * displacement[] supposed to restore to the original...
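(The code in question is cut off above, so the following is only a general illustration of where a factor of 4 can come from in a coarse-to-fine matcher: the stride ratio between a 1/8-resolution coarse grid and a 1/2-resolution fine grid. Whether this is the exact division being asked about is an assumption.)

```python
# Illustration only: how a factor of 4 arises between feature levels when the
# coarse grid is at 1/8 resolution and the fine grid is at 1/2 resolution.
coarse_stride = 8          # one coarse cell covers 8 x 8 image pixels
fine_stride = 2            # one fine cell covers 2 x 2 image pixels

# A coarse-grid coordinate maps to the fine grid by the stride ratio (x4):
coarse_xy = (30, 17)                                      # cell index on the 1/8 grid
fine_xy = tuple(c * coarse_stride // fine_stride for c in coarse_xy)
print(fine_xy)                                            # -> (120, 68)

# Conversely, a displacement measured in fine-grid units must be divided by 4
# to express it in coarse-grid units (or multiplied by fine_stride for pixels):
fine_disp = 6.0
coarse_disp = fine_disp * fine_stride / coarse_stride     # = fine_disp / 4
print(coarse_disp)                                        # -> 1.5
```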