RandLA-Net
Training datasets and epoch runs are needed during the test stage on Semantic3D
Hello, and thank you for open-sourcing this project! While testing the pretrained model, I only extracted the reduced-8 test set because of limited disk space on my server. From the error messages and the project code, however, I found that the test stage also loads a training-set point cloud labelled 1 (I used the first two scenes from the dataset download .sh links), and that two epochs have to be run before the actual test for the later "Reproject probs" step. Normally a deep-learning test stage can be run directly, so why does it also require 1) training-set data, which costs extra disk space, and 2) two extra epochs on the GPU, which are very time- and memory-consuming, before the final test results can be produced? Thanks, looking forward to your reply.
Hi @kxdkxd, thanks for the suggestion! The training data is only loaded and is not actually used; we will fix the code later. As for the two test epochs, they are needed only because some point clouds are too large to be tested in one pass (e.g., some Semantic3D clouds contain hundreds of millions of points), so the cloud has to be tested block by block.
Hope this helps!
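For readers who hit the same confusion: below is a minimal, hypothetical sketch (not the repository's actual code) of why the test stage loops over several "epochs". Each pass only covers random sub-clouds of a huge scene, per-point probabilities are accumulated with a smoothing factor until every point has been visited, and the result is then reprojected onto the full-resolution cloud. The names `run_model`, `sample_subcloud`, and `proj_idx` are placeholders, not identifiers from this repo.

```python
import numpy as np

def test_large_cloud(points, num_classes, proj_idx, run_model,
                     sample_subcloud, num_passes=2, smooth=0.95):
    """Accumulate class probabilities over several passes, then reproject.

    points          : (N, 3) sub-sampled cloud actually fed to the network
    proj_idx        : (M,) index mapping each full-resolution point to its
                      nearest sub-sampled point (pre-computed offline)
    run_model       : callable(sub_points) -> (n, num_classes) probabilities
    sample_subcloud : callable() -> indices of one random sub-cloud
    """
    test_probs = np.zeros((points.shape[0], num_classes), dtype=np.float32)

    for _ in range(num_passes):
        # One "epoch" = enough random sub-clouds to roughly cover the scene once.
        for _ in range(points.shape[0] // 65536 + 1):
            idx = sample_subcloud()            # e.g. 65536 random point indices
            probs = run_model(points[idx])     # network forward pass on one block
            # Exponential smoothing: repeated visits refine each point's estimate.
            test_probs[idx] = smooth * test_probs[idx] + (1.0 - smooth) * probs

    # Reproject: every original point takes the prediction of its nearest
    # sub-sampled neighbour, giving labels for the full-resolution cloud.
    full_res_labels = np.argmax(test_probs[proj_idx], axis=1)
    return full_res_labels
```

The key design point is that no single forward pass can hold a cloud with hundreds of millions of points, so the probabilities are voted block by block and only reprojected at the end.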
Hello, I ran into the same issue. It seems this problem has not been fixed yet.