InstPIFu
Ground Truth Mesh in Test Set is Incomplete
Hi, nice work!
I'm confused that the GT test meshes only cover 1038 samples, while the test set actually has 2000 samples. Could you please double-check this?
Thank you!
Hi, that is because some of the samples share the same GT CAD model. If you wish to test on all test samples, you can first use the preprocessing script to convert all 3D-FUTURE meshes to watertight meshes, as sketched below.
Best,
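For reference, a minimal sketch of that conversion step, assuming trimesh is installed; the file layout, names, and repair strategy are assumptions, and the repo's actual preprocessing script may work differently:

```python
# Minimal sketch of converting raw 3D-FUTURE meshes to watertight ones.
# Paths, file names, and the repair strategy are assumptions; the repo's
# preprocessing script may use a heavier method (e.g. TSDF fusion).
import os
import trimesh

raw_root = "./data/3D-FUTURE-model"        # assumed location of raw CAD models
out_root = "./data/3D-FUTURE-watertight"   # assumed output location

for model_id in os.listdir(raw_root):
    src = os.path.join(raw_root, model_id, "raw_model.obj")
    if not os.path.isfile(src):
        continue
    mesh = trimesh.load(src, force="mesh")
    if not mesh.is_watertight:
        # fill_holes handles small gaps; badly broken CAD models may need
        # a more thorough repair pipeline than this.
        trimesh.repair.fill_holes(mesh)
    os.makedirs(os.path.join(out_root, model_id), exist_ok=True)
    mesh.export(os.path.join(out_root, model_id, "raw_watertight.obj"))
```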
But actually I found here that, when trying to get the GT mesh paths, a lot of GT meshes cannot be found. Is that correct? (See the path-check sketch after this message.)
And also, I reproduced the object reconstruction results after 80 epochs of training, as shown below:

(image of reproduced quantitative results)

They are much better than the results you reported in the paper, so I suspect some test GTs might be missing?
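A quick way to count the missing GT meshes is to walk the test split and test each expected path; the split-file schema and mesh path pattern below are assumptions, not the repo's exact structure:

```python
# Sketch: list which GT mesh files referenced by the test split are missing.
# The split file format and mesh path pattern are assumed, not the repo's
# exact ones; adjust to the real schema.
import json
import os

split_path = "./data/3d-front/split/test/all.json"   # assumed split file
gt_mesh_root = "./data/3D-FUTURE-watertight"         # assumed GT mesh root

with open(split_path) as f:
    samples = json.load(f)

missing = []
for sample in samples:
    # each entry is assumed to carry a CAD model id
    model_id = sample["model_id"] if isinstance(sample, dict) else sample
    mesh_path = os.path.join(gt_mesh_root, str(model_id), "raw_watertight.obj")
    if not os.path.isfile(mesh_path):
        missing.append(mesh_path)

print(f"{len(missing)} of {len(samples)} GT meshes not found")
```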
Hi, I will check by evaluating on all testing samples instead of the subset, and I will also check whether the released GT test meshes miss some samples. The testing metric is still an average over all samples and should be correct. The better quantitative performance may be because I cleaned some highly occluded training and testing samples after the paper was published; training on cleaner data improves performance, and the test samples become easier. I have already found some missing test samples, and I will update the link to include all watertight samples for both the training and testing sets.
I see. Thank you for your patience, and I will wait for the conclusion!
Hi, the link for all watertight meshes has been updated. After running the evaluation, the quantitative results are similar to your reproduced results.
Hi, are the quantitative results evaluated on the "wrong" watertight meshes or the updated ones?
Also, why is desk None? Have you removed all the desk samples?
@xXuHaiyang The desk is no longer None, and I have updated it in the readme file. The quantitative results were run on the updated file, which is zipped as 3D-FUTURE-watertight.zip. It contains all training and testing GT meshes, so it will no longer miss samples. By the way, make sure to evaluate on the split file in ./data/3d-front/split-filter/test/all.json. You may need to change the split_path entry in the evaluation script, as sketched below.
Best,
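For illustration, the change could look like the snippet below; only the target path ./data/3d-front/split-filter/test/all.json comes from this thread, and the surrounding code is a hypothetical stand-in for the real evaluation script:

```python
# Hypothetical snippet: pointing the evaluation at the filtered split.
# Only the split path comes from the thread; the rest is illustrative.
import json

split_path = "./data/3d-front/split-filter/test/all.json"
with open(split_path) as f:
    test_samples = json.load(f)
print(f"evaluating on {len(test_samples)} samples")
```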
I see. Thanks for your quick reply!
@UncleMEDM Hi, I wonder what "split" and "split-filter" actually mean? In the Object Reconstruction task configs, I found the data path in train.json is "split-filter", while in test.json it is "split", and when evaluating it is "split_filter" again. Should they be consistent? Also, why did you clean some highly occluded training and testing samples? It would be appreciated if you could provide data and running scripts that directly reproduce the performance in your paper.
Yes, they should be consistent. I would recommend using split-filter, since it filters out some highly occluded samples, and the pretrained weights come from training on the split-filter set. I will update the code so that they are consistent.
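A small consistency check along those lines could look like the sketch below; the config file names and the split_path key layout are assumptions, not the repo's actual ones:

```python
# Sketch: verify that training and testing configs point at the same split
# family. File names and the split_path key layout are assumed.
import json

cfg_paths = ["configs/train.json", "configs/test.json"]  # hypothetical configs
for cfg_path in cfg_paths:
    with open(cfg_path) as f:
        cfg = json.load(f)
    split_path = cfg["data"]["split_path"]  # assumed key layout
    assert "split-filter" in split_path, f"{cfg_path} points at {split_path}"
print("all configs use the split-filter split")
```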