KPConv
Questions regarding performance on ModelNet40
Hi @HuguesTHOMAS ,
Thanks for the awesome work and open-sourcing the code.
I am trying to reproduce the results of KPConv on ModelNet40. I have some follow-up questions to issue #61 which I am posting here. Can you please clarify what you mean by "the score was obtained during validation and with a progressive voting scheme"?
I am confused since there is no separate validation set or anything related to it in the code. Was the test set used for validation directly as done here: https://github.com/HuguesTHOMAS/KPConv/blob/132fdc628fb4850548e931c8b02c6325e7cac85e/datasets/ModelNet40.py#L319-L322
Also, how is the final model chosen for evaluation? Since you save models at regular intervals, is it the model that gives the best performance on the validation set (which is probably the test set here), or is it the final converged model?
Thanks for the help!
Can you please clarify what you mean by "the score was obtained during validation and with a progressive voting scheme"? I am confused since there is no separate validation set or anything related to it in the code.
Yes, for ModelNet40, the test and validation sets are the same.
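For readers unfamiliar with the term: a progressive voting scheme runs the validation set through the network several times, each pass with a different random augmentation/subsampling of the point clouds, and averages the predicted class probabilities before taking the argmax. The sketch below is a minimal, hypothetical illustration of that idea, not KPConv's actual validation code (the function name and data layout are assumptions):

```python
import numpy as np

def progressive_vote(logits_per_pass):
    """Combine several validation passes by vote.

    logits_per_pass: list of arrays, each of shape (n_samples, n_classes),
    one array per augmented pass over the validation set.
    Returns the per-sample predicted class after averaging probabilities.
    """
    # Convert each pass's logits to class probabilities (softmax)...
    probs = []
    for logits in logits_per_pass:
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs.append(e / e.sum(axis=1, keepdims=True))
    # ...then average the probabilities across passes and vote.
    avg_probs = np.mean(probs, axis=0)
    return avg_probs.argmax(axis=1)
```

Because the augmentations differ between passes, the averaged prediction is typically more stable than any single pass, which is why the voted score is a little higher than the raw per-epoch validation accuracy.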
Also, how is the final model chosen for evaluation? Since you save models at regular intervals, is it the model that gives the best performance on the validation set (which is probably the test set here), or is it the final converged model?
You can use both, in my experiments, they usually have very similar scores.
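In other words, checkpoint selection can follow either of two strategies: keep the checkpoint with the highest validation accuracy, or simply keep the last saved one. A minimal sketch of the two policies (the function name and the dict-of-accuracies layout are assumptions for illustration):

```python
def choose_checkpoint(val_accuracy_by_epoch, strategy="best"):
    """Pick which saved checkpoint to evaluate.

    val_accuracy_by_epoch: dict mapping saved-epoch number -> validation
    accuracy at that epoch.
    strategy: "best" for the highest-validation checkpoint,
              "final" for the last saved (converged) one.
    """
    epochs = sorted(val_accuracy_by_epoch)
    if strategy == "best":
        return max(epochs, key=lambda e: val_accuracy_by_epoch[e])
    return epochs[-1]
```

When training has converged and the validation curve is flat, the two choices usually agree to within a fraction of a percent, which matches the observation above.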
Hi @imankgoyal,
Could you share your reproduced results? I could only get ~91%.