
Questions regarding performance on ModelNet40

Open imankgoyal opened this issue 4 years ago • 2 comments

Hi @HuguesTHOMAS ,

Thanks for the awesome work and open-sourcing the code.

I am trying to reproduce the results of KPConv on ModelNet40. I have some follow-up questions to issue #61 which I am posting here. Can you please clarify what you mean by "the score was obtained during validation and with a progressive voting scheme"?

I am confused since there is no separate validation set or anything related to it in the code. Was the test set used for validation directly as done here: https://github.com/HuguesTHOMAS/KPConv/blob/132fdc628fb4850548e931c8b02c6325e7cac85e/datasets/ModelNet40.py#L319-L322

Also, how is the final model chosen for evaluation? Since you save models at regular intervals, is it the model that gives the best performance on the validation set (which is presumably the test set here), or is it the final converged model?

Thanks for the help!

imankgoyal avatar Jun 03 '20 03:06 imankgoyal

Can you please clarify what you mean by "the score was obtained during validation and with a progressive voting scheme"? I am confused since there is no separate validation set or anything related to it in the code.

Yes, for ModelNet40, the test and validation sets are the same.

Also, how is the final model chosen for evaluation? Since you save models at regular intervals, is it the model that gives the best performance on the validation set (which is presumably the test set here), or is it the final converged model?

You can use either; in my experiments, they usually reach very similar scores.
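For readers unsure what "progressive voting" means here: the idea is to run the test set through the network several times with different augmentations (or subsamplings) and average the accumulated class logits, so the prediction stabilizes as passes accumulate. Below is a minimal sketch of that idea only; the function name, shapes, and the toy data are illustrative and not taken from the KPConv code.

```python
import numpy as np

def progressive_vote(logits_per_pass):
    """Average class logits accumulated over successive test passes.

    `logits_per_pass` is a hypothetical list of (num_objects, num_classes)
    arrays, one per augmented pass over the test set. The running average
    is the "vote"; its argmax gives the predicted class per object.
    """
    vote = np.zeros_like(logits_per_pass[0], dtype=np.float64)
    for i, logits in enumerate(logits_per_pass, start=1):
        # Running mean update: vote_i = vote_{i-1} + (logits - vote_{i-1}) / i
        vote += (logits - vote) / i
    return vote.argmax(axis=1)

# Toy example: two passes over 3 objects with 2 classes
passes = [np.array([[2.0, 1.0], [0.0, 1.0], [1.0, 3.0]]),
          np.array([[1.0, 0.0], [1.0, 2.0], [0.0, 2.0]])]
print(progressive_vote(passes).tolist())  # [0, 1, 1]
```

The running-mean form avoids keeping all passes in memory, which matters when voting continues for many epochs during validation.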

HuguesTHOMAS avatar Jun 03 '20 13:06 HuguesTHOMAS

Hi @imankgoyal

Could you share your reproduced results? I could only get ~91%.

tangbohu avatar Aug 12 '20 04:08 tangbohu