Codes-for-PVKD
About mIoU score
Hi, thank you for sharing this work. I want to know why the mIoU of Cylinder3D is 71.8, which is higher than the original work on the leaderboard. Did you make any modifications to the training process?
We iteratively fine-tune the model with small learning rates. Instance augmentation (Panoptic-PolarNet, CVPR 2021) is also adopted. You can have a try.
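The iterative fine-tuning mentioned above can be sketched as a loop of training rounds, each restarting from the previous checkpoint with a progressively smaller learning rate. A minimal schedule sketch follows; the starting rate, decay factor, and number of rounds are illustrative assumptions, not the authors' actual settings:

```python
def finetune_lr_schedule(base_lr=1e-3, decay=0.1, rounds=3):
    """Yield (round_index, learning_rate) for successive fine-tuning
    rounds. Each round would reload the previous round's best
    checkpoint and continue training at a smaller learning rate."""
    lr = base_lr
    for r in range(rounds):
        yield r, lr
        lr *= decay

# Example: three rounds starting at 1e-3, shrinking 10x per round.
for round_idx, lr in finetune_lr_schedule():
    print(f"round {round_idx}: lr={lr:g}")
```

In practice each yielded rate would be plugged into a fresh optimizer over the reloaded model; the schedule itself is the only part sketched here.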
Why do you use your own accounts to ask yourself questions?
Is there anything in this repository that is not fake?
- The code is the same as Cylinder3D: https://github.com/cardwing/Codes-for-PVKD/issues/12
- The pretrained weights can't be reproduced by anyone.
- The so-called "reproduced" results are very suspicious and don't make any sense at all.
- The stars are from paid bot/fake accounts: https://github.com/cardwing/Codes-for-PVKD/issues/15
- The rankings are fake too: https://github.com/cardwing/Codes-for-PVKD/issues/11
- Many issues are created by your own accounts.
The pretrained weights can be reproduced by us on the A100 server. I have responded to most of the issues. I admit that the code is very similar to the Cylinder3D code, since the distillation code is not yet provided. The issues are created by other researchers, and I just wonder what your purpose is in slandering us. @lfl256 I have gone through your GitHub account and found that your activity is private. I wonder why you do this.
It is strange why you are so hostile to us. The codebase is still under construction and more modules will be added. @lfl256
Here are the facts:
First of all, the pretrained weights CANNOT be reproduced. There is no training code: https://github.com/cardwing/Codes-for-PVKD/issues/12, and you admit that.
You created many accounts on SemanticKITTI leaderboards: https://github.com/cardwing/Codes-for-PVKD/issues/19
You paid many bot accounts to star this repo, to mislead people and give a false impression of this repository.
And you admit that the ranking is misleading too: https://github.com/cardwing/Codes-for-PVKD/issues/11
I am just pointing out the facts
Please stop trying to fool others; everyone knows the truth because it is so obvious.
On the SemanticKITTI Multiple Scans benchmark: https://competitions.codalab.org/competitions/20331#results
You own these accounts:
- quanyyds: 2 submissions, 06/20/22
- PVKD: 6 submissions, 12/10/21
- PV-KD: 2 submissions, 12/01/21
- PVD-KD: 4 submissions, 12/06/21
- Point-Voxel-KD: 4 submissions, 11/26/21
- Incredibai: 5 submissions, 07/12/22
- liamlin543134: 5 submissions, 08/01/22
And many many more on other SemanticKITTI benchmarks.
According to the rules of SemanticKITTI https://competitions.codalab.org/competitions/20331#learn_the_details-terms_and_conditions
"Important note: It is NOT allowed to register multiple times to the server using different email addresses. We are actively monitoring submissions and we will revoke access and delete submissions."
First, "There is no training code" is obviously a fake claim. I provide the normal training code; the distillation code is under preparation. Second, regarding "You paid many bot accounts to get stars for this repo": we only recommend this repo to our friends and colleagues. How did you come to the conclusion that I paid them? @lfl256
First of all, the so-called "normal training code" is from Cylinder3D; you didn't provide the training code needed to reproduce the "pretrained weights".
Running inference using the provided "pretrained weights" is not called "reproducing". "Reproducing" means training the model from scratch. You may have used all kinds of "tricks" to train these models.
@cardwing thanks for your response. And, @lfl256, my account is individual, not belonging to the author. The method I tested is Cylinder3D (single scan), using LMNet to distinguish moving objects, so the score is similar to the original Cylinder3D. Sorry for confusing you.