wide-depth-range-pose
Proper settings for LineMOD Occlusion
Hello, thank you for your work!
In your paper, experiments on LineMOD-Occlusion are reported, but the corresponding configs and scripts are not included in this repo, so I implemented a baseline for it myself (see my fork). However, the results are below expectation: on ape, for example, ADI-0.10d reaches only about 8% when trained for 10 / 30 epochs. I suspect improper configuration is to blame, since my config is only a slight modification of the SwissCube one (a rough sketch is below).
Could you please provide the corresponding scripts and configs so that I can reproduce the results in the paper?
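For reference, here is roughly the shape of my adapted config, as a minimal sketch only; the dataset class, field names, and paths are illustrative placeholders, not the actual contents of my fork:

```python
# Minimal sketch only: class names, fields, and paths are illustrative placeholders,
# not the actual config from my fork.
dataset_type = 'BOPDataset'        # hypothetical wrapper for BOP-format data
data_root = 'data/lmo/'            # LineMOD-Occlusion in BOP layout

data = dict(
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'train_annotations.json',
        img_prefix=data_root + 'train/',
        obj_id=1,                  # ape
    ),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'test_annotations.json',
        img_prefix=data_root + 'test/',
        obj_id=1,
    ),
)

# One of the settings changed from the SwissCube config: LM-O objects sit much
# closer to the camera than satellites, so the depth range is narrowed.
depth_range = (0.2, 2.0)           # metres; an illustrative guess, not a tuned value
```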
Hi, I re-implemented the framework using mmcv and ran experiments on LINEMOD, training one model for each object.
The results are as follows:
I shared my config here. Sorry that I can't open-source my implementation right now, but the config should be easy to follow. If you have any questions, please leave a comment here.
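In case it helps, the layout follows the standard mmcv convention. The sketch below only illustrates that structure; the model, dataset, and schedule names are placeholders rather than the actual values in my config:

```python
# Illustrative outline of an mmcv-style config; all names below are placeholders.
_base_ = ['./base_runtime.py']

model = dict(
    type='WideDepthRangePose',               # hypothetical model name
    backbone=dict(type='DarkNet53'),
)

data = dict(
    samples_per_gpu=16,
    workers_per_gpu=4,
    train=dict(type='LinemodDataset', obj='ape', split='train'),
    val=dict(type='LinemodDataset', obj='ape', split='test'),
    test=dict(type='LinemodDataset', obj='ape', split='test'),
)

optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=1e-4)
optimizer_config = dict(grad_clip=None)
lr_config = dict(policy='step', step=[70, 90])
runner = dict(type='EpochBasedRunner', max_epochs=100)  # one model per object
```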
Hi! Thanks a lot for sharing the results! Actually, I was asking about Occluded-LineMOD in this issue, where only some of the occluded objects are annotated in a subset of LineMOD.
Have you run experiments on Occluded-LineMOD? What do the results look like? And could you please share your training settings (training dataset, number of epochs, etc.)? I can hardly reproduce the results in the paper.
Sorry, I have not tested on Occluded-LineMOD, but I think the augmentation matters. Furthermore, it seems you are training on the PBR synthetic data, which may also explain the poor result. Anyway, I will test on Occluded-LineMOD; please stay tuned.
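To illustrate what I mean by augmentation, here is a minimal sketch of a typical colour/blur/noise pipeline for this kind of training, written with albumentations as an example; the exact augmentation used in my experiments differs:

```python
# Minimal sketch of a typical image augmentation pipeline for LINEMOD-style training;
# the concrete ops and parameters here are examples, not the ones I actually used.
import albumentations as A

train_aug = A.Compose([
    A.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05, p=0.8),
    A.GaussianBlur(blur_limit=(3, 7), p=0.3),
    A.GaussNoise(var_limit=(5.0, 30.0), p=0.3),
])

# usage: augmented = train_aug(image=img)['image']
```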
Thanks for your work! Do you have the config for LMO now? It would help so much!