deep-head-pose-lite
about test result
Thank you for making your model and code public.
Did you test your model on AFLW2000? What were the results? I followed the test code in https://github.com/natanielruiz/deep-head-pose, and I got a poor result with shuff_epoch_120.pkl: Yaw: 19.8251, Pitch: 9.0400, Roll: 8.1506. I don't know why.
During testing, I replaced `x = x.mean([2, 3])` with `x = x.mean(3).mean(2)` in stable_hopenetlite.py, because my torch version is 0.4.1 and does not support the multi-dim form. I don't know whether that is the reason for the bad result.
I hope to see your test results.
Thank you very much!
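For reference, taking the mean over dim 3 and then over dim 2 is mathematically the same as the joint mean over dims (2, 3), so the two forms should match up to floating-point error. A quick sanity check (hypothetical snippet, run on a torch version that supports both forms):

```python
import torch

# NCHW feature map, e.g. the output of the last conv stage.
x = torch.randn(2, 1024, 7, 7)

# Global average pooling over the spatial dims, written two ways.
a = x.mean(3).mean(2)   # sequential reduction; works on torch 0.4.1
b = x.mean([2, 3])      # multi-dim reduction; needs a newer torch

# Mean over W, then over H, equals the joint mean over (H, W),
# so the two results should agree up to floating-point error.
print(torch.allclose(a, b))  # expected: True
```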
Hi, thanks for your question. As I said, the model is lightweight and trades accuracy for speed, so there is no free lunch :-). I trained this model only on the 300W-LP dataset and did not use any data augmentation strategy. Sorry, I cannot tell you whether your test result is correct, because I have not tested this model on AFLW2000 yet. I only released a toy/demo model (due to my company's knowledge protection policy, I won't release a more powerful model; please understand). If you want a model for industrial production or paper research, I recommend re-training this light model with careful hyper-parameter tuning and a data augmentation strategy.
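As one example of augmentation for head-pose data: a horizontal flip mirrors the face, so yaw and roll must change sign while pitch stays the same. A minimal sketch (hypothetical code, not from this repo; the `augment` helper is illustrative only):

```python
import random
from PIL import Image, ImageFilter

def augment(img, yaw, pitch, roll):
    # Hypothetical head-pose augmentation helper, for illustration.
    # Horizontal flip: the mirrored face looks the opposite way,
    # so yaw and roll are negated; pitch is unaffected.
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
        yaw, roll = -yaw, -roll
    # Occasional mild blur for robustness to low-quality crops.
    if random.random() < 0.05:
        img = img.filter(ImageFilter.GaussianBlur(radius=2))
    return img, yaw, pitch, roll
```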
Thanks