deep-head-pose

Big AFLW2000 loss when Hopenet is trained locally — how to reproduce the models?

Open simin75simin opened this issue 3 years ago • 0 comments

I got around 30° yaw loss and around 15° pitch and roll loss on AFLW2000 with the provided Hopenet. I re-implemented generally the same method in TensorFlow with different backbones (ResNet-50 and my own DCNN), trained them on 300W_LP, and got similar test loss on AFLW2000. But I have seen people mention that the pretrained models achieve around 10° yaw loss and around 5° pitch and roll loss, which is quite a difference. When I point a camera at the provided model it works pretty well, but my own model does not. So my question is basically: how do I reproduce those results? I need to make the network smaller for work.
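For comparison, here is a minimal sketch of how the AFLW2000 numbers are usually computed: mean absolute error per Euler angle in degrees, often after discarding samples whose ground-truth angles fall outside [-99°, 99°] (as done in the original Hopenet evaluation). The arrays below are hypothetical stand-ins for your model's predictions and the dataset labels, not the repository's actual evaluation code:

```python
import numpy as np

def euler_mae(pred_deg, gt_deg, max_abs_angle=99.0):
    """Per-angle MAE in degrees over (N, 3) arrays of (yaw, pitch, roll).

    Samples with any ground-truth angle outside [-max_abs_angle, max_abs_angle]
    are dropped, mirroring the common AFLW2000 evaluation protocol.
    """
    pred = np.asarray(pred_deg, dtype=float)
    gt = np.asarray(gt_deg, dtype=float)
    keep = np.all(np.abs(gt) <= max_abs_angle, axis=1)  # filter extreme poses
    return np.mean(np.abs(pred[keep] - gt[keep]), axis=0)

# Hypothetical values in degrees; columns are yaw, pitch, roll.
pred = np.array([[10.0, 5.0, -2.0], [-30.0, 8.0, 1.0]])
gt   = np.array([[12.0, 4.0, -1.0], [-25.0, 6.0, 0.0]])
yaw_mae, pitch_mae, roll_mae = euler_mae(pred, gt)  # 3.5, 1.5, 1.0
```

A mismatch between a 30° and a 10° reported "loss" can also come from comparing different quantities (e.g. the training loss, which in Hopenet combines a cross-entropy term over angle bins with an MSE regression term, versus the test-time MAE in degrees), so it is worth checking both sides use the same metric.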

Thanks in advance.

simin75simin · Dec 08 '21