lightweight_openpose
Why is my trained model's performance so poor?
I added 5 refine stages to your network model and trained it, but the performance is still poor. According to the paper, 5 refine stages should perform better than 1 refine stage.
Is that the score you got? I find the visual results are better than the score suggests. Maybe the pose_decode file needs to be updated, but I'm not sure what's wrong with your result.
I find the visual results are better than the score too. The score is 0.016903232698188032. Is the scoring rule wrong?
I used the evaluation function copied directly from ai-challenger. Maybe the file model_json.py has something wrong. I'll check it, and I hope you can also find a solution soon.
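For what it's worth, the AI Challenger keypoint metric is mAP over OKS (object keypoint similarity) thresholds, similar to COCO's. Below is a minimal sketch of the per-instance OKS (NumPy; the `sigmas` constants and the (K, 3) keypoint layout are assumptions for illustration, not the challenge's exact code). It also shows why a keypoint-order or coordinate-scale mismatch in the generated JSON can push the score close to zero even when the visualizations look fine.

```python
# Minimal OKS sketch (NumPy), assuming COCO/AI-Challenger-style scoring.
# The per-keypoint `sigmas` and the (K, 3) [x, y, visibility] layout are
# illustrative assumptions, not the challenge's exact constants.
import numpy as np

def oks(pred, gt, area, sigmas):
    """pred, gt: (K, 3) arrays of [x, y, v]; area: ground-truth person area."""
    visible = gt[:, 2] > 0
    if not np.any(visible):
        return 0.0
    d2 = (pred[:, 0] - gt[:, 0]) ** 2 + (pred[:, 1] - gt[:, 1]) ** 2
    variances = (2.0 * sigmas) ** 2
    # If the decoded keypoints are in the wrong order or the wrong coordinate
    # scale, d2 is large for every joint and OKS (hence mAP) collapses to ~0.
    e = d2 / (variances * (area + np.spacing(1)) * 2.0)
    return float(np.sum(np.exp(-e)[visible]) / np.sum(visible))
```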
Me too. I added 2 refine stages to your network model. The score is 0.03313, and my test result is even worse than yours. Is there something wrong with the network you wrote? Can you provide a description of how to add refine stages and train? Thanks.
I found that the model is wrong: the backbone output is not concatenated into the refine stages. You can add outs1 = tf.concat([net_cpm, heatmaps1, pafs1], axis=-1) as the input to every refine stage, as sketched below. Train on the COCO dataset and validate with the COCO API. After correcting the model and training it on COCO, the AP is 63%. By the way, I also corrected the depthwise separable convolution.
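For anyone else hitting this, here is a minimal sketch (TF 1.x) of what that fix looks like: every refine stage takes the backbone features concatenated with the previous stage's heatmaps and PAFs as its input. The names `net_cpm`, `heatmaps1`, and `pafs1` come from the comment above; the layer widths, head sizes, and the `refine_stage` helper are illustrative, not the repository's exact code.

```python
# Minimal sketch (TF 1.x). `net_cpm`, `heatmaps1`, `pafs1` are taken from the
# comment above; layer widths, head sizes and the `refine_stage` helper are
# illustrative, not the repository's exact code.
import tensorflow as tf

def refine_stage(inputs, num_heatmaps=15, num_pafs=26, scope='refine_stage'):
    """One refinement stage: a few 7x7 convs, then 1x1 heads for heatmaps and PAFs."""
    with tf.variable_scope(scope):
        net = inputs
        for i in range(3):
            net = tf.layers.conv2d(net, 128, 7, padding='same',
                                   activation=tf.nn.relu, name='conv%d' % i)
        heatmaps = tf.layers.conv2d(net, num_heatmaps, 1, padding='same', name='heatmaps')
        pafs = tf.layers.conv2d(net, num_pafs, 1, padding='same', name='pafs')
    return heatmaps, pafs

def build_refine_stages(net_cpm, heatmaps1, pafs1, num_stages=5):
    """Each stage sees the backbone features again, not only the previous predictions."""
    heatmaps, pafs = heatmaps1, pafs1
    outputs = [(heatmaps, pafs)]
    for s in range(num_stages):
        # The fix: re-inject the backbone features into every refine stage.
        stage_in = tf.concat([net_cpm, heatmaps, pafs], axis=-1)
        heatmaps, pafs = refine_stage(stage_in, scope='refine_stage_%d' % s)
        outputs.append((heatmaps, pafs))
    return outputs
```

The key line is the tf.concat inside the loop: without re-injecting net_cpm, each refine stage only sees the previous stage's predictions, which is what limited the AP.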
@LeifSun Thanks for your answer. Maybe @Henrietta95 and others can try his solution; I don't have enough time to do this, sorry about that.
@LeifSun Hi, could you tell me what you changed about the depthwise separable convolution?
Hi, I noticed that you used tf.layers.separable_conv2d, but that op's internal flow is depthwise convolution --> pointwise convolution --> activation function, whereas the MobileNet-v1 style is depthwise conv --> batch norm --> relu --> pointwise conv --> batch norm --> relu. I wonder whether this difference affects the model's final accuracy?
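For reference, here is a minimal sketch (TF 1.x) of the MobileNet-v1 ordering described above; the helper name, initializer, and kernel size are illustrative. By contrast, tf.layers.separable_conv2d applies a single activation only after the fused depthwise + pointwise convolution.

```python
# Minimal sketch (TF 1.x) of the MobileNet-v1 ordering:
# depthwise conv -> BN -> ReLU -> pointwise 1x1 conv -> BN -> ReLU.
# Helper name, initializer and kernel size are illustrative.
import tensorflow as tf

def mobilenet_v1_separable(inputs, pointwise_filters, stride=1,
                           is_training=True, scope='sep_conv'):
    with tf.variable_scope(scope):
        in_channels = inputs.get_shape().as_list()[-1]
        # Depthwise 3x3: one filter per input channel (channel multiplier = 1).
        dw_filter = tf.get_variable('depthwise_weights', [3, 3, in_channels, 1],
                                    initializer=tf.glorot_uniform_initializer())
        net = tf.nn.depthwise_conv2d(inputs, dw_filter,
                                     strides=[1, stride, stride, 1], padding='SAME')
        net = tf.layers.batch_normalization(net, training=is_training, name='dw_bn')
        net = tf.nn.relu(net)
        # Pointwise 1x1 convolution to mix channels, again followed by BN + ReLU.
        net = tf.layers.conv2d(net, pointwise_filters, 1, padding='same',
                               use_bias=False, name='pw_conv')
        net = tf.layers.batch_normalization(net, training=is_training, name='pw_bn')
        net = tf.nn.relu(net)
    return net
```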