John1231983
Are you using Apex instead of AMP in PyTorch? The method of @xsacha allows us to use AMP, but we should modify something in metric.py as mentioned. @xsacha could you...
@John1231983 : Thanks. Have you used it in your method? Did you run into the issue shown in the video?
@guoqiangqi : could you please check the eye landmarks in different cases (open/closed) with the WFLW training dataset? I checked and it does not work in these cases, although dlib handles them well.
Same problem here. Have you found a solution?
I also used DeepLabv3 and it achieved 74% with a batch size of 8. I used a pretrained ResNet-101 and a learning rate of 7e-3 for the first 30k iterations and 1e-4 for the next 30k, with the same...
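For reference, the two-phase schedule described above could be sketched as a simple step function (a hedged sketch, not the actual training code; the 30k boundaries and rates are the ones mentioned in the comment):

```python
def learning_rate(step: int) -> float:
    """Two-phase step schedule: 7e-3 for the first 30k
    iterations, then 1e-4 for the next 30k."""
    return 7e-3 if step < 30_000 else 1e-4

# Example: inspect the rate at a few iterations
for s in (0, 29_999, 30_000, 59_999):
    print(s, learning_rate(s))
```

In an actual training loop this function would be called each iteration to update the optimizer's learning rate.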
The fifth point is interesting. Although I have two Titan X GPUs, I cannot take full advantage of them because the current TensorFlow does not support synchronized batch norm. As the paper mentioned, they...
Good job. I think the main obstacle to reproducing the method is using a larger batch size. I only have a Titan X with a maximum of 12 GB, so I can only...
I see. I think I have to install TF 1.6 to use a large batch size. I am using TF 1.4, which, as I recall, can only use a batch size of 8...
Same question, @tovacinni. @VladVin, have you figured out the solution and the reason? I think we should multiply im_arr by 255 before feeding it to the Canny edge detector.
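A minimal sketch of the scaling I mean, assuming im_arr is a float image normalized to [0, 1] (the shape and values here are made up for illustration). cv2.Canny expects an 8-bit image, so without the scaling nearly every gradient falls below the thresholds and the edge map comes out empty:

```python
import numpy as np

# Hypothetical float image in [0, 1], e.g. after torchvision-style normalization
im_arr = np.linspace(0.0, 1.0, 64 * 64, dtype=np.float32).reshape(64, 64)

# Scale back to [0, 255] and cast to uint8 before edge detection;
# Canny's thresholds are defined on 8-bit intensity values.
im_uint8 = (im_arr * 255).astype(np.uint8)

# edges = cv2.Canny(im_uint8, 100, 200)  # feed the scaled image, not im_arr
```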
Thanks. Sorry I did not give more information. I meant the benefit at small batch sizes, as shown in Figure 2 and Figure 3. For larger batch sizes, I totally agree with...