Boris Gribkov
Should be OK; by default it's set to "true" in the testing phase, check this: http://caffe.berkeleyvision.org/tutorial/layers/batchnorm.html. Setting this to "false" in the testing phase leads to unpredictable results, because the layer...
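For reference, a typical test-phase BatchNorm layer in Caffe prototxt looks like the sketch below (layer and blob names are illustrative, not from the thread):

```protobuf
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param {
    # At test time the layer should use the accumulated global
    # mean/variance; leaving this false makes it use mini-batch
    # statistics, which gives unpredictable test results.
    use_global_stats: true
  }
  include { phase: TEST }
}
```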
Behaves strangely: the loss starts at about 17, slowly decreases to 13-12, and then suddenly rises to 80 and convergence stops. I used Ms-Celeb (not aligned) and an initial softmax...
I tried different values of M, and of lr as well; the result is the same.
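For context on what the margin M does here, this is a minimal NumPy sketch of the ArcFace logit modification being discussed (the scale s and margin m values are illustrative defaults, not taken from the thread):

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """ArcFace: add an additive angular margin m to the target-class angle.

    embeddings: (N, D) feature vectors
    weights:    (D, C) class weight vectors
    labels:     (N,)   ground-truth class indices
    """
    # L2-normalize features and class weights so dot products are cosines
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = e @ w                                  # (N, C) cosine similarities
    theta = np.arccos(np.clip(cos, -1.0, 1.0))   # angles in [0, pi]
    out = cos.copy()
    rows = np.arange(len(labels))
    # Penalize only the target class: cos(theta + m) <= cos(theta)
    out[rows, labels] = np.cos(theta[rows, labels] + m)
    return s * out                               # scaled logits for softmax
```

A too-large m (or too-large lr with an unaligned dataset) can make the target logit so small early in training that the loss diverges, which matches the behavior described above.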
Hi @xialuxi, first of all thanks for your code. I found the ArcFace Caffe version very useful and am now looking at AdaCos. As far as I can see, your...
@artyom-beilis Thanks for your patch! I have tried it, the same as in https://github.com/BVLC/caffe/issues/6970, but encountered large memory utilization in the case of cudnn8. After some tests I have...
> AFAIR I noticed the difference in memory use of cudnn7 vs cudnn8 with other frameworks as well.

Could you tell me more about the other frameworks? I have tried to find...
Anyway, thank you! )
@Qengineering Thanks for your caffe patch! I have applied it, but sometimes I observe strange behavior: for some models, memory usage is about twice as large compared to the CUDA10-cudnn7 environment. Has...
I see, thank you!
@Qengineering Thanks for your answer again! I agree about the backward pass, but as far as I can see the forward pass needs more memory too. I have tried a model with a single conv...
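A single-convolution model like the one described, usable to reproduce the forward-pass memory measurement, could be sketched in prototxt as follows (the input shape and layer parameters are illustrative, not the ones from the comment):

```protobuf
name: "single_conv"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 3
    pad: 1
    stride: 1
  }
}
```

Comparing GPU memory after a forward pass of such a net under cudnn7 vs cudnn8 isolates the convolution workspace from any backward-pass allocations.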