HsuTzuJen

Results 31 comments of HsuTzuJen

@TengliEd No, I got the same result as L_Resnet_E_IR.py. I think we should try the InsightFace (mxnet) implementation, but I am not familiar with mxnet.

@TengliEd Thank you very much. I just checked, and the size of the model is only 166MB without 4* base_depth. @auroua I think we can use...

The model is 320MB when it includes the logit layer "arcface_loss(embedding, labels, out_num, w_init=None, s=64., m=0.5)" (it is a fully connected layer for the softmax, and it is only needed during training)...
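For reference, a minimal sketch of what that logit layer does (additive angular margin on the cosine logits). The function name arcface_logits, the Xavier initializer, and the clipping epsilon are my own choices and may differ from the repo's arcface_loss:

```python
import tensorflow as tf

def arcface_logits(embedding, labels, out_num, s=64.0, m=0.5):
    # L2-normalize embeddings and class weights so their dot product is cos(theta).
    weights = tf.get_variable('arcface_weights',
                              shape=(embedding.get_shape().as_list()[-1], out_num),
                              initializer=tf.contrib.layers.xavier_initializer())
    emb_norm = tf.nn.l2_normalize(embedding, axis=1)
    w_norm = tf.nn.l2_normalize(weights, axis=0)
    cos_t = tf.matmul(emb_norm, w_norm)                        # [batch, out_num]

    # Add the angular margin m to the target class only, then rescale by s.
    theta = tf.acos(tf.clip_by_value(cos_t, -1.0 + 1e-7, 1.0 - 1e-7))
    cos_t_m = tf.cos(theta + m)
    target_mask = tf.cast(tf.one_hot(labels, depth=out_num), tf.bool)
    logits = s * tf.where(target_mask, cos_t_m, cos_t)
    return logits
```

Because this layer's weight matrix scales with the number of training identities, dropping it from the exported graph is what makes the inference model smaller.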

I just use the 50-layer ResNet from slim.nets.resnet_v2.py: resnet_v2_50(inputs, num_classes=None, is_training=True, global_pool=False, output_stride=None, reuse=None, scope='resnet_v2_50'). I achieved 0.99216666666 with batch size 128. I am continuing the training with cutout...
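A minimal sketch of how those arguments can be wired up with slim; the 112x112 input size and the 512-d embedding projection are my assumptions, not taken from the repo:

```python
import tensorflow as tf
from tensorflow.contrib.slim.nets import resnet_v2

slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 112, 112, 3])       # assumed input size
with slim.arg_scope(resnet_v2.resnet_arg_scope()):
    net, end_points = resnet_v2.resnet_v2_50(images,
                                              num_classes=None,   # no classifier head
                                              is_training=True,
                                              global_pool=False,
                                              output_stride=None,
                                              reuse=None,
                                              scope='resnet_v2_50')
# Flatten the final feature map and project it to the embedding dimension.
net = slim.flatten(net)
embedding = slim.fully_connected(net, 512, activation_fn=None, scope='embedding')
```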

I have a GPU server with four 1080 Ti cards. I do not use dropout. Cutout is an image preprocessing method that is easy to implement. The paper: https://arxiv.org/abs/1708.04552
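A minimal numpy sketch of Cutout as described in that paper; the mask_size default is a placeholder and should be tuned for the input resolution:

```python
import numpy as np

def cutout(image, mask_size=32):
    # Zero out a random square patch of an HxWxC image (Cutout, arXiv:1708.04552).
    h, w = image.shape[:2]
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip([cy - mask_size // 2, cy + mask_size // 2], 0, h)
    x1, x2 = np.clip([cx - mask_size // 2, cx + mask_size // 2], 0, w)
    out = image.copy()
    out[y1:y2, x1:x2, :] = 0
    return out
```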

Please use train_nets_mgpu.py and set parser.add_argument('--num_gpus', default=4, help='the num of gpus').

Just set 128 if you want a batch size of 128; it will be split into num_gpus parts.
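To illustrate the split, here is a sketch of the usual in-graph data-parallel pattern; build_tower and the toy network inside it are placeholders, not the repo's code:

```python
import tensorflow as tf

num_gpus = 4
images = tf.placeholder(tf.float32, [128, 112, 112, 3])    # global batch of 128
labels = tf.placeholder(tf.int64, [128])

def build_tower(x):
    # Stand-in for the real network; returns per-class logits.
    return tf.layers.dense(tf.layers.flatten(x), 10)

image_shards = tf.split(images, num_gpus, axis=0)           # four shards of 32
label_shards = tf.split(labels, num_gpus, axis=0)
tower_losses = []
for i in range(num_gpus):
    with tf.device('/gpu:%d' % i), \
         tf.variable_scope(tf.get_variable_scope(), reuse=(i > 0)):
        logits = build_tower(image_shards[i])
        tower_losses.append(
            tf.losses.sparse_softmax_cross_entropy(label_shards[i], logits))
total_loss = tf.reduce_mean(tower_losses)
```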

@auroua The TensorFlow documentation says that NCHW is better than NHWC in training mode with CUDA, and NHWC is better in inference mode. Is there any way to...
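One common workaround (not from this repo) is to keep NHWC at the graph boundary and transpose into NCHW around the convolutions; a minimal sketch with tf.layers, where conv_block_nchw is a hypothetical helper:

```python
import tensorflow as tf

def conv_block_nchw(x_nhwc, filters):
    # Accept NHWC at the boundary, run the convolution in NCHW for cuDNN speed.
    x = tf.transpose(x_nhwc, [0, 3, 1, 2])                   # NHWC -> NCHW
    x = tf.layers.conv2d(x, filters, 3, padding='same',
                         data_format='channels_first')
    x = tf.nn.relu(x)
    return tf.transpose(x, [0, 2, 3, 1])                     # NCHW -> NHWC
```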

@ruobop I just used the same settings as the paper, but I got this:
C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\_methods.py:70: RuntimeWarning: overflow encountered in reduce
ret = umr_sum(arr, axis, dtype, out, keepdims)
total_step 1520, total...

@billtiger The best accuracy I achieved is 0.9875 with batch size 64 at step 348000. I think that maybe we should change the lr step schedule.
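If the lr step does get changed, a piecewise-constant schedule is the usual way to express it in TF 1.x; the boundaries and values below are placeholders, not the repo's settings:

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
boundaries = [100000, 200000, 300000]                 # placeholder decay steps
values = [0.01, 0.001, 0.0001, 0.00001]               # placeholder learning rates
learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
```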