Teng Li
My L_ResNet100_E model is 402 MB and my L_ResNet50_E is 320 MB. How did you get it down to just 166 MB?
OK. Do you use WeChat or another similar app? I want to ask you for some details about how to get 99.5%+ accuracy on LFW. I only got 99.02%. :)
Or please share your train_nets.py and the other modified parts. Thanks.
@HsuTzuJen Is your result better without the bottleneck?
How did you achieve such a large batch size? I am using a P100 12 GB GPU and the maximum batch size is only 50 when employing L_ResNet_100E_IR. Besides, do you mean dropout by...
Thanks @HsuTzuJen. I also have a server with 4 GPUs but don't know how to increase the batch size. There seems to be no difference in maximum batch size between 4...
Only one GPU's memory is fully used. I have set CUDA_VISIBLE_DEVICES to 0,1,2,3. Do you use train_nets.py to train your model? How can I fully use all the GPU memory? ![screen shot...
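For reference, the environment variable is spelled `CUDA_VISIBLE_DEVICES` (case-sensitive); a misspelling like `coda_visible_devices` is silently ignored and the process falls back to GPU 0. A minimal sketch of exposing all four GPUs before launching training (the `train_nets.py` invocation is only illustrative, matching the flags discussed in this thread):

```shell
# Case-sensitive: a typo here is ignored and only GPU 0 gets used.
export CUDA_VISIBLE_DEVICES=0,1,2,3
echo "$CUDA_VISIBLE_DEVICES"
# Then launch training, e.g.:
#   python train_nets.py --batch_size 32
```

Note that setting the variable only makes the GPUs visible; the training script itself must still build a tower per device for all of them to be used.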
OK. Does parser.add_argument('--batch_size', default=32, help='batch size to train network') mean the actual batch size is 4*32?
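If I understand data-parallel training correctly, yes: each GPU processes its own `--batch_size` slice, so the effective batch per optimizer step is `num_gpus * batch_size`. A toy sketch of that arithmetic and the sharding (the helper names are mine, not from train_nets.py):

```python
# Hypothetical sketch: under data parallelism each of the N visible GPUs
# gets its own --batch_size slice, so gradients are averaged over
# N * batch_size examples per step (here 4 * 32 = 128, as in the thread).
NUM_GPUS = 4          # e.g. CUDA_VISIBLE_DEVICES=0,1,2,3
PER_GPU_BATCH = 32    # the --batch_size flag

def effective_batch_size(num_gpus, per_gpu_batch):
    """Total examples consumed per optimizer step."""
    return num_gpus * per_gpu_batch

def shard(batch, num_gpus):
    """Split one global batch into equal per-GPU shards."""
    per_gpu = len(batch) // num_gpus
    return [batch[i * per_gpu:(i + 1) * per_gpu] for i in range(num_gpus)]

print(effective_batch_size(NUM_GPUS, PER_GPU_BATCH))          # 128
print([len(s) for s in shard(list(range(128)), NUM_GPUS)])    # [32, 32, 32, 32]
```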
Are the lines added to train_nets.py for multi-GPU training correct? I suppose you mean the accuracy computation line. When I need a batch size of 128, I...
The momentum optimizer already increments the global step when applying gradients, so inc_op is redundant.
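To spell out why: in TF 1.x, passing `global_step` to `apply_gradients` (or `minimize`) bumps the counter once per step, so running a separate assign-add as well would advance it twice per batch. A toy stand-in (not real TensorFlow) that models just that behavior:

```python
# Toy model of the redundancy: the optimizer itself advances global_step
# when you pass it to apply_gradients, so an extra inc_op double-counts.
class FakeOptimizer:
    """Stand-in for an optimizer that bumps global_step on apply."""
    def apply_gradients(self, grads, global_step):
        # ... gradients would be applied here ...
        global_step[0] += 1  # what apply_gradients(global_step=...) does

step = [0]
opt = FakeOptimizer()
for _ in range(3):
    opt.apply_gradients(grads=None, global_step=step)
    # step[0] += 1   # <- the redundant inc_op would double-count here

print(step[0])  # 3 after 3 batches, not 6
```

The practical consequence of the double increment is that anything keyed on the step counter (learning-rate schedules, checkpoint intervals) fires twice as early as intended.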