
multi-gpu results WRONG

Open · mikeseven opened this issue 4 years ago · 1 comment

I couldn't reproduce the results from the examples in the readme on my 4 GPUs. Using batch size 256 on a single GPU works, but each additional GPU lowers precision@5 by roughly 10%, so with 4 GPUs the results were around 60%!

Please fix this bug as soon as possible.

Thanks, --mike

mikeseven · Apr 20 '20 02:04

After a lot of testing, I think this may be a limitation of PyTorch inference. It is probably better to disable multi-GPU execution via `device_ids`, or use it to specify a single GPU. In that case there is no need to use distributed mode at all.
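For reference, a minimal sketch of what I mean, assuming the evaluation script wraps the model in `torch.nn.DataParallel` (the model and exact wrapper call in this repo may differ; `resnet18` is only a stand-in):

```python
import torch
import torchvision

# Placeholder model for illustration; the repo's own quantized model goes here.
model = torchvision.models.resnet18(pretrained=True)

# Restrict DataParallel to a single GPU instead of spreading inference
# across all visible devices.
model = torch.nn.DataParallel(model, device_ids=[0]).cuda()

# ...or drop the wrapper entirely and run plain single-GPU inference:
# model = model.to("cuda:0")

model.eval()
```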

There is also no need to clear the CUDA cache in the code; it is better to simply reduce the batch size. A batch size of 256 works well on 11 GB GPUs (e.g. an RTX 2080 Ti).
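A rough illustration of that setup, assuming a standard `torch.utils.data` evaluation loop (the random `TensorDataset` below is just a placeholder for the ImageNet validation set):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for the real validation data.
val_dataset = TensorDataset(torch.randn(1024, 3, 224, 224),
                            torch.zeros(1024, dtype=torch.long))

# batch_size=256 fits comfortably in the ~11 GB of an RTX 2080 Ti for typical
# ImageNet models, so there is no need to call torch.cuda.empty_cache().
val_loader = DataLoader(val_dataset, batch_size=256, shuffle=False,
                        num_workers=4, pin_memory=True)
```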

mikeseven · Apr 21 '20 18:04