
Size and accuracy

Open Cyber-Neuron opened this issue 4 years ago • 5 comments

Hi,

Based on the provided pretrained model (res18_2bit), I got 64.690% top-1 accuracy, and the quantized model size is 5 MB (gzip) or 3.4 MB (7zip). This is quite different from the results in your paper. Can you please point out why that is? I just ran: `python main.py -a resnet18 --bit 2 --pretrained resnet18_2bit.pth`
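For reference, this is how I measured the compressed size: gzip-compress the checkpoint file and report the byte counts. A minimal, self-contained sketch (the 1 MiB zero-filled payload below is a stand-in for the actual `resnet18_2bit.pth` file):

```python
import gzip
import os
import tempfile

def gzipped_size(path: str) -> int:
    """Return the size in bytes of the gzip-compressed contents of `path`."""
    with open(path, "rb") as f:
        data = f.read()
    return len(gzip.compress(data))

# Stand-in payload; in practice `path` would point at the checkpoint,
# e.g. resnet18_2bit.pth.
with tempfile.NamedTemporaryFile(delete=False, suffix=".pth") as tmp:
    tmp.write(b"\x00" * (1024 * 1024))  # 1 MiB of zeros compresses well
    path = tmp.name

raw = os.path.getsize(path)
packed = gzipped_size(path)
print(f"raw: {raw} bytes, gzipped: {packed} bytes")
os.unlink(path)
```

Note that gzip and 7zip sizes differ because the entropy coders differ, so neither is a direct measure of the theoretical 2-bit storage cost.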

Thanks

Cyber-Neuron avatar Jan 16 '21 04:01 Cyber-Neuron


Hi, I met the same problem. The reproduced accuracy is 64.74%, which is much lower than the result in the paper. Have you solved this problem?

Euphoria16 avatar Feb 28 '21 15:02 Euphoria16

> Hi, I met the same problem. The reproduced accuracy is 64.74%, which is much lower than the result in the paper. Have you solved this problem?

Kind of. The batch size matters; however, the accuracy is still around 65%, which is on par with other 2-bit quantization methods.

Cyber-Neuron avatar Mar 08 '21 17:03 Cyber-Neuron

Hi,

The accuracy mismatch is probably due to a difference between the data-loader implementation in my training environment and the official PyTorch one.
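Small data-loader differences (interpolation mode, resize/crop sizes) routinely shift ImageNet top-1 by a few tenths of a percent. As a sketch of where such differences enter, here is the arithmetic of the standard torchvision-style validation pipeline (resize the shorter side, then center-crop); the function name and defaults are illustrative, not the repo's actual code:

```python
def eval_crop_box(w: int, h: int, resize: int = 256, crop: int = 224):
    """Standard ImageNet eval preprocessing: scale the shorter side to
    `resize`, then take a centered `crop` x `crop` box.
    Returns the resized dimensions and the crop box (left, top, right, bottom)."""
    scale = resize / min(w, h)
    rw, rh = round(w * scale), round(h * scale)
    left = (rw - crop) // 2
    top = (rh - crop) // 2
    return (rw, rh), (left, top, left + crop, top + crop)

# A 500x375 image is resized to 341x256, then cropped to the central 224x224.
print(eval_crop_box(500, 375))
```

A loader that resizes directly to 224x224, or uses a different rounding or interpolation, feeds the network slightly different pixels and can account for the gap.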

Did you verify it through direct training?

yhhhli avatar Mar 20 '21 15:03 yhhhli

Hi, I found a typo in the data loader; can you test it now?

yhhhli avatar Mar 30 '21 09:03 yhhhli