Nonuniform-to-Uniform-Quantization
Accuracy of the floating-point ResNet18 model?
Hello, thanks for your excellent work and code! One question confuses me. In Table 1 of your paper, the Top-1 accuracy of the pre-trained FP ResNet18 model is 71.8%. But in your code, the pre-trained FP ResNet18 model comes from torchvision, and its Top-1 accuracy is 69.758%. The torchvision pre-trained weight is defined in [https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py], lines 312 to 329. Why are these numbers so different? Did I use the right pre-trained weight (resnet18-f37072fd.pth)?
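For reference, here is a minimal sketch (assuming torchvision >= 0.13, which exposes the `ResNet18_Weights` enum) of how one can check which checkpoint torchvision downloads and the accuracy torchvision itself reports for it; the exact contents of `weights.meta` may vary between torchvision versions.

```python
# Sketch: confirm which pre-trained ResNet-18 checkpoint torchvision uses
# and what ImageNet accuracy torchvision documents for it.
# Assumes torchvision >= 0.13 (ResNet18_Weights enum).
import torchvision.models as models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.IMAGENET1K_V1
print(weights.url)   # expected to end with resnet18-f37072fd.pth
print(weights.meta)  # torchvision's reported metrics (acc@1 = 69.758 in current docs)

model = models.resnet18(weights=weights)
model.eval()
```

If the printed URL ends with `resnet18-f37072fd.pth` and the reported acc@1 is 69.758, then the checkpoint loaded by the code matches the standard torchvision weight rather than the 71.8% model reported in Table 1.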
I have the same question. Have you got any answers?