CalibTIP
Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
Good afternoon, I found your paper very interesting and wanted to try out your code. I have several questions and would be grateful if you could answer them: 1) In...
Hello. I was trying to run the code as instructed but got this error. Is there anything I am missing to add during the run?...
When I use advanced_pipeline.sh, I hit an error at line 3, "sh scripts/integer-programing.sh resnet resnet50 4 4 8 8 50 loss True", which fails with 'No such file or directory: 'results/resnet50_w8a8.adaquant/IP_resnet50_loss.txt'....
https://github.com/itayhubara/CalibTIP/blob/69077c92611b079234706784c344e8c9156f3283/main.py#L481 The [0] indexes into the first batch only. Isn't sequential AdaQuant supposed to update the input cache of all batches to the quantized values?
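The question above can be made concrete with a small sketch. This is not the repo's actual code; the function and variable names are hypothetical, and it only illustrates what "updating the input cache of all batches" would mean in sequential AdaQuant:

```python
def refresh_cached_inputs(quantized_layer, cached_batches):
    """Hypothetical sketch: after calibrating one layer in sequential
    AdaQuant, re-run *every* cached batch (not just batch [0]) through
    the quantized layer, so the next layer's input cache reflects the
    quantized activations rather than the full-precision ones."""
    return [quantized_layer(batch) for batch in cached_batches]


# Toy usage: a "quantized layer" that rounds its inputs.
quantize = lambda batch: [round(x) for x in batch]
cache = [[0.4, 1.6], [2.2, -0.7]]
new_cache = refresh_cached_inputs(quantize, cache)
# new_cache == [[0, 2], [2, -1]]
```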
Hello and thank you for an interesting paper! I have a question concerning the optimization of the quantization step size. In section D.1 of the paper, you mentioned the usage...
This implies calculating the MSE between relu(conv_out) for the conv1 and conv2 layers: https://github.com/itayhubara/CalibTIP/blob/69077c92611b079234706784c344e8c9156f3283/utils/adaquant.py#L124 But in the ResNet architecture, conv2 is not directly followed by a ReLU. Instead it is followed by a...
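To make the concern above concrete, here is a minimal, dependency-free sketch of the layer-wise objective being discussed: MSE between quantized and full-precision layer outputs, optionally after a ReLU. The function name and flag are hypothetical, not the repo's API; in a ResNet bottleneck, conv2's output feeds a residual add *before* any ReLU, so `apply_relu=False` would match that block's actual dataflow:

```python
def layerwise_mse(q_out, fp_out, apply_relu=True):
    """Hypothetical sketch: mean squared error between quantized (q_out)
    and full-precision (fp_out) layer outputs, optionally measured after
    a ReLU. Whether the ReLU belongs here depends on whether the layer
    is actually followed by one in the network."""
    act = (lambda v: max(v, 0.0)) if apply_relu else (lambda v: v)
    diffs = [(act(q) - act(f)) ** 2 for q, f in zip(q_out, fp_out)]
    return sum(diffs) / len(diffs)


# Toy usage: negative values are masked by the ReLU, changing the loss.
loss_relu = layerwise_mse([1.0, -1.0], [1.0, 1.0], apply_relu=True)   # 0.5
loss_raw = layerwise_mse([1.0, -1.0], [1.0, 1.0], apply_relu=False)  # 2.0
```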
What data do you use in /media/drive/Datasets/imagenet/calib?
Hi! Thanks for your work! In Table 2, you showed the results of the w4a4 configuration for ResNet models, but you omitted the result for MobileNet-V2 in that table. From figure...