quantized.pytorch
Hi @eladhoffer @itayhubara, I see that in the quantize.py file, self.weight is left unchanged and qweight is only used to compute gradients. This results in the use of full precision...
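For context, a minimal sketch of the pattern this issue describes, using a hypothetical `QuantLinear` layer (not the repo's actual class): the full-precision `self.weight` is retained as the master copy, a quantized `qweight` is used in the forward pass, and the straight-through trick routes the gradients back to the full-precision weight.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantLinear(nn.Linear):
    """self.weight stays full precision; the forward pass uses a
    quantized copy, and the straight-through trick routes gradients
    back to the full-precision weight."""

    def quantize(self, w, num_bits=8):
        # Uniform quantization over the weight's observed range.
        qmax = 2 ** num_bits - 1
        scale = ((w.max() - w.min()) / qmax).clamp(min=1e-8)
        q = ((w - w.min()) / scale).round() * scale + w.min()
        # Forward evaluates to q; backward sees the identity w -> w.
        return w + (q - w).detach()

    def forward(self, input):
        qweight = self.quantize(self.weight)  # weight itself unchanged
        return F.linear(input, qweight, self.bias)

layer = QuantLinear(4, 2)
layer(torch.randn(3, 4)).sum().backward()  # grads land on the FP32 weight
```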
Hi! It seems like a cheat to me when performing the '+' operation at the end of a residual block in the quantized ResNet implementation. It requires a 16-bit accumulator to get the...
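To see why the residual add needs a wider accumulator, a small numeric sketch with NumPy and hypothetical values: the sum of two int8 branch outputs can exceed the int8 range, so accumulating in 8 bits wraps around.

```python
import numpy as np

# Two 8-bit branch outputs of a residual block, near the top of the range.
a = np.array([100], dtype=np.int8)
b = np.array([100], dtype=np.int8)

# Summing in 8 bits wraps around: 200 does not fit in [-128, 127].
print(a + b)                                    # [-56]  (overflow)

# A 16-bit accumulator holds the true sum, which can then be
# re-quantized back to 8 bits.
print(a.astype(np.int16) + b.astype(np.int16))  # [200]
```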
```
TRAINING - Epoch: [0][410/446] Time 0.602 (0.622) Data 0.000 (0.005) Loss 4.0999 (5.5282) Prec@1 2.344 (3.435) Prec@5 19.531 (14.536)
TRAINING - Epoch: [0][420/446] Time 0.602 (0.622) Data 0.000 (0.005) Loss...
```
If my input is a torch.autograd.Variable, how should I correct the code? I get the error:
```
File "example/mpii.py", line 352, in <module>
    main(parser.parse_args())
File "example/mpii.py", line 107, in main
    train_loss, train_acc = train(train_loader,...
```
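The traceback is truncated, so the exact fix is unclear; a minimal sketch assuming pre-0.4-style PyTorch, where `torch.autograd.Variable` wraps a tensor for autograd and `.data` unwraps it again:

```python
import torch
from torch.autograd import Variable

x = torch.randn(1, 3, 32, 32)
v = Variable(x)   # wrap before the forward pass if the code expects Variables
t = v.data        # unwrap to a plain tensor for tensor-only operations
```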
It seems that in *RangeBN*, scale_fix should be as in the paper, but here it becomes scale_fix = ...
I noticed that you don't cancel the gradient for large values when using the straight-through estimator [here](https://github.com/eladhoffer/quantized.pytorch/blob/master/models/modules/quantize.py#L89). In the QNN paper it was claimed "Not cancelling the gradient when r is...
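For reference, a minimal sketch of the behavior the QNN paper describes, written as a custom `torch.autograd.Function` (a hypothetical `BinarizeSTE`, not the repo's code): the straight-through estimator passes the gradient only where |r| <= 1 and cancels it elsewhere.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator that
    cancels the gradient for saturated inputs (|r| > 1)."""

    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input.sign()

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        # Cancel the gradient where the input has saturated.
        grad_input[input.abs() > 1] = 0
        return grad_input

x = torch.randn(5, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()  # x.grad is zero where |x| > 1
```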
Hi, thank you for posting your code! I think there's a mismatch of the argument order between [here](https://github.com/eladhoffer/quantized.pytorch/blob/master/models/modules/quantize.py#L25) and [here](https://github.com/eladhoffer/quantized.pytorch/blob/master/models/modules/quantize.py#L141).
```
def forward(cls, ctx, input, num_bits=8, min_value=None, max_value=None, stochastic=False,...
```
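A small self-contained illustration of why such a mismatch goes unnoticed, with a hypothetical stand-in for the signature (not the repo's code): a positional call written against a different argument order silently misbinds, while keyword arguments are order-independent.

```python
def forward(ctx, input, num_bits=8, min_value=None, max_value=None,
            stochastic=False):
    # Hypothetical stand-in that just echoes how arguments were bound.
    return dict(num_bits=num_bits, min_value=min_value,
                max_value=max_value, stochastic=stochastic)

# A positional call written against a different argument order silently
# misbinds: True lands in min_value instead of stochastic.
print(forward(None, None, 8, True))
# -> {'num_bits': 8, 'min_value': True, 'max_value': None, 'stochastic': False}

# Keyword arguments make the binding explicit and order-independent.
print(forward(None, None, num_bits=8, stochastic=True))
```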
Hi, I am trying to run prediction but hitting a roadblock with CUDA not supporting Byte tensors:
```
d, l = next(iter(train_loader))
d, l = d.type(torch.ByteTensor), l.type(torch.ByteTensor)
d, l =...
```
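A common workaround, sketched with dummy stand-ins for the issue's `train_loader` and `model`: most layer kernels have no ByteTensor implementation, so cast the inputs to float and the labels to long rather than casting everything to byte.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-ins for the issue's train_loader and model.
data = torch.randint(0, 256, (8, 3, 32, 32), dtype=torch.uint8)
labels = torch.randint(0, 10, (8,))
train_loader = DataLoader(TensorDataset(data, labels), batch_size=4)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

d, l = next(iter(train_loader))
# Linear/conv layers have no ByteTensor kernels: cast the input to
# float (normalizing if needed) and the labels to long, not to byte.
d = d.float() / 255.0
l = l.long()
output = model(d)  # move both to .cuda() first if a GPU is available
```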
Is there any method for running inference and testing the model?
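No dedicated method is shown in the preview, but a standard PyTorch evaluation loop works for any of the repo's models; a minimal sketch with a hypothetical `evaluate` helper:

```python
import torch

def evaluate(model, loader, device='cpu'):
    """Inference/test pass: eval mode disables dropout and uses the BN
    running statistics; no_grad skips gradient tracking."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            preds = model(inputs).argmax(dim=1)
            correct += (preds == targets).sum().item()
            total += targets.size(0)
    return correct / total
```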
I want to find the 'parameters' method in the models, but I couldn't find it. How can I fix that?
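`parameters` is inherited from `nn.Module` rather than defined in the model files themselves, which is likely why searching them turns up nothing; a minimal sketch with a stand-in model:

```python
import torch.nn as nn
import torch.optim as optim

# Every model in the repo subclasses nn.Module, so `parameters` is
# inherited rather than defined in the model file itself.
model = nn.Linear(10, 2)                            # stand-in for any model
print(sum(p.numel() for p in model.parameters()))   # total parameter count
optimizer = optim.SGD(model.parameters(), lr=0.1)   # typical use
```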