Guy Jacob

11 comments by Guy Jacob

Apologies for the very late response. To your questions:
1. You're correct, both sets of weights are kept. I don't think deleting the set of FP32 weights would work, because...
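For context, a minimal sketch of why the FP32 copy matters in quantization-aware training (illustrative only, not Distiller's actual code): the FP32 master weights accumulate the small gradient updates, while a quantized copy derived from them is what the forward pass uses.

```python
import torch

def fake_quantize(w, num_bits=8):
    # Illustrative symmetric linear quantizer: round to an integer grid,
    # then map back to float ("fake" quantization).
    scale = w.abs().max() / (2 ** (num_bits - 1) - 1)
    return torch.round(w / scale) * scale

# One schematic QAT step. The FP32 master weight receives the gradient
# update; the quantized copy is recomputed from it every iteration, so
# deleting the FP32 set would leave nothing to accumulate updates into.
w_fp32 = torch.randn(4, 4, requires_grad=True)   # FP32 master weights (kept)
optimizer = torch.optim.SGD([w_fp32], lr=0.1)

x = torch.randn(4, 4)
w_q = fake_quantize(w_fp32.detach())             # quantized copy (also kept)
# Straight-through estimator: forward uses w_q, gradients flow to w_fp32.
w_used = w_fp32 + (w_q - w_fp32).detach()
loss = (x @ w_used).pow(2).mean()
loss.backward()
optimizer.step()
```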

Although quantized ops were indeed published in the ONNX spec, support for them is still a work in progress in PyTorch itself. Therefore, a simple export using `torch.onnx` isn't possible yet. While it might be possible to...
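For reference, the "simple export" path that works for regular FP32 models looks like this (a sketch; the model, shapes, and filename are made up):

```python
import torch
import torch.nn as nn

# Plain FP32 models export fine via tracing:
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
dummy_input = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy_input, "model.onnx")

# A model whose graph contains quantization ops that torch.onnx cannot yet
# map to the ONNX quantized operators fails at this call with an
# unsupported-operator error.
```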

Thank you @jonathanbonnard! Much appreciated.

The flow is now working for `Quantizer`s that **don't** involve adding new param groups to the optimizer. That is, it doesn't work for quantizers that implement `_get_new_optimizer_params_groups()` (e.g. `PACTQuantizer`)...
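To illustrate what "adding new param groups" means here (a hypothetical sketch, not Distiller's actual internals): a PACT-style quantizer introduces learnable clipping values that must be registered with the optimizer as an extra param group, which is what breaks the flow above.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# A PACT-style quantizer adds learnable clipping values (often called
# "alpha") that are not among the original model's parameters.
alpha = nn.Parameter(torch.tensor(8.0))
optimizer.add_param_group({'params': [alpha], 'lr': 0.001})

# The optimizer now has one more param group than one freshly built from
# model.parameters(), so a restore flow that rebuilds the optimizer before
# the quantizer runs will not match the saved optimizer state.
```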

Apologies for the very late response. By default, Distiller quantizes both weights and activations. It should also automatically quantize the input to the network, if that's what you're referring to...
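A rough sketch of what quantizing both weights and activations (network input included) amounts to; the helper and wrapper below are hypothetical, not Distiller's API:

```python
import torch
import torch.nn as nn

def quantize_tensor(t, num_bits=8):
    # Illustrative asymmetric linear quantize-dequantize of a tensor.
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (t.max() - t.min()) / (qmax - qmin)
    zero_point = qmin - t.min() / scale
    q = torch.clamp(torch.round(t / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale

class QuantWrapper(nn.Module):
    # Wraps a layer so both its weight and the activation feeding it are
    # quantized. For the first layer, that activation IS the network input.
    def __init__(self, layer):
        super().__init__()
        self.layer = layer

    def forward(self, x):
        x = quantize_tensor(x)                            # activation/input
        w = quantize_tensor(self.layer.weight.data)       # weight
        return nn.functional.linear(x, w, self.layer.bias)

net = QuantWrapper(nn.Linear(16, 4))
out = net(torch.randn(2, 16))
```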

Hi, we're looking into the model zoo issue. Hope to have an update in the coming days. Regarding the missing file you mentioned, I just pushed a commit restoring it.

Apologies it took so long. The links are live again, **for the moment**. I suggest you download the files soon, as I'm not sure what the status of the...

Hi @ashutoshmishra1014, @listener17,
Indeed, the last officially supported PyTorch version in Distiller is 1.3.1. At the moment we don't have plans to update it to support later versions. You might...

Modified the title to better reflect what's going on

@nzmora, note that as far as I can tell, the hooks implemented by all collectors (incl. your original ones) assume the inputs are `torch.Tensor`s. So while @barrh ran the quantization...
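As a sketch of the failure mode (the hook name and printout are made up): `register_forward_hook` passes the hook a tuple of whatever the module's `forward()` received, so a collector that calls tensor methods on each element breaks as soon as an input isn't a `torch.Tensor`.

```python
import torch
import torch.nn as nn

def stats_hook(module, inputs, output):
    # `inputs` is a tuple of whatever forward() received; nothing
    # guarantees each element is a torch.Tensor.
    for inp in inputs:
        if not isinstance(inp, torch.Tensor):
            # e.g. a list of tensors, None, or a custom object
            continue  # (or recurse / log, depending on the collector)
        print(module.__class__.__name__, 'input min/max:',
              inp.min().item(), inp.max().item())

layer = nn.Linear(8, 8)
layer.register_forward_hook(stats_hook)
layer(torch.randn(2, 8))
```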