Dheeraj Peri
@andi4191 Can you rebase your changes and resolve conflicts?
Just a note: I just hit bug 1, `RuntimeError: .numpy() is not supported for tensor subclasses.`, during `torch_compile` compilation. In my case, the error starts here: https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/conversion/impl/shape.py#L60 The conclusion...
May I know how you obtained this model (model source)? Is it fine-tuned with any quantization technique?
Hello @Feynman1999 This seems like a torchdynamo error. 1) Can you share a reproducer with the model? 2) More details: There are two stages in our compilation. a)...
> I think for this converter, the output shape is dynamic because it depends on the mask. It's similar to a bug reported in #2516. So let's wait for getting...
This issue is because we deepcopy the calibrator object (whose pickling is not defined). Can you replace this line https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/ts/_compile_spec.py#L228 with `compile_spec = compile_spec_`? We shall investigate this...
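For clarity, a minimal sketch of that workaround (the wrapping function below is hypothetical, added only for illustration; the suggestion is just the one-line assignment change at the referenced location):

```python
# Sketch of the suggested change in py/torch_tensorrt/ts/_compile_spec.py.
# The function name and signature here are hypothetical.
def _parse_compile_spec(compile_spec_: dict) -> dict:
    # Original behaviour (assumed): `compile_spec = deepcopy(compile_spec_)`,
    # which fails because the user-provided calibrator does not define pickling.
    # Workaround: reuse the incoming spec object directly instead of copying it.
    compile_spec = compile_spec_
    return compile_spec
```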
Hello @laclouis5 Sorry for the delay. Can you let me know what inputs you are receiving for this function? https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/ptq.py#L24-L25
`calibrator = ptq.CacheCalibrator("calibrator.cache")` is the right usage. You never need to use `get_cache_mode_batch` directly, and the signature for this function is https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/ptq.py#L24. Once you define the `CacheCalibrator` class, it...
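For reference, a rough sketch of how a cache-based calibrator is typically passed to a TorchScript compile call (the model and input shape below are placeholders, not taken from your setup):

```python
import torch
import torch_tensorrt
from torch_tensorrt import ptq

# Placeholder model purely for illustration.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval().cuda()

# Read INT8 scales from an existing calibration cache instead of running calibration.
calibrator = ptq.CacheCalibrator("calibrator.cache")

trt_model = torch_tensorrt.compile(
    model,
    ir="ts",
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.int8},
    calibrator=calibrator,
)
```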
Hello @laclouis5 Sorry for the delay. I have a workaround for you for the error in https://github.com/pytorch/TensorRT/issues/2168#issuecomment-1732336504. You can try using a `DataLoaderCalibrator` with `use_cache=True` to use the calibration cache...
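Roughly along these lines, as a sketch (the dataloader below is a dummy stand-in; with `use_cache=True` the scales should be read from the cache file rather than recomputed from the data):

```python
import torch
import torch_tensorrt
from torch_tensorrt import ptq
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataloader just to satisfy the constructor; with use_cache=True the
# calibration values come from the cache file.
dummy_loader = DataLoader(TensorDataset(torch.randn(8, 3, 224, 224)), batch_size=1)

calibrator = ptq.DataLoaderCalibrator(
    dummy_loader,
    cache_file="calibrator.cache",
    use_cache=True,
    algo_type=ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
    device=torch.device("cuda:0"),
)
```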
From the error log, it looks like a system version issue. Can you try building and installing the pytorch-quantization toolkit from source? https://github.com/NVIDIA/TensorRT/tree/release/8.6/tools/pytorch-quantization#pytorch-quantization If that doesn't work, please post this...