Chao Zhang
Going to need to rebase this on master
I'm currently taking a look at this...
Hmm, thinking about this some more, maybe the dry-run option doesn't actually provide that much value. Chances are, if you're able to run `torch2trt` on your inputs, then...
Just tested this on master `d1768aa3d2c7d7d91d9f061e3e5dc5f976124dfe` built in NGC pytorch:22.08-py3, but I'm still seeing the same errors running the above script:

```
root@2e1a9b9e2880:/opt/TensorRT# python /scripts/gathernd.py
PyTorch: torch.Size([2, 4])
WARNING: [Torch-TensorRT]...
```
Tested the latest release 1.2.0 using NGC `nvcr.io/nvidia/pytorch:22.09-py3`. Running the above script, I'm still seeing the same errors:

```
PyTorch: torch.Size([2, 4])
WARNING: [Torch-TensorRT] - For input x.1, found user...
```
I'll put up a PR for this.
This is needed for #755
I believe this is a TRT issue: you can't permute the batch dimension in TRT (see [here](https://forums.developer.nvidia.com/t/tensorrt-does-not-support-permute-in-n-batch-dimension/69925)), whereas PyTorch doesn't have such a restriction, since TRT assumes the first dim...
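To illustrate the PyTorch side of the mismatch, here is a minimal sketch (the tensor shape is just an example, not taken from the repro script):

```python
import torch

# In PyTorch, dim 0 (the batch dim) can be permuted like any other dim,
# which is exactly what TRT's implicit-batch restriction disallows.
x = torch.arange(8).reshape(2, 4)  # dim 0 is the batch dimension
y = x.permute(1, 0)                # swaps the batch dim with dim 1
print(y.shape)                     # torch.Size([4, 2])
```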
This issue should be addressed by #738.
Let me take a look at this. This is probably from our usage of the explicit batch dimension, which might require different optimization profiles after all, in order to...
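For reference, a rough sketch of what setting up an optimization profile looks like in the TensorRT Python API; this is a config fragment that needs a TensorRT install to run, and the input name `"x.1"` and the shapes are placeholders, not values from the actual model:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# One optimization profile covering a range of batch sizes for a
# hypothetical input named "x.1" (placeholder name and shapes).
profile = builder.create_optimization_profile()
profile.set_shape("x.1", min=(1, 4), opt=(2, 4), max=(8, 4))
config.add_optimization_profile(profile)
```

With explicit batch, the engine builder requires at least one such profile whenever any input has a dynamic dimension.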