Results 25 comments of Anurag Dixit

@noman-anjum-retro: Can you try this? I added an option in torchtrtc to support custom Torch ops or Torch-TensorRT converters. I used the windows.h header and LoadLibrary for loading the...

It seems to be complaining about FileNotFound. IIRC, on Windows, paths are written with escaped backslashes, e.g.: \\src\\exploration\\action_recognition\\torchtrt_runtime.dll. Can you try the above and share your observation?
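As a side note, here is a minimal sketch of why an unescaped Windows path can trigger FileNotFound from Python; the path itself is hypothetical, not the one from the issue:

```python
# In a normal Python string literal, sequences like "\t" and "\r" are
# escape characters, so a Windows path with single backslashes can be
# silently corrupted before it ever reaches the loader.
broken = "C:\torchtrt\runtime.dll"      # "\t" becomes a TAB, "\r" a CR
escaped = "C:\\torchtrt\\runtime.dll"   # doubled backslashes survive intact
raw = r"C:\torchtrt\runtime.dll"        # raw string: no escape processing

print("\t" in broken)    # the "path" now contains a tab character
print(escaped == raw)    # both spell the same, correct path
```

Either the doubled-backslash form or a raw string avoids the problem; the loader then receives the path exactly as written.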

Hi @noman-anjum-retro, I think the problem is with the mode you are using while loading the symbol tables. I tried the following and it works: ``` import ctypes import torch...
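For context, a minimal sketch of what that mode change looks like with ctypes. The library name below is an assumption for illustration: on Linux, glibc's `libm.so.6` stands in for the torchtrt runtime library, and `RTLD_GLOBAL` is what makes the loaded symbols visible to libraries loaded afterwards (e.g. a serialized program that expects the runtime's symbols to already be resolvable):

```python
import ctypes

# Load a shared library with RTLD_GLOBAL so its exported symbols are
# placed in the global namespace and can be resolved by subsequently
# loaded libraries. "libm.so.6" is only a stand-in here; the original
# issue loaded the torchtrt runtime library instead.
lib = ctypes.CDLL("libm.so.6", mode=ctypes.RTLD_GLOBAL)

# Sanity check that the library actually loaded and symbols resolve.
lib.cos.restype = ctypes.c_double
lib.cos.argtypes = [ctypes.c_double]
print(lib.cos(0.0))  # 1.0
```

With the default mode (`RTLD_LOCAL` on most platforms) the library loads, but its symbols stay private, which is typically why a later lookup fails.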

@kyikiwang: Where are you comparing the predictions? I only see latency benchmark comparisons in the code snippet you shared here. Could you please share the workflow you are using to...

@narendasan: We can add a check of the requested precision against the SM compute capability, per the support matrix: https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#hardware-precision-matrix. This check would have to be done at runtime, though.
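A rough sketch of such a runtime check. The capability-to-precision table below is a hand-written, partial, hypothetical subset standing in for the official hardware precision matrix, and the function name is illustrative:

```python
# Hypothetical subset of the hardware precision matrix:
# (major, minor) SM compute capability -> precisions assumed supported.
SUPPORTED_PRECISIONS = {
    (7, 0): {"fp32", "fp16"},
    (7, 5): {"fp32", "fp16", "int8"},
    (8, 0): {"fp32", "fp16", "int8", "tf32"},
}

def check_precision(sm: tuple, precision: str) -> None:
    """Raise a clear error instead of letting the build crash later."""
    supported = SUPPORTED_PRECISIONS.get(sm, {"fp32"})
    if precision not in supported:
        raise ValueError(
            f"Precision {precision!r} is not supported on SM {sm[0]}.{sm[1]}"
        )

check_precision((7, 5), "int8")      # passes silently
try:
    check_precision((7, 0), "int8")  # raises: not in the table for SM 7.0
except ValueError as e:
    print(e)
```

The point is simply to surface an actionable error at the precision-request boundary rather than deep inside engine building.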

I don't think we can do anything other than throw an error instead of letting it crash. IIRC, if the engines are not portable across compute capabilities, TensorRT fails at...

> ## Bug Description
>
> I'm completely new to Docker but, after trying unsuccessfully to install Torch-TensorRT with its dependencies, I wanted to try this approach. However, when I try...

Hi @bowang007, could you please review this PR for merge?

@oazeybekoglu: Just curious, does my [PR](https://github.com/pytorch/TensorRT/pull/2678) fix your issue?

> > @oazeybekoglu: Just curious, does my [PR](https://github.com/pytorch/TensorRT/pull/2678) fix your issue?
>
> Hey @andi4191, yes it fixes the issue.

@oazeybekoglu: Thank you for confirming. @peri044: Please let...