Zero Zeng

Results 582 comments of Zero Zeng

Can you try exporting the PyTorch model to ONNX and checking the accuracy with Polygraphy? Reference: https://github.com/NVIDIA/TensorRT/tree/main/tools/Polygraphy https://github.com/NVIDIA/TensorRT/tree/main/tools/Polygraphy/examples/cli/run/01_comparing_frameworks
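For reference, the linked example boils down to a single command that runs the same model under ONNX-Runtime and TensorRT and compares the outputs (`model.onnx` is a placeholder filename):

```shell
# Run the exported model with both ONNX-Runtime and TensorRT
# and compare the outputs numerically; Polygraphy prints per-output
# differences and a PASS/FAIL summary.
polygraphy run model.onnx --onnxrt --trt
```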

You may refer to https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#fusion-types

Or create a reproduction case and run it with trtexec --verbose; you will be able to see the final engine structure in the log, which will tell you whether TRT can support your...
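A minimal invocation for this, assuming the case is exported as an ONNX file (the filename is illustrative):

```shell
# Build an engine with verbose logging; after the build completes,
# the log shows the final (fused) engine structure layer by layer.
trtexec --onnx=model.onnx --verbose
```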

![image](https://user-images.githubusercontent.com/38289304/182869691-dce0feb3-51cb-404d-bfc3-f75cf6e9e2e7.png) I think it should be configurable. @rajeevsrao is the author, but he is OOTO now :)

One question here: how do you compute the memory size? In the TRT verbose log there is memory info about the engine, e.g.

```
54992 [08/09/2022-22:08:24] [I] Engine built...
```

You can see it in the verbose log; try searching for "Engine Layer Information".

> On the hardware, not in the verbose, aren't they the same thing

Hardware memory usage usually includes other modules like cuBLAS and cuDNN, so they are not the same thing....

> Is there any way to get the data type (fp16 or fp32) of layers in a mixed engine during inferencing?

No, it's only logged in the build phase.

> I have...

https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/BuilderConfig.html#tensorrt.BuilderFlag
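As a sketch of how the linked `BuilderFlag` docs are typically used (this assumes a working TensorRT install with a GPU; the variable names are illustrative):

```python
import tensorrt as trt

# Create a builder and its config, then opt in to reduced precision.
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Allow TensorRT to choose FP16 layer implementations where beneficial;
# the build-time verbose log then reports which precision each layer got.
config.set_flag(trt.BuilderFlag.FP16)
```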