TensorRT
Building an INT8 engine fails.
Description
I created a model which has only one fully connected layer and want to build it into an INT8 engine, but the build produces an FP32 engine file instead. Could anyone help me? Here is the script: test.py.zip
Steps To Reproduce
You didn't specify the INT8 config flag. Please refer to https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/BuilderConfig.html#tensorrt.BuilderFlag and check our documentation carefully.
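For reference, a minimal sketch of enabling the flag on the builder config (names such as `MyCalibrator` and the network-population step are placeholders, not taken from the attached script):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
config = builder.create_builder_config()

# Enable INT8 precision; without this flag the builder silently
# produces an FP32 engine.
config.set_flag(trt.BuilderFlag.INT8)

# INT8 also needs calibration data (or explicit per-tensor dynamic
# ranges) so the builder can pick quantization scales.
# `MyCalibrator` is a hypothetical IInt8EntropyCalibrator2 subclass.
# config.int8_calibrator = MyCalibrator()

# ... populate `network` (layers, or an ONNX parser), then build:
# engine = builder.build_engine(network, config)
```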
Closing since there has been no activity for more than 3 weeks; please reopen if you still have questions, thanks!