I also encountered the same problem. I set the warned weight layers to FP32 manually and the warning disappeared. However, the accuracy of the result is still degraded.
> @YouSenRong Setting those layers back to FP32 just solves the subnormal-value issue, but FP16 indeed has less accuracy than FP32 due to fewer mantissa bits. If you set...
> @YouSenRong Can you give the command for how to set the layers to FP32?

trtexec supports the command-line flags `--layerPrecisions` and `--layerOutputTypes` ([A.2.1.4. Commonly Used Command-line Flags](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec-flags)) to set layer precision. You...
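As a sketch, a trtexec invocation using those flags might look like the following; `model.onnx` and the layer names `conv_1`/`norm_2` are placeholders you would replace with the layers named in the warning:

```shell
# Build an FP16 engine but pin the listed (placeholder) layers to FP32.
# --precisionConstraints=obey tells TensorRT to honor the per-layer settings
# instead of treating them as hints.
trtexec --onnx=model.onnx --fp16 \
        --precisionConstraints=obey \
        --layerPrecisions=conv_1:fp32,norm_2:fp32 \
        --layerOutputTypes=conv_1:fp32,norm_2:fp32
```

Note that `--layerPrecisions` controls the compute precision of a layer, while `--layerOutputTypes` controls the type of its output tensor; you typically need both set to fp32 to fully keep a layer out of FP16.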