
Why can't we set the precision of all layers to fp16 or fp32?

Open sanbuphy opened this issue 1 year ago • 7 comments

Description

Hello, I'm trying to set the precision of specific layers to fp32, but after setting some layers I don't see any improvement (the final output is still NaN). To troubleshoot, I wanted to verify whether setting all layers to fp32 actually makes a difference. However, I encountered an error when attempting to do so. Could you please explain the reason behind this error? Thank you very much.

Here is my code:

config.set_flag(trt.BuilderFlag.FP16)
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

# Try to force every layer in the network to run in FP32.
for layer in network:
    layer.precision = trt.float32

[Screenshots of the build warnings and the resulting error were attached here.]

sanbuphy avatar Aug 01 '23 10:08 sanbuphy

As the warning says: some layers are forced to run in INT32, so you cannot set those layers to FP32.

zerollzeng avatar Aug 03 '23 03:08 zerollzeng
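A minimal sketch of how one might apply FP32 only to layers that are not constrained to INT32, assuming `network` and `config` are the same objects as in the snippet above (the output-dtype check is a heuristic and may not cover every constrained layer):

import tensorrt as trt

config.set_flag(trt.BuilderFlag.FP16)
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

for i in range(network.num_layers):
    layer = network.get_layer(i)
    # Skip layers whose outputs are INT32/BOOL (shape and index arithmetic);
    # TensorRT keeps these in their native type, so forcing FP32 triggers the warning.
    output_dtypes = [layer.get_output(j).dtype for j in range(layer.num_outputs)]
    if any(t in (trt.DataType.INT32, trt.DataType.BOOL) for t in output_dtypes):
        continue
    layer.precision = trt.float32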

As the warning says: some layers are forced to run in INT32, so you cannot set those layers to FP32.

How can I skip those layers?

sanbuphy avatar Aug 03 '23 05:08 sanbuphy

I think you might do:

if layer.precision == trt.float16:
    layer.precision = trt.float32

nvluxiaoz avatar Aug 03 '23 21:08 nvluxiaoz

Hello @nvluxiaoz, can you please explain how to force the precision to trt.float32 and run this from the trtexec command line? If not, can you provide a code snippet? Thank you.

ninono12345 avatar Jan 18 '24 10:01 ninono12345
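For the trtexec part of the question, recent trtexec builds expose the same precision constraints as command-line flags; a sketch (the model path and layer names are placeholders, and flag availability depends on the TensorRT version):

trtexec --onnx=model.onnx \
        --fp16 \
        --precisionConstraints=obey \
        --layerPrecisions="my_layer_1:fp32,my_layer_2:fp32"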

To avoid the warning, just don't set those layers' precision to FP32, or simply ignore it; it is safe to do so.

zerollzeng avatar Jan 19 '24 09:01 zerollzeng

To avoid the warning, just don't set those layers' precision to FP32, or simply ignore it; it is safe to do so.

I want to skip those layers, but I don't know how to identify them.

sanbuphy avatar Jan 19 '24 10:01 sanbuphy

Filter by layer name or layer type?

zerollzeng avatar Jan 24 '24 14:01 zerollzeng

Closing since there has been no activity for more than 3 weeks, thanks all!

ttyio avatar Apr 16 '24 18:04 ttyio

Why is this being closed? This is important.

focusunsink avatar May 28 '24 04:05 focusunsink

for i in range(network.num_layers):
    layer = network.get_layer(i)
    if 'norm' in layer.name:
        print("this is a layernorm", layer.type, layer.name, layer.precision)
        # layer.precision = trt.DataType.FLOAT
    elif layer.type == trt.LayerType.MATRIX_MULTIPLY:
        print("this is a matmul", layer.type, layer.name, layer.precision)
        # layer.precision = trt.float16
    else:
        pass

focusunsink avatar May 29 '24 09:05 focusunsink
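A possible extension of the loop above (an assumption, not something confirmed in this thread): when OBEY_PRECISION_CONSTRAINTS is set, pinning the layer's output tensor type in addition to its compute precision can help the constraint take effect:

import tensorrt as trt

# Assumes `network` is an already-populated trt.INetworkDefinition.
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if layer.type == trt.LayerType.MATRIX_MULTIPLY:
        layer.precision = trt.float32              # compute precision constraint
        for j in range(layer.num_outputs):
            layer.set_output_type(j, trt.float32)  # output type constraint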