
🐛 [Bug] Expected input tensors to have type Half, found type float

Open thesword53 opened this issue 1 year ago • 12 comments

Bug Description

TensorRT throws an error about fp32 input tensors even though I am passing fp16 tensors as input.

I have attached the file IFRNet.py, adapted from https://github.com/ltkong218/IFRNet/blob/main/models/IFRNet.py

To Reproduce

Steps to reproduce the behavior:

  1. Compile model with fp16 inputs and fp16 dtype
  2. Infer model with fp16 tensors
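
A minimal sketch of the two steps above (the model class and input shapes are placeholders; the real module is in the attached IFRNet.py, and the TorchScript front end of Torch-TensorRT 1.4 is assumed):

import torch
import torch_tensorrt

model = Model().eval().half().cuda()  # placeholder for the attached IFRNet model

# Step 1: compile with fp16 inputs and fp16 precision enabled
trt_model = torch_tensorrt.compile(
    model,
    inputs=[
        torch_tensorrt.Input((1, 3, 720, 1280), dtype=torch.half),  # img0 (illustrative shape)
        torch_tensorrt.Input((1, 3, 720, 1280), dtype=torch.half),  # img1 (illustrative shape)
        torch_tensorrt.Input((1, 1, 1, 1), dtype=torch.half),       # embt (illustrative shape)
    ],
    enabled_precisions={torch.half},
)

# Step 2: run inference with fp16 CUDA tensors; this is where the dtype error is raised
img0 = torch.rand(1, 3, 720, 1280, device="cuda").half()
img1 = torch.rand(1, 3, 720, 1280, device="cuda").half()
embt = torch.rand(1, 1, 1, 1, device="cuda").half()
out = trt_model(img0, img1, embt)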

Expected behavior

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0): 1.4.0
  • PyTorch Version (e.g. 1.0): 3.
  • CPU Architecture: x86_64
  • OS (e.g., Linux): Arch Linux
  • How you installed PyTorch (conda, pip, libtorch, source): Arch Linux AUR
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version: 3.11.4
  • CUDA version: 12.2
  • GPU models and configuration: RTX 2080 SUPER
  • Any other relevant information:

Additional context

WARNING: [Torch-TensorRT] - For input embt.1, found user specified input dtype as Half. The compiler is going to use the user setting Half
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Trying to record the value 162 with the ITensor (Unnamed Layer* 79) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 185 with the ITensor (Unnamed Layer* 101) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Input 0 of engine __torch___wrappers_ifrnet_models_IFRNet_Model_trt_engine_0x5604f02a32e0 was found to be on cpu but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (https://github.com/pytorch/TensorRT/issues) if this warning persists.
WARNING: [Torch-TensorRT] - Input 1 of engine __torch___wrappers_ifrnet_models_IFRNet_Model_trt_engine_0x5604f02a32e0 was found to be on cpu but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (https://github.com/pytorch/TensorRT/issues) if this warning persists.
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: [Error thrown at /usr/src/debug/python-pytorch-tensorrt/TensorRT/core/runtime/execute_engine.cpp:136] Expected inputs[i].dtype() == expected_type to be true but got false
Expected input tensors to have type Half, found type float

IFRNet.py.gz

thesword53 avatar Jul 13 '23 22:07 thesword53

I don't see the Torch-TensorRT code in the link you shared.

@bowang007 Keep an eye on this; it might be related to some of your PRs.

narendasan avatar Jul 17 '23 16:07 narendasan

I'm also having this issue

leizaf avatar Jul 25 '23 18:07 leizaf

I also noticed that a simple sum of two fp16 tensors is implicitly cast to an fp32 tensor.
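
A minimal sketch of the pattern I mean (module and shapes are illustrative; compiled through the same TorchScript path):

import torch
import torch_tensorrt

class Add(torch.nn.Module):
    def forward(self, a, b):
        # both operands are fp16, yet the compiled engine returns fp32 here
        return a + b

trt_add = torch_tensorrt.compile(
    Add().eval().half().cuda(),
    inputs=[
        torch_tensorrt.Input((1, 16), dtype=torch.half),
        torch_tensorrt.Input((1, 16), dtype=torch.half),
    ],
    enabled_precisions={torch.half},
)

a = torch.rand(1, 16, device="cuda").half()
b = torch.rand(1, 16, device="cuda").half()
print(trt_add(a, b).dtype)  # observed: torch.float32 instead of torch.float16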

thesword53 avatar Jul 25 '23 18:07 thesword53

I'm also having this issue. How do I solve it?

JXQI avatar Sep 23 '23 03:09 JXQI

I am encountering the same issue.

janblumenkamp avatar Nov 24 '23 10:11 janblumenkamp

This PR can help resolve the above issue. Thanks!

bowang007 avatar Nov 27 '23 20:11 bowang007

This PR can help resolve the above issue. Thanks!

@bowang007 Is there any update on your commit? It seems to fail a few checks. Eagerly looking forward to your update.

Eliza-and-black avatar Nov 30 '23 03:11 Eliza-and-black

also having this issue!

johnzlli avatar Feb 29 '24 06:02 johnzlli

This PR can help resolve the above issue. Thanks!

(screenshot attached) There is a new error with this PR. Is there any update?

johnzlli avatar Mar 20 '24 06:03 johnzlli

Hi @johnzlli, can you try using the Dynamo path instead? We now support Dynamo, since the TorchScript path is being deprecated. Thanks!
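
For reference, a rough sketch of the Dynamo path (assuming a Torch-TensorRT 2.x build; the model and shapes are placeholders):

import torch
import torch_tensorrt

model = Model().eval().half().cuda()  # placeholder for your module

trt_model = torch_tensorrt.compile(
    model,
    ir="dynamo",  # select the Dynamo front end instead of TorchScript
    inputs=[torch_tensorrt.Input((1, 3, 720, 1280), dtype=torch.half)],
    enabled_precisions={torch.half},
)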

bowang007 avatar Mar 20 '24 18:03 bowang007

Hi @johnzlli, can you try using the Dynamo path instead? We now support Dynamo, since the TorchScript path is being deprecated. Thanks!

Thanks for your reply! Dynamo is great work, but there is no way to export the compiled model, so we still have to use TorchScript.

johnzlli avatar Mar 21 '24 07:03 johnzlli