TensorRT
🐛 [Bug] Expected input tensors to have type Half, found type float
Bug Description
TensorRT throws an error about fp32 input tensors even though I am passing fp16 tensors as input.
I attached the file IFRNet.py
adapted from https://github.com/ltkong218/IFRNet/blob/main/models/IFRNet.py
To Reproduce
Steps to reproduce the behavior:
- Compile model with fp16 inputs and fp16 dtype
- Infer model with fp16 tensors
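The two steps above can be sketched roughly as follows. This is a hypothetical minimal repro, not the attached IFRNet.py: the module, shapes, and input names are placeholders, and the eager fallback is only there so the sketch runs without a GPU or torch_tensorrt installed.

```python
import torch

class TinyNet(torch.nn.Module):
    def forward(self, x, embt):
        return x + embt  # stand-in for the real IFRNet forward

model = TinyNet().eval()
x = torch.randn(1, 3, 64, 64, dtype=torch.half)
embt = torch.randn(1, 1, 1, 1, dtype=torch.half)

try:
    import torch_tensorrt
    if not torch.cuda.is_available():
        raise RuntimeError("no GPU available")
    # Step 1: compile with fp16 inputs and fp16 precision
    trt_model = torch_tensorrt.compile(
        model.half().cuda(),
        inputs=[
            torch_tensorrt.Input(x.shape, dtype=torch.half),
            torch_tensorrt.Input(embt.shape, dtype=torch.half),
        ],
        enabled_precisions={torch.half},  # build an fp16 engine
    )
    # Step 2: infer with fp16 tensors; on 1.4.0 this raises
    # "Expected input tensors to have type Half, found type float"
    out = trt_model(x.cuda(), embt.cuda())
except (ImportError, RuntimeError):
    out = model(x, embt)  # eager fallback when torch_tensorrt/CUDA is unavailable

print(out.dtype)
```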
Expected behavior
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- Torch-TensorRT Version (e.g. 1.0.0): 1.4.0
- PyTorch Version (e.g. 1.0): 3.
- CPU Architecture: x86_64
- OS (e.g., Linux): Arch Linux
- How you installed PyTorch (conda, pip, libtorch, source): Arch Linux AUR
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.11.4
- CUDA version: 12.2
- GPU models and configuration: RTX 2080 SUPER
- Any other relevant information:
Additional context
WARNING: [Torch-TensorRT] - For input embt.1, found user specified input dtype as Half. The compiler is going to use the user setting Half
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Trying to record the value 162 with the ITensor (Unnamed Layer* 79) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 185 with the ITensor (Unnamed Layer* 101) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Input 0 of engine __torch___wrappers_ifrnet_models_IFRNet_Model_trt_engine_0x5604f02a32e0 was found to be on cpu but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (https://github.com/pytorch/TensorRT/issues) if this warning persists.
WARNING: [Torch-TensorRT] - Input 1 of engine __torch___wrappers_ifrnet_models_IFRNet_Model_trt_engine_0x5604f02a32e0 was found to be on cpu but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (https://github.com/pytorch/TensorRT/issues) if this warning persists.
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: [Error thrown at /usr/src/debug/python-pytorch-tensorrt/TensorRT/core/runtime/execute_engine.cpp:136] Expected inputs[i].dtype() == expected_type to be true but got false
Expected input tensors to have type Half, found type float
I don't see the torch-tensorrt code in the link you shared.
@bowang007 Keep an eye on this, might be related to some of your PRs
I'm also having this issue
I also noticed a simple sum between 2 fp16 tensors implicitly cast them to a fp32 tensor.
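For what it's worth, in plain eager-mode PyTorch the sum of two fp16 tensors stays fp16, so any fp32 result observed here would be introduced by the conversion path, not by PyTorch's type promotion. A quick check (the shapes are arbitrary):

```python
import torch

# Two fp16 tensors on CPU; eager-mode type promotion keeps the result fp16
a = torch.ones(2, 2, dtype=torch.half)
b = torch.ones(2, 2, dtype=torch.half)
c = a + b

print(c.dtype)  # torch.float16
```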
I'm also having this issue, how to solve it?
I am encountering the same issue.
This PR can help resolve the above issue. Thanks!
@bowang007 Is there any update on your commit? It seems to fail a few checks. Eagerly looking forward to your update.
also having this issue!
This PR can help resolve the above issue. Thanks!
There is a new error with this PR. Is there any update?
Hi @johnzlli, can you try using the dynamo path instead? We now support Dynamo, since the TorchScript path is being deprecated. Thanks!
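A rough sketch of the suggested dynamo path follows; the model, shapes, and precision settings are assumptions for illustration, and the CPU branch is only a fallback so the sketch runs without a GPU or torch_tensorrt.

```python
import torch

# Placeholder model; substitute your own module here
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3)).eval()
x = torch.randn(1, 3, 32, 32, dtype=torch.half)

try:
    import torch_tensorrt
    if not torch.cuda.is_available():
        raise RuntimeError("no GPU available")
    trt_model = torch_tensorrt.compile(
        model.half().cuda(),
        ir="dynamo",                   # use the Dynamo frontend instead of TorchScript
        inputs=[x.cuda()],
        enabled_precisions={torch.half},
    )
    out = trt_model(x.cuda())
except (ImportError, RuntimeError):
    out = model.float()(x.float())     # CPU fallback for the sketch

print(out.shape)
```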
Thanks for your reply! Dynamo is great work, but there is no way to export the compiled model, so we still have to use TorchScript.