
Failed to build TensorRT-LLM backend for Triton server.

sdecoder opened this issue 7 months ago · 1 comment

Greetings, I have come across the following issue when trying to build the TensorRT-LLM backend for Triton server:

/home/nvidia/projects/triton-inference-server/tensorrtllm_backend/inflight_batcher_llm/../tensorrt_llm/cpp/include/tensorrt_llm/common/dataType.h:40:30: error: ‘kFP4’ is not a member of ‘nvinfer1::DataType’; did you mean ‘kFP8’?

I followed the instructions found here: https://github.com/triton-inference-server/tensorrtllm_backend/blob/main/docs/build.md

1. Command (run from /home/nvidia/projects/triton-inference-server/tensorrtllm_backend/inflight_batcher_llm): bash scripts/build.sh
2. Platform: Jetson AGX Orin
3. TensorRT version: TensorRT-10.3.0.26
4. CUDA version: 12.6

I do believe the TensorRT and CUDA versions are compatible. Is anyone willing to take a look at this and give me a hint? Thank you, everyone.

sdecoder, Mar 24 '25