tensorrtllm_backend
Qwen2-14B inference garbled
System Info
When running inference on the Qwen2 engine through the run.py script, the output is normal. However, when serving the same engine through Triton, some characters in the output are garbled, and the output is incomplete compared to the results from the script. What could be the cause of this issue?
Perhaps the config.pbtxt is causing the problem.
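If the config.pbtxt is the suspect, the tokenizer settings in the preprocessing/postprocessing models are worth checking first. The fragment below is a hypothetical excerpt of what the postprocessing model's config.pbtxt might contain; the `tokenizer_dir` path is a placeholder, and a mismatch here (pointing at the wrong tokenizer, or a tokenizer that differs from the one run.py uses) is one plausible source of garbled output.

```
# Hypothetical excerpt from the postprocessing model's config.pbtxt.
# The string values are placeholders, not the reporter's actual settings.
parameters {
  key: "tokenizer_dir"
  value: { string_value: "/path/to/Qwen2-14B" }
}
parameters {
  key: "skip_special_tokens"
  value: { string_value: "True" }
}
```

Comparing these values against the `--tokenizer_dir` passed to run.py would confirm or rule out a tokenizer mismatch.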
Who can help?
No response
Information
- [ ] The official example scripts
- [ ] My own modified scripts
Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
Reproduction
- Start the Triton server
Expected behavior
Get the same results as with the run.py script.
Actual behavior
As described above: inference through the run.py script produces normal output, but with Triton some characters are garbled and the output is incomplete relative to the script's results.
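One common cause of this symptom (not confirmed for this report, just a hypothesis) is decoding tokens one at a time during streaming: Qwen's tokenizer can split a single multi-byte UTF-8 character across several tokens, so a postprocessor that decodes each token's bytes independently emits replacement characters where run.py, which decodes the full sequence at once, does not. A minimal sketch of the effect:

```python
# Simulate a detokenizer that decodes per-token byte chunks independently,
# splitting a 3-byte UTF-8 character across two chunks.
text = "你好"
data = text.encode("utf-8")          # 6 bytes: 3 per character
chunk_a, chunk_b = data[:4], data[4:]  # split mid-character

# Per-chunk decoding garbles the second character:
piecewise = (chunk_a.decode("utf-8", errors="replace")
             + chunk_b.decode("utf-8", errors="replace"))

# Decoding the whole byte stream at once is correct:
joint = data.decode("utf-8")

print(piecewise)  # first character intact, rest replaced by U+FFFD
print(joint)      # 你好
```

If this is the cause, the fix is to buffer bytes across tokens (e.g. an incremental decoder) rather than decoding each token in isolation.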
additional notes
None.