Chat template test fails when choosing DeepSeek-R1-Distill-Qwen as the model family
Environment: xinference 1.2.1, vllm 0.6.1.post2, vllm-flash-attn 2.6.1, transformers 4.48.3, torch 2.4.0
You can just choose the model family deepseek-r1-distill-qwen; you don't need to specify a chat template yourself.
I didn't change the chat template after choosing the model family deepseek-r1-distill-qwen.
I encountered the same issue while using Docker. Has it been resolved?