
Error when running the deepseek-r1-qwen model with the llama engine

Open narutoPro opened this issue 10 months ago • 2 comments

llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen'
llama_load_model_from_file: failed to load model
2025-02-13 07:48:29,621 xinference.core.worker 29 ERROR Failed to load model deepseek-r1-distill-qwen-0
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/worker.py", line 908, in launch_builtin_model
    await model_ref.load()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 231, in send
    return self._process_result_message(result)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 667, in send
    result = await self._run_coro(message.message_id, coro)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 370, in _run_coro
    return await coro
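In llama.cpp, an "unknown pre-tokenizer type" error generally means the GGUF file was produced by a converter newer than the runtime that is trying to load it, so the runtime does not yet recognize that tokenizer name. A first diagnostic step is to check which llama.cpp binding is installed inside the container. The sketch below assumes the llama engine is backed by the llama-cpp-python package; that package name is an assumption, not something confirmed in this thread.

```python
# Minimal diagnostic sketch, assuming xinference's "llama" engine uses llama-cpp-python
# (an assumption; the engine may bundle llama.cpp differently).
import importlib.metadata

try:
    version = importlib.metadata.version("llama-cpp-python")
    # An older binding than the converter that produced the GGUF file is the usual
    # cause of "unknown pre-tokenizer type" failures.
    print(f"llama-cpp-python version: {version}")
except importlib.metadata.PackageNotFoundError:
    print("llama-cpp-python is not installed in this environment")
```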

narutoPro · Feb 13 '25 16:02

xinference was started via Docker; the version information is shown in the screenshot below.

[Screenshot: xinference version information]

narutoPro · Feb 13 '25 16:02

Same problem here on the latest version. Is there a fix yet?

sunisstar · Mar 24 '25 10:03