vLLM distributed inference error

Open pyaaaa opened this issue 8 months ago • 5 comments

System Info

2025-04-15 10:15:22,015 xinference.api.restful_api 48577 ERROR [address=10.8.9.17:34697, pid=141952]
Traceback (most recent call last):
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xinference/api/restful_api.py", line 1002, in launch_model
    model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/backends/context.py", line 231, in send
    return self._process_result_message(result)
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/backends/pool.py", line 667, in send
    result = await self._run_coro(message.message_id, coro)
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/backends/pool.py", line 370, in _run_coro
    return await coro
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/api.py", line 384, in on_receive
    return await super().on_receive(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in on_receive
    raise ex
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
    async with self._lock:
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.on_receive
    result = await result
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xinference/core/supervisor.py", line 939, in launch_builtin_model
    return await self._launch_builtin_sharded_model(
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xinference/core/supervisor.py", line 1340, in _launch_builtin_sharded_model
    await _launch_model()
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xinference/core/supervisor.py", line 1269, in _launch_model
    info = await worker_ref.launch_builtin_model(
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/backends/context.py", line 231, in send
    return self._process_result_message(result)
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/backends/pool.py", line 667, in send
    result = await self._run_coro(message.message_id, coro)
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/backends/pool.py", line 370, in _run_coro
    return await coro
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xoscar/api.py", line 384, in on_receive
    return await super().on_receive(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in on_receive
    raise ex
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
    async with self._lock:
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.on_receive
    result = await result
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/root/anaconda3/envs/xinference/lib/python3.10/site-packages/xinference/core/worker.py", line 866, in launch_builtin_model
    assert model_uid not in self._model_uid_to_model
AssertionError: [address=10.8.9.17:34697, pid=141952]
The PID in the error was traced to the xinference-worker process.

Running Xinference with Docker?

  • [ ] docker
  • [ ] pip install
  • [ ] installation from source

Version info

The error occurs when deploying DeepSeek-R1 for distributed inference across two H100 machines with the vLLM engine; the PID in the error was traced to the xinference-worker process.

The command used to start Xinference

The error occurs when deploying DeepSeek-R1 for distributed inference across two H100 machines with the vLLM engine; the PID in the error was traced to the xinference-worker process.
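
Since the actual launch commands were not included in the report, here is a sketch of a typical two-node Xinference deployment for reference, assuming the default supervisor port 9997 (the second IP is a placeholder, not taken from this issue):

```shell
# On the supervisor node
xinference-supervisor -H 10.8.9.17

# On each worker node, pointing at the supervisor's endpoint
xinference-worker -e "http://10.8.9.17:9997" -H 10.8.9.18
```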

Reproduction

The error occurs when deploying DeepSeek-R1 for distributed inference across two H100 machines with the vLLM engine; the PID in the error was traced to the xinference-worker process.

Expected behavior

Hoping this issue can be resolved.

pyaaaa avatar Apr 15 '25 10:04 pyaaaa

Try restarting the service. Could a previous model load have failed?
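
The AssertionError at worker.py line 866 (assert model_uid not in self._model_uid_to_model) fires when the worker already has a model registered under that UID, which typically means an earlier launch failed partway and left a stale entry behind. A minimal cleanup sketch using the xinference CLI, assuming the default endpoint and a hypothetical model UID:

```shell
# List models the cluster still considers running; a stale entry from a
# failed launch keeps the model_uid registered on the worker.
xinference list -e http://10.8.9.17:9997

# Terminate the stale model before relaunching (the UID below is hypothetical).
xinference terminate -e http://10.8.9.17:9997 --model-uid deepseek-r1

# If that does not clear it, restart the worker and supervisor processes.
```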

qinxuye avatar Apr 15 '25 13:04 qinxuye

I set the IB cards with export NCCL_SOCKET_IFNAME=mlx5_0,mlx5_1,mlx5_2,mlx5_3,mlx5_4,mlx5_5,mlx5_6,mlx5_7, but NCCL fails with:

Traceback (most recent call last):
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/core/worker.py", line 979, in wait_for_load
    await model_ref.wait_for_load()
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xoscar/backends/context.py", line 231, in send
    return self._process_result_message(result)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xoscar/backends/pool.py", line 667, in send
    result = await self._run_coro(message.message_id, coro)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xoscar/backends/pool.py", line 370, in _run_coro
    return await coro
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xoscar/api.py", line 384, in on_receive
    return await super().on_receive(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in on_receive
    raise ex
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
    async with self._lock:
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.on_receive
    result = await result
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/core/model.py", line 497, in wait_for_load
    await asyncio.to_thread(self._model.wait_for_load)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/model/llm/vllm/core.py", line 473, in wait_for_load
    raise err.with_traceback(tb)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/model/llm/vllm/core.py", line 442, in _load
    self._engine = XinferenceAsyncLLMEngine.from_engine_args(
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 680, in from_engine_args
    return async_engine_cls.from_vllm_config(
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 653, in from_vllm_config
    return cls(
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 608, in __init__
    self.engine = self._engine_class(*args, **kwargs)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 267, in __init__
    super().__init__(*args, **kwargs)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 282, in __init__
    self.model_executor = executor_class(vllm_config=vllm_config, )
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/model/llm/vllm/distributed_executor.py", line 111, in __init__
    super().__init__(vllm_config, *args, **kwargs)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 286, in __init__
    super().__init__(*args, **kwargs)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
    self._init_executor()
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/model/llm/vllm/distributed_executor.py", line 186, in _init_executor
    self._run_workers("init_device")
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/model/llm/vllm/distributed_executor.py", line 234, in _run_workers
    self.driver_worker.execute_method(method, *args, **kwargs)  # type: ignore
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/xinference/model/llm/vllm/distributed_executor.py", line 65, in execute_method
    return getattr(self._worker, method)(*args, **kwargs)
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 604, in init_device
    self.worker.init_device()  # type: ignore
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/worker/worker.py", line 166, in init_device
    init_worker_distributed_environment(self.vllm_config, self.rank,
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/worker/worker.py", line 507, in init_worker_distributed_environment
    ensure_model_parallel_initialized(parallel_config.tensor_parallel_size,
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 997, in ensure_model_parallel_initialized
    initialize_model_parallel(tensor_model_parallel_size,
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 930, in initialize_model_parallel
    _TP = init_model_parallel_group(group_ranks,
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 735, in init_model_parallel_group
    return GroupCoordinator(
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 211, in __init__
    self.device_communicator = device_comm_cls(
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/distributed/device_communicators/cuda_communicator.py", line 39, in __init__
    self.pynccl_comm = PyNcclCommunicator(
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl.py", line 73, in __init__
    self.unique_id = self.nccl.ncclGetUniqueId()
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 270, in ncclGetUniqueId
    self.NCCL_CHECK(self._funcs["ncclGetUniqueId"](
  File "/root/anaconda3/envs/xinference-vllm/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 256, in NCCL_CHECK
    raise RuntimeError(f"NCCL error: {error_str}")
RuntimeError: [address=10.8.9.17:38805, pid=1580704] NCCL error: internal error - please report this issue to the NCCL developers

pyaaaa avatar Apr 18 '25 03:04 pyaaaa

I am deploying on two H100 machines. Is there a solution or a deployment guide? Is there anything else I need to configure?

pyaaaa avatar Apr 18 '25 03:04 pyaaaa

The NCCL error is not an Xinference problem. You need to resolve that first.
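
One common way to validate NCCL across the two machines independently of Xinference and vLLM is NVIDIA's nccl-tests; a sketch, assuming MPI is installed, passwordless SSH between the nodes, and 8 GPUs per host (the hostnames and MPI path are placeholders):

```shell
# Build the standard NCCL benchmark suite
git clone https://github.com/NVIDIA/nccl-tests
cd nccl-tests && make MPI=1 MPI_HOME=/usr/lib/x86_64-linux-gnu/openmpi

# 16 ranks across two hosts, one GPU per rank
mpirun -np 16 -H host1:8,host2:8 \
    -x NCCL_DEBUG=INFO -x NCCL_IB_HCA -x NCCL_SOCKET_IFNAME \
    ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 1
```

If this reproduces the same internal error, the problem is in the NCCL or network configuration rather than in Xinference.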

qinxuye avatar Apr 18 '25 10:04 qinxuye

This issue is stale because it has been open for 7 days with no activity.

github-actions[bot] avatar Apr 25 '25 19:04 github-actions[bot]

This issue was closed because it has been inactive for 5 days since being marked as stale.

github-actions[bot] avatar Apr 30 '25 19:04 github-actions[bot]