
Relaunching a model after correcting invalid inference parameters fails with an error

Open qlanto224 opened this issue 9 months ago • 4 comments

System Info

OS: Ubuntu 24.04, Docker: 24.0.7, docker-compose: 2.27.0, CUDA Version: 12.4, Driver Version: 550.144.03

Running Xinference with Docker?

  • [x] docker
  • [ ] pip install
  • [ ] installation from source

Version info

Docker image version: v1.4.1

The command used to start Xinference

services:
  xinference:
    image: xprobe/xinference:v1.4.1
    container_name: xinference
    ports:
      - 9999:9999
    shm_size: '64g'
    volumes:
      - ./.xinference:/data/xinference
      - ./huggingface:/data/huggingface
      - ./modelscope:/data/modelscope
      - ./cache:/data/cache
      - ./fix_bug:/data/fix_bug
    environment:
      - TZ=Asia/Shanghai
      - XINFERENCE_MODEL_SRC=modelscope
      - XINFERENCE_HOME=/data
    restart: unless-stopped
    command: /bin/bash -c "pip uninstall llama-cpp-python -y && pip install /data/fix_bug/llama_cpp_python-0.2.90-cp310-cp310-linux_x86_64.whl && xinference-local --host 0.0.0.0 --port 9999" 
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

Reproduction / 复现过程

2025-04-12 21:51:26,149 xinference.core.worker 150 ERROR    [request 3a2bf8de-17a5-11f0-871f-0242ac160002] Leave launch_builtin_model, error: , elapsed time: 0 s
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/worker.py", line 866, in launch_builtin_model
    assert model_uid not in self._model_uid_to_model
AssertionError
2025-04-12 21:51:26,152 xinference.api.restful_api 1 ERROR    [address=0.0.0.0:64137, pid=150]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/xinference/api/restful_api.py", line 1002, in launch_model
    model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 688, in send
    result = await self._run_coro(message.message_id, coro)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 388, in _run_coro
    return await coro
  File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1191, in launch_builtin_model
    await _launch_model()
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1126, in _launch_model
    subpool_address = await _launch_one_model(
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1083, in _launch_one_model
    subpool_address = await worker_ref.launch_builtin_model(
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 688, in send
    result = await self._run_coro(message.message_id, coro)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 388, in _run_coro
    return await coro
  File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/worker.py", line 866, in launch_builtin_model
    assert model_uid not in self._model_uid_to_model

Expected behavior / 期待表现

After the error, the partially loaded model should be unloaded from the GPU so that the corrected launch request can go through. Otherwise, once an exception occurs the whole service becomes unusable, and the only way to recover is to restart the container.
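Until this is fixed, a possible workaround (a sketch, not verified against this deployment) is to explicitly terminate the stale replica over Xinference's RESTful API (`DELETE /v1/models/{model_uid}`) before retrying the launch. The host/port match the compose file above; the model uid is the one from the worker log:

```python
# Hypothetical workaround sketch: build the terminate endpoint for the stale
# model uid and issue a DELETE before relaunching. The endpoint path follows
# Xinference's RESTful API; whether it clears the broken registry entry in
# this failure mode is untested.
def terminate_url(base: str, model_uid: str) -> str:
    return f"{base}/v1/models/{model_uid}"

url = terminate_url("http://localhost:9999", "qwen2.5-instruct-32b-gptq-int8-0")
# import requests
# requests.delete(url)  # uncomment when the server is reachable
```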

qlanto224 avatar Apr 12 '25 13:04 qlanto224

For example, this error occurs first:

2025-04-12 21:57:39,467 xinference.api.restful_api 1 ERROR    [address=0.0.0.0:43423, pid=167] The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (17744). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/xinference/api/restful_api.py", line 1002, in launch_model
    model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 688, in send
    result = await self._run_coro(message.message_id, coro)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 388, in _run_coro
    return await coro
  File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1191, in launch_builtin_model
    await _launch_model()
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1126, in _launch_model
    subpool_address = await _launch_one_model(
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1100, in _launch_one_model
    await worker_ref.wait_for_load(_replica_model_uid)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 688, in send
    result = await self._run_coro(message.message_id, coro)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 388, in _run_coro
    return await coro
  File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/worker.py", line 979, in wait_for_load
    await model_ref.wait_for_load()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 688, in send
    result = await self._run_coro(message.message_id, coro)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 388, in _run_coro
    return await coro
  File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/model.py", line 497, in wait_for_load
    await asyncio.to_thread(self._model.wait_for_load)
  File "/usr/lib/python3.10/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/dist-packages/xinference/model/llm/vllm/core.py", line 473, in wait_for_load
    raise err.with_traceback(tb)
  File "/usr/local/lib/python3.10/dist-packages/xinference/model/llm/vllm/core.py", line 442, in _load
    self._engine = XinferenceAsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 644, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 594, in __init__
    self.engine = self._engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 267, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 276, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 434, in _initialize_kv_caches
    self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 122, in initialize_cache
    self.collective_rpc("initialize_cache",
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 316, in collective_rpc
    return self._run_workers(method, *args, **(kwargs or {}))
  File "/usr/local/lib/python3.10/dist-packages/xinference/model/llm/vllm/distributed_executor.py", line 234, in _run_workers
    self.driver_worker.execute_method(method, *args, **kwargs)  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/xinference/model/llm/vllm/distributed_executor.py", line 65, in execute_method
    return getattr(self._worker, method)(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 291, in initialize_cache
    raise_if_cache_size_invalid(num_gpu_blocks,
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 544, in raise_if_cache_size_invalid
    raise ValueError(
ValueError: [address=0.0.0.0:43423, pid=167] The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (17744). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
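The constraint vLLM checks here is simply that the requested context length fits the reported KV-cache capacity, which is why 17700 was chosen for the relaunch (a sanity check on the numbers from the ValueError, not Xinference code):

```python
def fits_kv_cache(max_model_len: int, kv_cache_tokens: int) -> bool:
    """True if the requested context length fits the KV-cache capacity vLLM reported."""
    return max_model_len <= kv_cache_tokens

# Numbers taken from the ValueError above.
print(fits_kv_cache(32768, 17744))  # the failing launch -> False
print(fits_kv_cache(17700, 17744))  # the corrected relaunch -> True
```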

After adjusting the settings, the model cannot be launched and reports:

2025-04-12 21:59:32,037 xinference.core.worker 150 INFO     [request 5bc8a234-17a6-11f0-ab3d-0242ac160002] Enter launch_builtin_model, args: <xinference.core.worker.WorkerActor object at 0x7be6d07cb6f0>, kwargs: model_uid=qwen2.5-instruct-32b-gptq-int8-0,model_name=qwen2.5-instruct,model_size_in_billions=32,model_format=gptq,quantization=Int8,model_engine=vLLM,model_type=LLM,n_gpu=auto,request_limits=None,peft_model_config=None,gpu_idx=[1, 2],download_hub=modelscope,model_path=None,xavier_config=None,gpu_memory_utilization=0.95,enable_prefix_caching=True,cpu_offload_gb=True,max_model_len=17700
2025-04-12 21:59:32,037 xinference.core.worker 150 INFO     You specify to launch the model: qwen2.5-instruct on GPU index: [1, 2] of the worker: 0.0.0.0:21706, xinference will automatically ignore the `n_gpu` option.
2025-04-12 21:59:32,037 xinference.core.worker 150 ERROR    [request 5bc8a234-17a6-11f0-ab3d-0242ac160002] Leave launch_builtin_model, error: , elapsed time: 0 s
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/worker.py", line 866, in launch_builtin_model
    assert model_uid not in self._model_uid_to_model
AssertionError
2025-04-12 21:59:32,041 xinference.api.restful_api 1 ERROR    [address=0.0.0.0:21706, pid=150]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/xinference/api/restful_api.py", line 1002, in launch_model
    model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 688, in send
    result = await self._run_coro(message.message_id, coro)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 388, in _run_coro
    return await coro
  File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1191, in launch_builtin_model
    await _launch_model()
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1126, in _launch_model
    subpool_address = await _launch_one_model(
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1083, in _launch_one_model
    subpool_address = await worker_ref.launch_builtin_model(
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 688, in send
    result = await self._run_coro(message.message_id, coro)
  File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 388, in _run_coro
    return await coro
  File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/xinference/core/worker.py", line 866, in launch_builtin_model
    assert model_uid not in self._model_uid_to_model
AssertionError: [address=0.0.0.0:21706, pid=150]
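The second failure suggests the worker keeps `model_uid` in `self._model_uid_to_model` even though the first launch failed, so the assertion rejects every retry. A minimal sketch of the requested behavior (hypothetical `Worker`/`launch` names, not Xinference's actual code): reserve the uid, and drop it from the registry when loading raises, so a retry with corrected parameters can pass the same assertion.

```python
# Sketch of the cleanup this issue asks for: a failed load must not leave
# the uid registered, otherwise `assert model_uid not in ...` fails forever.
class Worker:
    def __init__(self):
        self._model_uid_to_model = {}

    def launch(self, model_uid, loader):
        assert model_uid not in self._model_uid_to_model
        self._model_uid_to_model[model_uid] = None  # reserve the slot
        try:
            self._model_uid_to_model[model_uid] = loader()
        except Exception:
            # Clean up on failure so a corrected relaunch is possible.
            self._model_uid_to_model.pop(model_uid, None)
            raise
```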

qlanto224 avatar Apr 12 '25 14:04 qlanto224

Same problem here, deployed with Docker.

[screenshot]

fengzengfly avatar Apr 16 '25 06:04 fengzengfly

Same problem here, deployed with Docker.

[screenshot]

It went back to normal after I restarted the container.

fengzengfly avatar Apr 16 '25 06:04 fengzengfly

This issue is stale because it has been open for 7 days with no activity.

github-actions[bot] avatar Apr 23 '25 19:04 github-actions[bot]

This issue was closed because it has been inactive for 5 days since being marked as stale.

github-actions[bot] avatar Apr 29 '25 19:04 github-actions[bot]