
vLLM import error

Open jlin816 opened this issue 1 year ago • 5 comments

I'm getting the following import error:

sgl ➜ export CUDA_VISIBLE_DEVICES=4; python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
Traceback (most recent call last):
  File "/home/jessy/.miniconda3/envs/sgl/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jessy/.miniconda3/envs/sgl/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jessy/projects/sglang/python/sglang/launch_server.py", line 3, in <module>
    from sglang.srt.server import ServerArgs, launch_server
  File "/home/jessy/projects/sglang/python/sglang/srt/server.py", line 56, in <module>
    from sglang.srt.managers.router.manager import start_router_process
  File "/home/jessy/projects/sglang/python/sglang/srt/managers/router/manager.py", line 9, in <module>
    from sglang.srt.managers.router.model_rpc import ModelRpcClient
  File "/home/jessy/projects/sglang/python/sglang/srt/managers/router/model_rpc.py", line 24, in <module>
    from sglang.srt.managers.router.model_runner import ModelRunner
  File "/home/jessy/projects/sglang/python/sglang/srt/managers/router/model_runner.py", line 15, in <module>
    from vllm.model_executor.model_loader import _set_default_torch_dtype
ImportError: cannot import name '_set_default_torch_dtype' from 'vllm.model_executor.model_loader' (/home/jessy/.miniconda3/envs/sgl/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py)

It looks like _set_default_torch_dtype no longer exists in vllm: https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/model_loader/__init__.py

I've tried both pip install sglang[all] and installing from source. My versions are sglang==0.1.14 and vllm==0.4.1.
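
A quick way to confirm the mismatch (an illustrative check, not part of sglang) is to print the installed versions and attempt the exact import that the traceback fails on:

import importlib.metadata as md

print("sglang:", md.version("sglang"))
print("vllm:", md.version("vllm"))

try:
    # This is the import that sglang's model_runner.py performs.
    from vllm.model_executor.model_loader import _set_default_torch_dtype  # noqa: F401
    print("_set_default_torch_dtype is importable")
except ImportError:
    print("_set_default_torch_dtype is missing from this vllm build")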

jlin816 avatar Apr 24 '24 23:04 jlin816

As of now, you need to downgrade your vllm version to 0.3.3.
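
For example, in a pip-managed environment (this exact version combination is untested here):

pip install "vllm==0.3.3"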

m0g1cian avatar Apr 25 '24 03:04 m0g1cian

I have the same error, but as far as I can see, version 0.3.3 of vllm doesn't have model_loader in vllm.model_executor.

dmilcevski avatar Apr 25 '24 18:04 dmilcevski

I also have this error. It looks like that function was removed from vllm in this PR when they refactored model loading: https://github.com/vllm-project/vllm/pull/4097

It's defined here: https://github.com/vllm-project/vllm/blob/05434764cd99990035779cf9a4ed86623b528825/vllm/model_executor/model_loader.py#L22-L28.

It looks like some other functions used to load the models, such as hf_model_weights_iterator, were also removed in the same PR.
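
For reference, the helper linked above is only a few lines; per that source it is roughly the following context manager (reproduced here for convenience; re-defining it locally would be an untested stopgap, and the other removed loaders such as hf_model_weights_iterator would still be missing):

import contextlib

import torch


@contextlib.contextmanager
def _set_default_torch_dtype(dtype: torch.dtype):
    """Temporarily set the default torch dtype, restoring the previous one afterwards."""
    old_dtype = torch.get_default_dtype()
    torch.set_default_dtype(dtype)
    yield
    torch.set_default_dtype(old_dtype)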

epellis avatar Apr 26 '24 07:04 epellis

Attempting to downgrade to vllm 0.3.3 causes the installation of the most recent version of sglang to hang indefinitely.

david-vectorflow avatar May 02 '24 15:05 david-vectorflow

I too am seeing this just hang indefinitely if I downgrade. Are there any workarounds?

ddemillard avatar May 10 '24 04:05 ddemillard

> I too am seeing this just hang indefinitely if I downgrade. Are there any workarounds?

My setup is vllm==0.3.3, transformers==4.38, and sglang==0.1.14. My sglang script also hangs.

ChloeL19 avatar Jun 04 '24 16:06 ChloeL19

Please try the latest version: https://pypi.org/project/sglang/
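
For example, upgrading in place (the extras name is taken from the install command mentioned earlier in this thread):

pip install --upgrade "sglang[all]"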

zhyncs avatar Jul 18 '24 16:07 zhyncs