
[Usage]: Experiencing weird import bugs and errors after installing with pip install -e .

Open KevinCL16 opened this issue 10 months ago • 3 comments

Your current environment

Traceback (most recent call last):
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 721, in <module>
    main()
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 700, in main
    output = get_pretty_env_info()
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 695, in get_pretty_env_info
    return pretty_str(get_env_info())
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 532, in get_env_info
    vllm_version = get_vllm_version()
  File "/home/yangzhiyu/workspace/open-long-agent/collect_env.py", line 264, in get_vllm_version
    return vllm.__version__
AttributeError: module 'vllm' has no attribute '__version__'

How would you like to use vllm

Like in the previous issue #594, I tried to install from the repo using pip install -e . and had trouble importing LLM.


Traceback (most recent call last):
File "", line 1, in
ImportError: cannot import name 'LLM' from 'vllm' (unknown location)

I got around this issue by using:

from vllm.entrypoints.llm import LLM and from vllm.sampling_params import SamplingParams
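For reference, a minimal sketch of that workaround as a full script; the model name and prompt are examples only, not from the original report, and these deep import paths are internal to vllm and may move between versions:

# Workaround imports that bypass "from vllm import LLM".
from vllm.entrypoints.llm import LLM
from vllm.sampling_params import SamplingParams

# Example model and prompt (assumptions, not from the original post).
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)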

However, I ran into another error:

    self.llm_engine = LLMEngine.from_engine_args(
  File "/home/yangzhiyu/workspace/open-long-agent/vllm/vllm/engine/llm_engine.py", line 291, in from_engine_args
    engine = cls(
  File "/home/yangzhiyu/workspace/open-long-agent/vllm/vllm/engine/llm_engine.py", line 110, in __init__
    vllm.__version__,
AttributeError: module 'vllm' has no attribute '__version__'

I wonder if installing with pip install -e . is broken?

KevinCL16 avatar May 02 '24 11:05 KevinCL16

I ran into the same error. I built a container based on the nvcr.io/nvidia/pytorch:24.04-py3 Docker image and installed xformers from source to keep the torch version unchanged (otherwise it caused a torch version conflict for me). After building vllm from source with the "pip install -e ." command, I tried

python3 -m vllm.entrypoints.api_server ....

Then the error below occurs.

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/vllm/vllm/entrypoints/openai/api_server.py", line 25, in <module>
    from vllm.entrypoints.openai.serving_chat import OpenAIServingChat
  File "/workspace/vllm/vllm/entrypoints/openai/serving_chat.py", line 19, in <module>
    from vllm.model_executor.guided_decoding import (
  File "/workspace/vllm/vllm/model_executor/guided_decoding/__init__.py", line 5, in <module>
    from vllm.model_executor.guided_decoding.lm_format_enforcer_decoding import (
  File "/workspace/vllm/vllm/model_executor/guided_decoding/lm_format_enforcer_decoding.py", line 8, in <module>
    from lmformatenforcer.integrations.vllm import (
  File "/usr/local/lib/python3.10/dist-packages/lmformatenforcer/integrations/vllm.py", line 34, in <module>
    def build_vllm_token_enforcer_tokenizer_data(tokenizer: Union[vllm.LLM, PreTrainedTokenizerBase]) -> TokenEnforcerTokenizerData:
AttributeError: module 'vllm' has no attribute 'LLM'

Deok-min avatar May 02 '24 13:05 Deok-min

I changed the name of the repository's root directory, and that solved the problem.

Deok-min avatar May 05 '24 12:05 Deok-min

I have encountered the same problem as you @KevinCL16. It might be because you placed the Python file that runs the model (say run.py) in the same directory as the cloned vllm repository folder. Even though you have run pip install -e . to install the vllm package, run.py will still try to import from the local vllm/ directory first (whereas the actual modules live in vllm/vllm/), so the import fails. Try moving run.py to a different location and then running from vllm import LLM, SamplingParams; that may solve the problem.
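A quick diagnostic sketch (my suggestion, not part of the original comment): print where Python is actually loading vllm from. With the shadowing described above, vllm usually resolves to a namespace package with no __version__ or LLM attribute.

# Confirm which vllm is being imported.
import vllm

# A properly installed package points at .../vllm/__init__.py; in the
# shadowing case __file__ is typically None and __path__ points at the
# bare repo folder in the current directory.
print("file:", getattr(vllm, "__file__", None))
print("path:", list(getattr(vllm, "__path__", [])))
print("version:", getattr(vllm, "__version__", "missing -> wrong vllm on sys.path"))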

GARRYHU avatar May 06 '24 10:05 GARRYHU

(Quoting Deok-min's earlier comment and traceback above, where python3 -m vllm.entrypoints.api_server fails with AttributeError: module 'vllm' has no attribute 'LLM'.)

I encountered the same problem.

chg0901 avatar Jun 18 '24 03:06 chg0901

Please make sure the vllm folder is not in the directory from which you run the command. That worked for me.
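A small sanity check along the same lines (a sketch, not from the original comment): before launching, confirm the working directory does not contain a local vllm folder that would be found ahead of the installed package.

import os

# Run this from the directory where you plan to start vllm.
cwd = os.getcwd()
print("current directory:", cwd)
if os.path.isdir(os.path.join(cwd, "vllm")):
    print("A local 'vllm' folder is here and will shadow the installed package.")
else:
    print("No local 'vllm' folder; 'import vllm' should resolve to the installed package.")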

trislee02 avatar Jul 03 '24 14:07 trislee02