
Cannot run vllm with Docker

Open raj-khare opened this issue 2 years ago • 3 comments

I'm trying to run this project with the following Dockerfile:

FROM nvcr.io/nvidia/pytorch:22.12-py3
RUN pip uninstall torch -y

WORKDIR /workspace

COPY /inference/vllm /workspace/inference/vllm
WORKDIR /workspace/inference/vllm
RUN pip install -e .

ENV PYTHONPATH=/workspace/inference/vllm

EXPOSE 8000
CMD ["python", "-m", "vllm.entrypoints.openai.api_server", "--host", "0.0.0.0"]

My directory structure:

(api) srikanth@instance-1:~/api/inference$ ls
Dockerfile  main.py  vllm
(api) srikanth@instance-1:~/api/inference$ ls vllm/
CONTRIBUTING.md  MANIFEST.in  benchmarks  docs      mypy.ini        requirements-dev.txt  setup.py  vllm
LICENSE          README.md    csrc        examples  pyproject.toml  requirements.txt      tests

However, when I run this container, I get:

/usr/bin/python: Error while finding module specification for 'vllm.entrypoints.openai.api_server' (ImportError: cannot import name 'activation_ops' from partially initialized module 'vllm' (most likely due to a circular import) (/workspace/inference/vllm/vllm/__init__.py))
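This error is usually not a true circular import: vllm's `__init__.py` does `from vllm import activation_ops`, where `activation_ops` is a compiled CUDA extension. If the extension was never built (e.g. the build step in `pip install -e .` failed silently, or Python is picking up the source tree instead of the built package), the import fails mid-initialization and Python reports it as a circular import. A minimal, self-contained reproduction of the same class of error, using a throwaway package name (`mypkg` is an assumption, not vllm's code):

```python
import importlib
import os
import sys
import tempfile

# Build a tiny package whose __init__.py imports a submodule that
# does not exist -- analogous to vllm importing a compiled extension
# that was never built.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "mypkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from mypkg import missing_ops\n")

sys.path.insert(0, tmp)
importlib.invalidate_caches()
try:
    importlib.import_module("mypkg")
    failed = False
except ImportError:
    # Python 3.8+ phrases this as "cannot import name 'missing_ops'
    # from partially initialized module 'mypkg' (most likely due to
    # a circular import)" -- the same wording as the vllm error.
    failed = True

print(failed)  # an ImportError is raised, so this prints True
```

The practical takeaway: the fix is to make sure the compiled extensions actually get built, not to untangle an import cycle.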

Any help is appreciated :)

raj-khare avatar Jun 28 '23 21:06 raj-khare

You may need to install https://github.com/vllm-project/vllm/blob/main/requirements-dev.txt for local development, I guess -- those packages are not installed when calling setup.py.

yhyu13 avatar Jun 29 '23 13:06 yhyu13

I tried the Docker image 'nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04' and it works fine. Alternatively, you can run `pip install vllm` before running `pip install -e .`.
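Combining both suggestions, a revised Dockerfile might look like the sketch below. This is untested and the apt package list is an assumption; the key points are using a CUDA "devel" base image (which ships the headers and nvcc that the extension build needs) and installing the published wheel before the editable install:

```dockerfile
# Sketch only -- package names and install order are assumptions.
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04

RUN apt-get update && apt-get install -y python3 python3-pip git

WORKDIR /workspace/inference/vllm
COPY /inference/vllm .

# Install the published wheel first (pulls in compatible deps),
# then overlay the local checkout as an editable install.
RUN pip3 install vllm && pip3 install -e .

EXPOSE 8000
CMD ["python3", "-m", "vllm.entrypoints.openai.api_server", "--host", "0.0.0.0"]
```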

gesanqiu avatar Jun 30 '23 12:06 gesanqiu

Thanks -- will try this. I had to comment out the `if CUDA_HOME is None:` check in setup.py to make it work (https://github.com/vllm-project/vllm/blob/main/setup.py#L23).
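Commenting out that check is risky: setup.py raises when `torch.utils.cpp_extension.CUDA_HOME` is `None` precisely because the CUDA extensions cannot be built without a toolkit, which later surfaces as the `activation_ops` import error above. A rough sketch of the detection order (an approximation of torch's logic, not its exact code) shows what the environment needs to provide:

```python
import os

def find_cuda_home():
    # Approximates torch.utils.cpp_extension's CUDA_HOME detection:
    # explicit env vars first, then the conventional install prefix.
    # (torch also probes the location of `nvcc` on PATH, omitted here.)
    for var in ("CUDA_HOME", "CUDA_PATH"):
        path = os.environ.get(var)
        if path:
            return path
    default = "/usr/local/cuda"
    return default if os.path.isdir(default) else None

# If this returns None, vllm's setup.py refuses to build -- the fix
# is a CUDA "devel" base image or exporting CUDA_HOME, not removing
# the check.
print(find_cuda_home())
```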

raj-khare avatar Jun 30 '23 20:06 raj-khare

Closing this issue due to inactivity. Please reopen if you have any new questions.

zhuohan123 avatar Jul 17 '23 07:07 zhuohan123