
Install: OSError when running pip install vllm with Python 3.10

Open oximi123 opened this issue 2 years ago • 4 comments

  C:\Users\botao\AppData\Local\Temp\pip-build-env-d2ffimq3\overlay\Lib\site-packages\torch\nn\modules\transformer.py:20: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
    device: torch.device = torch.device(torch._C._get_default_device()),  # torch.device('cpu'),
  Traceback (most recent call last):
    File "C:\Users\botao\anaconda3\envs\vllm\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
      main()
    File "C:\Users\botao\anaconda3\envs\vllm\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "C:\Users\botao\anaconda3\envs\vllm\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
      return hook(config_settings)
    File "C:\Users\botao\AppData\Local\Temp\pip-build-env-d2ffimq3\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in get_requires_for_build_wheel
      return self._get_build_requires(config_settings, requirements=['wheel'])
    File "C:\Users\botao\AppData\Local\Temp\pip-build-env-d2ffimq3\overlay\Lib\site-packages\setuptools\build_meta.py", line 295, in _get_build_requires
      self.run_setup()
    File "C:\Users\botao\AppData\Local\Temp\pip-build-env-d2ffimq3\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
      exec(code, locals())
    File "<string>", line 230, in <module>
    File "C:\Users\botao\AppData\Local\Temp\pip-build-env-d2ffimq3\overlay\Lib\site-packages\torch\utils\cpp_extension.py", line 1076, in CUDAExtension
      library_dirs += library_paths(cuda=True)
    File "C:\Users\botao\AppData\Local\Temp\pip-build-env-d2ffimq3\overlay\Lib\site-packages\torch\utils\cpp_extension.py", line 1210, in library_paths
      paths.append(_join_cuda_home(lib_dir))
    File "C:\Users\botao\AppData\Local\Temp\pip-build-env-d2ffimq3\overlay\Lib\site-packages\torch\utils\cpp_extension.py", line 2416, in _join_cuda_home
      raise OSError('CUDA_HOME environment variable is not set. '
  OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
  [end of output]

error: subprocess-exited-with-error

note: This error originates from a subprocess, and is likely not a problem with pip.

oximi123 avatar Dec 27 '23 07:12 oximi123
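For context, the `OSError` in the traceback comes from `torch.utils.cpp_extension`, which needs to locate the CUDA toolkit before vLLM's CUDA extensions can be compiled. A simplified sketch of how that lookup works (based on the traceback above; the real implementation in PyTorch also probes the `nvcc` on `PATH`):

```python
import os

def find_cuda_home():
    """Simplified sketch of torch.utils.cpp_extension's CUDA lookup."""
    # 1. Explicit environment variables win.
    cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
    if cuda_home is None:
        # 2. Fall back to the conventional Linux install location.
        #    (PyTorch additionally checks where `nvcc` resolves on PATH.)
        default = "/usr/local/cuda"
        if os.path.exists(default):
            cuda_home = default
    # 3. If still None, torch raises:
    #    OSError: CUDA_HOME environment variable is not set. ...
    return cuda_home
```

So the error simply means none of these locations resolved, which is common on Windows where the toolkit installs under `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\` and no environment variable was set.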

I also encountered the same problem. I finally solved it by upgrading the CUDA version.

echo669 avatar Dec 28 '23 14:12 echo669

Hi @oximi123, unfortunately vLLM does not officially support Windows at the moment (though some users have succeeded in running it there). Could you please try WSL and see whether the bug happens again?

WoosukKwon avatar Jan 03 '24 03:01 WoosukKwon
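If you do move to WSL, resolving this particular `OSError` usually comes down to exporting `CUDA_HOME` before running pip. A minimal sketch, assuming the toolkit lives at the conventional `/usr/local/cuda` symlink (adjust the path to your actual install):

```shell
# Point the build at the CUDA toolkit root; /usr/local/cuda is the
# conventional symlink on Linux/WSL -- change it if yours differs.
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"

# Sanity-check the toolkit is visible, then install:
#   nvcc --version
#   pip install vllm
echo "CUDA_HOME=$CUDA_HOME"
```

Adding the `export` lines to `~/.bashrc` makes the setting persist across shells.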

> I also encountered the same problem. I finally solved it by upgrading the cuda version.

Hi, which CUDA version did you use to install it?

oximi123 avatar Jan 04 '24 08:01 oximi123

> > I also encountered the same problem. I finally solved it by upgrading the cuda version.
>
> Hi, which cuda version did you use to install it?

12.2, but I am using Linux.

echo669 avatar Jan 04 '24 09:01 echo669