openchat
`pip install ochat` - name 'nvcc_cuda_version' is not defined
```
PS C:\Users\SantaSpeen> py -3.11 -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
Requirement already satisfied: ...
PS C:\Users\SantaSpeen> py -3.11 -m pip install ochat
Defaulting to user installation because normal site-packages is not writeable
Collecting ochat
  Using cached ochat-3.5.1-py3-none-any.whl.metadata (22 kB)
Collecting beautifulsoup4 (from ochat)
  Using cached beautifulsoup4-4.12.3-py3-none-any.whl.metadata (3.8 kB)
Collecting markdownify (from ochat)
....
Requirement already satisfied: pydantic in c:\users\santaspeen\appdata\roaming\python\python311\site-packages (from ochat) (2.5.2)
Collecting shortuuid (from ochat)
  Using cached shortuuid-1.0.11-py3-none-any.whl (10 kB)
Requirement already satisfied: uvicorn in c:\users\santaspeen\appdata\roaming\python\python311\site-packages (from ochat) (0.24.0.post1)
Collecting vllm (from ochat)
  Using cached vllm-0.2.7.tar.gz (170 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [22 lines of output]
      C:\Users\SantaSpeen\AppData\Local\Temp\pip-build-env-nlo5g4j2\overlay\Lib\site-packages\torch\nn\modules\transformer.py:20: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
        device: torch.device = torch.device(torch._C._get_default_device()),  # torch.device('cpu'),
      No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3'
      Traceback (most recent call last):
        File "C:\Program Files\Python311\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
          main()
        File "C:\Program Files\Python311\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Program Files\Python311\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\SantaSpeen\AppData\Local\Temp\pip-build-env-nlo5g4j2\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\SantaSpeen\AppData\Local\Temp\pip-build-env-nlo5g4j2\overlay\Lib\site-packages\setuptools\build_meta.py", line 295, in _get_build_requires
          self.run_setup()
        File "C:\Users\SantaSpeen\AppData\Local\Temp\pip-build-env-nlo5g4j2\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 298, in <module>
        File "<string>", line 268, in get_vllm_version
      NameError: name 'nvcc_cuda_version' is not defined. Did you mean: 'cuda_version'?
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
```
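The `NameError` suggests that vllm's `setup.py` reads `nvcc_cuda_version` on a code path where it was never assigned — the "No CUDA runtime is found" warning just above indicates NVCC detection failed, so the variable was presumably only bound in the successful-detection branch. A hypothetical sketch of that failure pattern (not vllm's actual code; all names here are illustrative):

```python
# Sketch of the bug class: a variable bound only inside a detection
# branch is read unconditionally later, raising NameError when the
# detection fails (e.g. no CUDA runtime at build time).

def detect_nvcc():
    # Stand-in for probing nvcc; returns None when no CUDA runtime is found.
    return None

nvcc = detect_nvcc()
if nvcc is not None:
    nvcc_cuda_version = nvcc  # only bound when detection succeeds

def get_version():
    # Mirrors a version string builder like get_vllm_version():
    # touching the unbound name raises NameError on the no-CUDA path.
    try:
        return f"0.2.7+cu{nvcc_cuda_version}"
    except NameError:
        return "0.2.7"  # fallback when no toolkit was detected

print(get_version())  # -> 0.2.7
```

In the real build there is no such fallback, which is why the wheel build aborts instead of degrading gracefully.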
Have you installed the CUDA toolkit? If not, try installing CUDA 12.1.
CUDA 12.3 is installed, and privatGPT works fine with it.
Have you tried Transformers or vLLM? PyTorch compatibility with CUDA 12.3 is experimental.
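The mismatch the replies point at — a `cu121` torch wheel against a `v12.3` toolkit — can be checked mechanically. A small sketch with hypothetical helper names (the wheel tag and `CUDA_HOME` path are taken from the log above):

```python
import re

def wheel_cuda_version(tag: str) -> str:
    # PyTorch wheel tags encode major+minor digits, e.g. "cu121" -> "12.1".
    m = re.fullmatch(r"cu(\d+)(\d)", tag)
    return f"{m.group(1)}.{m.group(2)}" if m else "unknown"

def toolkit_cuda_version(cuda_home: str) -> str:
    # CUDA install paths end in a version directory, e.g. ...\CUDA\v12.3.
    m = re.search(r"v(\d+\.\d+)$", cuda_home)
    return m.group(1) if m else "unknown"

wheel = wheel_cuda_version("cu121")  # from the --index-url above
toolkit = toolkit_cuda_version(
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3")
print(wheel, toolkit, wheel == toolkit)  # -> 12.1 12.3 False
```

A mismatch like this is not always fatal for running prebuilt wheels, but source builds such as vllm's compile against the local toolkit, so aligning the two versions is the safer setup.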
This worked for me:
```
pip install ochat --index-url https://download.pytorch.org/whl/cu121 --extra-index-url https://pypi.org/simple
```