text-generation-webui
quant-cuda installation error
Describe the bug
How can I solve this installation problem?
Is there an existing issue for this?
- [X] I have searched the existing issues
Reproduction
Trying to install, I get the error below.
Screenshot
No response
Logs
Building wheels for collected packages: quant-cuda
Building wheel for quant-cuda (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [50 lines of output]
...
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for quant-cuda
Running setup.py clean for quant-cuda
Failed to build quant-cuda
Installing collected packages: quant-cuda
Running setup.py install for quant-cuda ... error
error: subprocess-exited-with-error
× Running setup.py install for quant-cuda did not run successfully.
│ exit code: 1
╰─> [54 lines of output]
...
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> quant-cuda
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
ERROR: GPTQ CUDA kernel compilation failed.
Attempting installation with wheel.
Collecting quant-cuda==0.0.0
Using cached https://github.com/jllllll/GPTQ-for-LLaMa-Wheels/raw/main/quant_cuda-0.0.0-cp310-cp310-win_amd64.whl (398 kB)
Installing collected packages: quant-cuda
Successfully installed quant-cuda-0.0.0
System Info
3060
PowerShell:
$env:DISTUTILS_USE_SDK = 1

CMD:
SET DISTUTILS_USE_SDK=1

Run that command just before running:
python setup_cuda.py install
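If you would rather script this than set the variable in your terminal, a minimal Python sketch of the same idea is below (it only sets DISTUTILS_USE_SDK in a copied environment; the actual build invocation is left commented out, and the setup_cuda.py path is the one from the comment above):

```python
import os
import subprocess
import sys

# Copy the current environment and set DISTUTILS_USE_SDK=1,
# as suggested above, before invoking the CUDA kernel build.
env = os.environ.copy()
env["DISTUTILS_USE_SDK"] = "1"

# Uncommenting this would run the build with the modified environment:
# subprocess.run([sys.executable, "setup_cuda.py", "install"],
#                env=env, check=True)
print(env["DISTUTILS_USE_SDK"])
```

This leaves your shell's own environment untouched; only the subprocess would see the variable.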
Shout-out to jllllll, who commented this under my issue.
Thank you, but there is no such file as setup_cuda.py. Where can I see more detailed instructions about this? And why not add this line to the setup file to avoid conflicts?
You need to download it by doing:
mkdir repositories
cd repositories ...
Have you followed the instructions here: https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/ ?
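For context, the "mkdir repositories" steps usually mean cloning a GPTQ-for-LLaMa repo inside the text-generation-webui folder, which is where setup_cuda.py lives. A sketch of what that typically looks like — the repo URL is an assumption (the qwopqwop200 repo); check the webui README for the exact fork and branch you need:

```shell
# Run from the text-generation-webui directory.
# The repo URL below is an assumption -- check the README for the fork you need.
mkdir repositories
cd repositories
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install   # this is where setup_cuda.py lives
```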
I have used oobabooga from day one, and I have run various LLaMA-like LLMs with it too, but after the latest updates of ooba it doesn't work. I had been using CUDA 12 all this time and everything was fine, but now it suddenly has to use CUDA 11.7. Downgrading to 11.7 is not a problem in itself, but my other programs have to use CUDA 12, so I wonder why ooba did this.
I have the same issue, used to work but stopped working a couple of days ago. Same problem as quant-cuda above. I've been trying to fix it for days now and nothing is working.
I still wonder if there is any simple fix for this: a fix file, an alternative installation file, or something that doesn't require dancing around the PC?
Same issue trying to install oobabooga
I also have it; maybe a dependency got screwed up?
Hi, I have since fixed this issue.
It was because when I originally installed PyTorch, I installed the CPU version. I had to uninstall PyTorch and reinstall the correct one.
Hm, but I just downloaded oobabooga and ran the installer; shouldn't it choose the correct version? Well, I'll try to override it manually too.
Before running the installer/updater, make sure you have Python, CUDA, and the Visual Studio C++ build tools installed. Then install the correct version of PyTorch for your CUDA version (I used CUDA 11.7).
Then run the installer. I'm not sure if all of the above is necessary, but it fixed this problem for me.
Well, I'm on Linux, so it will be different for me :)
It won't be, though. PyTorch is a Python thing, and the pip command is pretty much the same on all OSes.
For Linux you should be able to ignore the --index-url though.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
Even installing with conda is pretty much the same thing on both Linux and Windows.
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
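After either install command, you can verify which build you ended up with. A quick, hedged check that returns False both when torch is missing entirely and when it is a CPU-only build:

```python
import importlib.util

def cuda_torch_available() -> bool:
    """Return True only if torch is installed AND reports CUDA support."""
    if importlib.util.find_spec("torch") is None:
        return False  # torch is not installed at all
    import torch
    return torch.cuda.is_available()

print(cuda_torch_available())
```

If this prints False on a machine with an NVIDIA GPU, you most likely have the CPU wheel and should reinstall with one of the commands above.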
Although... even this does not seem to really help, whether it is through pip or conda, at least if you are just trying to use the update batch file. ~~Not sure if you need a fresh install for it to work.~~ A fresh install does not help either.
The OP's error is more like a warning. The installer tries to compile GPTQ-for-LLaMa and, if that fails (usually because nvcc is not available), it proceeds to install a precompiled wheel instead.
The updated version of webui.py shows this warning to make it clear that the error can be ignored:
print("\n\n*******************************************************************")
print("* WARNING: GPTQ-for-LLaMa compilation failed, but this is FINE and can be ignored!")
print("* The installer will proceed to install a pre-compiled wheel.")
print("*******************************************************************\n\n")
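The fallback flow described above can be sketched like this — a simplified stand-in, not the actual webui.py/one-click-installer code; the wheel URL is the one from the pip log earlier in the thread, and the command/callback parameters are hypothetical:

```python
import subprocess
import sys

# Wheel URL taken from the pip log earlier in this thread.
WHEEL_URL = ("https://github.com/jllllll/GPTQ-for-LLaMa-Wheels/raw/main/"
             "quant_cuda-0.0.0-cp310-cp310-win_amd64.whl")

def install_quant_cuda(compile_cmd, install_wheel):
    """Try to compile the CUDA kernel; on failure, fall back to the wheel."""
    result = subprocess.run(compile_cmd, shell=True)
    if result.returncode != 0:
        print("* WARNING: GPTQ-for-LLaMa compilation failed, "
              "but this is FINE and can be ignored!")
        install_wheel()  # e.g. pip install WHEEL_URL
        return "wheel"
    return "compiled"

# A failing compile command triggers the wheel fallback:
print(install_quant_cuda("exit 1", install_wheel=lambda: None))
```

So a non-zero exit from the compile step only switches the installer over to the precompiled wheel, which is why the thread's "Successfully installed quant-cuda-0.0.0" line means everything actually worked.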