text-generation-webui
undefined symbol: cget_col_row_stats / 8-bit not working / libsbitsandbytes_cpu.so not found
Describe the bug
On starting the server, I receive the following error messages:
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
C:\Oobabooga_new\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
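For reference, the "argument of type 'WindowsPath' is not iterable" message is a plain Python TypeError: it appears when a substring membership test (`in`) is applied to a pathlib path object instead of a string. A minimal reproduction, not the actual bitsandbytes code (the path is made up for illustration):

```python
from pathlib import PureWindowsPath

# Hypothetical path for illustration; pathlib objects don't support "in".
env_path = PureWindowsPath(r"C:\Oobabooga_new\installer_files\env")

try:
    "conda" in env_path          # raises TypeError
except TypeError as exc:
    print(f"TypeError: {exc}")   # argument of type 'PureWindowsPath' is not iterable

# The usual fix is to compare against the string form of the path:
print("env" in str(env_path))    # True
```

Later bitsandbytes patches apply essentially this `str()` conversion before searching the path for CUDA markers.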
This is not the same as #388.
Is there an existing issue for this?
- [X] I have searched the existing issues
Reproduction
Start web UI using the supplied batch file.
Screenshot
Logs
None. See screenshot.
System Info
Windows 11
No GPU, CPU only
CPU: Ryzen 7 6800H
RAM: 32 GB
I have the exact same issue with an Nvidia GPU and Win10. I've tried a fresh install several times, and nothing seems to work. Very frustrating, given that it worked just fine yesterday. I shouldn't have done git pull today; it seems to have broken the UI.
I'm on CPU. It does work, but I'm not sure I'm getting the best out of it. Still getting short, low-quality responses with very little RP, which is why I did a fresh install.
On Sat, Mar 18, 2023, fuomag9 wrote:
Same issue on linux as well.
I even tried inside an nvidia/cuda 11.8.0-runtime-ubuntu22.04 container: https://hub.docker.com/layers/nvidia/cuda/11.8.0-runtime-ubuntu22.04/images/sha256-61187bc58b1411daa436202bebc96022e9c5339611589a022cd913b1b54cdead
That's just a warning, not a bug
But it says no GPU detected, falling back to CPU, I'd assume that's not the correct behavior?
OP doesn't have a GPU, so it's expected behavior
On Windows, I recommend installing using the new WSL recommended method
> OP doesn't have a GPU, so it's expected behavior
> On Windows, I recommend installing using the new WSL recommended method
In my case I had the same issue and I have a gpu passed with --gpus=all inside docker :(
> OP doesn't have a GPU, so it's expected behavior
> On Windows, I recommend installing using the new WSL recommended method
Unfortunately it's not possible, Microsoft store doesn't work in my country. Is it possible to download the previous working version of UI somewhere?
You may be better off just running an Ubuntu VM; your GPU should pass through.
I have deleted my conda environment and created a new one following the README and now I also can't use 8bit heh
undefined symbol: cget_col_row_stats
https://github.com/TimDettmers/bitsandbytes/issues/112
Just tried to run:
conda install torchvision=0.14.1 torchaudio=0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
and git pulled my local folder
Everything went successfully but now I'm getting:
Traceback (most recent call last):
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\requests\compat.py", line 11, in
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "F:\Program Files (x86)\textgen_webui_04\text-generation-webui\server.py", line 10, in
Installing those older versions had worked for me briefly, then it stopped working again.
> I have deleted my conda environment and created a new one following the README and now I also can't use 8bit heh
> undefined symbol: cget_col_row_stats
Getting this one as well.
This may be relevant
https://github.com/TimDettmers/bitsandbytes/issues/156#issuecomment-1462329713
Nothing changed in bits&bytes.
Ok I got it
- Start over
conda deactivate
conda remove -n textgen --all
conda create -n textgen python=3.10.9
conda activate textgen
pip3 install torch torchvision torchaudio
cd text-generation-webui
pip install -r requirements.txt
- Do the dirty fix in https://github.com/TimDettmers/bitsandbytes/issues/156#issuecomment-1462329713:
cd /home/yourname/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so
cd -
- Install cudatoolkit
conda install cudatoolkit
- It now works
python server.py --listen --model llama-7b --lora alpaca-lora-7b --load-in-8bit
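The cp in the dirty-fix step above works because bitsandbytes falls back to loading whatever file is named libbitsandbytes_cpu.so when its CUDA detection fails; copying the CUDA build to that name means even the fallback path loads GPU-enabled code. A schematic sketch of that idea (pick_binary is a hypothetical stand-in, not the real bitsandbytes loader):

```python
import shutil
import tempfile
from pathlib import Path

def pick_binary(pkg_dir: Path, cuda_detected: bool) -> Path:
    """Hypothetical sketch of the loader: use the CUDA build when detection
    succeeds, otherwise fall back to the file named libbitsandbytes_cpu.so."""
    name = "libbitsandbytes_cuda120.so" if cuda_detected else "libbitsandbytes_cpu.so"
    return pkg_dir / name

with tempfile.TemporaryDirectory() as d:
    pkg = Path(d)
    (pkg / "libbitsandbytes_cuda120.so").write_bytes(b"cuda build")
    # The "dirty fix": copy the CUDA binary over the CPU filename, so the
    # CPU fallback path still ends up loading the CUDA-enabled library.
    shutil.copy(pkg / "libbitsandbytes_cuda120.so", pkg / "libbitsandbytes_cpu.so")
    print(pick_binary(pkg, cuda_detected=False).read_bytes())  # b'cuda build'
```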
> This may be relevant
Running pip3 install torch torchvision torchaudio on the new commit, plus replacing the CPU file with the cuda117 file, seems to have fixed undefined symbol: cget_col_row_stats for me.
> Nothing changed in bits&bytes.
I think the problem was the recent pytorch update.
> Ok I got it
>
> - Start over
>
> conda deactivate
> conda remove -n textgen --all
> conda create -n textgen python=3.10.9
> conda activate textgen
> pip3 install torch torchvision torchaudio
> cd text-generation-webui
> pip install -r requirements.txt
>
> - Do the dirty fix in bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats TimDettmers/bitsandbytes#156 (comment):
>
> cd /home/yourname/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
> cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so
> cd -
>
> - Install cudatoolkit
>
> conda install cudatoolkit
>
> - It now works
>
> python server.py --listen --model llama-7b --lora alpaca-lora-7b --load-in-8bit
Doing git pull and then this worked for me as well!
I am using miniconda so my folder was /home/$USER/.conda/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
conda install cudatoolkit
I'm using Anaconda3, so I couldn't do step 2 (I just can't find the folders), but I did everything else and was able to launch the UI. It seems to be working fine right now, thank you!
Although I've found those files in F:\Anakonda3\envs\textgen_webui_05\Lib\site-packages\bitsandbytes
are those the same files?
So I've changed those files in F:\Anakonda3\envs\textgen_webui_05\Lib\site-packages\bitsandbytes, but nothing seems to have changed; it still gives the warning:
Warning: torch.cuda.is_available() returned False.
It works, but doesn't seem to use the GPU at all.
Also, llama-7b-hf --gptq-bits 4 doesn't work anymore, although it used to in the previous version of the UI. It says CUDA extension not installed.
It was possible before to load llama-13b-hf --auto-devices --gpu-memory 4, but now it just eats all 32 GB of RAM, so I aborted it.
> Ok I got it
>
> 1. Start over
>
> conda deactivate
> conda remove -n textgen --all
> conda create -n textgen python=3.10.9
> conda activate textgen
> pip3 install torch torchvision torchaudio
> cd text-generation-webui
> pip install -r requirements.txt
>
> 2. Do the dirty fix in [bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats TimDettmers/bitsandbytes#156 (comment)](https://github.com/TimDettmers/bitsandbytes/issues/156#issuecomment-1462329713):
>
> cd /home/yourname/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
> cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so
> cd -
>
> 3. Install cudatoolkit
>
> conda install cudatoolkit
>
> 4. It now works
>
> python server.py --listen --model llama-7b --lora alpaca-lora-7b --load-in-8bit
I had a problem with these instructions which I narrowed down to this line:
pip3 install torch torchvision torchaudio
PyTorch has now updated to 2.0.0, so running this command will install 2.0.0, which causes errors when running this code. On top of that, running
conda install cudatoolkit
would install a version of CUDA that is not compatible with PyTorch 2.0.0, resulting in @KirillRepinArt's error:
Warning: torch.cuda.is_available() returned False.
To fix this, simply install the version of PyTorch immediately preceding 2.0.0. I did this using the command from the PyTorch website instead:
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
I also didn't have to do conda install cudatoolkit
after using this pip command.
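The mismatch described above can be spotted mechanically: a torch wheel encodes the CUDA toolkit it was built for in its local version tag (e.g. 1.13.1+cu116 targets CUDA 11.6). A rough sketch of such a check; the helper names are mine, and on a real install you would just compare torch.version.cuda against your toolkit:

```python
from typing import Optional

def wheel_cuda_version(torch_version: str) -> Optional[str]:
    """'1.13.1+cu116' -> '11.6'; None for CPU-only or untagged builds."""
    if "+cu" not in torch_version:
        return None
    tag = torch_version.split("+cu", 1)[1]   # e.g. "116"
    return f"{tag[:-1]}.{tag[-1]}"           # "116" -> "11.6"

def is_compatible(torch_version: str, toolkit_version: str) -> bool:
    """True when the wheel's CUDA tag matches the installed toolkit's major.minor."""
    wheel_cuda = wheel_cuda_version(torch_version)
    return wheel_cuda is not None and toolkit_version.startswith(wheel_cuda)

print(is_compatible("1.13.1+cu116", "11.6"))  # True
print(is_compatible("2.0.0+cu117", "12.0"))   # False: the mismatch described above
```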
> I had a problem with these instructions which I narrowed down to this line:
>
> pip3 install torch torchvision torchaudio
>
> PyTorch has now updated to 2.0.0 and so running this command will install 2.0.0, but errors occur when running this code using 2.0.0, and using
>
> conda install cudatoolkit
>
> would install a version of CUDA which is not compatible with PyTorch 2.0.0, resulting in @KirillRepinArt's error:
>
> Warning: torch.cuda.is_available() returned False.
>
> To fix this, simply install the version of PyTorch immediately preceding 2.0.0. I did this using the command from the PyTorch website instead:
>
> pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
>
> I also didn't have to do conda install cudatoolkit after using this pip command.
This worked for me, thank you! Though for CUDA 11.7 I had to use pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117 instead, and I also didn't do conda install cudatoolkit.
Now it seems to be working as before: it uses the GPU, and I can load llama-7b-hf --cai-chat --gptq-bits 4.
As in the previous version, --load-in-8bit doesn't work for me anymore; it gives CUDA Setup failed despite GPU being available.
I also can't load --model llama-13b-hf --gptq-bits 4 --cai-chat --auto-devices --gpu-memory 4; it gives me torch.cuda.OutOfMemoryError: CUDA out of memory.
But I had these issues before the last update, and everything that worked previously is also working now, so thanks again!
I tried the command and got this error:

(d:\myenvs\textgen1) D:\text-generation-webui\repositories\GPTQ-for-LLaMa>pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
Collecting torch==1.13.1+cu117
Using cached https://download.pytorch.org/whl/cu117/torch-1.13.1%2Bcu117-cp310-cp310-win_amd64.whl (2255.4 MB)
Collecting torchvision==0.14.1+cu117
Using cached https://download.pytorch.org/whl/cu117/torchvision-0.14.1%2Bcu117-cp310-cp310-win_amd64.whl (4.8 MB)
Collecting torchaudio==0.13.1
Using cached https://download.pytorch.org/whl/cu117/torchaudio-0.13.1%2Bcu117-cp310-cp310-win_amd64.whl (2.3 MB)
Requirement already satisfied: typing-extensions in d:\myenvs\textgen1\lib\site-packages (from torch==1.13.1+cu117) (4.5.0)
Requirement already satisfied: numpy in d:\myenvs\textgen1\lib\site-packages (from torchvision==0.14.1+cu117) (1.24.2)
Requirement already satisfied: requests in d:\myenvs\textgen1\lib\site-packages (from torchvision==0.14.1+cu117) (2.28.2)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\myenvs\textgen1\lib\site-packages (from torchvision==0.14.1+cu117) (9.4.0)
Requirement already satisfied: certifi>=2017.4.17 in d:\myenvs\textgen1\lib\site-packages (from requests->torchvision==0.14.1+cu117) (2022.12.7)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\myenvs\textgen1\lib\site-packages (from requests->torchvision==0.14.1+cu117) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in d:\myenvs\textgen1\lib\site-packages (from requests->torchvision==0.14.1+cu117) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\myenvs\textgen1\lib\site-packages (from requests->torchvision==0.14.1+cu117) (1.26.15)
Installing collected packages: torch, torchvision, torchaudio
Attempting uninstall: torch
Found existing installation: torch 2.0.0
Uninstalling torch-2.0.0:
Successfully uninstalled torch-2.0.0
Attempting uninstall: torchvision
Found existing installation: torchvision 0.15.0
Uninstalling torchvision-0.15.0:
Successfully uninstalled torchvision-0.15.0
Attempting uninstall: torchaudio
Found existing installation: torchaudio 2.0.0
Uninstalling torchaudio-2.0.0:
Successfully uninstalled torchaudio-2.0.0
Successfully installed torch-1.13.1+cu117 torchaudio-0.13.1+cu117 torchvision-0.14.1+cu117
(d:\myenvs\textgen1) D:\text-generation-webui\repositories\GPTQ-for-LLaMa>python setup_cuda.py install
running install
d:\myenvs\textgen1\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
d:\myenvs\textgen1\lib\site-packages\setuptools\command\easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running bdist_egg
running egg_info
writing quant_cuda.egg-info\PKG-INFO
writing dependency_links to quant_cuda.egg-info\dependency_links.txt
writing top-level names to quant_cuda.egg-info\top_level.txt
reading manifest file 'quant_cuda.egg-info\SOURCES.txt'
writing manifest file 'quant_cuda.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_ext
d:\myenvs\textgen1\lib\site-packages\torch\utils\cpp_extension.py:358: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
Traceback (most recent call last):
File "D:\text-generation-webui\repositories\GPTQ-for-LLaMa\setup_cuda.py", line 4, in
@gsgoldma I ran into this error as well. Your CUDA version is 12.0, which isn't compatible with your PyTorch build for CUDA 11.7. You need to downgrade your CUDA version to one that is compatible with the cu117 PyTorch build. You could also try redoing everything with my instructions.
Works on Linux with CUDA 12.1: NVIDIA-SMI 530.30.02, Driver Version 530.30.02, CUDA Version 12.1
> Ok I got it
>
> 1. Start over
>
> conda deactivate
> conda remove -n textgen --all
> conda create -n textgen python=3.10.9
> conda activate textgen
> pip3 install torch torchvision torchaudio
> cd text-generation-webui
> pip install -r requirements.txt
>
> 2. Do the dirty fix in [bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats TimDettmers/bitsandbytes#156 (comment)](https://github.com/TimDettmers/bitsandbytes/issues/156#issuecomment-1462329713):
>
> cd /home/yourname/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
> cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so
> cd -
>
> 3. Install cudatoolkit
>
> conda install cudatoolkit
>
> 4. It now works
>
> python server.py --listen --model llama-7b --lora alpaca-lora-7b --load-in-8bit
Note that on Windows, if you have Python 3.10 set as a system path variable, the python3.10 directory is entirely skipped. So the path is "cd Drive path/users/yourname/etcetcetc/miniconda3/envs/textgen/lib/site-packages/bitsandbytes/".
I got the same issue when using the new one-click-installer, even though it is supposed to do the dirty fixes automatically. Nvidia gpu is not recognized, and it uses only CPU when I try to --load-in-8bit
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
cc @jllllll
> I got the same issue when using the new one-click-installer, even though it is supposed to do the dirty fixes automatically. Nvidia gpu is not recognized, and it uses only CPU when I try to --load-in-8bit
>
> CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
> CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
> argument of type 'WindowsPath' is not iterable
There are no dirty fixes anymore. Try this: https://github.com/oobabooga/text-generation-webui/issues/659#issuecomment-1493555255
Also, you may have installed the cpu version of torch. I've seen that happen before, though I don't know the cause. You can try this to replace it:
python -m pip install torch --index-url https://download.pytorch.org/whl/cu117 --force-reinstall
--OR--
python -m pip install https://download.pytorch.org/whl/cu117/torch-2.0.0%2Bcu117-cp310-cp310-win_amd64.whl --force-reinstall
This will tell you about your torch installation: python -m torch.utils.collect_env
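Short of reading the full collect_env dump, an accidentally installed CPU-only wheel is also recognizable from the version string alone (torch.__version__): CUDA wheels carry a +cuXXX local version tag, while CPU wheels are tagged +cpu or untagged. A minimal sketch; the example strings are illustrative:

```python
def is_cpu_only_build(torch_version: str) -> bool:
    """CUDA wheels look like '2.0.0+cu117'; CPU wheels like '2.0.0+cpu' or '2.0.0'."""
    return "+cu" not in torch_version or "+cpu" in torch_version

print(is_cpu_only_build("2.0.0+cpu"))    # True  -> reinstall from the cu117 index
print(is_cpu_only_build("2.0.0+cu117"))  # False
```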
> Ok I got it
>
> - Start over
>
> conda deactivate
> conda remove -n textgen --all
> conda create -n textgen python=3.10.9
> conda activate textgen
> pip3 install torch torchvision torchaudio
> cd text-generation-webui
> pip install -r requirements.txt
>
> - Do the dirty fix in bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats TimDettmers/bitsandbytes#156 (comment):
>
> cd /home/yourname/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
> cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so
> cd -
>
> - Install cudatoolkit
>
> conda install cudatoolkit
>
> - It now works
>
> python server.py --listen --model llama-7b --lora alpaca-lora-7b --load-in-8bit
You forgot an s:
cp libbitsandbytes_cuda120.so libsbitsandbytes_cpu.so