
CUDA Setup Failed after trying to run at 8-bit

Open iChristGit opened this issue 2 years ago • 9 comments

I had a working LLaMA 7B installation, but 13B failed with 24GB VRAM (3090 Ti) and 32GB RAM, so I tried this:

https://github.com/oobabooga/text-generation-webui/issues/147#issuecomment-1456040134

(I downloaded the dll, put it in the right folder, and edited those 3 lines in main.py for bitsandbytes.) I have CUDA 11.8 and Miniconda.
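For anyone retracing this step: a quick way to confirm that the downloaded DLL sits in the folder bitsandbytes is actually imported from, and that Windows can load it, is a small probe like the sketch below. The DLL name here is an assumption; use whichever prebuilt file you downloaded.

# Sketch: check that a prebuilt bitsandbytes DLL is present and loadable.
# The DLL name below is an assumption - match it to the file you downloaded.
import ctypes
import importlib.util
from pathlib import Path

spec = importlib.util.find_spec("bitsandbytes")  # locates the package without importing it
if spec is None:
    raise SystemExit("bitsandbytes is not installed in this environment")

dll_path = Path(spec.origin).parent / "libbitsandbytes_cuda116.dll"  # assumed file name
print("looking for:", dll_path)

if not dll_path.exists():
    print("DLL not found here - the copy you edited may belong to a different environment.")
else:
    try:
        ctypes.WinDLL(str(dll_path))  # Windows-only; raises OSError if dependent DLLs are missing
        print("DLL loaded OK")
    except OSError as err:
        print("DLL exists but failed to load (CUDA runtime DLLs missing from PATH?):", err)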

Then I get this error. Any idea? 👍




(base) D:\MachineLearning\TextWebui\text-generation-webui>python server.py --model LLaMA-13B --load-in-8bit
Traceback (most recent call last):
  File "D:\MachineLearning\TextWebui\text-generation-webui\server.py", line 10, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'

(base) D:\MachineLearning\TextWebui\text-generation-webui>conda activate textgen

(textgen) D:\MachineLearning\TextWebui\text-generation-webui>python server.py --model LLaMA-13B --load-in-8bit
Loading LLaMA-13B...

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('D')}
  warn(msg)
D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: D:\MachineLearning\Miniconda\envs\textgen did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
  warn(msg)
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
  warn(msg)
D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
  warn(msg)
CUDA SETUP: Loading binary D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
CUDA SETUP: Loading binary D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA library was not detected.
CUDA SETUP: Solution 1): Your paths are probably not up-to-date. You can update them via: sudo ldconfig.
CUDA SETUP: Solution 2): If you do not have sudo rights, you can do the following:
CUDA SETUP: Solution 2a): Find the cuda library via: find / -name libcuda.so 2>/dev/null
CUDA SETUP: Solution 2b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_2a
CUDA SETUP: Solution 2c): For a permanent solution add the export from 2b into your .bashrc file, located at ~/.bashrc
Traceback (most recent call last):
  File "D:\MachineLearning\TextWebui\text-generation-webui\server.py", line 194, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\MachineLearning\TextWebui\text-generation-webui\modules\models.py", line 123, in load_model
    model = eval(command)
  File "<string>", line 1, in <module>
  File "D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\transformers\modeling_utils.py", line 2503, in from_pretrained
    from .utils.bitsandbytes import get_keys_to_not_convert, replace_8bit_linear
  File "D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\transformers\utils\bitsandbytes.py", line 7, in <module>
    import bitsandbytes as bnb
  File "D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\__init__.py", line 7, in <module>
    from .autograd._functions import (
  File "D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\autograd\__init__.py", line 1, in <module>
    from ._functions import undo_layout, get_inverse_transform_indices
  File "D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\autograd\_functions.py", line 9, in <module>
    import bitsandbytes.functional as F
  File "D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\functional.py", line 17, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "D:\MachineLearning\Miniconda\envs\textgen\lib\site-packages\bitsandbytes\cextension.py", line 22, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment!
        If you cannot find any issues and suspect a bug, please open an issue with detals about your environment:
        https://github.com/TimDettmers/bitsandbytes/issues
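A side note for anyone comparing against this output: the "No GPU detected" warnings come from bitsandbytes' own path scanning, so it helps to confirm separately that PyTorch itself sees the card. A minimal check, assuming a CUDA build of torch is installed:

# Sketch: confirm PyTorch, independently of bitsandbytes, can see the GPU.
import torch

print("torch version:    ", torch.__version__)
print("torch CUDA build: ", torch.version.cuda)        # None indicates a CPU-only wheel
print("cuda available:   ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:           ", torch.cuda.get_device_name(0))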

iChristGit avatar Mar 06 '23 22:03 iChristGit

For anybody still having trouble, you can try using a newer prebuilt library - https://github.com/james-things/bitsandbytes-prebuilt-all_arch. Using v37 finally did it for me :) https://github.com/oobabooga/text-generation-webui/issues/20#issuecomment-1455762694

This may not be the issue, but give it a shot and let us know how it goes.

MarkSchmidty avatar Mar 07 '23 07:03 MarkSchmidty

For anybody still having trouble, you can try using a newer prebuilt library - https://github.com/james-things/bitsandbytes-prebuilt-all_arch. Using v37 finally did it for me :) #20 (comment)

This may not be the issue, but give it a shot and let us know how it goes.

I've added the v37 dll to the same folder; what should I do next?

iChristGit avatar Mar 07 '23 09:03 iChristGit

I did a reinstall of transformers (pip uninstall transformers, then pip install git+https://github.com/oobabooga/transformers@llama_push).

Now getting this:

Loading llama-13b...
Traceback (most recent call last):
  File "D:\MachineLearning\TextWebui\text-generation-webui\server.py", line 194, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\MachineLearning\TextWebui\text-generation-webui\modules\models.py", line 123, in load_model
    model = eval(command)
  File "<string>", line 1, in <module>
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2503, in from_pretrained
    from .utils.bitsandbytes import get_keys_to_not_convert, replace_8bit_linear
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\transformers\utils\bitsandbytes.py", line 7, in <module>
    import bitsandbytes as bnb
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\bitsandbytes\__init__.py", line 7, in <module>
    from .autograd._functions import (
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\bitsandbytes\autograd\__init__.py", line 1, in <module>
    from ._functions import undo_layout, get_inverse_transform_indices
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\bitsandbytes\autograd\_functions.py", line 9, in <module>
    import bitsandbytes.functional as F
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\bitsandbytes\functional.py", line 17, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\bitsandbytes\cextension.py", line 8, in <module>
    from bitsandbytes.cuda_setup.main import CUDASetup
  File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\bitsandbytes\cuda_setup\main.py", line 368
    if torch.cuda.is_available(): return 'libbitsandbytes_cuda116.dll', None, None, None, None
                                                                                              ^
IndentationError: unindent does not match any outer indentation level
Press any key to continue . . .

iChristGit avatar Mar 07 '23 09:03 iChristGit

IndentationError: unindent does not match any outer indentation level

This means you changed the main.py file incorrectly: you have changed the number of spaces before line 368. There should be exactly four spaces before it, no more, no less.

I recommend using Notepad++ to make any changes.

EDIT: Also, you need to update that main.py to reflect the new dll name, libbitsandbytes_cudaall.dll, as per https://github.com/james-things/bitsandbytes-prebuilt-all_arch#using-with-sd-dreambooth-extension
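To make that concrete, the patched region of cuda_setup\main.py ends up looking roughly like the sketch below. Only the single return line and its four-space indent come from this thread; the surrounding function body is illustrative and differs between bitsandbytes versions.

# Illustrative sketch of the patch near line 368 of bitsandbytes\cuda_setup\main.py.
# torch is already imported at the top of that module; the only edit discussed here
# is the early-return line, indented by exactly four spaces (one level inside the def).
def evaluate_cuda_setup():
    if torch.cuda.is_available(): return 'libbitsandbytes_cudaall.dll', None, None, None, None
    # ... the rest of the original function body continues unchanged below ...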

askmyteapot avatar Mar 07 '23 09:03 askmyteapot


IndentationError: unindent does not match any outer indentation level

This means you changed the main.py file incorrectly: you have changed the number of spaces before line 368. There should be exactly four spaces before it, no more, no less.

I recommend using Notepad++ to make any changes.

EDIT: Also, you need to update that main.py to reflect the new dll name, libbitsandbytes_cudaall.dll, as per https://github.com/james-things/bitsandbytes-prebuilt-all_arch#using-with-sd-dreambooth-extension

Okay, doing that step again now with Notepad++. For the last step, what lines do I need to change for cudaall? I've been trying to do this for like 10 hours lol

iChristGit avatar Mar 07 '23 10:03 iChristGit

Should just be on line 368.

The entire line should look like: if torch.cuda.is_available(): return 'libbitsandbytes_cudaall.dll', None, None, None, None

Note the 4 spaces at the front.

askmyteapot avatar Mar 07 '23 10:03 askmyteapot

Should just be on line 368.

The entire line should look like: if torch.cuda.is_available(): return 'libbitsandbytes_cudaall.dll', None, None, None, None

Note the 4 spaces at the front.

Thank you! Doing it all from scratch + using Notepad++ and copying your line worked perfectly! Once again, thank you :D

iChristGit avatar Mar 07 '23 10:03 iChristGit

You're welcome.

askmyteapot avatar Mar 07 '23 10:03 askmyteapot

This fix did not work for me. I changed the line and placed the v37 dll into the bitsandbytes folder, but I'm still getting the same error:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: Loading binary C:\Users\Emperor\miniconda3\lib\site-packages\bitsandbytes\libbitsandbytes_cudaall.dll...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Loading binary C:\Users\Emperor\miniconda3\lib\site-packages\bitsandbytes\libbitsandbytes_cudaall.dll...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA library was not detected.
CUDA SETUP: Solution 1): Your paths are probably not up-to-date. You can update them via: sudo ldconfig.
CUDA SETUP: Solution 2): If you do not have sudo rights, you can do the following:
CUDA SETUP: Solution 2a): Find the cuda library via: find / -name libcuda.so 2>/dev/null
CUDA SETUP: Solution 2b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_2a
CUDA SETUP: Solution 2c): For a permanent solution add the export from 2b into your .bashrc file, located at ~/.bashrc
Traceback (most recent call last):
  File "D:\Documents\Textgen\text-generation-webui\server.py", line 194, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\Documents\Textgen\text-generation-webui\modules\models.py", line 123, in load_model
    model = eval(command)
  File "<string>", line 1, in <module>
  File "C:\Users\Emperor\miniconda3\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\Emperor\miniconda3\lib\site-packages\transformers\modeling_utils.py", line 2503, in from_pretrained
    from .utils.bitsandbytes import get_keys_to_not_convert, replace_8bit_linear
  File "C:\Users\Emperor\miniconda3\lib\site-packages\transformers\utils\bitsandbytes.py", line 7, in <module>
    import bitsandbytes as bnb
  File "C:\Users\Emperor\miniconda3\lib\site-packages\bitsandbytes\__init__.py", line 7, in <module>
    from .autograd._functions import (
  File "C:\Users\Emperor\miniconda3\lib\site-packages\bitsandbytes\autograd\__init__.py", line 1, in <module>
    from ._functions import undo_layout, get_inverse_transform_indices
  File "C:\Users\Emperor\miniconda3\lib\site-packages\bitsandbytes\autograd\_functions.py", line 9, in <module>
    import bitsandbytes.functional as F
  File "C:\Users\Emperor\miniconda3\lib\site-packages\bitsandbytes\functional.py", line 17, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "C:\Users\Emperor\miniconda3\lib\site-packages\bitsandbytes\cextension.py", line 22, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment!
        If you cannot find any issues and suspect a bug, please open an issue with detals about your environment:
        https://github.com/TimDettmers/bitsandbytes/issues
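One thing worth checking when the patch appears to have no effect: the traceback above loads bitsandbytes from the base miniconda3 site-packages, which may not be the same copy of main.py that was edited. A small sketch (standard library only) to see which interpreter and which bitsandbytes install are actually in use:

# Sketch: report which interpreter and which bitsandbytes installation would be used,
# without triggering the CUDA setup that raises the error above.
import importlib.util
import sys

print("interpreter:", sys.executable)

spec = importlib.util.find_spec("bitsandbytes")  # does not execute the package
if spec is None:
    print("bitsandbytes is not installed in this environment")
else:
    print("bitsandbytes location:", spec.origin)
    # If this path is not the copy you patched, edit the main.py shown here
    # (or run from the environment that contains the patched copy).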

Mozoloa avatar Mar 09 '23 18:03 Mozoloa

This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.

github-actions[bot] avatar Apr 08 '23 23:04 github-actions[bot]

With llama2: torch 2.0.1+cu117, torchaudio 2.0.2+cu117, torchvision 0.15.2+cu117, transformers 4.33.1, bitsandbytes 0.41.1, accelerate 0.22.0.

Can you please let me know how to resolve the following issue?

False

===================================BUG REPORT===================================
C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=117, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 148, in _get_module_details
  File "<frozen runpy>", line 112, in _get_module_details
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
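Since python -m bitsandbytes itself fails here, one way to gather the environment details the error asks for, without importing bitsandbytes at all, is a sketch like this (only torch and the standard library are used):

# Sketch: collect environment details without importing bitsandbytes,
# whose import is exactly what fails in the traceback above.
import importlib.util
import os
import sys

import torch

print("python:             ", sys.version.split()[0], "at", sys.executable)
print("torch:              ", torch.__version__)
print("torch CUDA build:   ", torch.version.cuda)
print("cuda available:     ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:             ", torch.cuda.get_device_name(0))
    print("compute capability: ", torch.cuda.get_device_capability(0))

spec = importlib.util.find_spec("bitsandbytes")
print("bitsandbytes at:    ", spec.origin if spec else "not installed")

# On Windows installs, CUDA_PATH (if set) points at the CUDA toolkit whose runtime DLLs are used.
print("CUDA_PATH:          ", os.environ.get("CUDA_PATH"))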

MalleswararaoMaguluri avatar Sep 07 '23 10:09 MalleswararaoMaguluri