
AttributeError: 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'

Open wissamee opened this issue 1 year ago • 9 comments

System Info

I am using a Tesla T4 (16 GB).

Reproduction

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

base_model_id = "mistralai/Mistral-7B-Instruct-v0.1"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=access_token,
)
```

Expected behavior

Hello, I am trying to fine-tune Mistral 7B using QLoRA, but I'm facing this error. Does anyone know how to solve it? `AttributeError: 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'`. These are the versions of the packages I am using:

```
bitsandbytes==0.43.2
transformers==4.34.0
torch==2.3.0
accelerate==0.29.3
```

I am using Python 3.9.
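For what it's worth, the error message itself hints at the mechanism: inside bitsandbytes, the handle to the compiled CUDA library is left as `None` when the native binary cannot be loaded, and any kernel call on it then fails exactly like this. A minimal stand-in (not bitsandbytes code) reproducing the mechanics:

```python
# Stand-in for bitsandbytes' internal native-library handle: when the
# compiled CUDA binary fails to load, the handle is left as None, and any
# attribute lookup on it raises this exact AttributeError.
lib = None  # in bitsandbytes this would be a ctypes handle to the .so file

try:
    lib.cquantize_blockwise_fp16_nf4()
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'
```

So the fixes in this thread (pinning a version with a working binary, or pointing the loader at CUDA) all amount to making that native library load successfully.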

wissamee avatar May 13 '24 11:05 wissamee

same problem

DaDuo-c avatar May 18 '24 16:05 DaDuo-c

I am experiencing the same issue. How did you resolve this problem?

cyj7222 avatar Jun 03 '24 07:06 cyj7222

Oh, it was a version issue. After changing to version 0.43.0, it works fine.

pip install bitsandbytes==0.43.0

cyj7222 avatar Jun 03 '24 08:06 cyj7222

Yes, it is working with version bitsandbytes==0.43.1.

wissamee avatar Jun 05 '24 10:06 wissamee

I also got this error, but installing the above versions of bitsandbytes did not work. I was able to fix it by specifying where I had cuda installed. See my answer on SO if you think this might help you.

MatousAc avatar Jun 07 '24 22:06 MatousAc

I also got this error, but installing the above versions of bitsandbytes did not work. I was able to fix it by specifying where I had cuda installed. See my answer on SO if you think this might help you.

I got a similar issue on the latest main branch (commit 432a4f4d45e9eb6c3e31971d1e0e69a9bc852a21). Yet I found the link in your post is no longer accessible. Could you provide some insights? Thanks. The following is my log. @MatousAc

```
[rank0]:   File "/home/ubuntu/qian/git/vllm/vllm/model_executor/models/llama.py", line 474, in load_weights
[rank0]:     for name, loaded_weight in weights:
[rank0]:   File "/home/ubuntu/qian/git/vllm/vllm/model_executor/model_loader/loader.py", line 893, in _unquantized_generator
[rank0]:     processed_weight, quant_state = quantize_4bit(
[rank0]:                                     ^^^^^^^^^^^^^^
[rank0]:   File "/home/ubuntu/qian/git/bitsandbytes/bitsandbytes/functional.py", line 1218, in quantize_4bit
[rank0]:     lib.cquantize_blockwise_fp16_nf4(
[rank0]:     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'
```

chenqianfzh avatar Aug 24 '24 00:08 chenqianfzh

I also got this error, but installing the above versions of bitsandbytes did not work. I was able to fix it by specifying where I had cuda installed. See my answer on SO if you think this might help you.

I got a similar issue on the latest main branch (commit 432a4f4). Yet I found the link in your post is no longer accessible. Could you provide some insights? Thanks.

Hey! Yeah, @chenqianfzh, apparently the original question author deleted their question, and my answer went with it. Luckily I could still see the deleted answer, so I'll try to paste and format it below:

From SO:

I also got this error, but was able to fix it by specifying where I had CUDA installed. My error prompted me to run `python -m bitsandbytes`. This command returned the steps for locating and setting up an existing CUDA installation (which worked for me), as well as the steps for installing CUDA.

I did all this on Ubuntu.

Locate and Set Up an Existing CUDA Installation

If you have CUDA installed, or don't know whether you have it, run this command:

```
find / -name libcudart.so 2>/dev/null
```

If this returns a path (or multiple paths), copy one of them. Copy only the folders up to and including the lib or lib64 folder; don't include the libcudart.so filename itself. Then execute this command:

```
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<thePathYouJustCopied>
```

To make this solution permanent, open ~/.bashrc (any text editor is fine) and paste the command you just executed on a line right below the other export statements, which should be near the bottom of the file.
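The steps above can be sketched as a runnable sequence. This uses a throwaway directory in place of a real CUDA installation, so it runs anywhere; substitute the path that `find` printed for you:

```shell
# Simulate a CUDA install location; replace /tmp/fake-cuda/lib64 with the
# directory that `find / -name libcudart.so 2>/dev/null` printed for you.
mkdir -p /tmp/fake-cuda/lib64
touch /tmp/fake-cuda/lib64/libcudart.so

# Point the dynamic loader at the directory containing libcudart.so.
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/tmp/fake-cuda/lib64"
echo "$LD_LIBRARY_PATH"

# To persist it, append the same export line (with the real path) to
# ~/.bashrc, e.g.:
#   echo 'export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/real/cuda/lib64"' >> ~/.bashrc
```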

I hope this helps solve your issue as it did mine.

Install CUDA

If you found that you do not have CUDA installed, try this:

Download installation script:

```
wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/install_cuda.sh
```

Note that the steps given by `python -m bitsandbytes` actually provide an outdated path for wget, so use the one I provide or search for the script yourself online.

Execute the script with:

```
bash install_cuda.sh 123 ~/cuda/
```

You can change the version number (123) or the installation location (~/cuda/) as you wish.

MatousAc avatar Aug 24 '24 02:08 MatousAc

@MatousAc Thank you so much for your help!

Based on the info you provided, I figured out what was wrong with my env. It was also because the .so file was not found.

But in my case, it was because I had installed from source using `pip install -e ./`, which does not build the .so file automatically. So the problem was solved after I manually built the CUDA files.
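For anyone hitting the same source-install pitfall, a quick way to confirm it is to check whether the checked-out tree actually contains a compiled binary. This is only a sketch: `has_compiled_binary` is a hypothetical helper, and the glob assumes the usual `libbitsandbytes*.so` naming of the compiled library.

```python
from pathlib import Path


def has_compiled_binary(pkg_dir: str) -> bool:
    """Return True if a compiled bitsandbytes native library (.so) exists
    anywhere under pkg_dir. With a plain `pip install -e .` and no manual
    build step, no such file will be present and imports will leave the
    internal library handle as None."""
    return any(Path(pkg_dir).rglob("libbitsandbytes*.so"))


# Example: point this at your bitsandbytes source checkout, e.g.
#   has_compiled_binary("/home/ubuntu/qian/git/bitsandbytes")
```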

Again, thanks!

chenqianfzh avatar Aug 24 '24 06:08 chenqianfzh

@chenqianfzh Awesome. I'm so glad I could help. Good luck with training your models!

MatousAc avatar Aug 24 '24 13:08 MatousAc

We noticed that there has been no recent activity on this issue. As a result, we will be closing it for now. If you continue to experience this problem or have additional information to provide, please feel free to reopen the issue or create a new one.

Thank you for your understanding.

matthewdouglas avatar Feb 28 '25 15:02 matthewdouglas