
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`

Open hrmgxuni opened this issue 4 months ago • 15 comments

System Info

# FROM python:3.9

# WORKDIR /code
# COPY ./requirements.txt /code/requirements.txt
# RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# COPY . .
# # RUN pip install --no-cache-dir --upgrade -r /requirements.txt

# # uvicorn reward_modeling:app --host 0.0.0.0 --port 6006 --reload
# CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "6006"]
# # CMD ["./my-test-shell.sh"]

# Use the official Python 3.10 image
FROM python:3.10

# Set the working directory to /code
WORKDIR /code

# Copy the current directory contents into the container at /code
COPY ./requirements.txt /code/requirements.txt

RUN pip install -i https://pypi.org/simple/ bitsandbytes --upgrade
# Install the dependencies from requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt


# Set up a new user named "user" with user ID 1000
RUN useradd -m -u 1000 user
# Switch to the "user" user
USER user
# Set home to the user's home directory
ENV HOME=/home/user \
	PATH=/home/user/.local/bin:$PATH

# Set the working directory to the user's home directory
WORKDIR $HOME/app

# Copy the current directory contents into the container at $HOME/app setting the owner to the user
COPY --chown=user . $HOME/app

# CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
CMD ["uvicorn", "api-server-0226:app", "--host", "0.0.0.0", "--port", "7860"]
# uvicorn api-server-0226:app --host 0.0.0.0 --port 7860

Reproduction

ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`


When I use AutoTrain on Hugging Face and select the options shown in the screenshot, I encounter this error. I noticed that most models trigger it. Could it be related to the fact that I'm using a free CPU? ([log.txt](log.txt))

Expected behavior

I hope the pros can help me figure out how to fix this error.

hrmgxuni avatar Feb 29 '24 14:02 hrmgxuni

I had this on my local Windows system; it was due to having the CPU-only build of PyTorch, while a dependency of Accelerate requires the GPU build.

```python
import torch

print(torch.__version__)
```

This should print something like `2.2.1+cu118`; if it says `+cpu`, you have the CPU-only build.
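To automate that check, here's a small hypothetical helper (not part of torch's API; the sample strings mirror typical wheel names) that classifies the version string torch reports:

```python
def is_cpu_build(version: str) -> bool:
    """Return True if a torch version string points at a CPU-only wheel.

    CUDA wheels carry a local version suffix like "+cu118"; CPU-only
    wheels carry "+cpu" or no CUDA suffix at all.
    """
    _, _, local = version.partition("+")
    return not local.startswith("cu")

# Usage, e.g. with is_cpu_build(torch.__version__):
print(is_cpu_build("2.2.1+cu118"))  # False: CUDA wheel
print(is_cpu_build("2.2.1+cpu"))    # True: CPU-only wheel
```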

DaveChini avatar Feb 29 '24 20:02 DaveChini

@DaveChini I'm having the same issue, but when I print it, it already shows a `+cu` suffix: `2.1.2+cu121`. I still get the same error. Any help?


Kushalamummigatti avatar Mar 04 '24 09:03 Kushalamummigatti

Python 3.10, transformers 4.38.2, bitsandbytes 0.42.0, accelerate 0.27.2, torch 2.0.1+cu117

`torch.cuda.is_available()` returns True, `nvidia-smi` works, and `import accelerate` and `import bitsandbytes` both succeed, but 8-bit quantization still fails with the same error.
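Since most replies in this thread paste a list of package versions, here's a minimal sketch (standard library only; `report_versions` is a hypothetical helper, not an API from any of these packages) that collects them in one go:

```python
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages):
    """Map package name -> installed version; None means it is missing."""
    out = {}
    for name in packages:
        try:
            out[name] = version(name)
        except PackageNotFoundError:
            out[name] = None
    return out

print(report_versions(["transformers", "bitsandbytes", "accelerate", "torch"]))
```

Pasting that dict into a report makes it easier to spot a missing or stale package at a glance.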

cieske avatar Mar 06 '24 01:03 cieske

Did you manage to solve the issue?

eneko98 avatar Mar 11 '24 16:03 eneko98

@DaveChini I'm having the same issue, but when I print it, it already shows a `+cu` suffix: `2.1.2+cu121`. I still get the same error. Any help?

I have the same error.

nanxue2023 avatar Mar 12 '24 05:03 nanxue2023

Had the same error following this tutorial: https://huggingface.co/docs/peft/main/en/developer_guides/quantization on a Kaggle P100 GPU.

torch.cuda.is_available() returns True as well.

seyf97 avatar Mar 12 '24 15:03 seyf97

@DaveChini I'm having the same issue, but when I print it, it already shows a `+cu` suffix: `2.1.2+cu121`. I still get the same error. Any help?

I have the same error.

When I ran my code in a Jupyter notebook, this error appeared. When I ran the same code from my local VS Code, the error disappeared. Hope that helps; maybe it doesn't work well in interactive notebooks.

nanxue2023 avatar Mar 13 '24 05:03 nanxue2023

Hey @younesbelkada,

I'm not sure what to make of this issue. Based on the error log, it seems to me that, if anything, it's more related to Transformers? Wdyt?

Titus-von-Koeller avatar Mar 15 '24 19:03 Titus-von-Koeller

I had this on my local Windows system; it was due to having the CPU-only build of PyTorch, while a dependency of Accelerate requires the GPU build.

```python
import torch

print(torch.__version__)
```

This should print something like `2.2.1+cu118`; if it says `+cpu`, you have the CPU-only build.

This worked for me! Thank you :)

nerner94 avatar Mar 18 '24 13:03 nerner94

fwiw, downgrading to a lower version of transformers helped resolve the issue for me (4.38.2 to 4.31.0).

jacqueline-he avatar Mar 18 '24 17:03 jacqueline-he

Getting the same on my side, on an on-premise cluster with an RTX GPU.

didlawowo avatar Mar 19 '24 14:03 didlawowo

fwiw, downgrading to a lower version of transformers helped resolve the issue for me (4.38.2 to 4.31.0).

This worked, thanks!!

ashwin-js avatar Mar 22 '24 09:03 ashwin-js


@DaveChini I'm having the same issue, but when I print it, it already shows a `+cu` suffix: `2.1.2+cu121`. I still get the same error. Any help?

I have the same error.

When I ran my code in a Jupyter notebook, this error appeared. When I ran the same code from my local VS Code, the error disappeared. Hope that helps; maybe it doesn't work well in interactive notebooks.

Yeah, me too. Same reason.

Huyueeer avatar Apr 14 '24 02:04 Huyueeer

If you just installed the libraries (e.g. `pip install accelerate peft bitsandbytes transformers trl`) while Jupyter was already running, try restarting the kernel.
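To confirm, after the restart, that the kernel actually sees the newly installed packages, here's a minimal standard-library sketch (`missing_packages` is a hypothetical helper; it only checks importability, not whether the CUDA setup works):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of names the current interpreter cannot import."""
    return [n for n in names if importlib.util.find_spec(n) is None]

missing = missing_packages(["accelerate", "bitsandbytes", "transformers"])
if missing:
    print("Missing in this kernel:", ", ".join(missing))
else:
    print("All packages visible; the error is likely elsewhere (e.g. a CPU-only torch build).")
```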

ibrahimberb avatar Apr 26 '24 10:04 ibrahimberb

Cloning the repo in a Colab notebook with a TPU enabled and running it through ngrok worked for me. It still doesn't work on localhost, but it works on the link ngrok provides. I think it's a CPU problem.

Adityaa-Sharma avatar Apr 27 '24 17:04 Adityaa-Sharma