
[bug]: Flux models don't work on a Mac M2 device. Gets an error like this -> ImportError: The bnb modules are not available. Please install bitsandbytes if available on your platform.

Open moheshmohan opened this issue 1 year ago • 13 comments

Is there an existing issue for this problem?

  • [X] I have searched the existing issues

Operating system

macOS

GPU vendor

Apple Silicon (MPS)

GPU model

No response

GPU VRAM

No response

Version number

5.0.0

Browser

chrome 129.0.6668.60

Python dependencies

No response

What happened

When I run Flux models I get the error below:

ImportError: The bnb modules are not available. Please install bitsandbytes if available on your platform.

What you expected to happen

I expected Flux models to run. I have been using Flux on the same device with other software such as DiffusionBee.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

moheshmohan avatar Sep 27 '24 04:09 moheshmohan

I installed bitsandbytes manually (pip3 install bitsandbytes in the virtual env). Then there is the error TypeError: BFloat16 is not supported on MPS, which led me to https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1020 : the port for the MPS architecture is still a WIP. From my understanding we have to wait for this to be completed, unless another solution emerges with different quantizations or compression techniques. I tried to import the Flux fp8 checkpoints from DrawThings but they are not compatible.
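A quick way to reproduce the underlying limitation from inside the venv (just a probe I'm sketching here, not an official test):

# Probe bfloat16 support on the MPS backend; on older torch builds this
# raises "TypeError: BFloat16 is not supported on MPS".
python3 -c "import torch; print(torch.ones(1, dtype=torch.bfloat16, device='mps'))"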

colinux avatar Sep 27 '24 07:09 colinux

As far as I know, you will need to update torch and torchvision to a more recent version to get bfloat16 support on MPS. You can try torch 2.3.1 or a recent nightly build. However, I don't know if this will help you get quantized Flux working, as MPS does not support fp8 at all.
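For example, from inside the InvokeAI venv (the pins below are just illustrative; torchvision 0.18.1 is the release that pairs with torch 2.3.1):

# Upgrade to a torch build with bfloat16-on-MPS support, pinning torchvision to match.
pip install "torch==2.3.1" "torchvision==0.18.1"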

Adreitz avatar Sep 28 '24 17:09 Adreitz

Yep, I'm afraid you need enough memory to run the fp16 version of Flux at the moment, as bitsandbytes doesn't support macOS.

And you'll need to upgrade torch in your InvokeAI venv.

You also need a couple of code changes to the InvokeAI code. Which changes depends on the version of torch you upgrade to: with 2.3.1 you need to change a function call that is, in theory, fixed in the PyTorch nightlies now, though I haven't tested that yet.

Vargol avatar Sep 29 '24 11:09 Vargol

Following this. I face the same problem: ImportError: The bnb modules are not available. Please install bitsandbytes if available on your platform.

vicento avatar Sep 30 '24 16:09 vicento

Are you still trying to use a quantised model? I haven't got bitsandbytes installed, and the non-quantised Flux [schnell] model works without issue (apart from being really slow, as I've not quite got enough memory). None of the quantised formats used for Flux currently work on a Mac, even once the code changes have been made.

(InvokeAI) M3iMac:InvokeAI $ pip show bitsandbytes
WARNING: Package(s) not found: bitsandbytes
(InvokeAI) M3iMac:InvokeAI $ cat ~/bin/run_invoke.sh 
export INVOKEAI_ROOT=/Users/xxxx/invokeai
export PYTORCH_ENABLE_MPS_FALLBACK=1
cd /Volumes/SSD2TB/AI/InvokeAI 
. bin/activate
invokeai-web

(InvokeAI) M3iMac:InvokeAI $ ~/bin/run_invoke.sh 
...
100%|████████████████████████████████| 4/4 [06:18<00:00, 94.59s/it]
[2024-09-30 18:20:58,841]::[InvokeAI]::INFO --> Graph stats: a751e29a-1cd7-4d97-afc3-211f2cecb821
                          Node   Calls   Seconds  VRAM Used
             flux_model_loader       1    0.008s     0.000G
             flux_text_encoder       1  199.630s     0.000G
                  flux_denoise       1  421.639s     0.000G
                 core_metadata       1    0.014s     0.000G
               flux_vae_decode       1    7.373s     0.000G
TOTAL GRAPH EXECUTION TIME: 628.665s
TOTAL GRAPH WALL TIME: 628.688s
RAM used by InvokeAI process: 0.81G (+0.271G)
RAM used to load models: 40.50G
RAM cache statistics:
   Model cache hits: 6
   Model cache misses: 6
   Models cached: 1
   Models cleared from cache: 1
   Cache high water mark: 22.15/11.00G

[2024-09-30 18:20:58,888]::[uvicorn.acce

Vargol avatar Sep 30 '24 17:09 Vargol

GGUF works on PyTorch 2.4.1 (nightly PyTorch breaks GGUF)... at least in Comfy, and it works with all other nodes.

Also, there are now MLX nodes for Comfy that load 4-bit versions of Flux... but those have other issues (no compatibility with most things yet).
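If you go the GGUF route, pinning torch rather than tracking nightlies seems the safer move, e.g. (pins are illustrative; torchvision 0.19.1 is the release that pairs with torch 2.4.1):

# Pin torch instead of a nightly, since nightlies currently break GGUF.
pip install "torch==2.4.1" "torchvision==0.19.1"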

cchance27 avatar Oct 11 '24 20:10 cchance27

I tried v5.5 on my Mac and have similar problems. While some comments indicate that there may be some FLUX models that work, I have never succeeded in finding one. DrawThings works with FLUX models on the Mac, and I tried using one of their models, moving it to Invoke, but it too did not work.

DouglasDahlia779 avatar Jan 04 '25 13:01 DouglasDahlia779

It should be possible to route around bitsandbytes. I did this for the Kandinsky 3 release to get it working on MPS. I looked at this a little: it's mostly in flux.py and module.py, but I saw some size mismatches in my quick attempts to get it working. ComfyUI itself works with FLUX, so one would think Invoke should be able to as well.

RyPoints avatar Jan 07 '25 06:01 RyPoints

Flux works out of the box for me on macOS 15, M3 CPU. Make sure you're not trying to use a bitsandbytes-based model: the T5 should be the full version (unless a GGUF-based T5 is working now; I've not tried for a while) and the Flux model should be the full version or a GGUF version.

If you downloaded / installed an NF4 or F8 model, try deleting it from the model manager, or manually from the model cache if it's not in the manager.

Here's the Linear UI setup I use, in pictures.

[three screenshots of the Linear UI model settings]

Vargol avatar Jan 07 '25 09:01 Vargol

t5_bnb_int8_quantized_encoder gets installed as part of the FLUX starter pack, which is very easy for a newbie to install from the web UI.

So if you're a newbie (like me) you might install this starter pack then wonder why it isn't working on your M-series processor. You then install FLUX Schnell (bfloat16) and still get the bnb error.

The solution seems to be to remove everything installed by the starter pack, especially t5_bnb_int8_quantized_encoder, before installing the non-quantized FLUX Schnell.
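If you want to double-check that nothing quantized is left behind after the removal, something like this should surface stragglers (the models path is an assumption; substitute your own INVOKEAI_ROOT):

# Hypothetical check for leftover quantized model folders under the InvokeAI root.
find ~/invokeai/models \( -iname '*bnb*' -o -iname '*int8*' -o -iname '*nf4*' \) -print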

pidg avatar Feb 21 '25 23:02 pidg

Your solution worked, pidg! This is the first time Flux has worked on my Mac. It's dreadfully slow, enough to make it really unusable, but it did work. DrawThings works with Flux on the Mac and creates LoRAs you can use in InvokeAI (with other SDXL models)...

DouglasDahlia779 avatar May 26 '25 00:05 DouglasDahlia779

Rephrased: Remove the bnb encoder, remove all models marked as quantized, and keep away from them.

fuzzy76 avatar Aug 05 '25 11:08 fuzzy76

I'm getting similar issues as well.

I've had better luck with https://github.com/divamgupta/diffusionbee-stable-diffusion-ui, which was designed with Macs in mind.

Not as feature-rich though, and unfortunately there haven't been any updates since August of last year. But it gets the job done.

Judging by this message on the Discord, efforts are probably being focused on a newer tool:

We are building something amazing. A local tool to easily create and edit high-quality images locally. ( similar to midjourney ).  We are looking for users for early access and feedback. If interested DM me and add a reaction here. In general we are also looking to talk to users, understand their pain points, so that we can build better tools for them.

alexjyong avatar Oct 30 '25 16:10 alexjyong