
[bug]: bnb not supported on Apple Silicon; don't download the bnb t5xxl even for quantized FLUX

Open Phatcat opened this issue 8 months ago • 10 comments

Is there an existing issue for this problem?

  • [x] I have searched the existing issues

Operating system

macOS

GPU vendor

Apple Silicon (MPS)

GPU model

Mac M4 base (Mac Mini)

GPU VRAM

16GB

Version number

5.9.1

Browser

InvokeAI Launcher

Python dependencies

No response

What happened

When trying to run the quantized version of FLUX on Apple Silicon, it fails and reports that bitsandbytes (bnb) is missing.

What you expected to happen

I expected it to generate an image.

How to reproduce the problem

Be on macOS on Apple Silicon, download a quantized version of FLUX, and try to generate an image using the default setup.

Additional context

No response

Discord username

No response

Phatcat avatar Apr 05 '25 22:04 Phatcat

Unfortunately bnb models are not supported on macOS.

We should update the model install logic to not let you install these.

In the future it's possible that they will be supported.
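For illustration, a minimal sketch of what that install-time guard might look like. The `format` attribute and its values here are assumptions for this sketch, not necessarily InvokeAI's actual model-config schema:

```python
import sys


def reject_unsupported_quantization(model_config) -> None:
    """Block installs of bitsandbytes-quantized models on macOS,
    where bnb has no backend.

    Hypothetical sketch: the 'format' attribute and its values are
    assumed, not InvokeAI's real schema.
    """
    if sys.platform == "darwin" and getattr(model_config, "format", None) in (
        "bnb_quantized_nf4b",
        "bnb_quantized_int8b",
    ):
        raise ValueError(
            "bitsandbytes-quantized models are not supported on macOS; "
            "choose the unquantized or GGUF variant instead."
        )
```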

psychedelicious avatar Apr 05 '25 23:04 psychedelicious

Seems like Apple Silicon support is perpetually 'on the way'.

Just have it download the full unquantized t5xxl regardless of which FLUX pack is chosen; that one works on Apple Silicon. You can always revert once bnb actually runs on Apple Silicon.

Phatcat avatar Apr 06 '25 10:04 Phatcat

We are dependent on the bitsandbytes project for macOS support. I suggest adding a thumbs-up to this issue to voice your support for MPS.


Supporting platform-specific starter models shouldn't be too hard to do. We'd be happy to review a PR that adds this functionality.

The starter bundles are defined in this file and provided to the frontend via this route.
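As a rough sketch of that platform-aware filtering (the real starter-model definitions in that file carry a richer schema; `requires_bnb` is an invented field for illustration):

```python
import sys

# Invented minimal schema for illustration; the real starter-model
# definitions carry much more metadata.
STARTER_MODELS = [
    {"name": "FLUX Schnell (bnb quantized)", "requires_bnb": True},
    {"name": "FLUX Schnell (full)", "requires_bnb": False},
]


def starter_models_for_platform() -> list[dict]:
    """Filter bnb-dependent entries out of the starter-model route
    response on macOS, where bitsandbytes is unavailable."""
    if sys.platform == "darwin":
        return [m for m in STARTER_MODELS if not m["requires_bnb"]]
    return STARTER_MODELS
```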

psychedelicious avatar Apr 07 '25 08:04 psychedelicious

https://github.com/bitsandbytes-foundation/bitsandbytes/discussions/1340

We'll have to be patient. On macOS, Flux requires 32GB of RAM and an M2 Max or newer.
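As a back-of-envelope check on that figure (parameter counts are the commonly cited ones, so treat this as an estimate): the FLUX.1 transformer is about 12B parameters and the T5-XXL encoder about 4.7B, so the 16-bit weights alone approach 32GB before activations, CLIP, or the VAE:

```python
# Weight sizes at 16-bit precision (2 bytes per parameter).
flux_transformer_gib = 12e9 * 2 / 2**30   # ~22.4 GiB
t5xxl_gib = 4.7e9 * 2 / 2**30             # ~8.8 GiB
print(f"{flux_transformer_gib + t5xxl_gib:.1f} GiB for weights alone")  # ~31.1 GiB
```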

GitKernelDesign avatar Apr 08 '25 06:04 GitKernelDesign

Maybe for the full version, but I have been running FLUX GGUFs in Invoke and Comfy with no issue (Mac Mini M4, 16GB), with the 'full size' InvokeAI-provided T5 and a Q6 of FLUX. Is it fast? No... or, well, actually:

On another slightly related note, I just got PyTorch 2.8.0 nightly running (in Comfy), which is much more optimized for Apple. I am, no kidding, diffusing images more than twice as fast, both FLUX and SDXL: SDXL went from 300+ seconds to 150 seconds, and even better, FLUX Schnell went from 1500+ seconds to around 300 seconds, with otherwise the same settings and scheduler.
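(For anyone trying the same, a quick sanity check that an installed nightly actually exposes the MPS backend, using PyTorch's standard API:)

```python
import torch

print(torch.__version__)                  # e.g. "2.8.0.dev..." for a nightly build
print(torch.backends.mps.is_built())      # compiled with MPS support?
print(torch.backends.mps.is_available())  # MPS device usable on this machine?
```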

Phatcat avatar Apr 09 '25 00:04 Phatcat

> Maybe for the full version, but I have been running FLUX GGUFs in Invoke and Comfy with no issue (Mac Mini M4, 16GB), with the 'full size' InvokeAI-provided T5 and a Q6 of FLUX. [...]

Yes, you can generate images, but it takes too long to be usable; the memory bandwidth is too low. I'm on Mac, but I work on a personal Nvidia server: SDXL = 20 sec, FLUX.1 Dev = 60 sec (30 steps, 1024px). On my Mac: SDXL = 50 sec, FLUX = 110 sec (20 steps, 1024px).

GitKernelDesign avatar Apr 10 '25 17:04 GitKernelDesign

@Phatcat Mac + Nvidia server PC, it's perfect ^_^ (and less expensive, for AI images only)

GitKernelDesign avatar Apr 10 '25 17:04 GitKernelDesign

This is not resolved

psychedelicious avatar Apr 28 '25 04:04 psychedelicious

Will macOS support get better when we use PyTorch 2.8?

mokko avatar Sep 14 '25 06:09 mokko

> Will macOS support get better when we use PyTorch 2.8?

Future Apple Silicon processors may enable bnb support. CUDA is not available on macOS, but the architecture of the M5 brings new hope for easier compatibility. PyTorch 2.8 is already compatible. But don't expect improved support for Apple Silicon M1 to M4.

I was using a Mac, then I switched to Linux. I use my tower for rendering; with InvokeAI in server mode, it's very simple. But the folks at InvokeAI are optimizing the platform really well, so there may be some surprises in 2026 ^_^

GitKernelDesign avatar Sep 14 '25 08:09 GitKernelDesign