Please officially clarify whether the project intends to support Mac or not
I hope I don't come off the wrong way; first, I want to emphasize that I am not complaining. I appreciate all the work done here and am enjoying it on my Windows NVIDIA machine. But as someone who also has a Mac and used to recommend Forge to many Mac users, unfortunately I can't do that anymore, since Flux doesn't work on Forge on Macs.
What I would appreciate is if the devs could clarify whether or not this project is intended for Mac usage. An official statement.
For the record, I already know how to get this to work on Macs and have it running by monkey patching (this PR worked: https://github.com/lllyasviel/stable-diffusion-webui-forge/pull/1162#issuecomment-2294612542). But even if these monkey patches work, it means little, since anyone who uses these patched-up versions will not be running the most up-to-date Forge; they'll be stuck on the old version and unable to take advantage of any future bug fixes, etc.
Currently, it is a fact that Forge does not work on Macs, and there are a handful of GitHub issues where people report that it's not working. Basically, when you try to use the fp8 version you get this error (which was supposed to be fixed by the rejected PR above):

```
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
```
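For anyone who wants to check whether their particular PyTorch build is affected before digging further, a small self-check along these lines can probe fp8-on-MPS support. This is a hypothetical helper, not part of Forge; it simply attempts the same dtype conversion that triggers the error above:

```python
def mps_supports_fp8() -> bool:
    """Probe whether this PyTorch build can place float8_e4m3fn tensors on MPS.

    Returns False when torch is missing, MPS is unavailable, the fp8 dtype
    doesn't exist in this build, or the conversion raises (the TypeError above).
    """
    try:
        import torch
    except ImportError:
        return False
    mps = getattr(torch.backends, "mps", None)
    if mps is None or not torch.backends.mps.is_available():
        return False
    if not hasattr(torch, "float8_e4m3fn"):
        # float8 dtypes only exist in newer PyTorch releases
        return False
    try:
        torch.zeros(1, dtype=torch.float8_e4m3fn).to("mps")
        return True
    except (TypeError, RuntimeError):
        return False
```

Running this inside the Forge venv should tell you immediately whether the installed build can handle fp8 checkpoints on MPS.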
Especially since the PR mentioned above--which was working on Macs--was deliberately closed and another PR was accepted instead, I think this is a valid question to ask at this point.
It is totally OK if the project does NOT want to support Macs officially, and I still respect everyone contributing to this project. I am using it just fine on my NVIDIA machine.
But if we could just know what the official position is, so I can either decide to give up on Forge on Macs and move on, or contribute to help with Mac support in case you DO want to support Macs, that would be very helpful. Thank you for all your work.
This project wasted my time today, and the developers should really be more up front and let people know that some hardware isn't going to be supported, so people don't have to go on wild goose chases through software versions trying to get applications to work. And for those out there wondering whether they should try Stability AI and Stable Diffusion: it's going to be less of a hassle to just buy a subscription for generating AI images than to attempt to get this convoluted software and hardware mess to work.
So, just to clarify your issue: it is more about using Flux in Forge on macOS, rather than using Forge itself? I'd assume that macOS is supported to a high degree, although not completely, just like Automatic1111. Most issues I see relating to macOS come from attempting to use Flux.
My understanding is that the current releases of PyTorch on Mac do not support fp8, which is the source of the above error. Apparently newer nightlies have added support. The PR linked above changed a calculation necessary for Flux to run on the CPU; the purpose was to avoid accelerated float64, which is not available on some backends (MPS, XPU). The PR used instead should have achieved the same result in a simpler way, by using float32 instead. It's equivalent to the PR you (cocktailpeanut) submitted earlier. It can't fix fp8 if the version of PyTorch you're using doesn't include fp8 support. It also can't fix nf4 checkpoints, as they require the bitsandbytes library, which again, AFAIK, is not available on Mac.
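The float32 fallback that the accepted PR took can be sketched as a simple dtype-selection rule. The helper name and set of backends here are mine for illustration, not Forge's actual code:

```python
# Sketch of the accepted PR's idea: compute in float64 where the backend
# supports it, and fall back to float32 on backends that don't
# (MPS on Apple Silicon, some Intel XPU builds).
NO_FLOAT64_BACKENDS = {"mps", "xpu"}

def pick_compute_dtype(device_type: str) -> str:
    """Return the widest float dtype safe to use on the given device type."""
    return "float32" if device_type in NO_FLOAT64_BACKENDS else "float64"
```

For the positional-embedding math in Flux, float32 precision is generally sufficient, which is why this simpler route works without a CPU round-trip.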
comments on #1264 may be useful
(There's nothing official about any of my comments BTW; I don't speak for the project.)
@DenOfEquity
> My understanding is that the current releases of Pytorch on Mac do not support fp8 - which is the source of the above error. Apparently newer nightlies have added support.
Wait, does this mean a potential solution might be to just change the install method to install the PyTorch nightly instead, with no code changes? This sounds like a simple fix if that's really the case.
> It also can't fix nf4 checkpoints as they require the BitsandBytes library, which again AFAIK is not available on Mac.
Yeah, I'm not so worried about nf4, just interested in fp8.
> So, just to clarify your issue, it is more about using Flux in Forge on MacOS, rather than using Forge itself? I'd assume that MacOS is supported to a high degree, although not completely, just like Automatic1111. Most issues I see relating to MacOS is when attempting to use Flux.
Yes, but honestly, everyone wants to use Flux at this point (I'm not even talking about SD3.5 here) instead of SDXL or older models. The only reason I'm posting this is that proper Mac + Flux support doesn't seem like a terribly difficult task, and I just want to know why it's not done and why there is no official statement about it.
I would even take "We are not yet sure" as an answer. Just ANY kind of clarification would be appreciated. I am aware this repo is an experimental project, so I'm not asking for much. Just some clarification, so those on Macs can move on and find other projects, or contribute to other projects, if the intent is to officially not support Mac.
> This project wasted my time today and the developers should really more up front and let people know that some hardware isn't going to be supported so they don't have to play wild goose software version chases trying to get applications to work. And for those out there potentially wondering if they should try StablityAI and StableDiffusion -- it's going to be less of a hassle to just buy a subscription for generating AI images than attempt to get this convoluted software and hardware mess to work.
It's free and we should all be thankful. If you can't use it (and not to sound snarky), buy a PC. Lots of people don't own powerful GPUs either and seem to make do. But insulting the single developer who dedicates his time to this community is not a good way to go about things. Also, if I can figure things out, so can you.
@cocktailpeanut I would say that the general direction of this project would be to fix high-priority issues and get closer to maximum speed and memory optimization. Knowing this, full Mac support is most likely on the bottom of the list and would be considered once Forge is at its "peak." To me, the answer is "we are not yet sure," and for now, the best option is to try to find a fix someone else commented or hope someone adds a commit or a fork that will bring full support.
I want to use Flux on a Mac. I couldn't even get anything but black/blank images to be generated with Forge.
I couldn't find any docs on why it's failing, or how to get Forge working on a Mac.
I finally got Flux working after weeks of experimenting with dead-end solutions all over the web, as well as after obtaining a new laptop with a better video card (RTX 3060, 6 GB). I ended up asking ChatGPT the best way to create AI-generated images, and it set me up with Forge/InvokeAI; however, it didn't work out of the box. I had to find a checkpoint that wouldn't cause my card to error out, and so far that's only been the flux1-schnell-q2_k.gguf and v1-5-pruned-emaonly.safetensors checkpoints. All in all, AI image generation is incredibly complicated, and it has been a major hassle trying to generate my own AI images, but I found a solution that works. If you've got a newer video card with 6+ GB of VRAM, asking ChatGPT for instructions on how to install Forge/InvokeAI using the aforementioned checkpoints might be a possible solution for you too. It may also be easier to just buy an AI image generation subscription online than to set it up yourself. Good luck.
> This project wasted my time today and the developers should really more up front and let people know that some hardware isn't going to be supported so they don't have to play wild goose software version chases trying to get applications to work. And for those out there potentially wondering if they should try StablityAI and StableDiffusion -- it's going to be less of a hassle to just buy a subscription for generating AI images than attempt to get this convoluted software and hardware mess to work.

> It's free and we should all be thankful. If you can't use it (and not to sound snarky,) buy a PC. Lots of people don't own powerful GPUs either and seem to make do. But insulting the single developer who dedicates his time to this community is not a good way to go about things. Also, if I can figure out things, so can you as well.
I have a 3090, and my Mac still has a GPU with more RAM than that.
I'm no expert, but you could try it with the PyTorch nightly build to see if it improves things. Sometimes the PyTorch version is set in the Git repo.
Update the webui-user.sh file as below, and it should create a new venv-torch-nightly folder you can test. If it doesn't work, just revert the webui-user.sh file and delete the venv-torch-nightly folder.
```bash
#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################
# Install directory without trailing slash
#install_dir="/home/$(whoami)"
# Name of the subdirectory
#clone_dir="stable-diffusion-webui"
# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
#export COMMANDLINE_ARGS=""
# python3 executable
#python_cmd="python3"
# git executable
#export GIT="git"
# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
#venv_dir="venv"
venv_dir="venv-torch-nightly"
# script to launch to start the app
#export LAUNCH_SCRIPT="launch.py"
# install command for torch
#export TORCH_COMMAND="pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113"
export TORCH_COMMAND="pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html"
# Requirements file to use for stable-diffusion-webui
#export REQS_FILE="requirements_versions.txt"
# Fixed git repos
#export K_DIFFUSION_PACKAGE=""
#export GFPGAN_PACKAGE=""
# Fixed git commits
#export STABLE_DIFFUSION_COMMIT_HASH=""
#export CODEFORMER_COMMIT_HASH=""
#export BLIP_COMMIT_HASH=""
# Uncomment to enable accelerated launch
#export ACCELERATE="True"
# Uncomment to disable TCMalloc
#export NO_TCMALLOC="True"
###########################################
```
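After the new venv is built, a quick sanity check can confirm which build you actually got and whether MPS is visible. This is just a sketch (names are mine); it returns None rather than crashing if torch isn't importable:

```python
def torch_build_info():
    """Return (torch version string, MPS available) or None if torch isn't installed."""
    try:
        import torch
    except ImportError:
        return None
    mps = getattr(torch.backends, "mps", None)
    available = bool(mps and torch.backends.mps.is_available())
    return torch.__version__, available
```

If the version string doesn't contain a `dev` suffix after switching venvs, the nightly wheel probably wasn't the one that got installed.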
> I'm no expert, but you could try it with PyTorch nightly build to see if it improves things. Sometimes the PyTorch version set in the Git repo.
I've had nothing but trouble trying to install and run the nightly builds. I would get Python errors, and after working through all that mess and getting things "running", I'd get corrupted images. I gave up on the nightly builds after too many hours of frustration. Unless someone has a specific answer on how to get the nightly builds working on macOS, I advise people not to waste their time on suggestions like "try this".
If someone has a proven method for running the PyTorch nightly build, please speak up!
What's the current status? Still no support for Apple's M-series chips?
Same error: `TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.`
Not sure if it ever will; the only one that works with Float8_e4m3fn on MPS is ComfyUI. lllyasviel seems busy with FramePack, and I'm not sure any of the others making commits care about macOS / MPS.
Well, InvokeAI works; it and ComfyUI can run https://civitai.com/models/699688/8-steps-creart-hyper-flux-dev on MPS, for example, while Forge cannot.