Does not work with M1 Mac
Yup, it does not work. Add support ASAP.
Have you tried setting the device to "mps"? I'm trying this rn but ran out of RAM so I'm gonna try again later on a more powerful Mac
I just edited the value in the YAML config from cuda:0 to mps (snippet below) and ran out of RAM on my 16GB MBP. I have not looked into the code yet, but it really seems like you are right.
I will try again later this week (probably Monday or later, as I am not available before then) on a 32GB Mac. I have not yet taken the time to try editing the code, but I am willing to help you on the fork.
Can we communicate through another platform like Discord or Telegram? GitHub issues is not the best way for me…
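For reference, the edit is just the device key in the config; the exact nesting below is a guess based on the example configs and may differ in yours:

```yaml
config:
  process:
    - type: sd_trainer
      # was: device: cuda:0
      device: mps
```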
Message sent!
Right now he is lazy and has not done any work. He is currently at the peak of his two minutes of fame. His fame will fade; hopefully he will return to real life. Hopefully he only gets that fame after he does the work.
It will never work with an M1 Mac; it doesn't have bf16 support, and it doesn't have autocast or autograd support.
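If anyone wants to check what their own build reports, a quick probe along these lines works (standard torch APIs; what passes or fails depends on your PyTorch and macOS versions, so treat it as a diagnostic, not a verdict):

```python
import torch

print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

if torch.backends.mps.is_available():
    # bf16 tensors on MPS: rejected on older PyTorch/macOS combos
    try:
        torch.ones(2, 2, dtype=torch.bfloat16, device="mps")
        print("bf16 on mps: ok")
    except Exception as e:
        print("bf16 on mps failed:", e)
    # autocast for the mps device type: only accepted in newer PyTorch
    try:
        with torch.autocast(device_type="mps", dtype=torch.float16):
            _ = torch.ones(2, 2, device="mps") @ torch.ones(2, 2, device="mps")
        print("autocast on mps: ok")
    except Exception as e:
        print("autocast on mps failed:", e)
```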
You need to be less rude to project maintainers; Ostris doesn't deserve that.
@bghira He lives in the USA; I live in China.
@bghira Are you Chinese?
sorry, no one can understand you.
@bghira You are racist toward Chinese people; you discriminate against all of us. I first thought you were also Chinese, but no.
Also, do not say "no one"; many of the top people in AI are Chinese! Look at all our research papers, too.
Nothing to do with race. It is that you are using Chinese in an English project space, so no one can understand you; obviously you know what I meant. You are still being rude. John Cena could come here and speak Chinese and receive the same response, and it would not change how rude you are.
Wild, they deleted their whole GitHub.
Can we stay on topic, please?
When you tell people to shush because you personally don't like the noise, it amplifies the noise and does the opposite: now I am here explaining to you that asking us to stay on topic emitted yet another email to everyone watching the project. Instead, unsubscribe from the thread.
Guys, what's the progress here?
I wanna try to run it on my M3, but from what I see it won't work either, right?
@martintomov @KodeurKubik were you guys able to get it to work?
There are some functions that are not yet compatible with PyTorch on MPS, so we will still be waiting for those before continuing.
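One partial workaround that exists upstream: PyTorch can fall back to CPU for ops that are missing an MPS kernel if you set PYTORCH_ENABLE_MPS_FALLBACK=1. It is slow, and I have not verified it gets this repo all the way through training, but it avoids the hard NotImplementedError:

```python
import os

# Must be set before torch initializes the MPS backend; ops without an
# MPS kernel then run on the CPU instead of raising NotImplementedError.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402
```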
It would really be awesome if we could locally fine-tune FLUX.1-dev on Mac. I have an M3 Max with 128GB of unified memory and would really appreciate the effort.
Same here, M3 Max with 96GB RAM. What would be even more awesome is if there were a way for it to make use of the Neural Engine cores!
Any news on this?
anything on this?
Hi, did you ever figure this out further? I'm trying with an M3 with 96GB, but I kept running out of memory as well; wondering what I'm missing.
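One thing I still want to try for the OOM is relaxing the MPS allocator's cap; by default PyTorch refuses allocations past a fraction of unified memory, and its own OOM message suggests disabling the limit. No guarantee it helps here, and it can push the machine into heavy swapping:

```python
import os

# 0.0 disables the upper bound on MPS allocations entirely; PyTorch's
# OOM message suggests this, but it can cause severe system-wide swapping.
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"

import torch  # noqa: E402
```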
Nope. Running this on Apple Silicon will never work.
It doesn't have bf16 support, and it doesn't have autocast or autograd support.
Use an NVIDIA GPU from RunPod if you don't have an NVIDIA-powered PC.
Dang. Thanks so much for the response!