
Does not work with m1 mac

Open ghost opened this issue 1 year ago • 25 comments

Yup, it does not work. Add support ASAP.

ghost avatar Aug 23 '24 17:08 ghost

Have you tried setting the device to "mps"? I'm trying this right now but ran out of RAM, so I'm going to try again later on a more powerful Mac.

KodeurKubik avatar Aug 24 '24 10:08 KodeurKubik

I just edited the value in the YAML config from cuda:0 to mps and ran out of RAM on my 16GB MBP. I haven't looked into the code yet, but it really seems like you are right (and I believe so).

I will try later this week (probably Monday or later, as I am not available before then) on a 32GB Mac. I haven't really taken the time to try editing the code yet, but I am willing to help you on the fork.

Can we communicate through another platform like Discord, Telegram, or something else? GitHub issues are not the best way for me…

KodeurKubik avatar Aug 24 '24 11:08 KodeurKubik
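For anyone else attempting the same swap: the change described above is a one-line edit to the `device:` key in the YAML config, and the same decision can be made programmatically. A minimal sketch, assuming a standard PyTorch install; `pick_device` is a hypothetical helper, not part of ai-toolkit, and it degrades gracefully when torch or a backend is missing:

```python
def pick_device():
    """Pick the best available torch device string for a config.

    Mirrors a hand-edit of the YAML `device:` key (e.g. cuda:0 -> mps),
    preferring CUDA, then Apple's MPS backend, then CPU.
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # torch not installed at all
    if torch.cuda.is_available():
        return "cuda:0"
    # torch.backends.mps only reports available on Apple-silicon builds
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(pick_device())
```

On an M-series Mac with a recent PyTorch build this should print `mps`; the out-of-memory failures reported above happen after this point, during training itself.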

Message sent!

KodeurKubik avatar Aug 24 '24 12:08 KodeurKubik

(in Chinese) Right now he is lazy and hasn't done any work. He's currently at the peak of his two minutes of fame. His fame will fade, and hopefully he'll come back to real life. Hopefully that only happens after he does some work.

ghost avatar Aug 24 '24 20:08 ghost

It will never work with an M1 Mac; it doesn't have bf16 support, and it doesn't have autocast support or autograd.

you need to be less rude to project maintainers, Ostris doesn't deserve that

bghira avatar Aug 24 '24 20:08 bghira
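The capability claims above are easy to probe directly on whatever hardware you have. A hedged sketch (not part of ai-toolkit, and results vary by PyTorch version, since bf16 and autocast coverage on MPS has improved over time):

```python
import contextlib

def probe_backend(device="cpu"):
    """Return (bf16_ok, autocast_ok) for a torch device string such as
    'cpu', 'cuda:0', or 'mps'. Both flags are False if torch is missing."""
    try:
        import torch
    except ImportError:
        return (False, False)
    bf16_ok = autocast_ok = False
    with contextlib.suppress(Exception):
        # Can we even allocate a bfloat16 tensor on this device?
        torch.ones(2, 2, dtype=torch.bfloat16, device=device)
        bf16_ok = True
    with contextlib.suppress(Exception):
        # Does autocast accept this device type for bf16 compute?
        with torch.autocast(device_type=device.split(":")[0],
                            dtype=torch.bfloat16):
            torch.ones(2, 2, device=device) @ torch.ones(2, 2, device=device)
        autocast_ok = True
    return (bf16_ok, autocast_ok)

print(probe_backend("cpu"))
```

Running `probe_backend("mps")` on an Apple-silicon machine gives a quick answer for your specific PyTorch build rather than relying on secondhand reports.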

@bghira (in Chinese) He lives in the USA; I live in China.

ghost avatar Aug 24 '24 20:08 ghost

@bghira (in Chinese) Are you Chinese?

ghost avatar Aug 24 '24 20:08 ghost

sorry, no one can understand you.

bghira avatar Aug 24 '24 20:08 bghira

@bghira you are racist toward Chinese people. You discriminate against everyone. At first I thought you were also Chinese, but no.

ghost avatar Aug 24 '24 20:08 ghost

Also, do not say "no one": many of the top people in AI are Chinese! Look at all our research papers too.

ghost avatar Aug 24 '24 20:08 ghost

Nothing to do with race. It is that you are using Chinese in an English project space, so no one can understand you. Obviously you know what I meant. You are still rude. John Cena could come here and speak Chinese and receive the same response, and it wouldn't change how rude you are.

bghira avatar Aug 24 '24 20:08 bghira

Wild, they deleted their whole GitHub.

bghira avatar Aug 24 '24 20:08 bghira

can we stay on-topic please?

KodeurKubik avatar Aug 24 '24 21:08 KodeurKubik

When you tell people to shush because you personally don't like the noise, it amplifies the noise and does the opposite: now I am here explaining to you that asking us to stay on topic emitted yet another email to everyone watching the project. Instead, unsubscribe from the thread.

bghira avatar Aug 24 '24 22:08 bghira

Guys, what's the progress here?

I want to try running it on my M3, but from what I see it won't work either, right?

@martintomov @KodeurKubik were you guys able to get it to work?

OmarMuhtaseb avatar Aug 31 '24 07:08 OmarMuhtaseb

There are some functions that are not yet compatible with PyTorch on MPS, so we will still be waiting for those before continuing.

KodeurKubik avatar Aug 31 '24 07:08 KodeurKubik
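One workaround worth knowing about for the missing-operator problem: PyTorch has a documented environment variable that lets operators not yet implemented on MPS fall back to the CPU. It is slow and does not fix the bf16/autograd gaps discussed above, but it can get past individual unsupported ops. It must be set before torch is first imported:

```python
import os

# Enable CPU fallback for operators not yet implemented on the MPS
# backend. This must be set before `import torch` runs anywhere in
# the process, or it has no effect.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

# ...only now import torch and the rest of the training stack.
```

A design note: because Python caches modules, setting this after any library has already imported torch silently does nothing, which is a common source of confusion.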

It would really be awesome if we could locally fine-tune FLUX.1-dev on a Mac. I have an M3 Max with 128GB of unified memory and would really appreciate the effort.

ahmetkca avatar Sep 22 '24 11:09 ahmetkca

Same here, M3 Max with 96GB RAM. What would be even better is if there were a way for it to make use of the Neural Engine!

sanctimon avatar Sep 24 '24 17:09 sanctimon

Any news on this?

emiliodeme avatar Apr 17 '25 03:04 emiliodeme

Anything on this?

bhupesh-sf avatar Jul 14 '25 09:07 bhupesh-sf

> I just edited the value in the yaml config from cuda:0 to mps and ran out of ram on my 16GB MBP. I did not take a look into the code yet but it really seems like you are right (and I believe so).
>
> I will try later this week (probably Monday or later as I am not available before) on a 32GB Mac. I did not really take the time yet to try editing the code but I am willing to help you on the fork.
>
> Can we communicate through another platform like Discord, Telegram or other? GitHub issues is not the best way for me…

Hi, did you ever figure this out further? I'm trying with an M3 with 96GB, but I kept running out of memory as well; wondering what I'm missing.

Illumibatchi avatar Oct 08 '25 20:10 Illumibatchi

>> I just edited the value in the yaml config from cuda:0 to mps and ran out of ram on my 16GB MBP. I did not take a look into the code yet but it really seems like you are right (and I believe so).
>>
>> I will try later this week (probably Monday or later as I am not available before) on a 32GB Mac. I did not really take the time yet to try editing the code but I am willing to help you on the fork.
>>
>> Can we communicate through another platform like Discord, Telegram or other? GitHub issues is not the best way for me…
>
> Hi, did you ever figure this out further? I'm trying with a m3 96gig but I kept running out of memory as well, wondering what I'm missing

Nope, running this on Apple silicon will never work.

It doesn't have bf16 support, and it doesn't have autocast support or autograd.

Use an NVIDIA GPU from RunPod if you don't have an NVIDIA-powered PC.

martintomov avatar Oct 08 '25 20:10 martintomov


> nope. running this on apple silicon would never work.
>
> it doesn't have bf16 support and doesn't have autocast support or autograd
>
> use nvidia gpu from runpod if you don't have nvidia powered pc

Dang. Thanks so much for the response!

Illumibatchi avatar Oct 08 '25 23:10 Illumibatchi