MPS not supported on 2.1.66
Device mps:0 does not support the torch.fft functions used in the FreeU node, switching to CPU.
How can I get full MPS support on my Apple Silicon Mac?
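There is no single switch for "full" MPS support; each missing PyTorch operator (such as the torch.fft functions FreeU uses) has to be implemented in the MPS backend itself. As a first sanity check, you can confirm that the MPS device is usable at all from the Python environment Fooocus runs in. A minimal sketch, not part of Fooocus:

```python
# Minimal MPS sanity check (run in the same Python environment Fooocus uses).
import torch

print("PyTorch:", torch.__version__)
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

if torch.backends.mps.is_available():
    x = torch.randn(4, 4, device="mps")
    print("Matmul ran on:", (x @ x).device)  # expected: mps:0
```

If this prints mps:0, the backend itself works and the remaining issue is only the handful of operators that still fall back to CPU.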
Does that message show up even if you do not use FreeU, or do you always see it?
If I use FreeU I get this:
Device mps:0 does not support the torch.fft functions used in the FreeU node, switching to CPU.
Without FreeU I don't see the MPS error.
I am on 2.1.675 now.
macOS 14.0, MacBook Pro M1, 16 GB RAM
[Fooocus Model Management] Moving model(s) has taken 61.51 seconds
[Sampler] Fooocus sampler is activated.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using tokenizers before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
0%| | 0/30 [00:00<?, ?it/s]/Volumes/1TSSD/AI-project/fooocus/Fooocus/modules/anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
3%|█▍ | 1/30 [01:40<48:41, 100.73s/it]
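The huggingface/tokenizers warning above is unrelated to MPS and only concerns fork-time parallelism; it can be silenced with the environment variable it mentions. The CPU fallback itself is controlled by PYTORCH_ENABLE_MPS_FALLBACK, which appears to already be in effect here, since PyTorch warns and falls back instead of raising an error. A hedged sketch of setting both before torch and tokenizers are imported (you can equally export them in the shell before launching):

```python
# Sketch: place at the very top of the launcher script, before torch or
# tokenizers are imported, or export the variables in the shell instead.
import os

# Silences the "huggingface/tokenizers ... forked" warning shown above.
os.environ.setdefault("TOKENIZERS_PARALLELISM", "false")

# Lets PyTorch fall back to CPU for operators the MPS backend lacks
# (e.g. aten::std_mean.correction) instead of raising an error.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")
```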
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Parameters] Seed = 4730616638981956459
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] New suffix: intricate, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, unreal engine 5, 8 k, art by artgerm and greg rutkowski and alphonse mucha
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] New suffix: extremely detailed, artstation, 8 k, sensual lighting, incredible art, wlop, artgerm
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
Preparation time: 3.79 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 60.01 seconds
[Sampler] Fooocus sampler is activated.
0%| | 0/30 [00:00<?, ?it/s]/Applications/Fooocus/modules/anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
7%|██▋ | 2/30 [04:50<1:07:49, 145.34s/it]
User stopped
Total time: 355.80 seconds
Same here :( I want to use Fooocus on my MacBook M1 so badly :)
yes - this is so sad :( @lllyasviel heeeeeeeeeelp
bump
Following, I have the same problem.
bump
same.
Same here, Fooocus v2.1.8241. It looks like PyTorch has added support: https://github.com/pytorch/pytorch/pull/110829
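Whether that PR helps depends on the PyTorch build bundled with your Fooocus install. A quick way to check from that environment whether torch.fft actually runs on MPS (note the caveat in the comments):

```python
# Check whether this PyTorch build runs torch.fft natively on MPS.
# Caveat: with PYTORCH_ENABLE_MPS_FALLBACK=1 a missing kernel only emits a
# UserWarning and runs on CPU, so watch the warnings as well as the output.
import torch

print(torch.__version__)
x = torch.randn(8, 8, device="mps")
try:
    y = torch.fft.fftn(x)
    print("torch.fft.fftn returned a tensor on:", y.device)
except NotImplementedError as e:
    print("torch.fft is not implemented for MPS in this build:", e)
```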
bump
bump
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1152, 896)
Preparation time: 20.42 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 113.59 seconds
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using tokenizers before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
0%| | 0/30 [00:00<?, ?it/s]/Users/ty/Desktop/Fooocus-main/modules/anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
7%|██████ | 2/30 [06:31<1:30:11, 193.25s/it]
The same error:
The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU
The same speed: 193.25s/it
Bump
If my understanding is correct, this is a compatibility issue between PyTorch and Apple Silicon.
Does anyone know if there is a way to adjust the parameters in Fooocus to avoid the 'aten::std_mean.correction' operator and use some other calculation that is supported?
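There is no Fooocus parameter for this; the call lives in modules/anisotropic.py (line 132 in the logs above). In principle the fused torch.std_mean could be replaced with separate mean and variance calls, which the MPS backend may implement even where the fused aten::std_mean.correction kernel is missing. A hedged, untested sketch of such a patch, not an official fix:

```python
# Untested sketch: an unfused replacement for the call in modules/anisotropic.py
#     s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
# Whether the unfused ops run natively on MPS depends on the PyTorch build.
import torch

def std_mean_unfused(g: torch.Tensor):
    m = g.mean(dim=(1, 2, 3), keepdim=True)
    # unbiased variance (correction=1) matches torch.std_mean's default
    s = g.var(dim=(1, 2, 3), keepdim=True, unbiased=True).sqrt()
    return s, m

# The original call site would then read: s, m = std_mean_unfused(g)
```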
Bump, I'd love to get an update on that.
any update?
any update?
bumpedy bump bump - can we haz some mac arm love plz
UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1701416305940/work/aten/src/ATen/mps/MPSFallback.mm:13.) s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
same here
Same on Intel iMac 2020 with AMD Graphics?
MacBook M3 Pro suffers from the same problem, so I guess the whole lineup of Apple Silicon chips is affected.
Same on MacBook Pro Apple M2 Pro, macOS Sonoma Version 14.1.
... UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.) ...
Same issue! Bump
Same
same
Same on M2 Pro, Fooocus version 2.1.854.
bump for visibility!