
Flux doesn't work on Macbook Pro M1 Max

Open achiever1984 opened this issue 6 months ago • 21 comments

Hello.

When I try to generate an image in Flux mode using the flux1-dev-bnb-nf4.safetensors model on my MacBook, I get the following error:

```
  0%|          | 0/20 [00:00<?, ?it/s]
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  0%|          | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules/txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules/processing.py", line 809, in process_images
    res = process_images_inner(p)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules/processing.py", line 952, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules/processing.py", line 1323, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules/sd_samplers_kdiffusion.py", line 234, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules/sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules/sd_samplers_kdiffusion.py", line 234, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/k_diffusion/sampling.py", line 128, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/modules/sd_samplers_cfg_denoiser.py", line 186, in forward
    denoised, cond_pred, uncond_pred = sampling_function(self, denoiser_params=denoiser_params, cond_scale=cond_scale, cond_composition=cond_composition)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/backend/sampling/sampling_function.py", line 339, in sampling_function
    denoised, cond_pred, uncond_pred = sampling_function_inner(model, x, timestep, uncond, cond, cond_scale, model_options, seed, return_full=True)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/backend/sampling/sampling_function.py", line 284, in sampling_function_inner
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/backend/sampling/sampling_function.py", line 254, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep, **c).chunk(batch_chunks)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/backend/modules/k_model.py", line 45, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/backend/nn/flux.py", line 393, in forward
    out = self.inner_forward(img, img_ids, context, txt_ids, timestep, y, guidance)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/backend/nn/flux.py", line 350, in inner_forward
    img = self.img_in(img)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/vladimirkrutikov/stable-diffusion-webui-forge/backend/operations.py", line 112, in forward
    return torch.nn.functional.linear(x, self.weight, self.bias)
RuntimeError: linear(): input and weight.T shapes cannot be multiplied (4032x64 and 1x98304)
```
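Note that setting the environment variable mentioned in the warning only silences the tokenizers fork message; it does not fix the RuntimeError. For completeness, a minimal way to set it before launching the web UI (assuming the usual `webui.sh` launcher) would be:

```shell
# Silence the huggingface/tokenizers fork warning (unrelated to the crash)
export TOKENIZERS_PARALLELISM=false
# then launch as usual, e.g.:
# ./webui.sh
```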

What can I do to fix this?

achiever1984 avatar Aug 14 '24 18:08 achiever1984