[Bug]: Can't generate image
Checklist
- [x] The issue has not been resolved by following the troubleshooting guide
- [x] The issue exists on a clean installation of Fooocus
- [x] The issue exists in the current version of Fooocus
- [x] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
I am running Fooocus inside Docker. Everything builds and starts, but when I try to generate an image, the following error occurs:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
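For reference, this is a generic PyTorch device mismatch: an embedding lookup receives indices on cuda:0 while the embedding weight is still on the CPU. A minimal sketch (illustrative only, not Fooocus code; the Embedding(77, 1280) size is taken from the console log below) reproduces the same RuntimeError:

```python
# Minimal, hypothetical reproduction of the same device-mismatch error.
import torch
import torch.nn as nn

if torch.cuda.is_available():
    embedding = nn.Embedding(77, 1280)                # weight stays on the CPU
    position_ids = torch.arange(77, device="cuda:0")  # indices live on cuda:0

    try:
        embedding(position_ids)                       # mixed devices -> RuntimeError
    except RuntimeError as e:
        print(e)  # "Expected all tensors to be on the same device ..."

    embedding.to("cuda:0")                            # move the weight to the GPU as well
    print(embedding(position_ids).shape)              # now succeeds: torch.Size([77, 1280])
```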
Steps to reproduce the problem
- download the latest master (8da1d3ff68942e2d976675939fe72c95746e366e)
- fresh install of Docker
- run
docker compose up
- run
docker run -p 7865:7865 -v fooocus-data:/content/data -it \
  --gpus all \
  -e CMDARGS=--listen \
  -e DATADIR=/content/data \
  -e config_path=/content/data/config.txt \
  -e config_example_path=/content/data/config_modification_tutorial.txt \
  -e path_checkpoints=/content/data/models/checkpoints/ \
  -e path_loras=/content/data/models/loras/ \
  -e path_embeddings=/content/data/models/embeddings/ \
  -e path_vae_approx=/content/data/models/vae_approx/ \
  -e path_upscale_models=/content/data/models/upscale_models/ \
  -e path_inpaint=/content/data/models/inpaint/ \
  -e path_controlnet=/content/data/models/controlnet/ \
  -e path_clip_vision=/content/data/models/clip_vision/ \
  -e path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/ \
  -e path_outputs=/content/app/outputs/ \
  ghcr.io/lllyasviel/fooocus
- try to generate an image
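A quick sanity check (a hedged sketch only; the exact invocation inside the container may differ) to confirm that PyTorch in the container sees the GPU at all, which would narrow the problem down to where individual modules are placed rather than GPU passthrough:

```python
# Hypothetical diagnostic, run inside the container (e.g. via `docker exec -it <container> python3`).
import torch

print(torch.__version__)                   # PyTorch version bundled in the image
print(torch.cuda.is_available())           # should be True when `--gpus all` works
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. "NVIDIA GeForce RTX 3070 Ti Laptop GPU"
```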
What should have happened?
The image should be generated without errors.
What browsers do you use to access Fooocus?
Mozilla Firefox
Where are you running Fooocus?
Locally with virtualization (e.g. Docker)
What operating system are you using?
Pop!_OS (alpha)
Console logs
$ sudo docker run -p 7865:7865 -v fooocus-data:/content/data -it --gpus all -e CMDARGS=--listen -e DATADIR=/content/data -e config_path=/content/data/config.txt -e config_example_path=/content/data/config_modification_tutorial.txt -e path_checkpoints=/content/data/models/checkpoints/ -e path_loras=/content/data/models/loras/ -e path_embeddings=/content/data/models/embeddings/ -e path_vae_approx=/content/data/models/vae_approx/ -e path_upscale_models=/content/data/models/upscale_models/ -e path_inpaint=/content/data/models/inpaint/ -e path_controlnet=/content/data/models/controlnet/ -e path_clip_vision=/content/data/models/clip_vision/ -e path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/ -e path_outputs=/content/app/outputs/ ghcr.io/lllyasviel/fooocus
[System ARGV] ['launch.py', '--listen']
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Fooocus version: 2.5.5
Environment: config_path = /content/data/config.txt
Environment: config_example_path = /content/data/config_modification_tutorial.txt
Environment: path_checkpoints = /content/data/models/checkpoints/
Environment: path_loras = /content/data/models/loras/
Environment: path_embeddings = /content/data/models/embeddings/
Environment: path_vae_approx = /content/data/models/vae_approx/
Environment: path_upscale_models = /content/data/models/upscale_models/
Environment: path_inpaint = /content/data/models/inpaint/
Environment: path_controlnet = /content/data/models/controlnet/
Environment: path_clip_vision = /content/data/models/clip_vision/
Environment: path_fooocus_expansion = /content/data/models/prompt_expansion/fooocus_expansion/
Environment: path_outputs = /content/app/outputs/
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Total VRAM 7878 MB, total RAM 31797 MB
xformers version: 0.0.23
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 Ti Laptop GPU : native
VAE dtype: torch.bfloat16
Using xformers cross attention
Refiner unloaded.
[System ARGV] ['launch.py', '--listen']
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Fooocus version: 2.5.5
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Running on local URL: http://0.0.0.0:7865
To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: /content/data/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [/content/data/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/data/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/data/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
/usr/local/lib/python3.10/dist-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
loading in lowvram mode 1302.019229888916
lowvram: loaded module regularly Embedding(49408, 768)
lowvram: loaded module regularly Embedding(77, 768)
lowvram: loaded module regularly Embedding(49408, 1280)
lowvram: loaded module regularly Embedding(77, 1280)
[Fooocus Model Management] Moving model(s) has taken 0.28 seconds
Started worker with PID 15
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 538919795618640707
[Parameters] CFG = 4
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] beach, dynamic dramatic bright light, atmosphere, gorgeous, intricate, elegant, highly detailed, extremely color balanced, cinematic, sharp focus, perfect composition, innocent, beautiful, inspired, rich deep colors, open background, joyful, thought, iconic, epic, stunning, brave, full detail, cool, great fine, awesome, creative, passionate, inspiring, amazing, fabulous
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] beach, dramatic warm color, highly detailed, incredible quality, very inspirational, inspiring, thought, rich vivid colors, winning bright artistic aesthetic, perfect cinematic atmosphere, beautiful fine detail, full intricate, elegant, creative, positive light, relaxed, joyful, unique, awesome, symmetry, iconic, complex, vibrant, brilliant, shiny, colorful background, illuminated, professional, best
[Fooocus] Encoding positive #1 ...
Traceback (most recent call last):
File "/content/app/modules/async_worker.py", line 1471, in worker
handler(task)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/app/modules/async_worker.py", line 1160, in handler
tasks, use_expansion, loras, current_progress = process_prompt(async_task, async_task.prompt, async_task.negative_prompt,
File "/content/app/modules/async_worker.py", line 746, in process_prompt
t['c'] = pipeline.clip_encode(texts=t['positive'], pool_top_k=t['positive_top_k'])
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/app/modules/default_pipeline.py", line 196, in clip_encode
cond, pooled = clip_encode_single(final_clip, text)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/app/modules/default_pipeline.py", line 154, in clip_encode_single
result = clip.encode_from_tokens(tokens, return_pooled=True)
File "/content/app/ldm_patched/modules/sd.py", line 128, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
File "/content/app/ldm_patched/modules/sdxl_clip.py", line 54, in encode_token_weights
g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
File "/content/app/modules/patch_clip.py", line 39, in patched_encode_token_weights
out, pooled = self.encode(to_encode)
File "/content/app/ldm_patched/modules/sd1_clip.py", line 190, in encode
return self(tokens)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/content/app/modules/patch_clip.py", line 125, in patched_SDClipModel_forward
outputs = self.transformer(input_ids=tokens, attention_mask=attention_mask,
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 806, in forward
return self.text_model(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 698, in forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 220, in forward
position_embeddings = self.position_embedding(position_ids)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2233, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
Total time: 0.76 seconds
Additional information
No response