
[Bug]: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Open Asianfleet opened this issue 9 months ago • 2 comments

Checklist

  • [X] The issue has not been resolved by following the troubleshooting guide
  • [X] The issue exists on a clean installation of Fooocus
  • [X] The issue exists in the current version of Fooocus
  • [ ] The issue has not been reported before recently
  • [ ] The issue has been reported before but has not been fixed yet

What happened?

On the first run after installation, generation fails with: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Steps to reproduce the problem

Once the GUI opened, I entered "a girl", clicked the Generate button, and got the error.

What should have happened?

An image should have been generated.

What browsers do you use to access Fooocus?

Microsoft Edge

Where are you running Fooocus?

Cloud (other)

What operating system are you using?

Ubuntu

Console logs

(fooocus) XXXXX:/XXX/Fooocus$ python entry_with_update.py
Update failed.
SSL error: received early EOF
Update succeeded.
[System ARGV] ['entry_with_update.py']
Python 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0]
Fooocus version: 2.3.1
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Total VRAM 40956 MB, total RAM 128804 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 GRID A100D-40C : 
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /data/AIMH/pkgs/ImageProc/Generative/stable_diffusion_webui/models/Stable-diffusion/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/data/AIMH/pkgs/ImageProc/Generative/stable_diffusion_webui/models/Stable-diffusion/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/data/AIMH/pkgs/ImageProc/Generative/stable_diffusion_webui/models/Lora/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/data/AIMH/pkgs/ImageProc/Generative/stable_diffusion_webui/models/Stable-diffusion/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
loading in lowvram mode 972.259614944458
lowvram: loaded module regularly Embedding(49408, 768)
lowvram: loaded module regularly Embedding(77, 768)
lowvram: loaded module regularly Embedding(49408, 1280)
lowvram: loaded module regularly Embedding(77, 1280)
loading in lowvram mode 64.0
lowvram: loaded module regularly Embedding(50257, 768)
lowvram: loaded module regularly Embedding(1024, 768)
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=768, out_features=50257, bias=False)
[Fooocus Model Management] Moving model(s) has taken 1.38 seconds
Started worker with PID 281086
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 3456273753726066305
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
Fooocus Expansion loaded by itself.
[Fooocus Model Management] Moving model(s) has taken 0.42 seconds
Traceback (most recent call last):
  File "/data/AIMH/pkgs/ImageProc/Generative/Fooocus/modules/async_worker.py", line 913, in worker
    handler(task)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/AIMH/pkgs/ImageProc/Generative/Fooocus/modules/async_worker.py", line 486, in handler
    expansion = pipeline.final_expansion(t['task_prompt'], t['task_seed'])
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/AIMH/pkgs/ImageProc/Generative/Fooocus/extras/expansion.py", line 120, in __call__
    features = self.model.generate(**tokenized_kwargs,
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/transformers/generation/utils.py", line 1572, in generate
    return self.sample(
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/transformers/generation/utils.py", line 2619, in sample
    outputs = self(
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1080, in forward
    transformer_outputs = self.transformer(
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 903, in forward
    outputs = block(
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 391, in forward
    attn_outputs = self.attn(
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 332, in forward
    attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
  File "/data/AIMH/envs/fooocus/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 203, in _attn
    attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Total time: 1.08 seconds

Additional information

The torch version is 1.13.1+cu117 because the CUDA driver on my server is 11.4 and cannot be updated.

Asianfleet avatar May 09 '24 09:05 Asianfleet
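
The traceback above ends in transformers' GPT2Attention._attn, where torch.where receives the model's cached causal mask; the call fails when that mask is still on the CPU while the attention weights are on the GPU. The failure is easy to reproduce in isolation. The following is a minimal sketch (not Fooocus code), assuming a CUDA-capable machine, with names mirroring the traceback:

import torch

# The cached causal mask stays on the CPU while the attention weights
# have been moved to cuda:0 -- the same two devices named in the error.
causal_mask = torch.tril(torch.ones(4, 4)).bool()   # lives on cpu
attn_weights = torch.randn(4, 4, device="cuda:0")   # lives on cuda:0
mask_value = torch.full([], torch.finfo(attn_weights.dtype).min,
                        dtype=attn_weights.dtype, device=attn_weights.device)

# RuntimeError: Expected all tensors to be on the same device,
# but found at least two devices, cuda:0 and cpu!
torch.where(causal_mask, attn_weights, mask_value)

# Moving every operand to one device resolves it:
torch.where(causal_mask.to(attn_weights.device), attn_weights, mask_value)

In Fooocus the placement of these buffers is handled by the model-management code, which is why the VRAM-mode flags suggested below can change the outcome.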

@Asianfleet I assume you're running Ubuntu in a virtualized environment with a virtualized GRID A100D-40C. It is possible that you don't have access to the full power / VRAM of this GPU, which is why Fooocus enters low-VRAM mode, as you can see in the logs. You can force another VRAM mode by providing the option --always-high-vram or --always-gpu, then try again and report your findings. Please ensure you have sufficient swap set up, as described in the troubleshooting guide.

mashb1t avatar May 09 '24 14:05 mashb1t
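
For anyone unsure how to apply this: both options are launch arguments, not prompt text. A minimal sketch of the invocations, reusing the conda prompt from the log above (paths elided as in the original):

# force high-VRAM mode instead of the automatic low-VRAM fallback
(fooocus) XXXXX:/XXX/Fooocus$ python entry_with_update.py --always-high-vram
# or keep all models on the GPU
(fooocus) XXXXX:/XXX/Fooocus$ python entry_with_update.py --always-gpu
# verify total RAM and swap on Ubuntu before retrying
(fooocus) XXXXX:/XXX/Fooocus$ free -h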

@Asianfleet bump, any results / findings?

mashb1t avatar May 12 '24 13:05 mashb1t

Closing as stale

mashb1t avatar May 14 '24 22:05 mashb1t

I have the same problem. It worked until yesterday, and I didn't change anything. Any help?

HorsemanDj avatar Jun 01 '24 12:06 HorsemanDj

@HorsemanDj no changes. What were you doing exactly and which hardware do you have? Please provide the full console log, from startup to error.

mashb1t avatar Jun 01 '24 12:06 mashb1t

@HorsemanDj no changes. What were you doing exactly and which hardware do you have? Please provide the full console log, from startup to error.

I started Fooocus on my PC as usual, but when generating an image it stopped working, and it doesn't create images anymore. I also tried re-downloading Fooocus and placing it in a different location, but the problem wasn't resolved. I have a 13th Gen Intel i9-13900K processor, 32 GB RAM, and an NVIDIA RTX 4070 Ti graphics card.

Full console log:

H:\CreaFoto>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.4.1
[Cleanup] Attempting to delete content of temp dir C:\Users\urca\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Total VRAM 12282 MB, total RAM 32535 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.

model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: H:\CreaFoto\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [H:\CreaFoto\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [H:\CreaFoto\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [H:\CreaFoto\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
loading in lowvram mode 447.653244972229
lowvram: loaded module regularly Embedding(49408, 768)
lowvram: loaded module regularly Embedding(77, 768)
lowvram: loaded module regularly Embedding(49408, 1280)
lowvram: loaded module regularly Embedding(77, 1280)
loading in lowvram mode 64.0
lowvram: loaded module regularly Embedding(50257, 768)
lowvram: loaded module regularly Embedding(1024, 768)
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly Conv1D()
lowvram: loaded module regularly LayerNorm((768,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=768, out_features=50257, bias=False)
[Fooocus Model Management] Moving model(s) has taken 0.31 seconds
Started worker with PID 31008
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 7557847389892053102
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
Fooocus Expansion loaded by itself.
Traceback (most recent call last):
  File "H:\CreaFoto\Fooocus\modules\async_worker.py", line 979, in worker
    handler(task)
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\CreaFoto\Fooocus\modules\async_worker.py", line 533, in handler
    expansion = pipeline.final_expansion(t['task_prompt'], t['task_seed'])
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\CreaFoto\Fooocus\extras\expansion.py", line 120, in __call__
    features = self.model.generate(**tokenized_kwargs,
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\transformers\generation\utils.py", line 1572, in generate
    return self.sample(
  File "H:\CreaFoto\python_embeded\lib\site-packages\transformers\generation\utils.py", line 2619, in sample
    outputs = self(
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 1080, in forward
    transformer_outputs = self.transformer(
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 903, in forward
    outputs = block(
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 391, in forward
    attn_outputs = self.attn(
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\CreaFoto\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 332, in forward
    attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
  File "H:\CreaFoto\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 203, in _attn
    attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Total time: 2.28 seconds

HorsemanDj avatar Jun 01 '24 12:06 HorsemanDj

I also started having this issue just recently. Are the switches mentioned supposed to be added to prompts, or am I supposed to edit a file? Also, why am I having this issue all of a sudden if nothing has changed? Restarting the computer does not help.

[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 4113825726887343578
[Parameters] CFG = 4
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
Requested to load SDXLClipModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.38 seconds
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
Fooocus Expansion loaded by itself.
[Fooocus Model Management] Moving model(s) has taken 0.22 seconds
Traceback (most recent call last):
  File "C:\AI Art\Fooocus\modules\async_worker.py", line 1440, in worker
    handler(task)
  File "C:\AI Art\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Art\Fooocus\modules\async_worker.py", line 1150, in handler
    tasks, use_expansion, loras, current_progress = process_prompt(async_task, async_task.prompt, async_task.negative_prompt,
  File "C:\AI Art\Fooocus\modules\async_worker.py", line 729, in process_prompt
    expansion = pipeline.final_expansion(t['task_prompt'], t['task_seed'])
  File "C:\AI Art\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Art\Fooocus\extras\expansion.py", line 120, in __call__
    features = self.model.generate(**tokenized_kwargs,
  File "C:\AI Art\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\transformers\generation\utils.py", line 1914, in generate
    result = self._sample(
  File "C:\AI Art\python_embeded\lib\site-packages\transformers\generation\utils.py", line 2651, in _sample
    outputs = self(
  File "C:\AI Art\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 1421, in forward
    transformer_outputs = self.transformer(
  File "C:\AI Art\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 1235, in forward
    outputs = block(
  File "C:\AI Art\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 720, in forward
    attn_outputs = self.attn(
  File "C:\AI Art\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI Art\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 346, in forward
    attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
  File "C:\AI Art\python_embeded\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 217, in _attn
    attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Total time: 0.64 seconds

codemonkey2k5 avatar Jul 24 '24 23:07 codemonkey2k5