[bug]: unable to use second GPU `cuda:1`

Open notdanilo opened this issue 1 year ago • 7 comments

Is there an existing issue for this problem?

  • [X] I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

cuda:0: RTX 4090
cuda:1: RTX 3080

GPU VRAM

cuda:0: 24GB
cuda:1: 10GB

Version number

3.7.0

Browser

Chrome

Python dependencies

No response

What happened

Running it with `--device "cuda:1"` does not work; generation fails with the error below.

[2024-03-20 19:59:39,781]::[uvicorn.access]::INFO --> 127.0.0.1:51656 - "GET /api/v1/queue/default/status HTTP/1.1" 200
  5%|████▏                                                                              | 1/20 [00:00<00:07,  2.64it/s]
[2024-03-20 19:59:44,088]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 134, in __process
    outputs = invocation.invoke_internal(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 669, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\latent.py", line 773, in invoke
    ) = pipeline.latents_from_embeddings(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 381, in latents_from_embeddings
    latents, attention_map_saver = self.generate_latents_from_embeddings(
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 454, in generate_latents_from_embeddings
    step_output = self.step(
                  ^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 587, in step
    uc_noise_pred, c_noise_pred = self.invokeai_diffuser.do_unet_step(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusion\shared_invokeai_diffusion.py", line 257, in do_unet_step
    ) = self._apply_standard_conditioning_sequentially(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusion\shared_invokeai_diffusion.py", line 445, in _apply_standard_conditioning_sequentially
    unconditioned_next_x = self.model_forward_callback(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 664, in _unet_forward
    return self.unet(
           ^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1081, in forward
    sample = self.conv_in(sample)
             ^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\seamless.py", line 17, in _conv_forward_asymmetric
    return nn.functional.conv2d(
           ^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)

[2024-03-20 19:59:44,088]::[InvokeAI]::ERROR --> Error while invoking:
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)
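
For context, the failure pattern in the traceback can be reproduced with a minimal PyTorch snippet (a hedged sketch assuming two CUDA GPUs, not InvokeAI code): the convolution weights stay on cuda:0 while the input tensor lives on cuda:1.

    import torch
    import torch.nn as nn

    # Illustrative only: requires two CUDA GPUs.
    conv = nn.Conv2d(4, 320, kernel_size=3, padding=1).to("cuda:0")  # weights left on cuda:0
    latents = torch.randn(1, 4, 64, 64, device="cuda:1")             # input placed on cuda:1
    conv(latents)  # RuntimeError: Expected all tensors to be on the same device ...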

What you expected to happen

To be able to run it on cuda:1

How to reproduce the problem

No response

Additional context

It looks like both cuda:0 and cuda:1 are being used, even if I try to select only cuda:0. Maybe some parts of the code are hardcoded to cuda:0?

Discord username

not.danilo

notdanilo avatar Mar 20 '24 23:03 notdanilo

@lstein I suspect this will still be a problem on v4.0.0. Not sure how to approach this myself...

psychedelicious avatar Mar 25 '24 07:03 psychedelicious

There's a partial fix in #6076, which will be in v4.0.0 or v4.0.1. You should be able to generate without seamless enabled with this fix, but if you enable seamless, I'd expect the same error.

psychedelicious avatar Mar 28 '24 07:03 psychedelicious
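
From the traceback, the seamless wrapper (`_conv_forward_asymmetric` in `seamless.py`) calls `nn.functional.conv2d` with the layer's original weights, which can sit on a different device than the incoming latents. A hedged sketch of the kind of device guard that would avoid this, not the actual change in #6076:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_forward_on_input_device(conv: nn.Conv2d, x: torch.Tensor) -> torch.Tensor:
        # Hypothetical helper: follow the input's device instead of trusting the weight's placement.
        weight = conv.weight.to(x.device)
        bias = conv.bias.to(x.device) if conv.bias is not None else None
        return F.conv2d(x, weight, bias, conv.stride, conv.padding, conv.dilation, conv.groups)

    # Usage sketch: runs on cuda:1 instead of raising the device-mismatch error.
    conv = nn.Conv2d(4, 320, kernel_size=3, padding=1).to("cuda:0")
    y = conv_forward_on_input_device(conv, torch.randn(1, 4, 64, 64, device="cuda:1"))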

Sorry, GitHub closed this but I didn't mean to.

psychedelicious avatar Mar 29 '24 21:03 psychedelicious

What happens if you set the CUDA_VISIBLE_DEVICES environment variable to cuda:1 instead of using --device?

CUDA_VISIBLE_DEVICES="cuda:1" invokeai-web

lstein avatar Mar 29 '24 21:03 lstein

I won't be able to test this for the next few weeks.

notdanilo avatar Mar 30 '24 00:03 notdanilo

Changing

python .venv\Scripts\invokeai-web.exe %*

to

set CUDA_VISIBLE_DEVICES=1 & python .venv\Scripts\invokeai-web.exe %*

worked on Windows.

notdanilo avatar May 01 '24 02:05 notdanilo
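
For reference, this workaround sidesteps the problem because CUDA_VISIBLE_DEVICES takes bare device indices and renumbers whatever it exposes: with only physical GPU 1 visible, PyTorch sees it as cuda:0, so any cuda:0 reference inside InvokeAI lands on the intended card. A quick check, sketched for this dual-GPU setup:

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must be set before CUDA is first initialized

    import torch
    print(torch.cuda.device_count())      # 1 -> only the second physical GPU is visible
    print(torch.cuda.get_device_name(0))  # on this machine, the RTX 3080, now addressed as cuda:0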

Multiple GPUs with CUDA still do not work; using both cuda:0 and cuda:1 together is not possible.

jameswan avatar Jul 28 '24 05:07 jameswan