
AMD GPU (5700xt) "RuntimeError: No CUDA GPUs are available"

Open Giger22 opened this issue 2 years ago • 2 comments

Read Troubleshoot

[x] I admit that I have read the Troubleshoot before making this issue.

Describe the problem
I have an AMD Radeon RX 5700 XT 8 GB, and I get this error when running the command "python entry_with_update.py": "RuntimeError: No CUDA GPUs are available"

Full Console Log
(fooocus_env) artix:[artix]:~/Applications/fooocus$ python entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py']
Python 3.11.6 (main, Nov 14 2023, 18:04:26) [GCC 13.2.1 20230801]
Fooocus version: 2.1.859
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/home/artix/Applications/fooocus/modules/async_worker.py", line 25, in worker
    import modules.default_pipeline as pipeline
  File "/home/artix/Applications/fooocus/modules/default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "/home/artix/Applications/fooocus/modules/core.py", line 1, in <module>
    from modules.patch import patch_all
  File "/home/artix/Applications/fooocus/modules/patch.py", line 5, in <module>
    import ldm_patched.modules.model_base
  File "/home/artix/Applications/fooocus/ldm_patched/modules/model_base.py", line 2, in <module>
    from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel
  File "/home/artix/Applications/fooocus/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 18, in <module>
    from ..attention import SpatialTransformer, SpatialVideoTransformer, default
  File "/home/artix/Applications/fooocus/ldm_patched/ldm/modules/attention.py", line 12, in <module>
    from .sub_quadratic_attention import efficient_dot_product_attention
  File "/home/artix/Applications/fooocus/ldm_patched/ldm/modules/sub_quadratic_attention.py", line 27, in <module>
    from ldm_patched.modules import model_management
  File "/home/artix/Applications/fooocus/ldm_patched/modules/model_management.py", line 118, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/home/artix/Applications/fooocus/ldm_patched/modules/model_management.py", line 87, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

Traceback (most recent call last):
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/gradio/blocks.py", line 1117, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/gradio/utils.py", line 350, in async_iteration
    return await iterator.__anext__()
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/gradio/utils.py", line 343, in __anext__
    return await anyio.to_thread.run_sync(
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/gradio/utils.py", line 326, in run_sync_iterator_async
    return next(iterator)
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/gradio/utils.py", line 695, in gen_wrapper
    yield from f(*args, **kwargs)
  File "/home/artix/Applications/fooocus/webui.py", line 27, in generate_clicked
    import ldm_patched.modules.model_management as model_management
  File "/home/artix/Applications/fooocus/ldm_patched/modules/model_management.py", line 118, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/home/artix/Applications/fooocus/ldm_patched/modules/model_management.py", line 87, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
  File "/home/artix/Applications/fooocus/fooocus_env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
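For context, the failing frame is get_torch_device() in model_management.py, which calls torch.cuda.current_device() without first checking that a CUDA/ROCm device is actually visible. A hypothetical defensive sketch of that check (illustration only, not Fooocus's actual code):

```python
# Hypothetical sketch: pick a torch device name defensively instead of
# assuming a CUDA/ROCm GPU is present, which is what triggers
# "RuntimeError: No CUDA GPUs are available" in the log above.
import importlib.util


def pick_device() -> str:
    """Return 'cuda' only when a CUDA-capable torch is usable, else 'cpu'."""
    if importlib.util.find_spec("torch") is None:
        # torch is not even installed in this environment
        return "cpu"
    import torch
    # ROCm builds of torch also report availability through the CUDA API.
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"


print(pick_device())
```

With a CPU-only or missing torch this prints "cpu" instead of raising, which is the behavior the crashing code path lacks.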

Giger22 avatar Dec 31 '23 13:12 Giger22

Please follow the official installation instructions and use --directml after the setup adjustments. See https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#windowsamd-gpus. Closed as duplicate.
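For reference, the linked section covers running Fooocus on AMD cards via DirectML on Windows; a sketch of the launch (the --directml flag is the one named above, the script name is the one used in this thread):

```shell
# Windows (AMD GPUs): after completing the README's setup adjustments,
# start Fooocus with the DirectML backend instead of CUDA.
python entry_with_update.py --directml
```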

mashb1t avatar Dec 31 '23 14:12 mashb1t

Ok, but the instructions are misleading because of this paragraph: Linux (AMD GPUs)

Note that the minimal requirement for different platforms is different.

Same with the above instructions. You need to change torch to the AMD version:

pip uninstall torch torchvision torchaudio torchtext functorch xformers
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6

but it appears after the command to run the app.
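Put together, the Linux (AMD GPUs) sequence from the README would look like this; the final verification line is an assumption of mine, not part of the README, but torch.cuda.is_available() is how ROCm builds report a usable GPU:

```shell
# Remove the CUDA build of torch, then install the ROCm 5.6 wheels
# (commands as quoted from the README's Linux AMD section).
pip uninstall -y torch torchvision torchaudio torchtext functorch xformers
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6

# Sanity check (my addition): a working ROCm setup should print True here,
# because the ROCm build exposes the GPU through the CUDA API.
python -c "import torch; print(torch.cuda.is_available())"
```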

Giger22 avatar Dec 31 '23 16:12 Giger22

Check if your ROCm works. My 5700 XT can run Fooocus without issue, although it's slow (about 2 minutes per image in Extreme Speed mode, 3 minutes per image in Speed mode). I also made a video: https://youtu.be/HgGZyNRA1Ns

ttio2tech avatar Feb 11 '24 14:02 ttio2tech