stable-diffusion-webui-amdgpu

[Bug]: RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available!

Open dmitr503 opened this issue 9 months ago • 2 comments

Checklist

  • [x] The issue exists after disabling all extensions
  • [x] The issue exists on a clean installation of webui
  • [x] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [x] The issue exists in the current version of the webui
  • [ ] The issue has not been reported before recently
  • [x] The issue has been reported before but has not been fixed yet

What happened?

During generation of any prompt: RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available!

Image

CPU: Intel(R) Core(TM) i5-4440 @ 3.10 GHz; video card: AMD RX 570 Series; RAM: 16.0 GB

Amuse from AMD works well

Steps to reproduce the problem

  1. Start webui-user.bat
  2. Enter any prompt with an image size bigger than 300x300
  3. click Generate

What should have happened?

It should generate an image (maybe slowly, as I only have 8 GB on the video card). Amuse from AMD generates 1500x960 in 70-180 seconds.
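
The console log below shows the UI being launched with only --use-directml. For reference, a minimal webui-user.bat along those lines might look like the sketch below; the --medvram / --lowvram flags are the webui's standard memory-saving options and are shown only as a possible mitigation for an 8 GB card, not as the reporter's actual configuration.

@echo off
rem Illustrative sketch of webui-user.bat; only --use-directml is confirmed by the log below.
rem --medvram (or the more aggressive --lowvram) reduces VRAM usage and may help
rem avoid the "Could not allocate tensor" error on an 8 GB card.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml --medvram
call webui.bat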

What browsers do you use to access the UI ?

Mozilla Firefox

Sysinfo

sysinfo-2025-03-17-09-19.json

Console logs

venv "F:\1\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-25-g04bf93f1
Commit hash: 04bf93f1e8276526e695577df59fe37dd9bfaaee
F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml
ONNX: version=1.21.0 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Loading weights [6ce0161689] from F:\1\stable-diffusion-webui-amdgpu\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: F:\1\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 12.4s (prepare environment: 17.5s, initialize shared: 1.3s, load scripts: 0.7s, create ui: 0.6s, gradio launch: 0.8s).
creating model quickly: OSError
Traceback (most recent call last):
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 409, in hf_raise_for_status
    response.raise_for_status()
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 342, in cached_file
    resolved_file = hf_hub_download(
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 862, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 969, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1486, in _raise_on_head_call_error
    raise head_call_error
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1376, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1296, in get_hf_file_metadata
    r = _request_wrapper(
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 280, in _request_wrapper
    response = _request_wrapper(
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 304, in _request_wrapper
    hf_raise_for_status(response)
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 458, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-67d7e8a2-188ebcf43d29cc2c360baec2;82b25d55-b12e-4c7e-afb2-f03d7746c7be)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Program Files\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Program Files\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "F:\1\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "F:\1\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "F:\1\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "F:\1\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "F:\1\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "F:\1\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "F:\1\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "F:\1\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "F:\1\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "F:\1\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper
    return func(*args, **kwargs)
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3540, in from_pretrained
    resolved_config_file = cached_file(
  File "F:\1\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 365, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Applying attention optimization: InvokeAI... done.
Model loaded in 7.6s (load weights from disk: 0.9s, create model: 1.9s, apply weights to model: 4.1s, apply half(): 0.3s, move model to device: 0.1s, calculate empty prompt: 0.2s).

Additional information

sysinfo-2025-03-17-09-19.json

dmitr503 avatar Mar 17 '25 09:03 dmitr503

You're using DirectML but not with ONNX. You would have to add --onnx to the launch args, or have some other setting enabled as well.

ZLUDA would work better than DirectML, but maybe not on older GPUs.
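
For illustration, the two suggestions above would translate into webui-user.bat roughly as follows (a sketch; --onnx is the flag named above, while --use-zluda is assumed from this fork's documentation):

rem Sketch of the suggested launch-argument changes in webui-user.bat; use one option.
rem Option 1: keep DirectML but enable the ONNX pipeline, as suggested above:
set COMMANDLINE_ARGS=--use-directml --onnx
rem Option 2: try ZLUDA instead of DirectML (may not help on older GPUs like the RX 570):
set COMMANDLINE_ARGS=--use-zluda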

CS1o avatar Mar 22 '25 00:03 CS1o

Do you still need help with your problem? For anyone with this GPU, I suggest using https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu-forge. I recently found files that make ZLUDA work better than DirectML on this GPU, which is really surprising: around a 20% speed boost.
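
A rough outline of trying the suggested Forge fork on Windows, assuming its setup mirrors this repository (clone, adjust webui-user.bat, run the batch file):

git clone https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu-forge.git
cd stable-diffusion-webui-amdgpu-forge
rem edit webui-user.bat (for example: set COMMANDLINE_ARGS=--use-zluda), then launch:
webui-user.bat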

TheFerumn avatar Sep 14 '25 20:09 TheFerumn