
RuntimeError: not enough memory

mryuze opened this issue 2 years ago • 6 comments

When I run Fooocus, it fails to start and the following message appears in the terminal window:

File "E:\Fooocus\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 386, in patch_model
    temp_weight = weight.to(torch.float32, copy=True)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 52428800 bytes.

In fact, my computer's configuration should be sufficient; the specs are as follows:

Windows 11 Version 22H2 

AMD Ryzen 7 7840HS w/ Radeon 780M Graphics
NVIDIA GeForce RTX 4050 Laptop GPU ( 6 GB  )
32GB  DDR5 5600MHz

As you can see, my hardware should be sufficient, and I am not running any other programs, so running out of memory should be impossible. Please help me figure out how to solve this problem.

mryuze avatar Aug 18 '23 17:08 mryuze

It worked! If the directory you installed to is on the C drive, set the pagefile (cache) to the C drive; if it is on the D drive, put the pagefile on the D drive. [screenshot: 微信截图_20230820085627]
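A minimal sketch of a related sanity check (the path is an example, not from this thread): before moving or enlarging the pagefile on the install drive, make sure that drive actually has room for it.

```python
# Check free space on the drive that will hold the pagefile.
# Replace "." with the drive root, e.g. r"D:\" on Windows.
import shutil

total, used, free = shutil.disk_usage(".")
print(f"free: {free / 2**30:.1f} GiB")  # should exceed the max pagefile size
```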

ludashi6789 avatar Aug 20 '23 00:08 ludashi6789

I have an AMD Ryzen 7 7840U with integrated 780M GPU, 32GB of RAM, and a 512GB SSD. Here's what I did to make it run:

* Downloaded the most recent version (win64_2-1-791)
* Executed all 3 run.bat files
* Set VRAM to 16GB
* Set the Windows swap size to 32GB min and 64GB max (same C:\ drive for both Fooocus and the pagefile)
* Updated `run.bat`:

```
.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
.\python_embeded\python.exe -m pip install torch-directml
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset realistic --directml
pause
```

* Launched the script; the localhost web UI started
* Picked the `Advanced` tab and selected `Extreme speed` and `704x1408` resolution

This set-up was the 1st in a series of ~20 attempts that completed without errors. 2 images got generated in 352 seconds.

maxim-saplin avatar Dec 01 '23 10:12 maxim-saplin

> maxim-saplin: […] Set VRAM to 16GB […]

How can I change the VRAM? The script gives me 1024 MB, but my 5600XT has 6GB VRAM

ianhein avatar Dec 06 '23 22:12 ianhein

> maxim-saplin: […] This set-up was the 1st in a series of ~20 attempts that completed without errors. 2 images got generated in 352 seconds.

I'm curious, since I'm currently in the market for an AMD-based laptop (probably the smaller Ryzen 5 with 6 cores and 12 threads for me, albeit with a full 64GB of RAM). Is that 352 seconds per image, or 352 seconds for both of those images? And was that on the default fast setting?

I've benchmarked Fooocus on an Apple M2 with 24GB before, and there every image on fast settings takes pretty much exactly 6 minutes and 20 seconds PER image. I'd be very happy with a machine that can do it in half that time.

markusbkk avatar Dec 23 '23 16:12 markusbkk

> ianhein: How can I change the VRAM? The script gives me 1024 MB, but my 5600XT has 6GB VRAM

There's a file, I can't remember which one, with a .py extension, where this 1024 value is hardcoded. I found it in the issues discussing AMD errors. I also changed another .py file (I don't remember which one either) which supposedly fixed some error with dependencies.
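A hypothetical sketch of the kind of edit being described (the variable name is an assumption, not the actual ComfyUI source): DirectML builds often cannot query VRAM, so a fallback value may be hardcoded, and raising it tells the model manager the card's real capacity.

```python
# Hypothetical fallback constant -- name and location are assumptions.
FALLBACK_VRAM_MB = 1024       # the 1024 MB the script reports by default
FALLBACK_VRAM_MB = 6 * 1024   # override for a 6 GB card such as the RX 5600 XT
print(FALLBACK_VRAM_MB)       # 6144
```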

maxim-saplin avatar Dec 23 '23 18:12 maxim-saplin

> markusbkk: Is that 352 seconds per image, or 352 seconds for both of those images? And was that on the default fast setting?

352s for both images. I also tried generating 4 images (1024x1024, 8 iterations) and it took ~11 minutes for the entire set.
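As a quick sanity check, the per-image figures from the numbers in this thread work out to:

```python
# Rough per-image times derived from the timings reported in this thread.
extreme_704 = 352 / 2       # 2 images in 352 s  -> 176.0 s per image
std_1024 = 11 * 60 / 4      # 4 images in ~11 min -> 165.0 s per image
m2_fast = 6 * 60 + 20       # Apple M2 figure quoted above -> 380 s per image
print(extreme_704, std_1024, m2_fast)  # 176.0 165.0 380
```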

maxim-saplin avatar Dec 23 '23 18:12 maxim-saplin

Please make sure the generation speed is similar to the results we have in https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#minimal-requirement, and feel free to provide further insights into your setup/config. Closing for now; the initial issue can be solved by following the troubleshooting guide at https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md (especially the swap section) after checking the minimum requirements in https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#minimal-requirement
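For anyone hitting this later, the size in the original error message is worth a closer look. The failed allocation is tiny, which suggests the Windows commit limit (RAM + pagefile) was exhausted rather than 32 GB of physical RAM being literally full; that is why enlarging the swap file, as the troubleshooting guide recommends, helps.

```python
# 52,428,800 bytes from the traceback is exactly 50 MiB -- a float32 copy
# of roughly 13.1 million weights at 4 bytes each.
nbytes = 52428800
print(nbytes / 2**20)  # 50.0 (MiB)
print(nbytes // 4)     # 13107200 float32 elements
```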

mashb1t avatar Jan 01 '24 19:01 mashb1t