
There is not enough GPU video memory available. RX580 8GB

DeimaD opened this issue 1 year ago · 4 comments

```
To create a public link, set share=True in launch().
Using directml with device:
Total VRAM 4096 MB, total RAM 16304 MB
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Refiner unloaded.
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra keys {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: E:\AiImage\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [E:\AiImage\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [E:\AiImage\Fooocus_win64_2-1-791\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [E:\AiImage\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.31 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 2946334641726039087
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] A red car, cinematic, futuristic, stunning, highly detailed, elegant, intricate, light shining, sharp focus, composition, dramatic, fine detail, gentle professional still, beautiful, draped, designed, complex, background, ambient, composed, rich dynamic colors, vivid, incredible, inspiring, epic, artistic, true luxury, thoughtful, loving, generous, positive, vibrant
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] A red car, cinematic, extremely detailed, color, intricate, elegant, epic, very coherent, colorful,, ambient, highly saturated colors, sharp focus, surreal, advanced, futuristic, professional,, creative, pure, positive, attractive, cute, best, beautiful, atmosphere, perfect, romantic, dynamic, artistic, calm, unique, awesome, illuminated, shiny
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1408, 704)
Preparation time: 13.54 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.2.ff.net.0.proj.weight Could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available!
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.2.ff.net.2.weight Could not allocate tensor with 26214400 bytes. There is not enough GPU video memory available!
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.2.attn2.to_k.weight Could not allocate tensor with 10485760 bytes. There is not enough GPU video memory available!
Traceback (most recent call last):
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\modules\async_worker.py", line 803, in worker
    handler(task)
  File "E:\AiImage\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AiImage\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\modules\async_worker.py", line 735, in handler
    imgs = pipeline.process_diffusion(
  File "E:\AiImage\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AiImage\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\modules\default_pipeline.py", line 361, in process_diffusion
    sampled_latent = core.ksampler(
  File "E:\AiImage\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AiImage\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\modules\core.py", line 315, in ksampler
    samples = fcbh.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\sample.py", line 93, in sample
    real_model, positive_copy, negative_copy, noise_mask, models = prepare_sampling(model, noise.shape, positive, negative, noise_mask)
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\sample.py", line 86, in prepare_sampling
    fcbh.model_management.load_models_gpu([model] + models, model.memory_required(noise_shape) + inference_memory)
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\modules\patch.py", line 494, in patched_load_models_gpu
    y = fcbh.model_management.load_models_gpu_origin(*args, **kwargs)
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_management.py", line 410, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_management.py", line 293, in model_load
    raise e
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_management.py", line 289, in model_load
    self.real_model = self.model.patch_model(device_to=patch_model_to)  #TODO: do something with loras and offloading to CPU
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_patcher.py", line 191, in patch_model
    temp_weight = fcbh.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)
  File "E:\AiImage\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_management.py", line 532, in cast_to_device
    return tensor.to(device, copy=copy).to(dtype)
RuntimeError: Could not allocate tensor with 10485760 bytes. There is not enough GPU video memory available!
Total time: 256.15 seconds
```
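For scale, the byte counts in the `Could not allocate tensor` errors are tiny relative to 4GB of VRAM. A minimal sketch (the helper function is hypothetical; the sizes are taken from the errors above) to convert them:

```python
def bytes_to_mib(n: int) -> float:
    """Convert a raw byte count, as printed in the allocation error, to MiB."""
    return n / (1024 * 1024)

# Allocation sizes reported in the errors above
for n in (52428800, 26214400, 10485760):
    print(f"{n} bytes = {bytes_to_mib(n):.0f} MiB")
# -> 50 MiB, 25 MiB, 10 MiB
```

Even a 10 MiB tensor fails to allocate, which suggests the card's VRAM is already fully consumed by the time these final layers are being moved onto the device.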

DeimaD commented on Dec 05 '23

You said you have an RX 580 with 8GB of VRAM, but the third line of your terminal output suggests otherwise: `Total VRAM 4096 MB, total RAM 16304 MB`

This means only 4GB of VRAM is available to the card, in which case it is not possible to load the models: together they take about 6GB of VRAM.
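As a quick sanity check, the reported figure can be read straight off the startup log. A minimal sketch (the parsing helper is hypothetical; the line format comes from the log above):

```python
import re

def parse_vram_mb(log_line: str) -> int:
    """Extract the VRAM figure (in MB) from Fooocus' startup log line."""
    match = re.search(r"Total VRAM (\d+) MB", log_line)
    if match is None:
        raise ValueError("no 'Total VRAM' figure found in line")
    return int(match.group(1))

line = "Total VRAM 4096 MB, total RAM 16304 MB"
vram_mb = parse_vram_mb(line)
print(f"Reported VRAM: {vram_mb} MB ({vram_mb / 1024:.0f} GB)")
# 4096 MB = 4 GB, well short of the ~6 GB the SDXL models need
```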

Krupakar-Reddy-S commented on Dec 05 '23

> You said you have an RX 580 with 8GB of VRAM, but the third line of your terminal output suggests otherwise: `Total VRAM 4096 MB, total RAM 16304 MB`
>
> This means only 4GB of VRAM is available to the card, in which case it is not possible to load the models: together they take about 6GB of VRAM.

That is because I had allocated only 4GB of VRAM. I tried allocating 6GB, but I still get the same error.

DeimaD commented on Dec 06 '23

Please try allocating the remaining VRAM, and make sure you have system swap enabled and sufficiently sized; see https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md#system-swap. Let us know if you require further assistance.

mashb1t commented on Dec 29 '23

As of https://github.com/lllyasviel/Fooocus/commit/8e62a72a63b30a3067d1a1bc3f8d226824bd9283, AMD cards with 8GB of VRAM are now supported. Please try again with at least 8GB of VRAM allocated.

mashb1t commented on Dec 30 '23