It just sits on "Waiting for task to start"

Open yellowskipants opened this issue 1 year ago • 15 comments

Having the same problem here, anybody got a fix for "Waiting for task to start"?

Originally posted by @GlennWoodward in https://github.com/lllyasviel/Fooocus/issues/129#issuecomment-1838380550

yellowskipants avatar Dec 05 '23 23:12 yellowskipants

Having the same issue, did you find a solution?

VictorJacques avatar Dec 06 '23 17:12 VictorJacques

same here

YancyFrySr avatar Dec 07 '23 15:12 YancyFrySr

I had the same(?) issue while I was trying this application for the first time.

I had downloaded the installation folder, unzipped it, then ran the run.bat file. A console indicated that it was downloading the models, then said "App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865". At that moment, my web browser was displaying the application's web page. I typed some keywords, then pressed Generate. It got stuck with the message "Waiting for task to start".

Well, the console was still open, so I tried just pressing Enter in that console, and it unlocked the "tasks". => Problem solved for me. (I don't know if your problem was the same.)

I would suggest that the developers add a little message "Press Enter to continue" after the message "App started successful. [...]" :-).
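For what it's worth, the press-Enter symptom is typical of the Windows console's QuickEdit mode: a stray click in the window starts a text selection that pauses console output (and anything blocked on it) until a key is pressed, which looks exactly like a hung task. That cause is an assumption, not a confirmed Fooocus bug. A minimal sketch of a workaround that disables QuickEdit for the current console via the Win32 API (a no-op on other platforms):

```python
import ctypes
import sys

def disable_quick_edit() -> bool:
    """Best-effort: turn off the Windows console's QuickEdit mode, which
    pauses all console output whenever text is selected in the window.
    Returns True if the mode was changed, False otherwise."""
    if sys.platform != "win32":
        return False  # nothing to do outside the Windows console
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetStdHandle(-10)  # STD_INPUT_HANDLE
    mode = ctypes.c_uint32()
    if not kernel32.GetConsoleMode(handle, ctypes.byref(mode)):
        return False  # no attached console (e.g. launched from a GUI)
    ENABLE_QUICK_EDIT_MODE = 0x0040
    ENABLE_EXTENDED_FLAGS = 0x0080  # must be set when changing QuickEdit
    new_mode = (mode.value & ~ENABLE_QUICK_EDIT_MODE) | ENABLE_EXTENDED_FLAGS
    return bool(kernel32.SetConsoleMode(handle, new_mode))

if __name__ == "__main__":
    print(disable_quick_edit())
```

Calling something like this early in the launcher would make the "press Enter to continue" workaround unnecessary on affected setups.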

akreil avatar Dec 09 '23 01:12 akreil

I am using LambdaLabs with public link on gradio.live and it is stuck on "waiting for task to start"

sonusingh avatar Dec 09 '23 08:12 sonusingh

Having the same problem here, anybody got a fix for "Waiting for task to start"?

Originally posted by @GlennWoodward in #129 (comment)

Hey, I have the same problem, but using a custom model checkpoint. When I use any model that is not Juggernaut it just sits on "Waiting for task to start"; using Juggernaut I didn't have any problems at all.

Virtuxdev avatar Dec 09 '23 22:12 Virtuxdev

Having the same problem here, anybody got a fix for "Waiting for task to start"?

Originally posted by @GlennWoodward in #129 (comment)

Hey, I have the same problem, but using a custom model checkpoint. When I use any model that is not Juggernaut it just sits on "Waiting for task to start"; using Juggernaut I didn't have any problems at all.

I have this problem while using Juggernaut.

IamTirion avatar Dec 12 '23 22:12 IamTirion

Hello, you may look at https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md or, if it still does not work, paste the full log for us to take a look.

lllyasviel avatar Dec 12 '23 22:12 lllyasviel

Hello, you may look at https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md or, if it still does not work, paste the full log for us to take a look.

It does not work for me. I have already increased my system swap to 40 GB. However, in Automatic1111, disabling memmapping for safetensors worked for me. May I ask if such an option exists in Fooocus?

Edit: Sorry, I should have mentioned: eventually it does generate an image for me, and after that the generation speed is normal. The same long loading time comes up again if I switch models, so I suspect this is a problem with loading the model.

Edit 2: Sorry, I was too sleepy. I forgot to paste the log. I'll do that soon.
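For background on the memmapping question: safetensors checkpoints are typically loaded memory-mapped, so tensor bytes are pulled from disk lazily as they are first touched rather than in one up-front read; on a slow or failing disk that shows up as a long stall while the model loads. A stdlib-only sketch of the two access patterns (the 1 MiB scratch file is a stand-in, not a real checkpoint, and whether Fooocus exposes a toggle for this is not confirmed in this thread):

```python
import mmap
import os
import tempfile

# Create a small scratch file as a stand-in for a model checkpoint.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * (1 << 20))  # 1 MiB of zeros
    path = f.name

# Eager read: the whole file is pulled into RAM immediately; all disk
# latency is paid up front, before any of the data is used.
with open(path, "rb") as f:
    eager = f.read()

# Memory-mapped read: pages are faulted in from disk only when touched,
# so a slow disk stalls *while* the data is being used, not at open().
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        lazy_byte = mm[512]  # touching this offset triggers the real disk read

os.remove(path)
print(len(eager), lazy_byte)
```

Presumably the Automatic1111 setting mentioned above switches model loading from the second pattern to the first, trading a longer up-front read for fewer mid-run stalls.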

IamTirion avatar Dec 12 '23 23:12 IamTirion

Enter LCM mode.
[Fooocus] Downloading LCM components ...
[Parameters] Adaptive CFG = 1.0
[Parameters] Sharpness = 0.0
[Parameters] ADM Scale = 1.0 : 1.0 : 0.0
[Parameters] CFG = 1.0
[Parameters] Seed = 1389215022623263936
[Parameters] Sampler = lcm - lcm
[Parameters] Steps = 8 - 8
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection', 'conditioner.embedders.1.model.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: D:\Fooocus\Fooocus\models\checkpoints\albedobaseXL_v13.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ('sdxl_lcm_lora.safetensors', 1.0)] for model [D:\Fooocus\Fooocus\models\checkpoints\albedobaseXL_v13.safetensors].
Loaded LoRA [D:\Fooocus\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus\Fooocus\models\checkpoints\albedobaseXL_v13.safetensors] with 788 keys at weight 0.1.
Loaded LoRA [D:\Fooocus\Fooocus\models\loras\sdxl_lcm_lora.safetensors] for UNet [D:\Fooocus\Fooocus\models\checkpoints\albedobaseXL_v13.safetensors] with 788 keys at weight 1.0.
Requested to load SDXLClipModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.65 seconds
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] Eva Green, very coherent, symmetry, charismatic, sharp focus, cinematic, highly detailed, elegant, creative, color deep background, light great composition, intricate, innocent, novel, romantic, stunning, aesthetic, fine, sublime, extremely inspirational, beautiful, epic, artistic, inspiring, thoughtful, vibrant, best, awesome, perfect, singular, cute, brilliant, inspired
[Fooocus] Encoding positive #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1080, 1920)
Preparation time: 231.73 seconds
Using lcm scheduler.
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.39970141649246216, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.02 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.39it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.18 seconds
Image generated with private log at: D:\Fooocus\Fooocus\outputs\2023-12-13\log.html
Generating and saving time: 12.05 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.64 seconds
Total time: 245.78 seconds

IamTirion avatar Dec 13 '23 06:12 IamTirion

The log indicates that the HDD is too old and may have some problems. Putting the model on a healthy disk may help, but using an SSD is best. @IamTirion

lllyasviel avatar Dec 13 '23 06:12 lllyasviel

The log indicates that the HDD is too old and may have some problems. Putting the model on a healthy disk may help, but using an SSD is best. @IamTirion

Oh, OK. Thank you so much. Indeed, my entire computer is about 5 years old. Is that pretty old?

IamTirion avatar Dec 13 '23 07:12 IamTirion

Same issue, 100% disk activity when the "Waiting for task to start" message appears. I'm using an external SSD. This only occurs during the initial run and when switching models. Automatic1111 also takes a long time to load models for me. It's almost certainly a problem with my disk.

ronnyskog avatar Dec 14 '23 06:12 ronnyskog

I have the same issue, it says "Waiting for the task to start". I have an NVIDIA GTX 1650 with 4 GB VRAM and 16 GB RAM. Here is my log:

D:\Downloads\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.855
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 4096 MB, total RAM 15791 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1650 : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816

minchan-developer avatar Dec 25 '23 07:12 minchan-developer

@ronnyskog @minchan-developer The reason your disks are being used so heavily is that your VRAM is most likely exhausted (4 GB triggers low-VRAM mode automatically) and your RAM is spilling over into swap. Please check whether it works after a few minutes or whether Fooocus crashes, and keep us posted.
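To make that fallback concrete, here is a hypothetical sketch of the decision the earlier log describes. The function name and structure are illustrative, not Fooocus's actual code; only the 4 GB threshold and the --always-normal-vram opt-out come from the console message itself:

```python
# Illustrative sketch: Fooocus's log says it switches to a low-VRAM mode
# when the GPU reports 4 GB or less, unless --always-normal-vram is passed.
def choose_vram_state(total_vram_mb: int, always_normal_vram: bool = False) -> str:
    LOW_VRAM_THRESHOLD_MB = 4 * 1024  # "4GB or less" per the console message
    if total_vram_mb <= LOW_VRAM_THRESHOLD_MB and not always_normal_vram:
        return "LOW_VRAM"   # weights are offloaded, so RAM/swap/disk take the load
    return "NORMAL_VRAM"

print(choose_vram_state(4096))        # the GTX 1650 case from the log -> LOW_VRAM
print(choose_vram_state(4096, True))  # opting out via the flag -> NORMAL_VRAM
```

In low-VRAM mode the model is shuffled between GPU and system RAM, so once RAM runs out, swap (and therefore the disk) absorbs the difference, which matches the 100% disk activity reported above.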

mashb1t avatar Dec 29 '23 16:12 mashb1t

Having the same issue with a Mac M2 Pro.

AnatoliW avatar Dec 30 '23 21:12 AnatoliW

Running Fooocus on my old i3 processor with an NVIDIA MX130 and 8 GB RAM. I had the same issue of "waiting for the process to start", and it waits forever. Then I tried --always-low-vram. This took it to the next step, UNet, and it kept halting there indefinitely. I waited a day and a half to see if there could be any possible progress, but no hope.

I was not ready to give up, so I tried running it with the AMD arguments in the run.bat file:

.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
.\python_embeded\python.exe -m pip install torch-directml
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
pause

and it worked. But it's very, very slow.

kirannadukandi avatar Jan 11 '24 05:01 kirannadukandi

Gist: use a GPU with sufficient VRAM/power, and when using swap, it's best to use an SSD.

mashb1t avatar Jan 11 '24 07:01 mashb1t

I have the same problem, stuck on "Waiting for task to start ...". In the console I have this error:

Traceback (most recent call last):
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/valery/Downloads/Projects/ai/Fooocus/modules/async_worker.py", line 25, in worker
    import modules.default_pipeline as pipeline
  File "/Users/valery/Downloads/Projects/ai/Fooocus/modules/default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "/Users/valery/Downloads/Projects/ai/Fooocus/modules/core.py", line 1, in <module>
    from modules.patch import patch_all
  File "/Users/valery/Downloads/Projects/ai/Fooocus/modules/patch.py", line 8, in <module>
    import modules.anisotropic as anisotropic
  File "/Users/valery/Downloads/Projects/ai/Fooocus/modules/anisotropic.py", line 10, in <module>
    def _compute_zero_padding(kernel_size: tuple[int, int] | int) -> tuple[int, int]:
TypeError: unsupported operand type(s) for |: 'types.GenericAlias' and 'type'

valerymihaylov avatar Jan 23 '24 07:01 valerymihaylov

@valerymihaylov you have to use at least Python 3.10.

mashb1t avatar Jan 23 '24 07:01 mashb1t

I had the same(?) issue while I was trying this application for the first time.

I had downloaded the installation folder, unzipped it, then ran the run.bat file. A console indicated that it was downloading the models, then said "App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865". At that moment, my web browser was displaying the application's web page. I typed some keywords, then pressed Generate. It got stuck with the message "Waiting for task to start".

Well, the console was still open, so I tried just pressing Enter in that console, and it unlocked the "tasks". => Problem solved for me. (I don't know if your problem was the same.)

I would suggest that the developers add a little message "Press Enter to continue" after the message "App started successful. [...]" :-).

I pressed Ctrl+C and then it suddenly started generating the image for me.

yaohuiwu avatar May 16 '24 09:05 yaohuiwu