CLIPTextEncodeFlux very slow
Your question
CLIPTextEncodeFlux is taking around 80-90 seconds; this started a few days ago. Before that it didn't take nearly as long. No VRAM issues or anything, and I'm on the latest version of ComfyUI, etc...
Logs
No response
Other
No response
OK, it got fixed by updating PyTorch to 2.5.0, but then the sampler got twice as slow instead... from 1.1 s/it to 2.2 s/it...
Same issue here, CLIPTextEncodeFlux has been taking painfully long in the last few days. I didn't update PyTorch to 2.5.0...
What is your workflow and what is your hardware spec?
All right, I've attached the workflow and the ComfyUI log. The first run of the workflow took 444 s, the next run only ~250 s, because the CLIP text was already encoded. This workflow has a FaceDetailer in it, but the prompt I ran generated an image without discernible faces, so that step was basically skipped. But I used to run this workflow (including the FaceDetailer steps) in under 300 s!
My hardware specs: i7-10700K, 32 GB RAM, RTX 3060 with 12 GB VRAM, Windows 11 Pro; ComfyUI and the models are all on SSDs. Thank you! :)
I got it fixed. I updated torch to 2.5.0, then downgraded again to 2.4.1, and now my text encode takes about 6 seconds.
OK, how exactly did you do that? Because I get an error when I try to update torch to 2.5.0: "ERROR: No matching distribution found for torch==2.5.0"
Go to https://pytorch.org/ and choose Preview (Nightly) in the install selector.
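For anyone unsure what that selector translates to on the command line, here is a sketch of the pip commands involved. The `python_embeded` path is an assumption taken from the portable Windows build discussed later in this thread, and the commands are printed for review rather than executed, so double-check the paths before running them yourself.

```shell
# Sketch: switching the embedded torch build to the nightly (preview) channel.
# The python_embeded path is assumed from the portable Windows install;
# commands are echoed for review, not executed.
PIP='python_embeded\python.exe -s -m pip'

# Uninstall first: otherwise pip reports "Requirement already satisfied"
# and leaves the old build in place.
echo "$PIP uninstall -y torch torchvision torchaudio"

# --pre plus the nightly index is what makes torch 2.5.0 resolvable;
# without them pip only sees stable releases, hence the
# "No matching distribution found" error.
echo "$PIP install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121"
```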
Please bear with me, I must be doing something wrong... So, I ran this command and only got "Requirement already satisfied" messages:
e:\ComfyUI_windows_portable>python_embeded\python.exe -s -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
Looking in indexes: https://download.pytorch.org/whl/nightly/cu121
Requirement already satisfied: torch in e:\comfyui_windows_portable\python_embeded\lib\site-packages (2.4.0+cu121)
Requirement already satisfied: torchvision in e:\comfyui_windows_portable\python_embeded\lib\site-packages (0.19.0+cu121)
Requirement already satisfied: torchaudio in e:\comfyui_windows_portable\python_embeded\lib\site-packages (2.4.0+cu121)
Requirement already satisfied: filelock in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from torch) (3.15.4)
Requirement already satisfied: typing-extensions>=4.8.0 in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from torch) (4.12.2)
Requirement already satisfied: sympy in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from torch) (1.12.1)
Requirement already satisfied: networkx in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from torch) (3.3)
Requirement already satisfied: jinja2 in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from torch) (3.1.4)
Requirement already satisfied: fsspec in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from torch) (2024.5.0)
Requirement already satisfied: numpy in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from torchvision) (1.26.4)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from torchvision) (10.4.0)
Requirement already satisfied: MarkupSafe>=2.0 in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from jinja2->torch) (2.1.5)
Requirement already satisfied: mpmath<1.4.0,>=1.1.0 in e:\comfyui_windows_portable\python_embeded\lib\site-packages (from sympy->torch) (1.3.0)
All right, I had to uninstall first... stupid me :))) So, I uninstalled it:
python_embeded\python.exe -s -m pip uninstall torch torchvision torchaudio
then ran again:
python_embeded\python.exe -s -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
After that, I fired up Comfy... got some errors of course, but it started. Ran the workflow and bam, ~3 s for CLIPTextEncodeFlux. But, as happened to you, it doubled my sampler's s/it. Uninstalled again, then reinstalled torch 2.4.1. Ran Comfy again, reloaded the workflow, and now it takes ~20 s for CLIPTextEncodeFlux, with inference time back to normal. Still a bit long, but at least I got back to under 300 s for this workflow :)
Hehe, yeah, same here, the sampler got a lot slower with 2.5.0!
I updated Comfy and now it's damn slow again... 80-90 s
Yeah, made the same mistake :(
"#112 [CLIPTextEncodeFlux]: 143.94s".... Getting sick of it haha
Eventually I solved it the same way as before, but it's annoying...
Yea but it will come back ;)
I can't believe we're the only lucky ones to encounter this. Nobody else has this issue?! :))
Yeah, I know, it's unbelievable... you'd expect to read about it everywhere! What hardware do you have?
Look above, at the first part of this thread. I have listed my hardware and comfy.log (there are hardware specs in it)
Ah, I see... not the same as mine, so it's nothing to do with that. Everyone I asked has never had a problem with it, so I don't know what's going on. Annoying as hell anyway :D
I have the problem too. It takes the same amount of time as the sampler does.
For those experiencing this issue, please check the following:
- Does the same phenomenon occur when using a normal CLIP model instead of a GGUF one?
- When checking memory usage in Task Manager, is swapping occurring?
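Both checks can be done at once while the node runs. One way, assuming an NVIDIA card with nvidia-smi on the PATH; the command is echoed here for review, then you'd run it in a second terminal while the workflow executes:

```shell
# Poll GPU utilization and VRAM usage once per second while the
# workflow runs (assumes nvidia-smi is on PATH; echoed for review).
WATCH='nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1'
echo "$WATCH"
# If utilization.gpu stays near 0% during CLIPTextEncodeFlux, the encode
# is falling back to CPU; if memory.used is low but system RAM is maxed
# out in Task Manager, the model is likely being swapped to disk.
```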
Got the same issue, and I found the GPU is not being used while CLIPTextEncodeFlux is running.
I found my problem. It happened when I restarted Comfy from the Manager; then it doesn't start with my args.
I fixed it with the following steps. I have no idea which one did it, so please try the 3rd step first:
- run update_comfyui_and_python_dependencies.bat
- install pytorch 2.4.0 manually
- remove the --lowvram in run_nvidia_gpu.bat
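For the third step: in the portable build, run_nvidia_gpu.bat is normally a short launcher, so removing --lowvram just means deleting that one flag from it. The stock file (shown here as a reference, without --lowvram) looks like this; any other flags you added yourself would stay on the same line:

```
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
pause
```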
Oh... I need to check that. Edit: as far as I tested it, when rebooting through the Manager, all args are being passed correctly without any omissions.
This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.