SOS! After the update, the picture that comes out is all black.
M1 Max
/Users/weiwei/ComfyUI/nodes.py:1408: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
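For context, this warning usually means the decoded image tensor contains NaN (or Inf) values: `np.clip` leaves NaNs in place, and casting them to `uint8` is what triggers "invalid value encountered in cast", with undefined pixel values as the result. A minimal sketch (not ComfyUI code) reproducing the mechanism:

```python
# Sketch of the mechanism behind the warning: np.clip does not remove
# NaNs, and casting NaN to uint8 emits "invalid value encountered in
# cast" -- the resulting pixel values are undefined, which is how the
# all-black images can appear.
import warnings
import numpy as np

# Stand-in for a broken VAE/sampler output full of NaNs.
i = np.full((2, 2, 3), np.nan, dtype=np.float32)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    img = np.clip(i, 0, 255).astype(np.uint8)  # same cast as nodes.py:1408

if caught:
    print(caught[0].message)  # "invalid value encountered in cast" on recent NumPy
```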
Try updating your PyTorch to the latest nightly. If that doesn't work, you can try: --force-upcast-attention
Thanks for the great reply. I just tried updating to the latest PyTorch nightly and it still didn't solve the problem. Then I tried --force-upcast-attention, which showed:

usage: main.py [-h] [--listen [IP]] [--port PORT] [--tls-keyfile TLS_KEYFILE] [--tls-certfile TLS_CERTFILE]
               [--enable-cors-header [ORIGIN]] [--max-upload-size MAX_UPLOAD_SIZE]
               [--extra-model-paths-config PATH [PATH ...]] [--output-directory OUTPUT_DIRECTORY]
               [--temp-directory TEMP_DIRECTORY] [--input-directory INPUT_DIRECTORY] [--auto-launch]
               [--disable-auto-launch] [--cuda-device DEVICE_ID] [--cuda-malloc | --disable-cuda-malloc]
               [--force-fp32 | --force-fp16] [--bf16-unet | --fp16-unet | --fp8_e4m3fn-unet | --fp8_e5m2-unet]
               [--fp16-vae | --fp32-vae | --bf16-vae] [--cpu-vae]
               [--fp8_e4m3fn-text-enc | --fp8_e5m2-text-enc | --fp16-text-enc | --fp32-text-enc]
               [--directml [DIRECTML_DEVICE]] [--disable-ipex-optimize]
               [--preview-method [none,auto,latent2rgb,taesd]]
               [--use-split-cross-attention | --use-quad-cross-attention | --use-pytorch-cross-attention]
               [--disable-xformers] [--gpu-only | --highvram | --normalvram | --lowvram | --novram | --cpu]
               [--disable-smart-memory] [--deterministic] [--dont-print-server] [--quick-test-for-ci]
               [--windows-standalone-build] [--disable-metadata] [--multi-user] [--verbose]
main.py: error: unrecognized arguments: --force-upcast-attention

It still failed.
Update ComfyUI.
Thanks again. It's been updated. It's still the same.
/Users/weiwei/Envs/comfyui/lib/python3.10/site-packages/diffusers/models/resnet.py:328: FutureWarning:
scale is deprecated and will be removed in version 1.0.0. The scale argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. scale should directly be passed while calling the underlying pipeline component i.e., via cross_attention_kwargs.
deprecate("scale", "1.0.0", deprecation_message)
100%|█████████████████████████████████████████████████████████████████████| 20/20 [00:28<00:00, 1.44s/it]
Requested to load AutoencoderKL
Loading 1 new model
/Users/weiwei/ComfyUI/nodes.py:1408: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
Prompt executed in 40.54 seconds
got prompt
[rgthree] Using rgthree's optimized recursive execution.
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SDXLClipModel
Loading 1 new model
Requested to load SDXL
Loading 1 new model
100%|███████████████████████████████████████████████████████████████████████| 5/5 [00:21<00:00, 4.28s/it]
Requested to load AutoencoderKL
Loading 1 new model
/Users/weiwei/ComfyUI/nodes.py:1408: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
Prompt executed in 32.90 seconds
I'll be rolling back to this version, now that I've got a normal picture with it. Version number: b0ab31d
What version do you mean? I encountered the same problems. It once ran well, but now I can't get it to run correctly again. How do I keep a specified version?
git reset --hard b0ab31d06c5df98b094d8f38db5cda4e5aec47eb
git reset --hard b0ab31d
Run "git reset --hard b0ab31d" in /path/to/ComfyUI?
Just replace the version number with a hash
You mean I can reset to any of the following versions?

git log --oneline -20
1900e51 (HEAD -> master, origin/master, origin/HEAD) Fix potential issue.
276f8fc Print error when node is missing.
4bc1884 Provide a better error message when attempting to execute the workflow with a missing node. (#3517)
09e069a Log the pytorch version.
11a2ad5 Fix controlnet not upcasting on models that have it enabled.
4ae1515 Slightly faster latent2rgb previews.
f37a471 Make --preview-method auto default to the fast latent2rgb previews.
0bdc2b1 Cleanup.
98f828f Remove unnecessary code.
1c4af59 Better error message if the webcam node doesn't work.
91590ad Add webcam node (#3497)
1930065 Don't automatically switch to lowvram mode on GPUs with low memory.
46daf0a Add debug options to force on and off attention upcasting.
58f8388 More proper fix for #3484.
2d41642 Fix lowvram dora issue.
ec6f16a Fix SAG.
bb4940d Only enable attention upcasting on models that actually need it.
b0ab31d Refactor attention upcasting code part 1.
2de3b69 Support saving some more modelspec types.
cf6e1ef Show message on error when loading wf from file (works on drag and drop) (#3466)
Is this OK?

git reset --hard b0ab31d
HEAD is now at b0ab31d Refactor attention upcasting code part 1.
Naturally. Rolling back to a specific version with git is a basic operation.
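For anyone following along, the rollback workflow above can be sketched as follows. The demo builds a throwaway repo so it is safe to run anywhere; in a real ComfyUI checkout you would only run the `git log` and `git reset` lines (and later `git pull` to return to the latest commit).

```shell
# Demonstrate the pin-a-known-good-version workflow in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=u@example.com -c user.name=u commit -q --allow-empty -m "good: last working version"
good=$(git rev-parse --short HEAD)   # plays the role of b0ab31d
git -c user.email=u@example.com -c user.name=u commit -q --allow-empty -m "bad: update that broke things"

git log --oneline                    # find the hash of the last working commit
git reset --hard -q "$good"          # roll back to it
git log --oneline -1                 # confirm: HEAD is now at the good commit
```

Note that `git reset --hard` discards any local changes, so stash or commit anything you want to keep first.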
Done. It works well.
Thank you so much!
Updating PyTorch worked for me, but only for certain schedulers. Karras isn't working with any of the samplers.
Me too. It's linked to the denoise value: at certain values it throws the error; change the value up or down and it might work again. I wasn't too worried, as my usual model and sampler were fine, but after the last update the curse has spread. I'm fully updated, and I get:

ComfyUI/nodes.py:1408: RuntimeWarning: invalid value encountered in cast
I'm getting the same issue on the latest ComfyUI commit on master. Updating Pytorch to the latest nightly doesn't work either.
I'm getting the same issue on the latest ComfyUI commit on master
Getting the same issue after the first generation (the first generation is OK).
Hi guys! I got the same issue when using IPAdapter FaceID Plus V2. I'm running on macOS with an AMD tx5500. Could this issue be affected by this environment?
Same issue after a fresh install with the latest build and commits... On Windows, Nvidia, using Flux.
same
Same issue here since the last update. I think I've narrowed it down to the parser used...
See the log dump below. Note that the first gen is a low-step gen I use in my workflow; the second through fifth gens are the actual full generations. In the first execution it runs fine using the comfy++ parser. In the second execution it fails on generations 2 to 5 with the error mentioned above, and all outputs go black. This is using the 'fixed attention' parser.
got prompt
100%|██████████| 4/4 [00:00<00:00, 4.12it/s]
torch.Size([1, 3, 112, 144]) torch.Size([1, 3, 112, 144])
100%|██████████| 21/21 [00:05<00:00, 4.01it/s]
torch.Size([1, 3, 144, 112]) torch.Size([1, 3, 144, 112])
100%|██████████| 21/21 [00:05<00:00, 3.99it/s]
torch.Size([1, 3, 112, 144]) torch.Size([1, 3, 112, 144])
100%|██████████| 21/21 [00:05<00:00, 4.01it/s]
torch.Size([1, 3, 144, 112]) torch.Size([1, 3, 144, 112])
100%|██████████| 21/21 [00:05<00:00, 4.04it/s]
Prompt executed in 38.17 seconds
got prompt
100%|██████████| 4/4 [00:00<00:00, 4.02it/s]
torch.Size([1, 3, 112, 144]) torch.Size([1, 3, 112, 144])
100%|██████████| 21/21 [00:02<00:00, 7.28it/s]
D:\SDInstalls\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\nodes.py:1498: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
torch.Size([1, 3, 144, 112]) torch.Size([1, 3, 144, 112])
100%|██████████| 21/21 [00:02<00:00, 7.10it/s]
torch.Size([1, 3, 112, 144]) torch.Size([1, 3, 112, 144])
100%|██████████| 21/21 [00:02<00:00, 7.28it/s]
torch.Size([1, 3, 144, 112]) torch.Size([1, 3, 144, 112])
100%|██████████| 21/21 [00:02<00:00, 7.33it/s]
Prompt executed in 28.00 seconds
This is on a fresh install so shouldn't have any conflicts.
EDIT: OK, I partially figured it out. It seems prompts in the latest version using those parsers cannot use square brackets []; if you put them in, the prompt just fails to validate with the error above.
Update ComfyUI.
I still encounter this issue. I am using torch 2.4.0+cu11.8. When running in a local environment it runs smoothly, but after converting it into a Docker image, I encounter this issue when running the image.
# Base image
FROM runpod/base:0.4.1-cuda12.1.0
# Upgrade pip and install Python dependencies from requirements.txt
COPY builder/requirements.txt /requirements.txt
RUN python3.11 -m pip install --upgrade pip && \
    python3.11 -m pip install --ignore-installed --upgrade -r /requirements.txt --no-cache-dir
# Create a symlink for python (if needed)
RUN ln -s /usr/bin/python3.11 /usr/bin/python
# Add source files
ADD src ./
# Run the main script with --force-upcast-attention flag
CMD python3.11 -u /handler.py --force-upcast-attention
This is my Dockerfile. @comfyanonymous @yiwangsimple
Run command: python handler.py
Run command: docker run --gpus all cog-flux-dev-realism