Invalid cast on Intel GPUs with Z-Image and LoRA
Custom Node Testing
- [x] I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
Getting an actual image generated with a LoRA on the Z-Image model.
Actual Behavior
When using the Z-Image fp8_e4m3fn scaled model with a LoRA, I get an invalid cast warning:
/workspace/ComfyUI/nodes.py:1604: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
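That RuntimeWarning means the float image array already contains NaN/Inf values before the uint8 cast at nodes.py:1604. A minimal standalone sketch (synthetic array, not the real ComfyUI tensor) that reproduces the same warning, plus an illustrative sanitize step:

```python
import numpy as np
from PIL import Image

# Stand-in for a broken decode: an all-NaN float image.
i = np.full((64, 64, 3), np.nan, dtype=np.float32)

# Same pattern as nodes.py:1604 -- emits "RuntimeWarning: invalid value encountered in cast".
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))

# Illustrative guard only (not the actual ComfyUI fix): sanitize before the cast.
i_safe = np.nan_to_num(i, nan=0.0, posinf=255.0, neginf=0.0)
img = Image.fromarray(np.clip(i_safe, 0, 255).astype(np.uint8))
```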
Steps to Reproduce
Just generate a Z-Image with a LoRA (LoRA output connected to AuraFlowSampler; I also tried without that connection and got the same error). I suspect this is only an issue on Intel GPUs, but still.
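To help narrow down where the invalid values originate (LoRA-patched sampling vs. the decode step), a small hypothetical helper; the tensor passed in is a placeholder for whatever your workflow produces:

```python
import torch

def report_invalid(t: torch.Tensor, label: str = "tensor") -> None:
    """Print how many NaN/Inf values a tensor contains."""
    nan = torch.isnan(t).sum().item()
    inf = torch.isinf(t).sum().item()
    print(f"{label}: {nan} NaN / {inf} Inf out of {t.numel()} values")

# e.g. report_invalid(decoded_images, "decoded images")  # `decoded_images` is a placeholder name
```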
Debug Logs
/workspace/ComfyUI/nodes.py:1604: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
Other
No response
I filed a similar issue on ComfyUI-Impact-Pack, but it's probably not that code. I'm using NVIDIA RTX PRO 6000 Blackwell Max-Qs.
https://github.com/issues/created?issue=ltdrdata%7CComfyUI-Impact-Pack%7C1144
Does that look like your issue? It happens on Qwen and Z-Image for me.
Use the Z-Image files from our Hugging Face repo and this likely won't be a problem: https://huggingface.co/Comfy-Org/z_image_turbo
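For reference, one way to pull that repo locally (a sketch; `local_dir` is an assumption about your install layout):

```python
from huggingface_hub import snapshot_download

# Mirrors the repo layout into local_dir; adjust the path to your ComfyUI install.
snapshot_download(
    repo_id="Comfy-Org/z_image_turbo",
    local_dir="ComfyUI/models",
)
```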
I verified that the Qwen image model files I'm using match the SHA256 hashes published in the Comfy-Org repo (verification sketch after the list):
- Qwen image model from Hugging Face: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/diffusion_models/qwen_image_bf16.safetensors
  Published SHA256: d08fb5d68026c0d87325f9a7b3ad6454061113a2bc73cc883114dae172937ae7
  Local sha256sum qwen_image_bf16.safetensors: d08fb5d68026c0d87325f9a7b3ad6454061113a2bc73cc883114dae172937ae7
- Qwen text_encoder model from Hugging Face: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/text_encoders/qwen_2.5_vl_7b.safetensors
  Published SHA256: cfafd739459bc86257397259f612a9aee88e5b98e85b5c0d0d1717e898b3463a
  Local sha256sum qwen_2.5_vl_7b.safetensors: cfafd739459bc86257397259f612a9aee88e5b98e85b5c0d0d1717e898b3463a
- Qwen VAE model from Hugging Face: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/vae/qwen_image_vae.safetensors
  Published SHA256: a70580f0213e67967ee9c95f05bb400e8fb08307e017a924bf3441223e023d1f
  Local sha256sum qwen_image_vae.safetensors: a70580f0213e67967ee9c95f05bb400e8fb08307e017a924bf3441223e023d1f
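For completeness, the check above expressed as a sketch (file paths assume the current directory; the hashes are the published ones listed above):

```python
import hashlib

expected = {
    "qwen_image_bf16.safetensors": "d08fb5d68026c0d87325f9a7b3ad6454061113a2bc73cc883114dae172937ae7",
    "qwen_2.5_vl_7b.safetensors": "cfafd739459bc86257397259f612a9aee88e5b98e85b5c0d0d1717e898b3463a",
    "qwen_image_vae.safetensors": "a70580f0213e67967ee9c95f05bb400e8fb08307e017a924bf3441223e023d1f",
}

for name, want in expected.items():
    h = hashlib.sha256()
    with open(name, "rb") as f:  # assumes the files sit in the working directory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    print(name, "OK" if h.hexdigest() == want else "MISMATCH")
```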
Same on a 5090 when changing the prompt; updating the LoRA strength fixes it. Disabling LoRAs entirely does not raise this error (when changing prompts afterwards).