ComfyUI
Loading Graph with missing models/invalid inputs breaks the whole graph
### Expected Behavior
The graph should load with its connections intact, and the problematic node should show an error (or error out when it is run).
### Actual Behavior
If any node has an error, the entire graph is disconnected.
Failed to validate prompt for output image_saver:
* CheckpointLoaderSimple checkpoint_loader:
  - Value not in list: ckpt_name: 'models/stable-diffusion-xl-base-1.0/last.safetensors' not in (list of length 111)
* IPAdapterModelLoader ip_adapter_loader:
  - Value not in list: ipadapter_file: 'ip-adapter_sdxl_vit-h.safetensors' not in []
* CLIPVisionLoader clip_vision_encoder:
  - Value not in list: clip_name: 'CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors' not in []
* ControlNetLoader controlnet_loader:
  - Value not in list: control_net_name: 'control_sdxl_canny.safetensors' not in []
Output will be ignored
Failed to validate prompt for output safety_image_saver:
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
### Steps to Reproduce
### Debug Logs
got prompt
invalid prompt: {'type': 'prompt_no_outputs', 'message': 'Prompt has no outputs', 'details': '', 'extra_info': {}}
### Other
_No response_
Same problem here.
Since updating a day or so ago, same issue. If I load a workflow with missing nodes, rather than loading the flow and showing the missing nodes in red, I get... nothing. A blank canvas. This is repeatable for me on any workflow where I don't have the necessary nodes installed. A couple of days ago, this did not happen.
Can you try running it with --disable-all-custom-nodes to see if the issue is related to a custom node?
Sorry to be dense, but where does this go? cli_args.py ? server.py?
python main.py --disable-all-custom-nodes
Just... anywhere in there? I can't find where in that file flags would go, maybe because I have no other custom flags in that file.
Honestly it would be better if startup args / flags could go in the same place as the extra paths yaml, similar to Auto1111: all custom options (paths and args) in one place.
While we're talking about that, I had a heck of a time trying to find a definitive answer on where to tell Comfy to use GPU1, not GPU0. So yeah, having ONE easy-to-find place for common flags / debug flags, without the risk of messing up the MAIN py file, could be good for people. I.e. apparently CUDA DEVICE goes in cli_args, but now you're telling me that a different flag goes in a different file. Happy to try to debug this, but it's not straightforward if different flags need to go in different files.
You have to edit run_nvidia_gpu.bat if you are using portable version.
I don't understand why you're mentioning cli_args.py. That's the module that implements command-line option parsing, not something the user should be concerned with.
The only file a user needs to know about is main.py, for execution, and users should not modify any .py files, including main.py.
I had to edit cli_args to use the 2nd GPU; according to what I found via Google, that was the only way to do it. I mentioned it because what I would have expected was one easy-to-find place to add startup flags, like --CUDA_DEVICE=1 or --disable-all-custom-nodes as you said to add. The original question was where to add that, which brought up the topic of where to add any flags at all. main.py is a complex Python file, so not the best place for users to add custom flags, I agree!
So what you are saying is the --disable-all-custom-nodes should actually go in run_nvidia_gpu.bat, not main.py?
Thanks for the help. It must be some dud nodes, because as of later yesterday some other workflows have been loading and showing their red (missing) nodes. But other workflows seem to load, yet I get a blank canvas and no visible nodes anywhere (zooming out, panning around... nothing).
Thanks
yup.
cli_args.py is where new options get defined when features are added.
Users can simply add options next to the --windows-standalone-build option in run_nvidia_gpu.bat.
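For reference, in the portable build the launch line in run_nvidia_gpu.bat looks roughly like this (exact paths may differ on your install), and extra flags simply get appended to the end of that line:

.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-all-custom-nodes
pause

As far as I know, the GPU question above works the same way: --cuda-device 1 is a startup argument defined in cli_args.py, so it also goes on that launch line (or after main.py when launching manually), not into any .py file.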
Hello, I'm having the same problem with the latest version in a clean install on Ubuntu.
I'm running it like this: python main.py --listen --front-end-version Comfy-Org/ComfyUI_frontend@latest
log:
[email protected]:/dejav/ComfyUI$ python main.py --listen --front-end-version Comfy-Org/ComfyUI_frontend@latest
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-09-14 02:39:51.650919
** Platform: Linux
** Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
** Python executable: /opt/conda/bin/python
** ComfyUI Path: /dejav/ComfyUI
** Log path: /dejav/ComfyUI/comfyui.log
Prestartup times for custom nodes:
0.8 seconds: /dejav/ComfyUI/custom_nodes/ComfyUI-Manager
Total VRAM 24260 MB, total RAM 64248 MB
pytorch version: 2.4.0
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: /dejav/ComfyUI/web_custom_versions/Comfy-Org_ComfyUI_frontend/1.2.53
/opt/conda/lib/python3.11/site-packages/kornia/feature/lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
### Loading: ComfyUI-Manager (V2.50.3)
### ComfyUI Revision: 2689 [cf80d286] | Released on '2024-09-13'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
generated new fontManager
Import times for custom nodes:
0.0 seconds: /dejav/ComfyUI/custom_nodes/websocket_image_save.py
0.1 seconds: /dejav/ComfyUI/custom_nodes/ComfyUI-Manager
0.6 seconds: /dejav/ComfyUI/custom_nodes/ComfyUI-AdvancedLivePortrait
Starting server
To see the GUI go to: http://0.0.0.0:8188
FETCH DATA from: /dejav/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
I drop in different workflows with missing nodes and it shows the warning but nothing else, and nothing appears in the logs (I don't have any extra logging enabled).
Same behaviour when running with --disable-all-custom-nodes: it shows the warning but nothing after that.
Same problem here: none of my old workflows can be opened properly, and my only change was updating Comfy to the latest version.
The first time I try, I get a JavaScript popup box: "Invalid workflow against zod schema: Validation error: Required at nodes[2].order; Required at nodes[2].order; Required at nodes[4].order; ...."
After that I see this error message in the ComfyUI error box, but I am not sure whether it is actually what causes the problem:
TypeError: Cannot read properties of null (reading 'split')
at nodeType.onConfigure (http://127.0.0.1:8188/extensions/ComfyUI_tinyterraNodes/ttN.js:641:56)
at ComfyNode.configure (http://127.0.0.1:8188/assets/index-Dfv2aLsq.js:60776:14)
at _LGraph2.configure (http://127.0.0.1:8188/assets/index-Dfv2aLsq.js:60559:20)
at LGraph.configure (http://127.0.0.1:8188/assets/index-Dfv2aLsq.js:80185:26)
at LGraph.configure (http://127.0.0.1:8188/extensions/ComfyUI-Custom-Scripts/js/reroutePrimitive.js:14:29)
at LGraph.configure (http://127.0.0.1:8188/extensions/ComfyUI-Custom-Scripts/js/snapToGrid.js:160:21)
at ComfyApp.loadGraphData (http://127.0.0.1:8188/assets/index-Dfv2aLsq.js:80641:18)
at async app2.loadGraphData (http://127.0.0.1:8188/assets/index-Dfv2aLsq.js:74290:18)
at async app.loadGraphData (http://127.0.0.1:8188/extensions/ComfyUI-Manager/components-manager.js:771:9)
at async app.handleFile (http://127.0.0.1:8188/extensions/ComfyUI_smZNodes/js/metadata.js:48:29)
It loads the nodes onto the workspace; some are connected, some are not; some seem to be in their right positions, others are all stacked in a heap and disconnected. It's the same with all old workflows, and disabling the custom node package mentioned in the error does not change the outcome.
I wonder if this is connected with litegraph. A similar issue was closed 2 days ago in the ComfyUI_frontend repo; it would just be nice if that fix also made it into the release: https://github.com/Comfy-Org/ComfyUI_frontend/issues/792
In my case I was able to load the graph again after patching the error inside the custom node (so the initial error message was actually correct). Nevertheless, it is not great that a single node can break an entire workflow (given there was a way to handle errors like this before).
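In case it is useful to anyone hitting the same TypeError: the crash comes from calling .split() on a value that old workflows can serialize as null, so the kind of guard needed is essentially a null check before that call. A purely illustrative sketch (not the actual ttN.js source; the function name is made up):

// Illustrative only: old workflows can serialize a widget value as null,
// which is what triggers "Cannot read properties of null (reading 'split')".
function safeSplit(value) {
  // Guard before splitting; fall back to an empty list instead of throwing.
  return typeof value === "string" ? value.split(",") : [];
}
console.log(safeSplit("a,b,c")); // ["a", "b", "c"]
console.log(safeSplit(null));    // [] rather than a TypeError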
This would be a frontend issue: https://github.com/Comfy-Org/ComfyUI_frontend
Also note that API workflows are different from normal workflows. You should expect some unusual behavior when importing API workflows into the frontend, especially if the workflow has errors in it. Use normal (user) workflows if you want them to load in the graph view.
I don't think I was experiencing this issue with an API workflow/PNG, it came from openart.ai. BUT it could have been
How do you tell, from the JSON or PNG, if it was?
It happened with a few but there were no other comments on them about things not working, they were highly rated. I can't find the exact one right now, I will follow up
Mine wasn't with an API workflow. I had to roll back to frontend v1.2.47 to get everything working again. The problem started in v1.2.48.
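For anyone wanting to do the same rollback: assuming you launch the way shown earlier in this thread, pinning the older frontend should just be a matter of passing the specific version to the same flag instead of @latest, something like:

python main.py --front-end-version Comfy-Org/[email protected]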
I noted the API thing because the OP references workflow_api (5).json as the replication file.
Regardless, please post the issue on the frontend repo, not the core repo
There's a separate repo for the frontend? I installed from here, https://github.com/comfyanonymous/ComfyUI/, which is where we are now, right?
EDIT: Ah, I see, they moved the frontend to a separate repo in August. Which was just recently, and I installed some time ago. They really bury that info at the bottom of the page though. OK