sd-webui-infinite-image-browsing
Feature request: ComfyUI generation data.
It's probably a lot to ask, but might as well try. Basically, whenever you get an image that was generated with ComfyUI, its generation data isn't shown in this extension. I would like a way for it to be shown.
Again, probably asking a lot, but it can't hurt to ask.
I'll take a look at how to implement it when I have some time. If it turns out that many people are interested in this feature, I'll do my best to prioritize it and implement it as soon as possible.
Trying ComfyUI lately due to SDXL and would love to have this feature as well!
I tried ImageGlass/ExifGlass to view the generation data. It shows that ComfyUI adds two tags into the PNG: Prompt and Workflow; both are just super long JSON strings. Workflow is info about all the nodes used in the graph, and Prompt is for every input in the nodes, much more than just the positive/negative prompt.
I guess that would be difficult to parse.
Actually, I wonder if it would be easier to create a custom node in ComfyUI that would save generation data in a more traditional fashion that is compatible with IIB directly.
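In case it helps, here's a minimal sketch of how those two chunks can be read, assuming Pillow is installed and that ComfyUI stores them under the `prompt` and `workflow` keys of the PNG text chunks (which is what my files show):

```python
import json
from PIL import Image  # pip install pillow

def read_comfyui_metadata(path):
    """Return the 'prompt' and 'workflow' JSON objects embedded in a ComfyUI PNG, if present."""
    img = Image.open(path)
    # ComfyUI writes its metadata as PNG text chunks, exposed by Pillow via img.text / img.info
    chunks = getattr(img, "text", None) or img.info
    return {key: json.loads(chunks[key]) for key in ("prompt", "workflow") if key in chunks}

meta = read_comfyui_metadata("comfyui_sample.png")
print(json.dumps(meta.get("workflow", {}), indent=2)[:500])
```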
@Puddz @pto2k
Could someone supply me with some sample images that were generated using ComfyUI, so that I can experiment with them?
Hi,
Here is one sample image for reference:
And here is the screenshot of the graph used.
As you can see, there are 5 text boxes for generating 4 images (2 for the positive/negative prompt of the 1st image and 3 others for the image2image generations), and they are embedded in every one of the generated images.
Please let me know if there's more info I could provide.
Thank you!
I have implemented basic support for ComfyUI ~~which is currently only available in standalone mode~~.
Usage
First, follow the steps below to start IIB.
git clone https://github.com/zanllp/sd-webui-infinite-image-browsing.git
cd sd-webui-infinite-image-browsing
pip install -r requirements.txt
python app.py --port=7888
I recommend adding the output folder of ComfyUI to the search path.
Known issues (ComfyUI only)
However, there is a significant issue at the moment. The EXIF data for an image may contain information from multiple images, making it difficult to determine which information belongs to the current image. I will reach out to their repository to inquire about this.
This is wonderful. Thank you so much!
When there are multiple images' generation information in the EXIF of an image, the displayed generation information may be inconsistent with the actual image, and this issue may not be resolved temporarily. However, if there is any update later, I will continue to follow up on it. For more details, please refer to https://github.com/comfyanonymous/ComfyUI/discussions/1007
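To make the problem concrete, here is a rough sketch of what a parser sees in the embedded `prompt` chunk (field names assume the API-format JSON that ComfyUI writes; adjust if your version differs):

```python
import json
from PIL import Image

def list_text_prompts(path):
    """List every CLIPTextEncode text found in a ComfyUI image's 'prompt' chunk."""
    chunks = getattr(Image.open(path), "text", None) or {}
    prompt = json.loads(chunks.get("prompt", "{}"))
    return [
        (node_id, node.get("inputs", {}).get("text", ""))
        for node_id, node in prompt.items()
        if node.get("class_type") == "CLIPTextEncode"
    ]

for node_id, text in list_text_prompts("comfyui_sample.png"):
    print(node_id, "=>", str(text)[:80])
# With an img2img chain like the one above there can be five or more entries here,
# and nothing in the chunk marks which of them belongs to the saved file.
```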
Thanks for adding it either way
For some reason, it doesn't process or show any prompt & metadata for my recent workflow. Some very early workflow generations work somehow. Perhaps some custom nodes affect how your script extracts the information.
I think one possible solution is to require users to name the PROMPT node (CLIPTextEncode) properly. You see, in ComfyUI there's a so-called "Node name for S&R", which lets us name a node so that we can later refer to it from other nodes to extract its data, e.g. %Ksampler.noise_seed%; in the screenshot above, the BASE prompts can be referred to as %Prompt.BASE.text% / %PromptNeg.BASE.text%.
By enforcing this, there won't be any conflict while collecting prompts, as it will always extract the root prompts or the ones specified by the users (by naming the nodes in the workflow).
My two cents. Please let me know if there's anything I can do to help.
Thanks a lot for making this tool better.
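For illustration, a rough sketch of what that title-based lookup could look like, assuming the user-assigned title is saved in the `workflow` chunk under each node's `title` field (worth double-checking against a real file):

```python
import json
from PIL import Image

def find_prompt_by_title(path, wanted_title):
    """Return the text of the CLIPTextEncode node whose user-assigned title matches, or None."""
    chunks = getattr(Image.open(path), "text", None) or {}
    workflow = json.loads(chunks.get("workflow", "{}"))
    for node in workflow.get("nodes", []):
        # 'title' is only present when the user renamed the node ("Node name for S&R")
        if node.get("type") == "CLIPTextEncode" and node.get("title") == wanted_title:
            values = node.get("widgets_values") or []
            return values[0] if values else None
    return None

print(find_prompt_by_title("comfyui_sample.png", "Prompt.BASE"))
```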
It seems like there is a parsing error. Could you provide the image so that I can take a look at how to handle it?
By the way, @Chubbly mentioned a custom node in #379 that provides webUI-like EXIF data. I think you can give it a try.
For other solutions, I don't want to make things too complicated or the usage steps overly cumbersome at the moment. I feel it would be better to wait for the official human-readable prompt to be available before further discussion.
Sure, please take a look at the image: https://drive.google.com/file/d/1rIOHg5xYGDijs82nEAzmInyXy1o214oq/view?usp=sharing
Thank you for supporting ComfyUI!! I need it!
Another solution is to embed this node into your workflow. This node will write metadata in the A1111 format into the images, and then you'll be able to manage ComfyUI's images just as you would with images from A1111, no matter how complex your workflow is or how many custom nodes you use. https://github.com/receyuki/comfyui-prompt-reader-node
And a quick one: use this to start the Python script; it then waits 5 seconds and opens the browser. You do not need to change the path or anything; it finds where the .bat file is located and starts Infinite Image Browsing from there. I set the wait to 5 seconds, which should be enough time for most computers to have started the .py script.
You can make a shortcut in Windows and then drag it anywhere you like.
@echo off
rem Change to the folder this .bat file lives in
cd /d "%~dp0"
rem Launch the IIB server in a separate window
start "" python app.py --port=7888
rem Give the server a few seconds to start
timeout /t 5 /nobreak >nul
rem Open the UI in the default browser
start "" "http://127.0.0.1:7888"
I have an idea: show the ComfyUI page in an iframe and trigger a drag-and-drop event (ComfyUI can open a workflow by dragging an image into it), so that ComfyUI itself displays the workflow.
The ComfyUI metadata is still not read by IIB. I can provide some examples if needed.