
I hope ComfyUI will support SDXS soon

Open · wibur0620 opened this issue 3 months ago · 18 comments

https://idkiro.github.io/sdxs/

This model can achieve 100 FPS on a single GPU.

wibur0620 avatar Mar 26 '24 22:03 wibur0620

workflow_sdxs_0.9.json

The unet file is: https://huggingface.co/IDKiro/sdxs-512-0.9/tree/main/unet

comfyanonymous avatar Mar 28 '24 04:03 comfyanonymous
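For anyone following along: the files linked above have to land in ComfyUI's model folders before the workflow can find them (models/unet for the UNet, models/clip for the text encoder). A minimal sketch of that step, assuming the diffusers-style file name used in the Hugging Face unet folder and the clip_h.safetensors name the workflow expects:

```python
from pathlib import Path
import shutil

def install_sdxs_files(download_dir: Path, comfy_root: Path) -> None:
    """Copy the downloaded SDXS components into ComfyUI's model folders.

    Assumes the standard ComfyUI layout (models/unet, models/clip) and the
    hypothetical source file names below -- adjust to whatever you downloaded.
    """
    targets = {
        # diffusers UNet folders typically ship this file name
        "diffusion_pytorch_model.safetensors":
            comfy_root / "models" / "unet" / "sdxs-512-0.9.safetensors",
        # text encoder, renamed to what the workflow references
        "clip_h.safetensors":
            comfy_root / "models" / "clip" / "clip_h.safetensors",
    }
    for src_name, dst in targets.items():
        src = download_dir / src_name
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy and rename into place
```

After this, the UNETLoader and CLIP loader nodes in the workflow should see both files in their dropdowns (a ComfyUI restart or refresh may be needed).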

@comfyanonymous I get this error. Any idea what I'm doing wrong here?

[screenshot of the error, 2024-03-28]

eliganim avatar Mar 28 '24 08:03 eliganim

> workflow_sdxs_0.9.json
>
> The unet file is: https://huggingface.co/IDKiro/sdxs-512-0.9/tree/main/unet

Thank you very much.

wibur0620 avatar Mar 28 '24 16:03 wibur0620

the workflow requires clip_h.safetensors, any idea where to find this? Thanks!

fogostudio avatar Mar 29 '24 03:03 fogostudio

> @comfyanonymous I get this error. Any idea what I'm doing wrong here?

Update ComfyUI.

> the workflow requires clip_h.safetensors, any idea where to find this? Thanks!

https://huggingface.co/IDKiro/sdxs-512-0.9/blob/main/text_encoder/model.safetensors

comfyanonymous avatar Mar 29 '24 04:03 comfyanonymous

I got that error with both 512 and 1024 width/height/resolution.

Error occurred when executing KSampler Adv. (Efficient):
'Downsample' object has no attribute 'emb_layers'

when connecting this piece of workflow

[image: workflow screenshot]

with that workflow (the AnyBus node I use does not support the Get/Set nodes yet)

[image: tw-bbq-wf-sdsx workflow]

if I deactivate this node, everything is fine

[image: workflow with the node deactivated]


Blender image to load:

[image: tw-bbq-depthmap]


Is there a better place to ask for help with this?

MaraScott avatar Mar 29 '24 09:03 MaraScott

With the clues from this thread I was able to get SDXS running in ComfyUI, but I noticed that good results only come from using SD 1.5 VAEs, which is a bit strange to me.

unphased avatar Mar 30 '24 08:03 unphased

> With the clues from this thread I was able to get SDXS running in ComfyUI, but I noticed that good results only come from using SD 1.5 VAEs, which is a bit strange to me.

The SDXS model they released was trained at 512 resolution, the same resolution as SD 1.5. There is a 1024 resolution version, but it hasn't been released yet.

Source: https://huggingface.co/IDKiro/sdxs-512-0.9#sdxs-512-09

> SDXS-512-0.9 is an old version of SDXS-512. For some reasons, we are only releasing this version for the time being, and will gradually release other versions.

Wraithnaut avatar Mar 30 '24 20:03 Wraithnaut
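The resolution match can be made concrete: SD-family VAEs downscale pixel space by a factor of 8, so a 512x512 image maps to a 64x64 latent for both SD 1.5 and SDXS-512, which is why a 1.5 VAE decodes SDXS latents correctly. A quick sketch:

```python
def latent_size(width: int, height: int, downscale: int = 8) -> tuple[int, int]:
    """Map a pixel-space resolution to the latent resolution the UNet sees.

    SD-family VAEs downscale by 8x, so 512x512 pixels -> 64x64 latents,
    the size both SD 1.5 and SDXS-512 were trained on.
    """
    if width % downscale or height % downscale:
        raise ValueError("dimensions must be multiples of the VAE downscale factor")
    return width // downscale, height // downscale

print(latent_size(512, 512))    # SDXS-512 / SD 1.5 native -> (64, 64)
print(latent_size(1024, 1024))  # SDXL-class native -> (128, 128)
```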

> With the clues from this thread I was able to get SDXS running in ComfyUI, but I noticed that good results only come from using SD 1.5 VAEs, which is a bit strange to me.
>
> The SDXS model they released was trained at 512 resolution, the same resolution as SD 1.5. There is a 1024 resolution version, but it hasn't been released yet.
>
> Source: https://huggingface.co/IDKiro/sdxs-512-0.9#sdxs-512-09
>
> SDXS-512-0.9 is an old version of SDXS-512. For some reasons, we are only releasing this version for the time being, and will gradually release other versions.

I noticed the fairly poor quality of the images when I was playing with it, and for some reason it never crossed my mind to try a 1.5 VAE, even though I did have the resolution set to 512 in my simple test workflow.

edwardsdigital avatar Apr 01 '24 18:04 edwardsdigital

I am the author of SDXS. A new version of SDXS-512 has been uploaded; maybe try it:

https://huggingface.co/IDKiro/sdxs-512-dreamshaper

IDKiro avatar Apr 11 '24 12:04 IDKiro

I was able to run the UNet from 0.9, but the one from SDXS-512-Dreamshaper does not work.

Error occurred when executing UNETLoader:

ERROR: Could not detect model type of: /archive/shared/comfyui-krita/ComfyUI/models/unet/sdxs-0.9-deamshaper.safetensors

  File "/archive/shared/comfyui-krita/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/archive/shared/comfyui-krita/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/archive/shared/comfyui-krita/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/archive/shared/comfyui-krita/ComfyUI/nodes.py", line 814, in load_unet
    model = comfy.sd.load_unet(unet_path)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/archive/shared/comfyui-krita/ComfyUI/comfy/sd.py", line 600, in load_unet
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(unet_path))

The file downloaded is this one.

KaruroChori avatar Apr 12 '24 23:04 KaruroChori

https://github.com/comfyanonymous/ComfyUI/commit/58812ab8ca601cc2dd9dbe64c1f3ffd4929fd0ca

That new model should work now. Just a note: this one needs clip_l instead of clip_h. You can download clip_l from https://huggingface.co/IDKiro/sdxs-512-dreamshaper/blob/main/text_encoder/model.safetensors, but other than that the above workflow should work.

comfyanonymous avatar Apr 13 '24 02:04 comfyanonymous

Thanks!

KaruroChori avatar Apr 13 '24 04:04 KaruroChori

Just a quick word of feedback for others interested in this model:

  • It is fast, but it is also really baked in. Even in a huge batch generation, different images are only minimally different. To be honest, it feels more like a database of pre-generated images being retrieved than a full generative model.
  • Negative prompts, as with most of these fast models, do not work. This can very much limit the kinds of generations possible.
  • Direct generation of images that are not exactly 512x512 will just not work.

[images: sample generations]

KaruroChori avatar Apr 13 '24 05:04 KaruroChori
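Given the 512x512 restriction noted in the feedback above, one practical pattern is to always generate at the native size and upscale to the target afterwards. A small illustrative sketch (the function name and return shape are my own, not part of any ComfyUI node):

```python
def plan_generation(target_w: int, target_h: int,
                    native: int = 512) -> tuple[tuple[int, int], tuple[float, float]]:
    """Return (generation size, post-upscale factors) for a target resolution.

    SDXS-512 only generates correctly at its 512x512 training resolution,
    so we always sample at the native size and scale the result afterwards
    (e.g. with an upscale node or a separate upscaler model).
    """
    gen_size = (native, native)
    scale = (target_w / native, target_h / native)
    return gen_size, scale

print(plan_generation(1024, 768))  # -> ((512, 512), (2.0, 1.5))
```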

Yes, I adjusted the training hyperparameters to improve image quality, but this also reduced the diversity of the generated images. Later this month we will release a version that allows multi-step sampling, which should effectively improve diversity. If the open-source application is approved, we will also release the finetuning code, and then we hope to provide SDXS variants with different tendencies (diversity, quality, style) through the community.

IDKiro avatar Apr 13 '24 07:04 IDKiro

@IDKiro thanks a lot for your generosity in making this open-source and available for everyone to use ❤️ I use this when I teach SD and ComfyUI, to quickly generate images and explain image generation concepts.

eliganim avatar Apr 13 '24 07:04 eliganim

Here, I wrapped up all of this thread into something a bit easier to understand, with some other features just for fun: https://openart.ai/workflows/-/-/fUxFDJrPkuSshjFyTl7F

halr9000 avatar Apr 15 '24 00:04 halr9000