

smZNodes

A selection of custom nodes for ComfyUI.

  1. CLIP Text Encode++
  2. Settings

Contents

  • Tips to get reproducible results on both UIs
  • FAQs
  • Installation

CLIP Text Encode++

CLIP Text Encode++ – default settings matching stable-diffusion-webui

CLIP Text Encode++ can generate embeddings in ComfyUI that are identical to those produced by stable-diffusion-webui.

This means you can reproduce images generated in stable-diffusion-webui within ComfyUI.

Simple prompts produce identical images. More complex prompts that use attention/emphasis/weighting may produce slightly different images; in that case, try the Settings node to match the outputs.
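
For example, A1111-style emphasis derives a per-token weight from the surrounding brackets. The snippet below is a hedged illustration of those semantics (assumed here for reference, not this node's code):

# A1111-style emphasis semantics (assumed for illustration; not this node's code):
# each "(" multiplies the enclosed tokens' weight by 1.1, each "[" divides it by 1.1,
# and "(text:1.5)" sets the weight explicitly.
print(1.1 ** 2)   # ((palm trees))   -> 1.21
print(1 / 1.1)    # [palm trees]     -> ~0.909
print(1.5)        # (palm trees:1.5) -> 1.5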

Features

Comparisons

These images can be dragged into ComfyUI to load their workflows. Each image was generated with the Silicon29 checkpoint (an SD 1.5 model) using 18 steps of the Heun sampler.

Each comparison shows the stable-diffusion-webui output alongside the outputs from this node's A1111 parser and Comfy parser. The prompts used were:

  • cinematic wide shot of the ocean, beach, (palmtrees_1 5), at sunset, milkyway
  • a photo of an astronaut riding a horse on mars, ((palmtrees_1 2) on water)

Image slider links:

  • https://imgsli.com/MTkxMjE0
  • https://imgsli.com/MTkxMjEy

Options

parser – The parser used to parse prompts into tokens, which are then transformed (encoded) into embeddings. Taken from SD.Next.

mean_normalization – Whether to take the mean of your prompt weights. True by default on stable-diffusion-webui. This is implemented the way stable-diffusion-webui does it (see the sketch after these options).

multi_conditioning – For each prompt, the list is obtained by splitting the prompt on the AND separator. See: Compositional Visual Generation with Composable Diffusion Models.
  • a way to use multiple prompts at once
  • allows AND in the negative prompt as well
  • supports weights for prompts: a cat :1.2 AND a dog AND a penguin :2.2. Weights default to 1.
  • each prompt gets a cfg value of cfg * weight / N, where N is the number of positive prompts. In stable-diffusion-webui, each prompt gets a cfg value of cfg * weight. To match their behaviour, add a weight of :N to every prompt or simply set a cfg value of cfg * N (see the worked example after these options).
  This uses CFGDenoiser internally, so if it's disabled in the Settings node, the prompts will act as if they came from the ConditioningCombine node and follow ComfyUI's default behaviour.

use_old_emphasis_implementation – Use the old emphasis implementation. Can be useful to reproduce old seeds.
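
For reference, here is a minimal sketch of what mean_normalization does, assuming the approach used by stable-diffusion-webui (multiply token embeddings by their weights, then rescale so the mean matches its pre-weighting value). This is an illustration, not this extension's exact code:

import torch

def weighted_mean_normalized(z: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # z: token embeddings [batch, tokens, dim]; weights: per-token weights [batch, tokens]
    original_mean = z.mean()
    z = z * weights.unsqueeze(-1)          # scale each token's embedding by its weight
    new_mean = z.mean()
    return z * (original_mean / new_mean)  # rescale so the overall mean is unchanged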
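
And a small worked example of the multi_conditioning cfg split described above; per_prompt_cfg is a hypothetical helper used only to show the arithmetic:

def per_prompt_cfg(cfg, weights):
    # cfg assigned to each AND-separated positive prompt: cfg * weight / N
    n = len(weights)
    return [cfg * w / n for w in weights]

# "a cat :1.2 AND a dog AND a penguin :2.2" with cfg = 7.0 and N = 3 prompts:
print(per_prompt_cfg(7.0, [1.2, 1.0, 2.2]))  # [2.8, 2.33..., 5.13...]
# stable-diffusion-webui would instead use cfg * weight: [8.4, 7.0, 15.4]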

[!TIP]
You can right-click the node to show or hide some of its widgets, e.g. the with_SDXL option.


Available parsers:

comfy – The default way ComfyUI handles everything.
comfy++ – Uses ComfyUI's parser but encodes tokens the way stable-diffusion-webui does, which allows taking the mean as they do.
A1111 – The default parser used in stable-diffusion-webui.
full – Same as A1111, but whitespace, newlines, and special characters are stripped.
compel – Uses compel.
fixed attention – The prompt is left untampered with.

[!IMPORTANT]
Every parser except comfy uses stable-diffusion-webui's encoding pipeline.

[!WARNING]
LoRA syntax (<lora:name:1.0>) is not supported.

Settings


Settings node showcase

The Settings node is a dynamic node that functions similarly to the Reroute node and is used to fine-tune results during sampling or tokenization. Its inputs can be replaced with another input type even after they have been connected. CLIP inputs only apply settings to CLIP Text Encode++. Settings apply locally based on the node's links, just like nodes that do model patches. I made this node to explore the various settings found in stable-diffusion-webui.

This node may change whenever the extension is updated, so you may have to recreate it to prevent issues. Settings can be overridden by placing another Settings node downstream of a previous one. Right-click the node for the Hide/show all descriptions menu option.

Tips to get reproducible results on both UIs

  • Use the same seed, sampler settings, RNG (CPU or GPU), clip skip (CLIP Set Last Layer), etc.
  • Ancestral and SDE samplers may not be deterministic.
  • If you're using DDIM as your sampler, use the ddim_uniform scheduler.
  • There are different unipc configurations. Adjust accordingly on both UIs.

FAQs

  • How does this differ from ComfyUI_ADV_CLIP_emb?
    • While the weights are normalized in the same manner, the tokenization and encoding pipeline taken from stable-diffusion-webui differs from ComfyUI's. These small changes add up and ultimately produce different results.
  • Where can I learn more about how ComfyUI interprets weights?
    • https://comfyanonymous.github.io/ComfyUI_examples/faq/
    • https://blenderneko.github.io/ComfyUI-docs/Interface/Textprompts/
    • https://comfyui.creamlab.net

Installation

Three methods are available for installation:

  1. Load via ComfyUI Manager
  2. Clone the repository directly into the custom_nodes directory.
  3. Download the project manually.

Load via ComfyUI Manager

Install the extension using ComfyUI Manager.

Clone Repository

cd path/to/your/ComfyUI/custom_nodes
git clone https://github.com/shiimizu/ComfyUI_smZNodes.git

Download Manually

  1. Download the project archive from here.
  2. Extract the downloaded zip file.
  3. Move the extracted files to path/to/your/ComfyUI/custom_nodes.
  4. Restart ComfyUI

The folder structure should resemble: path/to/your/ComfyUI/custom_nodes/ComfyUI_smZNodes.

Update

To update the extension, update via ComfyUI Manager or pull the latest changes from the repository:

cd path/to/your/ComfyUI/custom_nodes/ComfyUI_smZNodes
git pull

Credits