
GEMMA models are not downloading for SANA workflow.

parthwagh9999 opened this issue 1 year ago · 15 comments

[screenshot]

I tried downloading it outside of ComfyUI, i.e. manually through Hugging Face. Even so, the node does not recognize the existing download and downloads the model again.

I don't know how to add an access token. When I added an access token through the import method, it broke the nodes.

parthwagh9999 avatar Dec 12 '24 15:12 parthwagh9999

where do you place the gemma model inside comfy ui? models/... ?

Raphael-Screenworks avatar Dec 12 '24 16:12 Raphael-Screenworks

where do you place the gemma model inside comfy ui? models/... ?

H:\ComfyUI_windows_portable\ComfyUI\models\text_encoders\models--google--gemma-2-2b-it

I made changes to the gemma.py file by adding my access token to the snapshot download function. It then started downloading this model, but without the token the issue persists, and it does not detect models even when they are already downloaded there.
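The failure to detect an existing download may come down to folder layout: `snapshot_download` keeps files under a `models--<org>--<name>/snapshots/<revision>/` cache structure, so a plain manual download into a folder of that name will not match what the loader looks for. A minimal sketch of checking for a snapshot in that layout (the helper names here are mine, not from the node pack):

```python
import os

def hf_cache_folder(repo_id: str) -> str:
    # huggingface_hub cache convention: "org/name" -> "models--org--name"
    return "models--" + repo_id.replace("/", "--")

def local_snapshot_exists(models_dir: str, repo_id: str) -> bool:
    # A usable cache entry has at least one revision folder under snapshots/
    snap_root = os.path.join(models_dir, hf_cache_folder(repo_id), "snapshots")
    return os.path.isdir(snap_root) and len(os.listdir(snap_root)) > 0
```

If the folder only contains loose files (no `snapshots/` subfolder), the hub library treats the model as not downloaded and fetches it again.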

Thank you

parthwagh9999 avatar Dec 13 '24 05:12 parthwagh9999

where do you place the gemma model inside comfy ui? models/... ?

H:\ComfyUI_windows_portable\ComfyUI\models\text_encoders\models--google--gemma-2-2b-it

I made changes to the gemma.py file by adding my access token to the snapshot download function. It then started downloading this model, but without the token the issue persists, and it does not detect models even when they are already downloaded there.

Thank you

It is giving me the error again after some time, even when using an access token.

parthwagh9999 avatar Dec 13 '24 05:12 parthwagh9999

Sana team uploaded a copy of the gemma2 weights that don't need an access token to download, it should be available in the dropdown now. https://github.com/city96/ComfyUI_ExtraModels/commit/4a770ac22b85e70de5bafd5f6b68118dddcef2fd

city96 avatar Dec 13 '24 16:12 city96

Sana team uploaded a copy of the gemma2 weights that don't need an access token to download, it should be available in the dropdown now. 4a770ac

[screenshot] This gave the error shown, because there is no path for this model in the gemma/node.py file.

I added it like this:

[screenshot]

Now the Gemma model started downloading.
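For illustration, the missing branch the screenshots describe would look roughly like this. This is only a sketch: `models_dir` is a stand-in for the real ComfyUI models path, and the exact variable names in gemma/node.py may differ.

```python
import os

models_dir = "./models"  # stand-in for the real ComfyUI models directory

model_name = "Efficient-Large-Model/gemma-2-2b-it"
if model_name == "Efficient-Large-Model/gemma-2-2b-it":
    # Mirror the folder naming huggingface_hub uses for its cache
    text_encoder_dir = os.path.join(
        models_dir, "text_encoders",
        "models--Efficient-Large-Model--gemma-2-2b-it",
    )
```

Without a branch like this, the loader has no directory to resolve for the new dropdown entry and fails before it can download or load anything.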

parthwagh9999 avatar Dec 13 '24 17:12 parthwagh9999

That's what I get for editing stuff on 4 hours of sleep lol https://github.com/city96/ComfyUI_ExtraModels/commit/6e5ad55f400e3fcbf5d85fe54b0aa926a2d95a11

At least I got the refactor halfway working which just lets you select actual local files instead of the auto download logic.

[screenshot]
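The local-file selection the refactor adds can be approximated like this (my own sketch, not the actual repository code): scan the text_encoders folder and offer relative file paths in the node's dropdown instead of hardcoded repo IDs.

```python
import os

def list_local_models(root: str, exts=(".safetensors", ".gguf", ".bin")) -> list:
    # Walk the folder tree and collect relative paths of model files,
    # suitable for populating a ComfyUI dropdown widget.
    found = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(exts):
                rel = os.path.relpath(os.path.join(dirpath, name), root)
                found.append(rel)
    return sorted(found)
```

Selecting from an explicit file list sidesteps both the gated-repo authentication and the cache-layout detection problems entirely.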

city96 avatar Dec 13 '24 18:12 city96

That's what I get for editing stuff on 4 hours of sleep lol 6e5ad55

At least I got the refactor halfway working which just lets you select actual local files instead of the auto download logic.

[screenshot]

Thank you so much for your valuable time. I understand, it happens to everyone.

parthwagh9999 avatar Dec 14 '24 05:12 parthwagh9999

Even with these changes it is not working. I tried running it several times, and the model downloads again every time. Even after it downloads, it does not run; I only get black images. Your workflow is completely different from the workflow provided on this page: https://github.com/city96/ComfyUI_ExtraModels?tab=readme-ov-file.
Also, after updating the ExtraModels nodes, the nodes are totally different from yours. The nodes shown in this image [screenshot] are not found in the updated ExtraModels pack. [screenshots]

Manually downloading the models does not work either. Changing the code like this, so that it skips the download and uses the already-downloaded model, gives this result: [screenshot]

```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Define folder_paths for model directories
class folder_paths:
    models_dir = "./models"  # Adjust to your model directory path
    folder_names_and_paths = {
        "text_encoders": os.path.join(models_dir, "text_encoders")
    }

from ..utils.dtype import string_to_dtype

# Root directory for manually downloaded text encoder models
tenc_root = folder_paths.folder_names_and_paths.get(
    "text_encoders",
    folder_paths.folder_names_and_paths.get("clip", [[], set()])
)

dtypes = [
    "default",
    "auto (comfy)",
    "BF16",
    "FP32",
    "FP16",
]

try:
    torch.float8_e5m2
except AttributeError:
    print("Torch version too old; FP8 not supported")
else:
    dtypes += ["FP8 E4M3", "FP8 E5M2"]


class GemmaLoader:
    @classmethod
    def INPUT_TYPES(s):
        # Support multiple GPUs
        devices = ["auto", "cpu", "cuda"]
        for k in range(1, torch.cuda.device_count()):
            devices.append(f"cuda:{k}")
        return {
            "required": {
                "model_name": ([
                    "Efficient-Large-Model/gemma-2-2b-it",
                    "google/gemma-2-2b-it",
                    "unsloth/gemma-2-2b-it-bnb-4bit"
                ],),
                "device": (devices, {"default": "cpu"}),
                "dtype": (dtypes,),
            }
        }

    RETURN_TYPES = ("GEMMA",)
    FUNCTION = "load_model"
    CATEGORY = "ExtraModels/Gemma"
    TITLE = "Gemma Loader"

    def load_model(self, model_name, device, dtype):
        dtype = string_to_dtype(dtype, "text_encoder")
        if device == "cpu":
            assert dtype in [None, torch.float32], f"Can't use dtype '{dtype}' with CPU! Set dtype to 'default'."

        # Define the local directory for the manually downloaded model
        if model_name == 'google/gemma-2-2b-it':
            text_encoder_dir = r"H:\ComfyUI_windows_portable\ComfyUI\models\text_encoders\models--google--gemma-2-2b-it"
        elif model_name == 'unsloth/gemma-2-2b-it-bnb-4bit':
            text_encoder_dir = os.path.join(folder_paths.models_dir, 'text_encoders', 'models--unsloth--gemma-2-2b-it-bnb-4bit')
        elif model_name == 'Efficient-Large-Model/gemma-2-2b-it':
            text_encoder_dir = os.path.join(folder_paths.models_dir, 'text_encoders', 'models--Efficient-Large-Model--gemma-2-2b-it')
        else:
            raise ValueError('Model not implemented!')

        # Check if the manually downloaded model exists
        if not os.path.exists(text_encoder_dir) or not os.listdir(text_encoder_dir):
            raise FileNotFoundError(f"Manually downloaded model files not found in: {text_encoder_dir}")

        # Load tokenizer and model from the local directory
        tokenizer = AutoTokenizer.from_pretrained(text_encoder_dir)
        text_encoder_model = AutoModelForCausalLM.from_pretrained(text_encoder_dir, torch_dtype=dtype)
        tokenizer.padding_side = "right"
        text_encoder = text_encoder_model.get_decoder()

        if device != "cpu":
            text_encoder = text_encoder.to(device)

        return ({
            "tokenizer": tokenizer,
            "text_encoder": text_encoder,
            "text_encoder_model": text_encoder_model
        },)


class GemmaTextEncode:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "text": ("STRING", {"multiline": True}),
                "GEMMA": ("GEMMA",),
            }
        }

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "encode"
    CATEGORY = "ExtraModels/Gemma"
    TITLE = "Gemma Text Encode"

    def encode(self, text, GEMMA=None):
        print(text)
        tokenizer = GEMMA["tokenizer"]
        text_encoder = GEMMA["text_encoder"]

        with torch.no_grad():
            tokens = tokenizer(
                text,
                max_length=300,
                padding="max_length",
                truncation=True,
                return_tensors="pt"
            ).to(text_encoder.device)

            cond = text_encoder(tokens.input_ids, tokens.attention_mask)[0]
            emb_masks = tokens.attention_mask

        cond = cond * emb_masks.unsqueeze(-1)

        return ([[cond, {}]], )


NODE_CLASS_MAPPINGS = {
    "GemmaLoader": GemmaLoader,
    "GemmaTextEncode": GemmaTextEncode,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "GemmaLoader": "Gemma Loader",
    "GemmaTextEncode": "Gemma Text Encode",
}
```

[screenshot]
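As a side note on the encode step above: multiplying by `attention_mask.unsqueeze(-1)` zeroes out the embeddings of padding tokens. The effect can be checked in isolation with a pure-Python stand-in (plain lists instead of tensors, same semantics):

```python
# Stand-in for: cond = cond * emb_masks.unsqueeze(-1)
# Toy shapes: batch 1, 4 tokens, embedding dim 3; last two tokens are padding.
cond = [[[1.0, 1.0, 1.0] for _ in range(4)]]
mask = [[1, 1, 0, 0]]  # attention mask: 1 = real token, 0 = padding

masked = [
    [[v * m for v in tok] for tok, m in zip(seq, mrow)]
    for seq, mrow in zip(cond, mask)
]
# Real-token embeddings are unchanged; padding embeddings become all zeros.
```

If the mask were missing (or all zeros due to a tokenizer mismatch), the conditioning would be zeroed entirely, which is one plausible way to end up with black output images.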

parthwagh9999 avatar Dec 14 '24 07:12 parthwagh9999

401 Client Error. (Request ID: Root=1-675d8b33-60412433b6654831703760a42c;a38b7bcf-24dc-4b79-bc8b-eb68f613c5b2)

Cannot access gated repo for url https://huggingface.co/google/gemma-2-2b-it/resolve/299a8560bedf22ed1c72a8a11e7dce4a7f9f51f8/.gitattributes. Access to model google/gemma-2-2b-it is restricted. You must have access to it and be authenticated to access it. Please log in.
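This 401 is the gated-repo check: google/gemma-2-2b-it requires accepting the license on Hugging Face and authenticating before downloads succeed. One way around it (an illustration only; the token value below is a placeholder) is to set the `HF_TOKEN` environment variable, which huggingface_hub reads, before launching ComfyUI:

```python
import os

# Placeholder value; create a real token at huggingface.co/settings/tokens
# after requesting access to the gated repo.
os.environ["HF_TOKEN"] = "hf_xxxxxxxxxxxxxxxx"

# huggingface_hub picks HF_TOKEN up from the environment when downloading,
# so gated repos like google/gemma-2-2b-it resolve once access is granted.
```

Alternatively, the Efficient-Large-Model mirror mentioned earlier in the thread avoids the gate entirely.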

ApexArtist avatar Dec 14 '24 13:12 ApexArtist

That's what I get for editing stuff on 4 hours of sleep lol 6e5ad55

At least I got the refactor halfway working which just lets you select actual local files instead of the auto download logic.

[screenshot]

Thank you so much for your valuable time. I understand, it happens to everyone.

It cannot recognize the local path. Is there a CLIP loader node like the one in your picture?

jasoncow007 avatar Dec 14 '24 13:12 jasoncow007

That image is from the refactor branch here, but it's not 1:1 with the reference version yet: https://github.com/city96/ComfyUI_ExtraModels/pull/92

city96 avatar Dec 14 '24 15:12 city96

That image is from the refactor branch here, but it's not 1:1 to the reference version yet #92

[screenshot]

I switched to the rewrite branch, but the sampling is terribly slow. Any suggestions?

jasoncow007 avatar Dec 15 '24 01:12 jasoncow007

@jasoncow007 you need the empty sana latent node for the latent, otherwise you're generating an image 4x the size of what you actually want, which explains why it would be extremely slow.
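For context on that 4x figure (my own arithmetic, assuming SANA's DC-AE autoencoder downsamples 32x per side while the standard empty latent node assumes an 8x VAE):

```python
def latent_side(pixels: int, downsample: int) -> int:
    # Side length of the latent for a given pixel size and autoencoder factor
    return pixels // downsample

target_px = 1024
standard_latent = latent_side(target_px, 8)   # what a regular empty latent gives
sana_latent = latent_side(target_px, 32)      # what the SANA empty latent gives

# A latent sized for an 8x VAE, decoded by a 32x model, comes out 4x too big
# per side, i.e. 16x the pixel count.
oversize_factor = standard_latent // sana_latent
```

That is why sampling with the wrong empty latent node is so slow: the model is effectively working at 4096px instead of 1024px.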

city96 avatar Dec 16 '24 07:12 city96

@city96 you need the empty sana latent node for the latent, otherwise you're generating an image 4x the size of what you actually want, which explains why it would be extremely slow. [screenshot]

I changed it to the Sana latent and it works, but only the sampling; I still get a black image at the end. Any suggestions?

jasoncow007 avatar Dec 17 '24 10:12 jasoncow007

Sana team uploaded a copy of the gemma2 weights that don't need an access token to download, it should be available in the dropdown now. 4a770ac

[screenshot] This gave the error shown, because there is no path for this model in the gemma/node.py file.

I added it like this:

[screenshot]

Now the Gemma model started downloading.

Hi buddy, have you solved this issue? It still cannot recognize the local path here.

jasoncow007 avatar Dec 24 '24 08:12 jasoncow007