Heasterian
As I have it working locally, but not in an upstreamable way, I'll write down what I figured out along the way. The LongCLIP files from this repo by default come...
You can download LongCLIP in a form that should work out of the box using this Python code:

```python
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("zer0int/LongCLIP-GmP-ViT-L-14")
model = CLIPTextModel.from_pretrained("zer0int/LongCLIP-GmP-ViT-L-14")
tokenizer.save_pretrained("./tokenizer")
# ...
```
Well, you are loading the model from a single file, not the diffusers format I mentioned. With safetensors the code falls back to 77 tokens, as the config does not include info about the max...
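For reference, the fallback can be avoided by making sure the exported diffusers directory carries the longer context length in its configs. A minimal sketch, assuming the standard diffusers layout (`text_encoder/config.json`, `tokenizer/tokenizer_config.json`) and LongCLIP's 248-token context; the helper name and paths are my own, not from OneTrainer:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def patch_max_length(model_dir: str, max_len: int = 248) -> None:
    """Raise the token limit in both configs so loaders don't fall back to 77."""
    enc_cfg = Path(model_dir) / "text_encoder" / "config.json"
    cfg = json.loads(enc_cfg.read_text())
    cfg["max_position_embeddings"] = max_len  # stock CLIP ships with 77
    enc_cfg.write_text(json.dumps(cfg, indent=2))

    tok_cfg = Path(model_dir) / "tokenizer" / "tokenizer_config.json"
    tok = json.loads(tok_cfg.read_text())
    tok["model_max_length"] = max_len
    tok_cfg.write_text(json.dumps(tok, indent=2))

# demo on a throwaway directory structure mimicking a diffusers export
with TemporaryDirectory() as d:
    (Path(d) / "text_encoder").mkdir()
    (Path(d) / "tokenizer").mkdir()
    (Path(d) / "text_encoder" / "config.json").write_text(
        json.dumps({"max_position_embeddings": 77}))
    (Path(d) / "tokenizer" / "tokenizer_config.json").write_text(
        json.dumps({"model_max_length": 77}))
    patch_max_length(d)
```

This only patches the JSON files; the weights themselves must already contain the extended positional embeddings, which the zer0int export does.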
You can convert the model to diffusers format using the tool from the Tools tab in OneTrainer.
Just overwrite the `text_encoder` and `tokenizer` directories in the resulting directory, as I said here: https://github.com/Nerogar/OneTrainer/issues/624#issuecomment-2571331115
> I used Comfy to replace Clip to LongClip for one of my models. The combined checkpoint was successfully saved and then loaded, but I got an error trying to...
It seems there are two separate issues: one affecting DDPM and one affecting the rest of the samplers. Samplers other than DDPM and DDIM work after a simple check in `force_last_timestep` whether the last...
https://github.com/Nerogar/OneTrainer/pull/965 If anyone has time, please test it.