Text-To-Video-Finetuning
webui LoRA might be causing errors in checkpoint models.
```
Some weights of the model checkpoint were not used when initializing UNet3DConditionModel:
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
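In plain terms, that warning means the checkpoint contains weight keys that the model architecture does not expect. A minimal sketch of the underlying check, using plain dicts in place of PyTorch state_dicts (the key names below are hypothetical, chosen only to illustrate how extra LoRA keys would surface):

```python
def diff_state_dicts(model_keys, checkpoint_keys):
    """Return (unexpected, missing): keys only in the checkpoint / only in the model."""
    unexpected = sorted(set(checkpoint_keys) - set(model_keys))
    missing = sorted(set(model_keys) - set(checkpoint_keys))
    return unexpected, missing

# Hypothetical key names for illustration only.
model_keys = ["conv_in.bias", "conv_in.weight"]
checkpoint_keys = ["conv_in.bias", "conv_in.lora_up.weight", "conv_in.weight"]

unexpected, missing = diff_state_dicts(model_keys, checkpoint_keys)
print(unexpected)  # → ['conv_in.lora_up.weight'] — extra keys trigger the "not used" warning
print(missing)     # → [] — nothing the model needs is absent
```

If `missing` is empty, the model can usually still load and run; the warning only flags the surplus keys.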
Has anyone else had similar issues? I believe it has to do with the LoRA training, because I only notice this behavior on models created while also training the new webui LoRA. The most recent model did not use the LoRAs and had no such issues.
Hello. I cannot reproduce this issue. I would check to see that your model path is correct. If it is, then could you please post the following? (You can remove any personally identifiable information).
- The `config.json` in your model's directory.
- The log leading up to this point (the one you have in your post).
- The `.yaml` config you're using for training.
Here is a link to one of the problematic models...
https://www.dropbox.com/sh/ttvqyfddlq0mvjl/AAAjeXguhPXSanFA2x_--4xLa?dl=0
Which is based on this model...
https://www.dropbox.com/sh/247hj87lcvewsb5/AADeZsqTDTAE1mI2WlsclcU7a?dl=0
Which is based on the original diffuser model.
I'm not able to get to the `.yaml` or log file at the moment, but maybe you will notice something here. The error message occurred when loading the model for inference using inference.py.
I believe the error message could be related to this, since it's a similar error message, but mine lists many of the layers in the model.
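One way to see which layers the warning is pointing at is to group the checkpoint's keys by their top-level module. A sketch, using a hypothetical list of keys in place of the real `torch.load("diffusion_pytorch_model.bin").keys()` (the key names are assumptions for illustration):

```python
from collections import Counter

def count_keys_by_module(state_dict_keys):
    """Count checkpoint keys by their top-level module name."""
    return Counter(key.split(".")[0] for key in state_dict_keys)

# Hypothetical keys; in practice these would come from the loaded state_dict.
keys = [
    "conv_in.weight",
    "down_blocks.0.attentions.0.proj.weight",
    "down_blocks.0.attentions.0.proj.lora_up.weight",
]
print(count_keys_by_module(keys))
```

If one module shows noticeably more keys than the stock architecture has, those extras are likely the ones the warning lists.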
This is a link to a config.json file that was created.
https://www.dropbox.com/scl/fi/eoq4byu2ap3f6k96ld396/config.yaml?rlkey=8izitbdc1vgvlbhzvf367sypw&dl=0
Since disabling the LoRA training, I haven't had issues with that error message. It could be a glitch caused by the version of the software I used, but I wanted to report it to confirm whether something is truly going on. Is anyone else able to reproduce the error message with this model? Or is there something wrong in the model's configuration that could be easily fixed so that I could use it?
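If the surplus keys really are webui-LoRA layers, one possible workaround is to filter them out of the checkpoint before loading. A hedged sketch, where the `"lora"` substring match and the key names are assumptions about how those layers are named (plain values stand in for tensors):

```python
def strip_lora_keys(state_dict):
    """Drop any keys that look like LoRA layers (naming is an assumption)."""
    return {k: v for k, v in state_dict.items() if "lora" not in k}

# Placeholder checkpoint; real code would use torch.load on the .bin file.
checkpoint = {
    "conv_in.weight": 0.1,          # ordinary model weight
    "conv_in.lora_up.weight": 0.2,  # hypothetical LoRA key
}
cleaned = strip_lora_keys(checkpoint)
print(sorted(cleaned))  # → ['conv_in.weight']
```

The cleaned dict could then be saved back out and loaded normally; this only helps if all the "not used" keys are in fact LoRA additions.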