Kandinsky-2
unclip training question
I trained and made an unCLIP checkpoint from 10 images. What do I do with the output file? Which file do I replace, and where? Same question for the prior checkpoint: which file do I have to replace? Thank you!
From what I see, you need to train two models: first the prior model, which you then use when training the unCLIP model. I'm in the process of training my own model, so I'll let you know how it goes :)
Hey man, hope you're doing well.
I have, for example, 13 images to test the training with, and I've done the captions etc. How many epochs did you use per image?
Their training notebook uses 1k epochs. Did you follow the same rules as in Stable Diffusion training (100 epochs per image)? Did you change anything else in training_conf.yaml or config_prior.yaml? Please let me know.
Thanks a lot!
So far I've trained for about 151 epochs with a dataset of 640 images. To use the trained model to generate images, just replace the prior file with your trained prior file, and the unCLIP model with the decoder checkpoint in the main weights folder.
Here are some samples from the finetuned model: https://user-images.githubusercontent.com/16674412/238193391-f9f3b043-c0c3-4ca2-91cc-f8f7a303b65d.png
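The swap described above is just a file replacement. A minimal sketch of it, with the caveat that the checkpoint file names and directories below are assumptions (check your actual weights cache and training output directory for the real names):

```python
import shutil
from pathlib import Path

def swap_checkpoints(weights_dir: Path, trained_dir: Path,
                     mapping: dict) -> None:
    """Back up the stock checkpoints, then overwrite them with finetuned ones.

    `mapping` maps trained-file name -> stock-file name, e.g. the trained
    prior over the stock prior, and the trained decoder over the unCLIP
    decoder checkpoint.
    """
    for trained_name, stock_name in mapping.items():
        stock = weights_dir / stock_name
        if stock.exists():
            # Keep a .bak copy so the original weights can be restored.
            shutil.copy2(stock, stock.with_name(stock.name + ".bak"))
        shutil.copy2(trained_dir / trained_name, stock)

# Hypothetical usage -- paths and names are placeholders:
# swap_checkpoints(Path("weights/kandinsky2"), Path("output/finetune"),
#                  {"prior.ckpt": "prior_fp16.ckpt",
#                   "decoder.ckpt": "decoder_fp16.ckpt"})
```

Backing up first means a bad finetune is a one-line restore rather than a re-download.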
Gotcha, thank you! At the moment I'm scraping the web for MJ oil-painting and impasto images, then I'll start the main training. Until then I need to test with just a few. I'll let you know how it goes with just 13. Thanks!
Dude, it works! I was able to train with 13 images, and it was really quick for both models. The outputs are very, very close to the training data. This is going to be epic!
Have you tried training a person into the model? If yes, what settings and captions did you use, and how many images?
Not yet, no. I've only tried and tested it for style; I'll try a person tomorrow. But I guess the same ideas from Stable Diffusion apply here too when it comes to captions. There was a guy who did it (check the kubin repo), but he used the textual inversion notebooks.