> I'm curious as well. If you get a TPU on Colab, is it going to be slower than an RTX card of the same level?
> Just tested a local RTX 2060 6G vs. a Colab T4 12G. The 2060 appears to be ~25% faster when doing text2image and image2image. Training I can't test...
It'd be really nice.
Actually, what's the difference between the 1.4 and 1.5 DreamBooth notebooks? Is it just the model downloaded from Hugging Face that's different? Has there been any other change to the...
> No, if you change the model from Hugging Face, everything else works fine.
Good to know. Thanks.
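For reference, a minimal sketch of what the model swap amounts to, assuming the notebook loads the weights with diffusers' `StableDiffusionPipeline` (the repo ids shown are the public Hugging Face ones; the notebook's own variable names may differ):

```python
import torch
from diffusers import StableDiffusionPipeline

# Swapping 1.4 for 1.5 is just a different Hugging Face repo id;
# the rest of the pipeline stays the same.
model_id = "runwayml/stable-diffusion-v1-5"  # or "CompVis/stable-diffusion-v1-4"

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a T4 / consumer GPU
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("out.png")
```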
> Just a side question about the learning rate. Is it a setting that you're supposed to manually adjust like in textual inversion training? Or has it already been set up...
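Not sure how the notebook handles it, but as a rough sketch of where the learning rate enters a DreamBooth-style run (the module and value below are stand-ins, not the notebook's defaults):

```python
import torch

# Stand-in module so the snippet runs; in a real run this would be the UNet
# (and optionally the text encoder) being fine-tuned.
unet = torch.nn.Linear(4, 4)

# DreamBooth fine-tunes full model weights, so the learning rate is usually
# much lower than for textual inversion, which only optimizes an embedding.
learning_rate = 5e-6  # illustrative value only

optimizer = torch.optim.AdamW(unet.parameters(), lr=learning_rate)
```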
> Avoid using "sks" as an instance name; SD knows it's a gun.
I had an instance named "a100". Then it was trained on a person... Glad I didn't see...
I think this is how it's supposed to be. The inpainting method takes input from the masked area of the original image. If that area originally has nothing similar to...
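As a point of reference, a minimal sketch of an inpainting call with diffusers' `StableDiffusionInpaintPipeline` (the model id and file paths are assumptions for illustration); only the white region of the mask is repainted, conditioned on the prompt and the unmasked surroundings:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Model id and file paths are assumptions for illustration.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("original.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = area to repaint

# Only the masked area is regenerated; the rest of the image is kept,
# so the output depends heavily on the prompt and the unmasked context.
result = pipe(
    prompt="a red fox sitting on the grass",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```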
> Hopefully this will fix it: [fbb7dfc](https://github.com/TheLastBen/fast-stable-diffusion/commit/fbb7dfc34e6ae6e815715122d1e76e70738201a0)
I'll test it now, waiting for the diffusers... Speaking of which, could you maybe add a diffuser_path to the gdrive? So that...
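In the meantime, a minimal sketch of what mirroring the diffusers weights to Drive could look like (all folder names here are hypothetical, not the notebook's actual paths):

```python
import shutil
from pathlib import Path

# Hypothetical paths: the trained diffusers model on the Colab disk,
# and where it should be mirrored on the mounted Google Drive.
diffusers_out = Path("/content/models/my-session")
gdrive_target = Path("/content/gdrive/MyDrive/Sessions/my-session/diffusers")

gdrive_target.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(diffusers_out, gdrive_target, dirs_exist_ok=True)
```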
No luck. The console says it's dumping the ckpt to /Sessions/instance, but it still ends up in /Sessions/.
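For what it's worth, a minimal sketch of the kind of path handling that would land the ckpt inside the instance subfolder rather than the session root (all names are hypothetical, not the notebook's variables):

```python
import os

# Hypothetical names for illustration.
sessions_dir = "/content/gdrive/MyDrive/Sessions"
instance_name = "instance"

# Build the full target path and create the folder first, so the checkpoint
# ends up in .../Sessions/instance rather than .../Sessions/.
out_dir = os.path.join(sessions_dir, instance_name)
os.makedirs(out_dir, exist_ok=True)
ckpt_path = os.path.join(out_dir, "model.ckpt")
print(f"dumping ckpt to {ckpt_path}")
```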