affableroots
If I understand correctly, [this paper](https://arxiv.org/pdf/2208.12242.pdf) says they fine-tune the entire model, although in my experiments I've done the same and kept the VAE frozen. I guess the following could be...
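For reference, roughly how I keep the VAE frozen while the rest trains. This is a minimal PyTorch sketch, not my actual script; `vae` and `unet` here are small stand-ins for the real diffusers modules, and the lr is just illustrative:

```python
import torch
import torch.nn as nn

# Stand-ins for the real AutoencoderKL / UNet2DConditionModel modules.
vae = nn.Linear(8, 8)
unet = nn.Linear(8, 8)

# Freeze the VAE so it contributes no gradients during fine-tuning.
vae.requires_grad_(False)

# Hand the optimizer only the parameters that still require grad (the UNet).
optimizer = torch.optim.AdamW(
    [p for p in unet.parameters() if p.requires_grad], lr=5e-6
)
```

The point is just that the optimizer never sees the VAE's parameters, so even a bad lr can't touch the decoder.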
@jslegers Thanks for the follow-up! I've had results that look fairly similar, but I think everything's still degrading, at least on my end. For instance: * "a photo of" type...
Oh, another symptom that I think indicates something is wrong: I have not been able to **overtrain**. What I mean is that, given a very high lr and/or high step...
Thank you for running that again! It seems we have similar results, though your generalization attempts do look better than what I've been achieving, even given the stock params from...
I appreciate your attention on this @patil-suraj!
While we're exploring the tuning of DB from all angles, I'll mention some recently decent results. I'm using my modified Dreambooth + Textual Inversion: * 5 new tokens+embeddings * 2000...
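For anyone curious what the "5 new tokens+embeddings" part looks like mechanically, here's a rough sketch of growing a token-embedding table to hold the new learnable embeddings. Plain PyTorch stand-in, not my actual script; the vocab size, embedding dim, and mean-init choice are all illustrative:

```python
import torch
import torch.nn as nn

NUM_NEW_TOKENS = 5  # matches the 5 new tokens/embeddings above

# Stand-in for the text encoder's token embedding table
# (vocab size and dim are illustrative, not the real model's).
old_emb = nn.Embedding(1000, 32)

# Grow the table by NUM_NEW_TOKENS rows, keeping the old weights intact.
new_emb = nn.Embedding(old_emb.num_embeddings + NUM_NEW_TOKENS,
                       old_emb.embedding_dim)
with torch.no_grad():
    new_emb.weight[: old_emb.num_embeddings] = old_emb.weight
    # Initialize the new rows from the mean of the existing embeddings
    # (one common choice; seeding from an existing token also works).
    new_emb.weight[old_emb.num_embeddings :] = old_emb.weight.mean(dim=0)
```

During training you'd then restrict gradient updates to just those new rows (e.g. by masking the gradient of the embedding weight), which is the Textual Inversion half of the setup.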
@jslegers Forgive the mess: https://gist.github.com/affableroots/a36a74287c8eb2da438a459795b158d6 I flew through it to clean it up and haven't tested it again since, so hopefully I didn't mess anything up, but let me know if...
@patil-suraj I know you're slammed (I see you on every support ticket in this repo!), but any chance you've had an opportunity to ask the Dreambooth authors about the catastrophic...
@jslegers did you have a chance to test that script that adds Textual Inversion to the mix? I haven't cracked the code yet, but so far my successes are from:...
@jslegers Have you by chance tried the JoePenna repo? I'm still trying to pin down the difference, but I think it works better, and I don't know why. They're starting...