Jack Qiao
Ok, I see. The first thing is that the model doesn't like hard edges: it sees the edge and thinks it's the beginning of a separate frame. I usually...
If you just want to use the inpainting model, you shouldn't have to do any training: download the pretrained inpainting model and use the CLI commands in the readme. If...
I just added some code to load the weights from an uninitialized SD model; it should work better now if you want to train from a base SD model. I trained...
The inpaint model isn't compatible with other SD tools, unfortunately: the UNet has a slightly different architecture.
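The usual trick for initializing this kind of model from base SD weights is to widen the first conv and zero the extra input channels. A minimal sketch, not this repo's actual code; the channel counts follow the common SD-inpainting layout and may differ here:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the first UNet conv. The base SD conv takes 4 latent
# channels; a typical inpainting conv takes 9 (4 latent + 4 masked-image
# latent + 1 mask). Counts are illustrative.
base_conv = nn.Conv2d(4, 320, kernel_size=3, padding=1)
inpaint_conv = nn.Conv2d(9, 320, kernel_size=3, padding=1)

with torch.no_grad():
    inpaint_conv.weight.zero_()
    # Copy base weights into the first 4 input channels; the extra
    # channels start at zero, so at initialization the model ignores
    # them and behaves like the base model.
    inpaint_conv.weight[:, :4] = base_conv.weight
    inpaint_conv.bias.copy_(base_conv.bias)
```

All the other layers match, so they can be loaded directly (e.g. with `strict=False` to skip the mismatched conv).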
With this repo you can do partial inpainting with the --skip_timesteps flag, so 90% denoising would be --steps 50 --skip_timesteps 5. This only works for inpainting though, and it would...
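For reference, the arithmetic behind those numbers: you skip the fraction of steps you *don't* want denoised. A tiny sketch (the helper name is made up; only --steps and --skip_timesteps are real flags):

```python
def skip_for_strength(total_steps: int, strength: float) -> int:
    """Steps to skip for a given denoising strength.

    50 steps at 90% denoising -> skip the first 5,
    i.e. --steps 50 --skip_timesteps 5.
    """
    return round(total_steps * (1.0 - strength))

print(skip_for_strength(50, 0.9))  # 5
```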
clip_proj should be removed. It was meant to project a (single) CLIP embedding to the DDPM timestep embedding dimension, to replicate GLIDE, which was the original goal of this project...
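For anyone curious what that projection would look like, here's a rough sketch of the GLIDE-style conditioning it was meant to replicate. The dimensions (768 for CLIP, 1024 for the timestep embedding) are illustrative, not the repo's actual config:

```python
import torch
import torch.nn as nn

# Illustrative dims only: 768 = CLIP embedding size, 1024 = DDPM
# timestep embedding size. Real values depend on the model config.
clip_proj = nn.Linear(768, 1024)

def conditioned_t_emb(t_emb: torch.Tensor, clip_emb: torch.Tensor) -> torch.Tensor:
    # GLIDE-style conditioning: add the projected CLIP embedding to the
    # timestep embedding before it enters the UNet blocks.
    return t_emb + clip_proj(clip_emb)
```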
Hm, are you looking at the EMA checkpoint? At 0.9999 EMA it can take more than 10,000 steps to see a change.
This code implements data-parallel training. Each GPU...
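On the EMA point, the math behind the slowness: each step the shadow weights move only (1 − 0.9999) = 0.01% toward the live weights, so even after 10,000 steps only about 63% of a weight change has shown up in the EMA. A sketch of the standard update:

```python
import torch

decay = 0.9999

def ema_update(ema_params, model_params, decay=0.9999):
    # Standard EMA step: shadow weights move (1 - decay) of the way
    # toward the live weights each call.
    with torch.no_grad():
        for e, p in zip(ema_params, model_params):
            e.mul_(decay).add_(p, alpha=1 - decay)

# Fraction of a weight change visible in the EMA after n steps:
for n in (1_000, 10_000, 50_000):
    print(n, 1 - decay ** n)  # ~0.10, ~0.63, ~0.99
```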
Kind of. You can backprop the gradients through the VAE, but it uses a lot of VRAM and doesn't work that well in my experience. Ideally there should be a...
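Roughly what that looks like, sketched with a diffusers-style AutoencoderKL (the checkpoint name is just an example). The VRAM cost comes from keeping all the decoder activations around for the backward pass:

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

latents = torch.randn(1, 4, 64, 64, requires_grad=True)
target = torch.rand(1, 3, 512, 512)  # pixel-space target in [0, 1]

# Decode WITHOUT torch.no_grad() so gradients flow back through the
# decoder; storing those activations is what eats the VRAM.
decoded = vae.decode(latents / vae.config.scaling_factor).sample
loss = F.mse_loss((decoded + 1) / 2, target)  # decoder output is ~[-1, 1]
loss.backward()
print(latents.grad.shape)  # pixel-space loss, latent-space gradients
```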
Not sure, but maybe have a look at https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
Should be fixed now.