Adalberto

Results: 20 comments of Adalberto

Commenting out the line trainer_config["distributed_backend"] = "ddp" in main.py worked for me.
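A minimal sketch of why this workaround helps, assuming main.py builds a trainer from a plain config dict (the surrounding keys and values here are illustrative, not from the actual repo): with the "ddp" line disabled, the trainer falls back to ordinary single-process training.

```python
# Illustrative trainer config; the only line taken from the comment is
# the disabled "distributed_backend" assignment below.
trainer_config = {
    "max_epochs": 10,   # hypothetical value
    "gpus": 1,          # hypothetical value
}

# The problematic line, now commented out as the workaround suggests:
# trainer_config["distributed_backend"] = "ddp"

# With the key absent, code reading the config falls back to a
# single-process default instead of spawning DDP workers.
backend = trainer_config.get("distributed_backend", "single-process")
print(backend)  # → single-process
```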

I have a PR with the code; you can test it if you want: https://github.com/huggingface/diffusers/pull/1091

I think that we need to specify the number of ResNet layers in each block and also the number of attention heads, rather than the attention dim. This means that we...
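A hedged sketch of the kind of config change being proposed: per-block ResNet counts and per-block attention head counts instead of a single attention dimension. The key names and numbers below are assumptions for illustration, not the final diffusers API.

```python
# Hypothetical UNet config sketch. Tuples give one entry per
# down/up block; a 0 means "no attention in this block".
unet_config = {
    "layers_per_block": (3, 3, 3, 3),     # ResNet layers in each block
    "attention_head_dim": None,           # not specified directly...
    "num_attention_heads": (0, 4, 8, 8),  # ...heads per block instead
}

# Derived view: which blocks actually use attention.
attention_blocks = [i for i, h in enumerate(unet_config["num_attention_heads"]) if h > 0]
print(attention_blocks)  # → [1, 2, 3]
```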

@jenkspt Yes, from 64 to 256 like in the imagen paper

That is exactly what I am doing: I am adding the stretched low-res image in the extra channels, so I have 6 channels in the input and 3...
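The channel trick above can be sketched as follows, assuming CHW images and that the low-res image has already been stretched to the target size (nested lists stand in for tensors here; in practice this would be torch.cat along the channel axis):

```python
# Sketch: condition a super-resolution UNet on the low-res image by
# concatenating it channel-wise with the noisy high-res input,
# growing the input from 3 to 6 channels.
def concat_channels(noisy_hr, stretched_lr):
    """Concatenate two CHW images (as nested lists) along channels."""
    return noisy_hr + stretched_lr

H = W = 4  # tiny dummy resolution
noisy_hr = [[[0.0] * W for _ in range(H)] for _ in range(3)]      # 3 x H x W
stretched_lr = [[[1.0] * W for _ in range(H)] for _ in range(3)]  # 3 x H x W

x = concat_channels(noisy_hr, stretched_lr)
print(len(x))  # → 6 input channels
```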

Hey guys, I'm thinking of adding the option to create the mask with CLIPSeg instead of just using random masks. What do you think? I believe it could improve training...
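A hedged sketch of the post-processing step such an option would need: CLIPSeg produces a per-pixel relevance logit for a text prompt, and a binary inpainting mask can be obtained by thresholding the sigmoid of those logits (the 0.5 threshold is an assumption, and the logits below are dummy values, not real model output).

```python
import math

def logits_to_mask(logits, threshold=0.5):
    """Turn a 2-D grid of relevance logits into a binary 0/1 mask."""
    return [[1 if 1 / (1 + math.exp(-v)) > threshold else 0 for v in row]
            for row in logits]

# Dummy 2x2 logit grid standing in for CLIPSeg output.
mask = logits_to_mask([[-2.0, 3.0], [0.1, -0.5]])
print(mask)  # → [[0, 1], [1, 0]]
```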

Hey @patrickvonplaten, sure. I think I'll just have to make a few adjustments to support stable diffusion v2.

Hello @loboere, I tried with --use_8bit_adam and got bad results as well, but with different params my results were better. > accelerate launch dreambooth_inpaint.py ^ --pretrained_model_name_or_path="runwayml/stable-diffusion-inpainting" ^ --instance_data_dir="./toy_cat" ^ --output_dir="./dreambooth_ad_inpaint_toy_cat"...

Hello @kunalgoyal9, sometimes the loss doesn't decrease but you can still get good results. Did you check the outputs from your model?

Can you try using just "toy cat" as the prompt?