
how to train Inpainting model using our own datasets?

Open dreamlychina opened this issue 1 year ago • 6 comments

Thanks for sharing this amazing work. I need to use your inpainting model on my own dataset, but I can't find any description of how to train the inpainting model on a custom dataset.

dreamlychina avatar Jun 08 '23 08:06 dreamlychina

I have a similar question.

Looking at scripts/inpaint.py, the masked image is encoded and used as a condition. Wasn't the autoencoder (a VQModel, actually) trained on full images rather than masked images? Does it work well to encode a masked image with an autoencoder trained on full images?

bring728 avatar Jun 09 '23 04:06 bring728

resolved?

nickyisadog avatar Jul 05 '23 08:07 nickyisadog

The autoencoder accepts a masked image and produces a 128×128 feature map, which is then concatenated with the mask (downsampled to 128×128).

The input to the UNet should be [batch_size, 7, 128, 128],

where 7 = 3 (noise) + 3 (masked_image) + 1 (mask).
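A minimal sketch of this channel concatenation, assuming 3-channel latents at 128×128 and a binary pixel-space mask (the shapes here are illustrative, not taken from the repo's configs):

```python
import torch
import torch.nn.functional as F

batch_size = 2

# Noised image latent and encoded masked-image latent, each 3 channels.
noisy_latent = torch.randn(batch_size, 3, 128, 128)
masked_image_latent = torch.randn(batch_size, 3, 128, 128)

# Binary mask at pixel resolution, downsampled to the latent resolution.
mask = torch.rand(batch_size, 1, 512, 512).round()
mask_latent = F.interpolate(mask, size=(128, 128), mode="nearest")

# Concatenate along the channel dimension: 3 + 3 + 1 = 7 channels.
unet_input = torch.cat([noisy_latent, masked_image_latent, mask_latent], dim=1)
# unet_input.shape -> torch.Size([2, 7, 128, 128])
```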

nickyisadog avatar Jul 18 '23 07:07 nickyisadog

Hello, may I ask: when running "python scripts/inpaint.py --indir data/inpainting_examples/ --outdir outputs/inpainting_results", did you also encounter the error below? Can you help me solve it?

Traceback (most recent call last):
  File "scripts/inpaint.py", line 60, in <module>
    model = instantiate_from_config(config.model)
  File "/root/autodl-tmp/latent-diffusion-main/scripts/ldm/util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/autodl-tmp/latent-diffusion-main/scripts/ldm/models/diffusion/ddpm.py", line 460, in __init__
    self.instantiate_first_stage(first_stage_config)
  File "/root/autodl-tmp/latent-diffusion-main/scripts/ldm/models/diffusion/ddpm.py", line 503, in instantiate_first_stage
    model = instantiate_from_config(config)
  File "/root/autodl-tmp/latent-diffusion-main/scripts/ldm/util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/autodl-tmp/latent-diffusion-main/scripts/ldm/models/autoencoder.py", line 266, in __init__
    super().__init__(embed_dim=embed_dim, *args, **kwargs)
  File "/root/autodl-tmp/latent-diffusion-main/scripts/ldm/models/autoencoder.py", line 39, in __init__
    self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25,
TypeError: __init__() got an unexpected keyword argument 'remap'

fittiing avatar Aug 05 '23 07:08 fittiing

Hello, I have simplified the inpainting fine-tuning and added some inference examples in my repo. Feel free to check it out:

https://github.com/nickyisadog/latent-diffusion-inpainting/tree/main

nickyisadog avatar Sep 22 '23 05:09 nickyisadog

Okay, thank you very much  


fittiing avatar Sep 22 '23 06:09 fittiing