Finetuning Big-LaMa: what losses/validation metrics should I focus on?
Dear authors, thank you for making this great work public.
I have been finetuning Big-LaMa on my own data with my own mask generation, and I would love your advice on how to finetune it as well as possible. Here are the training logs of two of my models, A (link) and B (link). Currently, B is performing better, as shown by its FID and LPIPS metrics (bottom part of the figure). Could you help answer a few questions?
1. Generator training losses: I'm looking at train_gen_fm and train_gen_resnet_pl. For my model A, these losses don't seem to decrease at all as training progresses. For my model B, they look a bit better, but they still don't decrease much. Does this look normal to you? If not, can you explain or guess what the reason could be? (My mental model of the feature-matching loss is in the first sketch after this list.)
2. Gradient penalty loss train_adv_discr_real_gp: How informative is this loss term? Is it just there to keep training stable? (The second sketch after this list shows what I believe it computes.)
3. Validation metrics: Is FID or LPIPS more helpful? I'm also looking at val_gen_resnet_pl, because I guess the perceptual loss should be meaningful on validation as well; is this correct? While it looks like it is improving for model B, model A doesn't seem to improve at all. I'm training both A and B on Places-Standard + Google Landmarks, and the only difference between them is the mask generation algorithm. Model A is trained very similarly to LaMa (e.g., with the same mask generation), so I would expect A to keep improving the longer I train, as you stated. Also, my validation set contains only 200 images; is this too small for FID to be informative? (The third sketch after this list shows how I compute the metrics.)
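To make sure question 1 is clear: my mental model of train_gen_fm is a standard discriminator feature-matching loss, roughly as sketched below. The function and variable names are my own illustration, not LaMa's actual API:

```python
import torch.nn.functional as F

def feature_matching_loss(real_feats, fake_feats):
    # real_feats / fake_feats: lists of intermediate discriminator
    # activations for the real image and the inpainted image
    loss = 0.0
    for rf, ff in zip(real_feats, fake_feats):
        # detach the real-image features so only the generator gets gradients
        loss = loss + F.l1_loss(ff, rf.detach())
    return loss / len(real_feats)
```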
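For question 2, my understanding is that train_adv_discr_real_gp is an R1-style gradient penalty on real images (the squared norm of the discriminator's gradient w.r.t. its real inputs). A minimal sketch of such a penalty, again not the repo's exact code:

```python
import torch

def r1_gradient_penalty(discr_real_logits, real_images):
    # real_images must have requires_grad_(True) set before the
    # discriminator forward pass so autograd can reach the inputs
    grads, = torch.autograd.grad(
        outputs=discr_real_logits.sum(),
        inputs=real_images,
        create_graph=True,  # the penalty itself is backpropagated through
    )
    # mean over the batch of the per-image squared gradient norm
    return grads.pow(2).reshape(grads.shape[0], -1).sum(dim=1).mean()
```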
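For question 3, this is roughly how I compute FID and LPIPS on my 200-image validation set, sketched here with torchmetrics rather than the repo's evaluation scripts; val_loader is my own loader yielding NCHW uint8 batches:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

fid = FrechetInceptionDistance(feature=2048)                    # expects uint8 in [0, 255]
lpips = LearnedPerceptualImagePatchSimilarity(net_type='alex')  # expects floats in [-1, 1]

for real, fake in val_loader:
    fid.update(real, real=True)
    fid.update(fake, real=False)
    lpips.update(fake.float() / 127.5 - 1, real.float() / 127.5 - 1)

print('FID:', fid.compute().item(), 'LPIPS:', lpips.compute().item())
```

Since FID fits Gaussians to Inception features of both image sets, 200 images gives a fairly noisy estimate, which is part of what I'm asking about.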
Thank you in advance!
@vkhoi The validation mainly focuses on lpips_fid100_f1.
Here is my log:

| mask area | fid (mean) | lpips (mean) | lpips (std) | lpips_fid100_f1 (mean) | ssim (mean) | ssim (std) |
|---|---|---|---|---|---|---|
| 0-10% | 9.653877 | 0.021114 | 0.011458 | NaN | 0.978308 | 0.015153 |
| 10-20% | 20.235556 | 0.056794 | 0.017373 | NaN | 0.935468 | 0.034939 |
| 20-30% | 35.758307 | 0.096306 | 0.027018 | NaN | 0.895417 | 0.068059 |
| total | 15.506872 | 0.044931 | 0.027429 | 0.896133 | 0.950229 | 0.041231 |
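For reference, my understanding of how lpips_fid100_f1 combines the two metrics is roughly the sketch below; treat the exact form as an assumption rather than code copied from the repo. Plugging the 'total' row above into it reproduces the logged 0.896133, so it should be close. The combined score is apparently only reported for the 'total' row, hence the NaNs in the per-bucket rows.

```python
def lpips_fid100_f1(lpips_mean, fid_mean, fid_scale=100.0):
    # convert both metrics to "bigger is better" scores in [0, 1],
    # then combine them with an F1-style harmonic mean
    neg_lpips = 1.0 - lpips_mean
    fid_rel = max(0.0, fid_scale - fid_mean) / fid_scale
    return 2.0 * neg_lpips * fid_rel / (neg_lpips + fid_rel + 1e-3)

# 'total' row: lpips_fid100_f1(0.044931, 15.506872) ≈ 0.896133
```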
@vkhoi Hi, I have also been training the Big-LaMa model recently, and my losses look similar to yours. How did your training turn out? Have you found answers to your questions? If you could share some insights, I would greatly appreciate it.
@Sanster @vkhoi Hi, have you found answers to your questions? I would appreciate it if you could share some insights.
I am also wondering: which one is the training loss?