Bringing-Old-Photos-Back-to-Life

Can you release the training log?

Open maryhh opened this issue 3 years ago • 8 comments

Can you release the training log (the loss log), so I can track the training process?

maryhh avatar May 11 '21 07:05 maryhh

I'm also looking forward to the training log. In my experiment, using the default model settings and my own dataset:

When training model A, G_featD is very high (around 1.5-1.7) while featD_real and featD_fake are very low (around 0.08-0.12). When training the Mapping model, even though I used g_lr = x and d_lr = 0.6x, G_GAN stays around 1.7-2.0 while D_real and D_fake stay around 0.03-0.06.

I think this means the GAN training has already collapsed, which leads to the checkerboard effect.

Could you please release the training log, so we can see how these losses vary when everything goes well? @raywzy
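For anyone reproducing the g_lr = x, d_lr = 0.6x setting above, a minimal PyTorch sketch of giving the generator and discriminator different learning rates looks like this (G, D, and the base_lr value of 2e-4 are placeholders, not the repo's actual modules or config):

```python
import torch
import torch.nn as nn

# Placeholder generator/discriminator modules (stand-ins for the real networks).
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
D = nn.Conv2d(3, 1, kernel_size=3, padding=1)

base_lr = 2e-4  # "x" in the comment above; assumed value for illustration

# Separate optimizers let the discriminator learn slower than the generator,
# one common way to keep D from overpowering G during GAN training.
opt_G = torch.optim.Adam(G.parameters(), lr=base_lr, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=0.6 * base_lr, betas=(0.5, 0.999))
```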

MengXinChengXuYuan avatar May 17 '21 07:05 MengXinChengXuYuan

Hi @MengXinChengXuYuan, what's your --k_size value?

syfbme avatar May 18 '21 01:05 syfbme

Hi @MengXinChengXuYuan What's your --k_size value?

I redesigned the network for efficiency. Most conv kernels are 3x3, and the transposed convs are 4x4. I don't think this has much effect.

MengXinChengXuYuan avatar May 21 '21 05:05 MengXinChengXuYuan


Maybe you can try replacing the deconv with upsample + conv to get rid of the checkerboard effect.
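As a sketch of that suggestion (not the repo's actual code): a stride-2 4x4 ConvTranspose2d can be swapped for nearest-neighbor upsampling followed by a 3x3 conv, which avoids the uneven kernel overlap that causes checkerboard artifacts (see https://distill.pub/2016/deconv-checkerboard/). The helper name and channel sizes here are made up for illustration:

```python
import torch
import torch.nn as nn

def upsample_conv(in_ch, out_ch):
    """Hypothetical drop-in for a stride-2 ConvTranspose2d: upsample, then conv."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
    )

# Both paths perform the same 2x spatial upscaling:
x = torch.randn(1, 64, 16, 16)
deconv = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)
assert deconv(x).shape == upsample_conv(64, 32)(x).shape  # both (1, 32, 32, 32)
```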

syfbme avatar May 24 '21 07:05 syfbme


Thanks a lot for the advice! I tried, following https://distill.pub/2016/deconv-checkerboard/, but whether I use upsample + conv or ZOH, there are always other problems :( When the computational complexity of G is low, this gets even worse.

MengXinChengXuYuan avatar May 24 '21 08:05 MengXinChengXuYuan

Hi @MengXinChengXuYuan, what do you mean by "there are always other problems"? There is no checkerboard effect, but some other issue? If so, what is it?

syfbme avatar May 24 '21 08:05 syfbme

Hi @MengXinChengXuYuan, may I ask what range your G_Feat_L2 loss stays in when training the mapping model?

Mr-doraemon avatar Jun 16 '21 08:06 Mr-doraemon


Did you manage to solve the checkerboard-effect problem in the end?

Ssakura-go avatar Oct 09 '21 16:10 Ssakura-go