BeautyGAN_pytorch
makeup loss
Hi, thanks for your implementation work, but I doubt whether the implementation of the makeup loss is correct. The paper says we should first compute histogram matching between the source image and the reference image, then compute the L2 norm between the fake image produced by the generator and the matched image. I find that you just compute the makeup loss between the generated image and the reference image, without normalizing the values to [-1, 1], and use the L1 norm. Could you help me? Thanks!
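For what it's worth, the loss described in the paper (histogram-match the source to the reference, then penalize the L2 distance between the generated image and the matched image) could look roughly like the sketch below. This is only an illustration of that formulation, not this repo's code: `match_histograms_np` and `makeup_loss` are made-up names, and in BeautyGAN the matching is actually applied per facial region (lips, skin, eyes) using parsing masks, which the sketch omits.

```python
import numpy as np
import torch
import torch.nn.functional as F

def match_histograms_np(src, ref):
    """Per-channel histogram matching of `src` to `ref`; both are (H, W, C) float arrays."""
    matched = np.empty_like(src)
    for c in range(src.shape[2]):
        s = src[..., c].ravel()
        r = ref[..., c].ravel()
        s_vals, s_idx, s_counts = np.unique(s, return_inverse=True, return_counts=True)
        r_vals, r_counts = np.unique(r, return_counts=True)
        s_cdf = np.cumsum(s_counts) / s.size
        r_cdf = np.cumsum(r_counts) / r.size
        # Map each source quantile to the reference value at the same quantile.
        matched_vals = np.interp(s_cdf, r_cdf, r_vals)
        matched[..., c] = matched_vals[s_idx].reshape(src.shape[:2])
    return matched

def makeup_loss(fake, src, ref):
    """L2 between the generated image and HM(src, ref); tensors are (C, H, W), same value range."""
    src_np = src.detach().permute(1, 2, 0).cpu().numpy()
    ref_np = ref.detach().permute(1, 2, 0).cpu().numpy()
    hm = match_histograms_np(src_np, ref_np)
    hm = torch.from_numpy(hm).permute(2, 0, 1).to(fake.device, fake.dtype)
    return F.mse_loss(fake, hm)

# Example usage with random images.
if __name__ == "__main__":
    fake, src, ref = torch.rand(3, 64, 64), torch.rand(3, 64, 64), torch.rand(3, 64, 64)
    print(makeup_loss(fake, src, ref).item())
```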
That may differ a little from the statement in the paper. You can also try to implement it yourself and make a PR.
Hi, I have implemented the makeup loss myself before, almost the same as your implementation. But it seems that the color of the HM result is sometimes far from the reference image.
Yes, your observation is normal. HM is only coarse guidance, which needs adversarial training to refine. If its results were perfect, there would be no need to propose BeautyGAN.
Okay. Could you please tell me how to balance the weights of all the losses? I have tried many times before...
lambda_his = 1
lambda_his_lip = 1
lambda_skin = 0.1
lambda_eye = 1
These are the default settings in 'train.py'.
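In case it helps, here is a rough sketch of how such weights might enter the generator objective. This is only an assumption about the structure (the loss variables below are placeholders, not the repo's exact code), and the cycle and perceptual terms with their weights are omitted.

```python
import torch

# Illustrative stand-ins for the individual generator loss terms; in real training
# these come from the discriminator and the per-region histogram-matching losses.
loss_adv = torch.tensor(1.0)
loss_his_lip = torch.tensor(0.5)
loss_his_skin = torch.tensor(0.5)
loss_his_eye = torch.tensor(0.5)

# Default weights quoted above (train.py).
lambda_his = 1.0       # global scale on the histogram (makeup) losses
lambda_his_lip = 1.0
lambda_skin = 0.1
lambda_eye = 1.0

g_loss = loss_adv + lambda_his * (lambda_his_lip * loss_his_lip
                                  + lambda_skin * loss_his_skin
                                  + lambda_eye * loss_his_eye)
print(g_loss)
```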
Thank you. It is the same as the description in the paper. I wonder how you arrived at those weights, and at the weight of the GAN loss.
Does it work? How do I import the dataset? I have a problem importing it.