cheersfate
Thank you for your great work. Should `p_ema.copy_(p_ema)` be `p_ema.copy_(p)`? ` else: for p_ema, p in zip(G_ema.parameters(), G.parameters()): p_ema.copy_(p_ema) ` https://github.com/zheng-yuwei/enhanced-UGATIT/blob/5294a6ffd2d6f41f4b83ae901c6edb98f02cadc6/UGATIT.py#L811
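For reference, the intended behavior of that branch appears to be a straight copy of the live generator weights into the EMA copy. A minimal pure-Python sketch of that logic (plain lists standing in for tensors, so no PyTorch dependency; names are illustrative):

```python
def copy_params(ema_params, params):
    """Copy each live parameter into its EMA counterpart.

    Mirrors the intended `p_ema.copy_(p)`: after the call, the EMA
    parameters equal the live ones. The questioned `p_ema.copy_(p_ema)`
    would copy each value onto itself and leave the EMA weights unchanged.
    """
    for i, p in enumerate(params):
        ema_params[i] = p  # in PyTorch this would be p_ema.copy_(p)

# Usage: EMA weights take on the live values.
ema = [0.0, 0.0]
live = [1.5, -2.0]
copy_params(ema, live)
# ema is now [1.5, -2.0]
```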
In Colab, the cartoon data is generated from pretrained models that are downloaded from Google Drive. Could you share how you got the cartoon pretrained models? Because I saw your...
Thank you for your great work. When will you release the training script?
Hi, could you share how you achieved these 3 improvements that you mentioned in the readme? -------------------------- 1. Solve the problem of high-frequency artifacts in the generated image. 2. It...
When `finetune_loc` > 0, is the mapping network (MLP) also frozen?
Thank you for your great work. Is it possible to convert an image to a different domain while **_not using latent directions to change latent codes_**? For example, pSp can generate a...
Hi, I saw the augpipe option in train-help.txt. augpipe [blit|geom|color|filter|noise|cutout|bg|bgc|bgcf|bgcfn|bgcfnc] However, what are the details of blit|geom|color|filter? Does geom mean affine transforms? Does color mean color jitter? What are blit and filter?...
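For context, here is a rough sketch of what each augmentation unit covers, based on my reading of StyleGAN2-ADA's `train.py` (`augpipe_specs`). The flag names are my recollection, not an authoritative copy, and should be checked against the actual source:

```python
# Hypothetical reconstruction of StyleGAN2-ADA's augpipe presets
# (flag names recalled from train.py's augpipe_specs; verify against source).
augpipe_specs = {
    # blit: lossless pixel blitting -- x-flips, 90-degree rotations,
    # integer translation (not the same as general affine warps)
    'blit':   dict(xflip=1, rotate90=1, xint=1),
    # geom: general geometric transforms -- isotropic scaling, arbitrary
    # rotation, anisotropic stretch, fractional translation (roughly "affine")
    'geom':   dict(scale=1, rotate=1, aniso=1, xfrac=1),
    # color: color-jitter-like transforms plus luma flip and hue rotation
    'color':  dict(brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1),
    # filter: random amplification/attenuation of image frequency bands
    'filter': dict(imgfilter=1),
    # noise: additive RGB noise
    'noise':  dict(noise=1),
    # cutout: random rectangular cutout
    'cutout': dict(cutout=1),
}
# The combined presets (bg, bgc, bgcf, ...) simply merge the unit dicts:
augpipe_specs['bgc'] = {**augpipe_specs['blit'],
                        **augpipe_specs['geom'],
                        **augpipe_specs['color']}
```

So under this reading, geom is the closest thing to general affine warps, color is broader than plain color jitter (it includes luma flip and hue rotation), blit is restricted to lossless pixel operations, and filter perturbs the image in the frequency domain.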
Thank you for your great work. After reading your paper and GitHub code, I have some questions. Could you help me? 1. In the code: self.illum_gt = self.real_B self.illum_pred =...
I saved the generated data for training VToonify-D and I found some generated portrait data are bad. The following 3 pictures are the generated input, the generated portrait, and the inference result by...
Thank you for your great work. Could you share the data?