MobileStyleGAN.pytorch
About Pixel-Level Distillation Loss
Have you tried using gt['rgb'] instead of gt['img'] to distill the student network? Or is gt['rgb'] useless?
https://github.com/bes-dev/MobileStyleGAN.pytorch/blob/2d18a80bed6be3ec0eec703cc9be50616f2401ee/core/loss/distiller_loss.py#L35
@zhongtao93 gt["rgb"] contains partial sums of gt["img"]. Since we don't aggregate intermediate predictions the way StyleGAN2 does, it isn't correct to use gt["rgb"] here.
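The distinction above can be illustrated with a minimal sketch (not the repository's actual modules): in StyleGAN2's skip architecture each block emits an RGB residual and the final image is the running sum of all of them, so each intermediate RGB is a partial sum of the final image. A student that predicts each scale independently produces intermediate outputs that are not comparable to those partial sums.

```python
import torch
import torch.nn.functional as F

def stylegan2_skip_outputs(rgb_residuals):
    """Aggregate per-block RGB residuals as StyleGAN2's skip generator does.

    Each returned tensor is a partial sum: the previous sum, upsampled 2x,
    plus the current block's residual. The last element is the final image.
    """
    partial_sums = []
    img = None
    for res in rgb_residuals:
        if img is None:
            img = res
        else:
            img = F.interpolate(img, scale_factor=2, mode="nearest") + res
        partial_sums.append(img)
    return partial_sums

# Toy residuals at 4x4, 8x8, 16x16 resolutions.
residuals = [torch.randn(1, 3, 4 * 2**i, 4 * 2**i) for i in range(3)]
sums = stylegan2_skip_outputs(residuals)
# sums[i] depends on all residuals up to scale i, whereas a non-aggregating
# student's i-th output depends only on its own branch at that scale.
```

This is why matching a student's per-scale output against gt["rgb"] would compare an independent prediction to an accumulated one.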
I want to use MobileStyleGAN to blend anime and real-face models, the way StyleGAN is used in Toonify. But I found these properties become weakened, especially when I reduce the model's channels:
- style codes lying in lower layers control coarser attributes like facial shapes,
- middle layer codes control more localized facial features,
- high layer codes correspond to fine details such as reflectance and texture.
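The layer-swap blending described above can be sketched as follows. This is a hedged illustration of the Toonify-style approach, not MobileStyleGAN's API: the helper names (`blend_state_dicts`, `depth_of`, the `swap_depth` cutoff) and the `layers.{i}.*` key pattern are hypothetical. The idea is to keep the coarse (shallow) layers of one generator and take the fine (deep) layers from the other.

```python
import torch

def depth_of(key):
    """Parse a hypothetical 'layers.{i}.*' parameter name into a depth index.

    Returns None for parameters outside the synthesis layers (e.g. mapping
    network weights), which are then kept from the base model.
    """
    parts = key.split(".")
    if len(parts) >= 2 and parts[0] == "layers" and parts[1].isdigit():
        return int(parts[1])
    return None

def blend_state_dicts(base_sd, fine_sd, swap_depth, depth_of=depth_of):
    """Keep base_sd weights for layers shallower than swap_depth,
    take fine_sd weights for deeper (finer-detail) layers."""
    blended = {}
    for key, value in base_sd.items():
        d = depth_of(key)
        blended[key] = fine_sd[key] if (d is not None and d >= swap_depth) else value
    return blended

# Toy example: coarse layers stay from the "real face" model (zeros here),
# fine layers come from the "anime" model (ones here).
base = {"layers.0.weight": torch.zeros(2),
        "layers.3.weight": torch.zeros(2),
        "mapping.weight": torch.zeros(2)}
fine = {k: torch.ones(2) for k in base}
blended = blend_state_dicts(base, fine, swap_depth=2)
```

With reduced channels the per-layer separation of coarse/fine attributes may be weaker, so a hard cutoff like this could blend less cleanly than it does with full-width StyleGAN2.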
@zhongtao93 I haven't tried a Toonify pipeline on top of MobileStyleGAN, but if you have experimental results it would be great if you shared them.