MobileStyleGAN.pytorch

About Pixel-Level Distillation Loss

zhongtao93 opened this issue 4 years ago • 3 comments

Have you tried using gt['rgb'] instead of gt['img'] to distill the student network? Or is gt['rgb'] useless?

https://github.com/bes-dev/MobileStyleGAN.pytorch/blob/2d18a80bed6be3ec0eec703cc9be50616f2401ee/core/loss/distiller_loss.py#L35

zhongtao93 avatar Nov 03 '21 06:11 zhongtao93

@zhongtao93 gt["rgb"] contains partial sums of gt["img"]. Since we don't aggregate intermediate predictions the way StyleGAN2 does, it isn't correct to use gt["rgb"] here.
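A minimal sketch of the pairing this implies, assuming the teacher's output dict uses the gt["img"] / gt["rgb"] fields discussed above and the student returns its final image under an "img" key (that key name is an assumption, not the repo's actual distiller_loss.py code):

```python
import torch.nn.functional as F

def pixel_level_distillation_loss(student_out, gt):
    # gt["img"] is the teacher's final image; gt["rgb"] holds the running
    # partial sums produced by the teacher's to-RGB skip connections, so
    # only the last entry equals gt["img"] (under this interpretation).
    # The student does not accumulate intermediate predictions the
    # StyleGAN2 way, so its intermediate outputs are not comparable to
    # gt["rgb"]; the pixel loss therefore targets gt["img"] only.
    return F.mse_loss(student_out["img"], gt["img"])
```

In other words, distilling against any gt["rgb"][k] except the last would force the student to match a partially rendered image.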

bes-dev avatar Nov 03 '21 08:11 bes-dev

I ask because I want to use MobileStyleGAN to blend anime and real-face models, like StyleGAN in toonify. But I found this property becomes weaker, especially when I reduce the model's channels (a layer-swap sketch follows the list below). The property:

  1. style codes lying in lower layers control coarser attributes like facial shapes,
  2. middle layer codes control more localized facial features,
  3. high layer codes correspond to fine details such as reflectance and texture.
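For reference, a toonify-style blend is usually done by swapping weights between the two generators by resolution. Below is a generic, hypothetical sketch, not MobileStyleGAN's actual API: the parse_resolution helper and the "NxN" parameter-naming convention are assumptions, and both generators must share the same architecture.

```python
import copy
import re

def parse_resolution(param_name):
    # Hypothetical helper: pull a resolution like "64x64" out of the
    # parameter name; return None for layers that are not resolution-specific.
    m = re.search(r"(\d+)x\1", param_name)
    return int(m.group(1)) if m else None

def blend_generators(g_base, g_finetuned, swap_from_resolution=32):
    # Keep the coarse (low-resolution) layers of the base generator so facial
    # shape and pose are preserved, and take the fine (high-resolution) layers
    # from the fine-tuned (e.g. anime) generator so texture comes from the new
    # domain. Both models must have identical state_dict keys.
    blended = copy.deepcopy(g_base)
    src = g_finetuned.state_dict()
    dst = blended.state_dict()
    for name in dst:
        res = parse_resolution(name)
        if res is not None and res >= swap_from_resolution:
            dst[name] = src[name].clone()
    blended.load_state_dict(dst)
    return blended
```

The swap_from_resolution boundary controls the trade-off: a lower boundary takes more layers from the fine-tuned generator (stronger stylization), while a higher boundary preserves more of the base generator's geometry.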

zhongtao93 avatar Nov 03 '21 08:11 zhongtao93

@zhongtao93 I haven't tried a toonify pipeline on top of MobileStyleGAN, but if you have any experimental results it would be great if you shared them.

bes-dev avatar Nov 03 '21 12:11 bes-dev