swapping-autoencoder-pytorch
Texture Sticking Artifacts
I am seeing very impressive results when applying this to single images, but for videos it often produces "texture sticking" artifacts similar to those noted in Alias-Free GAN (StyleGAN3).
I am looking into porting the SG3 layers to substitute for SG2, but this seems like a non-trivial project. Do you have any suggestions for simpler ways to mitigate this type of artifact?
Thanks!
Hi @eridgd, I agree that would be a good project! I actually haven't seen much of the sticking effect with Swapping Autoencoder on non-face datasets, so I'd love to see some examples of the texture sticking artifacts you're getting. My impression is that texture sticking is more pronounced on face datasets because of landmark alignment. To that end, maybe training with translation augmentation can mitigate it?
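
For reference, here is a minimal sketch of what that translation augmentation could look like with standard torchvision transforms. It is not wired into this repo's data pipeline; the dataset path, image size, and shift amount are placeholder assumptions, and the idea is just to randomly offset each training image so textures are not always anchored to the same pixel coordinates.

```python
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

train_transform = T.Compose([
    T.Resize(256),
    T.CenterCrop(256),
    # Shift each image by up to ~6% of its width/height so landmarks and
    # textures don't sit at fixed pixel positions across the dataset.
    T.RandomAffine(degrees=0, translate=(0.0625, 0.0625)),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

# Hypothetical dataset path -- substitute your own training set.
dataset = ImageFolder("path/to/train", transform=train_transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4)
```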