swapping-autoencoder-pytorch
Official Implementation of Swapping Autoencoder for Deep Image Manipulation (NeurIPS 2020)
Really impressive work and high-quality code release! I found several intriguing design choices while digging into the codebase, and I'm looking for some clarifications or explanations of them: 1. **Blur in...
Hi! Can you please explain why the discriminator is trained when the training mode is 'generator'? Thanks. https://github.com/taesungp/swapping-autoencoder-pytorch/blob/d67e60f8a702868d08e5b943fc7f46908d78e48b/optimizers/swapping_autoencoder_optimizer.py#L61-L64
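For background, here is a minimal sketch of the conventional alternating generator/discriminator update pattern in PyTorch. It is not the repo's actual optimizer and does not answer the question; the helpers (`set_requires_grad`, `g_loss_fn`, `d_loss_fn`) are placeholders. It only shows the usual convention in which each mode updates only the corresponding network's parameters, even though both networks run in the forward pass.

```python
import torch

def set_requires_grad(module: torch.nn.Module, flag: bool) -> None:
    """Toggle gradient tracking for every parameter of a module."""
    for p in module.parameters():
        p.requires_grad_(flag)

def generator_step(G, D, opt_G, g_loss_fn, batch):
    # D still runs in the forward pass, but its parameters receive no
    # gradients and opt_G only holds G's parameters.
    set_requires_grad(D, False)
    opt_G.zero_grad()
    g_loss_fn(G, D, batch).backward()
    opt_G.step()

def discriminator_step(G, D, opt_D, d_loss_fn, batch):
    set_requires_grad(D, True)
    opt_D.zero_grad()
    d_loss_fn(G, D, batch).backward()
    opt_D.step()
```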
I'm trying to add mixed precision training support. I'm a newbie at this~ What I have figured out so far is that the `upfirdn2d` & `fused` modules are compiled at runtime. I...
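In case it helps anyone going down the same path, below is a minimal sketch of PyTorch's native mixed-precision API (`torch.cuda.amp`); `model`, `optimizer`, `loss_fn`, `inputs`, and `targets` are placeholders, not names from this repo. Whether the runtime-compiled `upfirdn2d`/`fused` kernels behave correctly inside `autocast` is an open question; they may need their inputs cast back to FP32 explicitly.

```python
import torch

scaler = torch.cuda.amp.GradScaler()   # scales the loss to avoid FP16 gradient underflow

def training_step(model, optimizer, loss_fn, inputs, targets):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():     # eligible ops run in reduced precision
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)
    scaler.scale(loss).backward()       # backprop through the scaled loss
    scaler.step(optimizer)              # unscales gradients, then optimizer.step()
    scaler.update()                     # adjusts the scale factor for the next step
```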
Hello, I load the checkpoints and test photos, run the command `python -m experiments ffhq512_pretrained test swapping_grid`, and get the output  The same happens with other commands. I didn't modify...
Hi, I find the work of this repo very interesting and I would like to try it, but Python 3.6, the version referred to in the README, is currently...
Hello, thank you for your work. I hope others notice that when you want dim > 2, you need to make the corresponding modifications here:

```python
style = self.modulation(style)
if self.demodulate:
    style = style * torch.rsqrt(style.pow(2).mean([1], keepdim=True)...
```
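For what it's worth, a shape-agnostic version of that normalization might look like the sketch below. This is only a guess at the "corresponding modifications": the helper is hypothetical, and the right reduction dimensions ultimately depend on how `style` is reshaped later in the modulated convolution.

```python
import torch

def rms_normalize_style(style: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Hypothetical helper, not from the repo: reduce over the channel dimension
    # for 2-D styles, and over all non-batch dimensions when dim > 2.
    dims = [1] if style.dim() == 2 else list(range(1, style.dim()))
    return style * torch.rsqrt(style.pow(2).mean(dims, keepdim=True) + eps)
```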
Thank you very much for the code! It's really great! In the `compute_generator_losses` function, one can read:

```python
if self.opt.lambda_PatchGAN > 0.0:
    real_feat = self.Dpatch.extract_features(
        self.get_random_crops(real),
        aggregate=self.opt.patch_use_aggregation).detach()
    mix_feat = self.Dpatch.extract_features(self.get_random_crops(mix))
    ...
```
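For readers skimming: that snippet looks like the generator side of a patch co-occurrence loss, roughly as in the sketch below. The `discriminate_features` call and the softplus loss here are illustrative assumptions rather than code copied from the repo; the notable detail is that the real-patch features are `detach()`ed, so no generator gradient flows through the reference branch.

```python
import torch
import torch.nn.functional as F

def patch_cooccurrence_g_loss(Dpatch, real_crops, mix_crops):
    # Reference features from real patches; detach() keeps the generator's
    # gradient from flowing through the reference branch.
    real_feat = Dpatch.extract_features(real_crops).detach()
    mix_feat = Dpatch.extract_features(mix_crops)
    # Hypothetical head: score mix patches against the real references; the
    # generator wants them classified as "real" (non-saturating GAN loss).
    logits = Dpatch.discriminate_features(mix_feat, real_feat)
    return F.softplus(-logits).mean()
```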