swapping-autoencoder-pytorch
Training on FFHQ - 1024px
Hi, and thanks for sharing your implementation.
After successfully training the model on FFHQ at 256 and 512 resolutions, I want to scale up to 1024, but I get a size mismatch in the discriminator; it looks like there is an issue with the Linear layer's dimensions.
Can you confirm that the `size` arguments of both the Discriminator and the Cooccur Discriminator should be 1024 as well?
Could you also explain why, at this line, the input channels are multiplied twice by 4?
Thanks a lot for your answer!
Because the spatial dimensions of the feature map are flattened. You will need to increase `feat_size` (https://github.com/rosinality/swapping-autoencoder-pytorch/blob/2c50ad602635423ebb87b218f052792c08e118b0/model.py#L368) in the cooccur discriminator to use 1024px.
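For anyone else hitting this, here is a minimal sketch of why flattening forces the `* 4 * 4` factor on the Linear layer's input, and why a larger input resolution breaks it (this is not the repo's actual code; the channel count is an assumed value for illustration):

```python
import torch
from torch import nn

# Assumed values for illustration only.
channel = 256                      # channels of the last conv feature map
feat_size = 4                      # spatial side of that feature map (4x4)

# The first Linear layer consumes the flattened feature map, so
# in_features = channel * feat_size * feat_size -- that is the "* 4 * 4".
linear = nn.Linear(channel * feat_size * feat_size, 1)

feat = torch.randn(2, channel, feat_size, feat_size)  # batch of 2
out = linear(feat.flatten(1))                         # flatten C, H, W together
print(out.shape)                                      # torch.Size([2, 1])

# Doubling the input resolution (with the same conv stack) doubles the
# feature map side, so the flattened size no longer matches the Linear
# layer's weight matrix and you get the reported RuntimeError:
bigger = torch.randn(2, channel, feat_size * 2, feat_size * 2)
try:
    linear(bigger.flatten(1))
except RuntimeError as e:
    err = e
    print(e)
```

Increasing `feat_size` to match the larger feature map (or keeping the feature map at 4x4 by adding a downsampling step) resolves the mismatch.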
Got it, thanks !
Hi, would you please guide me on how to set the parameters for 512? I am using this command: `python train.py --batch 4 --size 512 preprocessed`, and I get this error: `RuntimeError: mat1 dim 1 must match mat2 dim 0`.
@gilevir would you please help me to set parameters for training on 512 resolution?