style-based-gan-pytorch
Constraining output to a particular size
Is it possible to specify the maximum size of the output image? I do not want to invest resources in training up to 1024x1024 images and am okay with going only up to 256x256 output. Do I need to make any changes to the training?
Also, I'm going to try training this on the CelebA dataset and was wondering how the preprocessing (resizing) would work out of the box, and whether any changes are needed. I know that for the ProGAN paper the authors provided deltas to reconstruct high-quality images at larger dimensions, whereas here it seems we use simple Lanczos interpolation with a center crop. Is that usually enough?
You can use the --max_size option to restrict the maximum image size.
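For example, something like this (a sketch only: the --max_size flag comes from the reply above, while the train.py script name and dataset path are assumptions about your setup):

```bash
# Hypothetical invocation; adjust the script name and dataset path to your setup.
python train.py --max_size 256 path/to/prepared_dataset
```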
I didn't integrate code for constructing the CelebA-HQ dataset, but it should be enough for generating interesting samples at lower resolutions.
Thank you for the input. Any comments on the resizing techniques I was asking about earlier? It seems to me that since the aspect ratio of the dataset images is not square, simply resizing them to a square image would likely cause warping and blurriness in the output as we scale up.
Since the code uses torchvision's resize function, it will not squash images to a different aspect ratio (resize preserves the aspect ratio unless both height and width are specified), so a center crop is needed.
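To make that concrete, here is a minimal sketch of that kind of preprocessing with torchvision transforms. This is an illustration, not the repository's exact pipeline; the 256 target size is just an example, and newer torchvision versions express the interpolation via InterpolationMode (older ones take PIL.Image.LANCZOS directly).

```python
from PIL import Image
from torchvision import transforms
from torchvision.transforms import InterpolationMode

# Resize the shorter side to 256 with Lanczos interpolation (aspect ratio is
# preserved because only one size is given), then take a square center crop.
preprocess = transforms.Compose([
    transforms.Resize(256, interpolation=InterpolationMode.LANCZOS),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

# Example usage on a single image (path is a placeholder): yields a 3x256x256 tensor.
img = preprocess(Image.open("example.jpg").convert("RGB"))
```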