pytorch-TP-GAN
PyTorch reimplementation of TP-GAN, "Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis"
Hello. In pretrain.py, predicts.shape[0] is not equal to labels.shape[0]: predicts.shape[0] is four times labels.shape[0]. What does this mean, and how can I fix it?
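A plausible cause (an assumption, not confirmed from the repo): TP-GAN produces one prediction per local patch in addition to the global face, so the classifier can emit several predictions per input image, and the labels then need to be repeated to match. A minimal sketch of that fix, with placeholder shapes and the repeat factor of 4 taken from the question:

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: the network emits 4 predictions per input image
# (e.g. one per local patch), so predicts.shape[0] == 4 * labels.shape[0].
batch_size, num_classes, preds_per_image = 8, 10, 4

predicts = torch.randn(batch_size * preds_per_image, num_classes)  # [32, 10]
labels = torch.randint(0, num_classes, (batch_size,))              # [8]

# repeat_interleave keeps the predictions for one image adjacent:
# [l0, l0, l0, l0, l1, l1, ...]. If the model instead stacks whole
# batches per patch, labels.repeat(preds_per_image) is the right call.
labels_rep = labels.repeat_interleave(preds_per_image)

loss = F.cross_entropy(predicts, labels_rep)
```

Whether `repeat_interleave` or `repeat` is correct depends on how the predictions are ordered in the batch dimension; that ordering should be checked in pretrain.py itself.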
Which dataset did you use for pre-training? Could you share pretrain_train.list and pretrain_val.list with us?
To run the code directly, it seems some other data files are required, for example pretrain_train.list or img.list?
Hello, can you explain how you trained your network? I have observed that D_loss drops to zero rapidly and all my gradients "vanish" as soon as I start training.
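One common remedy for a discriminator loss collapsing to zero (offered here as a general GAN-training sketch, not as the repo author's actual recipe) is one-sided label smoothing: train the discriminator against a soft "real" target such as 0.9 instead of 1.0, which keeps its outputs away from saturation and leaves the generator a usable gradient. All names below are placeholders:

```python
import torch
import torch.nn as nn

# Sketch: one-sided label smoothing for the discriminator update.
bce = nn.BCELoss()

# Placeholder discriminator outputs in (0, 1); in practice these come
# from D(real_batch) and D(G(input_batch)).
d_out_real = torch.sigmoid(torch.randn(16, 1))
d_out_fake = torch.sigmoid(torch.randn(16, 1))

real_targets = torch.full_like(d_out_real, 0.9)  # smoothed from 1.0
fake_targets = torch.zeros_like(d_out_fake)      # fake targets stay at 0

d_loss = bce(d_out_real, real_targets) + bce(d_out_fake, fake_targets)
```

Other standard levers are lowering the discriminator's learning rate or updating it less often than the generator; which one helps depends on the rest of the training setup.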
In the paper, the expression of the L_ip (identity preserving) loss is [equation image not shown], so I think in your code it should be: feature_frontal, fc_frontal = feature_extract_model(batch['img_frontal']); feature_predict, fc_predict = feature_extract_model(img128_fake)...
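Following the snippet in the question, a hedged sketch of an identity-preserving loss: the distance between the feature extractor's activations (in TP-GAN, a pre-trained Light CNN) on the ground-truth frontal image and on the generated one. The two-tuple return and the use of L1 distance are assumptions; the paper and the repo should be checked for the exact layers and norm used:

```python
import torch
import torch.nn.functional as F

def identity_preserving_loss(feature_extract_model, img_frontal, img_fake):
    # feature_extract_model is assumed to return two tensors per image,
    # e.g. last-pooling-layer features and fc-layer features, as the
    # snippet in the question suggests.
    feature_frontal, fc_frontal = feature_extract_model(img_frontal)
    feature_predict, fc_predict = feature_extract_model(img_fake)
    # Penalize the generated image's features for drifting from the
    # ground-truth frontal image's features at both layers.
    return (F.l1_loss(feature_predict, feature_frontal)
            + F.l1_loss(fc_predict, fc_frontal))
```

Note that the feature extractor is typically frozen here, so gradients flow only into the generated image (and hence the generator), not into the extractor's weights.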
Could you please publish some of your results? Also, what is the reason for your deviations from the original model? Did they improve the results?