infinityGAN
Code and training issues
https://github.com/hubert0527/infinityGAN/blob/86e4715cbea2cc0940954bf646d3957454d4e5ed/train.py#L365-L366
- Is this a mistake in L365-L366?
- NaN in my training. Logs: WARNING:root:NaN or Inf found in input tensor. [*] EXP: InfinityGAN; d: nan; g: 2.42; r1: 0.01; mean path: 0.01. And I found diversity_z_loss = nan.
- Oh, yes, it is a mistake; I never really read those values, lol.
- Which configuration are you using? And what are your PyTorch and CUDA versions? I sometimes find certain versions have weird behaviors.
InfinityGAN.yaml. I created the environment with the environment.yml in your project: torch=1.6.0, cuda=10.1.
That's weird. Could you run it a second or third time and see whether it happens repeatedly?
You may also disable the diversity_z_loss in the .yaml config to see if that indeed avoids the issue.
If the diversity loss is indeed the issue, you can replace the angular_similarity() calls at lines 238, 240, 251, and 253 in the generator with an L1 or L2 distance, such as:
```python
# L1 distance between the paired local latents
z_dist = (local_latent[0::2] - local_latent[1::2]).abs().mean()
```
Note: We originally used the angular version since the scales of the distances in the local latent space and the feature space are different, and we observed a slight improvement with that design. But the L1/L2 implementation should achieve a similar goal without the numerical issues.
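Alternatively, if you want to keep the angular formulation, the NaN most likely comes from a zero (or near-zero) norm in the cosine term. A rough sketch of a stabilized variant (the eps placement and function shape are my guess, not the repo's exact code):

```python
import math
import torch

def angular_similarity_stable(a, b, eps=1e-7):
    # Compare each pair of samples as flat vectors.
    a = a.flatten(start_dim=1)
    b = b.flatten(start_dim=1)
    # Clamp the norms away from zero: a zero-norm latent otherwise
    # produces a 0/0 division and the NaN propagates into the loss.
    cos = (a * b).sum(dim=1) / (a.norm(dim=1) * b.norm(dim=1)).clamp_min(eps)
    # Clamp into the open interval before acos; acos of 1 + rounding error is NaN.
    ang = torch.acos(cos.clamp(-1 + eps, 1 - eps)) / math.pi
    return 1 - ang  # 1 for parallel vectors, 0 for anti-parallel
```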
OK, I will try it. Thanks!
https://github.com/hubert0527/infinityGAN/blob/86e4715cbea2cc0940954bf646d3957454d4e5ed/models/ops.py#L580 I'm sorry, but I have to ask another question. I've debugged the code down to L580 (the forward function), but I still don't understand how Figure 3 (the padding-free generator) is implemented.
https://github.com/hubert0527/infinityGAN/blob/86e4715cbea2cc0940954bf646d3957454d4e5ed/models/infinitygan_generator.py#L107 As I understand it, it just concatenates the features and the coords along the channel dim. But how does that achieve the goal of seamlessness?
The coords are only used in the structure synthesizer. In the paper's Figure 3, if you use other types of padding, the generator learns to rely on that positional information, and therefore independently generated patches carry different positional information than when synthesizing the whole image at once. The padding-free generator removes that incorrect positional info from padding (no padding in the generator, while still maintaining a large receptive field in the local latent space) and explicitly provides consistent positional info via the coordinates.
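A toy sketch of why this makes patches consistent (not the repo's modules; the sizes are made up and the coord encoding is simplified, the real model processes coordinates more carefully):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Feature channels plus 2 coordinate channels in; no padding anywhere.
conv = nn.Conv2d(3 + 2, 8, kernel_size=3, padding=0)

def synthesize(latent, coords):
    # Positional information enters only through the coord channels.
    return conv(torch.cat([latent, coords], dim=1))

# A shared latent grid and a matching coordinate grid.
latent = torch.randn(1, 3, 16, 16)
xs = torch.arange(16.).view(1, 1, 1, 16).expand(1, 1, 16, 16)
ys = torch.arange(16.).view(1, 1, 16, 1).expand(1, 1, 16, 16)
coords = torch.cat([xs, ys], dim=1)

# Two horizontally overlapping patches, generated independently.
a = synthesize(latent[..., :10], coords[..., :10])
b = synthesize(latent[..., 6:], coords[..., 6:])

# Without padding, the outputs over the shared input region are identical,
# so independently generated patches can be tiled seamlessly.
print(torch.allclose(a[..., 6:], b[..., :2]))  # True
```

With any padding, the two crops would see different (fabricated) borders and the overlap check would fail; that is exactly the inconsistency the padding-free design removes.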
As shown in Fig. 3(b), the feature map seems to be padded by the coords (XYZ), but in the code, the coords and the feature map are concatenated along the channel dim. This is my puzzle. 😭
Sorry, in Fig. 3 I mistook XYZ for the coords; XYZ are actually features. But how do you guarantee that patch A and patch B have the same XYZ? In eq. (1), Zs = Gs(Zg, Zl, C), different patches share the same global latent vector Zg; do all patches also have the same local latent vector Zl? The coords C are different for each local patch; do the coords C of adjacent patches overlap at test time?
Yes, you are mostly correct.

> do all patches also have the same local latent vector Zl?

Remember that Z_local is of shape [B, C, H, W] (the same shape/size as the coordinates, so you can think of them as working in a similar way while carrying different types of information), so adjacent patches have overlapping local latents.
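To make the overlap concrete, a rough sketch (sizes and offsets made up, not the repo's exact code): both patches slice their Z_local from the same underlying grid, so the shared region carries identical values in both.

```python
import torch

# One shared local latent grid for the whole (conceptually unbounded) image.
z_local = torch.randn(1, 4, 32, 32)  # [B, C, H, W], same layout as the coords

# Two adjacent patches whose input windows overlap horizontally.
patch_a = z_local[..., 0:20]
patch_b = z_local[..., 12:32]

# The overlapping region sees the exact same local latents in both patches,
# which (together with the shared Z_global and consistent coords) keeps the
# independently synthesized outputs consistent.
print(torch.equal(patch_a[..., 12:], patch_b[..., :8]))  # True
```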
OK, thanks!
Hi, where is the code implementation of feature unfolding (eq. 3)?
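In the meantime, my understanding is that feature unfolding (as in LIIF) concatenates each location's 3x3 neighborhood along the channel dim; whether eq. (3) matches this exactly is my assumption. A generic sketch:

```python
import torch
import torch.nn.functional as F

def feature_unfold(feat, k=3):
    # Concatenate each spatial location's k x k neighborhood along the
    # channel dim: [B, C, H, W] -> [B, C * k * k, H, W].  Note this
    # zero-pads the border; a padding-free pipeline would crop instead.
    b, c, h, w = feat.shape
    cols = F.unfold(feat, kernel_size=k, padding=k // 2)  # [B, C*k*k, H*W]
    return cols.view(b, c * k * k, h, w)

feat = torch.randn(2, 16, 8, 8)
print(feature_unfold(feat).shape)  # torch.Size([2, 144, 8, 8])
```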