
Code and training issues

Open Amo5 opened this issue 3 years ago • 11 comments

https://github.com/hubert0527/infinityGAN/blob/86e4715cbea2cc0940954bf646d3957454d4e5ed/train.py#L365-L366

  1. Is this a mistake in L365-L366?
  2. I get NaN in my training logs: WARNING:root:NaN or Inf found in input tensor. [*] EXP: InfinityGAN; d: nan; g: 2.42; r1: 0.01; ; mean path: 0.01. I also found diversity_z_loss = nan.

Amo5 avatar Feb 15 '22 03:02 Amo5

  1. Oh yes, it is a mistake. I never really read those values, lol.
  2. Which configuration are you using? And what are your PyTorch and CUDA versions? I sometimes find that certain versions have weird behaviors.

hubert0527 avatar Feb 15 '22 06:02 hubert0527

InfinityGAN.yaml. I created the environment with the environment.yml in your project: torch=1.6.0, cuda=10.1.

Amo5 avatar Feb 15 '22 07:02 Amo5

That's weird. Could you run it a second or third time and see if that repeatedly happens?

You may also disable the diversity_z_loss in the .yaml config to see whether that avoids the issue. If the diversity loss is indeed the problem, you can replace the angular_similarity() calls on lines 238, 240, 251, and 253 of the generator with an L1 or L2 distance, such as:

# L1 distance
z_dist = (local_latent[0::2] - local_latent[1::2]).abs().mean()

Note: We originally used the angular one since the scale of the distance in the local latent space and in the feature space is different, and we observed a slight improvement with that design. But the L1/L2 implementation should achieve a similar goal without numerical issues.
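
For concreteness, a minimal sketch of what that swap could look like; the angular_distance() body below is only a hypothetical stand-in for the repo's angular_similarity(), and only the L1 line mirrors the snippet above:

import math
import torch
import torch.nn.functional as F

def angular_distance(a, b, eps=1e-7):
    # Hypothetical stand-in for the repo's angular_similarity(): cosine similarity
    # mapped to an angle. Clamping keeps acos() and its gradient stable near
    # |cos| = 1, which is the usual source of NaNs in this kind of term.
    cos = F.cosine_similarity(a.flatten(1), b.flatten(1), dim=1)
    return torch.acos(cos.clamp(-1 + eps, 1 - eps)) / math.pi

local_latent = torch.randn(8, 4, 16, 16)  # paired samples: even/odd along the batch dim

# Original angular design (scale-invariant, but numerically delicate).
z_dist_angular = angular_distance(local_latent[0::2], local_latent[1::2]).mean()

# Numerically safe replacements that serve the same purpose.
z_dist_l1 = (local_latent[0::2] - local_latent[1::2]).abs().mean()          # L1
z_dist_l2 = (local_latent[0::2] - local_latent[1::2]).pow(2).mean().sqrt()  # L2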

hubert0527 avatar Feb 15 '22 10:02 hubert0527

OK, I will try it. Thanks!

Amo5 avatar Feb 15 '22 11:02 Amo5

https://github.com/hubert0527/infinityGAN/blob/86e4715cbea2cc0940954bf646d3957454d4e5ed/models/ops.py#L580 I'm sorry, but I have to ask another question. I've debugged the code up to L580 (the forward function), but I still don't understand how Figure 3 (the padding-free generator) is implemented.

https://github.com/hubert0527/infinityGAN/blob/86e4715cbea2cc0940954bf646d3957454d4e5ed/models/infinitygan_generator.py#L107 As I understand it, this just concatenates the features and the coords along the channel dim. But how does that make the patches seamless?

Amo5 avatar Feb 15 '22 14:02 Amo5

The coords are only used in the structure synthesizer. As in Figure 3 of the paper, if you use other types of padding, the generator learns to rely on that positional information, so independently generated patches carry different positional information than an image synthesized all at once. The padding-free generator removes that incorrect positional information from padding (there is no padding in the generator, but we still make sure it maintains a large receptive field in the local latent space) and explicitly provides consistent positional information via the coordinates.
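
A toy sketch of the contrast, in case it helps; this is not the repository's actual module, and the layer sizes and names are made up:

import torch
import torch.nn as nn

class PaddingFreeBlock(nn.Module):
    # With padding=0 the conv never sees fabricated border values, so the only
    # positional signal available is the coordinate channels concatenated onto
    # the features. If the coords are computed in global scene coordinates,
    # every patch covering the same region receives identical positional info.
    def __init__(self, feat_ch, coord_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(feat_ch + coord_ch, out_ch, kernel_size=3, padding=0)

    def forward(self, feat, coords):
        x = torch.cat([feat, coords], dim=1)  # channel-wise concat
        return self.conv(x)  # spatial size shrinks by 2; the input is made larger to compensate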

hubert0527 avatar Feb 15 '22 22:02 hubert0527

As shown in Fig. 3(b), the feature map seems to be padded with the coords (XYZ), but in the code the coords and the feature map are concatenated along the channel dim. This is my puzzle. 😭

Amo5 avatar Feb 16 '22 02:02 Amo5

> As shown in Fig. 3(b), the feature map seems to be padded with the coords (XYZ), but in the code the coords and the feature map are concatenated along the channel dim. This is my puzzle. 😭

Sorry, in Fig. 3 I mistook the XYZ for coords; the XYZ are actually features. But how do you guarantee that patches A and B have the same XYZ? In Eq. (1), Zs = Gs(Zg, Zl, C): different patches share the same global latent vector Zg, but do all patches also share the same local latent vector Zl? The coords C are different for each local patch; do the coords C of adjacent patches overlap at test time?

Amo5 avatar Feb 16 '22 03:02 Amo5

Yes, you are mostly correct.

> do all patches also share the same local latent vector Zl?

Note that Z_local has shape [B, C, H, W] (the same shape/size as the coordinates, so you can think of it as working in a similar way, just carrying a different type of information), so adjacent patches have overlapping Z_local as well.
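
As an illustration (the sizes and names are made up, not the repository's code), adjacent patches are crops of one shared local latent grid and one shared coordinate grid, so the overlapping region is identical:

import torch

full_z_local = torch.randn(1, 8, 64, 64)                       # [B, C, H, W] for the whole scene
ys, xs = torch.meshgrid(torch.arange(64.), torch.arange(64.))
full_coords = torch.stack([ys, xs]).unsqueeze(0)                # [B, 2, H, W]

patch, stride = 40, 24  # neighboring patches overlap by 16 columns

z_a, c_a = full_z_local[..., :patch], full_coords[..., :patch]
z_b, c_b = full_z_local[..., stride:stride + patch], full_coords[..., stride:stride + patch]

# The shared columns carry identical Z_local and identical coords,
# so the generator produces consistent content there.
assert torch.equal(z_a[..., stride:], z_b[..., :patch - stride])
assert torch.equal(c_a[..., stride:], c_b[..., :patch - stride])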

hubert0527 avatar Feb 16 '22 04:02 hubert0527

> Yes, you are mostly correct.
>
> > do all patches also share the same local latent vector Zl?
>
> Note that Z_local has shape [B, C, H, W] (the same shape/size as the coordinates, so you can think of it as working in a similar way, just carrying a different type of information), so adjacent patches have overlapping Z_local as well.

OK, thanks!

Amo5 avatar Feb 16 '22 05:02 Amo5

Hi, where is the code implementation of feature unfolding (Eq. (3))?
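
For reference, "feature unfolding" in the LIIF sense concatenates each location's 3x3 neighborhood of features along the channel dim, which torch.nn.functional.unfold gives directly; this is only a generic sketch and may not match how the repository implements Eq. (3):

import torch
import torch.nn.functional as F

feat = torch.randn(1, 16, 32, 32)  # [B, C, H, W]
B, C, H, W = feat.shape

# Gather every 3x3 neighborhood (zero-padded at the border) and stack it along
# the channel dim, turning C channels into C * 9 channels per location.
unfolded = F.unfold(feat, kernel_size=3, padding=1)  # [B, C * 9, H * W]
unfolded = unfolded.view(B, C * 9, H, W)              # [B, C * 9, H, W]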

Amo5 avatar Mar 01 '22 08:03 Amo5