Muhammad Rizqi Nur

Results 139 comments of Muhammad Rizqi Nur

You're right. The gradient penalty is now stable and the generator loss isn't changing so fast. I prefer to just use `tensor.norm`, though:
```
grad_norm = gradients.view(gradients.shape[0], -1).norm(2, dim=-1)
gradient_penalty...
```
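The norm line above fits into the standard WGAN-GP penalty like this. A minimal sketch, assuming a `critic` that maps a batch of images to a batch of scores; the function name and signature are illustrative, not from the original comment:

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP gradient penalty (sketch). `critic`, `real`, `fake` are assumed shapes (N, C, H, W)."""
    batch_size = real.shape[0]
    # Random per-sample interpolation between real and fake images
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Per-sample gradients of the critic's scores w.r.t. the interpolations
    gradients = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    # Flatten each sample's gradient and take its L2 norm, as in the comment above
    grad_norm = gradients.view(gradients.shape[0], -1).norm(2, dim=-1)
    # Penalize deviation of the norm from 1
    return ((grad_norm - 1) ** 2).mean()
```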

Ok, well, this adjustment actually adds artifacts to the generated images? I'm not sure why. With adjustment: ![img_generatori_iter_800](https://github.com/Zeleni9/pytorch-wgan/assets/1442761/2a9d968c-7f1f-4475-9878-d07c02f2507d) Without: ![img_generatori_iter_800 (1)](https://github.com/Zeleni9/pytorch-wgan/assets/1442761/0487c18f-3529-4f60-804a-77456282a989)

Tried your code, and yeah, it has artifacts too. I wonder what's wrong. Perhaps this is one of those "if it's not broken, don't fix it" moments.

Took a look at the official WGAN GP code and [it doesn't have batch-norm](https://github.com/igul222/improved_wgan_training/blob/fa66c574a54c4916d27c55441d33753dcc78f6bc/gan_mnist.py#L61C1-L61C1). So I removed it and the artifacts are gone. It doesn't seem to be better than...

Sorry, I ran out of Colab GPU hours. Maybe tomorrow. Btw, I posted the pictures one by one because downloading the grid doesn't work. Might be a bug with the webui...

Yes, you can run it on Colab. Though it will increase RAM usage, so it may crash on big checkpoints (unpruned ones, around 7 GB). Even with pruned fp32 checkpoints...

Or maybe what I need is just a function that "returns" the objective value, so I can run the loop manually and "return" the value to the two studies.

Oh wow, I just realized this now exists as `study.ask` and `study.tell`! Guess I'll just try this.