Lukas Mosser

Results: 21 comments of Lukas Mosser

I ran into a similar issue with the loss flatlining to zero. Setting ndf to around ngf/2 or ngf/4 led to stable learning. (That was for 128^2 images.)
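As a rough sketch of what that ratio looks like in a DCGAN-style setup (the `ngf`/`ndf` names follow the common DCGAN convention; the exact base widths here are illustrative assumptions, not the values I used):

```python
import math

def channel_schedule(base, image_size=128):
    """Feature-map counts per conv block, doubling as resolution halves down to 4x4."""
    n_blocks = int(math.log2(image_size // 4))
    return [base * 2 ** i for i in range(n_blocks)]

ngf = 64          # generator base width (assumed value)
ndf = ngf // 4    # deliberately weaker discriminator, as described above

print(channel_schedule(ndf))  # discriminator widths: [16, 32, 64, 128, 256]
```

The point is only that the discriminator's capacity is scaled down relative to the generator's, which in my case kept its loss from collapsing to zero.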

@rjpeart Have you tried using any other "tricks" like label smoothing or injecting white noise into the input of the discriminator? That also helped stabilise training for me and is...
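A minimal sketch of both tricks in plain Python (the 0.9 smoothing target and 0.1 noise scale are illustrative assumptions, not tuned values):

```python
import random

REAL_LABEL = 0.9      # one-sided label smoothing: real targets at 0.9 instead of 1.0
NOISE_SIGMA = 0.1     # std-dev of the white noise added to the discriminator's input

def smoothed_labels(n, real):
    """Targets for the discriminator; fake samples keep a hard 0 label."""
    return [REAL_LABEL if real else 0.0] * n

def with_input_noise(batch, sigma=NOISE_SIGMA):
    """Add Gaussian white noise to every value before the discriminator sees it."""
    return [[x + random.gauss(0.0, sigma) for x in sample] for sample in batch]
```

Both changes weaken the discriminator's confidence, which in my experience makes its gradients more informative for the generator early in training.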

@rjpeart glad I could help! Also interesting that you add the white noise after the first LeakyReLU; I added it before the first convolutional layer and it worked as well, although...

@kubmin you probably didn't import the dpnn Torch package: https://github.com/Element-Research/dpnn/blob/master/WhiteNoise.lua Hope that helps!

@plugimi @rjpeart I'm sure you can work out the pattern. The computational side is another matter: if you manage to fit your models on a GPU, you may have to use...

Python 2.7, compiled using Windows Build Tools 2017. Installed via `pip install` and via `python setup.py install`. Neither works. Any idea why?

Yes, I understand. I just wanted to point out that it doesn't seem to converge. After 2000 iterations: ![200](https://user-images.githubusercontent.com/4195648/34341482-56435620-e990-11e7-9ac8-c913aaad30b7.png) The generator's loss is very high (18)...

I tried an 8-layer MLP for each network, as suggested in the paper, with a learning rate of 1e-5. Still no convergence. It would be nice to get a working version. If...

One idea could be to have a secret test well. Another could be to perform the prediction on one held-out well, retaining the others for training purposes, and then...
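The leave-one-well-out scheme could be sketched as follows (the well names are placeholders, not actual contest data):

```python
def leave_one_well_out(wells):
    """Yield (training_wells, held_out_well) pairs, holding out each well in turn."""
    for i, held_out in enumerate(wells):
        yield [w for j, w in enumerate(wells) if j != i], held_out

wells = ["WELL_A", "WELL_B", "WELL_C"]  # placeholder names
for train, test in leave_one_well_out(wells):
    print(train, "->", test)
```

Averaging the score over all held-out wells would give a less noisy estimate than a single secret test well.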

Excellent visualization! I agree; it seems I've seen even higher score variation when running any of my code.