Keras-GAN
Number of epochs to train
Hello,
I was using pix2pix to transfer the style of one environment to another, e.g., converting a snowy landscape into the same landscape without snow.
I have a dataset of roughly 8,000 images in 101 groups. I selected one image from each group to be the "vanilla" image, and the network should learn to convert it into the other images of the same group.
I also added noise to the encoder layers of the U-Net, as suggested in "Toward Multimodal Image-to-Image Translation" (https://arxiv.org/abs/1711.11586). This is a controlled noise: a number labeled by humans indicating the weather condition of the image.
```python
def build_generator():
    """U-Net Generator"""

    def conv2d(layer_input, filters, z, f_size=4, bn=True):
        """Downsampling block: the noise vector z is tiled across the
        spatial dimensions and concatenated to the input feature map."""
        z_ = Reshape((1, 1, qtd_noise))(z)
        z_ = Lambda(K.tile, arguments={'n': (1, layer_input.shape[1], layer_input.shape[2], 1)})(z_)
        d0_ = Concatenate(axis=3)([layer_input, z_])
        # Use the `filters` and `f_size` arguments (the original hardcoded
        # Conv2D(5, kernel_size=4, ...), ignoring both parameters).
        d = Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(d0_)
        d = LeakyReLU(alpha=0.2)(d)
        if bn:
            d = BatchNormalization(momentum=0.8)(d)
        return d
```
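The reshape-tile-concatenate step above can be checked in plain NumPy (a minimal sketch; the feature-map size, channel count, and `qtd_noise` value are placeholders, not values from the original code):

```python
import numpy as np

# Hypothetical sizes: a 16x16 feature map with 32 channels, 10-dim noise.
qtd_noise = 10
feature_map = np.zeros((1, 16, 16, 32))  # (batch, H, W, C)
z = np.zeros((1, qtd_noise))             # per-image noise / label vector

# Equivalent of Reshape((1, 1, qtd_noise)) followed by K.tile over H and W:
z_ = z.reshape(1, 1, 1, qtd_noise)
z_ = np.tile(z_, (1, feature_map.shape[1], feature_map.shape[2], 1))

# Equivalent of Concatenate(axis=3): the noise becomes extra channels
# at every spatial location.
d0_ = np.concatenate([feature_map, z_], axis=3)
print(d0_.shape)  # (1, 16, 16, 42)
```

The downstream `Conv2D` then sees `C + qtd_noise` input channels, so the noise conditions every spatial position of the downsampling path.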
After training for more than 500,000 epochs, the result is as below. The output loss was:
[Epoch 604000] [D loss: 0.000411, acc: 100%] [G loss: 1381.413574]
Can you give me some help?
@evertonaleixo 500,000 epochs is way too many. In the original pix2pix paper, they use 200 epochs at most, for small datasets.
If you're not getting reasonable results within 200 epochs, and you're using the base implementation of this code without changes, then most likely your training data is the issue.
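It is also worth checking what the logged counter actually measures. If "[Epoch 604000]" counts generator updates (mini-batch iterations) rather than full passes over the dataset, the effective number of epochs is much smaller. A rough conversion, assuming a batch size of 1 as in the original pix2pix setup (the batch size here is an assumption; substitute your own):

```python
dataset_size = 8000   # images reported above
iterations = 604000   # the "[Epoch 604000]" counter, read as updates
batch_size = 1        # assumption: pix2pix commonly trains with batch size 1

# Full passes over the dataset implied by that many updates.
passes_over_dataset = iterations * batch_size / dataset_size
print(passes_over_dataset)  # 75.5
```

So 604,000 updates at batch size 1 over 8,000 images would be only about 75 true epochs, while 604,000 true epochs would be orders of magnitude beyond the paper's ~200. Either way, the D loss near zero with 100% accuracy suggests the discriminator has completely overpowered the generator, which is consistent with overtraining or problematic data.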