SRGAN
Bugs and suggestions
- The "initialize learning G" loop currently never breaks.
- The image output is overwritten every step within an epoch:

  ```python
  if (epoch != 0) and (epoch % 10 == 0):
      tl.vis.save_images(fake_hr_patchs.numpy(), [ni, ni], save_dir_gan + '/train_g_init_{}.png'.format(epoch))
  ```

  Maybe this is better:

  ```python
  if (epoch != 0) and (epoch % 10 == 0) and (step % 10 == 0):
      tl.vis.save_images(fake_hr_patchs.numpy(), [ni, ni], save_dir_gan + '/train_g_init_{}_{}.png'.format(epoch, step))
  ```
  The same applies to both the G-initialization phase and the GAN training phase.
- You can add `hr_patch = tf.image.random_flip_up_down(hr_patch)` to `_map_fn_train` to augment the sample size further.
- Got a complaint about missing `random` and `time`, so I now re-import them after all the other imports, which strangely fixes it.
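For reference, `tf.image.random_flip_up_down` flips a patch along its height axis with probability 0.5. A minimal NumPy sketch of the equivalent operation (the function name and `rng` parameter here are illustrative, not part of the SRGAN script):

```python
import numpy as np

def random_flip_up_down(patch, rng):
    # Sketch of tf.image.random_flip_up_down for an H x W x C array:
    # with probability 0.5, reverse the rows (the height axis).
    if rng.random() < 0.5:
        return patch[::-1, :, :]
    return patch

rng = np.random.default_rng(0)
patch = np.arange(12).reshape(2, 2, 3)
flipped = random_flip_up_down(patch, rng)
```

Since the flip is applied independently per patch inside the `tf.data` map function, it effectively doubles the diversity of training patches without storing any extra images.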
My CPU-side RAM fills up completely, and training crashes when I do other work at the same time. I believe

```python
train_ds = train_ds.prefetch(buffer_size=4096)
```

can be lowered to fix this, but it's not much of an issue for me.
I see, this could help: use the method in the following link to load the images for each batch, instead of reading the entire dataset into RAM at the start:
https://github.com/tensorlayer/dcgan/blob/master/data.py#L28
Also reduce the prefetch size to 2 and remove the shuffle line.
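A minimal sketch of that per-batch loading idea (the helper name `iterate_minibatches` and the `load_fn` callback are illustrative assumptions, not the API of the linked `data.py`): only `batch_size` files are decoded per step, so host RAM holds one batch of images rather than the whole dataset.

```python
def iterate_minibatches(file_paths, batch_size, load_fn):
    """Yield lists of decoded images, reading only `batch_size` files
    per step instead of materializing the whole dataset in RAM."""
    for i in range(0, len(file_paths), batch_size):
        yield [load_fn(p) for p in file_paths[i:i + batch_size]]

# Example with a stub loader; in the real pipeline, load_fn would decode
# an image file (e.g. with PIL or tf.io.decode_png).
paths = [f"img_{k}.png" for k in range(5)]
batches = list(iterate_minibatches(paths, batch_size=2, load_fn=lambda p: p.upper()))
```

With loading deferred like this, `prefetch(2)` only stages two batches ahead of the training step, which keeps the memory footprint small.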