
Bugs and suggestions

Open Kjos opened this issue 6 years ago • 2 comments

  • The initialization training loop for G ("learn G") currently never breaks (see the sketch below).
  • Image output is overwritten every step within an epoch, because the filename only encodes the epoch:

        if (epoch != 0) and (epoch % 10 == 0):
            tl.vis.save_images(fake_hr_patchs.numpy(), [ni, ni], save_dir_gan + '/train_g_init_{}.png'.format(epoch))

Maybe this is better:

    if (epoch != 0) and (epoch % 10 == 0) and (step % 10 == 0):
        tl.vis.save_images(fake_hr_patchs.numpy(), [ni, ni], save_dir_gan + '/train_g_init_{}_{}.png'.format(epoch, step))

This applies to both the G initialization phase and the main training phase.
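
For the first point, a rough sketch of what a bounded initialization loop could look like; n_epoch_init, train_ds, G and g_optimizer_init are assumed names and may not match the actual train.py:

    import tensorflow as tf
    import tensorlayer as tl

    # Sketch only: bounding the G pre-training with range() guarantees the loop terminates,
    # so no explicit break condition is needed.
    for epoch in range(n_epoch_init):
        for step, (lr_patchs, hr_patchs) in enumerate(train_ds):
            with tf.GradientTape() as tape:
                fake_hr_patchs = G(lr_patchs)
                mse_loss = tl.cost.mean_squared_error(fake_hr_patchs, hr_patchs, is_mean=True)
            grad = tape.gradient(mse_loss, G.trainable_weights)
            g_optimizer_init.apply_gradients(zip(grad, G.trainable_weights))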

  • You can add hr_patch = tf.image.random_flip_up_down(hr_patch) to _map_fn_train to augment the training data further (see the sketch after this list).
  • I got a complaint about missing random and time, so I now re-import them after all the other imports, which strangely fixes it.
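
For the augmentation point, a sketch of _map_fn_train with the extra flip added; the 384x384 crop, the [-1, 1] scaling and the 96x96 LR size are assumptions and may differ from the actual train.py:

    import tensorflow as tf

    def _map_fn_train(img):
        hr_patch = tf.image.random_crop(img, [384, 384, 3])    # random HR patch (assumed size)
        hr_patch = hr_patch / (255. / 2.) - 1.                  # scale pixels to [-1, 1]
        hr_patch = tf.image.random_flip_left_right(hr_patch)    # existing augmentation
        hr_patch = tf.image.random_flip_up_down(hr_patch)       # the suggested extra flip
        lr_patch = tf.image.resize(hr_patch, size=[96, 96])     # downscale to the LR input (assumed size)
        return lr_patch, hr_patch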

Kjos · Jul 02 '19 20:07

My CPU-side RAM fills up completely, and training crashes when I do other things on the machine. I believe

train_ds = train_ds.prefetch(buffer_size=4096)

can be lowered to fix this, but it's not much of an issue for me.
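
For reference, a minimal version of that change; prefetch counts dataset elements at that point in the pipeline (whole batches if it comes after .batch()), so 4096 of them can easily exhaust host RAM:

    # keep only a couple of batches in flight instead of 4096
    train_ds = train_ds.prefetch(buffer_size=2)
    # or let TensorFlow tune it: train_ds.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)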

Kjos · Jul 02 '19 21:07

I see. This approach could help: use the method in the following link to load images for each batch instead of reading the entire dataset into RAM at the beginning.

https://github.com/tensorlayer/dcgan/blob/master/data.py#L28

Also reduce the prefetch size to 2 and remove the shuffle line.
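
A sketch of that idea, assuming the HR images sit in a folder of PNG files; the folder name, patch size, scaling and batch size are placeholders, not the exact code from the linked data.py:

    import tensorflow as tf

    def _load_and_map_fn(path):
        img = tf.io.read_file(path)                             # read one file lazily
        img = tf.image.decode_png(img, channels=3)              # decode only when the batch needs it
        img = tf.cast(img, tf.float32)
        hr_patch = tf.image.random_crop(img, [384, 384, 3])     # assumed patch size
        hr_patch = hr_patch / (255. / 2.) - 1.                   # scale to [-1, 1]
        lr_patch = tf.image.resize(hr_patch, size=[96, 96])      # assumed LR size
        return lr_patch, hr_patch

    file_paths = tf.data.Dataset.list_files('DIV2K_train_HR/*.png')   # assumed image folder
    train_ds = (file_paths
                .map(_load_and_map_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
                .batch(16)                                        # assumed batch size
                .prefetch(buffer_size=2))                         # small prefetch, no large shuffle buffer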

zsdonghao · Jul 30 '19 17:07