dcscn-super-resolution
Low GPU usage and slow training
Hi, I have been training for 4 or 5 days on a good PC with tensorflow-gpu, but it is very slow and GPU usage is low. I'm training with the bsd200+yang x8 dataset for better results, as you @jiny2001 suggested in another thread, but the PSNR values so far are bad.
Am I doing something wrong?
Hi, I think most of the CPU consumption comes from reading data at each training step.
Please follow the [Speeding up training] section of the README. You can build the batch images before training. Building them takes some time the first run (3-4 hours in my case), but training is much faster once it's done.
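For reference, the two-step workflow above might look like this. The `--build_batch True` flag is mentioned later in this thread; the other flags and the script invocation are assumptions based on the README, so check your version for the exact options:

```shell
# One-time pass: pre-build the batch images so each training step
# reads from the cache instead of decoding images on the fly.
# (Flags other than --build_batch are assumed; see the README's
# "Speeding up training" section for the exact options.)
python train.py --build_batch True --dataset bsd200

# Subsequent runs reuse the pre-built batches and should run much faster.
python train.py --dataset bsd200
```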
I don't know why the performance is so bad. If you use 'set5' as the test data, PSNR should reach at least 27... Hmm...
As you can see from the dataset name, it has a "_y" suffix, which means convert_y.py was already run on it. I'm also passing the --build_batch True option in the train.py command.
Hmm... There are lots of mysterious things here. As you said, it looks like the GPU is not being used at all, and the PSNR is also very low. What TensorFlow version are you using? I'll try it in my environment.
v 1.14.0
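A quick way to check whether that TF 1.14 build actually sees a GPU (`tf.test.is_gpu_available` exists in 1.x; it was deprecated in 2.x). This is just a diagnostic sketch, it needs a machine with the GPU and drivers installed:

```shell
# Should print True and log the detected GPU device.
# If it prints False, the CPU-only "tensorflow" package is likely
# installed instead of "tensorflow-gpu", which would explain both
# the low GPU usage and the slow training.
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```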
Please, how can I improve the SSIM value? I'm using my own dataset, but the maximum SSIM I reach is 0.84. Which parameters are responsible for improving SSIM? Thank you a lot.
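When tuning for SSIM it helps to measure it yourself on the Y channel the model actually predicts. Below is a minimal single-window SSIM sketch in NumPy for a quick sanity check; note the standard metric (e.g. `skimage.metrics.structural_similarity`) averages SSIM over local Gaussian windows, so values from this global approximation will differ somewhat:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM computed over the whole image as one window.

    The reference metric uses local 11x11 Gaussian windows and averages
    the per-window scores; this global version is only a rough check.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    # Standard stabilizing constants from the SSIM paper.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ground_truth = rng.integers(0, 256, (64, 64)).astype(np.float64)
    # A degraded copy scores below 1.0; an identical copy scores 1.0.
    degraded = np.clip(ground_truth + rng.normal(0, 15, (64, 64)), 0, 255)
    print(global_ssim(ground_truth, ground_truth))
    print(global_ssim(ground_truth, degraded))
```

Comparing this score between your model's output and a plain bicubic upscale of the same image tells you whether the network is actually adding structural detail or whether the 0.84 ceiling comes from the data itself.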