
Low GPU usage and slow training

Open JeffSaa opened this issue 5 years ago • 5 comments

Hi, I have been training for 4 or 5 days on a good PC with tf-gpu, but it is very slow and GPU usage is low. I'm training with the bsd200+yang x8 dataset for better results, as you @jiny2001 suggested in another thread, but the PSNR values so far are bad.

[Screenshot from 2019-10-21 12-55-45]

Is something wrong with my setup?

JeffSaa avatar Oct 21 '19 18:10 JeffSaa

Hi, I think most of the CPU consumption comes from reading data at each training step.

Please follow the [Speeding up training] section of the README. You can build the batch images before training. Building the images takes some time the first time (3-4 hours in my case), but training will be much faster once it's done.
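
For reference, this is the general idea, not this repository's exact code: feed the pre-built patches through tf.data with prefetching so the CPU-side input pipeline does not stall the GPU. The array names and batch size below are placeholders.

```python
import tensorflow as tf

def make_dataset(lr_patches, hr_patches, batch_size=20):
    """Sketch of a prefetching input pipeline over pre-built LR/HR patches."""
    ds = tf.data.Dataset.from_tensor_slices((lr_patches, hr_patches))
    ds = ds.shuffle(buffer_size=10000)
    ds = ds.repeat()
    ds = ds.batch(batch_size)
    # Overlap data loading on the CPU with compute on the GPU.
    ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
    return ds

# Usage (TF 1.x style):
# iterator = make_dataset(lr_patches, hr_patches).make_one_shot_iterator()
# lr_batch, hr_batch = iterator.get_next()
```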

I don't know why the performance is so bad. If you use 'set5' as the test data, it should reach at least 27 dB PSNR... Hmm...

jiny2001 avatar Oct 22 '19 04:10 jiny2001

As you can see from the dataset name, it has a "_y" suffix, which means convert_y.py was already run. I also passed the --build_batch True option in the train.py command.

JeffSaa avatar Oct 22 '19 14:10 JeffSaa

Hmm... There are a lot of mysterious things here. As you said, it looks like the GPU is not being used at all, and the PSNR is very low. Which TensorFlow version are you using? I'll try it in my environment.
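
In the meantime, a quick way to confirm whether TensorFlow 1.x sees the GPU at all (standard tf.test / session-config calls, nothing specific to this repo):

```python
import tensorflow as tf

# Should print True when tensorflow-gpu and CUDA/cuDNN are set up correctly.
print(tf.test.is_gpu_available())
# Should print something like '/device:GPU:0' (an empty string means CPU only).
print(tf.test.gpu_device_name())

# Run a small op with device placement logging to see where it actually lands.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    print(sess.run(tf.reduce_sum(tf.matmul(a, b))))
```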

jiny2001 avatar Oct 23 '19 15:10 jiny2001

v1.14.0

JeffSaa avatar Oct 23 '19 18:10 JeffSaa

Please, how can I improve the SSIM value? I'm using my own dataset, but the maximum SSIM I get is 0.84, and I need to improve it. Which parameters are responsible for improving the SSIM? Thank you a lot.
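
While tuning, it can help to measure SSIM (and PSNR) on your own validation pairs directly with TensorFlow's built-in ops. A minimal sketch; the file names are placeholders for one of your output / ground-truth pairs:

```python
import tensorflow as tf

# Hypothetical file names; both images must have the same height and width.
sr = tf.image.decode_png(tf.read_file("output.png"), channels=1)
hr = tf.image.decode_png(tf.read_file("ground_truth.png"), channels=1)

# Convert to float in [0, 1] so max_val=1.0 is correct below.
sr = tf.image.convert_image_dtype(sr, tf.float32)
hr = tf.image.convert_image_dtype(hr, tf.float32)

ssim = tf.image.ssim(sr, hr, max_val=1.0)
psnr = tf.image.psnr(sr, hr, max_val=1.0)

with tf.Session() as sess:
    print(sess.run([ssim, psnr]))
```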

hala3 avatar Oct 25 '19 13:10 hala3