PhotographicImageSynthesis
Out-of-memory issue
I have 4 GPUs with 8 GB each. I run out of memory when I train the 1024p demo. Could you please tell me how to deal with it?
I have a Titan X (Pascal), which has 12 GB of GPU memory. My code only uses one GPU. You might reduce the number of feature channels so that the model fits into your memory, but I have not tried that.
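To make the suggestion concrete, here is a minimal sketch of what "reduce the number of feature channels" could look like. The per-resolution channel widths below follow the schedule described in the CRN paper (1024 channels at the coarse levels, tapering off at high resolution); the `scale` factor and the `scaled_channels` helper are my own illustration, not part of the released code.

```python
# Hypothetical sketch: shrink the per-resolution channel widths of the
# cascaded refinement modules so the model fits in less GPU memory.
# The base widths are an assumption based on the CRN paper's schedule.

def scaled_channels(scale=0.5):
    # resolution (height) -> feature channels at that refinement module
    base = {4: 1024, 8: 1024, 16: 1024, 32: 1024,
            64: 1024, 128: 512, 256: 512, 512: 128, 1024: 32}
    # keep at least 8 channels per module after scaling
    return {res: max(8, int(ch * scale)) for res, ch in base.items()}

print(scaled_channels(0.5)[4])  # 512 channels instead of 1024
```

Halving the widths roughly quarters the convolution cost and halves the activation memory, at some cost in output quality, so it is worth trying `scale=0.5` before anything more drastic.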
@CQFIO did you train this model using only one Titan X (Pascal)? Also, from your code, is the batch size for the 1024p model 1?
@jkschin Yes, I trained on a single Titan X (Pascal) GPU. I always use a batch size of 1.
@CQFIO I'm using your demo_256p.py as-is on the Cityscapes data. I scaled the images down from 1024x2048 to 256x512, changed the paths, and simply set is_training to True. My 1060 with 6 GB of memory throws "Out of Memory".
It doesn't make sense to me that a Titan X with 12 GB can handle 1024x2048 but a 1060 with 6 GB can't handle 256x512. Any insight to share?
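A rough back-of-envelope calculation suggests why even 256p training is heavy. This is an assumption-laden sketch, not a profile: it counts only float32 intermediate feature maps for one forward pass at batch size 1, using a hypothetical CRN-style channel schedule. Training roughly doubles this for the backward pass, and the VGG-19 perceptual-loss network, optimizer state, and cuDNN workspaces all come on top, which is why a 6 GB card can still run out at 256x512.

```python
# Back-of-envelope activation memory for a 256x512 forward pass,
# float32 (4 bytes), batch size 1. Channel widths are assumptions.

def activation_bytes(height, width, channels, dtype_bytes=4):
    return height * width * channels * dtype_bytes

# hypothetical per-resolution channel widths (CRN-style schedule)
schedule = [(4, 8, 1024), (8, 16, 1024), (16, 32, 1024),
            (32, 64, 1024), (64, 128, 1024), (128, 256, 512),
            (256, 512, 512)]

total = sum(activation_bytes(h, w, c) for h, w, c in schedule)
print(total / 2**30)  # GiB of intermediate maps, forward pass only
```

Note also that TensorFlow builds the whole training graph up front and, by default, grabs most of the GPU's free memory at session creation, so the failure point depends on the total graph (generator + VGG losses + gradients), not just the image resolution.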
@jkschin I have the same problem: the model fails with the same "Out of Memory" error when demo_256p.py is run. Did you fix it, or did you move to a GPU with more memory?
@yaxingwang You need 12 GB of GPU memory to run the code.
@CQFIO Thanks. I can run your code by using only the output loss instead of the six losses on the VGG layers, since VGG needs a lot of memory. I'm confused about why tf.reduce_min is used in content_loss and multiplied by 0.999. I am running demo_256p.py.
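My reading of that line (treat this as an interpretation, not an authoritative restatement of the code): with the diversity objective, the generator emits several candidate images, and the loss mostly follows the best candidate via the min, while a small mean term keeps some gradient flowing to the other candidates. The 0.999/0.001 split appears in the released code; the helper below is just an illustration of the idea.

```python
# Sketch of a "best-candidate" diversity loss: mostly reduce_min over
# the per-candidate losses, plus a tiny mean term so the non-best
# candidates still receive gradient. Weights mirror the 0.999 factor.

def diversity_loss(per_candidate_losses):
    best = min(per_candidate_losses)
    mean = sum(per_candidate_losses) / len(per_candidate_losses)
    return 0.999 * best + 0.001 * mean

print(diversity_loss([3.0, 1.0, 2.0]))  # dominated by the best (1.0)
```

With a single output image, min and mean coincide, so the weighting is harmless there.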
```
1 1841 86.55 20.67 18.13 16.67 16.57 16.30 15.53 30.11
1 1842 86.55 17.14 13.90 12.61 14.12 15.32 15.14 29.37
1 1843 86.56 20.29 15.57 13.95 14.83 15.16 15.28 35.23
Traceback (most recent call last):
  File "demo_1024p.py", line 122, in
```
My GPU is a Tesla K80 with 12 GB of memory. @CQFIO can you help me fix this memory problem?
@yaxingwang, could you please say more about which parts you changed inside demo_256p.py to launch it? I have 500 images for my Label256Full and RGB256Full sets at 256x256 resolution.