Qifeng Chen
It is also available in MatConvNet. http://www.vlfeat.org/matconvnet/
I have a similar issue. I can only run the program in CPU mode because it requires about 18 GB of memory.
I only tried predict.py on a single image. I get an "out of memory" error with a Titan X.
@fyu I need to install cuDNN to make it work.
@manuelschmidt Depending on the model of GPU, the dilation network needs a different amount of memory (which is quite strange to me). There is a simple solution: just make the input...
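The comment is cut off, but assuming the suggested workaround is to shrink the input before prediction, a minimal sketch could look like this. The function `run_network` and the scale factor are placeholders, not part of the repo:

```python
import cv2

def predict_downscaled(image, run_network, scale=0.5):
    # Illustrative only: downscale the input so the network's
    # activations fit in GPU memory, then upscale the prediction.
    h, w = image.shape[:2]
    small = cv2.resize(image, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    output = run_network(small)
    return cv2.resize(output, (w, h), interpolation=cv2.INTER_LINEAR)
```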
I finally made it work, but only in TF 1.2. I could not make it run in TF 1.7.
Basically, you need to change the lines with comments like "training label", "training image", and "test label". Sorry that I have not reorganized this part more clearly.
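For reference, the edits might look roughly like the sketch below; the variable names and directory layout are illustrative assumptions, not the actual code in the script:

```python
import glob

# Hypothetical example: point these at your own dataset. The inline
# comments mirror the markers mentioned above.
train_images = sorted(glob.glob('data/train/images/*.png'))  # "training image"
train_labels = sorted(glob.glob('data/train/labels/*.png'))  # "training label"
test_labels  = sorted(glob.glob('data/test/labels/*.png'))   # "test label"
```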
If you want to train a model on the Cityscapes dataset, you can download the data from https://www.cityscapes-dataset.com/
I have a Titan X (Pascal), which has 12 GB of GPU memory. My code only uses one GPU. You might reduce the number of feature channels so that the model can...
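As a rough sketch of what reducing the feature channels could look like, here is a hypothetical width-multiplier pattern in TF 1.x style (matching the versions discussed above); `conv_block` and `width` are illustrative names, not from the repo:

```python
import tensorflow as tf  # TF 1.x

def conv_block(x, base_channels, width=1.0, name=None):
    # Illustrative: scale the channel count by `width`; e.g. width=0.5
    # roughly halves the memory used by this layer's activations.
    channels = max(1, int(base_channels * width))
    return tf.layers.conv2d(x, channels, 3, padding='same',
                            activation=tf.nn.relu, name=name)
```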
@jkschin Yes, I use a single Titan X (Pascal) GPU for training. I always use batch size 1.