keras-deeplab-v3-plus
Question about batch size
Thank you for your awesome code!!
I'm going to retrain DeepLabV3+ with the Xception backbone, OS 16, on my own dataset (tfrecord). The input size is (512, 1024, 3).
My question is: how large a batch can be trained per GPU with the above settings? (I'm using a Titan Xp, 12 GB.)
In my experiment only a batch size of 1 runs; with 2 or more I get OOM. T.T
Is this normal, or did I do something wrong?
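For reference, here is a minimal memory-probe sketch of the setup described above. It assumes this repo's `model.py` exposes the `Deeplabv3()` factory with `backbone`, `OS`, and `input_shape` keyword arguments; the class count and dummy data are placeholders, only meant for raising the batch size until OOM appears:

```python
# Minimal memory-probe sketch, assuming this repo's model.py provides Deeplabv3().
import numpy as np
import tensorflow as tf
from model import Deeplabv3  # assumed: model.py from this repository

NUM_CLASSES = 19  # placeholder; set to your own dataset's class count

model = Deeplabv3(weights=None,
                  input_shape=(512, 1024, 3),
                  classes=NUM_CLASSES,
                  backbone='xception',
                  OS=16)
model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))

# Probe a given batch size with dummy data; increase it step by step until OOM.
batch_size = 2
x = np.zeros((batch_size, 512, 1024, 3), dtype=np.float32)
y = np.zeros((batch_size, 512, 1024, NUM_CLASSES), dtype=np.float32)
model.fit(x, y, batch_size=batch_size, epochs=1)
```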
On my 1080 Ti it is possible to train with a batch of 6 images of size (512, 512) at OS 16; I don't know what's wrong with your code.
With input size (512, 512, 3) and some changes (removing Horovod and replacing the tfrecord input pipeline with a numpy one), I can fit a batch size of 4,
but it is impossible to reach a batch size of 6 with my current settings.
Can you give me more information about your experiment settings?
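A rough sketch of what a numpy-style replacement for a tfrecord pipeline could look like; the `Sequence` subclass, file lists, and `.npy` layout here are hypothetical illustrations, not the commenter's actual code:

```python
# Hypothetical numpy-based input pipeline fed to Keras instead of tfrecords.
import numpy as np
from tensorflow.keras.utils import Sequence

class NumpyBatchLoader(Sequence):
    """Yields (image, one-hot mask) batches from pre-saved .npy files."""
    def __init__(self, image_paths, mask_paths, batch_size, num_classes):
        self.image_paths = image_paths
        self.mask_paths = mask_paths
        self.batch_size = batch_size
        self.num_classes = num_classes

    def __len__(self):
        return len(self.image_paths) // self.batch_size

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        images = np.stack([np.load(p) for p in self.image_paths[sl]])
        masks = np.stack([np.load(p) for p in self.mask_paths[sl]])
        # One-hot encode integer masks to shape (H, W, num_classes)
        one_hot = np.eye(self.num_classes, dtype=np.float32)[masks]
        return images.astype(np.float32), one_hot
```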
FYI: with input size (512, 512, 3) I can go up to a batch size of 16 using a node with 4 x GTX 1080 Ti; 32 is definitely too much.
In the original paper, the model was trained with batches of size 20.
A Tesla K80 (12 GB) with the Golbstein generator can run with a batch size of 2. I have been working on it for days; I tried TF1 and TF2, I also tried a 1080 Ti, and all drivers are up to date.
@pissw2016 Did you find a way to reach a batch size of 8, which might be somewhat optimal?
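One workaround for approximating a batch size of 8 on a 12 GB card, not mentioned above in this thread, is gradient accumulation: run several small forward/backward passes and apply the summed gradients once. A minimal TF2 sketch under that assumption (note that batch-norm statistics still only see the small physical batch):

```python
# Gradient-accumulation sketch: 4 physical batches of 2 ≈ effective batch of 8.
import tensorflow as tf

ACCUM_STEPS = 4  # number of small batches summed before one optimizer step

optimizer = tf.keras.optimizers.Adam(1e-4)
loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

def train_epoch(model, dataset):
    # dataset is assumed to yield (image, one-hot label) batches of size 2
    accum = [tf.zeros_like(v) for v in model.trainable_variables]
    for step, (x, y) in enumerate(dataset):
        with tf.GradientTape() as tape:
            # Scale the loss so the summed gradient matches one large batch
            loss = loss_fn(y, model(x, training=True)) / ACCUM_STEPS
        grads = tape.gradient(loss, model.trainable_variables)
        accum = [a + g for a, g in zip(accum, grads)]
        if (step + 1) % ACCUM_STEPS == 0:
            optimizer.apply_gradients(zip(accum, model.trainable_variables))
            accum = [tf.zeros_like(v) for v in model.trainable_variables]
```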