
CPU vs. GPU speed

bfurtwa opened this issue 7 years ago · 3 comments

I'm training with an i5-2500 quad-core processor. It does about 4 it/s, roughly 1:15 h per epoch. Approximately how fast would the training be on a GPU?

bfurtwa · Sep 20 '18 14:09
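As an aside on the CPU vs. GPU question: DeepRTplus is built on PyTorch, so you can check directly which device training would run on. The snippet below is an illustrative sketch, not part of the DeepRTplus code.

```python
import torch

# Report whether PyTorch can see a CUDA-capable GPU on this machine.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No GPU detected; training will run on the CPU.")

# A model or tensor is moved to the chosen device with .to(device),
# e.g. model.to(device) or batch.to(device).
```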

Hi, I did some tests on my laptop:

- i7-7700HQ CPU: 4.72 it/s, about 40 s/epoch
- GTX 1070 GPU: 35.50 it/s, about 5 s/epoch

This was tested on the HeLa data (mod.txt). What is your training file?

On CPU you can set the batch size much larger, e.g. BATCH_SIZE = 2000 (if you have enough memory), which will be much faster (see the sketch below).

horsepurve · Sep 20 '18 18:09
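As a minimal illustration of the batch-size suggestion (placeholder data and sizes, not DeepRTplus's actual config), a larger batch mainly reduces the number of iterations per epoch; per-sample throughput on CPU may not improve much, as the follow-up comment shows.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset of 300,000 samples (random tensors, not real peptide encodings).
data = TensorDataset(torch.randn(300_000, 10), torch.randn(300_000, 1))

for batch_size in (16, 2000):
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)
    # 300000/16 = 18750 iterations per epoch vs. 300000/2000 = 150.
    print(f"BATCH_SIZE={batch_size}: {len(loader)} iterations/epoch")
```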

I'm using a big dataset of around 300,000 peptides. So each iteration is one batch? That means I was getting 4 * 16 = 64 samples/s with batch size 16. With batch size 2000 I get about 74 samples/s, so unfortunately that's not much of a speedup.

bfurtwa · Sep 21 '18 07:09
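To sanity-check the throughput figures quoted in this thread, a quick back-of-the-envelope calculation (all numbers taken from the comments above):

```python
# Epoch-time estimates from the numbers reported in this thread.
n_peptides = 300_000

# Batch size 16 at 4 it/s -> 4 * 16 = 64 samples/s.
samples_per_s_small = 4 * 16
hours_small = n_peptides / samples_per_s_small / 3600   # ~1.3 h, close to the reported 1:15 h/epoch

# Batch size 2000 at the reported ~74 samples/s.
samples_per_s_large = 74
hours_large = n_peptides / samples_per_s_large / 3600   # ~1.1 h

print(f"batch 16:   {samples_per_s_small} samples/s, ~{hours_small:.2f} h/epoch")
print(f"batch 2000: {samples_per_s_large} samples/s, ~{hours_large:.2f} h/epoch")
```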

I see. With such a big dataset, you could train the model on a high-quality subset (say, q-value < 0.001 or smaller; see the filtering sketch below). Alternatively, there are GPU cloud servers. Hope that helps.

horsepurve · Sep 21 '18 15:09
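For the subset suggestion, here is a hedged sketch of filtering peptides by q-value with pandas. The file name, column names, and output layout are hypothetical; adapt them to your search-engine output and to the training-file format DeepRTplus expects (e.g. the layout of mod.txt).

```python
import pandas as pd

# Hypothetical tab-separated search results with (at least) these columns:
# 'sequence', 'RT', 'q_value'. Real column names depend on your search engine.
df = pd.read_csv("search_results.tsv", sep="\t")

# Keep only confidently identified peptides.
subset = df[df["q_value"] < 0.001]

# Write a smaller training file; adjust the columns and layout to what
# DeepRTplus expects for its training input.
subset[["sequence", "RT"]].to_csv("train_subset.txt", sep="\t", index=False)
print(f"Kept {len(subset)} of {len(df)} peptides")
```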