
Add caffe with CuDNN[R4] to benchmark.

cesarsalgado opened this issue 9 years ago · 7 comments

cesarsalgado · Feb 29 '16 00:02

I do not think this will give us a lot more data points, but I am happy to do it. A Caffe install is always a bit of a tightrope balancing act to get right; I'll do it in a few days.

soumith · Feb 29 '16 00:02

Thanks!

cesarsalgado · Feb 29 '16 02:02

I've run the Caffe numbers here: https://github.com/soumith/convnet-benchmarks/commit/6f718dbcfdaefe1af6c04ab2be3927e0728b599e

It is strange, because the Caffe numbers look to be quite off (Caffe vs Torch-fp32):

  • Alexnet: 128ms vs 81ms
  • Overfeat: 430ms vs 268ms
  • VGG-A: 680ms vs 529ms
  • Googlenet: 484ms vs 470ms

The only thing I can think of right now is that Torch enables the CuDNN autotuner (via a caching mechanism on sizes / strides), and I suspect that Caffe does not enable it and just uses the CuDNN heuristics, which do not always give the best performance.
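
To make the distinction concrete, here is a minimal sketch against the CuDNN v4-era C API (the `pick_heuristic` / `pick_autotuned` names are just illustrative, and descriptor setup plus error handling are omitted). The heuristic path asks CuDNN to guess an algorithm from the problem description; the autotuned path times every candidate on the actual shapes, and a framework would cache the winner per unique size/stride combination.

```c
#include <cudnn.h>

/* Heuristic choice: CuDNN picks an algorithm from its built-in model of the
 * problem, which is not always the fastest one on the actual hardware. */
static cudnnConvolutionFwdAlgo_t pick_heuristic(cudnnHandle_t h,
                                                cudnnTensorDescriptor_t x,
                                                cudnnFilterDescriptor_t w,
                                                cudnnConvolutionDescriptor_t conv,
                                                cudnnTensorDescriptor_t y)
{
    cudnnConvolutionFwdAlgo_t algo;
    cudnnGetConvolutionForwardAlgorithm(h, x, w, conv, y,
                                        CUDNN_CONVOLUTION_FWD_PREFER_FASTEST,
                                        0 /* no workspace limit */, &algo);
    return algo;
}

/* Autotuned choice: CuDNN actually times every forward algorithm on this
 * exact problem and returns the results sorted by measured time. Run this
 * once per unique input shape and cache perf[0].algo. */
static cudnnConvolutionFwdAlgo_t pick_autotuned(cudnnHandle_t h,
                                                cudnnTensorDescriptor_t x,
                                                cudnnFilterDescriptor_t w,
                                                cudnnConvolutionDescriptor_t conv,
                                                cudnnTensorDescriptor_t y)
{
    int returned = 0;
    cudnnConvolutionFwdAlgoPerf_t perf[8];
    cudnnFindConvolutionForwardAlgorithm(h, x, w, conv, y,
                                         8, &returned, perf);
    return perf[0].algo;
}
```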

In fact, I am now suspecting that maybe TF also does not enable the autotuner.

The only network where Caffe comes close to Torch is Googlenet; it seems to have serious perf regressions for the other three (though both are using the same code, i.e. CuDNN R4 + CuBLAS 7.5).

Should I add these numbers to the readme? Considering how sensitive the benchmarks have become, I would want someone from the Caffe ecosystem to take a quick look at the prototxt files to see if there are any recently added settings I should include.

soumith · Feb 29 '16 04:02

Adding them with a slight warning containing your second paragraph seems like a good thing to do... better than sticking with the 'native' bench, IMO. Thanks for the great work. I can take a look at the Caffe bench and prototxt files a bit later in the day if that helps.

beniz · Feb 29 '16 07:02

OK, so quick remarks:

  • the .prototxt files are in the old format; not that it matters much, I believe, but I could PR an update if you're interested
  • alexnet.prototxt is missing the ReLUs in between fc6, fc7, and fc8; not sure whether this is on purpose? (see the sketch below)
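
For reference, a sketch of what the missing activations could look like in the new prototxt format; I'm assuming the fully connected blobs keep the fc6/fc7 names and the ReLUs are applied in place:

```
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
```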

beniz · Feb 29 '16 07:02

@beniz def up for a PR to bring it up to date. The missing ReLUs are def an oversight; they have to be added.

soumith · Feb 29 '16 07:02

I recently looked into the performance of Caffe when bringing our framework Leaf up to speed, and I can confirm that the biggest speed hit comes from not using the autotuner. Caffe is also losing a bit of time (IIRC 2-3ms) because it reshapes its layers on every forward pass, reallocating some cuDNN descriptors in the process.
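
To illustrate the second point, here is a rough sketch of the shape-caching that avoids that cost (this is not Caffe's actual code; the struct and function names are made up):

```c
#include <cudnn.h>

/* Hypothetical per-layer state; the descriptor is created once at setup
 * time with cudnnCreateTensorDescriptor(). */
struct conv_layer_state {
    int n, c, h, w;                 /* shape the descriptor was last set to */
    cudnnTensorDescriptor_t x_desc;
};

/* Only touch the cuDNN descriptor when the input shape actually changes;
 * for a fixed-size benchmark run this makes the per-iteration reshape a
 * no-op instead of a few milliseconds of descriptor work. */
static void maybe_reshape(struct conv_layer_state *s,
                          int n, int c, int h, int w)
{
    if (s->n == n && s->c == c && s->h == h && s->w == w)
        return;

    cudnnSetTensor4dDescriptor(s->x_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               n, c, h, w);
    s->n = n; s->c = c; s->h = h; s->w = w;
}
```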

hobofan · Feb 29 '16 08:02