
Separating the library and the kernel in the results?

Open benanne opened this issue 10 years ago • 1 comment

I noticed that the previous cuda-convnet result was replaced by a better one, and this time the wrapper used is pylearn2. I think both results are relevant and interesting, as both the kernel and the library used will affect performance. It would also be very interesting (for me personally at least) to see how the pylearn2-wrapped cuda-convnet compares with using cuda-convnet's own Python bindings, for example (and of course Torch's).

Additionally, some libraries (like Theano/pylearn2 and Torch) support different kernels, so it would be useful to get numbers for all of the options.

So I thought it would be useful to have separate "library" and "kernel" columns to indicate more clearly which libraries have been benchmarked, and which kernels were used, instead of listing a subset of library+kernel combinations. Just an idea :)
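As a purely illustrative sketch of that idea (rows and the time column are placeholders, not real benchmark entries), the table might look something like:

```markdown
| Library                        | Kernel       | Time |
| ------------------------------ | ------------ | ---- |
| Theano/pylearn2                | cuda-convnet | ...  |
| Torch                          | cuda-convnet | ...  |
| cuda-convnet (native bindings) | cuda-convnet | ...  |
```

This way each row is a specific library+kernel combination, and the same kernel can appear under several wrappers for comparison.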

On a somewhat related note, I apologize if I was a bit too eager spreading the link to this repo around, as some people seem to be reacting strongly to these results :) I just thought it's really cool that someone is taking the time to compare these various options and publishing some hard numbers. Kudos to you!

benanne avatar Jul 29 '14 08:07 benanne

> I noticed that the previous cuda-convnet result was replaced by a better one, and this time the wrapper used is pylearn2. I think both results are relevant and interesting, as both the kernel and the library used will affect performance.

That seems fair, I'll add a column in there.

> It would also be very interesting (for me personally at least) to see how the pylearn2-wrapped cuda-convnet compares with using cuda-convnet's own Python bindings, for example (and of course Torch's).

It looks like benchmarking the kernels through cuda-convnet's own Python bindings is quite difficult.

> On a somewhat related note, I apologize if I was a bit too eager spreading the link to this repo around, as some people seem to be reacting strongly to these results

I didn't think these results were all that relevant either; I haven't even finished them (benchmarking one layer gives you a rough idea, but doesn't really tell you much else!). I hope no bridges are broken over this.

soumith avatar Jul 29 '14 13:07 soumith