Soumith Chintala

Results 312 comments of Soumith Chintala

So, getting back to the discussion. I have volunteers from a few frameworks, looking for some.

# Volunteers available

- Torch: Soumith, @SeanNaren, Shubo Sengupta (Baidu)
- TensorFlow: @vrv and @martinwicke...

Excellent, @pranv and @f0k. @craffel was interested in doing part of the Theano benchmarking as well.

I don't have plans to benchmark / maintain these tables. If you send a PR with the README tables updated for all the frameworks, I'm happy to merge.

You need to add two lines near the top of your script:

```
import torch.backends.cudnn as cudnn
cudnn.benchmark = True
```

That will turn on the cudnn autotuner that...
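To illustrate where those two lines go, here is a minimal sketch of a script with the autotuner enabled (the small Conv2d model and input size are made up for illustration; the flag pays off when input shapes stay fixed across iterations, since cuDNN re-tunes whenever it sees a new shape):

```python
import torch
import torch.backends.cudnn as cudnn

# Enable the cuDNN autotuner near the top, before any forward passes:
# cuDNN will benchmark its convolution algorithms on the first pass and
# pick the fastest one for these exact input sizes.
cudnn.benchmark = True

# Hypothetical model and input, just to show placement of the flag.
net = torch.nn.Conv2d(3, 64, kernel_size=11, stride=4)
x = torch.randn(1, 3, 224, 224)
y = net(x)
print(y.shape)  # output spatial size: floor((224 - 11) / 4) + 1 = 54
```

Note that with inputs of varying sizes, the repeated re-tuning can actually make things slower, so the flag is best for fixed-size benchmarking workloads like these.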

Both the MXNet and Chainer scripts are ready, thanks to Kenta Oono and @antinucleon. As some of you might know, the ICLR deadline is on Thursday, so I'm a bit too busy with...

To be fair to both the Chainer and MXNet folks, they gave me scripts to benchmark. I put it off because of NIPS / ICLR, and their libs have changed APIs,...

Just finished Chainer. Working on MXNet...

I committed the MXNet AlexNet + GoogLeNet scripts that @antinucleon had given me. I wanted to get some experience with MXNet before I benchmarked it, because it can use multiple threads...

For comparison, here's the log of the Caffe + OpenBLAS numbers on the same machine (it's the DIGITS box ;-) ): https://github.com/soumith/convnet-benchmarks/blob/cpu/caffe/output_alexnet.log

More info is in the CPU branch: https://github.com/soumith/convnet-benchmarks/tree/cpu

The alexnet-owt protobuf, with the same architecture I use for the GPU versions, is here: https://github.com/soumith/convnet-benchmarks/blob/cpu/caffe/imagenet_winners/alexnet.prototxt

The Intel-adapted version is here: https://github.com/soumith/convnet-benchmarks/blob/cpu/intel_optimized_technical_preview_for_multinode_caffe_1.0/models/intel_alexnet/alexnet.prototxt