Use core.matrix rather than clatrix directly
Hi, Synaptic looks very interesting. Thanks for making it open source. I was wondering if you knew about core.matrix, and whether you had a reason to use clatrix directly rather than going through the core.matrix API. The overhead of protocol dispatch is minimal, especially since it only happens once per matrix multiply or dot product.
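To illustrate what I mean, here is a rough sketch (not taken from Synaptic) of going through the core.matrix API with clatrix selected as the backend; the matrices are just placeholders:

```clojure
;; Rough sketch, not from Synaptic: core.matrix API with clatrix as backend.
;; Assumes core.matrix and clatrix are on the classpath.
(require '[clojure.core.matrix :as m])

;; core.matrix knows clatrix under the :clatrix implementation keyword
(m/set-current-implementation :clatrix)

(let [a (m/matrix [[1.0 2.0] [3.0 4.0]])
      b (m/matrix [[5.0 6.0] [7.0 8.0]])]
  ;; protocol dispatch happens once per call, then jBLAS does the actual work
  (m/mmul a b))
```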
Cheers, Jeff
In theory you are right, @rosejn. I started with core.matrix as well, but criterium benchmarks showed that inlining the jBLAS calls (https://github.com/whilo/boltzmann/blob/master/src/boltzmann/jblas.clj#L42) helped quite a bit even over using clatrix directly. The results also depended on my laptop's power management, though. I spent several nights benchmarking, trying to reach performance comparable to Theano, and failed: the CPU path is a factor of 2-4 slower, and Theano's GPU path is 10x faster or more.
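To give an idea of the kind of comparison I mean, a criterium setup looks roughly like this; treat it as a sketch, the matrix size and random data are only illustrative:

```clojure
;; Sketch of a criterium comparison: inlined jBLAS interop vs. core.matrix/clatrix.
(require '[criterium.core :refer [quick-bench]]
         '[clojure.core.matrix :as m])
(import '[org.jblas DoubleMatrix])

(m/set-current-implementation :clatrix)

(let [n  512
      ja (DoubleMatrix/rand n n)              ; plain jBLAS matrices
      jb (DoubleMatrix/rand n n)
      ca (m/matrix (mapv vec (.toArray2 ja))) ; same data through core.matrix/clatrix
      cb (m/matrix (mapv vec (.toArray2 jb)))]
  (quick-bench (.mmul ja jb))    ; inlined jBLAS interop call
  (quick-bench (m/mmul ca cb)))  ; core.matrix -> clatrix -> jBLAS
```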
If you want to help core.matrix, you could reuse this code with clatrix and then again through core.matrix, and show in benchmarks that the overhead is negligible. The discussions in the group between the author of neanderthal and @mikera resonate with me a bit: the important thing about matrix libraries is not interchangeability but performance. I still think it should be possible to use core.matrix, and I like its approach more than neanderthal's, but at the moment there is no fast backend to prove it, not even for the CPU.
On my machine, inlining the protocols was a double-digit percentage improvement, if I recall correctly. Post your findings at https://groups.google.com/forum/#!forum/numerical-clojure or here; there are quite a few arguments there about core.matrix and performance. My impression is that this performance tuning needs quite a bit of work, otherwise Clojure will not be competitive with e.g. Python for scientific computing.
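To make clear what I mean by inlining: calling jBLAS via type-hinted interop, skipping both clatrix and the core.matrix protocols. A simplified sketch, not the exact code from the linked jblas.clj:

```clojure
;; Simplified sketch of "inlining": direct, type-hinted jBLAS interop,
;; with no protocol dispatch in between.
(import '[org.jblas DoubleMatrix])

(defn mmul
  "Matrix multiply two jBLAS matrices directly."
  ^DoubleMatrix [^DoubleMatrix a ^DoubleMatrix b]
  (.mmul a b))
```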