Johnnie Gray

265 comments by Johnnie Gray

@stsievert your answers are mostly (all?) what I'd call batched contractions - a 'normal' contraction with one index shared by every array. I wrote some code to do this...
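A minimal sketch of what a batched contraction looks like, using hypothetical shapes (not the actual code referred to above): the batch index `b` is shared by every operand and carried through to the output.

```python
import numpy as np

# A 'batched' contraction: the index b is shared by every operand.
# Contract x[b,i,j] with y[b,j,k] over j, batched over b -> z[b,i,k].
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2, 3))
y = rng.normal(size=(4, 3, 5))

z = np.einsum('bij,bjk->bik', x, y)

# Equivalent to a separate matmul for each slice along b:
z_loop = np.stack([x[b] @ y[b] for b in range(4)])
assert np.allclose(z, z_loop)
```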

Hi @igtrnt yes I think you're right - the algorithm was written with 'standard' einsums in mind where two different intermediates sharing the exact same indices is not possible. Out...

> I also feel that you might have a similar issue with greedy.

I think with greedy, the optimizer actively performs 'deduplication' as the first step i.e. repeated terms `"ab,ab,..."`...
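A sketch of what deduplicating repeated terms amounts to (not opt_einsum's actual preprocessing code): identical terms such as the two `'ab'` operands below can be fused into a single elementwise product before any path finding.

```python
import numpy as np

# In 'ab,ab,bc->c' the two identical 'ab' operands can be fused into one
# elementwise product up front (the same array twice, so just a square).
rng = np.random.default_rng(1)
a = rng.normal(size=(3, 4))
c = rng.normal(size=(4, 5))

full = np.einsum('ab,ab,bc->c', a, a, c)

# After dedup the expression collapses to 'ab,bc->c' with a * a:
deduped = np.einsum('ab,bc->c', a * a, c)
assert np.allclose(full, deduped)
```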

Ah I see, you are right! I think I was overestimating `greedy`'s preprocessing / combining it with `dp`'s. `dp` does 'single term' reductions, so in this case `'ij->j'`, `'jk->j'` and...
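To illustrate the 'single term' reductions mentioned here (a sketch, not the `dp` optimizer's actual code): an index that appears in only one operand and not in the output can be summed out of that operand immediately.

```python
import numpy as np

# For 'ij,jk->', the index i appears only in the first term and k only
# in the second, so each operand reduces to shape (j,) on its own
# ('ij->j' and 'jk->j') before the two are contracted.
rng = np.random.default_rng(2)
x = rng.normal(size=(3, 4))
y = rng.normal(size=(4, 5))

direct = np.einsum('ij,jk->', x, y)

xi = np.einsum('ij->j', x)  # sum out i
yk = np.einsum('jk->j', y)  # sum out k
reduced = np.einsum('j,j->', xi, yk)
assert np.allclose(direct, reduced)
```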

I would also be in favour of the simplification i.e. 2. I guess the question is which functions the ``backend_kwargs`` get passed to? Just the last function call?...

Using ``pypy`` also gives a decent speed-up (~300% for the 'dp' algorithm in some cases), suggesting a thorough cythonization might also be pretty beneficial.

Yeah this is not something I have experience with either. I guess one question would be whether to maintain cython and python versions of the relevant bits (which would probably...

I note that one possibility now is just to type-annotate ``opt_einsum`` and then let cython do its thing in [pure-python mode](https://cython.readthedocs.io/en/latest/src/tutorial/pure.html), though this might not maximize performance.
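For illustration, pure-python mode can work from ordinary PEP 484 annotations, so the module keeps running unchanged under CPython while ``cythonize`` can use the types for C-level speed. This is a hypothetical helper, not actual ``opt_einsum`` code:

```python
# Hypothetical example of annotations Cython's pure-python mode can pick
# up: plain PEP 484 type hints, no `import cython` required, so the file
# still runs as ordinary Python when Cython is absent.

def flop_count(dim_sizes: list, num_terms: int) -> int:
    """Rough cost of contracting `num_terms` operands over `dim_sizes`."""
    overall: int = 1
    for d in dim_sizes:
        overall *= d
    # factor in the number of pairwise combinations per output element
    return overall * max(num_terms - 1, 1)
```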

I'm adding a few tweaks to the 'dp' algorithm (with an eye on it replacing the current 'optimal' implementation), but that's all pretty much self-contained!

So in the extreme case, you just sum over all the indices at once, which ends up being exactly the same as pure einsum. For this case the largest intermediate is now...
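A small sketch of the contrast being drawn, with made-up shapes: summing over all indices in one go is the pure-einsum extreme, whereas a pairwise path only ever holds small matrix intermediates.

```python
import numpy as np

# For 'ab,bc,cd->ad', a single naive summation loops over (a, b, c, d)
# jointly (the pure einsum extreme), while a pairwise path first forms
# the intermediate 'ac', so the largest array held is just a matrix.
rng = np.random.default_rng(3)
A = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4))
C = rng.normal(size=(4, 5))

one_shot = np.einsum('ab,bc,cd->ad', A, B, C)

AB = np.einsum('ab,bc->ac', A, B)        # intermediate of shape (2, 4)
pairwise = np.einsum('ac,cd->ad', AB, C)
assert np.allclose(one_shot, pairwise)
```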