Sergio Sánchez Ramírez
Okay, fixed now.
+1 for this
An alternative for batch TN contraction is to add a shared "batch" hyperindex.
Reactant.jl will be capable of this when we add proper support for it.
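The batch-hyperindex idea can be sketched with a plain einsum. This is a purely illustrative NumPy snippet (array names and shapes are made up, not from Tenet.jl): keeping a common index `b` on every operand batches the contraction over the remaining indices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacks of tensors sharing a "batch" index b:
A = rng.random((8, 3, 4))  # indices (b, i, j)
B = rng.random((8, 4, 5))  # indices (b, j, k)

# Because b appears in every operand and in the output, a single
# einsum call performs 8 independent contractions over j at once.
C = np.einsum("bij,bjk->bik", A, B)

# Equivalent to contracting each batch element separately.
assert all(np.allclose(C[b], A[b] @ B[b]) for b in range(8))
```

The same effect in a tensor network is obtained by attaching the batch hyperindex to each tensor that varies across the batch, so the contraction path is computed once and reused for all batch elements.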
Moved to [bsc-quantic/Tensors.jl](https://github.com/bsc-quantic/Tensors.jl)
Updated the code for the latest `Tenet` and `Dagger` changes. It now calls `Tenet.contract` instead of `OMEinsum.EinCode`. @jofrevalles I also added some tests.
The compilation time seems to come from two sources:
- `OMEinsum` uses dynamic dispatch on runtime-generated `Val` values, which is why the first contraction always takes so long.
- ...
#55 includes a new `replace!` method that can replace a `Tensor` with a `TensorNetwork` as long as the open indices of the `TensorNetwork` match the labels of the `Tensor`.