Matt Fishman

497 comments by Matt Fishman

Never mind, I see it now.

> About the trivial type, I think here it's all about the code and what makes the code cleanest. With the QN system we didn't have this problem since there...

I'm thinking of using `A ⊗ B` as notation for lazily computing ITensors (thought of as a tensor product, so you could think of it as tensors multiplied together...
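A minimal sketch of what such a lazy `⊗` could look like, assuming illustrative names (`LazyProd`, `materialize`) that are not part of ITensors.jl: `⊗` only records the factors, and nothing is contracted until `materialize` is called.

```julia
# Hypothetical lazy tensor-product wrapper; names are illustrative,
# not the actual ITensors.jl design.
struct LazyProd
  factors::Vector{Any}
end

# `⊗` builds the lazy representation without computing anything.
⊗(a, b) = LazyProd(Any[a, b])
⊗(a::LazyProd, b) = LazyProd(vcat(a.factors, Any[b]))
⊗(a, b::LazyProd) = LazyProd(vcat(Any[a], b.factors))
⊗(a::LazyProd, b::LazyProd) = LazyProd(vcat(a.factors, b.factors))

# Materializing actually multiplies/contracts the factors together.
materialize(p::LazyProd) = reduce(*, p.factors)
```

With numbers or matrices standing in for tensors, `2 ⊗ 3 ⊗ 4` stays lazy until `materialize` reduces it with `*`; for ITensors, `*` would be the tensor contraction.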

Looks like a good start; nice to see it is pretty simple.

Thanks, seems like we can promote the tensors to a common type ourselves to circumvent that.

@kmp5VT I think we should only send tensors with `Dense` storage wrapping `CuArray` data to the `cuTENSOR` backend, for example:

```julia
function NDTensors.contract(
  Etensor1::Exposed{
```

For some more context, here is where the dense blocks are contracted when two tensors with `BlockSparse` storage are contracted: https://github.com/ITensor/ITensors.jl/blob/v0.4.0/NDTensors/src/blocksparse/contract_generic.jl#L141-L150. `R[blockR]`, `tensor1[blocktensor1]`, and `tensor2[blocktensor2]` are blocks of the block...
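A hedged toy version of the pattern described above, using illustrative names (`contract_dense`, `FakeCuArray` standing in for `CuArray`) that are not the actual NDTensors code: the block-sparse contraction loops over block pairs with a matching contracted index and hands each pair of dense blocks to a dense contraction, which is the call that a specialized method could route to a `cuTENSOR`-style backend.

```julia
# Toy sketch, not the NDTensors implementation: blocks are plain
# matrices keyed by (row, col) block indices.

# Stand-in for a GPU array type like CuArray; purely illustrative.
struct FakeCuArray
  data::Matrix{Float64}
end

# Generic dense block contraction: ordinary matrix multiplication.
contract_dense(a::Matrix, b::Matrix) = a * b

# Specialized method that would dispatch GPU-backed dense blocks to a
# cuTENSOR-style backend; here it just unwraps and multiplies.
contract_dense(a::FakeCuArray, b::FakeCuArray) = FakeCuArray(a.data * b.data)

# Block-sparse contraction: only the dense blocks are ever contracted,
# so a GPU backend only needs to handle dense data.
function contract_blocksparse(A, B)
  R = Dict{Tuple{Int,Int},Matrix{Float64}}()
  for ((i, k), a) in A, ((k2, j), b) in B
    k == k2 || continue
    block = contract_dense(a, b)
    R[(i, j)] = haskey(R, (i, j)) ? R[(i, j)] + block : block
  end
  return R
end
```

The point of the dispatch is that only `Dense`-like storage wrapping GPU data hits the specialized method; block-sparse structure is handled once, generically, above it.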

You say "cuTENSOR has a compat restriction on TensorOperations" but `cuTENSOR` is a lower level library that likely doesn't know anything about `TensorOperations`, maybe you mean the other way around?

In the `TensorOperations.jl` `Project.toml`: https://github.com/Jutho/TensorOperations.jl/blob/master/Project.toml I see they have a `cuTENSOR` extension and the `[compat]` entry is set to `cuTENSOR = "1"`, while the latest `cuTENSOR.jl` version is `v2.1.0` (https://github.com/JuliaGPU/CUDA.jl/blob/master/lib/cutensor/Project.toml)....
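For reference, declaring compatibility with both majors would look like the following `[compat]` fragment (illustrative only; actually supporting `cuTENSOR.jl` `v2` also requires the code changes in the upgrade PR, since `v2` changed the API):

```toml
[compat]
cuTENSOR = "1, 2"
```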

I see there is an open PR about upgrading `TensorOperations` to `cuTENSOR` `v2` here: https://github.com/Jutho/TensorOperations.jl/pull/160. Also note that `cuTENSOR.jl` `v2` only supports Julia 1.8 and onward (https://github.com/JuliaGPU/CUDA.jl/blob/v5.3.3/lib/cutensor/Project.toml#L20).