Matt Fishman
I guess the latest `CUDA.jl` version now requires Julia 1.8 and up anyway (https://github.com/JuliaGPU/CUDA.jl/blob/v5.3.3/Project.toml#L80), so we could have the same restriction for `NDTensorsCUDAExt` and `NDTensorscuTENSORExt`.
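For reference, that restriction is just a one-line `[compat]` entry; the analogous entry in the extensions' `Project.toml` would look something like this (a sketch, mirroring the linked `CUDA.jl` bound):

```toml
[compat]
julia = "1.8"
```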
I think the best course of action would be something like what you said: manually adding and removing packages as needed in the tests instead of listing them as test dependencies, as in the sketch below.
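A minimal sketch of what that could look like in a test runner; the `ARGS`-based gating and the file name here are illustrative assumptions, not the actual test setup:

```julia
using Pkg

# Hypothetical gating: select the GPU test group via a command-line flag.
gpu_tests = "cuda" in ARGS

if gpu_tests
  # Install the GPU packages only when the CUDA tests are requested,
  # so they don't have to be hard test dependencies.
  Pkg.add(["CUDA", "cuTENSOR"])
end

try
  # `test_cuda.jl` is an illustrative file name, not the actual test layout.
  gpu_tests && include("test_cuda.jl")
finally
  # Remove the packages again so later test runs start from a clean state.
  gpu_tests && Pkg.rm(["CUDA", "cuTENSOR"])
end
```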
> @mtfishman Sorry, I did misunderstand what was going on. Thank you for looking into that and sending me this information! If we are supporting `CUDA` and `cuTENSOR` only in...
Sounds good, it seems like there are a few wrinkles to work out, but it's mostly coming along.
Thanks for the context @lkdvos. It makes sense to start by improving the `cuTENSOR` wrapper code before going through a big refactor of `TensorOperations`. This isn't causing serious issues for...
Thanks @kmp5VT, this is a great step forward for our GPU backends! Can you update the table entry in https://github.com/ITensor/ITensors.jl/blob/v0.5.0/docs/src/RunningOnGPUs.md#gpu-backends?
Another idea is to provide a macro `@preserve_ortho`. This would wrap a block of code that the user asserts preserves the orthogonality limits of the MPS/MPO. The user may...
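For concreteness, here is a rough sketch of how such a macro might work: record the orthogonality limits, run the user's block, and restore them afterwards. It leans on the internal (unexported) `ITensors.leftlim`/`ITensors.rightlim` and `ITensors.setleftlim!`/`ITensors.setrightlim!` functions, so the details here are only an assumption, not a proposed implementation:

```julia
using ITensors

# Hypothetical `@preserve_ortho`: run `block` under the user's promise that
# it preserves the orthogonality of `M`, then restore the recorded limits,
# which operations like `setindex!` would otherwise reset.
macro preserve_ortho(M, block)
  quote
    local m = $(esc(M))
    local llim = ITensors.leftlim(m)
    local rlim = ITensors.rightlim(m)
    local result = $(esc(block))
    ITensors.setleftlim!(m, llim)
    ITensors.setrightlim!(m, rlim)
    result
  end
end
```

Usage might then look like `@preserve_ortho psi begin psi[n] = noprime(G * psi[n]) end` for a unitary gate `G` applied at the orthogonality center `n`, since assigning into `psi` would otherwise widen the recorded limits.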
We should also reconsider the behavior of `siteinds(::MPO)`. Right now, it acts like the proposed function `allsiteinds(::MPO)`, in that it returns a vector of IndexSets of all site indices on...
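As a sketch of the proposed split (`allsiteinds` is only the hypothetical name from this comment; `siteinds(M, j)` is the existing per-site accessor):

```julia
using ITensors

# Hypothetical `allsiteinds`: return the site indices of every site of the
# MPO, i.e. the current behavior of `siteinds(::MPO)`, under a more
# explicit name.
allsiteinds(M::MPO) = [siteinds(M, j) for j in 1:length(M)]
```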
This will be fixed in an upcoming rewrite of GradedAxes, which will account for the new [BlockArrays.jl v1.0 release](https://github.com/JuliaArrays/BlockArrays.jl/pull/255).
It was easy enough to fix in #1468 without rewriting the GradedUnitRange type for now.