Start on CUDA extension
This splits off basic support for CUDA tensor maps from https://github.com/QuantumKitHub/TensorKit.jl/pull/320, and doesn't cover diagonal tensors or factorizations at all. I also made some modifications in `src` to make things a little more generic. Some things aren't working yet:
- converting from an `AdjointTensorMap` of a `CuTensorMap` to a `CuTensorMap` of a different scalar type
- converting to a `CuArray`
- the `hasfusiontensor(I)` tests
- `diag`/`diagm`
- index flipping
Your PR requires formatting changes to meet the project's style guidelines.
Please consider running Runic (git runic main) to apply these changes.
```diff
diff --git a/src/tensors/tensor.jl b/src/tensors/tensor.jl
index 6946796..4744ca8 100644
--- a/src/tensors/tensor.jl
+++ b/src/tensors/tensor.jl
@@ -316,15 +316,15 @@ end
 for randf in (:rand, :randn, :randexp, :randisometry)
     _docstr = """
-        $randf([rng=default_rng()], [TorA=Float64], codomain::ProductSpace{S,N₁},
+        $randf([rng=default_rng()], [TorA=Float64], codomain::ProductSpace{S,N₁},
             domain::ProductSpace{S,N₂}) where {S,N₁,N₂,T} -> t
-        $randf([rng=default_rng()], [TorA=Float64], codomain ← domain) -> t
+        $randf([rng=default_rng()], [TorA=Float64], codomain ← domain) -> t

     Generate a tensor `t` with entries generated by `$randf`.

-    The type `TorA` can be used to control the element type and
-    data type generated. For example, if `TorA` is a `CuVector{ComplexF32}`
-    or `ROCVector{Float64}`, then the final output `TensorMap` will have
-    that as its storage type.
+    The type `TorA` can be used to control the element type and
+    data type generated. For example, if `TorA` is a `CuVector{ComplexF32}`
+    or `ROCVector{Float64}`, then the final output `TensorMap` will have
+    that as its storage type.

     See also [`Random.$(randf)!`](@ref).
     """
```
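For reference, the docstring above says the `TorA` argument can select a GPU storage type. A minimal sketch of what that usage might look like (untested here; requires a CUDA-capable GPU and the `TensorKitCUDAExt` extension, and the exact behaviour may still change in this PR):

```julia
using TensorKit
using CUDA  # loading CUDA.jl triggers the TensorKitCUDAExt extension

# Per the docstring: passing a CuVector{ComplexF32} as TorA should produce
# a TensorMap whose data lives on the GPU with that storage type.
V = ℂ^2
t = rand(CuVector{ComplexF32}, V ⊗ V ← V)
# t should be a TensorMap with CuVector{ComplexF32} storage
```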
I kind of failed to realize that `fusiontensor(a, b, c)` always returns arrays, so these `convert(Array, t)`-based methods are a bit more painful than expected. It might be reasonable to consider testing the GPU methods by comparing to the CPU implementations rather than comparing to the `Array` implementations?
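For what it's worth, a hedged sketch of what such a CPU-vs-GPU comparison could look like (the `adapt`-based conversion and the `convert(TensorMap, ...)` round-trip are assumptions; the actual entry points in this PR may differ):

```julia
using TensorKit, CUDA, Adapt, Test

# Assumed pattern: build a CPU TensorMap, move it to the GPU, apply the same
# operation on both sides, and compare after copying the GPU result back.
V = ℂ^3
t_cpu = randn(ComplexF64, V ⊗ V ← V)
t_gpu = adapt(CuArray, t_cpu)   # hypothetical conversion; exact API may differ

r_cpu = 2 * t_cpu
r_gpu = 2 * t_gpu

# Compare against the CPU implementation instead of the Array implementation.
@test convert(TensorMap, r_gpu) ≈ r_cpu
```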
I think I have them mostly working now, actually... let me push in a moment
Codecov Report
:x: Patch coverage is 58.41584% with 42 lines in your changes missing coverage. Please review.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| ext/TensorKitCUDAExt/cutensormap.jl | 52.77% | 34 Missing :warning: |
| src/tensors/linalg.jl | 44.44% | 5 Missing :warning: |
| src/tensors/tensor.jl | 84.21% | 3 Missing :warning: |
| Files with missing lines | Coverage Δ | |
|---|---|---|
| ext/TensorKitCUDAExt/TensorKitCUDAExt.jl | 100.00% <100.00%> (ø) | |
| src/tensors/tensor.jl | 75.14% <84.21%> (-10.15%) | :arrow_down: |
| src/tensors/linalg.jl | 62.27% <44.44%> (-20.43%) | :arrow_down: |
| ext/TensorKitCUDAExt/cutensormap.jl | 52.77% <52.77%> (ø) | |
... and 30 files with indirect coverage changes
I haven't looked at the tests in detail yet, but this certainly looks very good already.
Re the tests: they are mostly copied from the CPU tensor tests. I have added a few to test CUDA-specific code paths and to test conversion to/from a CPU-based `TensorMap`. Some tests do not work yet (the basic linear algebra ones, `sylvester`) because some of the functionality isn't yet supported on CUDA.
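As a rough illustration, the to/from-CPU conversion tests might boil down to a round-trip check like this (the `cu` and `convert(TensorMap, ...)` entry points are assumptions, not necessarily the API this PR settles on):

```julia
using TensorKit, CUDA, Test

# Assumed round-trip: CPU TensorMap -> GPU -> back to CPU should be lossless.
V = ℂ^2
t = randn(Float64, V ← V)
t_gpu = cu(t)                      # hypothetical GPU conversion entry point
t_back = convert(TensorMap, t_gpu) # hypothetical CPU conversion entry point
@test t_back ≈ t
```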
OK I think I've responded to everything 🧐
I can't get the format issue to appear locally...
OK, is there anything else this needs besides https://github.com/QuantumKitHub/TensorKit.jl/pull/333 and somehow fixing the format check?