
Correctness checks and unit tests for GPU arrays used as storage backend for various states and operators


By virtue of Julia's parametric types and multiple dispatch, libraries written in Julia can support GPU acceleration even if they were not specifically designed with such capabilities in mind.

In this particular library, objects like kets and operators can be parameterized by the storage that is used for their numerical data (dense or sparse arrays, arrays stored in various GPU memories, and other, weirder types of arrays).

For instance:

julia> using QuantumOpticsBase

julia> op = create(FockBasis(20));

julia> typeof(op)
Operator{FockBasis{Int64}, FockBasis{Int64}, SparseArrays.SparseMatrixCSC{ComplexF64, Int64}}

julia> typeof(op.data)
SparseArrays.SparseMatrixCSC{ComplexF64, Int64}

julia> op1 = dense(op);

julia> typeof(op1.data)
Matrix{ComplexF64} (alias for Array{Complex{Float64}, 2})

Here you can see that both op and op1 represent the creation operator, but behind the scenes one of them is stored as a sparse matrix and the other as a dense matrix.

Similarly, one can use GPU-memory arrays like CUDA.CuArray, AMDGPU.ROCArray, OpenCL.CLArray, or others:

julia> using pocl_jll, OpenCL, QuantumOpticsBase, Adapt

julia> op = create(FockBasis(20));

julia> op1 = adapt(CLArray, op);

julia> typeof(op1.data)
CLArray{ComplexF64, 2, OpenCL.cl.UnifiedDeviceMemory}

Here we used an OpenCL array (which happens to be stored in CPU memory, so one can test GPU code paths without owning a GPU; the pocl_jll package provides an OpenCL runtime even if none is available on your system). We also used the Adapt.jl library, which provides a standard interface for converting complicated objects to GPU storage.
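The same pattern should apply to other backends. For instance, on a machine with an NVIDIA GPU, one could write something like the following (a sketch, assuming CUDA.jl is installed; the exact memory type parameter shown in the output depends on the CUDA.jl version):

julia> using CUDA, QuantumOpticsBase, Adapt

julia> op = dense(create(FockBasis(20)));

julia> op1 = adapt(CuArray, op);

julia> typeof(op1.data)
CuArray{ComplexF64, 2, CUDA.DeviceMemory}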

This bounty is for the creation of a "test suite" that verifies the GPU capabilities of QuantumOptics.jl.

The test suite is supposed to check that, for a given GPU array type T (see the sketch after this list):

  • conversion of sparse and dense operators, and of kets and bras, to GPU storage works
  • multiplication of operators with operators and with bras/kets, partial traces, inner products, tensor products, outer products, creation of projectors, and other similar "basic linear algebra" operations work
  • finding steady states, spectra, eigenvalues, eigenvectors, and other "features of Hamiltonians" works
  • operator exponentiation works
  • lazy operators and their application to states work
  • solving differential equations like schroedinger, lindblad, stochastic, mcwf, etc. (dynamic or not) works
  • solvers with lazy operators work
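To make this concrete, a single such check might be structured as below. This is only a sketch: check_application is a hypothetical helper, and it assumes the adapt method for kets requested in the deliverables further down is already in place.

using pocl_jll, OpenCL, QuantumOpticsBase, Adapt

# Hypothetical helper: apply an operator to a ket with T-backed storage
# and compare against the same computation done with standard CPU arrays.
function check_application(T)
    b = FockBasis(20)
    op = dense(create(b))
    ψ = coherentstate(b, 0.5)
    expected = op * ψ                       # CPU reference result
    result = adapt(T, op) * adapt(T, ψ)     # same computation on T-backed data
    isapprox(adapt(Array, result).data, expected.data; atol=1e-10)
end

check_application(CLArray)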

The test suite should be implemented as a non-public function or multiple functions that run all these operations and report whether they succeeded. It could look something like:

julia> array_type_support(CuArray) # returns a list of named tuples
[
(operation=:conversion_of_operator, runs=true, iscorrect=true),
...
...
(operation=:eigenvalues, runs=true, iscorrect=false),
...
(operation=:schroedinger, runs=false, iscorrect=nothing),
]

Above we have a list of reports where, for each report, we record the operation being attempted, whether it runs without raising exceptions, and whether it gives the correct result (e.g. as compared with the result computed with standard CPU arrays). You do not need to structure your reports exactly like this if you find a different setup neater.
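As a rough illustration of the shape such a function could take (a sketch only; array_type_support and the individual checks here are hypothetical, and the real suite would cover every operation listed above):

using Adapt, QuantumOpticsBase

function array_type_support(T)
    b = FockBasis(20)
    checks = [
        :conversion_of_operator => () -> begin
            op = dense(create(b))
            # round-trip through the GPU array type and compare
            adapt(Array, adapt(T, op)).data ≈ op.data
        end,
        # ... one entry per operation listed above ...
    ]
    map(checks) do (name, f)
        runs, iscorrect = true, nothing
        try
            iscorrect = f()    # true/false from the correctness comparison
        catch
            runs = false       # the operation raised an exception
        end
        (operation=name, runs=runs, iscorrect=iscorrect)
    end
end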

For the bounty to be completed:

  • implement an adapt method for Kets and Bras, similar to the existing one for operators (see the sketch after this list)
  • implement the test function as described above
  • report in your PR, as a comment, how it runs with OpenCL arrays
  • report in your PR how it runs on CUDA arrays (e.g. by using Google Colab, a free cloud service that provides GPUs and supports Julia)
  • add the OpenCL runner to the test suite
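For the first item, the ket/bra adapt methods could be modeled on the existing operator one. A minimal sketch, assuming Ket and Bra keep their numerical data in a data field next to a basis field, as in QuantumOpticsBase:

import Adapt
using QuantumOpticsBase

# Sketch: rebuild the state around storage adapted to the target array type.
Adapt.adapt_structure(to, x::Ket) = Ket(x.basis, Adapt.adapt(to, x.data))
Adapt.adapt_structure(to, x::Bra) = Bra(x.basis, Adapt.adapt(to, x.data))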

If you are new to Julia, make sure to:

Krastanov · May 19 '25