Integrating GPU acceleration support in OpenQuantumTools
As a first-pass test, we implemented a GPU solver as a separate function: https://github.com/USCqserver/OpenQuantumTools.jl/blob/f0e34ec2f94357d9bc1c2fa65459646a8c5b3857/src/QSolver/closed_system_solvers.jl#L30-L44
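For context, the GPU path for a closed-system solver essentially amounts to moving the Hamiltonian and the state to the device and handing the resulting ODE to the usual integrator. The sketch below is illustrative only and is not the linked code; the name `solve_schrodinger_gpu` and the use of CUDA.jl with OrdinaryDiffEq.jl are assumptions on my part.

```julia
# Illustrative sketch only, not the linked implementation.
# Assumes CUDA.jl and OrdinaryDiffEq.jl; solve_schrodinger_gpu is a hypothetical name.
using CUDA, OrdinaryDiffEq, LinearAlgebra

function solve_schrodinger_gpu(H::AbstractMatrix, u0::AbstractVector, tf)
    Hd = CuArray(ComplexF64.(H))    # move the (time-independent) Hamiltonian to the GPU
    ud = CuArray(ComplexF64.(u0))   # move the initial state to the GPU
    # Schrödinger equation du/dt = -i * H * u, with the matrix-vector product on the GPU
    f!(du, u, p, t) = mul!(du, Hd, u, -1.0im, 0.0im)
    prob = ODEProblem(f!, ud, (0.0, float(tf)))
    solve(prob, Tsit5(); save_everystep = false)
end
```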
Ideally, we would integrate this more cleanly. How to do so effectively comes down to resolving the issue raised in OpenQuantumBase.jl: https://github.com/USCqserver/OpenQuantumBase.jl/issues/40#issue-758982509
I raised it here simply because we would need to make changes in this repository as well once the issue in Base is resolved.
The following commit on the gpu-accel branch is my proposed solution: 24ed5d4dc5c01a049bb441c6cf22e1c852c6c341. As described, it works via multiple dispatch and assumes we create a CuAnnealing object in OpenQuantumBase, rather than adding an extra gpu=true/false flag to every solver.
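For concreteness, here is a minimal sketch of the dispatch pattern I have in mind; the struct definitions and the solver stubs are placeholders, not the actual OpenQuantumBase/OpenQuantumTools definitions:

```julia
# Placeholder types standing in for the OpenQuantumBase definitions.
struct Annealing{hType, uType}
    H::hType     # Hamiltonian backed by CPU arrays
    u0::uType    # initial state
end

struct CuAnnealing{hType, uType}
    H::hType     # Hamiltonian backed by CuArrays
    u0::uType
end

# One solver method per backend, selected by multiple dispatch on the annealing
# type, so no gpu = true/false flag is threaded through every solver call.
solve_schrodinger(A::Annealing, tf)   = "CPU path, tf = $tf"   # stub body
solve_schrodinger(A::CuAnnealing, tf) = "GPU path, tf = $tf"   # stub body
```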
If you find this a satisfactory solution, @neversakura, I will close the issue, and future updates to gpu-accel for other solvers will follow the same paradigm.
If you feel strongly about the flag approach, I'd like to hear your thoughts.
Thanks. I like your solution. A possibly simpler approach is to keep the current Annealing object and dispatch on its type parameter hType. For what we are trying to do now, I don't see any practical difference between the two approaches.
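A minimal sketch of that alternative, dispatching on the hType parameter of the existing Annealing type (the CuHamiltonian name here is a hypothetical stand-in for a GPU-backed Hamiltonian type):

```julia
# Placeholder types; only the dispatch pattern matters.
abstract type AbstractHamiltonian end
struct DenseHamiltonian <: AbstractHamiltonian end
struct CuHamiltonian    <: AbstractHamiltonian end   # hypothetical GPU-backed Hamiltonian

struct Annealing{hType <: AbstractHamiltonian, uType}
    H::hType
    u0::uType
end

# Keep the single Annealing type and branch on its hType parameter.
solve_schrodinger(A::Annealing{<:CuHamiltonian}, tf) = "GPU path"   # stub body
solve_schrodinger(A::Annealing, tf)                  = "CPU path"   # stub body
```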
However, introducing a new CuAnnealing type could make room for future GPU-specific optimizations, since it is completely decoupled from the Annealing object. So @naezzell, I have no objection to your proposal; feel free to close this issue.
By the way, I think we can still use the same constructor for CuAnnealing and Annealing, so the user only needs to define the CuHamiltonian type.
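One hedged way that could look (the build_annealing helper name is hypothetical, just to illustrate a single user-facing constructor that chooses the backend from the Hamiltonian type):

```julia
# Hypothetical sketch: a single user-facing constructor picks the annealing
# type from the Hamiltonian type, so the user only decides CPU vs GPU when
# defining the Hamiltonian.
abstract type AbstractHamiltonian end
struct DenseHamiltonian <: AbstractHamiltonian end
struct CuHamiltonian    <: AbstractHamiltonian end   # stand-in for a GPU-backed Hamiltonian

struct Annealing{hType, uType}
    H::hType
    u0::uType
end

struct CuAnnealing{hType, uType}
    H::hType
    u0::uType
end

build_annealing(H::AbstractHamiltonian, u0) = Annealing(H, u0)    # CPU default
build_annealing(H::CuHamiltonian, u0)       = CuAnnealing(H, u0)  # GPU when the Hamiltonian is GPU-backed
```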