
Calculating gradients with cureg

kyriienko opened this issue · 9 comments

I am not sure whether this should be an enhancement or a fix, but I noticed that quantum backpropagation does not work with a GPU register (reg |> cu): it raises a "scalar getindex is disallowed" error. Is there a workaround? Running expect'(...) on the GPU could speed up calculations by a lot.
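(A minimal reproduction, assuming YaoExtensions for a parameterized circuit and a Hamiltonian, might look like:)

```julia
using Yao, CuYao, YaoExtensions

c = variational_circuit(10, 5)   # any parameterized circuit
h = heisenberg(10)               # any Hermitian observable
reg = rand_state(10) |> cu
expect'(h, reg => c)             # fails with "scalar getindex is disallowed"
```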

kyriienko · May 26 '20

Hi,

If your circuit contains only rotation blocks, a simple patch should make it work. (Or decompose your circuit into rotation gates; see the sketch below.)
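For instance, a minimal sketch of such a decomposition (this ZZ-rotation example is an illustration, not the only possibility): a two-qubit ZZ rotation exp(-iθ/2 Z⊗Z) can be written as two CNOTs around a single-qubit Rz, so only rotation gates carry the parameter:

```julia
using Yao

# exp(-iθ/2 Z⊗Z) == CNOT(i→j) · Rz(θ) on j · CNOT(i→j)
zz_rot(n, i, j, θ) = chain(n,
    control(i, j => X),   # CNOT
    put(j => Rz(θ)),      # the rotation carries the trainable parameter
    control(i, j => X))   # CNOT
```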

For example, using the variational circuit from YaoExtensions:

```julia
julia> using CuYao

julia> reg = rand_state(20) |> cu
ArrayReg{1, Complex{Float64}, CuArray...}
    active qubits: 20/20

julia> using YaoExtensions

julia> c = variational_circuit(20, 10)
nqubits: 20
chain
├─ chain
│  ├─ chain
│  │  ├─ put on (1)
│  │  │  └─ rot(X, 0.0)
│  │  └─ put on (1)
│  │     └─ rot(Z, 0.0)
....

julia> using CuArrays

julia> YaoBlocks.AD.as_scalar(x::CuArray) = Array(x)[]   # the patch!

julia> expect'(heisenberg(20), reg=>c)
ArrayReg{1, Complex{Float64}, CuArray...}
    active qubits: 20/20 => [0.006080860730138406, 0.0012417146161135244, 0.0007451153265575091, 0.0011842974467126087, 0.008333432623283833, -0.004009333674749263, 0.002908613188036634, 0.0033043493935384135, -0.003400034029173132, -0.004841144309199607  …  -0.0007351555978013533, -0.001984034465488934, 8.974137744951463e-5, -0.002936202848530067, 0.0014057599337781372, 0.0027325843231737826, -0.0003051050671262587, -0.004119464235446285, 0.003910598509407808, 0.0031037580186253078]
```

This is because we forbid scalar access to CuArrays in CuYao, to guard against unintentionally slow element-wise access. We will add this patch to CuYao later.

For non-rotation blocks, a lot of extra work is required to work around the shared-write issue on GPU.

GiggleLiu · May 27 '20

@GiggleLiu Thank you, Leo! This worked well for rotation gates.

For the generic case, in principle one can use a decomposition into CNOTs + rotations. There is a problem, however, when rotations must share the same angle: dispatch! and update! will treat them as separate variables (e.g. if we want to use Rx(-ϕ) CNOT Rx(ϕ)). Is there a way to link rotations together? I know repeat(Rx(ϕ), 1:n) works like this, but it has limited applicability.

kyriienko · May 28 '20

Good point. Yao cannot handle shared parameters properly, but Zygote can.

https://github.com/QuantumBFS/QuAlgorithmZoo.jl/blob/master/examples/PortZygote/shared_parameters.jl

Note: negation (the minus sign in Rx(-ϕ)) is a classical operation, and Yao is unable to handle the classical part of the computational graph; that part should be done with Zygote. The Zygote patch defines the adjoint for dispatch!, which lets you bridge the world of Yao and the world of Zygote.

FYI:

Zygote is a quite flexible AD framework, but it requires some trial and error. If you have any problems using it, feel free to ping me in this issue or on the Julia Slack, in the #autodiff, #quantum-computing, or #yao-dev channels (https://slackinvite.julialang.org/).

Here is a project that combines Yao and neural networks, https://github.com/wangleiphy/BetaVQE.jl

GiggleLiu · May 28 '20

I guess @GiggleLiu's point is not to replace expect' with Zygote, but to combine them. For example, you can fill a parameter array from the shared parameters and then dispatch it into the circuit. expect' will then give you the gradient of the parameter array, and Zygote will correctly accumulate the gradients of the shared parameters. See the sketch below.
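A minimal sketch of that pattern (hypothetical; it assumes the adjoints from the linked patch file so Zygote can differentiate through the dispatch, and heisenberg from YaoExtensions):

```julia
using Yao, YaoExtensions, Zygote
# Assumes the Zygote patch from the linked shared_parameters.jl example is
# loaded, since Zygote does not support array mutation on its own.

n = 4
# Rx(ϕ) · CNOT · Rx(-ϕ): two rotations that must share one classical parameter ϕ
circ = chain(n, put(1 => Rx(0.0)), control(1, 2 => X), put(1 => Rx(0.0)))
h = heisenberg(n)

function loss(ϕ)
    params = [ϕ, -ϕ]            # fill the parameter array from the shared ϕ
    c = dispatch(circ, params)  # non-mutating dispatch
    real(expect(h, zero_state(n) => c))
end

# Zygote accumulates ∂loss/∂ϕ across both rotations automatically
g = Zygote.gradient(loss, 0.5)
```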

wangleiphy · May 28 '20

Sure, I am not suggesting using the backward rules generated by Zygote, but rather gluing Yao's backward rules into Zygote.

BTW: mutating arrays are not properly supported in Zygote yet, so it will throw an error if you don't include the patch file from the example.

GiggleLiu · May 28 '20

Thanks @GiggleLiu @wangleiphy! I joined the Slack channels, so I will be learning AD and looking into beta-VQE.

kyriienko · May 28 '20

I have gotten GPU backprop to work for gates of the form e^{-i \theta H} with a known and simple H, as follows:

"""
    HEVGate <: PrimitiveBlock{2} 
    $(FIELDS)

Time evolution with the Heisenberg interaction gate.
"""
mutable struct HEVGate{T<:Real} <: YaoBlocks.PrimitiveBlock{2}
    theta::T
end

function Yao.mat(::Type{T}, gate::HEVGate) where {T<:Real}
    return Complex{T}[exp(-1im*gate.theta) 0 0 0;
                      0 exp(1im*gate.theta)*cos(2gate.theta) -1im*exp(1im*gate.theta)*sin(2gate.theta) 0;
                      0 -1im*exp(1im*gate.theta)*sin(2gate.theta) exp(1im*gate.theta)*cos(2gate.theta) 0;
                      0 0 0 exp(-1im*gate.theta)]
end

function Yao.mat(::Type{T}, gate::HEVGate) where {T<:Complex}
    return T[exp(-1im*gate.theta) 0 0 0;
             0 exp(1im*gate.theta)*cos(2gate.theta) -1im*exp(1im*gate.theta)*sin(2gate.theta) 0;
             0 -1im*exp(1im*gate.theta)*sin(2gate.theta) exp(1im*gate.theta)*cos(2gate.theta) 0;
             0 0 0 exp(-1im*gate.theta)]
end

@const_gate HEVGenerator::ComplexF64 = [1 0 0 0; 0 -1 2 0; 0 2 -1 0; 0 0 0 1]
@const_gate HEVGenerator::ComplexF32

Base.show(io::IO, block::HEVGate{T}) where {T} = print(io, "$(HEVGate){$T}($(block.theta))")
YaoBase.niparams(::HEVGate) = 1
YaoBase.getiparams(block::HEVGate) = block.theta
YaoBase.setiparams!(block::HEVGate, param::Real) = (block.theta = param; block)
Base.adjoint(block::HEVGate) = HEVGate(-block.theta)
YaoBlocks.cache_key(block::HEVGate) = block.theta
YaoAPI.nqudits(::HEVGate) = 2


# Here comes the actual AD part. 
function YaoBlocks.AD.backward_params!(st, block::PutBlock{D,<:Any, HEVGate{T}}, collector) where {D,T}
    out, outδ = st
    in = copy(out)
    ham = put(nqudits(block), block.locs => HEVGate)
    g = state(in |> ham) ⋅ state(outδ)
    pushfirst!(collector, -imag(g))
    nothing
end

function YaoBlocks.AD.apply_back!(st, block::PutBlock{D,<:Any, HEVGate{T}}, collector) where {D,T}
    out, outδ = st
    adjblock = block'
    YaoBlocks.AD.backward_params!((out, outδ), block, collector)
    in = apply!(out, adjblock)
    inδ = apply!(outδ, adjblock)
    return (in, inδ)
end
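For context on the rule above: since HEVGate(θ) implements U(θ) = e^{-iθ H} with H = HEVGenerator, we have dU/dθ = -i H U(θ), so the adjoint-mode parameter gradient is (up to Yao's adjoint conventions) -Im⟨Hψ_out, δ⟩, which is exactly the quantity backward_params! pushes into the collector.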

Maybe this helps getting AD to work more generally on GPUs?
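A hypothetical usage sketch (names and sizes arbitrary; heisenberg from YaoExtensions as in the earlier example):

```julia
using Yao, CuYao, YaoExtensions

n = 4
# a chain of Heisenberg-evolution gates on neighboring qubit pairs
c = chain(n, [put(n, (i, i + 1) => HEVGate(0.1)) for i in 1:n-1]...)
reg = rand_state(n) |> cu

# with the custom apply_back!/backward_params! rules above, this runs on the GPU
regδ, paramsδ = expect'(heisenberg(n), reg => c)
```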

jlbosse · Apr 20 '22

Thanks for the note! We recently developed an efficient CUDA implementation of time evolution using Krylov subspace methods, to be open-sourced soon; we will eventually integrate that implementation to solve this issue.

Roger-luo · Apr 21 '22

Thank you very much for the note! The AD rule in YaoBlocks is actually correct, but time evolution on the GPU somehow does not work. I am submitting a PR to ExponentialUtilities to fix the issue: https://github.com/SciML/ExponentialUtilities.jl/pull/84

It is only a temporary solution; @Roger-luo's implementation is definitely more trustworthy.

GiggleLiu · Apr 21 '22