GPU support in `default.mixed` for PyTorch
Feature details
Support for backpropagation in the `default.mixed` device has already been implemented for most interfaces. One remaining gap is the PyTorch interface with GPU support. We can already do something like:
```python
dev = qml.device("default.mixed", wires=1)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit(x):
    qml.RY(x, wires=0)
    return qml.expval(qml.PauliZ(0))
```
```pycon
>>> x = torch.tensor(0.3, requires_grad=True)
>>> out = circuit(x)
>>> out.is_cuda
False
>>> out.backward()
>>> x.grad
tensor(-0.2955)
>>> x.grad.is_cuda
False
```
What we want is the option to place tensors on the GPU so that all computation is performed there. Ideally, this should depend on whether any parameters are already on the GPU:
```pycon
>>> x = torch.tensor(0.3, requires_grad=True).to("cuda:0")
>>> out = circuit(x)
>>> out.is_cuda
True
>>> out.backward()
>>> x.grad
tensor(-0.2955)
>>> x.grad.is_cuda
True
```
Implementation
Since we want the implementation of `default.mixed` to remain interface-agnostic, the changes should mostly live in the dispatch functions in `qml.math`. This would likely involve more sophisticated dispatch logic for functions with multiple tensor arguments: if at least one argument resides on the GPU, all other arguments are moved to the GPU as well.
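A minimal sketch of what such device-promotion logic could look like. The helper name `promote_to_common_device` is hypothetical (not part of `qml.math`); it only assumes arguments expose torch-like `is_cuda`, `device`, and `to()` attributes:

```python
# Hypothetical helper: given the tensor arguments of a qml.math function,
# move everything to the GPU if at least one argument already lives there.
# The name and placement are illustrative, not existing PennyLane API.

def promote_to_common_device(*tensors):
    """Return the inputs with all tensors on a common device.

    If any argument is on a GPU, every other argument is moved to that
    GPU; otherwise the inputs are returned unchanged.
    """
    gpu_tensors = [t for t in tensors if getattr(t, "is_cuda", False)]
    if not gpu_tensors:
        # All arguments are on the CPU: nothing to do.
        return tensors

    target = gpu_tensors[0].device
    return tuple(
        t.to(target) if getattr(t, "device", target) != target else t
        for t in tensors
    )
```

A multi-argument dispatch function could call this helper before delegating to the interface-specific implementation, so the device handling stays in one place rather than being repeated per interface.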
How important would you say this feature is?
1: Not important. Would be nice to have.
Additional information
No response