[Return-types #6] Update adjoint differentiation
Context: Part of the ongoing effort to support gradient methods with the new return types system. This PR focuses on supporting the adjoint differentiation method.
Description of the Change:
Add a new method `adjoint_jacobian_new` to `QubitDevice` that computes the gradient using the adjoint method and returns the result as a tuple.
Example:

```python
dev = qml.device("default.qubit", wires=2)

with qml.tape.QuantumTape() as tape:
    qml.RX(0.5, wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(0.8, wires=1)
    qml.expval(qml.PauliZ(0))
    qml.expval(qml.PauliZ(1))
    qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
```

```pycon
>>> dev.adjoint_jacobian_new(tape)
(array([-4.79425539e-01, -3.46944695e-18]), array([-0.33401899, -0.6295392 ]), array([ 0.        , -0.71735609]))
```
Codecov Report

Merging #3052 (c7cb940) into master (0cf1bd4) will increase coverage by 0.00%. The diff coverage is 100.00%.

```
@@           Coverage Diff           @@
##           master    #3052   +/-   ##
=======================================
  Coverage   99.68%   99.68%
=======================================
  Files         273      273
  Lines       23594    23604      +10
=======================================
+ Hits        23520    23530      +10
  Misses         74       74
```

| Impacted Files | Coverage Δ |
|---|---|
| pennylane/gradients/finite_difference.py | 100.00% <ø> (ø) |
| pennylane/_qubit_device.py | 99.66% <100.00%> (+<0.01%) :arrow_up: |
```python
qml.enable_return()

dev = qml.device("default.qubit", wires=3)
params = np.array([np.pi, np.pi / 2, np.pi / 3])

with qml.tape.QuantumTape() as tape:
    qml.RX(params[0], wires=0)
    qml.RX(params[1], wires=1)
    qml.RX(params[2], wires=2)
    for idx in range(3):
        qml.expval(qml.PauliZ(idx))

# circuit Jacobians
dev_jacobian = dev.adjoint_jacobian(tape)
print("Adjoint", dev_jacobian)

tapes, fn = qml.gradients.finite_diff(tape)
print("Finite diff", fn(dev.batch_execute(tapes)))

tapes, fn = qml.gradients.param_shift(tape)
print("Param shift", fn(dev.batch_execute(tapes)))
```

```
Adjoint (array([-1.22464680e-16, 0.00000000e+00, 2.77555756e-17]), array([ 1.27035909e-33, -1.00000000e+00, 2.77555756e-17]), array([-4.08147434e-33, 5.55111512e-17, -8.66025404e-01]))
Finite diff ((array(5.10702591e-08), array(0.), array(-2.22044605e-09)), (array(0.), array(-1.), array(1.11022302e-09)), (array(-1.11022302e-09), array(1.11022302e-09), array(-0.86602543)))
Param shift ((array(-2.77555756e-16), array(0.), array(2.22044605e-16)), (array(0.), array(-1.), array(0.)), (array(0.), array(0.), array(-0.8660254)))
```
@eddddddy @antalszava From the example above, I notice a small problem with multiple measurements and multiple parameters: when there are multiple measurements, the inner structure should be a tuple of tuples (as `finite_diff` and `param_shift` return), but `adjoint_jacobian` returns a flat tuple of arrays. Some additional post-processing is needed to match that behavior.
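To illustrate the mismatch described above, here is a minimal sketch (not PennyLane code; `restructure_jacobian` is a hypothetical helper) of the kind of post-processing that would convert the flat per-measurement arrays into the nested tuple-of-tuples structure that `finite_diff` and `param_shift` produce:

```python
import numpy as np

def restructure_jacobian(jac_rows):
    """Hypothetical helper: turn a tuple of per-measurement 1D arrays
    (one entry per trainable parameter) into a tuple of per-measurement
    tuples of 0-d arrays, matching the nested return-type structure."""
    return tuple(tuple(np.array(v) for v in row) for row in jac_rows)

# Flat structure, as adjoint_jacobian currently returns it:
# one 1D array per measurement, with one entry per parameter.
flat = (np.array([0.0, 0.1, 0.2]), np.array([1.0, 1.1, 1.2]))

# Nested structure: outer tuple over measurements,
# inner tuple over parameters.
nested = restructure_jacobian(flat)
print(nested)
```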
@eddddddy I think there might be some problems coming from the finite-diff PR merge; let me know if you want me to take a look 👍
[sc-28430]