pennylane
[WIP] [ClassicalShadow 3] diffable `classical_shadow_expval`
Attempting to define a new measurement, classical_shadow_expval, which allows differentiating expectation values evaluated with classical shadows.
Context:
The problem with post-processing the classical shadows obtained from the classical_shadow measurement process is that it involves op-flow-style logic that assigns a fixed value or matrix depending on the index of the bitstring or recipe of the classical shadow. We (most likely) cannot differentiate through this, so everything is moved to the measurement level to avoid losing the gradient information.
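To illustrate the problem, here is a minimal NumPy sketch (not the PennyLane implementation) of the standard classical-shadow snapshot reconstruction: the measurement unitary is selected by *integer-indexing* a fixed list with the recipe, and autodiff frameworks cannot trace a gradient through that lookup.

```python
import numpy as np

# Fixed single-qubit basis-change unitaries for the Pauli recipes.
# Recipe 0 -> X basis, 1 -> Y basis, 2 -> Z basis (a common convention;
# the exact mapping here is an illustrative assumption).
Had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]]).conj().T
UNITARIES = [Had, Had @ Sdg, np.eye(2)]

def local_snapshot(bit, recipe):
    """Single-qubit shadow snapshot: 3 * U^dag |b><b| U - I."""
    U = UNITARIES[recipe]  # integer lookup -- this is where the autodiff chain breaks
    b = np.zeros(2)
    b[bit] = 1.0
    return 3 * U.conj().T @ np.outer(b, b) @ U - np.eye(2)

snap = local_snapshot(bit=0, recipe=2)  # Z-basis recipe, outcome 0
print(np.trace(snap).real)  # every snapshot has unit trace (unbiased density-matrix estimate)
```

Because the recipe only ever enters through the list index, no parameter dependence survives into `snap`; moving the estimator onto the measurement level sidesteps this.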
Disclaimer
This is just a prototype to experiment with some ideas.
Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md
with:
- A one-to-two sentence description of the change. You may include a small working example for new features.
- A link back to this PR.
- Your name (or GitHub username) in the contributors section.
This hacky solution would allow for sensible gradients:

```python
import pennylane as qml
from pennylane import numpy as np

wires = range(2)
dev = qml.device("default.qubit", wires=wires, shots=10000)
H = qml.PauliZ(0) @ qml.PauliZ(1)

@qml.qnode(dev)
def qnode(x):
    qml.RY(x, wires=0)
    return classical_shadow_expval(H)

@qml.qnode(dev)
def qnode0(x):
    qml.RY(x, wires=0)
    return qml.expval(H)

x = np.array(0.5, requires_grad=True)
```
>>> qnode(x), qnode0(x)
(tensor(0.918, requires_grad=True), tensor(0.8804, requires_grad=True))
>>> qml.grad(qnode)(x)
-0.5265000000000002
>>> qml.grad(qnode0)(x)
-0.46110000000000007
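As a sanity check on those shot-noisy numbers: for this circuit, RY(x) on wire 0 with wire 1 left in |0⟩ gives ⟨Z₀Z₁⟩ = cos(x) exactly, so both estimates should scatter around the analytic values:

```python
import numpy as np

x = 0.5
expval_exact = np.cos(x)   # ~0.8776, close to the sampled 0.918 and 0.8804 above
grad_exact = -np.sin(x)    # ~-0.4794, bracketed by the two shot-based gradients
print(round(expval_exact, 4), round(grad_exact, 4))  # prints 0.8776 -0.4794
```

Both the shadow-based and the standard expectation-value gradients are consistent with the exact result to within sampling noise at 10000 shots.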
One of the problems is that it is quite slow:
>>> %timeit qml.grad(qnode)(x)
12.5 s ± 390 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Using the vectorized default.qubit version, gradient evaluation is, as expected, much faster:
>>> %timeit qml.grad(qnode)(x)
123 ms ± 2.04 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Still relatively slow, but maybe acceptable?
Codecov Report
Merging #2871 (683c89c) into master (a8ec232) will increase coverage by 0.00%. The diff coverage is 100.00%.
@@ Coverage Diff @@
## master #2871 +/- ##
=======================================
Coverage 99.65% 99.65%
=======================================
Files 267 267
Lines 22420 22460 +40
=======================================
+ Hits 22343 22383 +40
Misses 77 77
| Impacted Files | Coverage Δ | |
|---|---|---|
| pennylane/__init__.py | 100.00% <ø> (ø) | |
| pennylane/_device.py | 98.13% <100.00%> (+<0.01%) | :arrow_up: |
| pennylane/_qubit_device.py | 99.64% <100.00%> (+<0.01%) | :arrow_up: |
| pennylane/devices/default_qubit.py | 100.00% <100.00%> (ø) | |
| pennylane/measurements.py | 100.00% <100.00%> (ø) | |
| pennylane/shadows/__init__.py | 100.00% <100.00%> (ø) | |
| pennylane/shadows/classical_shadow.py | 100.00% <100.00%> (ø) | |
| pennylane/tape/tape.py | 99.33% <100.00%> (ø) | |
I think the name could use some tuning; how about shadow_expval?
I am always in favour of shorter, more concise names :) (and fewer underscores!)
Currently there is a big overlap between the class and module doc-string. I am leaning towards shrinking the class one to have everything concentrated in the module description. Thoughts?
Other than that the review comments should be addressed now :)
> Currently there is a big overlap between the class and module doc-string. I am leaning towards shrinking the class one to have everything concentrated in the module description. Thoughts?
If I had to choose, I would go the opposite way: keep the module docstring minimal, and instead link to the class docstring for more details :)
Partly for maintainability, and partly because I think the docstrings are more likely to rank higher in search engine results.
Don't forget the changelog entry 😄