
Upgrade null.qubit to the new device API

Open timmysilv opened this issue 1 year ago • 5 comments

Context: null.qubit doesn't quite do everything as it should, so it needs an upgrade of sorts.

Description of the Change: Port null.qubit to the new Device API. The diff for the device file can be tough to parse, so let me share some context for what I did. In the first commit, I just copy-pasted the contents of default_qubit.py, then replaced all actual computation methods with dummy methods (e.g. _simulate(), _vjp(), etc.). It's probably best to filter out that commit using the GitHub UI when reviewing the changes. I'd even recommend reviewing commits 2+3 together, then commits 4-end together as well, just so it's clear to see what I've actually done.
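To make the review guidance above concrete, here is a rough, hypothetical sketch (not the actual PR code) of the pattern described: keep the structure of default_qubit.py, but swap the computational helpers for stand-ins that return zeros instead of simulating anything. The helper names and result shapes below are illustrative only.

import numpy as np

def _simulate(circuit, n_measurements):
    # Stand-in for the real statevector simulation: return a zero result
    # per measurement rather than evolving any state.
    return tuple(np.float64(0.0) for _ in range(n_measurements))

def _vjp(circuit, cotangents, n_trainable_params):
    # Stand-in for the real vector-Jacobian product: return zero
    # derivatives rather than differentiating anything.
    return tuple(np.float64(0.0) for _ in range(n_trainable_params))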

I also defined ClassicalShadow.shape according to what it returns with default.qubit. While testing, I found that it ignores batching entirely; we might want to raise an error for that, but for now I'm just raising the error in null.qubit when it tries to go down that road.
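For reference, here is a minimal check (illustrative circuit and shot count, not from the PR) of the shape that classical-shadow results take on default.qubit, which is the behaviour ClassicalShadow.shape is modelled on: (2, shots, n_wires).

import pennylane as qml

dev = qml.device("default.qubit", wires=2, shots=100)

@qml.qnode(dev)
def shadow_circuit():
    qml.Hadamard(0)
    qml.CNOT(wires=[0, 1])
    return qml.classical_shadow(wires=[0, 1])

print(shadow_circuit().shape)  # expected: (2, 100, 2)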

Benefits: null.qubit does what it's supposed to!

Possible Drawbacks: NullQubit.compute_vjp doesn't work in all cases, particularly with JAX. This can be improved in the future, but the reason appears to be that JAX is doing some smart things behind the scenes with inputs and outputs that I'm not anticipating.

[sc-51495]

timmysilv avatar Feb 14 '24 23:02 timmysilv

Hello. You may have forgotten to update the changelog! Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

github-actions[bot] avatar Feb 14 '24 23:02 github-actions[bot]

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 99.65%. Comparing base (f9bdb32) to head (1d991e5). Report is 2 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5211      +/-   ##
==========================================
- Coverage   99.65%   99.65%   -0.01%     
==========================================
  Files         399      399              
  Lines       36901    36703     -198     
==========================================
- Hits        36774    36575     -199     
- Misses        127      128       +1     

:umbrella: View full report in Codecov by Sentry.

codecov[bot] avatar Feb 14 '24 23:02 codecov[bot]

Potentially worth seeing how this PR would change with #5200.

albi3ro avatar Feb 16 '24 15:02 albi3ro

Note: I replaced null.qubit preprocessing with default.qubit preprocessing.

Some initial results from running:

# Assumed setup not shown in the original snippet: imports and the device.
import numpy as np
import pennylane as qml

n_wires = 40
n_layers = 5

# Device under test (assumed from the context of this PR to be null.qubit).
dev = qml.device("null.qubit", wires=n_wires)

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_wires)
rng = np.random.default_rng(seed=1239594)
params = qml.numpy.array(rng.random(shape))

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return [qml.expval(qml.Z(i)) for i in range(n_wires)]


def cost(params):
    out = circuit(params)
    return qml.math.sum(out)

qml.grad(cost)(params)

Half of the time is in QuantumScript.hash, and 33% is in convert_to_numpy_parameters... hmm. We might want to make caching the default only with higher-order derivatives.

[Screenshot: profiling results for the gradient run above, 2024-02-16 10:28 AM]
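As an aside, here is a minimal sketch of switching caching off by hand in the meantime, assuming the QNode cache keyword argument and reusing dev, n_wires, and params from the snippet above:

@qml.qnode(dev, diff_method="parameter-shift", cache=False)
def circuit_no_cache(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return [qml.expval(qml.Z(i)) for i in range(n_wires)]

# With caching disabled, the parameter-shift run should avoid the
# QuantumScript.hash lookups that dominate the profile above.
qml.grad(lambda p: qml.math.sum(circuit_no_cache(p)))(params)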

albi3ro avatar Feb 16 '24 15:02 albi3ro

Also, plenty of time is spent evaluating the batch size :) So... this is null.qubit doing what it does best, right? Helping us benchmark and improve PennyLane at large?

timmysilv avatar Feb 20 '24 14:02 timmysilv

Great, I'll start reviewing it today. Realistically, it's going to take a few days for me to properly go over this PR, but I'll do it ASAP.

PietropaoloFrisoni avatar Feb 27 '24 19:02 PietropaoloFrisoni