Learning function for biased noise

Misty-W opened this pull request

Description

Add high-level learning function for learning-based PEC.

The API docs are updated; the user guide update will follow in a separate issue/PR: #1365

Fixes #821
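
For context, learning-based PEC tunes the parameters of an assumed noise model (here, the biased-noise strength epsilon and bias eta) by minimizing a loss between PEC-mitigated and exact expectation values over a set of training circuits. The sketch below only illustrates that loop; mitigated_value and exact_value are hypothetical placeholders for the executor/simulator plumbing, not Mitiq's actual API.

```python
import numpy as np
from scipy.optimize import minimize

def loss(params, training_circuits, mitigated_value, exact_value):
    """Mean squared error between PEC-mitigated and exact expectation values,
    as a function of the assumed biased-noise parameters."""
    epsilon, eta = params
    errors = [
        mitigated_value(circuit, epsilon, eta) - exact_value(circuit)
        for circuit in training_circuits
    ]
    return float(np.mean(np.square(errors)))

# Hypothetical usage: start from a rough guess and refine numerically.
# result = minimize(
#     loss,
#     x0=[0.05, 0.0],
#     args=(training_circuits, mitigated_value, exact_value),
#     method="Nelder-Mead",
# )
# epsilon_opt, eta_opt = result.x
```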

Checklist

Check off the following once complete (or if not applicable) after opening the PR. The PR will be reviewed once this checklist is complete and all tests are passing.

  • [x] I added unit tests for new code.
  • [x] I used type hints in function signatures.
  • [x] I used Google-style docstrings for functions.
  • [x] I updated the documentation where relevant.
  • [x] I added myself / the copyright holder to the AUTHORS file.

If some items remain, you can mark this as a draft pull request.

License

  • [x] I license this contribution under the terms of the GNU GPL, version 3 and grant Unitary Fund the right to provide additional permissions as described in section 7 of the GNU GPL, version 3.

Tips

  • If the validation check fails:

    1. Run make check-types (from the root directory of the repository) and fix any mypy errors.

    2. Run make check-style and fix any flake8 errors.

    3. Run make format to format your code with the black autoformatter.

    For more information, check the Mitiq style guidelines.

  • Write "Fixes #XYZ" in the description if this PR fixes Issue #XYZ.

Misty-W commented on Jun 14 '22

Codecov Report

Merging #1358 (e71074c) into master (6bcf212) will increase coverage by 0.00%. The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master    #1358   +/-   ##
=======================================
  Coverage   98.19%   98.19%           
=======================================
  Files          62       62           
  Lines        2819     2833   +14     
=======================================
+ Hits         2768     2782   +14     
  Misses         51       51           
| Impacted Files | Coverage Δ |
| --- | --- |
| mitiq/pec/representations/__init__.py | 100.00% <ø> (ø) |
| mitiq/pec/representations/learning.py | 100.00% <100.00%> (ø) |
| mitiq/executor/executor.py | 98.87% <0.00%> (-0.02%) ↓ |
| mitiq/_typing.py | 100.00% <0.00%> (ø) |
| mitiq/cdr/cdr.py | 100.00% <0.00%> (ø) |
| mitiq/__init__.py | 100.00% <0.00%> (ø) |
| mitiq/cdr/__init__.py | 100.00% <0.00%> (ø) |
| mitiq/rem/__init__.py | 100.00% <0.00%> (ø) |
| mitiq/ddd/insertion.py | 100.00% <0.00%> (ø) |
| mitiq/rem/measurement_result.py | |
| ... and 2 more | |

codecov[bot] commented on Jun 21 '22

As demonstrated in PR #1364, multiple frontends are supported and therefore the learning and loss functions are made public.

Misty-W commented on Jun 24 '22

  1. Seeding the tests may help avoid stochastic failures in continuous integration and pin down the problem (a minimal seeding sketch follows this list).
  2. Many files are changed. Rebasing against master may help solve this?
  3. Simplifying the problem used in the test (e.g., a simpler quantum circuit) may help prevent the optimization from failing or being unstable?
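
A minimal sketch of the seeding idea in point 1, assuming numpy-based randomness and a pytest-style test; the helper name is illustrative, not the PR's actual code:

```python
import numpy as np

SEED = 1  # any fixed value; the point is that CI and local runs see the same data

def make_training_angles(rng: np.random.Generator, size: int = 10) -> np.ndarray:
    """Stand-in for generating the random training circuits."""
    return rng.uniform(0.0, 2.0 * np.pi, size=size)

def test_training_data_is_deterministic():
    # Two generators built from the same seed produce identical training data,
    # so a CI failure can be reproduced exactly on a local machine.
    first = make_training_angles(np.random.default_rng(SEED))
    second = make_training_angles(np.random.default_rng(SEED))
    np.testing.assert_allclose(first, second)
```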

nathanshammah commented on Aug 17 '22

> 1. Seeding the tests may help avoid stochastic failures in continuous integration and pin down the problem.
> 2. Many files are changed. Rebasing against master may help solve this?
> 3. Simplifying the problem used in the test (e.g., a simpler quantum circuit) may help prevent the optimization from failing or being unstable?

  1. Implemented.
  2. Fixed with a rebase.
  3. Pinpointed the problem to nonsensically high PEC-mitigated expectation values (~30,000) for a couple of the training circuits. If it's not clear what's causing this on the existing circuit, I'll try a simpler one.

Misty-W commented on Aug 23 '22

@Misty-W great, thanks for the update. +1 on trying a simpler circuit.

nathanshammah commented on Aug 24 '22

@andreamari @nathanshammah, some good news: the tests for Rx, Rz, and CNOT, for epsilon = 0.05 and 0.1, are passing! The bad news is that the execution time is still too long, especially in CI. I'm not sure why the Linux tests in CI took about an hour longer than my local run, and the non-Linux tests were even slower. I also had to re-run the non-Linux tests, but I believe that was due to the long execution time.

To do:

  1. Combine the test functions for CNOT and rotation gates; right now they are separate because one executor gives the correct result for CNOT and the other gives the correct result for rotation gates.
  2. Find ways of shortening execution time --> suggestions welcome! Here are the ideas I have so far:
     a. Make the executor used in the Rx/Rz test work correctly for 2-qubit gates as well, since it doesn't iterate over the circuit each time to check whether/where to insert the noisy operation (a simplified sketch of that pattern follows this list).
     b. Try different insertion strategies for the noisy operation to see if they make a difference in execution time. I don't expect them to make a difference in this case, but it might be worth a try.
     c. The way I wrote the Rx and Rz tests makes them redundant, because I'm just inserting one custom noisy operation defined in terms of the biased noise channel. Unless I'm forgetting something, I think we can eliminate Rx or Rz without losing actual coverage.
  3. Other cleanup: change the hardcoded random seed for generate_training_circuits into an argument of learn_biased_noise_parameters, and clean up the gitignore.
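
On item 2a, here is a simplified sketch of an executor that appends noise after every operation, single- or two-qubit alike, without re-scanning the circuit to decide where to insert it. Depolarizing noise stands in for the biased-noise channel, and the function is illustrative rather than the executor actually used in this PR:

```python
import cirq
import numpy as np

def noisy_executor(circuit: cirq.Circuit, epsilon: float = 0.05) -> float:
    """Apply single-qubit depolarizing noise after every operation and
    return the probability of the all-zeros state."""
    noisy = cirq.Circuit()
    for moment in circuit:
        noisy.append(moment)
        for op in moment.operations:
            # Append noise on every qubit the operation touches.
            noisy.append(cirq.depolarize(epsilon).on_each(*op.qubits))
    rho = cirq.DensityMatrixSimulator().simulate(noisy).final_density_matrix
    return float(np.real(rho[0, 0]))

# The same loop handles rotation gates and CNOT without special-casing.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit([cirq.rx(0.1)(q0), cirq.CNOT(q0, q1)])
print(noisy_executor(circuit))
```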

Misty-W commented on Sep 14 '22

@Misty-W

> Find ways of shortening execution time --> suggestions welcome!

Have you tried setting the initial conditions for epsilon closer to (or even equal to) the theoretical optimal values? If the execution time is so large, we may need similarly extreme workarounds.

andreamari commented on Sep 15 '22

> Have you tried setting the initial conditions for epsilon closer to (or even equal to) the theoretical optimal values? If the execution time is so large, we may need similarly extreme workarounds.

@andreamari, indeed the intention of the test is to iterate only a few times by setting the initial value of the noise strength close to the true value. An initial offset of 1% from the true value works for CNOT, but in the case of Rx and Rz it causes assert abs(epsilon_opt - epsilon) < abs(epsilon0 - epsilon) to fail. I'm experimenting with different offset values for different gates.

We could also consider an alternate condition that allows for smaller offsets even if the optimization is not that accurate, but we run the risk of missing situations where the optimization doesn't run but the test still passes, as happened when I first opened this PR.
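
To make the offset and test condition above concrete, here is the pattern with purely illustrative numbers; epsilon_opt stands in for the optimizer's output:

```python
epsilon = 0.05                     # true biased-noise strength used in the test
offset = 0.01                      # 1% initial offset from the true value
epsilon0 = (1 + offset) * epsilon  # initial guess handed to the optimizer

epsilon_opt = 0.0502               # stand-in for the learned noise strength

# The test passes only if the optimizer ends up closer to the true value
# than the initial guess was.
assert abs(epsilon_opt - epsilon) < abs(epsilon0 - epsilon)
```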

Misty-W commented on Sep 15 '22

@andreamari, tests are passing and execution time is looking much better!

Using

  • Nelder-Mead solver instead of BFGS
  • num_samples = 50 instead of num_samples = 5000
  • an initial noise-strength offset applied in the opposite direction, i.e. epsilon0 = (1 - offset) * epsilon instead of epsilon0 = (1 + offset) * epsilon

I also removed the Qiskit test cases for now, as they are still slow (probably due to a "mismatched" application of noise in the executor). Unless we find a blocking issue with the test_learn_biased_noise_parameters cases, I'll work on reducing the Qiskit test execution time.
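
For reference, the Nelder-Mead change in the list above looks roughly like this with SciPy; the objective here is a stand-in for the actual biased-noise loss (which in the PR is evaluated with num_samples = 50 PEC samples):

```python
from scipy.optimize import minimize

epsilon = 0.05                     # true noise strength (illustrative)
offset = 0.1
epsilon0 = (1 - offset) * epsilon  # initial guess offset *below* the true value

def objective(x):
    # Stand-in for the biased-noise loss over the training circuits.
    return (x[0] - epsilon) ** 2

# Nelder-Mead is derivative-free, so noisy loss evaluations are less of a problem
# than for gradient-based solvers like BFGS.
result = minimize(objective, x0=[epsilon0], method="Nelder-Mead")
print(result.x)  # converges to approximately [0.05]
```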

Misty-W commented on Sep 19 '22

> Thanks @Misty-W! Would adding non-Cirq tests make CI too slow? What if we add a non-Cirq test just for biased_noise_loss_function? Would that still be too slow?
>
> A few other comments are below.

Thanks for the suggestion @andreamari, a non-Cirq test of biased_noise_loss_function is a good compromise. It only adds about 20 seconds running locally; hopefully it isn't much slower on GitHub :)

Misty-W commented on Sep 21 '22

@andreamari, thanks for the review yesterday! Anything else to address before merging?

Misty-W commented on Sep 22 '22

> @andreamari, thanks for the review yesterday! Anything else to address before merging?

After the small changes discussed during the Mitiq call, this is ready to go for me!

andreamari commented on Sep 23 '22

Merged at last! Many thanks for your guidance and feedback @andreamari and @nathanshammah!

cc: @willzeng, @natestemen

Misty-W commented on Sep 23 '22

Great work, and fantastic to see this feature now integrated.

willzeng commented on Sep 28 '22