
Utility scale QAOA tutorial

Open miamico opened this issue 1 year ago • 13 comments

Adds a new tutorial for QAOA with code for running the experiment at utility scale. All dependencies on application modules are removed.

miamico avatar Apr 24 '24 18:04 miamico

Thanks for contributing to Qiskit documentation!

Before your PR can be merged, it will first need to pass continuous integration tests and be reviewed. Sometimes the review process can be slow, so please be patient. Thanks! 🙌

qiskit-bot avatar Apr 24 '24 18:04 qiskit-bot

CLA assistant check
All committers have signed the CLA.

CLAassistant avatar Apr 24 '24 18:04 CLAassistant


Happy to do style/language review when this is ready to go. Just lmk!

abbycross avatar Apr 25 '24 14:04 abbycross

Posting here some comments received through other channels:

  • before digging into the tutorial, I think it would help a ton to motivate the problem a bit more. You mention some applications, which is good, but I think it's also worth mentioning that it's an NP-hard problem and then comparing the best classical algorithm to the quantum algorithm. At the end of the tutorial you say that current processors are noisy, so the quantum algorithm doesn't offer an advantage. But how much better can they do in principle? If you just leave it at "quantum computers can do no better than classical," then the reader will wonder why they're bothering with learning this.

  • I find the starting formulation of the graph problem → QUBO → Hamiltonian mapping a little hard to follow, and also maybe unnecessarily convoluted? If you formulate the initial graph problem a little differently (as Jochen Rau does in this video), can't you just map directly from the graph cost function to a Hamiltonian (see the sketch after this list)? If there's something wrong with this approach and you'd like to stick with what you're doing, I would suggest explaining the steps a little more and maybe even avoiding ever writing "QUBO." It sounds intimidating, and the term itself is entirely unnecessary for the learner's understanding of the problem at hand.

  • Update to primitives V2
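
For reference, the kind of direct mapping being suggested looks roughly like this for Max-Cut (the edge weights $w_{ij}$ and the notation here are mine, not the tutorial's). The cost function over bit assignments is

$$C(\mathbf{x}) = \sum_{(i,j)\in E} w_{ij}\,\bigl(x_i + x_j - 2x_i x_j\bigr), \qquad x_i \in \{0,1\},$$

and substituting $x_i \mapsto (1 - Z_i)/2$ gives, up to an additive constant,

$$H_C = \sum_{(i,j)\in E} \frac{w_{ij}}{2}\,\bigl(1 - Z_i Z_j\bigr),$$

so the graph cost function can be read off directly as an Ising-type Hamiltonian without introducing a separate QUBO step.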

miamico avatar Apr 29 '24 21:04 miamico

View / edit / reply to this conversation on ReviewNB

javabster commented on 2024-05-02T20:53:17Z

I think we can remove the first cell with the authors and the patterns; it feels a bit out of place with the style of all the other tutorials.


View / edit / reply to this conversation on ReviewNB

javabster commented on 2024-05-02T20:53:18Z

I think these cells with the call to action and team 1 or 2 should be removed. I guess that was part of an interactive workshop, but it doesn't really work in tutorial format.


View / edit / reply to this conversation on ReviewNB

javabster commented on 2024-05-02T20:53:18Z

Also, the commented-out code in these call-to-action sections should probably be uncommented, I guess?


@miamico could you also please replace the networkx code with rustworkx? It'll help keep our requirements down.

E.g. for the first example:

import rustworkx as rx
from rustworkx.visualization import mpl_draw as draw_graph
import numpy as np

n = 5

# Build an undirected 5-node graph; each edge tuple is (node_a, node_b, weight)
graph = rx.PyGraph()
graph.add_nodes_from(np.arange(0, n, 1))
edge_list = [(0, 1, 1.0), (0, 2, 1.0), (0, 4, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0)]
graph.add_edges_from(edge_list)
draw_graph(graph, node_size=600, with_labels=True)

frankharkins avatar May 07 '24 17:05 frankharkins

GitHub won't let me suggest edits because of the size of the notebook file, so I am making some IBM-style updates (such as rewording not to use "we") locally, and will merge to this branch as soon as I've gotten through it!

abbycross avatar May 20 '24 14:05 abbycross

Looking at Pattern step 2 in the text: we should add a link to the docs for transpiler passes. Is this the right link to add? https://docs.quantum.ibm.com/api/qiskit/transpiler_passes#transpiler-passes -- or instead, something in the platform docs (https://docs.quantum.ibm.com/transpile)? cc: @kaelynj [screenshot: Pattern step 2 text]

abbycross avatar May 20 '24 15:05 abbycross

I am hesitant to "IBM-ify" the style of the Appendices, but also wonder if they belong at the end of the notebook since we don't use appendices anywhere else. Is there a better place to store and display that content? Thoughts @javabster @kaelynj @miamico ?

abbycross avatar May 20 '24 20:05 abbycross

I would vote for having appendices, as this seems to be a common occurrence in the utility notebook I've been working on. Maybe a link to an external reference would be enough, though.

miamico avatar May 21 '24 15:05 miamico

I suggest the approach Kevin Sung used in opening #1355, which is to incorporate appendix material into the docs. I will open an issue to track this

abbycross avatar May 31 '24 15:05 abbycross

Is it possible to have an alternative to the COBYLA optimization cell that open users can use to just load the output of a previous job? As an open user, sessions don't work, so I have to wait ~3 hrs for each iteration of this to run. This makes the notebook practically unrunnable for an open user.

kcmccormibm avatar May 31 '24 19:05 kcmccormibm

Thanks folks, this is very nearly there! Some last comments from me:

  • In the intro can you explain that you'll first run a small scale version of this problem and then scale it up to utility scale? Also in general I think we should follow the same heading structure as other similar tutorials like this one
  • notebook is missing a "requirements" section at the start
  • The first section under the "Part 1 (small scale)" heading feels like it should be part of the "map" step, no?
  • in general can you double check that the pattern headings are the same as the other recent tutorials we've shipped
  • also can we add an estimated QPU time at the top like we do for the other notebooks
  • small jargon thing - I've heard from leadership/marketing that we're supposed to say "AI enhanced transpiler passes" not "AI transpiler passes"
  • As the appendices are so small I think it would be better to just include them directly in the tutorial where it's most relevant.

Lastly, @kcmccormibm has a fair point: this tutorial in particular is supposed to be accessible for open users. The original notebook was runnable within the 10min QPU time available to open users; is that still the case after all the edits that have been made? I'm not sure there's much we can do about the classical optimization part taking a long time though 🤔 does anyone have any suggestions for how to improve that?

javabster avatar Jun 03 '24 18:06 javabster

Hi! I just wanted to clarify that I don't think the QPU time is a problem (although I haven't been able to fully run it to know for sure) -- for open users I think the main pain point will be the time in the queue. Waiting several hours in the queue between each iteration of the COBYLA optimization does not seem great, but then again that's probably unavoidable...

kcmccormibm avatar Jun 03 '24 19:06 kcmccormibm

Thanks folks, this is very nearly there! Some last comments from me:

  • In the intro can you explain that you'll first run a small scale version of this problem and then scale it up to utility scale? Also in general I think we should follow the same heading structure as other similar tutorials like this one

  • notebook is missing a "requirements" section at the start

  • The first section under the "Part 1 (small scale)" heading feels like it should be part of the "map" step, no?

  • in general can you double check that the pattern headings are the same as the other recent tutorials we've shipped

  • also can we add an estimated QPU time at the top like we do for the other notebooks

  • small jargon thing - I've heard from leadership/marketing that we're supposed to say "AI enhanced transpiler passes" not "AI transpiler passes"

  • As the appendices are so small I think it would be better to just include them directly in the tutorial where it's most relevant.

Most of this feedback has now been incorporated. I've moved the content from the appendices into an expanding subsection and double-checked the headers. The section headers for parts 1 and 2 should, I think, stay separate from the Patterns step headers, since we're essentially creating two different Patterns workflows (one at small scale and another at utility scale) and the reader should be able to follow that.

Lastly, @kcmccormibm has a fair point: this tutorial in particular is supposed to be accessible for open users. The original notebook was runnable within the 10min QPU time available to open users; is that still the case after all the edits that have been made? I'm not sure there's much we can do about the classical optimization part taking a long time though 🤔 does anyone have any suggestions for how to improve that?

As for this, unfortunately I don't think there's much we can do about queue times/executing within a session, and as far as I know we don't have the infrastructure set up to keep a pre-baked set of results ready to go for the reader (right @Eric-Arellano?). Also, @miamico do you have a rough idea of the QPU time that was needed to run this notebook?

kaelynj avatar Jun 04 '24 20:06 kaelynj

and as far as I know we don't have the infrastructure set up to keep a pre-baked set of results ready to go for the reader (right @Eric-Arellano?).

The only way I can think to do that would be hardcoding the results in the Jupyter notebook and doing something like this:

# Reader: change this value to `False` if you want to run on hardware to get
# your own results
USE_CACHED_RESULTS = True


...

if USE_CACHED_RESULTS:
    my_result = ...
else:
    my_result = sampler.run()
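
A slightly fuller sketch of that pattern, for illustration only (the job ID is a placeholder, and the hardware branch assumes qiskit-ibm-runtime 0.23+ plus an ISA circuit built in an earlier cell):

from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

# Reader: flip this to False to run on hardware and produce your own results
USE_CACHED_RESULTS = True
service = QiskitRuntimeService()  # assumes a saved account

if USE_CACHED_RESULTS:
    # Retrieve the result of a previously completed job by its ID (placeholder below)
    my_result = service.job("your-previous-job-id").result()
else:
    backend = service.least_busy(operational=True, simulator=False)
    sampler = Sampler(mode=backend)
    my_result = sampler.run([isa_circuit]).result()  # isa_circuit from an earlier cell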

Eric-Arellano avatar Jun 05 '24 13:06 Eric-Arellano

Hi All, this was just shared with me. Fantastic work! A few suggestions:

  • I think the current version is missing a "from qiskit import transpile" line right above "candidate_circuit = transpile(circuit, backend, optimization_level=3)"
  • The first 5-qubit run often returns non-optimal cuts, even on a noise-free simulator. When I increased reps to 2 and added parameters accordingly, I never again got a non-optimal cut. So maybe that's worth implementing to have a solid small-scale example.
  • I propose specifying 'tol' and 'maxiter' for COBYLA, although now that I changed the reps, it doesn't seem to matter.
  • Is there a reason for sticking with Sherbrooke as opposed to least_busy? No worries if there is something about benchmarking that required a consistent backend. But switching to least_busy saved me hours and hours when checking this.
  • There should probably be a deeper QAOA ansatz (more than just reps = 1) for the utility scale job. I'm not proposing very deep, since the goal is to make it runnable in open-plan time. But if the 5 qubit job needs reps = 2, then the large one should at least match that and address in text the need to tailor settings accordingly for utility scale jobs.

Thanks all! Beautiful tutorial!

christopherporter1 avatar Jun 05 '24 15:06 christopherporter1

Also, IMPORTANT: we should not be using transpile() anymore; we should be using preset pass managers instead.
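
Concretely, something along these lines (a sketch assuming the circuit and backend objects defined earlier in the notebook):

from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

# Build a level-3 preset pass manager targeted at the backend, then run it on the circuit
pm = generate_preset_pass_manager(optimization_level=3, backend=backend)
candidate_circuit = pm.run(circuit)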

javabster avatar Jun 10 '24 20:06 javabster

I've added the estimate for the QPU time. In my experiments it took 6s per COBYLA function call (about 40 calls in total). I've also switched from transpile to the pass manager.

miamico avatar Jun 11 '24 03:06 miamico

Just noticed/fixed a bug in the small scale example where the call to instantiate the sampler wasn't using the session being generated.

I'm also wondering about editing the content from @christopherporter1's feedback:

  • There should probably be a deeper QAOA ansatz (more than just reps = 1) for the utility scale job. I'm not proposing very deep, since the goal is to make it runnable in open-plan time. But if the 5 qubit job needs reps = 2, then the large one should at least match that and address in text the need to tailor settings accordingly for utility scale jobs.

I switched to reps=2 for the small-scale experiment and introduced tol=1e-3 in the minimization loop. Is that a reasonable tolerance to set, @miamico (especially for the larger-scale example)? Or could we get away with a lower tolerance?
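
For reference, a minimal sketch of that setup (the cost Hamiltonian and the exact-simulation cost function below are simplified stand-ins, not the notebook's code):

import numpy as np
from scipy.optimize import minimize
from qiskit.circuit.library import QAOAAnsatz
from qiskit.quantum_info import SparsePauliOp
from qiskit.primitives import StatevectorEstimator

# Max-Cut cost Hamiltonian for the 5-node example graph (edge list as qubit index pairs);
# minimizing the sum of Z_i Z_j terms maximizes the number of cut edges
edges = [(0, 1), (0, 2), (0, 4), (1, 2), (2, 3), (3, 4)]
cost_hamiltonian = SparsePauliOp.from_sparse_list(
    [("ZZ", [i, j], 1.0) for i, j in edges], num_qubits=5
)

# Deeper ansatz: reps=2 doubles the (gamma, beta) parameter pairs
ansatz = QAOAAnsatz(cost_operator=cost_hamiltonian, reps=2)
estimator = StatevectorEstimator()

def cost(params):
    # Expectation value of the cost Hamiltonian for the given QAOA parameters
    result = estimator.run([(ansatz, cost_hamiltonian, params)]).result()
    return float(result[0].data.evs)

x0 = 2 * np.pi * np.random.rand(ansatz.num_parameters)
res = minimize(cost, x0, method="COBYLA", tol=1e-3, options={"maxiter": 100})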

kaelynj avatar Jun 11 '24 18:06 kaelynj

Just noticed/fixed a bug in the small scale example where the call to instantiate the sampler wasn't using the session being generated.

I'm also wondering about editing the content from @christopherporter1's feedback:

  • There should probably be a deeper QAOA ansatz (more than just reps = 1) for the utility scale job. I'm not proposing very deep, since the goal is to make it runnable in open-plan time. But if the 5 qubit job needs reps = 2, then the large one should at least match that and address in text the need to tailor settings accordingly for utility scale jobs.

I switched to reps=2 for the small-scale experiment and introduced tol=1e-3 in the minimization loop. Is that a reasonable tolerance to set, @miamico (especially for the larger-scale example)? Or could we get away with a lower tolerance?

I think we could go even lower; it may be worth a try.

miamico avatar Jun 17 '24 14:06 miamico

I see some more wording tweaks to make, but GitHub won't let me do it here since the file is so big - so I will make the changes locally and put them into one commit.

abbycross avatar Jun 27 '24 13:06 abbycross

Had some merge conflict problems, so another commit is coming from me - stand by before merging, please

abbycross avatar Jun 27 '24 15:06 abbycross

Just running through the notebook and noticed a deprecation warning -- perhaps we should update the session formatting?

DeprecationWarning: The 'session' keyword arguments are deprecated as of qiskit-ibm-runtime 0.23.0 and will be removed no sooner than 3 months after the release date. Please use the 'mode' parameter instead. estimator = Estimator(session=session)

nathanearnestnoble avatar Jun 27 '24 15:06 nathanearnestnoble

This is looking good to me, I just have one question - I see a lot of references to quantum annealing and I'm just wondering how relevant it is to mention? I personally don't know much about quantum annealing so it doesn't really mean much to me, and it is somewhat confusing because we are a superconducting qubit company, not an annealing one, so the reference feels odd. Unless it is super important for the reader's understanding, I would just remove those references. Also, there is a typo in the very last sentence in the notebook (should be "around", not "aroud").

javabster avatar Jun 28 '24 20:06 javabster

This is looking good to me, I just have one question - I see a lot of references to quantum annealing and I'm just wondering how relevant it is to mention? I personally don't know much about quantum annealing so it doesn't really mean much to me, and it is somewhat confusing because we are a superconducting qubit company, not an annealing one, so the reference feels odd. Unless it is super important for the reader's understanding, I would just remove those references. Also, there is a typo in the very last sentence in the notebook (should be "around", not "aroud").

I think it's helpful context to mention annealing here. QAOA is inspired by and based on quantum annealing but is adapted for a gate-based quantum computer, and the only references I see to annealing provide that context to the reader.

kaelynj avatar Jul 01 '24 13:07 kaelynj

Just running through the notebook and noticed a deprecation warning -- perhaps we should update the session formatting?

DeprecationWarning: The 'session' keyword arguments are deprecated as of qiskit-ibm-runtime 0.23.0 and will be removed no sooner than 3 months after the release date. Please use the 'mode' parameter instead. estimator = Estimator(session=session)

With regard to this, I think the mode argument was only just released this month, so users may not be running the very latest version. We could simply add a comment within the relevant code cells to mention that if they are using qiskit-ibm-runtime 0.23.0 or later, they should use the mode argument.
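
The kind of in-cell note I have in mind (a sketch, assuming the session object created earlier in the notebook):

from qiskit_ibm_runtime import EstimatorV2 as Estimator

# qiskit-ibm-runtime 0.23.0 or later: the `session` keyword is deprecated in favor of `mode`
estimator = Estimator(mode=session)
# On older versions, keep the original form instead:
# estimator = Estimator(session=session)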

kaelynj avatar Jul 01 '24 13:07 kaelynj