
ModelTesting-functions tutorial fails with error


Describe the bug The fifth code cell in the ModelTesting-functions notebook fails when run.

To Reproduce Launch the ModelTesting-functions notebook and execute its cells. Cell 5 gives:

results.add_estimates(results2)
results.add_estimates(results3)

pygsti.report.construct_standard_report(
    results, title="Model Test Example Report", verbosity=1
).write_html("../tutorial_files/modeltest_report", auto_open=True, verbosity=1)
Running idle tomography
Computing switchable properties
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Input In [5], in <cell line: 4>()
      1 results.add_estimates(results2)
      2 results.add_estimates(results3)
----> 4 pygsti.report.construct_standard_report(
      5     results, title="Model Test Example Report", verbosity=1
      6 ).write_html("../tutorial_files/modeltest_report", auto_open=True, verbosity=1)

File ~/repos/pyGSTi/pygsti/report/factory.py:1263, in construct_standard_report(results, title, confidence_level, comm, ws, advanced_options, verbosity)
   1259         _warnings.warn("Not all data sets are comparable - no comparisions will be made.")
   1261 printer.log("Computing switchable properties")
   1262 switchBd, dataset_labels, est_labels, gauge_opt_labels, Ls, swLs = \
-> 1263     _create_master_switchboard(ws, results, confidence_level,
   1264                                nmthreshold, printer, None,
   1265                                combine_robust, idt_results, embed_figures)
   1267 if len(Ls) > 0 and Ls[0] == 0:
   1268     _warnings.warn(("Setting the first 'max-length' to zero, e.g. using"
   1269                     " [0,1,2,4] instead of [1,2,4], is deprecated and"
   1270                     " may cause 'no data to plot' errors when creating"
   1271                     " this report.  Please remove this leading zero."))

File ~/repos/pyGSTi/pygsti/report/factory.py:330, in _create_master_switchboard(ws, results_dict, confidence_level, nmthreshold, printer, fmt, combine_robust, idt_results_dict, embed_figures)
    327 switchBd.clifford_compilation[d, i] = est.parameters.get("clifford compilation", 'auto')
    328 if switchBd.clifford_compilation[d, i] == 'auto':
    329     switchBd.clifford_compilation[d, i] = find_std_clifford_compilation(
--> 330         est.models['target'], printer)
    332 switchBd.profiler[d, i] = est_modvi.parameters.get('profiler', None)
    333 switchBd.meta_stdout[d, i] = est_modvi.meta.get('stdout', [('LOG', 1, "No standard output recorded")])

KeyError: 'target'

Expected behavior I would expect a report to be generated and open.

Environment (please complete the following information):

  • pyGSTi version: 0.9.10.1
  • python version: 3.10.4
  • OS: macOS 12.3.1

johnkgamble avatar May 26 '22 18:05 johnkgamble

Thanks for the bug report, @johnkgamble. I can reproduce this on the latest version of the master branch, and I confirmed that the notebook works as of pygsti 0.9.10.post4. I dug into this a bit more, and I'm fairly sure it was broken by commit 78a357a on March 15th. That commit removed create_target_model from HasProcessorSpec and also stopped ModelTestEstimateResults from creating and including a target model by default. ModelTest can still take a target model at init, in which case I think this would all work as-is, but run_model_test from the drivers package (which this notebook uses) doesn't pass in a target model when it initializes its ModelTest object.
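The missing plumbing described here might look like the following minimal, self-contained sketch. The class and function names mirror the discussion but are illustrative stand-ins, not pyGSTi's actual signatures:

```python
# Hypothetical sketch of the driver forwarding an optional target model to
# ModelTest at init. Names are illustrative stand-ins, not pyGSTi's real API.

class ModelTest:
    """Stand-in for the ModelTest protocol object."""
    def __init__(self, model_to_test, target_model=None):
        self.model_to_test = model_to_test
        # May legitimately be None: a target model isn't always
        # well-defined in the model-testing setting.
        self.target_model = target_model

def run_model_test(model_to_test, target_model=None):
    """Stand-in for the driver: forwards the optional target model so
    downstream report code that looks up est.models['target'] can find
    one when the caller provides it."""
    return ModelTest(model_to_test, target_model=target_model)
```

With a change along these lines, a caller that has a target model can pass it through the driver, while existing callers are unaffected.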

I can see us fixing this a couple of ways:

  1. Add some plumbing in run_model_test to pass in a target model. A target model isn't always a well-defined concept in the model-testing setting, however.
  2. Wrap the call to find_std_clifford_compilation in the report factory in a try-except block, skip that step when no target model is present in the estimates object, and print something to the log indicating as much.

Maybe both?
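The try-except approach could be sketched as below. Here `est_models` stands in for `est.models`, and `find_std_clifford_compilation` is a dummy placeholder for the real helper, so the sketch is self-contained:

```python
# Minimal sketch of the defensive fix: skip the Clifford-compilation
# lookup when the estimate has no 'target' model, rather than letting a
# KeyError propagate out of report generation. All names here are
# illustrative stand-ins for the pyGSTi internals.

def find_std_clifford_compilation(target_model):
    # Placeholder for the real helper, which searches the standard
    # model packs for a matching Clifford compilation.
    return "compilation-for-%s" % target_model

def clifford_compilation_for(est_models, log=print):
    try:
        target = est_models['target']
    except KeyError:
        log("No target model in estimate; skipping Clifford compilation.")
        return None
    return find_std_clifford_compilation(target)
```

Report generation would then proceed past this step (with the compilation left unset) instead of aborting with the KeyError shown in the traceback above.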

Are there any other places in the code that assume the existence of a target model? Maybe we should do a scan for these down the road.

coreyostrove avatar Jul 12 '22 20:07 coreyostrove

@coreyostrove, thanks for the follow-up. This isn't critical for me right now, but at the very least I think this example should be removed. Have the pygsti devs considered running the notebooks in CI? I can't think of another way to keep regressions like this from happening in the future.

I've set up things like this before, so I'm happy to have a discussion about it if you're interested.

johnkgamble avatar Jul 13 '22 04:07 johnkgamble

Yes, we have considered running the Jupyter notebooks in CI; it just hasn't been a high priority. I'll revisit that the next time I have a few extra cycles. :)

sserita avatar Jul 13 '22 14:07 sserita

A patch for this issue (ModelTest not being passed a target model) exists on the feature-circuitlistsdesign-reporting branch, in commit 7c1e01b. Once this branch gets merged in we should be good to go.

P.S. We have automated notebook regression testing up and running on the latest version of develop, so we should be better equipped to catch these problems going forward.

coreyostrove avatar Apr 30 '23 19:04 coreyostrove