Pulses debug mode
Pulses are now stored with IDs, but there are cases in which the ID would mask too much of the information, and we need a way to easily retrieve the identity of the pulse.
My idea is to accompany pulses with a dump of their registry.
So, every time that we execute a sequence, we should propagate the information about the input. This could be done:
- serializing the sequence to a log file
- propagating a handle to the sequence inside the drivers, such that the information is always alongside the ID
I'm a bit against the second option, but it could make someone's life easier.
For the first one, instead, we could have a CLI (or some script) to handle the sequence storage, such that, given a sequence ID and a pulse ID, you could always retrieve enough information to reconstruct the whole pulse (or the whole sequence).
$ qibolab pulse 12903470.1203947
{
"frequency": ...,
...
}
$ qibolab pulse 12398412
[{ ... }, { ... }, ...]
and in Python this could be the JSON of the dict obtained from asdict(pulse), or [asdict(p) for p in sequence] for a whole sequence.
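For illustration, a minimal sketch of such a dump, assuming pulses are dataclasses exposing an id attribute (dump_sequence is a hypothetical helper, not an existing qibolab function):

import json
from dataclasses import asdict

def dump_sequence(sequence, path):
    # Map each pulse ID to its full parameters, so that the ID alone is
    # enough to recover the pulse later.
    registry = {str(pulse.id): asdict(pulse) for pulse in sequence}
    with open(path, "w") as f:
        json.dump(registry, f, indent=2)

A qibolab pulse <id> command would then only need to load such a file and look up the requested key.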
We could just avoid logging by default (to have a disk-free production mode) and, if configured with a suitable path, log every sequence that is executed (or played, depending on whether we want to do it at the platform or controller level).
@stavros11
Maybe I am missing something, but is the proposal to dump a mapping from id(pulse) to pulse parameters asdict(pulse) to disk? For debugging QUA scripts this would work, it would just be a bit more annoying, as we would have to dump two files (QUA script + pulse log) and match ids between the two.
but is the proposal to dump a mapping from id(pulse) to pulse parameters asdict(pulse) to disk?
Yes, and additional utilities on that base.
E.g. for QUA we could make a simple (external) tool that does the replacement for you.
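As a rough idea (purely hypothetical, not an existing tool), the replacement could be little more than a text substitution over the dumped script, using a pulse log like the one sketched above:

import json

def annotate_qua_script(script_path, pulse_log_path):
    # Replace every pulse ID appearing in the dumped QUA program with a
    # human-readable summary of its parameters taken from the pulse log.
    with open(pulse_log_path) as f:
        registry = json.load(f)
    with open(script_path) as f:
        script = f.read()
    for pulse_id, params in registry.items():
        script = script.replace(pulse_id, repr(params))
    return script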
E.g. for QUA we could make a simple (external) tool that does the replacement for you.
Indeed, that works for me. I just have two suggestions for the actual implementation:
- include an identifier in the pulse log file, both inside the JSON and even in the file name, that shows which execution produced that log. This could be a simple timestamp. I think it is relevant because the ids are different in every execution anyway.
- control the debug mode through an environment variable, so that it is possible to turn it on/off without touching the code we are debugging. For pulses we could have something like
import os

debug_path = os.environ.get("QIBOLAB_DEBUG", None)
if debug_path is not None:
    sequence.dump(debug_path)
inside platform.execute, to also allow the user to select where to dump the logs (and we can use the same path in the drivers if applicable).
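Combining the two suggestions, a possible sketch of that guard (the QIBOLAB_DEBUG variable, the helper name and the dump format are assumptions for illustration, not existing qibolab behavior):

import json
import os
import time
from dataclasses import asdict

def _maybe_dump(sequence):
    # No-op by default: dump only when QIBOLAB_DEBUG points to a directory.
    debug_dir = os.environ.get("QIBOLAB_DEBUG")
    if debug_dir is None:
        return
    # Tag the file name with a timestamp so that logs from different
    # executions do not overwrite each other.
    fname = f"pulses-{int(time.time() * 1000)}.json"
    with open(os.path.join(debug_dir, fname), "w") as f:
        json.dump({str(p.id): asdict(p) for p in sequence}, f, indent=2)

Enabling it would then only require setting QIBOLAB_DEBUG in the environment, without touching the script being debugged.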
- include an identifier in the pulse log file, both inside the JSON and even in the file name, that shows which execution produced that log. This could be a simple timestamp. I think it is relevant because the ids are different in every execution anyway.
I was considering doing this anyhow, but thinking a bit more I'm not sure it is actually needed: using UUIDs for pulses, they will always be unique, even across runs (they essentially contain a timestamp). So, in principle, we could even store all of them in a single file (it will just take longer to load, if used a lot).
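In that case the dump could simply append to one cumulative file, e.g. (a sketch reusing the hypothetical id -> asdict(pulse) records from above):

import json
from dataclasses import asdict

def append_to_log(sequence, path="pulses.jsonl"):
    # One JSON record per line; pulse UUIDs keep entries unique across runs.
    with open(path, "a") as f:
        for pulse in sequence:
            f.write(json.dumps({str(pulse.id): asdict(pulse)}) + "\n")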
- control the debug mode through an environment variable, so that it is possible to turn it on/off without touching the code we are debugging. For pulses we could have something like
Agreed: as said above, you do not want to dump in production, and the env var might be convenient to avoid dedicated scripts for testing.
Internal feature, postponed to after 0.2.0 release.