Custom engine with sequenced acquisitions
Hi all,
We are finally diving into sub-classing the acquisition engine and the new approaches for on-the-fly processing and file saving. I think with all of these tools (again, thank you so much for your work), we should be able to meet our needs.
I had a question about callbacks and saving images when doing a sequence acquisition. I see in the current acquisition code (here) that all of the images are collected from the sequence before they are returned.
We run some fairly intensive acquisitions where we need to stream the data to disk because we cannot hold the whole sequence in RAM. Any suggestions on how to handle this in our own acquisition engine?
In our current code, we:
1. Set up a hardware-sequenced acquisition with the camera as master
2. Create a Zarr store with an optimized chunk size
3. Start the sequenced acquisition through the core
4. Pop "N" images (set by the Zarr chunk size) out of the circular buffer
5. Stop pulling images from the buffer and write to the Zarr store
6. Repeat steps 4 & 5 until no more images remain in the buffer
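In simplified form, our pop-and-write loop (steps 4 & 5, repeated) looks roughly like this. This is a sketch, not our production code: `pop_next` and `remaining` stand in for `core.popNextImage` and `core.getRemainingImageCount`, and `store` for the chunked Zarr array:

```python
import numpy as np

def stream_buffer_to_store(pop_next, remaining, store, n_per_chunk, n_total):
    """Pop images in chunk-sized batches and write each batch to `store`.

    pop_next:    callable returning the next frame (e.g. core.popNextImage)
    remaining:   callable returning # of frames waiting (core.getRemainingImageCount)
    store:       array-like supporting slice assignment (e.g. a zarr array)
    n_per_chunk: frames per write, matched to the zarr chunk size
    n_total:     total number of frames expected in the sequence
    """
    written = 0
    batch = []
    while written + len(batch) < n_total:
        if remaining() == 0:
            continue  # real code would sleep/poll here rather than busy-wait
        batch.append(pop_next())
        if len(batch) == n_per_chunk:
            # write one full chunk, aligned to the store's chunking
            store[written : written + n_per_chunk] = np.stack(batch)
            written += n_per_chunk
            batch = []
    if batch:  # flush any trailing partial chunk
        store[written : written + len(batch)] = np.stack(batch)
```

With real hardware, `core.startSequenceAcquisition(...)` would be called before entering this loop.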
Looking at this really helpful post, I see that we can connect to frameReady, but it wasn't clear to me how to do that for a sequenced acquisition.
Thanks!
Hey Doug, great question. I can imagine a few possible solutions to this, and you can guide me to what seems like it would work best for you.
- the code you linked could very easily be improved by changing the signature of `exec_event` to return an `Iterable[PImagePayload]` rather than a sequence, that way it could yield frames as they're popped off the buffer. That is how it should have been implemented to begin with and will be an easy improvement.
- the chunk of code that does the popping could be extracted into a new method that you could more easily override in a subclass if you wanted to fully customize the popping/yielding of frames
- as for "Stop pulling images from the buffer and write to the Zarr store", you may want to simply circumvent using `frameReady` altogether for your needs (this remains to be seen). There's technically nothing stopping you from doing the writing to zarr in your acquisition engine directly, but if possible, i would encourage you to try to resist that for now, since going through the `frameReady` emission on the runner will allow you to use more of the built-in functionality. (let's continue discussing that bit)
- i would also encourage you to have a look at the Zarr PR https://github.com/pymmcore-plus/pymmcore-plus/pull/263 ... it's not far from merging, and you could perhaps use that instead of implementing your own zarr logic. I believe it should be flexible enough, but have a look at the `__init__` and see if you'd need more things exposed. The way that would be used is something like this:
```python
writer = OMEZarrWriter("file.zarr", ...)
with mda_listeners_connected(writer):
    core.mda.run(sequence)
```
but again, for your needs, it's definitely possible that something in there is simply not performant enough, in which case we should see whether it can be fixed.
those are some thoughts... let's keep discussing
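To illustrate the first point with toy stand-ins (a plain list instead of the core's circular buffer, plain values instead of image payloads), the difference between the current collect-then-return style and the proposed generator style is:

```python
def exec_event_sequence(buffer):
    # current style: collect every frame before returning, so the
    # whole sequence must fit in memory at once
    images = []
    while buffer:
        images.append(buffer.pop(0))
    return images

def exec_event_streaming(buffer):
    # proposed style: yield each frame as it is popped, so the runner
    # can emit frameReady (and a writer can stream to disk) without
    # ever holding the full sequence in RAM
    while buffer:
        yield buffer.pop(0)
```

With the generator version, the caller drives the loop one frame at a time, so downstream handlers see each frame as soon as it leaves the buffer.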
> - the code you linked could very easily be improved by changing the signature of `exec_event` to return an `Iterable[PImagePayload]` rather than a sequence, that way it could yield frames as they're popped off the buffer. That is how it should have been implemented to begin with and will be an easy improvement.
Sounds good!
> - the chunk of code that does the popping could be extracted into a new method that you could more easily override in a subclass if you wanted to fully customize the popping/yielding of frames.
I don't know if this is necessary, as the logic that is there pretty much matches ours. Plus, we need to sub-class `exec_event` anyway for our use, because we are using our own NI-DAQ class that mmcore doesn't handle.
> - as for "Stop pulling images from the buffer and write to the Zarr store", you may want to simply circumvent using `frameReady` altogether for your needs (this remains to be seen). There's technically nothing stopping you from doing the writing to zarr in your acquisition engine directly, but if possible, i would encourage you to try to resist that for now, since going through the `frameReady` emission on the runner will allow you to use more of the built-in functionality. (let's continue discussing that bit)
I'm 100% on board with using the built-in functionality. I want us to do this "right" the first time so we can give it as an example to others both within and outside my group.
> - i would also encourage you to have a look at the Zarr PR #263 ... it's not far from merging, and you could perhaps use that instead of implementing your own zarr logic. I believe it should be flexible enough, but have a look at the `__init__` and see if you'd need more things exposed. The way that would be used is something like this:
I definitely think we should re-use your zarr code. We will look into the `__init__` and see if we need anything else exposed.
We also need to do on-the-fly processing of the data to deskew/rotate the images from our OPM. Even "snaps" for the OPM microscope setup require sequenced acquisitions because we are always using the DAQ to blank the laser(s) and apply a voltage to the galvo driver.
In addition to the saving pattern above, for display we need something like:
```python
img_processor = DeskewOpm(acq_settings)
with mda_listeners_connected(img_processor):
    core.mda.run(sequence)
```
Ideally, the `img_processor` needs to know the number of channels being acquired ahead of time (driven by changes the user makes in the GUI) and then yield one layer per channel of the processed data back to napari. In this case, it will be on-the-fly deskewed data using a Numba-accelerated function. We already do this in our fully custom GUI and it works fine.
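Concretely, we have in mind something like the following sketch, where `process` stands in for our Numba-accelerated deskew and `emit` for whatever pushes a layer stack to the napari viewer (both are placeholders, not real APIs):

```python
import numpy as np

class ChannelSetProcessor:
    """Collect one frame per channel, then emit a processed per-channel stack.

    n_channels: number of channels, known before the acquisition starts
    process:    per-frame processing function (placeholder for deskew/rotate)
    emit:       callable receiving the stacked layers (placeholder for napari)
    """

    def __init__(self, n_channels, process, emit):
        self.n_channels = n_channels
        self.process = process
        self.emit = emit
        self._pending = []

    def frameReady(self, frame, event=None):
        self._pending.append(self.process(frame))
        if len(self._pending) == self.n_channels:
            # one layer per channel, in acquisition order
            self.emit(np.stack(self._pending))
            self._pending = []
```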
Beyond that, we are still figuring out how to make the above process loop until the user hits "stop". I don't have a lot of experience with signals, etc., so I'm not super helpful to the group that way. In our current code, every time a deskewed image is yielded for display, we check whether the user has clicked the stop button or changed the acquisition settings. If there are no changes, the same sequenced acquisition is run again. If the settings changed, the sequencing is set up again before running. If stopped, the sequence stops.
That does mean that, with our current strategy, the GUI doesn't respond to the stop button until data is yielded.
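One idea we've sketched for decoupling the stop check from data yields is to use `threading.Event` flags that the GUI thread can set at any moment (all names here are illustrative, not our actual code):

```python
import threading

class RepeatingAcquisition:
    """Re-run a sequenced acquisition until the user stops or changes settings.

    run_once:    runs one full sequenced acquisition (placeholder)
    reconfigure: re-does the hardware sequencing setup (placeholder)

    The flags are threading.Events, so a GUI thread can set them at any
    time and the loop sees them at its next check, instead of only when
    a deskewed image happens to be yielded.
    """

    def __init__(self, run_once, reconfigure):
        self.run_once = run_once
        self.reconfigure = reconfigure
        self.stop_requested = threading.Event()
        self.settings_changed = threading.Event()

    def loop(self):
        while not self.stop_requested.is_set():
            if self.settings_changed.is_set():
                self.reconfigure()  # settings changed: set up sequencing again
                self.settings_changed.clear()
            self.run_once()  # otherwise, run the same sequenced acquisition
```

A GUI stop button would just call `acq.stop_requested.set()` from its callback.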
Which one do you think makes sense to tackle first? We have a draft of the custom engine, maybe we should test that a bit more and share it with you?
ok @dpshepherd, frameReady should now be immediately emitted with #290. will cut a new release soon. will comment on the processing in a moment
regarding on-the-fly processing, or indeed any sort of serial processing before display or saving, this is something that I've intentionally left out of pymmcore-plus for the moment, knowing it would be an important topic. You can currently do this sort of thing yourself by chaining together or subclassing processors (and the pattern I'll show in a moment will probably continue to work indefinitely), but I also hope to provide a nicer interface eventually. Here's the deal:
- currently, every "handler" you provide to `mda_listeners_connected` will be passed arguments to their event handlers that are unaffected by other handlers. There is an intermediate `ThreadRelay` object that looks something like this:

```mermaid
graph LR;
  core.mda.events -->|frameReady| ThreadRelay;
  ThreadRelay --> Handler1;
  ThreadRelay --> Handler...;
  ThreadRelay --> HandlerN;
```

  at the moment, napari-micromanager itself doesn't use this pattern, but i intend to adopt it there. The idea being that the image data is dumped to the ThreadRelay and control returns immediately to acquisition, so that any sort of processing/saving (that doesn't need to modify the next image acquisition event) is handled asynchronously and doesn't slow down the imaging.
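to make the relay idea concrete, here's a minimal stand-in (names are illustrative, not pymmcore-plus internals): `frameReady` just enqueues the frame and returns immediately, and a worker thread fans frames out to handlers so slow processing/saving never blocks acquisition:

```python
import queue
import threading

class MiniRelay:
    """Toy version of the ThreadRelay idea (not the real implementation)."""

    def __init__(self, handlers):
        self.handlers = handlers
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def frameReady(self, frame):
        self._q.put(frame)  # returns immediately; acquisition continues

    def _drain(self):
        # worker thread: deliver each frame to every handler, in order
        while True:
            frame = self._q.get()
            if frame is None:  # sentinel used by close() to shut down
                return
            for handler in self.handlers:
                handler(frame)

    def close(self):
        self._q.put(None)
        self._worker.join()
```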
This means that if you wanted one of your handlers (whether it be a display or saving handler) to process data first, you currently could subclass like this:
```python
class SavingHandler:
    def frameReady(self, frame):
        print("Saving frame")

class ProcessingMixin:
    def frameReady(self, frame):
        # as shown, this mixin can only be used with a
        # super class that has a frameReady method
        print("processing frame")
        super().frameReady(frame)

class MyProcessor(ProcessingMixin, SavingHandler):
    ...

img_processor = MyProcessor()
with mda_listeners_connected(img_processor):
    core.mda.run(sequence)
```
To be clear, I don't think that's a pattern that everyone should have to do forever... but it's one way that this could work until we have a more formal framework for chaining together processors (with the possibility to fork the chain after a processor into, say, display, and saving handlers). That said, it's a low level construct that should allow you to do pretty much anything you want.
> Ideally, the img_processor needs to know the number of channels being acquired ahead of time (driven by changes the user makes to the GUI) and then yield one layer per channel of the processed data back to Napari.
the way that the handler pattern works is that any methods on your handler whose names match the events emitted by `mda.events` will be connected. So, to get information about the number of channels (or anything about the currently executing sequence), you can add a `sequenceStarted` method to your handler. Note also that the second argument to `frameReady` is an instance of `MDAEvent`... and if you used `useq.MDASequence` to generate your list of events, then the `event.sequence` attribute will point to the parent sequence that created it. You can see an example of us using that in the OMEZarrWriter here. Here's a quick example of both of those patterns:
```python
class MyProcessor:
    # grab info about the full experiment at the beginning
    def sequenceStarted(self, sequence: MDASequence, metadata: dict):
        self.n_channels = len(sequence.channels)

    def frameReady(self, frame, event):
        # if not using the sequenceStarted method
        if event.sequence:
            # grab info from event.sequence
            self.n_channels = len(event.sequence.channels)
```
about to take off on a flight... so will address remaining questions soon
hey @dpshepherd, just checking in. How are things going? Anything waiting on my feedback or action at this point?
@tlambert03 - we have a new team member, @nng-thienphu, who built a DAQ widget for digital and analog waveform generation. He is currently just running a small Qt app w/ pymmcore-plus calling the camera to test synchronizing the rolling shutter of a camera to a moving light sheet (ASLM style).
The next step for him is to start working on the custom acquisition engine, so probably something concrete in a couple weeks!