
difficulty with PVCAM

Open ZBaker11 opened this issue 1 year ago • 36 comments

It seems to take pymmcore-plus longer to change the filter wheel in my CREST Xlite-V3 than it does in Micro-Manager's MDA window.

It seems that the LDI laser unit is confirming its setting between each Z position, even though the channel was not changed.

Any help would be greatly appreciated.

ZBaker11 avatar Mar 07 '25 19:03 ZBaker11

longer to change the filter wheel ... confirming its setting between each Z position though the channel was not changed.

just to clarify: are you saying it feels like it's taking longer to actually change the wheel (as in, time from sending the command to being done changing), or that it's pausing at each z position, as if it were going to change the filter wheel but then not actually doing anything (because it doesn't need to), and it's that additional pause that you're noticing?

tlambert03 avatar Mar 07 '25 20:03 tlambert03

ZBaker11 avatar Mar 07 '25 22:03 ZBaker11

also, enabling ZStage-Use Sequencing in Micro-Manager's MDA does make it instant, but it's not actually taking an image at each z-layer: it takes 50 images, then makes one 50 µm move at the end.

ZBaker11 avatar Mar 07 '25 22:03 ZBaker11

longer to change the filter wheel ... confirming its setting between each Z position though the channel was not changed.

just to clarify: are you saying it feels like it's taking longer to actually change the wheel (as in, time from sending the command to being done changing), or that it's pausing at each z position, as if it were going to change the filter wheel but then not actually doing anything (because it doesn't need to), and it's that additional pause that you're noticing?

I was there with him at the scope. If you listened to the filter wheel changing between each channel's image, it seemed like there was a lag between each channel, not that the wheel inside was moving slowly.

ssalteko avatar Mar 11 '25 02:03 ssalteko

Thanks @ssalteko. @fdrgsp told me he was chatting with you guys about this so I’ll let him pick it up :)

I do think it’s possible that it’s taking extra time to set something that is already “set”. We don’t (currently) diff the config to prune settings that are no different from the existing state, so that’s a possible improvement… we can check whether mmstudio does that
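The diffing idea could be sketched as a pure function (a hypothetical illustration: the (device, property) → value dicts below stand in for the values core would report from its config data and property cache; this is not current pymmcore-plus behavior):

```python
# Hypothetical sketch of "diff before set": given the property settings a
# preset requests and the values already cached, keep only the ones that
# actually differ, so redundant serial commands are never sent.
def prune_unchanged(requested, cached):
    """Return the (device, property, value) settings that differ from cache."""
    return [
        (dev, prop, value)
        for (dev, prop), value in requested.items()
        if cached.get((dev, prop)) != value
    ]
```

Each pruned setting is one serial round-trip saved, at the cost of trusting the cache to reflect actual hardware state.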

tlambert03 avatar Mar 11 '25 11:03 tlambert03

Thank you! That PR definitely sped up the single-channel stack significantly, around 2x. I tried profiling with py-spy (attaching by PID) after installing the PR, and I'm not sure it worked like it's supposed to: it was sampling at 1000 Hz for 100x 1 µm slices (single channel, 10 ms exposure) but only collected around 700 samples. Here it is: profile.json

And here is a profile from the same thing but two channels: profile.json

And the current log file: pymmcore-plus.log

I notice when set config is called, I see these lines:

2025-03-13T10:18:37.146814 tid20784 [IFO,dev:89 North Laser Diode Illuminator] SENDING: RUN
2025-03-13T10:18:37.146817 tid20784 [IFO,dev:89 North Laser Diode Illuminator] RUN
2025-03-13T10:18:37.179701 tid20784 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: ok
2025-03-13T10:18:37.179704 tid20784 [IFO,dev:89 North Laser Diode Illuminator] SENDING: RUN
2025-03-13T10:18:37.179708 tid20784 [IFO,dev:89 North Laser Diode Illuminator] RUN
2025-03-13T10:18:37.212671 tid20784 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: ok

which looks like the LDI is being set to RUN twice for each image. I think if it were just set to RUN once at the beginning of the acquisition it would be the same, since it never goes into idle because we use autoshutter.

I'll also leave a bench here in case it's useful: bench.txt

I think another potential speedup might be to not issue a move command to the XYStage if its coordinates haven't changed. @ssalteko set up our stage with a small backlash on the XYStage, which helps somewhat with accuracy, but it means that if X or Y receive a zero-distance move command for each z slice, they will still engage the backlash and move forward and back slightly each time.

I also had another question I might as well ask here -- in pycromanager, I was using a global variable to keep track of the post-offset position, and updating future events using a hook to make sure it was at the right height (z-slice * z_number + post-focus offset + autofocus height), but I can't do that in pymmcore+ since the MDA events list is frozen. What's the best way I can handle the post-autofocus height with pymmcore+?

Thanks!

ZBaker11 avatar Mar 13 '25 14:03 ZBaker11

Awesome, thanks so much for the rapid testing! We can get this in and release soon, and continue improving. Just getting off a plane now but will follow up on the other stuff soon

tlambert03 avatar Mar 13 '25 14:03 tlambert03

great, those new profiles look good.

I notice when set config is called, I see these lines:

does that happen even if you manually just call core.setConfig(...)? run something like this:

from pymmcore_plus import CMMCorePlus

core = CMMCorePlus()
core.loadSystemConfiguration("your_config_file.cfg")
core.setConfig("Channel", "DAPI")  # or whatever

and then look at the most recent logs with (e.g.) mmcore logs -n 50...

I assume this duplicate is happening somewhere because of our engine, but it would be good to know for sure.

I think another potential speedup might be to not issue a move command to the XYStage if their coordinates haven't changed.

I just added this to #448 as well. The way I added it is to store the last commanded position, rather than checking whether the current position is exactly equal to the new commanded position. However, I'm a bit worried about that strategy. Feel free to try it out as is, but I want to consider it more.
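The store-the-last-commanded-position strategy might look something like this wrapper (a hypothetical sketch, not the #448 implementation; `DedupedXYStage` and its tolerance parameter are illustrative, and `core` only needs a `setXYPosition(x, y)` method):

```python
class DedupedXYStage:
    """Skip XY moves whose commanded position hasn't changed.

    Caches the last *commanded* position rather than querying the stage, so
    repeated move-to-same-spot commands don't re-trigger a backlash routine.
    """

    def __init__(self, core, tol=0.0):
        self._core = core
        self._tol = tol    # tolerance (same units as the stage) for "same position"
        self._last = None  # last commanded (x, y), or None if never moved

    def move(self, x, y):
        """Issue a move only if (x, y) differs from the last commanded position."""
        if self._last is not None:
            lx, ly = self._last
            if abs(x - lx) <= self._tol and abs(y - ly) <= self._tol:
                return False  # skipped: nothing to do
        self._core.setXYPosition(x, y)
        self._last = (x, y)
        return True
```

The trade-off mentioned above applies: if anything else moves the stage behind this wrapper's back, the cached position goes stale.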

What's the best way I can handle the post-autofocus height with pymmcore+?

Yeah: we don't use hooks in the same way. The MDAEvents are immutable records of what the user requested the microscope to do... and any customizations on top of that need to happen in the engine (or a custom engine subclass). It's documented a bit here: https://pymmcore-plus.github.io/pymmcore-plus/guides/custom_engine/

I will note that we already have some logic here to store the last offset measured during an autofocus routine... which is then applied here.

So you could either

  1. create a custom Acquisition Engine subclass that has any logic you'd like (there's lots to be discussed there if you'd like to know more)
  2. if you see any additions to our default acquisition engine that might make your life easier, feel free to make a suggestion or open a PR

tlambert03 avatar Mar 13 '25 19:03 tlambert03

@ZBaker11, could you run one more test for me when you have a moment? On the command line, please just run

mmcore bench -c your_config.cfg

that will (hopefully) let us know which specific devices take longer than others to do stuff

tlambert03 avatar Mar 14 '25 18:03 tlambert03

@tlambert03 Sure, here are the results. I'm not sure why the camera failed, I tried turning it on and off but it didn't help.

bench.txt

ZBaker11 avatar Mar 14 '25 20:03 ZBaker11

thanks!

tlambert03 avatar Mar 14 '25 20:03 tlambert03

And running mmc.setConfig("Confocal", "CF CY5")

The log was:

PS C:\Users\Crest\Desktop\incoming> mmcore logs -n 50
2025-03-17T09:52:50.454985 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SET:638=0
2025-03-17T09:52:50.454996 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: SHUTTER?638
2025-03-17T09:52:50.454996 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SHUTTER?638
2025-03-17T09:52:50.484974 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SHUTTER:638=CLOSED
2025-03-17T09:52:50.484983 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: TTL_INVERT?638
2025-03-17T09:52:50.484984 tid10872 [IFO,dev:89 North Laser Diode Illuminator] TTL_INVERT?638
2025-03-17T09:52:50.514975 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: TTL_INVERT:638=OFF
2025-03-17T09:52:50.514986 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: SET?640
2025-03-17T09:52:50.514987 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SET?640
2025-03-17T09:52:50.545505 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SET:640=0
2025-03-17T09:52:50.545517 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: SHUTTER?640
2025-03-17T09:52:50.545518 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SHUTTER?640
2025-03-17T09:52:50.561505 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SHUTTER:640=CLOSED
2025-03-17T09:52:50.561514 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: TTL_INVERT?640
2025-03-17T09:52:50.561515 tid10872 [IFO,dev:89 North Laser Diode Illuminator] TTL_INVERT?640
2025-03-17T09:52:50.575507 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: TTL_INVERT:640=OFF
2025-03-17T09:52:50.575515 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: SET?730
2025-03-17T09:52:50.575516 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SET?730
2025-03-17T09:52:50.589580 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SET:730=0
2025-03-17T09:52:50.589590 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: SHUTTER?730
2025-03-17T09:52:50.589591 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SHUTTER?730
2025-03-17T09:52:50.605505 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SHUTTER:730=CLOSED
2025-03-17T09:52:50.605513 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: TTL_INVERT?730
2025-03-17T09:52:50.605514 tid10872 [IFO,dev:89 North Laser Diode Illuminator] TTL_INVERT?730
2025-03-17T09:52:50.621504 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: TTL_INVERT:730=OFF
2025-03-17T09:52:50.621524 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: SPECKLE?
2025-03-17T09:52:50.621525 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SPECKLE?
2025-03-17T09:52:50.635507 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SPECKLE=ON
2025-03-17T09:52:50.635531 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: FAULT?
2025-03-17T09:52:50.635533 tid10872 [IFO,dev:89 North Laser Diode Illuminator] FAULT?
2025-03-17T09:52:50.651504 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: ok
2025-03-17T09:52:50.651510 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: F_MODE?
2025-03-17T09:52:50.651512 tid10872 [IFO,dev:89 North Laser Diode Illuminator] F_MODE?
2025-03-17T09:52:50.665556 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: F_MODE=IDLE
2025-03-17T09:52:50.665564 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: INT_MODE?
2025-03-17T09:52:50.665565 tid10872 [IFO,dev:89 North Laser Diode Illuminator] INT_MODE?
2025-03-17T09:52:50.679580 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: INT_MODE=PC
2025-03-17T09:52:50.679592 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: SH_MODE?
2025-03-17T09:52:50.679593 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SH_MODE?
2025-03-17T09:52:50.695505 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SH_MODE=PC
2025-03-17T09:52:50.695513 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: SLEEP?
2025-03-17T09:52:50.695514 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SLEEP?
2025-03-17T09:52:50.711507 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: SLEEP=0
2025-03-17T09:52:51.041638 tid10872 [IFO,Core] Did update system state cache
2025-03-17T09:52:54.656314 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: RUN
2025-03-17T09:52:54.656316 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RUN
2025-03-17T09:52:54.686269 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: ok
2025-03-17T09:52:54.686271 tid10872 [IFO,dev:89 North Laser Diode Illuminator] SENDING: RUN
2025-03-17T09:52:54.686273 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RUN
2025-03-17T09:52:54.717275 tid10872 [IFO,dev:89 North Laser Diode Illuminator] RECEIVED: ok

ZBaker11 avatar Mar 17 '25 13:03 ZBaker11

ok, then that suggests that the duplicate SENDING: RUN is caused upstream, in the driver itself (or core, or your config), and not by something pymmcore-plus is doing

tlambert03 avatar Mar 17 '25 13:03 tlambert03

I've been busy with some other projects this week, but I'm planning on writing some acquisition scripts / acquisition engine / more testing this week. One more thing I was interested in before I dive in was regarding displaying images as they come in. What options are there for that? Is that where Napari comes in? I wrote a custom image saving hook for pycromanager that I'm happy with and should be transferable to pymmcore+, so I'm not too worried about that.

Thanks again!

ZBaker11 avatar Mar 19 '25 21:03 ZBaker11

hey @ZBaker11, there are tons of options for that. it might be useful for us to have a Zoom call at some point. but briefly:

  • a very quick way to just see the most recent image with minimal dependencies is the ImagePreview widget in pymmcore-widgets. here's an example that also adds some convenience snap/live buttons
  • most of our efforts these days are on using ndv. But using it as a streaming viewer is still an open PR ...
  • you can also use pymmcore-gui or napari-micromanager ... both of those will respond to programmatic usage as well

however, to keep things simple while exploring basic acquisitions, I would suggest starting with the ImagePreview widget mentioned above.

For saving, there are lots of built in options here already... they are defined here. The simplest way to use it is to just use the output parameter to mda.run:

mmc.run_mda(sequence, output="~/Desktop/thing.ome.tiff")

tlambert03 avatar Mar 19 '25 21:03 tlambert03

Thanks @tlambert03, that sounds good. I'm trying to set up a custom acquisition engine, but I'm struggling a bit. The goals I have in mind are:

  1. Use my own autofocus method, which I'll paste below
  2. Be able to autofocus once per well. I'd like to be able to use an MDASequence where each event somehow contains information about whether to autofocus. Our workflow is to take 4x4 mosaics of each of our wells with the 4x objective, then use those images to place locations to image at higher magnifications. The problem is that the autofocus is a bit slow, and also doesn't seem to work as well if not over tissue, so we would like to try autofocusing once in the center of the well, and using that height (plus offset) for the whole 4x4 mosaic. Alternatively (or just to start with) we could autofocus at each mosaic image except the corners, which have the most problems with autofocus. Actually, now that I'm writing this, that could definitely end up being what we go with.
  3. Be fast

here's the autofocus method I'd like to use:

from time import sleep

CRISP_device_name = 'CRISP'

def lock_crisp(mmc):
    """Lock the CRISP unit."""
    mmc.setProperty(CRISP_device_name, 'CRISP State', 'Lock')
    print('CRISP state has been set to lock')

def wait_for_crisp_lock(mmc):
    """Wait for CRISP to enter the In Focus state."""
    crisp_state = get_crisp_state(mmc)
    while crisp_state != "In Focus":
        crisp_state = get_crisp_state(mmc)
    print('CRISP is now in the In Focus state')

def unlock_crisp(mmc):
    """Unlock the CRISP unit."""
    sleep(0.05)
    mmc.setProperty(CRISP_device_name, 'CRISP State', 'Ready')
    sleep(0.05)
    print('CRISP state has been set to READY state (unlock)')

def idle_crisp(mmc):
    """Set the CRISP state to Idle."""
    mmc.setProperty(CRISP_device_name, 'CRISP State', 'Idle')
    print('CRISP state has been set to idle')

def get_crisp_state(mmc):
    """Get the current CRISP state."""
    return mmc.getProperty(CRISP_device_name, 'CRISP State')

def autofocus(mmc):
    idle_crisp(mmc)
    sleep(0.05)
    lock_crisp(mmc)
    wait_for_crisp_lock(mmc)
    unlock_crisp(mmc)
    idle_crisp(mmc)
    sleep(0.05)
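One hedged refinement of the wait loop above: the while loop in wait_for_crisp_lock spins as fast as possible and never gives up, so a generic polling helper with a sleep and a timeout (hypothetical names, not part of the code above) may be gentler and safer, especially over empty well corners where CRISP may never lock:

```python
import time

def wait_for_state(get_state, target="In Focus", timeout=10.0, poll=0.05):
    """Poll get_state() until it returns `target`.

    Sleeping between polls avoids pegging a CPU core, and the timeout
    prevents hanging forever if the device never reaches the target state.
    """
    deadline = time.monotonic() + timeout
    while (state := get_state()) != target:
        if time.monotonic() > deadline:
            raise TimeoutError(f"state is {state!r}, expected {target!r}")
        time.sleep(poll)
    return state
```

wait_for_crisp_lock could then delegate to wait_for_state(lambda: get_crisp_state(mmc)).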

Could you point me in the right direction? Thanks!

ZBaker11 avatar Apr 01 '25 16:04 ZBaker11

I'm also having some problems with displaying images as they come in -- it works for the first image in an MDASequence, but after that it gives this error:

Exception in thread Thread-1 (run):
Traceback (most recent call last):
  File "C:\Users\Crest\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 1041, in _bootstrap_inner
    self.run()
  File "C:\Users\Crest\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 992, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Crest\AppData\Local\Programs\Python\Python313\Lib\site-packages\pymmcore_plus\mda_runner.py", line 239, in run
    raise error
  File "C:\Users\Crest\AppData\Local\Programs\Python\Python313\Lib\site-packages\pymmcore_plus\mda_runner.py", line 233, in run
    self._run(engine, events)
  File "C:\Users\Crest\AppData\Local\Programs\Python\Python313\Lib\site-packages\pymmcore_plus\mda_runner.py", line 347, in _run
    for payload in output:
  File "C:\Users\Crest\AppData\Local\Programs\Python\Python313\Lib\site-packages\pymmcore_plus\mda_engine.py", line 221, in exec_event
    yield from self.exec_single_event(event)
  File "C:\Users\Crest\AppData\Local\Programs\Python\Python313\Lib\site-packages\pymmcore_plus\mda_engine.py", line 317, in exec_single_event
    yield ImagePayload(self._mmc.getImage(cam), event, meta)  # type: ignore[misc]
  File "C:\Users\Crest\AppData\Local\Programs\Python\Python313\Lib\site-packages\pymmcore_plus\core_mmcore_plus.py", line 1711, in getImage
    super().getImage(numChannel)
RuntimeError: Camera image buffer read failed.

Update: I made some progress on the custom acquisition engine, but I'm still looking for a way to pass some information about whether or not to autofocus via the event. I'm guessing I'll need to set up each event independently and include it in the metadata, rather than using an MDASequence? I'm also still stuck on the error with displaying images.

mmcore2.py.txt

Update 2: I made some more changes to the acquisition engine, and it's much faster now. This is probably more because of our inefficient config than the default engine, though. I'm now setting all of the properties common between our channel presets at the beginning of the run, and setting the properties unique to the channels as the events happen. There are still four things I have open questions about:

  1. I'd like the option to disable printing, or at least rich printing, since I think it's taking up a non-insignificant amount of time.
  2. I see a duplicated shutter close command sent after the image has been captured. I don't think I'm the one doing this, and I'm not sure if it needs to be sent at all. I'll paste the log below.
  3. I still have the error with displaying images as they come up. This might be related to some lines regarding camera buffer init in the log? It seems like there are a couple errors during initialization where I'm not sure if they're important or how to fix them.
  4. Is there any way to pass information on a per-event basis using MDASequence? I could keep track of this (whether to autofocus) with global vars or something, but I'm wondering if I can do something a little more elegant.
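On question 4 above: one dependency-free way to sketch per-event flags is to build the event list yourself and carry the flag in a metadata dict. The sketch below uses plain dicts standing in for useq's MDAEvent (which accepts a metadata mapping that a custom engine could read); build_events and af_every are hypothetical names:

```python
def build_events(positions, af_every=4):
    """Build per-position events, flagging every Nth one for autofocus.

    Plain dicts model MDAEvent fields here; in pymmcore-plus the same flag
    could live in MDAEvent(..., metadata={"autofocus": ...}) for a custom
    engine's setup to inspect.
    """
    return [
        {"x_pos": x, "y_pos": y, "metadata": {"autofocus": i % af_every == 0}}
        for i, (x, y) in enumerate(positions)
    ]
```

A custom engine could then check event metadata for the "autofocus" key and run the CRISP routine only when it is set.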

pymmcore-plus.log

mmcore2.py.txt

ExpressiveConfig14.cfg.txt

another update: I added multithreading to the save function, which also helped a little with speed, and changed the pathing to be snaking across the plate. Both just things on my side, but I might as well keep the code here updated in case you guys are able to find any obvious things I'm doing wrong. After implementing exec_event() in my custom engine (which I basically just copied from the default engine, so I don't know why this helped), I now only see one shutter=1 and one shutter=0 command for each image, which is good. I'm still not sure if the shutter=0 command is needed with autoshutter, or where it's coming from.

mmcore2.py.txt

ZBaker11 avatar Apr 01 '25 18:04 ZBaker11

Hi @tlambert03, I don't mean to bother you, but I found another issue. I was timing some calls this morning, and noticed that this call takes ~108 ms:

self._mmc.setProperty('Camera-1', 'Exposure', int(event.exposure))

and this call takes {exposure time + ~95 ms}:

self._mmc.snapImage()

Which makes me think there might be some problem with how the camera is being addressed. I'm fairly confident micromanager doesn't have this delay. Do you have any ideas?
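For reference, per-call means like those above can be measured with a minimal perf_counter harness (a hypothetical sketch, not the timing code used here):

```python
import time

def mean_time(fn, n=20):
    """Average wall-clock seconds per call of fn() over n calls."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n
```

Comparing mean_time of the same call from Python and from the Java side is one way to localize where a delay lives.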

Since my last update, I've made a gui / app, mostly intended to mimic micromanager, but tailored to our specific use case. It's probably a bit messy, I haven't made any other medium-scale python apps before. app.zip

Thanks again for any help!

ZBaker11 avatar Apr 11 '25 19:04 ZBaker11

self._mmc.snapImage() literally does nothing but call the C++ code via pymmcore, so the camera isn't being addressed here any differently than it would be via the Java MM Studio app. I'm afraid I don't have any immediate suggestions. I think it would be best if you tried to narrow it down a bit further. For example, make a config that only has a camera, then call and time snapImage() programmatically from both Python and the Java app (either using pycromanager or Java Beanshell) and see what you get. If you still see a difference in that minimal example, that would be very interesting. If not, then we'll need to build it back up one device at a time (or something) to see what might possibly be going on.

import time
from pymmcore_plus import CMMCorePlus

core = CMMCorePlus()
# replace with your camera
core.loadDevice("Camera", "DemoCamera", "DCam")
core.initializeAllDevices()
core.setCameraDevice("Camera")
t0 = time.time()
core.snapImage()
t1 = time.time()
print("Time taken to snap image: ", t1 - t0)

also, check running mmcore bench and see if the snapImage line is as slow as it is in your full program.

check things like your autoshutter setting. and any delays that you might have on the autoshutter device.

etc...

tlambert03 avatar Apr 11 '25 21:04 tlambert03

literally does nothing but call the C++ code via pymmcore

actually... i take that back... it does have a little additional logic related to autoshutter:

https://github.com/pymmcore-plus/pymmcore-plus/blob/75cea0737ee6624209709c907991df4d8185f325/src/pymmcore_plus/core/_mmcore_plus.py#L1429-L1444

so, perhaps also check to see if, with your config on your system, core.getAutoShutter or core.getShutterDevice() are slow? or if you have connected events to core.events.propertyChanged?

tlambert03 avatar Apr 11 '25 21:04 tlambert03

So I'm not really sure what's going on with this, but from the tests:

mmcore bench with the config:

┃ Method                    ┃ Time (ms)                                                   ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Device: Core              │ ------                                                      │
│ getDeviceAdapterNames     │ 1.0520                                                      │
│ getLoadedDevices          │ 0.0040                                                      │
│ getSystemState            │ 1774.6210                                                   │
│ Device: Camera-1          │ ------                                                      │
│ getMultiROI               │ 0.0330                                                      │
│ getExposure               │ 0.0030                                                      │
│ snapImage                 │ Error in device "Camera-1": Unknown error in the device (1) │
│ getImage                  │ Camera image buffer read failed.                            │
│ getImageWidth             │ 0.0010                                                      │
│ getImageHeight            │ 0.0010                                                      │
│ getImageBufferSize        │ 0.0010                                                      │
│ getImageBitDepth          │ 0.0010                                                      │
│ getNumberOfComponents     │ 0.0000                                                      │
│ getNumberOfCameraChannels │ 0.0000                                                      │
│ Device: XYStage           │ ------                                                      │
│ getXYPosition             │ 29.8980                                                     │
│ getXPosition              │ 30.1640                                                     │
│ getYPosition              │ 29.9780                                                     │
│ setXYPosition             │ 29.8790                                                     │
│ setRelativeXYPosition     │ 30.0240                                                     │
│ isXYStageSequenceable     │ 0.0020                                                      │

getShutterDevice() and getAutoShutter() take microseconds, and I haven't connected any methods to propertyChanged.

Loading just the camera from PVCAM, setting the exposure takes 61 ms. core.snapImage() does not work, and (I think) has never worked, outside of an MDA.

If I change our config to load the camera from DemoCamera as DCam instead of from PVCAM as Camera-1, setting the exposure and calling snapImage() are both near-instant.

We have a Prime BSI Express and I'm using the latest MM nightly build. Is there anything else I can do to debug?

ZBaker11 avatar Apr 15 '25 15:04 ZBaker11

Ah yes, I’ve always had difficulty using the pvcam driver simply (I’ve seen that buffer error that you mention). But I can’t remember the details on that driver. @marktsuchida? Do you remember?

tlambert03 avatar Apr 15 '25 16:04 tlambert03

I don't remember any problem with PVCAM in general. (There used to(?) be occasional cases where the driver went into a bad state (fixed by rebooting the PC), but probably not in such a subtle way.)

One slightly unusual thing is that PVCAM makes use of the OnPropertiesChanged (note the plural) notification as a way to indicate that ranges or allowed values of properties have changed. This may sometimes(?) happen when the exposure is set, too.

Setting the exposure could well take time, depending on how the driver is implemented and the device, though I don't know if 61 ms is normal. It does look like the device adapter updates quite a bit of state when almost anything (including exposure) changes.

You can look at the time taken by the PVCAM device adapter (as opposed to [py]mmcore[-plus]) by enabling debug logging and looking for Will set property/Did set property entries (or Will set camera ... exposure etc.)

I would have expected snaps to work -- does it also not work in MMStudio?

marktsuchida avatar Apr 15 '25 18:04 marktsuchida

I don't remember any problem with PVCAM in general

what I'm recalling is that, with other Photometrics cameras I have used, simply doing the standard loadDevice, initializeDevice, snapImage programmatically hasn't worked... and, while I can't remember the details, I remember needing to somehow go live first (almost as if it needed a brief sequence acquisition to prime some internal buffer or something like that).

In any case, I think this is sort of a tangential issue here (it's just getting in the way of debugging @ZBaker11's slow snapImage() performance, because it makes it harder to isolate the problem without snapping in the context of a big MDA or something like that).

@ZBaker11, I'm afraid you might need to dig a bit deeper on your own... try to snapImage() in various contexts, in and outside of an MDA, and time it. See if you can figure out the bare minimum required to get snapImage() to work, including preceding it by a brief startSequenceAcquisition/stopSequenceAcquisition?

tlambert03 avatar Apr 15 '25 18:04 tlambert03

From a bit more testing, start/stop SequenceAcquisition don't seem to help, but I can call snapImage() without error if I first call getImage(). So maybe the camera's image buffer is getting clogged? I think the last time I might've been trying to call snapImage() more than once without calling getImage().

When I load just the camera and call setProperty for exposure (alternating 200, 201, 200, 201, ...), then snapImage, then getImage, I get these mean times:

setProperty mean time: 0.108435 s
snapImage mean time: 0.235138 s
getImage mean time: 0.001850 s

the snapImage time seems somewhat reasonable -- it seems like there is something else in the config that adds some time to that for whatever reason. The exposure setting time is probably more of a problem.

I also notice core.getLastImage() seems to always throw IndexError: Circular buffer is empty.

These things would be good to fix for speedups, but I think the last functional problem that I'd like to fix is the camera image buffer read failed error, which is happening on the ImagePreview widget for viewing MDA images as they come in. The error message is the same as when I:

core.snapImage()
core.getImage()
core.getImage()

ZBaker11 avatar Apr 22 '25 21:04 ZBaker11

hey @ZBaker11 glad you're still working at it. a couple quick notes here:

it is interesting that setProperty alone incurs 100ms... that probably has to do with the camera driver itself, and it would be good to verify that (programmatically) on the java side too. (i don't have much experience there, but see https://micro-manager.org/Script_Panel_GUI)

I can call snapImage() without error if I first call getImage() ...

  • that's pretty strange... If you call getImage() before ever having called snapImage() then I would expect you to receive a RuntimeError: Issue snapImage before getImage. ... so you probably have called snapImage before. In all cases, here, if you can show me some actual code (i.e. paste the code you ran, and the output you got), it would be easier to help you debug.

maybe the camera's image buffer is getting clogged

I can't think of anything that would fit that description...

I also notice core.getLastImage() seems to always throw IndexError: Circular buffer is empty.

yes, getLastImage is fundamentally different from getImage(). the former is strictly used to retrieve images from the circular buffer, which is where images are stored when you start a sequence with startSequenceAcquisition or startContinuousSequenceAcquisition. So, unless you have called one of those methods, then none of the methods that retrieve images from the circular buffer will work. See also getRemainingImageCount.
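The circular-buffer semantics described above can be modeled in a few stdlib lines (a toy illustration of the behavior, not core's actual implementation; all names here are hypothetical):

```python
from collections import deque

class ToyCircularBuffer:
    """Toy model of core's circular buffer behavior.

    Last-image reads fail until a (simulated) sequence acquisition has
    inserted frames, and old frames are overwritten once capacity is reached.
    """

    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def insert_image(self, img):
        # this is what a running sequence acquisition does behind the scenes
        self._buf.append(img)

    def get_remaining_image_count(self):
        return len(self._buf)

    def get_last_image(self):
        if not self._buf:
            raise IndexError("Circular buffer is empty.")
        return self._buf[-1]
```

This mirrors why getLastImage() raises until startSequenceAcquisition (or similar) has actually pushed frames into the buffer.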

These things would be good to fix for speedups

what would need to be done first is to actually figure out at what level the delays are happening. remember, this is Python code that wraps C++ code. So in every case, we need to determine whether it's the C++ driver or the Python wrapper that is causing the problem. That's why reducing these things to the smallest reproducible example is useful.

tlambert03 avatar Apr 23 '25 13:04 tlambert03

Sorry, I said that the wrong way. What I meant to say was:

snapImage() -> fine

snapImage() getImage() -> fine

getImage() -> RuntimeError: Issue snapImage before getImage.

snapImage() snapImage() -> RuntimeError: Error in device "Camera-1": Unknown error in the device (1)

snapImage() getImage() getImage() -> RuntimeError: Camera image buffer read failed.

Sorry that formatted weird; I'm not very good at markdown

Maybe that's all intended? At the moment I'm most interested in getting the ImagePreview widget to work with the MDA, which right now is giving the Camera image buffer read failed error.
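Given the pairing behavior above (double snap or double get both error out), one defensive option is to route every snap through a wrapper that immediately retrieves the frame, so the snap/get protocol never gets unbalanced. A hypothetical sketch (`SafeSnapper` is illustrative; `core` just needs snapImage() and getImage() methods):

```python
class SafeSnapper:
    """Keep snapImage()/getImage() strictly paired: one get per snap."""

    def __init__(self, core):
        self._core = core

    def snap(self):
        self._core.snapImage()
        return self._core.getImage()
```

Whether this helps depends on nothing else (e.g. a preview widget) also pulling from the same image, which is what the ImagePreview error suggests may be happening.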

Also here is the code I used to get the mean times:

setProperty mean time: 0.108492 s
snapImage mean time: 0.235628 s
getImage mean time: 0.001822 s

testing.py.txt

ZBaker11 avatar Apr 23 '25 20:04 ZBaker11

Aha! Double snap image without get :). I see!

That’s helpful. And I don’t think that should happen… so let’s look in the c++ code for hints

tlambert03 avatar Apr 23 '25 22:04 tlambert03

Hi @tlambert03, I'm back after working on some other things. Sorry to keep bothering you. There is still the issue of the added delays when changing exposure, but I think I'm going to call the speed good enough, at least for now. I'm currently working on adding an image preview to the app I'm writing. When I try to use the ImagePreview widget, I get one image that updates the preview, then immediately get the camera buffer read failed error. I think ImagePreview takes the image out of the camera buffer, and then the image-saving function (or something) tries to read the image that isn't there and throws the error. Maybe ndv would be better? Could you point me towards any existing projects using ndv in a similar way, or a way around the image buffer problem (reading the buffer without pulling the image out)?

ZBaker11 avatar Jun 02 '25 17:06 ZBaker11