Shalin Mehta
> A special parser would be preferable, but (1) the format seems poorly documented compared to OME-TIFF, (2) there are variations between MicroManager versions, (3) there is no public set...
@ziw-liu > Another question is how we determine the training-time Z sampling for better utilization of defocus information. This can potentially be estimated from magnification, Z step...
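A back-of-the-envelope sketch (not from the codebase) of how the training-time Z sampling might be estimated from the optics and the acquisition Z step; the widefield depth-of-field formula and the numbers below are illustrative assumptions:

```python
# Rough estimate: how many acquired Z slices span one depth of field,
# which gives a sense of how coarsely Z can be sampled at training time
# without losing the defocus information.

def z_slices_per_depth_of_field(wavelength_um: float, numerical_aperture: float,
                                z_step_um: float, refractive_index: float = 1.0) -> int:
    depth_of_field = wavelength_um * refractive_index / numerical_aperture**2
    return max(1, round(depth_of_field / z_step_um))


# e.g. 532 nm light, 0.55 NA air objective, 0.25 um acquisition Z step
print(z_slices_per_depth_of_field(0.532, 0.55, z_step_um=0.25))
```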
@yingmuzhi glad to hear that our data and code have been helpful. I suspect the problem you are seeing is due to normalization per FOV. When virtual staining across a...
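A minimal sketch of the distinction being made here, with hypothetical data and function names (this is not the project's preprocessing code): per-FOV normalization discards intensity differences between fields of view, whereas dataset-level statistics keep them comparable.

```python
import numpy as np

def normalize_per_fov(fov: np.ndarray) -> np.ndarray:
    # Statistics computed independently for each FOV: intensities are no longer
    # comparable across FOVs, which can hurt virtual staining across a dataset.
    return (fov - fov.mean()) / (fov.std() + 1e-8)

def normalize_dataset(fovs: list[np.ndarray]) -> list[np.ndarray]:
    # Statistics pooled over all FOVs: preserves relative intensity differences.
    stacked = np.concatenate([f.ravel() for f in fovs])
    mean, std = stacked.mean(), stacked.std()
    return [(f - mean) / (std + 1e-8) for f in fovs]
```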
Hi @stonedada, the 3D U-Net in [unet3d.py](https://github.com/mehta-lab/microDL/blob/master/micro_dl/networks/unet3D.py) requires different config parameters. However, we have stopped using 3D U-Nets in favor of the 2.5D U-Net, as @Christianfoley mentions. Which specific result are...
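For context, a minimal sketch of the 2.5D idea in PyTorch (this is not microDL's implementation, just the concept): a thin Z stack goes in, and 3D convolutions with no padding along Z consume the depth, so the network ends up predicting a single slice.

```python
import torch
import torch.nn as nn

class Conv25D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # padding=(0, 1, 1): preserve XY size, consume Z extent
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 3), padding=(0, 1, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(x))


block = nn.Sequential(Conv25D(1, 16), Conv25D(16, 16))
x = torch.randn(1, 1, 5, 256, 256)   # (batch, channel, Z=5, Y, X)
print(block(x).shape)                # depth shrinks from 5 to 1: (1, 16, 1, 256, 256)
```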
>> for P, we transfer the stage positions from XPositionUm and YPositionUm
>
> Does this need to get encoded with a coordinate translation (as shown in https://github.com/czbiohub/iohub/issues/75#issuecomment-1497986967) or a dumped...
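A hypothetical sketch (not an iohub API) of encoding the stage positions as an NGFF-style translation transform; whether a sign flip is needed depends on how the camera and stage axes are oriented, which is the question above:

```python
def stage_to_translation(x_position_um: float, y_position_um: float,
                         flip_x: bool = False, flip_y: bool = False) -> list[float]:
    """Return a (y, x) translation in micrometers for the image transform."""
    tx = -x_position_um if flip_x else x_position_um
    ty = -y_position_um if flip_y else y_position_um
    # NGFF lists translation values in the same order as the image axes (here y, x).
    return [ty, tx]
```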
Thanks for alerting me to this issue @ziw-liu. Yes, it will be useful to have an API to add the label group and link it to the matching image for...
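A hand-rolled sketch of what such an API could wrap, assuming the NGFF `labels` layout (this is not existing iohub code; names and metadata fields are illustrative): a `labels` subgroup next to the image arrays that lists each label image by name.

```python
import numpy as np
import zarr

def add_label(image_group_path: str, name: str, segmentation: np.ndarray) -> None:
    image_group = zarr.open_group(image_group_path, mode="a")
    labels_group = image_group.require_group("labels")
    # Register the label image so readers can discover it.
    existing = list(labels_group.attrs.get("labels", []))
    labels_group.attrs["labels"] = existing + [name]

    label_group = labels_group.require_group(name)
    label_group.create_dataset("0", data=segmentation.astype(np.uint32))
    # Minimal multiscales + image-label metadata; a real writer would also copy
    # the axes and coordinate transforms from the parent image.
    label_group.attrs["multiscales"] = [{"datasets": [{"path": "0"}]}]
    label_group.attrs["image-label"] = {"version": "0.4"}
```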
Are you thinking of benchmarks that should be run during CI to catch performance gains or drops? If yes, I'd suggest timing the write and read operations for a 1GB...
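Along the lines of what is suggested above, a simple sketch of such a benchmark (shapes, chunking, and the file name are arbitrary placeholders, and a CI setup would likely use pytest-benchmark instead of raw timers):

```python
import time
import numpy as np
import zarr

def benchmark_zarr(path: str = "bench.zarr") -> None:
    data = np.random.rand(64, 2048, 2048).astype(np.float32)  # ~1 GB

    start = time.perf_counter()
    z = zarr.open(path, mode="w", shape=data.shape,
                  chunks=(1, 2048, 2048), dtype=data.dtype)
    z[:] = data
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    _ = zarr.open(path, mode="r")[:]
    read_s = time.perf_counter() - start

    print(f"write: {write_s:.2f} s  read: {read_s:.2f} s")
```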
I agree with this. It is much better to build on top of `tifffile`. In addition to aligning the API for TIFF formats with the API for Zarr formats, we should...
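One reason `tifffile` helps align the two code paths: it can expose a TIFF series as a Zarr store, so downstream code can slice both formats the same way. A short sketch (the file name is a placeholder):

```python
import tifffile
import zarr

store = tifffile.imread("example_mmstack.ome.tif", aszarr=True)
data = zarr.open(store, mode="r")
print(data.shape)     # lazily indexed, same slicing API as a Zarr array
subset = data[0, 0]   # reads only the requested plane(s) from disk
store.close()
```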
@ziw-liu @rachel-banks I think that we need just enough metadata so that ImageJ readers (the built-in hyperstack reader and BioFormats reader) can load the ND data in the correct order.
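A minimal sketch of what that looks like with `tifffile` (shapes and the file name are placeholders): writing in TZCYX order with `imagej=True` records the axes in the ImageJ description tag, which is enough for the hyperstack reader to load the dimensions in the correct order.

```python
import numpy as np
import tifffile

data = np.zeros((4, 7, 2, 256, 256), dtype=np.uint16)  # (T, Z, C, Y, X)
tifffile.imwrite(
    "hyperstack.tif",
    data,
    imagej=True,
    metadata={"axes": "TZCYX"},
)
```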
> I went back and started sampling patches from the dataset, saving them in an auxiliary directory, and using a standard directory dataset to process them. It's much faster, simpler,...
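A minimal sketch of the approach described in that quote, with hypothetical paths, shapes, and class names: patches are sampled once into an auxiliary directory, and training then uses a plain directory-backed dataset.

```python
from pathlib import Path
import numpy as np
import torch
from torch.utils.data import Dataset

def dump_patches(volume: np.ndarray, out_dir: str, patch: int = 256, n: int = 100) -> None:
    # Sample random XY patches and save each one as its own .npy file.
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    rng = np.random.default_rng(0)
    for i in range(n):
        y = rng.integers(0, volume.shape[-2] - patch)
        x = rng.integers(0, volume.shape[-1] - patch)
        np.save(out / f"patch_{i:05d}.npy", volume[..., y:y + patch, x:x + patch])


class PatchDirectoryDataset(Dataset):
    """Standard directory dataset: one file per pre-sampled patch."""

    def __init__(self, patch_dir: str):
        self.files = sorted(Path(patch_dir).glob("*.npy"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int) -> torch.Tensor:
        return torch.from_numpy(np.load(self.files[idx]))
```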