
Segmentation output issues

Open AmrutSa opened this issue 3 months ago • 5 comments

Hello. I have been trying to use this library to test a few segmentation models on some internal data. I am working with large TIFF images of size 30000x30000. Due to the image size I decided to use the SlidingWindowInferer with HoverNet, but it could not run on 1 GPU and kept hitting an out-of-memory error. I then broke the image into individual patches and saved them as separate files with the tifffile package, feeding the paths to the ResizeInferer. Because of a normalization step the values in the matrix are floats; if I convert them to integers all the values become 0, resulting in an empty patch. I therefore preserve the floats in the patches and load them with tifffile, since OpenCV cannot open them. While I am able to run the ResizeInferer, the HoverNet segmentation comes out completely empty, and the Cellpose and Stardist outputs look very similar, although I do see more segmentations from them. The behavior is very odd: directly after segmentation with Stardist, the individual patches look like the following:

[Screenshot 2024-03-07 at 10 06 47 AM]
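The empty integer patches are expected: a min-max-normalized image has all values in [0, 1), so casting to an integer dtype truncates everything to 0. A minimal NumPy sketch (with a hypothetical random patch, not the library's code) illustrates this, and shows rescaling to the 0-255 range before casting as the usual fix:

```python
import numpy as np

# A min-max-normalized patch: all values lie in [0, 1).
rng = np.random.default_rng(0)
patch = rng.random((256, 256, 3)).astype(np.float32)

# Casting floats in [0, 1) to an integer dtype truncates them all to 0,
# which is why the integer-converted patches come out empty.
empty = patch.astype(np.uint8)

# Rescaling to the 0-255 range before casting preserves the image content.
as_uint8 = np.clip(patch * 255.0, 0, 255).round().astype(np.uint8)

print(empty.max())     # 0 -- the "empty patch" effect
print(as_uint8.max())  # 255 -- image content survives
```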

Here are some of the parameters I use for Stardist (note that some of these are for other pre- or post-processing steps):

```yaml
params:
  out_activations: '{"dist": None, "stardist": None}'
  out_boundary_weights: '{"dist": False, "stardist": True}'
  resize: '(256,256)'
  overlap: 248
  patch: 256
  instance_postproc: 'stardist'
  padding: '64'
  batch_size: '1'
  downsample_factor: 1
  n_channels: 3
  n_rays: '4'
```

Questions:

  1. Is it possible to use the SlidingWindowInferer on 1 GPU? If so, what are some key considerations when setting the params to allow for this? Any tips would be a great help!
  2. I have checked the input images and they seem to be set up properly, and yet I still get such odd results (see the screenshot). Do you have any recommendations on what to check?
  3. Do all the models need to be trained beforehand, or does the package ship pretrained versions of the base models that can be called directly and used without training?

AmrutSa avatar Mar 07 '24 15:03 AmrutSa

Hello,

Those results are very odd indeed. Did you use some specific normalization during training? When you run inference with the Inferer classes, you should pass the same normalization method as a parameter to the class, e.g. normalization="min-max". Usually, when I get weird results, the reason is that I've forgotten to pass the normalization parameter.
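To illustrate why the mismatch matters, here is a plain min-max scaler, assuming normalization="min-max" corresponds to ordinary min-max scaling (an assumption about the library's internals; the function below is an illustrative sketch, not its actual code):

```python
import numpy as np

def min_max(img: np.ndarray) -> np.ndarray:
    """Scale an image to [0, 1], the usual meaning of min-max normalization."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min())

# A model trained on min-max-scaled inputs expects values in [0, 1].
# Feeding it raw 0-255 uint8 pixels shifts the input distribution by ~255x,
# which typically yields empty or nonsensical segmentation maps.
raw = np.array([[0, 128, 255]], dtype=np.uint8)
scaled = min_max(raw)
print(scaled)  # values now in [0, 1]
```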

Regarding your other questions:

  1. Yes, you can use the sliding window inferer on 1 GPU. However, the input images can't be super large like 30000x30000 px, because the output has the same shape as the input and won't fit into GPU memory. I'll take a look at whether this could be avoided in the future.

  2. And yes, for now you need to train from scratch. In the future, I'll set up a Hugging Face space for pre-trained models that can be loaded directly. Once I just have the time.
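The memory point in answer 1 can be worked around manually, as the original post does: compute overlapping tile coordinates, run the model one tile at a time, and write each result to disk so the full-resolution output never lives in memory. A generic sketch of the tiling step (not the library's SlidingWindowInferer; `patch`/`stride` here are just the local variable names):

```python
def tile_coords(h: int, w: int, patch: int, stride: int):
    """Top-left corners of overlapping tiles covering an (h, w) image.
    The last row/column is clamped so the image edge is always covered."""
    ys = list(range(0, max(h - patch, 0) + 1, stride))
    xs = list(range(0, max(w - patch, 0) + 1, stride))
    if ys[-1] != h - patch:
        ys.append(h - patch)
    if xs[-1] != w - patch:
        xs.append(w - patch)
    return [(y, x) for y in ys for x in xs]

# Run inference per tile and save each result immediately, so only
# patch*patch pixels (never the 30000x30000 output) occupy memory at once.
coords = tile_coords(1000, 1000, patch=256, stride=192)
print(len(coords))   # number of tiles
print(coords[-1])    # last tile, clamped to the image edge
```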

okunator avatar Mar 08 '24 06:03 okunator

Also, I noticed that you were using some parameters that don't exist like 'overlap' and 'patch'. Use 'stride' and 'patch_size' instead.

okunator avatar Mar 08 '24 06:03 okunator

Thank you for your quick response! I see, okay, so I will have to train all the models before running inference on my dataset. The overlap and patch parameters are what I use to manually create the patches, so that I can work with the ResizeInferer without actually resizing (as in one of your examples). 'stride' and 'patch_size' can be used with the SlidingWindowInferer, but what would their role be in the ResizeInferer? I will train the models and let you know if things change!

AmrutSa avatar Mar 08 '24 13:03 AmrutSa

You're right, ResizeInferer does not use stride and patch_size parameters. My bad.

okunator avatar Mar 14 '24 07:03 okunator