
How to train HoVer-Net starting with a semantic-level mask image?

OmarAshkar opened this issue 2 years ago • 3 comments

Is your feature request related to a problem? Please describe. I have large WSI data and multi-class jpeg masks, but I have been struggling to find a way to make them work with any HoVer-Net implementation.

Describe the solution you'd like I'd like to be able to feed in my large WSI data along with the jpeg masks, and have tiling and training then take place.

Describe alternatives you've considered If I still need instance masks, I can generate them with watershed. But I still don't know what format PathML would want (e.g. npy, mat, json, jpeg, etc.).

Additional context

Any help is highly appreciated.

Thanks!

OmarAshkar commented Dec 01 '22

You can provide masks alongside the WSI when initializing a SlideData object. Just load the masks into numpy arrays and pass a dictionary of masks where each item is a (key, mask) pair. Masks should have the same height and width as the WSI image. Then, when tiles are generated, each tile will also carry the corresponding mask region. Hope this helps point you in the right direction.
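As a minimal sketch of that suggestion: load the jpeg mask into a numpy array and pass it in a masks dictionary when constructing SlideData. The file paths and the "semantic" key are placeholders, and the pathml.core import path is assumed rather than taken from this thread.

```python
import numpy as np
from PIL import Image

from pathml.core import SlideData  # assumed import path

# Load the multi-class jpeg mask as a numpy array.
# It must have the same height and width as the WSI image.
semantic_mask = np.array(Image.open("/path/to/wsi_mask.jpg"))  # placeholder path

# Pass masks as a dictionary of (key, mask) pairs when initializing SlideData.
wsi = SlideData("/path/to/slide.svs", masks={"semantic": semantic_mask})
```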

jacob-rosenthal commented Dec 02 '22

@jacob-rosenthal Thank you. Just to follow up: SlideData has masks and labels attributes. Which one should hold the class (semantic) labels and which one the instances? I believe I need to set that for HoVer-Net.

OmarAshkar commented Dec 02 '22

Labels is meant to hold slide-level metadata, e.g. tissue type. Masks is for pixel-level metadata, e.g. a segmentation mask labeling which class each pixel belongs to. For example, if you have a numpy array of a nucleus instance segmentation mask named nuclei_mask, you would load it into masks as a dictionary: wsi = SlideData("/path/to/slide.svs", masks={"nuclei": nuclei_mask})
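Tying this back to the watershed idea from the original question, here is a rough sketch of deriving an instance mask from the semantic mask with scikit-image and passing both masks to SlideData. The file paths, mask keys, and the distance-transform marker choice are illustrative assumptions, not something PathML prescribes.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

from pathml.core import SlideData  # assumed import path

# semantic_mask: 2D array with one class label per pixel (0 = background); placeholder path
semantic_mask = np.load("/path/to/semantic_mask.npy")
binary = semantic_mask > 0

# Split touching nuclei into instances with a distance-transform watershed.
distance = ndi.distance_transform_edt(binary)
coords = peak_local_max(distance, labels=binary, min_distance=5)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
instance_mask = watershed(-distance, markers, mask=binary)

# Provide both pixel-level masks; generated tiles will carry the same mask keys.
wsi = SlideData(
    "/path/to/slide.svs",
    masks={"semantic": semantic_mask, "nuclei_instances": instance_mask},
)
```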

jacob-rosenthal commented Dec 03 '22