
Add "myelin cutter" model for postprocessing

hermancollin opened this issue 1 year ago

Discussed in https://github.com/axondeepseg/axondeepseg/discussions/766

Originally posted by ktyssowski October 26, 2023 Axondeepseg is working great for me for separating myelin from axon, but I think the instance segmentation that happens when calculating morphometrics could be better -- in particular, it seems that for axons near thickly myelinated axons, the line between axons errs towards giving the less myelinated axons more area. I'm wondering if there is any way to tweak the instance segmentation process.

As discussed there, I have a model to postprocess axonmyelin masks. I created the annotations myself, and the data is located at https://github.com/axondeepseg/data_touching_myelin/. It's still unreleased; we discussed releasing it, but nothing ever came of it.

However, as seen in #766, it can still be quite useful for getting a better instance segmentation:

[Side-by-side comparison: original instance segmentation | postprocessed instance segmentation]

There is a clear improvement in some regions, especially at the contours of small axons and at the junctions between big and small fibers.

The model was trained with ivadomed, which could be a problem: I don't want to waste time integrating it into ADS if we deprecate that dependency in the near future. An alternative would be to re-train the model with nnUNetv2, since we know we will add support for it soon. I wouldn't be surprised if the results were even better.

hermancollin avatar Oct 31 '23 18:10 hermancollin

@ArthurBoschet maybe this is something you could be interested in. Adding this feature would be a good way to get familiar with the codebase. Also, the data is not hosted on private data servers, so you could re-train the model while you wait for your access.

hermancollin avatar Nov 07 '23 14:11 hermancollin

@hermancollin yes, I can retrain it using nnUNetv2! I am just waiting for access to my POLY-GRAMES account, but then I can do this task.

ArthurBoschet avatar Nov 07 '23 16:11 ArthurBoschet

@hermancollin , @ktyssowski inquired by email if this tool had been integrated into the main software yet. I don’t think it has, is it because it may be obsolete with the new generalist model you’re developing?

mathieuboudreau avatar Feb 27 '24 10:02 mathieuboudreau

> @hermancollin , @ktyssowski inquired by email if this tool had been integrated into the main software yet. I don’t think it has, is it because it may be obsolete with the new generalist model you’re developing?

@mathieuboudreau I think it's complementary to it: it's a postprocessing step applied to the mask. Arthur re-trained it with nnUNet. We can release it; I think it's on duke right now. It could help some people.

hermancollin avatar Feb 27 '24 17:02 hermancollin


hi all -- would be super helpful to me if you released it! i've been trying to run it with ivadomed, but having some issue with the BIDS formatting that i've yet to figure out. anyway, i'm sure i can figure it out if need be (though might have some extra questions for you) -- but if you do plan to release it within the near future, i can pause my troubleshooting on that. thanks!

ktyssowski avatar Feb 27 '24 18:02 ktyssowski

> hi all -- would be super helpful to me if you released it! i've been trying to run it with ivadomed, but having some issue with the BIDS formatting that i've yet to figure out. anyway, i'm sure i can figure it out if need be (though might have some extra questions for you) -- but if you do plan to release it within the near future, i can pause my troubleshooting on that. thanks!

@ktyssowski sorry for the delayed response. We are working on a manuscript right now, so I'm pretty swamped until next week. I would be more comfortable uploading the model after next week, if that's ok with you, because I don't want to rush this and make a mistake (I need to coordinate with someone else who worked on this).

The latest version of this model was not trained with ivadomed. Also, BIDS formatting will not be required for this tool.

hermancollin avatar Feb 29 '24 18:02 hermancollin

Next week is perfect! Thanks!

ktyssowski avatar Mar 01 '24 10:03 ktyssowski

this is not out yet, correct? i just wanted to make sure i'm not missing it! if not - no worries -- i will just continue to troubleshoot the other method. thanks!!

ktyssowski avatar Mar 11 '24 14:03 ktyssowski

@ktyssowski I am going to upload the model and write a small installation/usage guide today. In the meantime, can you provide an example mask that you want to postprocess? That way I can make sure everything works more quickly.

hermancollin avatar Mar 12 '24 15:03 hermancollin

Great! Thank you!

here is a mask: sub-7790_sample-031_TEM

ktyssowski avatar Mar 12 '24 15:03 ktyssowski

Hey @ktyssowski. I tested the model on my CPU and it is slow, so I would definitely recommend GPU acceleration (which makes inference almost instantaneous). It works, though. I ended up only testing it on a small ROI taken from your mask (upper-right corner). Here is what it gave me:

[Three-panel comparison: img | pred | after (test_img_ktyssowski_roi, test_img_ktyssowski_roi_pred, test_img_ktyssowski_roi_postprocessed)]

You can download the model and use it with github.com/axondeepseg/nn-axondeepseg. Simply follow the instructions. The full command I used for the model was this:

`python nn_axondeepseg.py --seg-type UM --path-dataset path/to/folder/with/masks --path-out [...] --path-model [...] --use-best`

Please note that the `--use-best` option is required for this model. We also need to specify `--seg-type UM` because this is a 1-class segmentation.

Curious to hear your feedback/questions! Keep in mind this feature is only experimental for now; we haven't had time to refine it as much as I would have wanted.

hermancollin avatar Mar 13 '24 16:03 hermancollin

Great! Thanks so much! Just to make sure I'm understanding -- to get from pred --> after above, do you simply just subtract pred from img?

ktyssowski avatar Mar 13 '24 17:03 ktyssowski

> Great! Thanks so much! Just to make sure I'm understanding -- to get from pred --> after above, do you simply just subtract pred from img?

@ktyssowski Exactly. I did it in image-processing software, but if you want to do it for many images I can write a small script to automate the subtraction.
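A minimal sketch of what such a subtraction script could look like (the function name and file paths here are placeholders for illustration, not part of ADS or nn-axondeepseg):

```python
import numpy as np
from PIL import Image

def subtract_pred(mask_path: str, pred_path: str, out_path: str) -> None:
    """Remove predicted 'cut' pixels from an axonmyelin mask (pixel-wise subtraction)."""
    # Load both images as single-channel, using a signed dtype so the
    # subtraction cannot wrap around below zero
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.int16)
    pred = np.asarray(Image.open(pred_path).convert("L"), dtype=np.int16)
    # Clamp to [0, 255] and cast back to 8-bit before saving
    result = np.clip(mask - pred, 0, 255).astype(np.uint8)
    Image.fromarray(result).save(out_path)

# Usage (filenames are placeholders):
# subtract_pred("sample_seg-axonmyelin.png", "sample_pred.png", "sample_result.png")
```

The signed intermediate dtype matters: subtracting two `uint8` arrays directly would wrap around (e.g. 0 - 255 = 1) instead of clamping to 0.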

hermancollin avatar Mar 13 '24 17:03 hermancollin

I got everything to run on GPU and it looks like it works pretty well!

I've been subtracting images using ImageMagick, but I think the result may end up in the wrong format, because when I run the morphometrics I get `TypeError: Cannot handle this data type: (1, 1, 3), <i4`. If you have any ideas about what's causing that, or code I could try for the subtraction, that would be appreciated!

I also noticed that the output is a PNG with values 0 or 1, whereas the axonmyelin mask output from ADS is on a 0–255 scale, so I have to rescale before running the subtraction.
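For reference, a one-off rescale from {0, 1} to {0, 255} can be sketched like this (a hypothetical helper, not part of nn-axondeepseg):

```python
import numpy as np
from PIL import Image

def rescale_binary_png(in_path: str, out_path: str) -> None:
    """Map a {0, 1} prediction PNG onto the {0, 255} range that ADS masks use."""
    arr = np.asarray(Image.open(in_path).convert("L"))
    # Any nonzero pixel becomes 255; background stays 0
    Image.fromarray(((arr > 0) * 255).astype(np.uint8)).save(out_path)
```

Thresholding with `arr > 0` rather than multiplying by 255 directly also tolerates inputs that are already 0/255.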

ETA: it's definitely something odd with my subtraction code, because if I subtract in ImageJ everything works! So if you have subtraction code handy, that would be great; if not, ImageJ works for me for now.

I'm attaching my axonmyelin mask PNG after subtraction, and here's the morphometrics command I'm running with the full error message:


```
(ads_venv) kelsey out >> axondeepseg_morphometrics -i result.png -a circle -s .001465 -f 7792.csv -b -c
2024-03-14 18:09:35.829 | INFO     | AxonDeepSeg.morphometrics.launch_morphometrics_computation:main:152 - Logging initialized for morphometrics in "/Users/kelsey/Dropbox (Harvard University)/EM_data/to_analyze/to_cut_new/out".
  0%|                                                               | 0/1 [00:00<?, ?it/s]2024-03-14 18:09:58.314 | INFO     | AxonDeepSeg.visualization.colorization:colorize_instance_segmentation:102 - Colorizing 0 instances.
  0%|                                                               | 0/1 [00:22<?, ?it/s]
Traceback (most recent call last):
  File "/Users/kelsey/mambaforge/envs/ads_venv/lib/python3.8/site-packages/PIL/Image.py", line 3080, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 3), '<i4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/kelsey/mambaforge/envs/ads_venv/bin/axondeepseg_morphometrics", line 33, in <module>
    sys.exit(load_entry_point('AxonDeepSeg', 'console_scripts', 'axondeepseg_morphometrics')())
  File "/Users/kelsey/axondeepseg/AxonDeepSeg/morphometrics/launch_morphometrics_computation.py", line 207, in main
    morph_output = get_axon_morphometrics(
  File "/Users/kelsey/axondeepseg/AxonDeepSeg/morphometrics/compute_morphometrics.py", line 132, in get_axon_morphometrics
    im_instance_seg = colorize_instance_segmentation(im_axonmyelin_label)
  File "/Users/kelsey/axondeepseg/AxonDeepSeg/visualization/colorization.py", line 104, in colorize_instance_segmentation
    colorized = Image.fromarray(instance_seg)
  File "/Users/kelsey/mambaforge/envs/ads_venv/lib/python3.8/site-packages/PIL/Image.py", line 3083, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 3), <i4
```
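For what it's worth, that traceback is Pillow refusing to build an image from a 3-channel int32 array: `Image.fromarray` only supports specific (shape, dtype) combinations, and casting to `uint8` sidesteps the error. A minimal reproduction of the failure mode (illustration only, not ADS code):

```python
import numpy as np
from PIL import Image

# Same typekey as in the traceback: 3 channels, '<i4' (little-endian int32)
arr = np.zeros((2, 2, 3), dtype=np.int32)

try:
    Image.fromarray(arr)  # no Pillow mode maps to a 3-channel int32 array
except TypeError as err:
    print(err)  # Cannot handle this data type: (1, 1, 3), <i4

img = Image.fromarray(arr.astype(np.uint8))  # uint8 RGB is supported
print(img.mode)  # RGB
```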

result_seg-axonmyelin

ktyssowski avatar Mar 14 '24 22:03 ktyssowski

> I got everything to run on GPU and it looks like it works pretty well!

:heart:

> I've been subtracting images using ImageMagick, but I think the result may end up in the wrong format, because when I run the morphometrics I get `TypeError: Cannot handle this data type: (1, 1, 3), <i4`. If you have any ideas about what's causing that, or code I could try for the subtraction, that would be appreciated!

I'll give you a short script to try.

> I also noticed that the output is a PNG with values 0 or 1, whereas the axonmyelin mask output from ADS is on a 0–255 scale, so I have to rescale before running the subtraction.

That's weird, but I'm glad you were able to catch it. I am not able to reproduce this on my side: the nn-axondeepseg package I provided is supposed to rescale predictions to [0, 255], so I'm a bit stumped. When you apply the postprocessing model, can you confirm a line along these lines is displayed in the console?

2024-03-13 11:38:35.509 | INFO     | __main__:main:119 - Rescaling predictions to 8-bit range.

> ETA: it's definitely something odd with my subtraction code, because if I subtract in ImageJ everything works! So if you have subtraction code handy, that would be great; if not, ImageJ works for me for now.

> I'm attaching my axonmyelin mask PNG after subtraction, and here's the morphometrics command I'm running with the full error message:

How was the image you provided generated, with your script or ImageJ? I ask because the morphometrics/colorization run successfully on my side: her_result_after_trying_instance-map

hermancollin avatar Mar 15 '24 16:03 hermancollin

@ktyssowski actually, can you provide your ImageMagick script/command?

hermancollin avatar Mar 15 '24 16:03 hermancollin

Sorry to have dropped the ball on this! I figured out the subtraction issue: I was only editing the axonmyelin file and made a mistake when editing the myelin file.

Two updates: (1) the code that draws lines between axons definitely outputs a [0, 1] file, even though the terminal output does say it's rescaling; (2) I actually figured out how to get the older ivadomed version working, and it runs faster and works better for me than the newer version. Even when I run the newer one on a GPU, it still takes a while, and it also seems to draw shorter lines than the previous version, which makes it work less well for the segmentation.

ktyssowski avatar Mar 27 '24 14:03 ktyssowski