High resolution mouse data requires a new model (`seg_mouse_gm_wm_t1w` fails)
Description
I received a 2D NIfTI image on which we would like to run inference using the seg_mouse_gm_wm_t1w model. Unfortunately, because the image is 2D, it cannot be used with sct_deepseg -task seg_mouse_gm_wm_t1w.
To resolve this, I tried the following:
- stacking the same image multiple times to create a fake 3D image:
import nibabel as nib
import numpy as np

# Path to the 2D image
path_image = '/Users/plbenveniste/Desktop/mouse_spinal_cord/3822_vGluT3_thor.nii.gz'
# Load the 2D image
img = nib.load(path_image)
# Stack the image with itself 10 times to create a fake 3D image
img_data_3d = np.stack([img.get_fdata()] * 10)
# Update the header to reflect the new shape
img.header.set_data_shape(img_data_3d.shape)
# Rebuild the image with the 3D data
img = nib.Nifti1Image(img_data_3d, img.affine, img.header)
# Correct the voxel size: 0.01 mm through-plane, 0.001 mm (1 um) in-plane
img.header['pixdim'][1:4] = [0.01, 0.001, 0.001]
# Add the spatial/temporal units, which were missing (10 = mm + sec)
img.header['xyzt_units'] = 10
# Save the 3D image
nib.save(img, '/Users/plbenveniste/Desktop/mouse_spinal_cord/thor_3d_10.nii.gz')
The inference didn't work on this image, so I thought it might be failing because of the resolution.
- I resampled the image:
sct_resample -i thor_3d_10.nii.gz -mm 0.05x0.05x0.05 -o thor_3d_10_resamp.nii.gz
It didn't work on this image either.
I am therefore asking for help on this matter.
Data can be found here: ~/Duke/temp/plben/mouse_sc_seg/3822_vGluT3_thor.nii.gz
Just for the sake of easy visual comparison, here is the test image we use for this model (0.05mm isotropic):
And here is the input image 3822_vGluT3_thor.nii.gz (0.001mm, zoomed in to show detail):
Resampled from 0.001mm to 0.05mm (i.e. 50x reduction):
I notice the GM/WM intensity and contrast are quite different in the input image (~2000/~8000) vs. the test image (~0.2/~0.1).
Not only is the GM much darker relative to the WM, but the intensity range as a whole is very different.
Good observations @joshuacwnewton. About the intensity range and contrast, I'm not too worried, as this is something we could tweak before inference (e.g., multiply the image by -1; a minimal sketch is below).
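For example, a minimal nibabel sketch of such a tweak (filenames hypothetical; this inverts and rescales the intensities rather than only multiplying by -1, so the result stays non-negative):

import nibabel as nib

# Hypothetical intensity tweak before inference: invert the contrast and
# rescale to a [0, 1] range closer to the model's test image.
img = nib.load('thor_3d_10.nii.gz')
data = img.get_fdata()
data = data.max() - data    # invert contrast: GM becomes bright, WM dark
data = data / data.max()    # rescale to [0, 1]
# Build the image without reusing the old header, so the on-disk dtype
# matches the float data
nib.save(nib.Nifti1Image(data, img.affine), 'thor_3d_10_inv.nii.gz')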
However, more problematic, I think, is the lack of context because this image only shows the spinal cord, without the rest of the tissues, which the model was not trained to identify.
Even though I don't have a correct spinal cord (WM/GM) segmentation for this image yet, I solved the problem of the failing inference. The problem came from the image header: even though I was setting the correct voxel dimensions, the image had a "bad" qform, so the total image dimensions were incorrect. I fixed the header and the orientation using the following code (thanks @mguaypaq for the help)!
import nibabel as nib
import numpy as np

# Path to the 2D image
path_image = '3822_vGluT3_thor.nii.gz'
# Load the 2D image
img = nib.load(path_image)
# Stack the image with itself 10 times to create a fake 3D image
img_data_3d = np.stack([img.get_fdata()] * 10)
# Update the header to reflect the new shape
img.header.set_data_shape(img_data_3d.shape)
# Rebuild the image with the 3D data
img = nib.Nifti1Image(img_data_3d, img.affine, img.header)
# Correct the voxel size: 1 mm through-plane, 0.001 mm (1 um) in-plane
img.header['pixdim'][1:4] = [1, 0.001, 0.001]
# Add the spatial/temporal units, which were missing
img.header.set_xyzt_units(xyz='mm', t='sec')
# Correct the sform and qform so the affine matches the voxel sizes
matrix = np.array([
    [0, 0.001, 0,     0],
    [0, 0,     0.001, 0],
    [1, 0,     0,     0],
    [0, 0,     0,     1],
])
img.set_sform(matrix, code='scanner')
img.set_qform(matrix, code='scanner')
# Save the 3D image
nib.save(img, 'thor_3d_10.nii.gz')
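For anyone hitting a similar issue, a quick sanity check of the resulting header can reveal this kind of problem early; a minimal sketch (same file as above):

import nibabel as nib

# Inspect the fixed header: the shape, zooms, orientation, and q/sform
# codes should all look sensible now.
img = nib.load('thor_3d_10.nii.gz')
print(img.shape)                      # (10, height, width)
print(img.header.get_zooms())         # (1.0, 0.001, 0.001)
print(nib.aff2axcodes(img.affine))    # orientation implied by the affine
print(int(img.header['qform_code']), int(img.header['sform_code']))  # 1 = 'scanner'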
However, the output of sct_deepseg -i thor_3d_10.nii.gz -task seg_mouse_gm_wm_t1w failed, as we can see in the image below (red = GM, green = WM).
It also failed on the image multiplied by -1 (using sct_maths -i thor_3d_10.nii.gz -mul -1 -o thor_3d_10_mul.nii.gz), with -task seg_mouse_gm_wm_t1w outputting empty segmentation masks.
Here is the result with sct_deepseg -i thor_3d_10.nii.gz -task seg_mice_sc:
And the result with sct_deepseg -i thor_3d_10_mul.nii.gz -task seg_mice_sc (on the image multiplied by -1):
Finally, the seg_mice_gm model doesn't segment anything on either the image or the image multiplied by -1.
I fixed the header and the orientation using the following code (thanks @mguaypaq for the help)!
Why not simply use sct_image -transpose?
Too bad for the segmentation results :-( But thank you for looking at it!
You're right, sct_image -transpose would have been easier!
I also tried changing the resolution in order to try human segmentation models: the resolution was changed from [1, 0.001, 0.001] to [10, 0.01, 0.01] (in mm), as sketched below.
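For reference, a minimal sketch of how that rescaling can be done (filenames hypothetical; the factor of 10 is applied directly to the affine):

import nibabel as nib
import numpy as np

# Scale the voxel sizes by 10x directly in the affine, so
# [1, 0.001, 0.001] mm becomes [10, 0.01, 0.01] mm.
img = nib.load('thor_3d_10.nii.gz')
affine = img.affine.copy()
affine[:3, :3] *= 10    # scale all three voxel dimensions by 10
rescaled = nib.Nifti1Image(np.asanyarray(img.dataobj), affine)
nib.save(rescaled, 'thor_3d_10_rescaled.nii.gz')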
With this rescaled image, the seg_exvivo_gm-wm_t2 model created empty masks. However, on the image multiplied by -1, it produced the following:
@jcohenadad Are there other investigations that you have in mind?
Nope :-( At this point I would just re-train a specific model, as mentioned in my response on the forum; this would ideally be done by the researcher (and we could help with the integration into SCT).
Hi, thanks for running these images. Doesn't it still seem possible that there is some way to "smoothen" the sample so it looks more MRI-ish?
For example, rather than downsampling the image, what if we optimized a bilateral filter that blurs the image while retaining edges (mostly; see below)? These optical sections are ~7 um thick. Another option would be to "smoothen" by averaging 10 or so adjacent sections in Z (a minimal sketch of this is below). I could send you some of these images if you think there is a chance of it working.
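For what it's worth, here is a minimal numpy sketch of the Z-averaging idea (filenames hypothetical; the slice axis is assumed to be the first data axis, as in the stacked image earlier in this thread):

import nibabel as nib
import numpy as np

# Block-average every n adjacent sections along the slice axis, which
# smooths and downsamples at the same time.
img = nib.load('stack_of_sections.nii.gz')    # hypothetical input stack
data = img.get_fdata()
n = 10                                        # number of adjacent sections to average
nz = (data.shape[0] // n) * n                 # trim so the slice count divides evenly
averaged = data[:nz].reshape(nz // n, n, *data.shape[1:]).mean(axis=1)
# The effective slice thickness grows by a factor of n, so scale that
# axis of the affine accordingly
affine = img.affine.copy()
affine[:3, 0] *= n
nib.save(nib.Nifti1Image(averaged, affine), 'stack_zavg.nii.gz')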
Regarding Julien's message here: how involved is the training process? Do you have any videos or tutorials that would give me a sense of the workflow? With clear instructions, I have someone who could work on this, depending on the time investment.
A crude first example of a bilateral filter in Matlab:
sigma_spatial = 400;  % Spatial smoothing parameter
sigma_range = 40;     % Range smoothing parameter
filtered_img = imbilatfilt(gray_img, sigma_spatial, sigma_range);  % Bilateral filtering
Thanks again, Steve
Hi Steve, this may or may not work, but I am not too optimistic, given the results obtained from the models. As mentioned earlier, I think the way forward is to train a specific model. This could be done by manually segmenting a few slices, so hopefully not too much work for you.
Sure thing. I'm not surprised to hear this, since NNs tend to fit very specifically whatever they are trained on. So, to move forward, would I send you a few segmentations and then you run the training? Is your NN trained from scratch? If we are running the training, then I'll likely need some help and documentation. If you want to try it out, I'd gladly send you segmentations. If we get a working model, would it be downloadable and shareable with the community?
@sulli419 training a model does take some time (adapting the code, curating the data, evaluating the training, tweaking the network, etc.), which we unfortunately do not have right now. The best I can do is redirect you to this repository, which describes how we trained the model. If you follow the instructions with your data (and, as mentioned, with a bit of 'tweaking' depending on your dataset), you should be able to get it working, and then we could incorporate the model into SCT.
Ok. I will look over the documentation and see how feasible this workflow is.
Related to mouse spinal cord data: has anyone on your SCT team segmented the data from this paper? https://pubmed.ncbi.nlm.nih.gov/23246856/ Having this "ground truth" might help us along the way, if you are willing to share the model.
No, my lab was not involved in this. Note that this is not a segmentation method (i.e., there is no 'segmentation model' per se), but rather a labeling of the cord segments.
Yes, but I think these images are accessible, so the idea is that we would use SCT for segmentation. If you are at all interested, I can reach out to the authors and see if they are willing to share. I know having a complete set of segments and ground-truth reference points would be useful to us... Food for thought.
Hi Julien (and all),
I followed up with the authors of that mouse MRI paper and they agreed to share. I will upload it to our shared dropbox folder (or do you have a different preferred method?). Do you guys want to test and see if it runs with your default deep learning pipeline? https://pubmed.ncbi.nlm.nih.gov/23246856/
I recently stumbled on this video by you. Do you know where this mouse dataset came from? https://www.youtube.com/watch?v=KVL-JzcSRTo
Best, Steve
Do you guys want to test and see if it runs with your default deep learning pipeline?
We don't really have the time to do this. If you or a student would like to follow the recipe, we would be happy to assist.
Do you know where this mouse dataset came from?
from there: https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1