
Issue regarding applying brain parcellation to MRI images for specific brain regions in P56 mice

Open TsungChihTsai opened this issue 1 year ago • 42 comments

The original title was "Issue regarding to apply 'DevCCF Velocity Flow Model' to other MRI data". It was out of focus, so I revised it to the title above (19 Jun 2024).

Thanks for your remarkable work. I'm not proficient in coding, so I need your help. Before raising the issue I'm facing, I'd like to explain my current situation.

I have referenced your work in "The ANTsX Ecosystem for Mapping the Mouse Brain (2024)" and have attempted to adapt and run the Python tutorial code from https://gist.github.com/ntustison/12a656a5fc2f6f9c4494c88dc09c5621#mouse-applicatons. I have also tried the pipeline from "Table 1: Sampling of ANTsX functionality," but there are still some challenges for me to overcome.


Using antspynet's mouse_brain_extraction to process open-source MRI data (https://figshare.com/ndownloader/files/45289309), I can extract the whole mouse brain.

Using ants.n4_bias_field_correction, ants.denoise_image, ants.registration, and antspynet.mouse_brain_parcellation, I can obtain major brain regions such as the olfactory bulb, cortical area, midbrain, hippocampus, and cerebellum.
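For reference, the preprocessing and parcellation steps above can be sketched roughly as follows. The helper name is my own, and the exact option names and return values may differ between ANTsPy/ANTsPyNet versions, so treat this as a starting point rather than a recipe:

```python
def parcellate_mouse_t2(image_path):
    """Rough sketch of the pipeline described above: bias-field
    correction, denoising, brain extraction, then parcellation.
    Option values are best-effort and may vary across versions."""
    import ants
    import antspynet

    t2 = ants.image_read(image_path)

    # Preprocessing: bias-field correction and denoising
    t2 = ants.n4_bias_field_correction(t2)
    t2 = ants.denoise_image(t2)

    # Brain extraction returns a foreground probability image;
    # threshold it to a binary mask and strip non-brain tissue
    brain_prob = antspynet.mouse_brain_extraction(t2, modality="t2")
    brain_mask = ants.threshold_image(brain_prob, 0.5, 1.0, 1, 0)
    t2_brain = ants.mask_image(t2, brain_mask)

    # Parcellation into the major regions mentioned above
    return antspynet.mouse_brain_parcellation(
        t2_brain, which_parcellation="nick")
```

The imports live inside the function only so the sketch can be read without ANTsPy/ANTsPyNet installed; in real code they would sit at the top of the module.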

My goal is to effectively extract specific adult mouse brain areas, such as the medial prefrontal cortex (mPFC) and the medial dorsal (MD) thalamus. That is how I came across your work on the "Developmental Mouse Brain Common Coordinate Framework (2023)", in which you introduced the developmental common coordinate framework (DevCCF). This model shows clear brain areas at different developmental stages (https://kimlab.io/brain-map/DevCCF/).

  1. After I used your code (https://github.com/ntustison/ANTsXMouseBrainMapping), I followed your procedure and built the DevCCF velocity flow model (DevCCF_velocity_flow.nii.gz). However, when I ran the code for "Using the DevCCF Velocity Flow Model", the template_file variable was not defined before that cell.

  2. I am also unsure how to apply this model to other MRI image data. Could you please guide me on how to use your Python code from the directories ANTsXMouseBrainMapping-main\Scripts\BrainExtractionNetwork, MiscScripts, and MultiTissueNetwork?

Thank you. If you need more information, please let me know.

TsungChihTsai avatar Jun 18 '24 21:06 TsungChihTsai

Okay, I think you're conflating a couple things so let's try to sort them out first. The paper describes a couple different methods for completely separate tasks. The DevCCF velocity flow model has nothing to do with the mouse brain parcellation work.

From what you wrote, I'm guessing that you want to take as input a mouse brain MRI and parcellate it into a set of regions. Is that correct? If so, I have two questions:

  • What is the MRI modality that you're using?
  • What are your desired brain parcellation regions?

ntustison avatar Jun 18 '24 21:06 ntustison

Thanks for your clarification. I'm afraid I misunderstood your response, so let me confirm my understanding. The DevCCF velocity flow model serves as a comprehensive atlas spanning all developmental stages of the brain; by applying it, we can obtain the corresponding structures at each stage. Initially, I also wanted to conduct longitudinal studies to observe changes in the sizes of my desired brain regions at different developmental stages. Is that impossible? Brain parcellation, in contrast, is a separate module that segments brain areas using a different training dataset. So does the DevCCF velocity flow model resolve the issue of providing a mouse brain atlas for developmental stages that have not been seen before?

Regarding the questions:

  1. For now I am just evaluating whether segmentation from a T2 MRI image is possible, so I propose to use T2 MRI images.

  2. My desired brain parcellation regions are the medial prefrontal cortex (mPFC), medial dorsal (MD) thalamus, ventral hippocampus (vHP), and cerebellum.

TsungChihTsai avatar Jun 18 '24 22:06 TsungChihTsai

Thanks for your answers. Follow-up---

  • For your longitudinal studies, what ages are you imaging?

ntustison avatar Jun 18 '24 23:06 ntustison

Anesthesia control is difficult for pups, as is obtaining longitudinal MRI images. If possible, I'd like to scan at P10, P28, and P56. For the first step, however, I'd like to check the difference at P56, because I realize it is hard to get precise volumes for my desired brain parcellation regions.

TsungChihTsai avatar Jun 19 '24 01:06 TsungChihTsai

Unfortunately, your responses are confusing such that I don't know what you're trying to accomplish using the tools I discuss in the manuscript. For example, I don't know what fMRI has to do with anything we've discussed nor do I understand what you mean by "the difference at P56."

Please lay out in very clear terms the design of your study. Otherwise, I'm going to suggest you try to read the paper again and look at the self-contained tutorials I've created to figure it out yourself.

ntustison avatar Jun 19 '24 14:06 ntustison

fMRI was a typo; I meant MRI. By “the difference at P56,” I am referring to different genotypes. I believe we could start by focusing on the brain parcellation regions at P56, which include the medial prefrontal cortex (mPFC), medial dorsal (MD) thalamus, ventral hippocampus (vHP), and cerebellum. However, I haven’t found a suitable method to accomplish this.

TsungChihTsai avatar Jun 19 '24 14:06 TsungChihTsai

Thanks. But this illustrates why our conversation has been so confusing. From the start I had assumed "focusing on the brain parcellation regions at P56" but then you brought in the DevCCF model and a potential longitudinal component which is not relevant to this specific task.

So, focusing on P56, let's start simple---are you able to run this example?

ntustison avatar Jun 19 '24 14:06 ntustison

Sorry about that. I lost my focus. Thank you for helping me get back on track.

I ran it successfully.

I also checked the segmentation and probability images; I think it can separate the olfactory bulb, cortical area, midbrain, hippocampus, and cerebellum.

There are seven probability images in total.

However, I found it still challenging to dissect the medial prefrontal cortex (mPFC), medial dorsal (MD) thalamus, ventral hippocampus (vHP), and cerebellum. I also saw this masterpiece: https://kimlab.io/brain-map/neuroglancer/DevCCF/index_P56.html, and I think it has the potential to resolve this issue.

For example, it can show the area of the medial dorsal (MD) thalamus. That's why I was pursuing that model and lost focus.
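As an aside, turning a set of per-class probability images like these into a single hard segmentation is usually just a voxelwise argmax over the stacked probabilities. A minimal numpy sketch with toy data (not the real volumes):

```python
import numpy as np

def probabilities_to_labels(prob_stack):
    """Convert a stack of per-class probability volumes, shaped
    (class, ...), into one label volume via voxelwise argmax."""
    return np.argmax(prob_stack, axis=0)

# Toy example: 3 classes on a 2x2 "volume"
probs = np.array([
    [[0.7, 0.1], [0.2, 0.1]],   # class 0 probabilities
    [[0.2, 0.8], [0.3, 0.1]],   # class 1 probabilities
    [[0.1, 0.1], [0.5, 0.8]],   # class 2 probabilities
])
labels = probabilities_to_labels(probs)
print(labels)   # [[0 1]
                #  [2 2]]
```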

TsungChihTsai avatar Jun 19 '24 14:06 TsungChihTsai

Okay, perfect. Let's stay simple for now and not bring up Yongsoo's work as I don't think it's relevant, at least not yet.

So what I propose in the manuscript is a specific brain parcellation scheme. It is one that I generated simply as an example to use with T2-w images, which is why I refer to it in the option as which_parcellation = "nick". However, one can easily adapt the same framework to create other parcellations. For example, after I wrote the paper, Yongsoo wanted to create a parcellation for serial two-photon tomography (STPT) images consisting of four regions. You can access that functionality using which_parcellation="jay", but note that it only works with STPT images.

It sounds like you want to do something analogous with your regions: medial prefrontal cortex (mPFC), medial dorsal (MD) thalamus, ventral hippocampus (vHP), and cerebellum. Since you're analyzing P56 images, I believe you can use the Allen brain atlas to get these regions. Are you familiar with the allensdk python library?

ntustison avatar Jun 19 '24 15:06 ntustison

Thanks for your suggestion! I am not familiar with the allensdk Python library, but I can read https://allensdk.readthedocs.io/en/latest/reference_space.html first. Could I use https://github.com/ntustison/ANTsXMouseBrainMapping/blob/main/Scripts/MiscScripts/get_allen_parcellation.py to get these regions?

TsungChihTsai avatar Jun 19 '24 15:06 TsungChihTsai

Yes. See if you can run that script to get the original "nick" parcellation. Note that you'll have to download my reoriented version of the Allen template at 50 um resolution available here and replace the file path in the script. This reoriented file is necessary to bring Allen into canonical orientation (analogous to human brain imaging). Let me know when you're able to run the script successfully.
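For orientation, pulling a structure mask out of the Allen CCF with allensdk generally looks like the sketch below. The acronym, manifest filename, and default arguments here are illustrative; check the allensdk reference-space documentation for the exact options:

```python
def get_allen_structure_mask(acronym, resolution=50):
    """Sketch of fetching a binary structure mask from the Allen CCF
    using allensdk's ReferenceSpaceCache. Requires network downloads
    on first use; argument values here are illustrative."""
    from allensdk.core.reference_space_cache import ReferenceSpaceCache

    rspc = ReferenceSpaceCache(
        resolution=resolution,
        reference_space_key="annotation/ccf_2017",
        manifest="allen_manifest.json")

    # The adult mouse structure graph has id 1
    tree = rspc.get_structure_tree(structure_graph_id=1)
    structure = tree.get_structures_by_acronym([acronym])[0]

    # Build a binary mask covering the structure and its descendants
    rsp = rspc.get_reference_space()
    return rsp.make_structure_mask([structure["id"]])
```

The import lives inside the function only so the sketch can be read without allensdk installed. Note that the mask comes out in Allen's native orientation, which is why the reoriented template mentioned above is needed downstream.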

ntustison avatar Jun 19 '24 15:06 ntustison

I will. Thanks for your patience and guidance. Keep in touch.

TsungChihTsai avatar Jun 19 '24 15:06 TsungChihTsai

Thanks for your note.

I revised the "get_allen_parcellation.py" script to align with my desired brain parcellation regions.

Then I ran "get_allen_parcellation.py".

It generates "combined_mask.nii.gz".

The mask I get is what I need.

So, should I create a new "which_parcellation" from this "combined_mask.nii.gz" to run "antspynet.mouse_brain_parcellation"? Or could I define this "combined_mask.nii.gz" as the "mask" for my next step?

TsungChihTsai avatar Jun 20 '24 00:06 TsungChihTsai

Great. Can you email it to me or post it somewhere so I can make sure you're ready for the next step?

ntustison avatar Jun 20 '24 00:06 ntustison

Sure!

Like this? combined_mask.nii.gz

TsungChihTsai avatar Jun 20 '24 00:06 TsungChihTsai

Okay, so this is a great start. However, I would add some additional labels that complement your existing labels so that your entire brain is filled. So, for example, I would add the olfactory bulb, maybe the rest of the cerebral nuclei, cortex. And then whatever other label such that the entire brain region is filled. Does that make sense?

ntustison avatar Jun 20 '24 01:06 ntustison

Yes. It might be that if I don't fill the whole brain, mouse_brain_parcellation cannot perform well.

I added more brain areas, mimicking your choices (the original and revised versions are shown in the attached screenshots).

Is it okay? combined_mask.nii.gz

TsungChihTsai avatar Jun 20 '24 01:06 TsungChihTsai

Yeah, I would go with something like this. This assessment is based on multiple years of experience.

Do you have a GPU for training?

ntustison avatar Jun 20 '24 01:06 ntustison

I appreciate your guidance. It makes things straightforward.

I have a desktop in the lab with an NVIDIA Quadro RTX A2000 12GB. Will that work?

I can try it, but I have a concern: when I attempted some spike-sorting work in a MATLAB GUI, this desktop ran slowly. I'm not sure it can handle your training task.

TsungChihTsai avatar Jun 20 '24 02:06 TsungChihTsai

Yeah, that's kind of small. I'll throw it on my GPU tomorrow and we can take a look at the results in a couple days. I'll let you know.

ntustison avatar Jun 20 '24 02:06 ntustison

I was looking at the current results and comparing it with what you have above. It appears that you have 9 non-background regions but your combined mask only has 7:

>>> mask = ants.image_read("/Users/ntustison/Downloads/combined_mask.nii.gz") 
>>> geoms = ants.label_geometry_measures(mask)
>>> geoms['Label']
0     5
1     6
2     7
3     8
4    10
5    11
6    15
Name: Label, dtype: int64

I should've mentioned that your brain regions shouldn't overlap. I don't know if that is the case here, but look at your combined mask and see if it has all the regions you want. If it's okay, then training should be pretty close to finished. Otherwise, you'll have to fix your combined mask.
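On the overlap point: assuming each region starts as a binary mask, one cheap sanity check before combining them is to sum the masks and flag any voxel claimed by more than one region. A numpy sketch with toy 2x2 masks:

```python
import numpy as np

def combine_masks(masks, labels):
    """Combine binary region masks into one label image, raising an
    error if any two regions claim the same voxel."""
    stack = np.stack(masks)
    overlap = stack.sum(axis=0) > 1
    if overlap.any():
        raise ValueError(
            f"{int(overlap.sum())} voxels belong to more than one region")
    combined = np.zeros(masks[0].shape, dtype=np.int32)
    for mask, label in zip(masks, labels):
        combined[mask > 0] = label
    return combined

# Two non-overlapping toy masks assigned labels 5 and 6
a = np.array([[1, 0], [0, 0]])
b = np.array([[0, 1], [0, 0]])
combined = combine_masks([a, b], labels=[5, 6])
print(combined)   # [[5 6]
                  #  [0 0]]
```

Passing two masks that share a voxel (e.g. `combine_masks([a, a], labels=[5, 6])`) raises a ValueError instead of silently overwriting one label with the other.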

ntustison avatar Jun 21 '24 14:06 ntustison

I'm sorry, I made a mistake: I selected overlapping brain regions, so I revised the selection of masks.

The first six masks are my regions of interest, and I added 26 other mask areas to fill in the background.

The new combined_mask is attached below: combined_mask.nii.gz

I ran your code to check the number of masks and labels. Besides the six masks I selected, the others are background for me (32 masks in total).

Could we have a method to combine the masks from label 7 to label 32?

TsungChihTsai avatar Jun 21 '24 16:06 TsungChihTsai

Your proposed labeling is excessive for the image features that are typically visible in standard T2-w images.

ntustison avatar Jun 21 '24 16:06 ntustison

Nm, I see what you're saying. Yes, I can combine labels 7-32 to form a single label. Let me see how that looks.
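Collapsing a contiguous label range into one value is a one-line operation on the voxel array. A numpy sketch (with ANTsPy you would pull the array out via `label_image.numpy()` and wrap the result back with `label_image.new_image_like(...)`):

```python
import numpy as np

def merge_label_range(labels, lo, hi, target=0):
    """Relabel every voxel whose value falls in [lo, hi] (inclusive)
    to `target`, leaving all other labels untouched."""
    out = labels.copy()
    out[(out >= lo) & (out <= hi)] = target
    return out

# Toy label image: labels 7-32 should collapse to background (0)
toy = np.array([[1, 7], [15, 32]])
merged = merge_label_range(toy, lo=7, hi=32, target=0)
print(merged)   # [[1 0]
                #  [0 0]]
```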

ntustison avatar Jun 21 '24 16:06 ntustison

Here's the result.

combined_mask_tct.nii.gz

ntustison avatar Jun 21 '24 16:06 ntustison

Actually, I'd probably go with this:

combined_mask_tct2.nii.gz

ntustison avatar Jun 21 '24 16:06 ntustison

Thanks! So you combined labels 7-32 into a single label (index 0) in combined_mask_tct2.nii.gz.

TsungChihTsai avatar Jun 21 '24 23:06 TsungChihTsai

Yes. Did you view it to make sure it's what you expected?

ntustison avatar Jun 22 '24 01:06 ntustison

Yes, they are in the right place.

TsungChihTsai avatar Jun 22 '24 01:06 TsungChihTsai

Okay, I restarted training with these updated labels. It'll probably take 2-3 days. I'll let you know when it's finished.

ntustison avatar Jun 22 '24 18:06 ntustison