
DL+DiReCT - Direct Cortical Thickness Estimation using Deep Learning-based Anatomy Segmentation and Cortex Parcellation

About DL+DiReCT

DL+DiReCT combines a deep learning-based neuroanatomy segmentation and cortex parcellation with a diffeomorphic registration technique to measure cortical thickness from T1w MRI.


If you are using DL+DiReCT in your research, please cite (bibtex) the corresponding publication:

Rebsamen M, Rummel C, Reyes M, Wiest R, McKinley R.
Direct cortical thickness estimation using deep learning-based anatomy segmentation and cortex parcellation.
Human Brain Mapping. 2020;41:4804-4814. https://doi.org/10.1002/hbm.25159

Installation

Create virtual environment (optional)

Download and install Miniconda and create a new conda environment:

conda create -y -n DL_DiReCT python=3.10
source activate DL_DiReCT
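
To confirm that the new environment is active and uses the expected interpreter, a quick check (assuming the activation above succeeded) is:

python --version   # should report Python 3.10.x from the DL_DiReCT environment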

Install DL+DiReCT

cd ${HOME}
git clone https://github.com/SCAN-NRAD/DL-DiReCT.git
cd DL-DiReCT
pip install numpy && pip install -e .
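
As a quick sanity check that the command-line entry point was installed (assuming dl+direct follows the usual --help convention), you can run:

dl+direct --help   # prints the available options if the installation succeeded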

Usage

Run dl+direct on a T1-weighted MRI, including skull-stripping with HD-BET (option --bet), with:

source activate DL_DiReCT
dl+direct --subject <your_subj_id> --bet <path_to_t1_input.nii.gz> <output_dir>
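
For example, a call might look like the following sketch, where the subject ID, input path, and output directory are placeholders to be replaced with your own data:

dl+direct --subject subj001 --bet /data/subj001/T1w.nii.gz /data/subj001/dl_direct   # paths are placeholders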

The following files of interest are generated in the output directory:

- T1w_norm.nii.gz		Re-sampled input volume
- T1w_norm_seg.nii.gz		Segmentation
- T1w_norm_thickmap.nii.gz	Thickness map
- result-vol.csv		Segmentation volumes
- result-thick.csv		ROI-wise mean cortical thickness
- result-thickstd.csv		ROI-wise standard deviations of cortical thickness
- label_def.csv			Label definitions of the segmentation

Results may be collected into FreeSurfer-like statistics files with stats2table.
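
For a quick look at the ROI-wise thickness values before any aggregation, the CSV output can be inspected directly on the command line, for example (assuming a standard comma-separated result-thick.csv in the output directory):

column -s, -t < result-thick.csv | less   # align the comma-separated columns for readability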

Contrast-enhanced (CE) MRI

To process images acquired with a contrast agent (contrast-enhanced), use the option --model v6 (Rebsamen et al., 2022).
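
A contrast-enhanced run would then look like the following sketch (subject ID and paths are placeholders):

dl+direct --model v6 --subject subj001 --bet /data/subj001/T1w_CE.nii.gz /data/subj001/dl_direct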

Available Models

The following models are available with the --model ... option:

Frequently Asked Questions

For further details, consult the corresponding publication and the FAQ, or contact us.