Couldn't run HCP tutorial in Docker
Hello experts.
I'm trying to run the HCP tutorial from your wiki page in Docker.
I used HCP data and ran the command below, but I encountered many errors.
TractSeg -i /data/hcp_tutorial/data.nii.gz -o /data/hcp_tutorial --bvals /data/hcp_tutorial/bvals --bvecs /data/hcp_tutorial/bvecs --raw_diffusion_input --csd_type csd_msmt_5tt --brain_mask nodif_brain_mask.nii.gz
The following output was produced (see the [ERROR] lines):

####################################################################################################
/usr/local/lib/python3.7/site-packages/pandas/compat/init.py:124: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError.
warnings.warn(msg)
Creating peaks (1 of 4)...
5ttgen:
5ttgen: Note that this script makes use of commands / algorithms that have relevant articles for citation; INCLUDING FROM EXTERNAL SOFTWARE PACKAGES. Please consult the help page (-help option) for more information.
5ttgen:
5ttgen: Generated temporary directory: /data/hcp_tutorial/5ttgen-tmp-9JYBXG/
Command: mrconvert /data/hcp_tutorial/T1w_acpc_dc_restore_brain.nii.gz /data/hcp_tutorial/5ttgen-tmp-9JYBXG/input.mif
5ttgen: Changing to temporary directory (/data/hcp_tutorial/5ttgen-tmp-9JYBXG/)
5ttgen: [ERROR] Atlases required for FSL's FIRST program not installed; please install fsl-first-data using your relevant package manager
5ttgen: Changing back to original directory (/data/hcp_tutorial)
5ttgen: Contents of temporary directory kept, location: /data/hcp_tutorial/5ttgen-tmp-9JYBXG/
Creating peaks (2 of 4)...
dwi2response:
dwi2response: Note that this script makes use of commands / algorithms that have relevant articles for citation. Please consult the help page (-help option) for more information.
dwi2response:
dwi2response: Generated temporary directory: /data/hcp_tutorial/dwi2response-tmp-YXGAXX/
Command: mrconvert /data/hcp_tutorial/data.nii.gz /data/hcp_tutorial/dwi2response-tmp-YXGAXX/dwi.mif -strides 0,0,0,1 -fslgrad /data/hcp_tutorial/bvecs /data/hcp_tutorial/bvals
Command: mrconvert /data/hcp_tutorial/nodif_brain_mask.nii.gz /data/hcp_tutorial/dwi2response-tmp-YXGAXX/mask.mif -datatype bit
Command: mrconvert /data/hcp_tutorial/5TT.nii.gz /data/hcp_tutorial/dwi2response-tmp-YXGAXX/5tt.mif
dwi2response:
dwi2response: [ERROR] Command failed: mrconvert /data/hcp_tutorial/5TT.nii.gz /data/hcp_tutorial/dwi2response-tmp-YXGAXX/5tt.mif (msmt_5tt.py:27)
dwi2response: Output of failed command:
mrconvert: [ERROR] cannot access file "/data/hcp_tutorial/5TT.nii.gz": No such file or directory
mrconvert: [ERROR] error opening image "/data/hcp_tutorial/5TT.nii.gz"
dwi2response:
dwi2response: Script failed while executing the command: mrconvert /data/hcp_tutorial/5TT.nii.gz /data/hcp_tutorial/dwi2response-tmp-YXGAXX/5tt.mif
dwi2response: For debugging, inspect contents of temporary directory: /data/hcp_tutorial/dwi2response-tmp-YXGAXX/
Creating peaks (3 of 4)...
dwi2fod: [100%] uncompressing image "nodif_brain_mask.nii.gz"
dwi2fod: [ERROR] no data in matrix file "/data/hcp_tutorial/RF_WM.txt"
dwi2fod: [ERROR] File "/data/hcp_tutorial/RF_WM.txt" is not a valid response function file
dwi2fod: [ERROR] MSMT_CSD algorithm expects the first file in each argument pair to be an input response function file
Creating peaks (4 of 4)...
sh2peaks: [ERROR] cannot access file "/data/hcp_tutorial/WM_FODs.nii.gz": No such file or directory
sh2peaks: [ERROR] error opening image "/data/hcp_tutorial/WM_FODs.nii.gz"
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/nibabel/loadsave.py", line 90, in load
stat_result = os.stat(filename)
FileNotFoundError: [Errno 2] No such file or directory: '/data/hcp_tutorial/peaks.nii.gz'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/TractSeg", line 420, in
Do you have any hints for solving these errors? Any comments would help me a lot.
Thanks! Joseph.
The Docker container does not seem to work properly with --raw_diffusion_input. Either use TractSeg without this option, or run it outside the Docker container.
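If you keep the Docker container, one workaround is to run the CSD preprocessing on the host and pass the resulting peaks file to TractSeg, which is what --raw_diffusion_input would otherwise do internally. The sketch below is assembled from the commands and file names visible in your log (5TT.nii.gz, RF_WM.txt, WM_FODs.nii.gz, peaks.nii.gz); it assumes MRtrix3 and a full FSL installation (including the fsl-first-data atlases that the container is missing) are available locally, so treat it as a starting point rather than the exact tutorial pipeline:

```shell
cd /data/hcp_tutorial

# 5-tissue-type segmentation from the brain-extracted T1
# (this is the step that failed in Docker for lack of fsl-first-data)
5ttgen fsl T1w_acpc_dc_restore_brain.nii.gz 5TT.nii.gz -premasked

# Multi-shell multi-tissue response functions
dwi2response msmt_5tt data.nii.gz 5TT.nii.gz RF_WM.txt RF_GM.txt RF_CSF.txt \
    -fslgrad bvecs bvals -mask nodif_brain_mask.nii.gz

# Fibre orientation distributions via MSMT-CSD
dwi2fod msmt_csd data.nii.gz RF_WM.txt WM_FODs.nii.gz \
    RF_GM.txt GM.nii.gz RF_CSF.txt CSF.nii.gz \
    -fslgrad bvecs bvals -mask nodif_brain_mask.nii.gz

# Extract peaks from the WM FODs, then run TractSeg on the peaks
# (no --raw_diffusion_input needed at this point)
sh2peaks WM_FODs.nii.gz peaks.nii.gz
TractSeg -i peaks.nii.gz -o .
```

Once peaks.nii.gz exists, the final TractSeg call can also be run back inside the container by mounting /data/hcp_tutorial as before.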
Thanks for your response!
I will use it without --raw_diffusion_input.
Best, Joseph.