Several issues with fMRIPrep 21.0.0
What happened?
I am trying to run four subjects with fMRIPrep 21.0.0 on Sherlock, using the image generously supplied by @mgxd. The subjects already have FreeSurfer files, and their acquisitions have partial coverage, so I manually aligned them before starting fMRIPrep. I see the following issues:
- Functional-to-anatomical alignment is off, despite the images being well aligned before running fMRIPrep and despite using "--bold2t1w-init header".
- The node fmriprep_wf.single_subject_0011_wf.anat_preproc_wf.brain_extraction_wf.mrg_tmpl fails with "ModuleNotFoundError: No module named 'niworkflows.interfaces.ants'".
- Slice-timing correction fails with "** FATAL ERROR: tpattern file slice_timing.1D has 32 values but have 256 slices".
- Slurm appears to terminate execution because the time limit is reached: "slurmstepd: error: *** JOB 40542383 ON sh02-10n48 CANCELLED AT 2021-12-18T19:09:42 DUE TO TIME LIMIT ***"
What command did you use?
# jobarray.sbatch commands
module load system
unset PYTHONPATH
export FS_LICENSE=${HOME}/license.txt
export fmriprep_img=/home/groups/russpold/singularity_images/fmriprep-21.0.0.sif
eval $( sed "${SLURM_ARRAY_TASK_ID}q;d" headmodel_tasks_list.sh )
# content of headmodel_tasks_list.sh
singularity run ${fmriprep_img} ${SCRATCH}/raw/headmodel ${SCRATCH}/analyzed/headmodel participant -w ${SCRATCH}/temp --participant_label 0011 --ignore fieldmaps --output-spaces MNI152NLin2009cAsym anat fsaverage5 --bold2t1w-init header --mem-mb 5000 --nthreads 10 -vvv
singularity run ${fmriprep_img} ${SCRATCH}/raw/headmodel ${SCRATCH}/analyzed/headmodel participant -w ${SCRATCH}/temp --participant_label 0014 --ignore fieldmaps --output-spaces MNI152NLin2009cAsym anat fsaverage5 --bold2t1w-init header --mem-mb 5000 --nthreads 10 -vvv
singularity run ${fmriprep_img} ${SCRATCH}/raw/headmodel ${SCRATCH}/analyzed/headmodel participant -w ${SCRATCH}/temp --participant_label 0016 --ignore fieldmaps --output-spaces MNI152NLin2009cAsym anat fsaverage5 --bold2t1w-init header --mem-mb 5000 --nthreads 10 -vvv
singularity run ${fmriprep_img} ${SCRATCH}/raw/headmodel ${SCRATCH}/analyzed/headmodel participant -w ${SCRATCH}/temp --participant_label 0017 --ignore fieldmaps --output-spaces MNI152NLin2009cAsym anat fsaverage5 --bold2t1w-init header --mem-mb 5000 --nthreads 10 -vvv
What version of fMRIPrep are you running?
21.0.0
How are you running fMRIPrep?
Singularity
Is your data BIDS valid?
Yes
Are you reusing any previously computed results?
FreeSurfer
Please copy and paste any relevant log output.
# from visual report
Node Name: fmriprep_wf.single_subject_0011_wf.anat_preproc_wf.brain_extraction_wf.mrg_tmpl
File: /scratch/users/pjkohler/analyzed/headmodel/fmriprep/sub-0011/log/20211216-114736_170c9718-1207-4fa5-8b06-e68999b87244/crash-20211216-115028-pjkohler-mrg_tmpl-7101a4f5-b425-497b-978e-b45eb121542e.txt
Working Directory: /scratch/users/pjkohler/temp/fmriprep_wf/single_subject_0011_wf/anat_preproc_wf/brain_extraction_wf/mrg_tmpl
Inputs:
axis: vstack
in1: /home/users/pjkohler/.cache/templateflow/tpl-OASIS30ANTs/tpl-OASIS30ANTs_res-01_T1w.nii.gz
in2:
no_flatten: False
ravel_inputs: False
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 443, in run
cached, updated = self.is_cached()
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 332, in is_cached
hashed_inputs, hashvalue = self._get_hashval()
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 538, in _get_hashval
self._get_inputs()
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 580, in _get_inputs
outputs = _load_resultfile(results_fname).outputs
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/utils.py", line 293, in load_resultfile
result = loadpkl(results_file)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py", line 666, in loadpkl
raise e
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py", line 643, in loadpkl
unpkl = pickle.loads(pkl_contents)
ModuleNotFoundError: No module named 'niworkflows.interfaces.ants'
Node Name: fmriprep_wf.single_subject_0011_wf.func_preproc_ses_01_task_MT_run_01_wf.bold_stc_wf.slice_timing_correction
File: /scratch/users/pjkohler/analyzed/headmodel/fmriprep/sub-0011/log/20211216-114736_170c9718-1207-4fa5-8b06-e68999b87244/crash-20211216-115152-pjkohler-slice_timing_correction-cdef3e34-50b7-40e9-ae56-8d799c4a0b10.txt
Working Directory: /scratch/users/pjkohler/temp/fmriprep_wf/single_subject_0011_wf/func_preproc_ses_01_task_MT_run_01_wf/bold_stc_wf/slice_timing_correction
Inputs:
args:
environ: {}
ignore: 2
in_file: /scratch/users/pjkohler/temp/fmriprep_wf/single_subject_0011_wf/func_preproc_ses_01_task_MT_run_01_wf/initial_boldref_wf/val_bold/mapflow/_val_bold0/sub-0011_ses-01_task-MT_run-01_bold_valid.nii.gz
interp:
num_threads: 1
out_file:
outputtype: NIFTI_GZ
rlt:
rltplus:
slice_encoding_direction: k
slice_timing: [0.0, 1.0, 0.0625, 1.0625, 0.125, 1.125, 0.1875, 1.1875, 0.25, 1.25, 0.3125, 1.3125, 0.375, 1.375, 0.4375, 1.4375, 0.5, 1.5, 0.5625, 1.5625, 0.625, 1.625, 0.6875, 1.6875, 0.75, 1.75, 0.8125, 1.8125, 0.875, 1.875, 0.9375, 1.9375]
tpattern:
tr: 2.0s
tslice:
tzero:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
result = self._interface.run(cwd=outdir)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 428, in run
runtime = self._run_interface(runtime)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/afni/base.py", line 125, in _run_interface
runtime, correct_return_codes
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 822, in _run_interface
self.raise_exception(runtime)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 753, in raise_exception
).format(**runtime.dictcopy())
RuntimeError: Command:
3dTshift -ignore 2 -prefix sub-0011_ses-01_task-MT_run-01_bold_valid_tshift.nii.gz -tpattern @slice_timing.1D -TR 2.0s /scratch/users/pjkohler/temp/fmriprep_wf/single_subject_0011_wf/func_preproc_ses_01_task_MT_run_01_wf/bold_stc_wf/slice_timing_correction/sub-0011_ses-01_task-MT_run-01_bold_valid.nii.gz
Standard output:
Standard error:
++ 3dTshift: AFNI version=Debian-16.2.07~dfsg.1-5~nd16.04+1 (Jun 12 2017) [64-bit]
** FATAL ERROR: tpattern file slice_timing.1D has 32 values but have 256 slices
** Program compile date = Jun 12 2017
Return code: 1
# from error file
FileNotFoundError: /scratch/users/pjkohler/temp/fmriprep_wf/single_subject_0011_wf/func_preproc_ses_01_task_LOC_run_01_wf/initial_boldref_wf/gen_avg/result_gen_avg.pklz
slurmstepd: error: *** JOB 40542383 ON sh02-10n48 CANCELLED AT 2021-12-18T19:09:42 DUE TO TIME LIMIT ***
Additional information / screenshots
No response
Slurm job is 40542380.
For the ANTs issue, are you using a clean working directory? The error occurs while loading a cached pickle file, which suggests a working directory reused from an earlier run.
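If you're not sure, a quick check (just a sketch, using the work path from your command above) is to look for cached node results left over from a previous run:
# nipype caches node results as result_*.pklz files in the work directory;
# anything that predates this run came from an earlier execution
# (adjust -mtime to however long ago the previous run was)
find ${SCRATCH}/temp/fmriprep_wf -name 'result_*.pklz' -mtime +1 | head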
For the slice-timing issue:
$ nib-ls /scratch/users/pjkohler/temp/fmriprep_wf/single_subject_0011_wf/func_preproc_ses_01_task_MT_run_01_wf/initial_boldref_wf/val_bold/mapflow/_val_bold0/sub-0011_ses-01_task-MT_run-01_bold_valid.nii.gz
/scratch/users/pjkohler/temp/fmriprep_wf/single_subject_0011_wf/func_preproc_ses_01_task_MT_run_01_wf/initial_boldref_wf/val_bold/mapflow/_val_bold0/sub-0011_ses-01_task-MT_run-01_bold_valid.nii.gz int16 [256, 256, 256, 132] 1.00x1.00x1.00x2.00 #exts: 1 sform
Your BOLD file is 256x256x256, so there's no way to correct that with 32 slice times. Can you verify that your input file also has that shape (I can't open /scratch/users/pjkohler/raw/headmodel)? What was done to generate the raw dataset?
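For reference, the corresponding checks on the raw data would look something like this (the paths are guesses based on the BIDS naming above, and jq may or may not be installed on Sherlock):
# check the grid of the raw BOLD file directly
nib-ls ${SCRATCH}/raw/headmodel/sub-0011/ses-01/func/sub-0011_ses-01_task-MT_run-01_bold.nii.gz
# the number of SliceTiming entries in the sidecar should match the number of slices
jq '.SliceTiming | length' ${SCRATCH}/raw/headmodel/sub-0011/ses-01/func/sub-0011_ses-01_task-MT_run-01_bold.json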
As for the alignment and execution-time issues, let's resolve these other two first.
What do you mean by a clean versus a reused working directory?
Was there already something in ${SCRATCH}/temp (passed to -w)?
There may have been. In the past, I tried to clear it out by adding
rm -r /scratch/users/pjkohler/temp/*
to jobarray.sbatch before the eval statement, but I kept getting (non-critical) errors, so I stopped doing that. What is the recommended way of handling this?
Generally you won't need to remove the working directory unless you're upgrading versions, so a one-time manual removal now should be enough.
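If you want to avoid the problem on future upgrades, one pattern (a sketch based on your jobarray.sbatch; adjust names as you like) is to key the working directory to the fMRIPrep version, so a new release never picks up an old cache:
# in jobarray.sbatch, before the eval line
export FMRIPREP_VERSION=21.0.0
export WORK_DIR=${SCRATCH}/temp/fmriprep-${FMRIPREP_VERSION}
mkdir -p ${WORK_DIR}
# then pass "-w ${WORK_DIR}" instead of "-w ${SCRATCH}/temp" in headmodel_tasks_list.sh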
If I type
rm -r /scratch/users/pjkohler/temp/*
into the terminal, Sherlock takes a long time to delete the files (>30 minutes). Is there a quicker way?
You could always do something like:
# renaming is instant, unlike a recursive delete
mv /scratch/users/pjkohler/temp /scratch/users/pjkohler/temp.old
mkdir /scratch/users/pjkohler/temp
# Start your job
# delete the old tree at your leisure, once the job is underway
rm -r /scratch/users/pjkohler/temp.old
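If even that deferred rm ties up your login shell for too long, it could be handed off as its own small job (assuming your Slurm setup allows --wrap submissions):
# run the cleanup as a separate job so your shell is free immediately
sbatch --time=01:00:00 --wrap="rm -rf /scratch/users/pjkohler/temp.old"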
My approach to manually aligning the images (using AFNI) involved a resampling step that I had somehow missed; that is how 32 slices became 256. I just reran the four subjects after doing the manual alignment with FSL instead. It finished without error and the alignments look good. The only remaining question is that the output data do not use the fmriprep/freesurfer folder structure, but instead place the outputs in the top-level folder. Is this a new convention in the new version of fMRIPrep? fMRIPrep finds and uses the existing FreeSurfer folders without problems.
Yes, we've updated the docs about the changed layout: https://fmriprep.org/en/stable/outputs.html#layout
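In rough terms (the linked page is the authoritative description), the difference looks like this:
# new default layout in 21.0.0
${SCRATCH}/analyzed/headmodel/sub-0011/...
${SCRATCH}/analyzed/headmodel/sourcedata/freesurfer/sub-0011/...
# legacy layout (the pre-21.0 default)
${SCRATCH}/analyzed/headmodel/fmriprep/sub-0011/...
${SCRATCH}/analyzed/headmodel/freesurfer/sub-0011/...
If you prefer the old nesting, there is also an --output-layout legacy option; check fmriprep --help in your container to confirm it is available in your image.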
Okay... I have a set of subjects, analyzed with a previous version of fMRIPrep, for whom pre-alignment was not necessary. Should I re-run fMRIPrep, or can I just move the files around?
We do not recommend mixing different versions. Mixing 20.2.0 through 20.2.6 is generally fine (as long as you're not affected by https://github.com/nipreps/fmriprep/issues/2307, which was fixed in 20.2.2; see the later comments in that issue for a test), but across minor versions (20.1, 20.2 and 21.0 are all different minor versions) we do not make strong consistency guarantees.
For more details see https://www.nipreps.org//devs/releases/. The docs are oriented to developers, not users, but the bug-fix release section should clarify what we consider acceptable changes within a minor release series.
All that said, see the legacy layout section for aligning derivatives.
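If you're unsure which version produced an existing derivatives folder, it is recorded in that folder's dataset_description.json, e.g. (path shown for the new layout; older runs keep it under an fmriprep/ subfolder):
# the generating pipeline and its version are recorded in the derivatives metadata
grep -i '"version"' ${SCRATCH}/analyzed/headmodel/dataset_description.json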
Okay. I'll try running the new version on the other, non-pre-aligned data. Hopefully the alignment will be successful and everything will end up on the same version. Thanks.
Please reopen if there are still issues here.
I had to pre-align all of my data to get good output alignment with 21.0. I now have good alignments for all of my subjects, except one where 2 out of 12 runs are consistently misaligned in the output. Any suggestions about what to do?
Sorry, to be clear, you've pre-aligned and they look good, and then fMRIPrep goes and makes them worse? Can you share your invocation?
Apologies for the late response. I really appreciate your help.
By invocation, do you mean the command?
singularity run ${fmriprep_img} ${SCRATCH}/raw/sym_4groups ${SCRATCH}/analyzed/sym_4groups participant -w ${SCRATCH}/temp --participant_label 0024 --ignore fieldmaps --output-spaces MNI152NLin2009cAsym anat fsaverage5 --bold2t1w-init header --mem-mb 5000 --nthreads 10 -vvv
Data are in: /scratch/users/pjkohler/analyzed/sym_4groups
The problematic subject is sub-0024.
Thanks.
Note that most runs are fine, but bad alignment is seen for two runs:
- session 01, task P2, run 2.
- session 01, task P3, run 3.
Thanks.
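Since --bold2t1w-init header trusts the stored affines, one thing worth checking (just a sketch, assuming FSL is on your path; adjust the paths and the filename glob to your naming) is whether those two runs carry a different sform/qform than the runs that align correctly:
# print the header affines for all runs of that session; the two bad runs may stand out
cd ${SCRATCH}/raw/sym_4groups/sub-0024/ses-01/func
for f in sub-0024_ses-01_task-P*_run-*_bold.nii.gz; do
    echo "== ${f}"
    fslhd "${f}" | grep -E 'qto_xyz|sto_xyz'
done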