qsiprep
pyAFQ error
Hi @mattcieslak and @36000,
I was testing pyAFQ on some single-shell data and got an error. I'm using a single-subject test case before running it on everyone. This particular individual was successfully reconstructed with both dsi_studio_gqi and mrtrix_singleshell_ss3t_ACT-fast using the exact same call (only the output folder and recon spec were changed).
Call:
singularity run --cleanenv \
    -B /xdisk/adamraikes/cornell/nifti:/nifti:ro \
    -B /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS:/xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS \
    -B ~/misc/license.txt:/opt/freesurfer/license.txt \
    /groups/adamraikes/singularity_images/qsiprep-0.16.0RC3.sif \
    /nifti /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/pyAFQ participant \
    --participant-label 087 \
    -w /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/scratch \
    --nthreads 12 --omp-nthreads 6 \
    --recon-only \
    --recon-input /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/qsiprep \
    --recon-spec pyAFQ
Error:
Node: qsirecon_wf.sub-087_pyAFQ_full.sub_087_space_T1w_desc_preproc_recon_wf.pyAFQ_full.run_afq
Working directory: /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/scratch/qsirecon_wf/sub-087_pyAFQ_full/sub_087_space_T1w_desc_preproc_recon_wf/pyAFQ_full/run_afq
Node inputs:
bval_file = /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/qsiprep/sub-087/dwi/sub-087_space-T1w_desc-preproc_dwi.bval
bvec_file = /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/qsiprep/sub-087/dwi/sub-087_space-T1w_desc-preproc_dwi.bvec
dwi_file = /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/qsiprep/sub-087/dwi/sub-087_space-T1w_desc-preproc_dwi.nii.gz
itk_file = /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/qsiprep/sub-087/anat/sub-087_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5
kwargs = {'directions': 'prob', 'max_angle': 30.0, 'sphere': None, 'seed_mask': None, 'seed_threshold': 0, 'n_seeds': 1, 'random_seeds': False, 'rng_seed': None, 'stop_mask': None, 'stop_threshold': 0, 'step_size': 0.5, 'min_length': 10, 'max_length': 1000, 'odf_model': 'CSD', 'tracker': 'local', 'nb_points': False, 'nb_streamlines': False, 'seg_algo': 'AFQ', 'reg_algo': None, 'clip_edges': False, 'parallel_segmentation': {'n_jobs': -1, 'engine': 'joblib', 'backend': 'loky'}, 'progressive': True, 'greater_than': 50, 'rm_small_clusters': 50, 'model_clust_thr': 1.25, 'reduction_thr': 25, 'refine': False, 'pruning_thr': 12, 'b0_threshold': 50, 'prob_threshold': 0, 'roi_dist_tie_break': False, 'dist_to_waypoint': None, 'rng': None, 'return_idx': False, 'presegment_bundle_dict': None, 'presegment_kwargs': {}, 'filter_by_endpoints': True, 'dist_to_atlas': 4, 'save_intermediates': None, 'n_points': 100, 'clean_rounds': 5, 'distance_threshold': 5, 'length_threshold': 4, 'min_sl': 20, 'stat': 'mean', 'min_bval': None, 'max_bval': None, 'filter_b': True, 'robust_tensor_fitting': False, 'csd_response': None, 'csd_sh_order': None, 'csd_lambda_': 1, 'csd_tau': 0.1, 'gtol': 0.01, 'brain_mask_definition': None, 'bundle_info': None, 'reg_template_spec': 'mni_T1', 'mapping_definition': None, 'reg_subject_spec': 'power_map', 'profile_weights': 'gauss', 'scalars': ['dti_fa', 'dti_md'], 'import_tract': None, 'sbv_lims_bundles': [None, None], 'volume_opacity_bundles': 0.3, 'n_points_bundles': 40, 'sbv_lims_indiv': [None, None], 'volume_opacity_indiv': 0.3, 'n_points_indiv': 40, 'viz_backend_spec': 'plotly_no_gif', 'virtual_frame_buffer': False, 'omp_nthreads': 6}
mask_file = /xdisk/adamraikes/cornell/derivatives/qsiprep-0.16.0RC3_SyNFS/scratch/qsirecon_wf/sub-087_pyAFQ_full/sub_087_space_T1w_desc_preproc_recon_wf/qsirecon_anat_wf/resample_mask/sub-087_desc-brain_mask_resample.nii.gz
tck_file = <undefined>
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 524, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 642, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node run_afq.
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 398, in run
runtime = self._run_interface(runtime)
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/pyafq.py", line 59, in _run_interface
tck_file = fname_presuffix(self.inputs.tck_file,
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/utils/filemanip.py", line 140, in fname_presuffix
pth, fname, ext = split_filename(fname)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/utils/filemanip.py", line 92, in split_filename
pth = op.dirname(fname)
File "/usr/local/miniconda/lib/python3.8/posixpath.py", line 152, in dirname
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not _Undefined
Hi, I believe I am getting the exact same error as you. Is there any chance that you solved this problem and would be willing to share? Thank you so much in advance!!
@araikes, sorry for the delay in responding to this. @araikes or @margokersey, I think this issue may be fixed in #533. Could you try building a qsiprep Docker image from that branch and re-running to see if that fixes the issue?
You can also try running the pyAFQ_import_trk pipeline instead. This will import the tractography from MRtrix instead of using the in-house DIPY tractography. You can customize which tractography to use inside of the pyAFQ_import_trk.json file.
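A rough sketch of one way to pull that JSON out of the container so it can be edited (the package path is taken from the tracebacks above, and find is used because the exact location of the spec files may differ between versions):
# Locate the packaged pyAFQ recon-spec JSON files inside the image,
# searching rather than assuming an exact path
singularity exec /groups/adamraikes/singularity_images/qsiprep-0.16.0RC3.sif \
    find /usr/local/miniconda/lib/python3.8/site-packages/qsiprep -iname '*pyafq*.json'
# Copy the reported file somewhere writable, edit the tractography settings,
# then point qsiprep at the edited copy with --recon-spec /full/path/to/edited_spec.json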
Thank you!! I would love to test that out. Sorry, this may be a very dumb question, but how can I build the Singularity image from a Docker image for that branch? I haven't been able to figure it out despite many attempts. I am using an HPC cluster that supports Singularity.
My current image was pulled with the command
singularity build ./qsiprep-0.16.1.sif docker://pennbbl/qsiprep:0.16.1
@margokersey, if you have access to a machine running Docker, one method would be to pull the branch, start a temporary local registry, build and push the Docker image, and then build the Singularity image from this locally hosted Docker image:
# git clone branch from 36000
git clone -b pyafq_full_fix https://github.com/36000/qsiprep.git
# Start a docker registry
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# go to git cloned folder
cd qsiprep
# Build and push local docker container to registry
docker build -t localhost:5000/qsiprep:pyafq-36000 .
docker push localhost:5000/qsiprep:pyafq-36000
# Build singularity container from local registry
SINGULARITY_NOHTTPS=1 singularity build qsiprep-pyafq-36000.sif docker://localhost:5000/qsiprep:pyafq-36000
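If running a local registry is not convenient, pushing the locally built image to a personal Docker Hub account and pulling it from there on the cluster should also work; a sketch, with <dockerhub-user> as a placeholder for your own account:
# Build and push the image to a personal Docker Hub account instead
docker build -t <dockerhub-user>/qsiprep:pyafq-36000 .
docker login
docker push <dockerhub-user>/qsiprep:pyafq-36000
# Then, on the HPC:
singularity build qsiprep-pyafq-36000.sif docker://<dockerhub-user>/qsiprep:pyafq-36000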
Thank you so much @pcamach2! Apologies for the delay, as I am new to Docker/Singularity. I was able to push the local Docker image to my Docker Hub account and then build the Singularity image from inside my HPC. When testing out the 36000 branch, unfortunately I am now getting a new error: @36000
CRITICAL:cli:QSIPrep failed: Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node run_afq.
Traceback:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 398, in run
runtime = self._run_interface(runtime)
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/pyafq.py", line 101, in _run_interface
myafq = ParticipantAFQ(
TypeError: type object got multiple values for keyword argument 'import_tract'
pyafq fails
Overview
This looks to me like qsiprep does not consider pyafq_tractometry a legitimate option. I think I spelled it right. Thoughts?
Background
I've been trying the various reconstruction algorithms just by swapping them into the singularity command. This was working perfectly until I hit pyafq_tractometry!
Environment
Apptainer on HPC
Singularity command
singularity run --cleanenv --bind ${MRIS}/data:/data:ro \
    --bind ${MRIS}/derivatives2/qsiprep:/qsiprep-output:ro \
    --bind ${MRIS}/derivatives2:/out \
    --bind ${MRIS}/qsi_work3:/work \
    ${SIF}/qsiprep.sif /data /out participant \
    --participant_label ${Subject} \
    --recon-input /qsiprep-output \
    --recon-spec pyafq_tractometry \
    --stop-on-first-crash \
    --output-resolution 1.3 -w /work -v -v \
    --n-cpus 16 --omp-nthreads 15 \
    --skip-odf-reports --recon-only \
    --fs-license-file ${HOME}/license.txt \
    --freesurfer_input /out/fmriprep2301/sourcedata/freesurfer
SLURM Log
INFO: underlay of /etc/localtime required more than 50 (103) bind mounts
230427-16:59:18,481 nipype.workflow IMPORTANT:
Running qsirecon version 0.16.1:
* BIDS dataset path: /data.
* Participant list: ['CAM003'].
* Run identifier: 20230427-165916_fe4ef9e8-cc79-498c-a33a-df89768c4080.
IMPORTANT:nipype.workflow: Running qsirecon version 0.16.1: * BIDS dataset path: /data. * Participant list: ['CAM003']. * Run identifier: 20230427-165916_fe4ef9e8-cc79-498c-a33a-df89768c4080.
Process Process-2:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/miniconda/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/cli/run.py", line 1175, in build_recon_workflow
retval['workflow'] = init_qsirecon_wf(
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/workflows/recon/base.py", line 89, in init_qsirecon_wf
single_subject_wf = init_single_subject_wf(
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/workflows/recon/base.py", line 168, in init_single_subject_wf
spec = _load_recon_spec(recon_spec, sloppy=sloppy)
File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/workflows/recon/base.py", line 257, in _load_recon_spec
raise Exception("{} is not a file that exists or in {}".format(spec_name, prepackaged))
Exception: pyafq_tractometry is not a file that exists or in ['amico_noddi', 'csdsi_3dshore', 'dipy_3dshore', 'dipy_dki', 'dipy_mapmri', 'dsi_studio_gqi', 'mrtrix_multishell_msmt_ACT-fast', 'mrtrix_multishell_msmt_ACT-hsvs', 'mrtrix_multishell_msmt_noACT', 'mrtrix_singleshell_ss3t_ACT-fast', 'mrtrix_singleshell_ss3t_ACT-hsvs', 'mrtrix_singleshell_ss3t_noACT', 'mrtrix_tckglobal', 'multishell_scalarfest', 'pyafq', 'pyafq_input_trk', 'reorient_fslstd']
Detailed performance metrics for this job will be available at https://metrics.hpc.arizona.edu/#job_viewer?action=show&realm=SUPREMM&resource_id=73&local_job_id=7058510 by 8am on 2023/04/28.
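For what it's worth, a quick way to double-check which spec names a given image actually ships (the package path is taken from the traceback above):
# List the recon-spec JSON files bundled with the image; the name passed to
# --recon-spec should correspond to one of these files, or be a path to your own JSON
singularity exec ${SIF}/qsiprep.sif \
    find /usr/local/miniconda/lib/python3.8/site-packages/qsiprep -name '*.json' | sort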
@dkp: I believe that this is a new name for that pyafq pipeline as of 0.17 (see https://github.com/PennLINC/qsiprep/pull/533). Could you please try running the newer version rather than 0.16.1?
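For reference, building the newer release follows the same pattern as the 0.16.1 command earlier in the thread (a sketch, assuming the 0.17.0 tag is published under the same pennbbl/qsiprep Docker Hub repository):
# Build a Singularity/Apptainer image from the 0.17.0 release
singularity build ./qsiprep-0.17.0.sif docker://pennbbl/qsiprep:0.17.0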
Thanks, I did not realize there was a new version available. I will try it. Will the new version be able to use the results of my previous qsiprep run and the work directory, or do I need to start fresh?
@arokem That works perfectly with qsiprep 0.17.0! Thanks so much.